
The MindAffect BCI python SDK


This repository contains the python SDK code for the Brain Computer Interface (BCI) developed by the company MindAffect.

Installation

To install the code:
  1. Clone or download this repository:

    git clone https://github.com/mindaffect/pymindaffectBCI
  2. Install the necessary bits to your local python path:

     1. Change to the directory where you cloned the repository.

     2. Add this module to the python path, and install dependencies:

        pip install -e .
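
Alternatively, since released versions of this package are published on PyPI, you can install one directly without cloning the repository (note: this does not give you an editable copy of the source):

pip3 install mindaffectBCI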

Installation Test

You can run a quick test of the installation, without any additional hardware, by running:

python3 -m mindaffectBCI.online_bci --acquisation fakedata

Essentially, this runs the SDK test code, which simulates a fake EEG source and then runs the full BCI sequence: decoder discovery, calibration and prediction.
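
If you prefer to launch this test from your own python script rather than the command line (for example as part of an automated check), a minimal sketch is to simply wrap the same command with the standard-library subprocess module:

import subprocess, sys
# run the documented fakedata test, exactly as from the command line above
subprocess.run([sys.executable, "-m", "mindaffectBCI.online_bci",
                "--acquisation", "fakedata"], check=True)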

Important: FrameRate Check

For rapid visual stimulation BCI (like the noisetagging BCI), it is very important that the visual flicker be displayed accurately. However, as the graphics performance of computers varies widely it is hard to know in advance if a particular configuration is accurate enough. To help with this we also provide a graphics performance checker, which will validate that your graphics system is correctly configured. You can run this with:

python3 -m mindaffectBCI.examples.presentation.framerate_check

As this runs it will show, in a window, your current graphics frame-rate and, more importantly, the variability in the frame times. For good BCI performance this jitter should be <1ms. If you see jitter greater than this you should probably adjust your graphics card settings. The most important setting to check is that you have _vsync_ <https://en.wikipedia.org/wiki/Screen_tearing#Vertical_synchronization> turned on. Many graphics cards turn this off by default, as it (in theory) gives higher frame rates for gaming. However, for our system exact timing matters more than raw frame rate, so always turn vsync on for visual Brain-Computer-Interfaces!
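
If you are curious what the checker is measuring, the sketch below (an illustration using pyglet directly, not part of the SDK) times successive window flips and reports the median frame time and its jitter. With vsync on, flip() blocks until the physical screen update, which is what makes the measured timestamps meaningful:

import time, statistics
import pyglet

window = pyglet.window.Window(width=640, height=480, vsync=True)
flip_times = []
for _ in range(300):                # roughly 5 seconds at 60Hz
    window.dispatch_events()        # keep the window responsive
    window.clear()
    window.flip()                   # with vsync on, blocks until the real screen update
    flip_times.append(time.perf_counter())
window.close()
deltas = [(b - a) * 1000.0 for a, b in zip(flip_times, flip_times[1:])]
print("median frame time %.1fms, jitter (st.dev.) %.2fms"
      % (statistics.median(deltas), statistics.stdev(deltas)))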

Brain Computer Interface Test

If you have:
  1. installed pyglet, e.g. using pip3 install pyglet

  2. installed brainflow, e.g. using pip3 install brainflow

  3. connected an OpenBCI Ganglion,

  4. followed MindAffect headset layout.pdf to attach the electrodes to the back of your head.

Then you can jump directly to trying a fully functional simple letter matrix BCI using:

python3 -m mindaffectBCI.online_bci

Note: For more information on how to run an on-line BCI, including using other supported amplifiers, see this document: OnlineBCI_quickstart.md

Getting Support

For a general overview of the mindaffectBCI hardware and software, and how to use them, see the system wiki.

If you run into an issue, you can raise an issue directly on the project's GitHub page.

File Structure

This repository is organized roughly as follows:

  • mindaffectBCI - contains the python package containing the mindaffectBCI SDK. Important modules within this package are:
    • noisetag.py - This module contains the main API for developing User Interfaces with BCI control.

    • utopiaController.py - This module contains the application-level APIs for interacting with the MindAffect Decoder.

    • utopiaclient.py - This module contains the low-level networking functions for communicating with the MindAffect Decoder - which normally runs on a separate computer running the EEG analysis software.

    • stimseq.py - This module contains the low-level functions for loading the stimulus codebooks - which define how the presented stimuli will look.

    • online_bci.py - This module contains the code to run a complete on-line BCI, using either noisetagging, SSVEP, or P300 stimuli.

    • online_bci.ipynb - This Jupyter notebook contains the code to run a complete on-line BCI, using either noisetagging, SSVEP, or P300 stimuli.

    • online_bci.json - This JSON file contains the configuration information to run a full noisetagging BCI.

  • decoder - contains our open source python based Brain Computer Interface decoder, for both on-line and off-line analysis of neuro-imaging data. Important modules within this package are:
    • decoder.py - This module contains the code for the on-line decoder.

    • offline_analysis.ipynb - This Jupyter notebook contains code to run an off-line analysis of previously saved data from the mindaffectBCI or other publicly available BCI datasets.

  • examples - contains python based examples for the Presentation and Output parts of the BCI. Important sub-directories are:
    • output - Example output modules. An output module translates BCI based selections into actions.

    • presentation - Example presentation modules. A presentation module presents the BCI stimulus to the user, and is normally the main UI. In particular here we have:
      • framerate_check.py - Which you can run to test if your display settings (particularly vsync) are correct for accurate flicker presentation.

      • selectionMatrix.py - Which you can run as a simple example of using the mindaffectBCI to select letters from an on-screen grid.

    • utilities - Useful utilities, such as a simple raw signal viewer

    • acquisation - Example data acquisition modules. An acquisition module interfaces with the EEG measurement hardware and streams time-stamped data to the hub.

System Overview

The mindaffectBCI consists of 3 main pieces:
  • decoder : This piece runs on a compute module (the Raspberry Pi in the dev-kit), connects to the EEG amplifier and the presentation system, and runs the machine learning algorithms to decode a user's intended output from the measured EEG.

  • presentation : This piece runs on the display device (normally the developer's laptop or tablet), connects to the decoder, and shows the user interface to the user, with the possible flickering options to pick from.

  • output : This piece normally runs in the same location as the presentation, but may run elsewhere, and also connects to the decoder. It listens for 'selections' from the decoder, which indicate that the decoder has decided the user wants to pick a particular option, and makes that selection happen - for example by adding a letter to the current sentence, moving a robot-arm, or turning a light on or off.

The system architecture of the mindaffectBCI is explained in more detail in doc/Utopia _ Guide for Implementation of new Presentation and Output Components.pdf, and is illustrated in this figure:

https://github.com/mindaffect/pymindaffectBCI/blob/master/doc/SystemArchitecture.png

Simple output module

An output module listens for selections from the mindaffect decoder and acts on them to create some output. Here we show how to make a simple output module which prints "Hello World" when the presentation 'button' with ID=1 is selected.

Note: You can find the complete code for this minimal output module in examples/output/minimal_output.py on our GitHub.

# Import the utopia2output module
from mindaffectBCI.utopia2output import Utopia2Output

Now we can create a Utopia2Output object and connect it to a running mindaffectBCI decoder.

u2o=Utopia2Output()
u2o.connect()

(Note: For this to succeed you must have a real or simulated mindaffectBCI decoder running somewhere on your network.)
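
Tip: during development, one easy way to have a (simulated) decoder running is to start the fakedata system from the Installation Test section above in a separate terminal:

python3 -m mindaffectBCI.online_bci --acquisation fakedata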

Now we define a function to print hello-world

def helloworld(objID):
   print("hello world")

And connect it so it is run when the object with ID=1 is selected.

# set the objectID2Action dictionary to use our helloworld function if 1 is selected
u2o.objectID2Action={ 1:helloworld }
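
The same dictionary can hold one callback per selectable object, with each callback receiving the ID of the selected object. For example (the second function and the object IDs here are purely illustrative):

def goodbye(objID):
   print("goodbye from object %d"%(objID))

# map each selectable object ID to its action
u2o.objectID2Action={ 1:helloworld, 2:goodbye }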

Finally, run the main loop

u2o.run()

For more complex output examples, including examples for controlling a LEGO Boost robot or a Philips Hue controllable light, look in the examples/output directory.

Simple presentation module

Presentation is inherently more complex than output, as we must display the correct stimuli to the user with precise timing and communicate this timing information to the mindaffect decoder. Further, for BCI operation we need to operate in (at least) two modes:

  • _calibration_ mode, where we cue the user where to attend in order to obtain correctly labelled brain data to train the machine learning algorithms in the decoder, and

  • _prediction_ mode where the user actually uses the BCI to make selections.

The noisetag module of the mindaffectBCI SDK provides a number of tools to hide this complexity from application developers. Using the highest level of these, all the application developer has to do is provide a function to _draw_ the display as instructed by the noisetag module.

Note: this should be in a separate file from the output example above. You can find the complete code for this minimal presentation in examples/presentation/minimal_presentation.py on our GitHub.

To use it, import the module and create the Noisetag object.

from mindaffectBCI.noisetag import Noisetag
nt = Noisetag()

Note: Creation of the Noisetag object will also implicitly create a connection to any running mindaffectBCI decoder - so you should have one running somewhere on your network.

Write a function to draw the screen. Here we will use the python gaming library [pyglet](www.pyglet.org) to draw 2 squares on the screen, with the given colors.

import pyglet
# make a default window, with fixed size for simplicity
window=pyglet.window.Window(width=640,height=480)

# define a simple 2-squares drawing function
def draw_squares(col1,col2):
  # draw square 1: @100,190, width=100, height=100
  x=100; y=190; w=100; h=100;
  pyglet.graphics.draw(4,pyglet.gl.GL_QUADS,
                       ('v2f',(x,y,x+w,y,x+w,y+h,x,y+h)),
                       ('c3f',(col1)*4))
  # draw square 2: @440,190, same size
  x=640-100-100
  pyglet.graphics.draw(4,pyglet.gl.GL_QUADS,
                       ('v2f',(x,y,x+w,y,x+w,y+h,x,y+h)),
                       ('c3f',(col2)*4))

Now, we need a bit of python hacking. Because our BCI depends on accurately time-locking the brain data (EEG) with the visual display, we need accurate time-stamps for when the display changes. Fortunately, pyglet allows us to get this accuracy, as it provides a flip method on windows which blocks until the display is actually updated; we can use this to generate accurate time-stamps. We do this by adding a time-stamp recording function to the window's normal flip method with the following magic:

# override window's flip method to record the exact *time* the
# flip happened
def timedflip(self):
  '''pseudo method type which records the timestamp for window flips'''
  type(self).flip(self) # call the 'real' flip method...
  self.lastfliptime=nt.getTimeStamp()
import types
window.flip = types.MethodType(timedflip,window)
# ensure the field is already there.
window.lastfliptime=nt.getTimeStamp()

Now we write a function which 1) asks the noisetag framework how the selectable squares should look, and 2) updates the noisetag framework with information about how the display was updated.

# dictionary mapping from stimulus-state to colors
state2color={0:(.2,.2,.2), # off=grey
             1:(1,1,1),    # on=white
             2:(0,1,0),    # cue=green
             3:(0,0,1)}    # feedback=blue
def draw(dt):
  # send info on the *previous* stimulus state.
  # N.B. we do it here as draw is called as soon as the vsync happens
  nt.sendStimulusState(timestamp=window.lastfliptime)
  # update and get the new stimulus state to display
  # N.B. update raises StopIteration when noisetag sequence has finished
  try :
      nt.updateStimulusState()
      stimulus_state,target_state,objIDs,sendEvents=nt.getStimulusState()
  except StopIteration :
      pyglet.app.exit() # terminate app when noisetag is done
      return
  # draw the display with the instructed colors
  if stimulus_state :
      draw_squares(state2color[stimulus_state[0]],
                   state2color[stimulus_state[1]])

As a final step we can attach a selection callback, which will be called whenever a selection is made by the BCI.

# define a trivial selection handler
def selectionHandler(objID):
  print("Selected: %d"%(objID))
nt.addSelectionHandler(selectionHandler)

Finally, we tell the noisetag module to run a complete BCI ‘experiment’ with calibration and feedback mode, and start the pyglet main loop.

# tell the noisetag framework to run a full : calibrate->prediction sequence
nt.setnumActiveObjIDs(2)  # say that we have 2 objects flickering
nt.startExpt(nCal=10,nPred=10)
# run the pyglet main loop
pyglet.clock.schedule(draw)
pyglet.app.run()

This will then run a full BCI with 10 cued calibration trials and 10 uncued prediction trials. During the calibration trials a square turning green shows that this is the cued target. During the prediction phase a square turning blue shows the selection made by the BCI.

For more complex presentation examples, including a full 6x6 character typing keyboard and a color-wheel for controlling a Philips Hue light, see the examples/presentation directory.
