Finding Information using Brain Computer Interfaces

Research objective

The aim of this project is to investigate the efficacy of affordable means of brain-computer interaction, combined with customised computer software, for individuals with severe neuro-motor disabilities.

Motivation

It is estimated that over 2.5 million people of all ages live with spinal cord injuries (one example was the late Christopher Reeve, known for his role as Superman). Spinal cord injury is only one of many conditions that can cause severe motor disability (loss of movement in the hands and legs). Others include Parkinson’s disease, cerebral palsy and motor neurone disease, such as the amyotrophic lateral sclerosis (ALS) that affected Prof. Stephen Hawking. These conditions can have a detrimental effect on people’s lives, leaving them unable to perform common tasks that most of us take for granted: reading a book, browsing the internet, changing the channel on the TV, or interacting electronically with loved ones through e-mail, social networking sites or chat. Current assistive technology, such as controlling screens using eye-gaze alone, is unnatural, strenuous, expensive and can be extremely intrusive for the user.

A Solution

When an individual thinks of an action they wish to make, such as pushing a mouse, the electrical activity produced by the brain penetrates the skull and can be measured at the scalp’s surface using electrodes. This technique is called electroencephalography (EEG). Different actions are associated with different patterns of electrical activity, so the intent of the user can be interpreted without the physical action occurring. In other words, we can use someone’s thoughts to infer the action they are trying to perform and translate it electronically, mapping thoughts to actions on a screen. This offers an affordable, non-intrusive way to enable disabled individuals to control electronic software and hardware.

Emotiv Epoc Headset


Thus far, most state-of-the-art assistive technology using brain-computer interfaces has required implants, which involve surgery on the brain [1]. Other technology for detecting these wave patterns has been extremely expensive, bulky, immobile (restricting where it can be used) and specialised, hindering the effective development of the area [2]. Recent advances, however, have seen the release of an affordable prototype EEG headset (www.emotiv.com). The device is still in its early stages and, as the accompanying software is currently limited, is sold mostly to developers and researchers who build their own software for it. It is non-intrusive, safe, wireless and affordable, making it an effective means of communication between a disabled person and the outside world. It can also track facial expressions (smiles, winks etc.) and head movement. It currently recognises basic thought actions such as push, pull, rotate and lift, and allows developers to explore and decipher new brain patterns.

 

The project aims to use human thought to control computer actions. These actions would provide the ability to control basic software, such as a web browser or document reader, thus mimicking navigation. Currently, the headset recognises basic thought actions such as push and pull. We are mapping these to computer actions, such as scrolling up and down and selecting the next item on the screen, and testing how they can help individuals with motor disabilities control basic software such as a web browser. An example prototype is in early development.
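As a rough illustration of this thought-to-action mapping, the dispatch could look something like the sketch below. All function and event names here are hypothetical; the actual Emotiv SDK exposes a different API, and the real prototype's code may differ.

```python
# Hypothetical sketch: dispatching detected thought actions ("push", "pull",
# "rotate") to on-screen reactions. Names are illustrative only.

def scroll_down():
    return "scrolled down"

def scroll_up():
    return "scrolled up"

def select_next():
    return "selected next item"

# Map each recognised thought action to a computer action.
COMMAND_MAP = {
    "push": scroll_down,
    "pull": scroll_up,
    "rotate": select_next,
}

def handle_event(detected_action, confidence, threshold=0.7):
    """Trigger the mapped reaction only when the classifier's confidence is
    high enough, to avoid spurious scrolling from noisy EEG detections."""
    if confidence < threshold:
        return None
    handler = COMMAND_MAP.get(detected_action)
    return handler() if handler else None
```

The confidence threshold is one plausible way to trade responsiveness for robustness: a lower threshold reacts faster but risks acting on misclassified brain activity.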

Update (03/01/2014):

The first working prototype is complete. TAMI Reader (another suggested name is Mind Reader) is a bespoke document reader created to integrate basic reader functions with brain-computer control. For this reason a minimalistic design is used, to give the user the best control possible. Although the device allows several actions to be detected, the difficulty rises sharply with each additional action, both for the user trying to produce different thoughts accurately and for the software trying to detect the intended actions.

In order to distinguish between what the user does and what the system does, we use the term ‘action’ to denote user actions and the term ‘reaction’ to denote the resulting system behaviour. Currently TAMI Reader supports two reactions, namely [Next Page] and [Previous Page]. These reactions can be triggered by predefined user actions. Pilot testing with participants showed that the easiest actions to map to these two reactions are ‘smile’ and ‘raise brows’.
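A minimal sketch of this action/reaction scheme is shown below. The class and method names are hypothetical, not TAMI Reader's actual code; it simply illustrates how the two facial-expression actions could drive the two page-turning reactions.

```python
# Hypothetical sketch of TAMI Reader's action -> reaction mapping:
# the 'smile' action triggers [Next Page], 'raise brows' triggers
# [Previous Page]. Page numbers are clamped to the document's bounds.

class Reader:
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.page = 1  # start on the first page

    def next_page(self):
        self.page = min(self.page + 1, self.num_pages)

    def previous_page(self):
        self.page = max(self.page - 1, 1)

    def on_action(self, action):
        """Translate a detected user action into the mapped reaction."""
        reactions = {
            "smile": self.next_page,
            "raise brows": self.previous_page,
        }
        reaction = reactions.get(action)
        if reaction:
            reaction()

r = Reader(num_pages=10)
r.on_action("smile")        # -> page 2
r.on_action("raise brows")  # -> back to page 1
```

Keeping the mapping in a small table mirrors the minimalistic design goal: adding a third reaction later would be a one-line change, while unknown or misdetected actions are simply ignored.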

References

[1] S. Darmanjian, S. P. Kim, M. C. Nechyba, S. Morrison, J. Principe, J. Wessberg and M. A. L. Nicolelis, “Bimodal brain-machine interface for motor control of robotic prosthetic,” Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), vol. 4, pp. 3612–3617, 27–31 Oct. 2003.

[2] Proc. Natl. Acad. Sci. USA, vol. 101, no. 51, pp. 17849–17854, 21 December 2004 (published online 7 December 2004).