Since its inception, Google Glass has demonstrated the potential to support a wide range of applications in the field of brain-computer interface (BCI) technology, and it’s about to go one step further. According to a report at Neurogadget, a team at Kennesaw State University’s BrainLab has developed a prototype that will allow a Glass user to interface with and give commands to Google Glass using evoked brain responses rather than swipes, nods or voice commands.
BrainLab’s Executive Director Adriane Randolph says that although the device picks up brain waves and sends feedback to Google Glass, it differs from similar BCI systems: instead of reading input from a continuous brainwave (such as alpha waves), BrainLab captures and reads a brain response known as the P300, giving the user more control.
How P300 Works
A P300 is a brain response tied to decision making that occurs as a reaction to a stimulus. In BrainLab’s BCI system, the user is shown a set of characters that flash in random order, and he or she focuses on one in particular. When the chosen character flashes, the user’s P300 response kicks in roughly 300 milliseconds after the flash. The response is picked up by the computer and linked to the appropriate command.
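To make the idea concrete, here is a minimal sketch of the general P300 selection scheme described above, not BrainLab’s actual implementation: each symbol is flashed repeatedly, the EEG epoch following each flash is scored for a P300-like deflection, and the symbol whose flashes evoke the strongest average response is taken as the user’s choice. The symbol set, the scoring function, and the simulated EEG are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical symbol set; a real system would use calibrated EEG epochs
# and a trained classifier (e.g. LDA) rather than this toy scoring rule.
SYMBOLS = list("ABCDEFGHIJ")

def p300_score(epoch):
    """Score how strongly an EEG epoch resembles a P300 response.
    Toy rule: mean amplitude in a window ~250-400 ms after the flash."""
    return epoch[250:400].mean()

def select_symbol(get_epoch, n_rounds=10):
    """Flash every symbol in random order for several rounds; pick the
    symbol whose flashes produce the strongest cumulative P300 score."""
    scores = np.zeros(len(SYMBOLS))
    for _ in range(n_rounds):
        order = np.random.permutation(len(SYMBOLS))  # random flash order
        for idx in order:
            epoch = get_epoch(SYMBOLS[idx])  # EEG samples around this flash
            scores[idx] += p300_score(epoch)
    return SYMBOLS[int(np.argmax(scores))]

# Toy simulation: the "user" attends to 'G'; its flashes carry an extra
# positive deflection around 300 ms, mimicking a P300.
def fake_epoch(symbol, target="G"):
    eeg = np.random.randn(600) * 0.5      # background noise, 1 sample per ms
    if symbol == target:
        eeg[280:380] += 2.0               # simulated P300 bump
    return eeg

print(select_symbol(fake_epoch))  # usually prints 'G'
```

In a working speller, the selected symbol would then be mapped to a command or appended to the message the user is composing.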
The BrainLab technology takes Google Glass to a new level, opening possibilities for people with locked-in syndrome (LIS), a condition in which the sufferer is unable to move or speak due to complete paralysis of virtually all voluntary muscles and is often able to communicate only by blinking an eye. For people with LIS, this application offers a non-invasive way to use BCI as a communication tool.
Randolph has been researching BCIs for more than a decade and has been director of the KSU BrainLab since its founding in 2007. She is dedicated to using research to help improve the quality of life for people with severe motor disabilities.
Image credit: xombit.com