Tan Le: A headset that reads your brainwaves
Human communication with machines, such as turning on a light with a switch or programming a robot, has always been limited to conscious, explicit input: the machine requires a command, or even a series of commands, in order to work. Communication among people, on the other hand, is far more complex, because facial expressions, body language, and emotions must be considered along with the actual message.
The main vision here is to bring this human-to-human style of interaction into human-computer interaction, so that computers can respond to an individual's body language, facial expressions, and emotions. This can be achieved by reading the signals produced by the brain, which is the centre of control and experience.
But this is not an easy task, for two main reasons. The first is signal detection. The human brain is made up of billions of neurons; when they interact, chemical reactions occur and electrical impulses are emitted, and it is these impulses that need to be measured. The surface of the brain is highly folded, which increases the effective surface area available for mental functioning. However, the folding of each individual's cortex is different, much like a fingerprint, so the physical source of a given impulse varies from person to person and the surface signals show no consistency. An algorithm therefore needs to be created that effectively unfolds the cortex, so that the signals can be detected and interpreted.
The second is the device itself. The traditional EEG, used for observing brainwaves, is time-consuming to set up: it consists of a hairnet of sensors, with electrodes placed on the scalp using a conductive gel. Its other disadvantage is that it is very expensive, costing tens of thousands of dollars.
The EEG device presented here is a 14-channel, high-fidelity system. It is advantageous because it requires no conductive gel, is wireless, and takes only a few minutes to put on and for the signals to settle. It is also far cheaper than a traditional EEG system.
To experiment with this cognitive suite, the first step is to create a new profile, i.e. to "add a user". The next step is to record a neutral signal. This creates a baseline for the individual's brain, since every brain is different from every other. The process takes eight seconds, after which the system is ready to record movement-based actions.
The individual, or subject, imagines the object on the screen moving while a progress bar scrolls across the screen. Nothing happens for the first eight seconds while the action is trained, but once it is accepted, the cube responds live. This works even for imagined actions that have no physical analogue; the subject simply has to visualize something that does not exist in the physical world.
This is possible thanks to the leveling system in the software. With repeated use, the system becomes more familiar with the individual's signals and is thus able to differentiate increasingly distinct thoughts.
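The training flow described above (create a profile, record an eight-second neutral baseline, then train actions against that baseline) can be sketched in a few lines of Python. To be clear, this is not the real Emotiv SDK: the `CognitivTrainer` class, its signal representation, and its nearest-deviation classifier are assumptions made purely for illustration.

```python
import statistics


class CognitivTrainer:
    """Illustrative sketch of the profile/baseline/action training flow.

    Not the real Emotiv API: the class, the per-second sample
    representation, and the classification rule are all assumptions.
    """

    BASELINE_SECONDS = 8  # the neutral recording lasts eight seconds

    def __init__(self):
        self.profiles = {}

    def add_user(self, name):
        # Step 1: create a new profile for this individual.
        self.profiles[name] = {"baseline": None, "actions": {}}

    def record_neutral(self, name, samples):
        # Step 2: record a neutral signal to establish a per-user
        # baseline, since every brain produces different surface signals.
        if len(samples) < self.BASELINE_SECONDS:
            raise ValueError("need at least one sample per second for 8 s")
        self.profiles[name]["baseline"] = statistics.mean(samples)

    def train_action(self, name, action, samples):
        # Step 3: record a movement-based action; store how far its
        # mean signal deviates from the neutral baseline.
        profile = self.profiles[name]
        profile["actions"][action] = (
            statistics.mean(samples) - profile["baseline"]
        )

    def detect(self, name, samples):
        # Classify a new signal as the trained action whose learned
        # deviation is closest to the observed deviation.
        profile = self.profiles[name]
        deviation = statistics.mean(samples) - profile["baseline"]
        return min(profile["actions"],
                   key=lambda a: abs(profile["actions"][a] - deviation))
```

The "leveling" idea maps onto the fact that each additional trained action gives the classifier another reference deviation to compare against, so repeated training makes detections more distinct.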
There are many applications for this new system. In games, for example, facial expressions can naturally control an avatar, letting the individual experience the fantasy of magic and control over the world.
It can also be applied in the real world, at home: opening or closing curtains, or turning lights on and off. Facial expressions can even be used to control an electric wheelchair.
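The home-automation idea boils down to dispatching each detection event to a real-world action. A minimal sketch of such a dispatcher follows; the command names ("smile", "pull", etc.) and the in-memory device state are hypothetical stand-ins, not any real headset or smart-home API:

```python
def make_controller():
    """Return a callback that maps detection events to home actions.

    Purely illustrative: real integration would receive events from a
    headset SDK and drive actual devices instead of a state dict.
    """
    state = {"lights": False, "curtains": "closed"}

    handlers = {
        "smile": lambda: state.update(lights=True),       # lights on
        "frown": lambda: state.update(lights=False),      # lights off
        "pull": lambda: state.update(curtains="open"),    # open curtains
        "push": lambda: state.update(curtains="closed"),  # close curtains
    }

    def on_detection(command):
        # Dispatch each detection to its action; unknown commands are
        # ignored rather than raising, since detections can be noisy.
        handler = handlers.get(command)
        if handler:
            handler()
        return dict(state)  # return a snapshot of the current state

    return on_detection
```

Ignoring unknown commands is a deliberate choice here: brain-signal detections are inherently noisy, so an unrecognised event should leave the home state unchanged.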
Finally, let us hope the technology is developed further from here, with the involvement of developers and researchers and with the community's input.
This blog does not contain plagiarised material.