>>Tan Le: Good morning. My name is Tan Le, and I'm here to talk to you about brain-computer interface technology. Up until now, our communication with machines has always been limited to conscious and explicit forms, whether it is something we do every day, like turning on the lights with a switch, or something more complex, like programming. We have always had to give a command, or even a series of commands, to a machine in order for it to do something for us.
Communication between people, on the other hand, is a lot more complex and also a lot more interesting, because we take into account much more than what is explicitly expressed. We observe facial expressions and body language, and we can intuit feelings and emotions from our dialogue with one another. Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do but can also observe and respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience?
And whilst this may sound like a pretty straightforward idea, the task wasn't easy, for two main reasons. First, the detection algorithms. Our brain is made up of billions of active neurons. When these neurons interact, the chemical reaction emits an electrical impulse, which can be measured. The majority of our functional brain is distributed over the outer surface layer of the brain, and to increase the area available for mental capacity, the brain's surface is highly folded. This cortical folding presents a significant challenge for interpreting surface electrical impulses: even though a signal may come from the same functional part of the brain, by the time the structure has been folded, its physical location varies greatly between individuals, even identical twins.
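One practical consequence of that individual variation is that a system like this has to learn each user's own signal patterns before it can interpret them. The following is a minimal, illustrative sketch of per-user calibration, not the speaker's actual algorithm: it assumes a 128 Hz sampling rate, uses simple per-channel band-power features, and fits a nearest-centroid classifier to a user's labeled calibration windows.

```python
import numpy as np

FS = 128  # assumed sampling rate in Hz (not stated in the talk)

def band_power(window, lo, hi, fs=FS):
    """Mean spectral power in [lo, hi) Hz for each channel of a
    (samples x channels) window, computed with the FFT."""
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean(axis=0)

def features(window):
    """Alpha (8-13 Hz) and beta (13-30 Hz) power per channel."""
    return np.concatenate([band_power(window, 8, 13),
                           band_power(window, 13, 30)])

def calibrate(labeled_windows):
    """Fit one centroid per mental state from a user's own
    calibration data: a list of (label, window) pairs."""
    centroids = {}
    for label in {lbl for lbl, _ in labeled_windows}:
        feats = [features(w) for lbl, w in labeled_windows if lbl == label]
        centroids[label] = np.mean(feats, axis=0)
    return centroids

def classify(window, centroids):
    """Assign a new window to the nearest calibrated centroid."""
    f = features(window)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

# Demo on synthetic 14-channel data: two states that differ in amplitude.
rng = np.random.default_rng(0)
train = [(state, scale * rng.standard_normal((2 * FS, 14)))
         for state, scale in (("relaxed", 1.0), ("focused", 2.0))
         for _ in range(5)]
model = calibrate(train)
print(classify(2.0 * rng.standard_normal((2 * FS, 14)), model))  # expected: focused
```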
The other challenge we face is in the device for collecting the brain waves. EEG measurements typically involve a hairnet with an array of sensors. A technician will place the electrodes onto your scalp using a conductive gel or paste, and only after preparing the scalp by light abrasion. So, as you can imagine, it is not the most comfortable process.
What we have been able to come up with is a 14-channel, high-fidelity EEG acquisition system. It requires no scalp preparation and no conductive gel or paste, and it takes only a few minutes to put on and for the signals to settle. It is also wireless, so you have the freedom to move around. And compared to the tens of thousands of dollars that you would normally pay for a conventional EEG system, this device costs only several hundred dollars.
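To make the acquisition side concrete, here is a hypothetical sketch of how an application might collect analysis windows from such a headset. The talk specifies 14 channels and a wireless link; the 128 Hz rate, the read_sample() interface, and the two-second window length are assumptions for illustration.

```python
import numpy as np

FS = 128            # assumed sampling rate, Hz
N_CHANNELS = 14     # channel count given in the talk
WINDOW_SECONDS = 2  # assumed analysis window length

def read_sample():
    """Placeholder for one sample arriving over the wireless link;
    here we just simulate 14 channels of noise (in microvolts)."""
    return np.random.randn(N_CHANNELS)

def windows():
    """Group the incoming stream into fixed-size analysis windows,
    each of shape (FS * WINDOW_SECONDS, N_CHANNELS)."""
    buf = []
    while True:
        buf.append(read_sample())
        if len(buf) == FS * WINDOW_SECONDS:
            yield np.array(buf)
            buf = []

# Pull a few windows off the (simulated) stream.
for i, window in zip(range(3), windows()):
    print(f"window {i}: shape = {window.shape}")
```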
In terms of the algorithms, our objective is to mirror more closely the way that humans interact with each other. So we are now able to detect facial expressions, emotional experiences, and cognitive intent, which is your ability to manipulate an object simply by thinking about it.
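An application would typically consume those three detection families as a stream of events. The sketch below is one plausible shape for such an interface, not the actual product API: an app registers callbacks per family, and each detector result is dispatched to the matching handler, which is how, say, a game could drive an avatar.

```python
from typing import Callable, Dict

# Handler registry: detection family -> callback.
handlers: Dict[str, Callable[[str], None]] = {}

def on(family: str, handler: Callable[[str], None]) -> None:
    """Register a callback for one detection family."""
    handlers[family] = handler

def dispatch(family: str, value: str) -> None:
    """Route a detector result to the registered callback, if any."""
    if family in handlers:
        handlers[family](value)

# A game might mirror your expression and act on your intent.
on("expression", lambda v: print(f"avatar plays the '{v}' animation"))
on("intent", lambda v: print(f"avatar performs action: {v}"))

dispatch("expression", "smile")  # e.g., emitted by the expression detector
dispatch("intent", "push")       # e.g., emitted by the intent detector
```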
And as you can imagine, there can be many possible application areas for this new form of interface technology. In games and virtual worlds, for example, you can use your facial expressions naturally and intuitively to control an avatar, or even experience the fantasy of magic by being able to control the universe with your mind. In advertising, market research, or usability testing, you can gain true insight into how people are responding to, and how engaged they are with, the material being presented to them.
Another area that we're particularly excited about is being able to use this technology in emotion-guided search. Many of us now have thousands of songs, photos, and video clips in our personal collections, and I don't know about you, but I find it impossible to locate a photo I took three or four years ago in a volume whose files are labeled starting with "DSC" or "IMG." It is impossible. So by recording your emotional state while you are listening to music, viewing a photo album, or just watching a video clip, you could create tags for these emotional experiences, so that they can be indexed and used later on: music can self-select based on your mood, or, by recalling an experience, you can find the photo or the segment within the video clip that you were looking for. That's pretty fun.
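A minimal sketch of that tagging-and-indexing idea follows. It assumes the headset supplies an emotional-state label while you experience each item (that detection is the hypothetical part); the index itself is just a mapping from emotion tags to media items, queried by mood instead of by filename.

```python
from collections import defaultdict

# Emotion tag -> media items experienced under that emotion.
index = defaultdict(list)

def tag(item: str, emotion: str) -> None:
    """Record the emotion sensed while experiencing this item."""
    index[emotion].append(item)

def find_by_mood(emotion: str) -> list:
    """Retrieve items by the feeling they were tagged with."""
    return index[emotion]

# Tags as they might be produced during normal listening/viewing.
tag("IMG_2031.jpg", "joy")      # sensed while viewing the photo
tag("beach_trip.mp4", "calm")   # sensed while watching the clip
tag("song_17.mp3", "joy")       # sensed while listening

print(find_by_mood("joy"))      # ['IMG_2031.jpg', 'song_17.mp3']
```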
The other thing we can do is apply this technology to actual objects in our physical world, such as controlling the movement of a robot or a prosthetic limb. Another area is smart home or smart office integration, where you can control everything from the ambient environment to the lighting, the sound, and the climate control. It acts like another smart remote control, one that senses your body.
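One way to picture that "smart remote control" is as a routing table from detected intents or states to device actions. Everything below is a hypothetical illustration: the device classes and the intent/state labels stand in for whatever the headset and the home actually expose.

```python
class Lights:
    def dim(self):
        print("lights: dimming")

class Thermostat:
    def cooler(self):
        print("thermostat: lowering temperature")

class RobotArm:
    def push(self):
        print("robot arm: pushing forward")

# Routing table: (detection kind, value) -> device action.
ACTIONS = {
    ("intent", "push"): RobotArm().push,
    ("intent", "dim"): Lights().dim,
    ("state", "warm"): Thermostat().cooler,
}

def handle(kind: str, value: str) -> None:
    """Forward a headset detection to the mapped device, if any."""
    action = ACTIONS.get((kind, value))
    if action:
        action()

handle("intent", "push")  # e.g., the user imagines pushing an object
handle("state", "warm")   # e.g., signals suggest the user is too warm
```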
And, finally, there are opportunities to use this in life-changing applications as well. We've seen some transformative effects on people's lives when this technology is used in the context of rehabilitation. And, you know, we are really only scratching the surface of what is possible with this technology today, and we hope that you can engage with us in a dialogue and help us to shape where this technology can go from here. Thank you very much.
[ Applause ]