[ Silence ]
>> [Background music] My research focuses on low-power sensor interfaces,
and I'm particularly interested in biomedical applications
of these low-power sensor interfaces.
One example of a biomedical application is cochlear implants.
The reason why low-power circuits are necessary for cochlear implants is that they are small
in size and they have to be partially implanted and partially worn by the user.
So whatever electronics are in this device must be very,
very efficient in the way they consume power.
There are 30 million Americans who are currently suffering from sensorineural hearing loss,
and those who have very severe sensorineural hearing loss have no recourse other
than to use a cochlear implant.
What a cochlear implant is, it's a neuroprosthetic device
that is placed inside the user's skull. It captures audio signals, converts them
into electrical signals, bypasses the ear and the damaged cochlea,
and then directly stimulates the auditory nerve.
Now, cochlear implants, along with deep-brain stimulators, are actually among the most
successful neuroprosthetic devices to date.
But there is one confounding factor that they face, and that is
that in busy auditory environments, that is, in environments where there is a lot
of background noise, users of cochlear implants find it difficult
to have intelligible conversations. This is a problem that has been faced
by cochlear implant users and designers for the past 30 years.
We are at the stage today where we know enough about psychoacoustics
and about signal processing that it's actually possible to separate a sound of interest
from one that is not so interesting.
The problem here is that the current auditory processing schemes
that we use require huge amounts of computational power.
So the idea of my research is to take the cochlear implant and to couple it
with the user's brain. The user just has to think that he wants to listen to, say, Bob
versus Alice, and what the cochlear implant will do is pick up on this attention,
this change in focus of attention, redirect itself towards Bob, for example,
enhance Bob's voice, and then suppress all of the background noise.
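To make the idea concrete, here is a minimal sketch, in Python with NumPy, of what attention-driven enhancement could look like once the sources are separated: a decoded attention label selects which source to boost and which to attenuate before mixing. The function name, the gain values, and the "bob"/"alice" labels are all illustrative assumptions, not the implant's actual processing chain.

```python
import numpy as np

def attend_and_mix(sources, attended, gain_up=2.0, gain_down=0.1):
    """Boost the attended source and attenuate the rest, then mix.

    sources:  dict mapping a talker name to a 1-D signal array
    attended: the talker name decoded from the user's attention
    """
    mixed = np.zeros_like(next(iter(sources.values())), dtype=float)
    for name, signal in sources.items():
        # Hypothetical gains: enhance the attended talker, suppress others.
        gain = gain_up if name == attended else gain_down
        mixed += gain * np.asarray(signal, dtype=float)
    return mixed

# Example: two talkers, attention decoded as "bob".
t = np.linspace(0.0, 1.0, 8, endpoint=False)
bob = np.sin(2 * np.pi * 2 * t)    # stand-in for Bob's voice
alice = np.sin(2 * np.pi * 3 * t)  # stand-in for the background talker
out = attend_and_mix({"bob": bob, "alice": alice}, attended="bob")
```

The hard parts, of course, are the separation itself and decoding attention from neural signals; this sketch only shows the final gain-steering step.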
People who are into neuroscience and neurophysiology know that neurons compute
and [inaudible] in a non-linear fashion.
On the other hand, most computational mechanisms that are man-made are actually linear,
and the reason why they are is basically
that linear computation is much more mathematically tractable
than non-linear computation.
So in a nutshell, it's easier for us to understand linear computation than it is
to understand non-linear computation.
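The tractability of linear computation comes down to superposition: for a linear map f, f(a*x + b*y) equals a*f(x) + b*f(y), so the response to any input can be built from responses to simple ones. A neuron-like nonlinearity breaks this. The sketch below, using a scaling as the linear example and a ReLU-style rectifier as a stand-in for a neural nonlinearity (both illustrative choices), checks the superposition property numerically:

```python
import numpy as np

def linear_op(x):
    return 3.0 * x            # scaling by a constant: a linear computation

def rectify(x):
    return np.maximum(x, 0)   # rectification: a simple non-linear computation

x, y = np.array([1.0, -2.0]), np.array([-1.0, 4.0])
a, b = 2.0, 3.0

# Superposition holds for the linear operation...
lin_holds = np.allclose(linear_op(a * x + b * y),
                        a * linear_op(x) + b * linear_op(y))

# ...but fails for the rectifier: rectify([-1, 8]) = [0, 8],
# while 2*rectify(x) + 3*rectify(y) = [2, 12].
nonlin_holds = np.allclose(rectify(a * x + b * y),
                           a * rectify(x) + b * rectify(y))
```

Because superposition fails, there is no small set of basis responses that characterizes a non-linear system, which is one way to see why new analysis tools are needed.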
Now, because this is a departure from the traditional way of digital computation,
there are currently no tools and no established methodologies for actually doing this sort
of work, and what I'm doing with the Newcomb Award is
to develop new computational tools for exploring the space of non-linear dynamics.