What we're trying to do is map human motion onto machines so that they have a certain
level of autonomy, so that you don't have to instruct them in minute detail.
There's a lot of non-verbal communication in dance, and people who become good at it
are able to function as a team.
And there is a great deal of interest in having mixed teams of humans and robots that go out
and do tasks such as rescue operations.
When a nuclear power plant gets into trouble, there are things that you don't really want
to have humans do. And so what you'd like to do is send out a team.
So basically what we are doing here is analyzing the non-verbal communication
between the dancers during a salsa performance and applying that to a robotic platform.
The first challenge is to come up with a nice representation of the dance.
What we have used for that is the Xbox Kinect sensor.
It sees you as a skeleton, and from that skeleton view it observes what is being performed
by a human and evaluates it against the metrics that we have.
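The evaluation step described here could be sketched roughly as follows. This is an illustrative outline, not the researchers' actual method: the joint names, the use of mean Euclidean distance as the metric, and the `pose_distance`/`matches` helpers are all assumptions for the sake of the example.

```python
import math

# Hypothetical joint names in a simplified Kinect-style skeleton
# (the real Kinect SDK tracks 20 joints; this subset is illustrative).
JOINTS = ["head", "shoulder_l", "shoulder_r", "hand_l", "hand_r"]

def pose_distance(observed, reference):
    """Mean Euclidean distance between corresponding 3-D joints.

    observed / reference: dicts mapping joint name -> (x, y, z).
    A small distance means the observed pose closely matches
    the reference gesture.
    """
    total = 0.0
    for joint in JOINTS:
        total += math.dist(observed[joint], reference[joint])
    return total / len(JOINTS)

def matches(observed, reference, tolerance=0.15):
    """True if the observed pose is within `tolerance` (metres, assumed)
    of the reference pose."""
    return pose_distance(observed, reference) <= tolerance
```

In practice a gesture is a sequence of poses over time, so a real system would compare whole trajectories (for example with dynamic time warping) rather than single frames, but the per-frame comparison above is the basic building block.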
The ultimate goal is to understand human reaction to gestures and how machines, maybe, should
react to gestures.
And then you go on to the things where you want to send teams of robots and humans out
to do significant things that are not recreational: things that have to do with rescue, or with operations
in hazardous environments.
I think it can be very exciting.