My name's Helen Bowyer. The project's called SiSi, which stands for Say It, Sign It. It
originally came about from working with a deaf colleague. I learned sign language myself and was
fairly OK in a one-to-one conversation, but wasn't so good when I was trying to interpret
for other people.
for other people. With a few colleagues we sat down and said we must be able to do something
to help out here. We were already working with the University of East Anglia, and they
provided the avatar that you see behind me here, and we also had some voice-to-text technology,
but there was a big gap in the middle of being able to take spoken word all the way through
to automatic sign language. So we put a proposal in, and we had 4 students come to work for
us for the summer, working over 12 weeks. A lot of people thought we were insane, that it
couldn't be done, that it wasn't possible to do something in 12 weeks, but the students did a
fantastic job and came up with the initial prototype, and we've carried on developing it from there.
So in 12 weeks they managed to take spoken English all the way through to sign language
for fairly basic sentences: we take the words in the grammar and syntax of spoken English
and reorder them into the grammar and syntax that's natural for British Sign Language.
Q: Can you give us an example of that?
Sure. So the nice simple example that you can see here is if I say "my name is Helen".
That's a fairly easy sentence in English, but when we sign it, what she actually
signs is "name me Helen". So even for a very simple sentence it's been completely reordered,
and that's the way that you would have it when you're signing it.
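As a toy illustration of that reordering, here is a minimal Python sketch. It handles only the one "my name is Helen" pattern; the real SiSi pipeline parses full English grammar, so treat this as a flavour of the idea rather than the actual implementation.

    # Toy English-to-BSL-gloss reordering. Only the pattern
    # "<possessive> <noun> is <complement>" is handled.
    POSSESSIVES = {"my": "ME", "your": "YOU", "his": "HE", "her": "SHE"}

    def english_to_bsl_gloss(sentence):
        words = sentence.lower().rstrip(".").split()
        if len(words) == 4 and words[0] in POSSESSIVES and words[2] == "is":
            noun, complement = words[1], words[3]
            # BSL fronts the topic (NAME), drops the copula "is",
            # and places the possessive after the noun: NAME ME HELEN.
            return f"{noun.upper()} {POSSESSIVES[words[0]]} {complement.upper()}"
        raise ValueError("pattern not handled by this toy sketch")

    print(english_to_bsl_gloss("My name is Helen"))  # -> NAME ME HELEN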
Q: And it does facial expressions as well?
Yes, she does facial expressions as well. Signing isn't just how you move your hands;
a lot of it is to do with expression. So a nice example here is if I get her to say "exciting"
- you can see she smiles there as well. So that's just as important. When you say things
like "I understand" you'll drag a finger across your forehead; if you don't understand, it's
exactly the same sign but you shake your head.
Q: Does she do that one?
I'm not sure if she's done that one, I'll do that one for you! But it's a nice example
of just how much is involved in your facial expressions, and when you're learning to sign
one of the things you have to get good at is using your facial expressions more.
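That "same hands, different face" point is typically reflected in how signing data is represented: the manual channel (handshape, location, movement) and the nonmanual channel (face, head) are stored separately. Here is a hypothetical Python sketch of that separation; the field names are illustrative, not taken from the SiSi system.

    from dataclasses import dataclass

    @dataclass
    class Sign:
        gloss: str
        manual: str     # handshape / location / movement
        nonmanual: str  # facial expression and head movement

    # Identical manual channel; the nonmanual channel flips the meaning.
    understand = Sign("UNDERSTAND",
                      "index finger drawn across forehead", "neutral face")
    not_understand = Sign("NOT-UNDERSTAND",
                          "index finger drawn across forehead", "head shake")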
Q: So you were saying that it's quite processor intensive because of the underlying skeleton
of the avatar?
So the avatar's actually based on a kinematic skeleton, and it's important that it's got
that level of detail, because the points where you touch, which finger you touch, and
whereabouts on your finger you touch are really key, as is being able to control how much
a knuckle's bent. The avatar itself is controlled by SiGML, a form of XML that allows you
to control the skeleton of the avatar and exactly how and where it touches.
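For a flavour of what that looks like: SiGML wraps HamNoSys sign notation in XML, with separate manual and nonmanual channels per sign. The snippet below follows that general pattern, but the specific elements are illustrative and haven't been checked against the UEA tools, so treat it as a sketch rather than valid SiGML.

    <sigml>
      <hns_sign gloss="UNDERSTAND">
        <hamnosys_nonmanual>
          <!-- facial expression / head movement channel -->
        </hamnosys_nonmanual>
        <hamnosys_manual>
          <hamfinger2/>    <!-- handshape: extended index finger -->
          <hamforehead/>   <!-- location: forehead -->
          <hamtouch/>      <!-- contact with that location -->
        </hamnosys_manual>
      </hns_sign>
    </sigml>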
Q: So will that make it difficult to roll it out to mobile devices initially?
At the moment we haven't got it running on a mobile device, but we'd absolutely
like to get it running on more devices - on some phones, on some tablets, maybe even built
into TV set-top boxes, which was an initial idea we had. We had some interesting feedback from
the deaf community themselves as to where and when they'd like to see that kind of technology
used, which has been really key for us. From the start we've been involved with the RNID,
the Royal National Institute for Deaf People, making sure that the ideas that we're coming
up with are actually going to be beneficial for them - not just us coming up with cool
new technology ideas, but actually stuff that they think is going to be right for their community.
Q: So a future thing would be going back the other way. Tell us a little bit about that.
So yes, I'd love to have the answer to taking it the other way. Obviously if I'm here as
a deaf person, you can speak to me, and potentially in the future I can hold my phone up, which
can process your voice, and see the avatar here. But now I want to say something back to
you, and my natural language is sign language, so we need a way of you being able to understand
my sign language. There are some great advances going on at the moment, either with
a pair of gloves that can be recognised with a camera, or some sensors that go on your
arm and actually pick up the electrical signals that your brain sends to your muscles. Those
aren't so good if we're out and about on the street and we just bump into each other, so
that's definitely an area for future research and something we'd love to solve.