JEFF: Today we're very happy to have Professor Illah
Nourbakhsh who's Associate Professor of Robotics at the
Robotics Institute at Carnegie Mellon University.
He was on leave in 2004 at NASA Ames where he served as
the Robotics Group Lead.
He received a PhD in computer science from Stanford
University in 1996.
He's Co-founder of the Toy Robotics Initiative at the
Robotics Institute at Carnegie Mellon.
Today his talk is on Robotics and Community for Learning and
Exploration.
In his talk he will describe two major community building
projects at the Toy Robotics Initiative at Carnegie
Mellon's Robotics Institute.
Illah?
ILLAH NOURBAKHSH: Thanks.
Thank you, Jeff.
Well thank you for coming.
Let me just begin and jump in to it and explain to you what
I'll be doing.
Actually, let me go back one slide first. So I come from
the Robotics Institute, and for those of you who aren't
aware of what the Robotics Institute is, it's kind of
interesting.
It's a full-fledged department at Carnegie Mellon University.
So it's a degree-granting program, just like the
Computer Science Department, Language Technologies
Institute, and Human Computer Interaction Institute.
It's 70 faculty large, so there's 70
professors in just robotics.
So it's a bit of an absurd place.
If you find yourself in Pittsburgh, you should visit.
There are more robots there than you can possibly imagine.
And there are many here who can vouch for that, actually.
What I'm going to talk about today are two projects that
are partly funded by Google, so thank you to
Google, and by others.
And they're examples of a direction we've been going
into most recently.
We have been working on educational robotics where
we're really talking about the educational value of a robot
and a person working together.
More recently, we've been interested in the idea of
educational robotics where, in fact, it is through the glue
of robotics that we create communities of expertise, and
communities of practice.
So we're very interested in the idea of creating
communities of human beings where robotics is really just
a motivational technological tool.
Thank you, Alec.
We have robots.
Naturally, a robotics talk without real robots is like a
fish without a bicycle, so here's two robots.
So I'm going to talk about these two projects.
Let me jump right into the Global Connection Project, which is the one that has had much more visibility thus far, and some of you have probably heard of that one.
Then I'll talk about Telepresence Robot Kit, which
is a new project.
This is the first real public address on it.
The interesting thing is we have a major release coming up
in March where you can actually build your own robot
as part of that.
So let's start with Global Connection.
Global Connection started with a very, very feel good and
incredibly broad vision.
The idea of getting people to understand each other better,
but doing it using technology, and doing it by creating
community and sharing across people of different cultures.
Now, the place that this came from, sort of the thing that
inspired us, was the work we were doing
on Mars at the time.
So the Mars Rovers were landing, and some of the
people in my group at NASA Ames Research Center were
actually helping the scientists.
What they noticed in helping the scientists was that it was
the images that were drawing the scientists together and
causing really good community building and communication
across a diverse group of scientists.
So in a way, images were becoming a highly explorable
and very interesting
phenomenon for group education.
The other picture I'll show you, and before I go back to
words, is kind of interesting because this is the single,
most popular picture ever taken
during the Apollo Mission.
This is the last picture taken during Apollo Missions.
And the irony, of course, is that this is a
picture not of the moon.
So the most popular picture taken during that whole set of
missions was a picture of us.
So the idea of turning the camera inward, right, of being
able to explore and understand the Earth itself is something
that's always been magical for people.
This was the beginning of the Global Connection Project.
The idea was one way of--
and if you read Gagarin's diaries, or if you read
Armstrong's diaries, or just about anybody else who's been
in space from either side of the fence, what they say is
that once they're in space and they go over the Earth, they
see Earth in a different light, and they feel for the
oneness of the Earth and the idea that political boundaries
are somewhat transient.
So one idea we had was OK, so maybe we should just fund
raise and take everybody that makes political decisions,
say, and send them into space for a while, have
them orbit the Earth.
And that might actually increase global understanding
significantly.
Now, there's a downside to that because we computed that
the amount of carbon monoxide and ozone that you released by
doing that would accelerate global warming significantly.
So that's actually a bad idea.
So we can't put everybody in orbit for a while,
that's a bad idea.
So the next best thing is can we create visualization tools
that help people to understand the Earth better by giving
them a new, more explorable view of the Earth as a whole.
So we wanted images to be the center of this, and we
launched three specific initiatives last year, which
have paid off pretty well.
The first two have paid off really well, and I'm going to
tell you about the motivation at the bottom there.
By the way, I'm going to talk fast and go through the slides
quickly because I want you to be done by
3:50 for obvious reasons.
So first of all, create content.
What we wanted to do is start by having really compelling
professional content, something that people could
really get into because it's professionally done content
that they want to see.
Throw that in a spatial browser in a format that they
can see it, and explore it, and learn more about the world
through it.
And that would be kind of a catalyst for learning more.
Second thing is we wanted to create a technology for
hyperrealistic imaging.
So in a way, if you think about digital cameras and
where they are, we have to turn the gain way, way up on
that, so that you have far better spatial resolution, far
better temporal resolution, in other words, being able to go
back and forth in time, and much, much better dynamic
range, so you can have pictures with lots of
resolution and yet the ability to see across a very broad
range of brightness values.
And then third, we wanted to start to build specific
communities and create communities that are
self-sustaining around all of these topics of interest. So
those are kind of the first three steps.
And this is here because if you think about the driver
behind this project and the other project I'll tell you
about, Aristotelian passions sort of start with imagination
and creativity, the idea that what drives humans and makes
them human is to imagine, and to wonder what
the world is like.
Well, there's two kinds of imagination that Aristotle
talks about.
One is imagination of the natural world--
imagining and wondering what the world is like, and what's
cool about the world.
And the other one is imagination of the creative
world-- imagining what you could do to the world, how
could you change things, how could you
invent something new?
We're trying to do both of these.
So this first project tries to hit on the question of
wondering about the real world.
So what we want to happen to people and the way we want
communities to build is we want people to wonder.
Because you create tools that help them become wondrous
about how rich the world is.
And you want them, by using those tools, to explore the
world and discover things that they didn't know before.
And you want them to discover things that are so compelling,
and rich, and interesting that then they feel
compelled to share.
So that's kind of one trajectory to a
self-sustaining community.
You'll see a different sort of three-tuple in the case of
the Telepresence Robot Kit Project.
So the project we did, long story, very short, is that we
talked to National Geographic and got them excited about it.
We talked to Google Earth, or Keyhole at the time, and got
them excited about it.
And thanks to many, many, many people who said yes and helped
out and promoted funding, we managed to take hundreds of
articles in Africa and geocode them onto
Google Earth for Africa.
So that when you spin the globe in Google Earth to
Africa and zoom in, you can see little boxes.
And of course, when you click on the boxes and zoom in
further, you can see articles that are on National
Geographic online.
So those lead to articles.
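To give a flavor of what geocoding articles onto Google Earth amounts to technically: Google Earth reads KML, so each article essentially becomes a placemark with a latitude, longitude, and a clickable description. Here's a minimal sketch; the coordinates and the URL are made up for illustration, not actual project data.

    # Minimal sketch: emit a KML placemark for a geocoded article so Google Earth
    # can show it as a clickable point. Coordinates and URL are illustrative only.
    from xml.sax.saxutils import escape

    def placemark(name, lat, lon, url):
        # KML uses lon,lat ordering inside <coordinates>.
        return (
            "<Placemark>"
            f"<name>{escape(name)}</name>"
            f"<description><![CDATA[<a href='{url}'>Read the article</a>]]></description>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>"
        )

    articles = [
        ("Megatransect dispatch", -0.39, 11.60, "https://example.org/article1"),  # hypothetical
    ]

    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        + "".join(placemark(*a) for a in articles)
        + "</Document></kml>"
    )

    with open("articles.kml", "w") as f:
        f.write(kml)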
The other thing we did has to do with these little red dots,
and this is kind of interesting.
Mike Faye is an incredible guy.
He's raised actually billions of dollars for World Hunger
Relief now.
He flew a Cessna 182 over Africa and collected about
90,000 images with GPS coordinates that were rough,
not very good because they were coming asynchronously off
a GPS receiver.
Well, we took that data and we geocoded it as well.
A lot of that is now available too.
When you zoom in on those airplanes,
you'll see his pictures.
So you'll see things like--
oh, this is interesting.
This is the only structure that's man-made that is
visible from space, which is a giant conveyor belt.
It turns out the Great Wall is not visible from space, as those of you who know about some websites have found out.
And you'll see things like spice trading bars where
people are actually trading spices, and looking up at the
airplane wondering why there's an airplane flying overhead.
And the level of zooming is pretty astounding.
If you zoom into one of these airplanes on Google Earth,
you'll see a super resolution section within Google Earth.
That super resolution section is there because our team did
such a great job of lining up the high resolution image with
the Google Earth satellite data.
And when you zoom in, of course, you see really good
data like that.
And those of you who have played with this know that you
can get to the point where you can see people carrying a baby
on the back, walking with the shadow cast by a sunset sun,
setting sun-- that's the word.
Well, this flyover stuff led somewhere interesting, which
is that the technology that was developed to make this
kind of thing facile turned out to be very useful when you
were starting to talk about disasters.
So when hurricanes Katrina and Rita happened, and again when the earthquake happened in Pakistan, we were very lucky to be in a position to actually take thousands of much more recent, post-disaster images and overlay them onto Google Earth.
That turned out to be a really useful tool.
These are examples of some maps that we've overlaid onto
Google Earth that are recent maps and helped Disaster
Relief workers and logisticians in Pakistan plan
routes and figure out villages they can go to and how.
I guess there is a story I wanted to tell about disaster
aid very, very briefly.
The story that's interesting to me is that
there was a real demand.
So federal agencies were coming to Google and then coming to us and saying look, please do this overlay for us.
And there was supply.
It turns out there was a whole network of people who wanted
to be part of a community helping with Disaster Relief.
So a lot of people who were looking at the television and
reading the newspapers learning about the disaster
were reading this going I want to help in some way.
How can I possibly help?
So it turns out a number of people were actually acting as
clearinghouses, collecting data from various sources, and
creating a fusion of data from several sources, such as the
map from here, and a word description of a problem at a
specific village, and then locating that using World Wind
or using Google Earth or any other tools.
The last thing, but not least, there was a set of eager technology providers that we've noticed.
So just as an example, very recently we've been approached
by the biggest cell phone company in the world.
What they say is we have--
and I've forgotten the number now--
15 million users who have video cameras and very, very
high bandwidth connections.
How can we make sure that whenever there's a disaster,
anybody with those cell phones can snap pictures and overlay
them automatically into your system.
Because we can triangulate within 10 meters using the
radio towers.
It's a really nice idea.
It's something that we can definitely do by doing some
really interesting image work on the images that they snap
and the satellite images that we have.
So disaster aid seems to be one really good example of
community that has been born completely asynchronously, and
chaotically, and will definitely last. Now, those
were all sort of detours from the original task, which was
we wanted to use images to explore, but we wanted to
create tools so that images are hyperrealistic.
So we want images, to be in a way, much more realistic than
they are today.
So let me kind of paint this idea for you and then show you the results.
So the idea is we want images to contain so much information
that, as I showed you that kind of Aristotelian passion
direction, we want your act of looking at the image to be
filled with wonder.
And we want you to spend so much time with the image that
you discover things in the image nobody
else has seen yet.
So imagine an image that's so rich that it could do that.
And then we want that discovery to be the thing that
leads to sharing, the desire to share it with other people.
So that's what we want to do.
And the thing that we started doing, and again, like everything I'm talking about today, none of this is actually credited to me, this is all the students and other colleagues that I have. I just get to talk about it.
Gigapan is this really neat thing that Randy Sargent came up with, together with our team.
So the idea is it's two very inexpensive stepper motors
that, in a very clever way, are microstepped using a very
inexpensive chip set that's off the shelf.
The result of this is you can take any old digital camera,
attach it to this base, and then over a USB connection,
the computer commands the USB camera to fully zoom in
optically, and then take hundreds of pictures and
stitch them together.
And do this many, many times.
So you can put this on a tripod, put it outside, and do this over and over again over the period of a year, and we have done this.
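A rough sketch of the capture loop a rig like this implies: step the pan and tilt motors through a grid, pause so the rig stops shaking, and trigger the camera at each cell with enough overlap for later stitching. The FakeMotor and FakeCamera classes and the field-of-view numbers below are hypothetical stand-ins, not the actual Gigapan firmware.

    # Sketch of a pan-tilt gigapixel capture loop. FakeMotor and FakeCamera are
    # stand-ins for the real stepper-motor and USB camera control.
    import time

    PAN_STEPS, TILT_STEPS = 20, 10          # grid size; hundreds of frames in total
    OVERLAP = 0.3                           # fractional overlap so frames can be stitched
    FOV_PAN_DEG, FOV_TILT_DEG = 3.0, 2.0    # assumed field of view at full optical zoom

    class FakeMotor:
        def move_to_deg(self, deg):
            pass                            # real code would microstep to this angle

    class FakeCamera:
        def trigger(self):
            return "frame.jpg"              # real code would fire the camera over USB

    def capture_panorama(pan, tilt, camera):
        pan_incr = FOV_PAN_DEG * (1 - OVERLAP)
        tilt_incr = FOV_TILT_DEG * (1 - OVERLAP)
        shots = []
        for row in range(TILT_STEPS):
            tilt.move_to_deg(row * tilt_incr)
            for col in range(PAN_STEPS):
                pan.move_to_deg(col * pan_incr)
                time.sleep(0.1)             # let the rig settle before shooting
                shots.append(camera.trigger())
        return shots

    frames = capture_panorama(FakeMotor(), FakeMotor(), FakeCamera())
    print(len(frames), "frames captured")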
What's exciting about this is the chip set that we have, the board that we have fabbed and evaluated, the microstepped stepper motors, and the mechanism itself all add up to about $40.
Now, if you look at Michael Jones's gigapixel imager, which
he has two or three of here at Google, those are from U-2 spy
planes, and they each cost about $12,000, and there's
very, very few of those in the world.
So the idea here is, this is also a gigapixel imager, it's
not as high quality as that, but it's a lot less than
$12,000, and we can publish an open recipe so that anybody
can build this, and that's kind of exciting.
Let me show you a quick demo just to give you a feel for
what it means to have this kind of explorable image.
So this is an example of an image that's taken by this
thing, and this is pre-stitching, so there's no
stitching technology yet that's run on this, and you
can see that there's a lot of data there.
Explorability in a sense means that you have sort of hyper
resolution.
What I'm interested in is the idea that you can not just zoom in, but zoom in to a degree that's kind of unheard of with a regular picture.
This happens to be about one gigapixel image.
So you could really go in and see the leaves here.
You get a level of resolution that's kind of fun.
So you get the idea with that.
Doing that in all different directions, even in this
backyard of Randy Sargent's in Palo Alto, is kind of fun
because you'll find blueberries and such.
Now, remember I talked about temporal resolution.
I want the idea that pictures become explorable in space,
but also in time.
I have one little example to show you of temporal
resolution over the course of about nine months of data.
This is hard without looking the right way.
I'm playing a mapping game here.
So while I show you that lemon, now let's
roll back in time.
So what you're seeing is the passage of time in reverse,
and you'll see the lemon become a--
I guess it's not called a lime, whatever you call an
unripe lemon, a green lemon.
And it's kind of fun because as you go through this with
greater detail, with greater temporal resolution, you end
up seeing some morning glory weeds-- you can see them right
there in the corner top right.
Then the neighbor pulls the weed out and the leaves all of a sudden wilt.
You see that on the top right.
Anyway, that's the kind of thing where we think it gets exciting to be able to explore and to be able to put a camera like this into all sorts of places.
You can probably imagine for yourself several places you
could put a camera like this right now and collect really
interesting data.
We're working on the stitching technology now.
It turns out you need to do some stitching that's a little
unusual compared to the kind of stitching people normally
do to make this work right.
Mainly because we want it to work really, really quickly.
We want people to be able to do this with
massive amounts of data.
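For reference, generic off-the-shelf stitching already exists; a sketch like the one below, using OpenCV's built-in stitcher, will merge a handful of overlapping frames (the file names are illustrative). The point above is that this generic approach doesn't scale to thousands of zoomed-in frames, which is why the team is writing its own fast stitcher.

    # Sketch: stitch a few overlapping frames with OpenCV's generic stitcher.
    # This is not the Gigapan pipeline itself; file names are illustrative.
    import cv2
    import glob

    frames = [cv2.imread(path) for path in sorted(glob.glob("frames/*.jpg"))]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("stitching failed with status", status)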
So that's the end of a brief introduction to Global
Connection.
Let me tell you what we're up to next so you know how to track this.
For those of you who have been at NASA before, and there's
several in the room who were at NASA before they came here,
you can appreciate how hard this is.
But we got a Space Act Agreement on this, which is
kind of cool.
It's a collaboration now between National Geographic,
Google, CMU and NASA that officially allows NASA to give
us resources--
images, data, servers, and people.
So that's kind of neat, and it's for free.
One exciting thing that's happening is we've done
Africa, as you can see if you go to Google
Earth and spin there.
And by the way, something that blew us away was the idea that Michael Jones and Matt and everybody on the Google Earth team made it a layer that's on by default.
So everybody that has Google Earth, immediately when they
spun to Africa, could see the stuff, and we're really proud
of that, and that's just a really neat thing.
But the good news is that's being expanded.
So you'll be able to see other parts of the world geocoded, and you'll be able to see other kinds of data geocoded, other than the National Geographic data.
The other thing we're trying to do is we're trying to
launch an international infrastructure for disaster
visualization.
It turns out the disaster stuff has worked really well.
We had a period of time where a major flood hit Pittsburgh and the building in which our server is housed, which is serving, I think, a terabyte of this data a day to international relief workers now.
It turns out that this was a big problem because we started getting phone calls from Pakistan.
And it was kind of neat, right, because, OK, they're actually using this stuff.
So now we want to go after the idea of creating
infrastructure for this kind of disaster visualization.
The exciting thing we're doing is hopefully in about three
months is we're going to do an open recipe release of Gigapan
so that you can build one.
So we'll list the digital cameras it supports, and all the source code will be there for driving it, and the stitching software, and the directions for how to build yourself one by going and buying parts from various hardware stores.
Then we're going to give a bunch of Gigapans to
photographers.
We actually have a bunch of Gigapans right now.
We have 15 and we're building 50 more.
So if you're a professional photographer and you want a
Gigapan, talk to me.
Then we're going to create a bunch of different themed
activities that create focused communities, just like the
Disaster Relief community.
That's Global Connection.
Now, let me tell you about the other project.
And what you're going to see is how the other project is kind of a complement to that project.
That project is about wonder and discovery of the natural world and, in a way, of the cultures in the natural world.
This project is about invention.
And here's why we're doing this project.
The problem that exists-- first of all,
this is kind of neat.
Science and engineering enrollments are dropping in
the US, and only in the US.
So welcome to the one place in the world where science and
engineering are becoming less desirable every year.
And this is for real.
The data here, by the way, is from ACR and from the Girl
Scouts National Report, which is an excellent document, and
from the CRA recent report.
Now, there's a women-in-technology problem in the US.
As you may be aware, we have a very small proportion of women
in technology and engineering fields.
And in fact, in CS they're actually decreasing.
So this is interesting.
This is one of the only areas in engineering where the
amount of participation for women is going down.
What happens, I'll let you read this yourself, but what
happens in narrative form is that some of the analysis
that's been done in this area indicates the following.
The concept is that women are really excited about
technology insofar as technology is a tool for
something that they deeply care about.
Men seem to be satisfied often--
this is rash stereotyping of course--
men seem to be stereotypically happy to invent and create
technology for the sake of the technology itself.
So a lot of the time what will happen is in middle school
women are actually perfectly happy using technology, such
as Instant Messaging and blogging, but by the time you
get to high school they're asking what is this for?
If I'm going to intro to CS, why bother with intro to CS?
What kind of tool is that for the things I care about?
So what we've been working on is an idea that, in fact, what
you need is the ability to create a sense of purpose for
the technology.
And robotics turns out to be a really nice way to do that,
because it's a concrete, grounded object, which you can
actually program to do interesting
physical things with you.
It can interact with your sister, it can interact with
your brother, it can take environmental readings for you
and report on the pollution levels over the
course of the day.
So it's a sensing and actuation device.
And when I say robot, I mean much more than a Mars Rover-like device, but in fact any device that senses from the real world and actuates into the real world again.
So we've done a lot of testing now.
We've got more than 200,000 children using these robots at
museums across the world.
The results of those tests have been somewhat promising.
One of the interesting things we've seen in classwork with these kinds of robots is gender retention.
We're having women come in reporting a statistically significant lack of confidence in using computers, and the rate at which they increase their confidence in using computers is statistically significantly higher than the men's.
Why is this happening you might ask.
Well, what's happening, and this is kind of funny, but in
many of these courses, the women are programming the
robot to do a task, such as navigating the building and
getting somewhere and delivering
a message to somebody.
Well, the way they do it is they talk as a team-- they're good at talking-- they come up with a solution, they run it, and as it fails they run it again, and they look at it and they scratch their heads and psychoanalyze the situation and do better, because they understand how the robot is interacting with this complex real-time world, the real world.
What do the boys do?
Well, they program the robot together, they run it, and as
soon as it turns the wrong direction and crashes into
something, they stop it and change the code in three
places and then run it again.
So they just keep trying to change the code to see what
will happen.
And although this may work for simple computer programming
assignments, it doesn't work in robotics because the system
is too complex, and too deeply embedded in
the physical world.
And so observation and the ability to take on the robot's
point of view turned out to be incredibly important.
So this is not really gender specific.
The point is that you want everybody who doesn't have a
sense of empowerment over technology to feel much more
empowered toward technology.
But having said that, just looking at a statistical analysis of words, when we interview people and ask them what they expect to learn this week in their computer science robotics class, they report programming and mechanism, which makes a lot of sense.
After all, robots are computational systems,
programming, and mechanism.
Week after week, if you ask them what things they learned
that week and code that, what you get is teamwork,
problem-solving, identification with
technology, robot's point of view.
Far, far higher ratings than you
originally would have expected.
This is exciting because this means that there are life-long
learning lessons to be had in interacting with this
particular robotic complex system.
So it's much more than just learning how to use a robot.
It's about learning how a complex system works and
learning how to put yourself in this point of view.
So that's the introduction.
What we're doing with TeRK--
I'm doing good on time, OK--
what we're doing with TeRK is trying to create a new way, throughout the educational pipeline-- middle school, high school, community college, college, and adult-- for people to be creative with embedded technologies.
So we want you to be able to dream up something that you
want a system to do that has sensors and actuators, and to
be able to build it with great ease.
We want it to be really easy for you to build this thing
and realize your vision and then share it with others.
So it's about community building.
But it's also about you being able to do some absurdly cool,
creative things that you can't do today unless you know a whole lot about programming, and about
sensors, and about how to connect something to the
internet, say.
So if you think about the Aristotelian passions, what
we're getting at is the
complement of wonder, ingenuity.
And ingenuity, the root of engineer, is all about the
idea that you start with creativity.
You start with the idea that I want to do something different
than what I can do today.
So first of all, I need to give you an intuition for
what's possible.
What could you do that's creative and different from
what you can do today?
Then I want to be able to actually realize it.
So it has to be easy enough to use that you can actually
build it and then share it.
So what we're expecting is you can create a
different kind of community.
Now, in a way, this is like saying we want to create the
ultimate Lego Mindstorms kit.
Yes?
Lego Mindstorms just announced a new kit, and it has the same
problems the old kit had.
It doesn't have vision, it can't see the world, and it's
not wireless.
So what do we really want to do?
We want to create a Lego Mindstorms kit that you can
build with, but you're not building out of Legos, you're
building out of stuff that can last. So if you want to make
something that lasts all week or all month you can.
And we want it to have vision.
We want it to see the world and react to the world
appropriately.
And we wanted it to not have a ceiling.
We want you to be able to get as sophisticated
as you want on it.
So this is a very, very ambitious goal.
And there's a lot of trust in this work, so we have a very
large consortium of funders and people who
work with us on this.
The first thing we've been doing for a long time now is a
National Survey of Computer Science one courses--
actually, zero and one courses--
across the US, and a survey of all the textbooks that are
used, or rather the top 15 textbooks that are used in
Intro to CS.
Another thing we're doing is developing a new kind of
electronics package.
The project's called TeRK, so naturally the electronics
package is called Qwerk.
The electronics package is actually about to be released,
so that's something I'm going to spend a lot of
time telling you about.
Another thing we're doing is creating a
reference design library.
It's a set of recipes, like a cookbook, on the web.
The recipes are step-by-step instructions for how you can
buy off-the-shelf parts at places like Home Depot, and in
about four to five hours build a robot.
It's a series of recipes that are reference designed, so
that once you've done two or three of these, you can be
very creative and feel empowered about building any
kind of robot you want.
Again, the word robot is used loosely here.
I mean any device that senses from the physical world and
actuates based on that.
And we're doing a lot of curriculum design and then
pilots throughout the educational pipeline across
the country.
So the National Survey, just to show you a feel for the spread, those are the schools across the country that we're surveying.
In terms of recipes, we're doing a lot
of different recipes.
Right now we're doing test builds with students at CMU.
and having them actually build the robots.
And we're having them build this robot right now, which is
the base of a large-fingered robot, that's about a human
height robot.
It's fun to have them build these kinds of robots because
we do interviews with them along the way.
What we're discovering, which is exciting, is that during
this five-hour project of building, using just hand
tools, they're actually going from "I'm not sure I can build this, I've never built something before," to, at about hour 2.5, "this is fun. I wonder if I could build something else that uses this mechanism in thus and such a way."
That's exactly what we want to see, and luckily
we're seeing that.
Really, really simple for those mechanical people
amongst you.
For instance, you want the gear train to be easy, but you
want to teach people something about mechanical elegance and
simplicity.
So it's a friction drive mechanism.
They're motors that have an eccentric axis.
So you just put a hole in a piece of wood, and now by
spinning the motor you can offset the axis by exactly the right amount to get the right thrust against the wheel.
So really simple things like that.
Now let me tell you about the controller, and this is the
latter part of the talk.
I'm trying to finish early so the before 3:50 people can
actually ask some questions.
So the challenge, and this is the hardest part just to be
clear, curriculum is really hard.
Putting this into the pipeline is hard, but the good news is
we have friends across the world that are interested in
this kind of thing if you could pull it off.
But how do you build a really affordable processor?
You want it to be 32-bit, you want it to have floating point so you can do lots of cool things on it, you want it to be wireless right out of the box, and you want it to have a webcam on it.
Obviously, you want digital and analog inputs and outputs.
Now it gets a little tricky.
We want you to be able to take any motor and hook it up to
this system.
So that means it's not good enough for you to have to use
a Pitman motor that's $120 with a special encoder.
We want you to be able to buy a drill from Home Depot, take
the motor out, and just use it as is.
No encoder.
And that's possible using something called back-EMF that
we've been working on.
We want you to be able to control lots of servos, so
that you can make a face if you want to and
have the face smile.
Or a multi-legged insect bot.
And we want this thing to be incredibly power efficient so
that it lasts for 12 to 15 hours on a charge.
And we want it to be about $100 for all the materials.
So that's the goal.
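The back-EMF idea mentioned above can be sketched roughly like this: stop driving the motor for a moment, let the inductive kick decay, and sample the voltage the spinning motor generates across its own terminals, which is roughly proportional to speed; that estimate then closes a simple control loop. The read_motor_voltage() and set_pwm_duty() hooks and the constants below are hypothetical, not the actual Qwerk firmware.

    # Sketch of encoderless speed control via back-EMF sampling.
    # read_motor_voltage() and set_pwm_duty() are hypothetical hardware hooks.
    import time

    KV = 0.01          # assumed volts per RPM for a salvaged drill motor
    KP = 0.002         # proportional gain

    def read_motor_voltage():    # stand-in for an ADC read across the motor terminals
        return 1.2

    def set_pwm_duty(duty):      # stand-in for the PWM output, 0.0 .. 1.0
        pass

    def control_step(target_rpm, duty):
        set_pwm_duty(0.0)        # float the motor briefly
        time.sleep(0.002)        # wait out the inductive kick
        back_emf = read_motor_voltage()
        rpm = back_emf / KV      # speed estimate, no encoder needed
        duty = min(1.0, max(0.0, duty + KP * (target_rpm - rpm)))
        set_pwm_duty(duty)       # resume driving
        return duty, rpm

    duty = 0.5
    for _ in range(10):
        duty, rpm = control_step(target_rpm=150.0, duty=duty)
        time.sleep(0.05)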
One way you could go about this is you
could say OK, cool.
We're going to use sort of a commercial off-the-shelf
approach of a mini ITX, sort of an embedded PC.
The problem with that approach is that's not a robot
controller.
That's just a PC.
So even if you do this, and you have your mini ITX, you
still don't have analog inputs and outputs all done.
You can't control the servos without putting massive
numbers of timers on this thing.
And you certainly don't have the back-EMF motor control, or the power switching regulation system, or the MOSFETs that amplify for the motors.
So you're missing a whole bunch of things that allow
your box to connect to the physical world, and that's the
part we care about.
So first of all, I'm going to argue an ARM is much more
attractive than this mini ITX kind of solution.
It turns out ARM9 cores are very good.
They can run Linux.
The entire ARM9 solution we have is $50.
So that's an ARM9 running Linux with everything else
that you need except for the robot input and output.
And it consumes far less power than the ITX, so the idea of
running all day is entirely possible.
But you still need the IO.
AUDIENCE: [INAUDIBLE].
ILLAH NOURBAKHSH: ARM is the processor in cell phones.
It's a microprocessor.
Yes, Peter?
AUDIENCE: What if you assume that they had a laptop and you
just put WiFi on it [INAUDIBLE]?
ILLAH NOURBAKHSH: We are assuming that
many people have laptops.
But we want the robot to not have their laptop on it.
So unlike Evolution Robotics, we don't want you to risk your laptop on your robot.
We want a robot that's so cheap that it can talk to your
laptop and anybody else's laptop.
In fact, let me spin you the story of how you use this.
I didn't say this part, and this part's kind of important.
So here's how the average person, here's how Alec will
build his first robot.
He will go to the website and he'll
choose amongst the recipes.
He'll order one of these black boxes--
this costs about $200 retail.
So you order the black box, you go to the website, get the
recipe, build it-- takes about four hours.
Once he's built the recipe, he plugs it into the black box.
He turns it on.
Once he turns it on, a green light comes on on the black
box, and at this point he goes to the web browser and goes to
our website at CMU and he can now see his robot, he can
control it, he can write a kind of program for it, or he
can just drive it around teleoperationally.
So the idea is that, in fact, the thing is on the internet.
But using a microprocessor, it's very low power-- it lasts much longer than your laptop will, and you can talk to it from any computer anywhere.
That's the goal.
And it's pretty exciting if you actually pull that off.
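A sketch of what the client side of that workflow could look like: the Qwerk sits on the network, and any computer opens a connection to it, sends drive commands, and asks for camera frames. The address and the JSON message format here are invented for illustration; the real TeRK software defines its own protocol.

    # Sketch of a tiny teleoperation client. The address and the JSON command
    # format are made up for illustration; the real TeRK/Qwerk protocol differs.
    import json
    import socket

    QWERK_ADDR = ("my-qwerk.local", 10101)   # hypothetical robot address

    def send_command(sock, command):
        sock.sendall((json.dumps(command) + "\n").encode("utf-8"))

    with socket.create_connection(QWERK_ADDR, timeout=5) as sock:
        send_command(sock, {"type": "drive", "left": 0.4, "right": 0.4})   # roll forward
        send_command(sock, {"type": "drive", "left": 0.0, "right": 0.0})   # stop
        send_command(sock, {"type": "snapshot"})                           # ask for a camera frame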
Well, let me jump to the solution.
What robots really need, in a way, is something that can do closed-loop motor control, something that can deal with the power management.
And the problem is, when you think about what a robot does and what a robot contains, the mistake people often make is that they end up doing in software a lot of what could be done in hardware, for the robot's sake.
And this is unnecessary and expensive.
So the solution we propose is use an ARM9, which is much
less powerful than the mini ITX, and you add a robot ASIC.
Now, what's a robot ASIC?
What's a great ASIC for a robot?
The answer, the thing that is interesting about robotics, is
that we can use something that is really customizable as you
configure the robot.
So you don't want an ASIC for a standard robot, but rather something you can customize to any robot that you build.
So our solution's an FPGA.
So our proposal is, in fact, if you can have an FPGA that
implements all the robot-specific options, and in
fact, is the thing that you, in real time, can reconfigure
as you need to, as you change the number of motors on your
robot, say, then you have incredible flexibility.
And it's about $5 in quantity, so very, very cheap.
In fact, better than that, Xilinx has
donated them all to me.
So I have a lot of Xilinx FPGAs.
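One way to picture "reconfigure the robot ASIC as the robot changes": keep a small library of pre-built FPGA bitstreams, one per mix of motors, servos, and analog lines, and load whichever one matches the recipe you just built. Everything in this sketch, including the file names and the load_bitstream() hook, is hypothetical rather than the actual Qwerk design.

    # Sketch: pick and load an FPGA configuration matching the robot you just built.
    # The bitstream files and the load_bitstream() hook are hypothetical.

    CONFIGS = {
        # (dc_motors, servos): bitstream with that mix of PWM, back-EMF and servo blocks
        (2, 0):  "qwerk_2motor.bit",
        (2, 16): "qwerk_2motor_16servo.bit",
        (4, 8):  "qwerk_4motor_8servo.bit",
    }

    def load_bitstream(path):
        print("loading", path)        # stand-in for the real FPGA programming call

    def configure_for_robot(dc_motors, servos):
        key = (dc_motors, servos)
        if key not in CONFIGS:
            raise ValueError(f"no prebuilt configuration for {dc_motors} motors / {servos} servos")
        load_bitstream(CONFIGS[key])

    configure_for_robot(dc_motors=2, servos=16)   # e.g. a rover with a smiling face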
And just to show you an example, this is something
that Rich LeGrand did at Charmed Lab with my help.
And this is the launching off, kind of what we did.
This was the Game Boy robot controller kit that had a Game
Boy processor, an XBC, and you plugged in an FPGA
into the Game Boy.
The FPGA took care of all the robot IO.
So it had the watchdog timers, it did the servo controllers, and it had the UARTs, the Bluetooth interface, all those good things, all on the FPGA.
What we're doing is this.
And the good news I can report today is this is actually
working now.
So I'll show you how much of it is working.
But it's an ARM9 with floating point.
It's got the 300,000-gate FPGA from Xilinx.
It's got USB 2.0.
WiFi and webcam is a kind of a story that's worth telling.
WiFi is easy.
There's several limit patches that make WiFi work just fine
on an ARM9.
Webcam is a tough one.
If we're going to put recipes on the web, we want the recipe
to say you can use any of these five webcams. But if you look at the webcams on SourceForge for which there are Linux patches, they're all webcams that are end of life.
They've been around a while, so people could
actually hack them.
So we talked to Logitech, and we got from Logitech copies of
the cameras they haven't sold yet.
So these are next year's cameras, Logitech--
well, this year now, 2006.
So they'll be on the market in a couple of months.
So we got these from Logitech last month.
And then we found, among the hackers, a very nice guy named Michel Xhaard in France.
So he agreed to roll the patch for us for these webcams. So
we gave him the webcams, gave him the processor board, and
he rolled the patch for us, which is really kind of a neat
story of collaboration across the ocean.
And there's lots of other good stuff on this, it
does MP3, of course--
gotta do that these days.
It's got 12-bit analog inputs, it controls 16 servos, it has
a switching power supply into which you can plug in any
number of Radio Shack 7.2-volt battery packs.
It switches between them appropriately for the motors
and everything else.
And right now we're at about $112 building materials.
So that's exciting.
And that includes enclosure and manufacture.
It's got temperature sensing, and it's also got current
sensing on every line through the FPGA.
So we can tell exactly how much current is being drawn,
what motor, which is really fun.
So as of last week, it's running Linux 2.6.8.
We've got back-emf working with 12 and 24-volt motors--
I'll be right with you--
and we got wireless up and running.
So we're very close.
And that's kind of an exciting point, because if you can get
this thing--
If it's really 110 [? MbM ?], and if Rich agrees to sell it
for about 2x, so if we can get this out the door for let's
say $250, then that means you're buying something for $250 that, once our firmware is done, lets you literally build various robots, plug in all the wires, turn it on, and then go to a wireless connection and talk to it.
And what's more, it can do vision onboard, so you can
really have kind of an intelligent robot.
Question.
AUDIENCE: [INAUDIBLE].
ILLAH NOURBAKHSH: You can do it right from the ARM9.
It can be configured right there.
So here's the next steps on TeRK.
There's going to be a tech report this month on the Intro
to CS curriculum evaluation.
And the good news, by the way, on the textbooks and on the
teachers that we've interviewed is that they all
seem to agree and recognize that Intro to CS sucks.
And that they want a change, and that they change textbooks
every couple of years, so changing isn't particularly
hard bureaucratically speaking.
In fact, there's an interesting thing that they've pointed out themselves.
They say well, our exercises are things like write the
Fibonacci sequence.
Write "99 Bottles of Beer" on the wall.
And our students don't seem to connect to these,
especially the women.
And we go, huh, that's exactly what the Girl
Scout Report said.
Intro to CS assignments, which are so incredibly abstract, are not that exciting for people.
And I find it stunning that so many of the assignments are like this in so many of the textbooks I see when, in fact, we have the internet, we have sound, we have light-- we have so many interesting ways to imagine writing Intro to CS assignments that connect your process to the outside world and read interesting values.
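As one concrete flavor of what a less abstract exercise could look like, imagine replacing the Fibonacci assignment with a loop that reads a real value from the outside world and reacts to it. The read_light_level() call below is a hypothetical stand-in for whatever sensor or web source a class has on hand.

    # Sketch of a "connected" intro assignment: poll a real-world reading and react.
    # read_light_level() is a hypothetical hook for whatever sensor the class uses.
    import random
    import time

    def read_light_level():
        return random.uniform(0.0, 1.0)   # replace with a real sensor read

    for minute in range(5):
        level = read_light_level()
        if level < 0.2:
            print(f"minute {minute}: it's dark, turn the desk lamp on")
        else:
            print(f"minute {minute}: light level is {level:.2f}")
        time.sleep(1)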
We're doing a Chatham College deployment this month.
We'll be at 60.
We're running a workshop and a birds-of-a-feather seminar there.
So we'll be introducing this to a whole lot of computer
science professionals there.
The public release is in March.
So hopefully in March if you go to the TeRK website, you'll
be able to download code and download a recipe and go to
your local Home Depot and build one.
Home Depot is a group we're talking to right now because
they want to be one of the sponsors too.
They'd like to make kiosks in Home Depot where they
accumulate all the various parts from Home Depot that you
need to build one of several robots.
So that'll be kind of fun because you can go there and
buy all the parts and Bob's your uncle.
We're going to also be running the Grace
Hopper Saturday Workshop.
So if you're going to the [? NB ?]
Support Foundation's Grace Hopper series, we'll be there
and we'll be doing a full clinic on this and giving away
TeRKs there as well.
And then we'll be launching at five schools in September, and
we already are running curriculum at Columbia, CMU
and Verona.
Carl DiSalvo is an interesting fellow.
He's a robotics and art guy.
He's going to be going to art workshops.
Artists, as you may know, are serious early adopters of technology.
And so the art community's really excited about this kind of thing because you can imagine what they can do with some of
these internet-connected, and control [UNINTELLIGIBLE].
And last but not least, I'll tell you very briefly about
these two things.
They're in brackets because we're still in the middle of
getting funding for those.
The good news is the rest will happen, so we have enough
money for it.
But those two are interesting.
If you look at what--
60 seconds--
if you look at what girls undergo in middle school and
high school.
In middle school it's about identity of self.
So they're really interested in the question of who am I.
And in fact, journal writing and blogging are something that people put a lot of emphasis on.
If you look at high school, it's my identity with respect to my community.
How can I be a member of a friend group, and how can my
friend group have impact on something I care about, like
the environment, like politics.
So what we're doing for middle school girls in 10 seconds is
we're creating a robot diary system.
So it's a set of recipes from Home Depot that build a
robotic flower.
As you write your blog or journal, the emotional content
causes your flower to do various kinds of choreographed
dances, play music and do light shows.
And you can share this with your friends.
You design all the choreography and you design
how it's shared with your friends, so that they can see
how you feel today.
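A toy sketch of the mapping the robot diary implies: score the emotional tone of a journal entry with a simple word list, then turn that score into a choreography choice for the flower. The word lists, the choreography names, and the output format are all invented for illustration.

    # Sketch: map a journal entry's emotional tone to a flower "dance".
    # The word lists and the choreography names are illustrative only.
    HAPPY = {"great", "fun", "friends", "love", "excited"}
    SAD = {"tired", "alone", "worried", "sad", "angry"}

    def choreography_for(entry):
        words = set(entry.lower().split())
        score = len(words & HAPPY) - len(words & SAD)
        if score > 0:
            return {"dance": "sunrise_spin", "lights": "warm", "music": "upbeat"}
        if score < 0:
            return {"dance": "slow_sway", "lights": "blue", "music": "quiet"}
        return {"dance": "gentle_bob", "lights": "soft", "music": "none"}

    print(choreography_for("Had so much fun with friends at practice today"))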
For high school we're doing a sensor net program.
So the same black box becomes a sensor that reads carbon
monoxide, ozone, and a number of other pollutants.
And you connect these in a network in a city, and you're
challenged not only to fuse the data together, but to
create public pieces of art that are kinetic sculptures
that demonstrate to the public what's going on with
environmental quality in their area.
So that's kind of a fun idea, too.
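A minimal sketch of the fusion step for that sensor net: combine readings from several boxes in a neighborhood into one air-quality summary that a kinetic sculpture could display. The node readings and the thresholds are made up.

    # Sketch: fuse readings from several sensor nodes into one neighborhood summary.
    # Node readings and thresholds are illustrative.
    readings = {
        "node_park":   {"co_ppm": 0.8, "ozone_ppb": 31},
        "node_school": {"co_ppm": 1.4, "ozone_ppb": 45},
        "node_bridge": {"co_ppm": 2.1, "ozone_ppb": 52},
    }

    def fuse(readings):
        n = len(readings)
        co = sum(r["co_ppm"] for r in readings.values()) / n
        ozone = sum(r["ozone_ppb"] for r in readings.values()) / n
        level = "good" if co < 1.0 and ozone < 40 else "moderate" if co < 2.0 else "poor"
        return {"co_ppm": round(co, 2), "ozone_ppb": round(ozone, 1), "level": level}

    print(fuse(readings))   # drives what the public sculpture shows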
So the last slide is the thank you slide.
All the real work on Global Connection is done by Randy
Sargent and company, and all the real work on TeRK is done by Emily Hamner and company.
And we have a lot of collaborators right now around
the country, and the funding, as you can see, comes from
diverse sources.
And keeps growing, which is great for us.
OK.
Thank you for your attention.
You have two minutes before 3:50 and you're welcome to ask
questions now.
[APPLAUSE]
ILLAH NOURBAKHSH: Thank you.
I forgot to tell you one very important thing.
March is when the TeRK gets released.
Before March, these are here now.
These are personal exploration rovers that are programmable
in the same way that TeRK will be programmable.
So if you want to program a robot for any reason
whatsoever, these have cameras on them,
they have range finders.
They move four times faster than the Mars rovers on Mars.
So they're even faster than those robots.
These are PER Turbos, technically speaking.
We provided them here to Dan Clancy, so he has
access to them here.
And if you want to program a robot, just email me and we'll
get you set up with one of these and you can program
away, you can take it home and program it all you want.
And it has wireless on board.
Everything.
So you're free to use these.
They're on loan to Google.
Yes.
AUDIENCE: You say it will be available in March.
What's going to be available?
ILLAH NOURBAKHSH: What will be available in March is you'll
be able to buy the controller from charmedlabs.com, and
probably a couple of other companies.
You will be able to go and download recipes that show you
what parts to buy to build robots, and there'll be at
least three recipes up.
Robotic flower, which is kind of a desk/art piece.
The Qwerkbot, which is a very simple robot enclosure that
fits onto the processor itself.
So the processor becomes a desktop robot with a
camera on top of it.
And Shenbot, which is a tallish robot with a prismatic
arm that can push buttons in an elevator so you can use an
elevator when you're not there.
A very useful thing to do, right?
With a large, large two-wheel platform on the bottom.
That's more of a human walking speed style robot.
So our litmus test was we want a robot that's good enough
that it can use an elevator, because that requires simple
manipulation, vision, and if you can teleoperate it and do
that, that's kind of fun because you can travel around
and visit people when you're not there.
AUDIENCE: How much would something like [INAUDIBLE]
to build.
ILLAH NOURBAKHSH: I can build 40 of these
at a time for $160,000.
So these are much more expensive.
$2,000 a piece, $4,000 a piece.
This is the problem with robotics.
In general, robots that are sophisticated
cost a whole lot.
The amazing thing is the Qwerkbot, parts will cost you
$80 plus the board, which will cost you about $250.
And yet it's got vision and it's wirelessly connected to
the outside world.
So it's because of the ARM9 and because of the FPGA that
we're [UNINTELLIGIBLE] such a price.
These have a Stargate processor in them that Intel
gave us for free.
So Intel gave us 160 Stargate processors.
That's why these exist. But that Stargate processor is-- it's an ARM, it's an ARM core, but that Stargate board alone, which you can buy from Crossbow, costs I think $900, and that's without the power modulation.
And of course, that's just the board.
It doesn't have the robot stuff on it.
It doesn't have servo control or motor control.
Yes.
AUDIENCE: So you mentioned [INAUDIBLE]
earlier.
How compatible is the [UNINTELLIGIBLE]?
ILLAH NOURBAKHSH: They've claimed that this time around
they're not going to be secret.
They're going to be open, right?
If they give us an open architecture, I will make this
compatible right away because it's a no brainer.
That way people can also build out of Lego and control it
from the Qwerk controller instead of the Lego
Mindstorm's controller.
AUDIENCE: Their new release [INAUDIBLE].
ILLAH NOURBAKHSH: You basically put your Lego robot
on the web with camera.
AUDIENCE: So you mentioned that the [INAUDIBLE]?
ILLAH NOURBAKHSH: There's one good one we found.
There's actually two good ones.
One is called something like hands-on approach to CS-01.
It's by two researchers at Sun who wrote it together.
They write at least a million books together.
This one's the best one.
One thing we're going to do is push them and see if they're willing to run a new version with TeRK in their book or [INAUDIBLE].
But even if we don't do that, what we're going to be doing
is publishing on the web a series of exercises that go
with various CS-01 books.
So we've looked at the modules that exist. It turns out
intellectually the presentation material's pretty
consistent.
So we can very easily come in in a surgical way and propose
certain exercises with TeRK along the way that we think
would make it a much more exciting class,
especially for women.
AUDIENCE: [INAUDIBLE].
ILLAH NOURBAKHSH: We've considered it.
We want to start with something surgical, and then
if it catches on, we get good evaluation results of our
educational work, then yeah, we want to do the whole thing.
That would be ideal.
But we should find a CS-01 author to do it though.
That's the best way to go.
Thank you for your attention.