I want to talk a little bit about robots and human behavior:
the major difference between cybernated organisms
and human systems.
A lot of people think that programming
is exactly the same in people and robotics.
It is not.
The major difference is that you can design a robot
to walk over, pick up an object and put it in another place
but before the robot moves
if you move the object to the place the robot was going to put it
the robot will still walk over and grab at nothing in particular.
Do you understand that? That's programmed.
The difference between human systems and robots
is that human behavior is not linear.
That means that the robot can do certain things that you program into it
and if you look at that under a microscope
you can see magnetic domains that will make the robot walk over
to a given area and sit in a chair.
If you pull the chair away
the robot will walk over and sit on nothing and fall over.
That's programmed. The human system differs considerably.
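The chair example above can be sketched in code. This is an illustrative sketch only (the function and world names are my own, not from the talk): a "programmed" robot executes a fixed command sequence and never checks whether the world still matches its program.

```python
# Illustrative sketch (hypothetical names): a programmed robot runs a fixed
# command list; nothing in the loop ever inspects the state of the world.

def run_programmed_robot(world):
    """Execute a hard-wired sequence regardless of what 'world' contains."""
    log = []
    for command in ["walk_to_chair", "sit"]:
        log.append(command)  # the sequence never branches on the world
    # The robot "sits" whether or not the chair is still there.
    outcome = "seated" if world.get("chair_present") else "fell_over"
    return log, outcome

print(run_programmed_robot({"chair_present": True}))
print(run_programmed_robot({"chair_present": False}))
```

Pulling the chair away changes the outcome but not the behavior: the same two commands run either way, which is the point being made.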
When you work on a human being
or a chimpanzee or any animal....
(I'll work with the chimp this time)
Put the chimp in a big box
and in that box are rods sticking out at different lengths
with a cue (a circle, a triangle, different patterns)
and you don't have to teach it anything. It'll walk in
and sooner or later, it'll touch those things.
When it touches one of them
water will come forth, touches another one, food.
It touches another one and a soft bed comes out of the wall.
If the animal is put there for a long enough time
it will use those rods appropriately.
Any animal has a range of behavior.
When put in an environment, it doesn't respond like a robot.
It looks at the environment and seeks reinforcement: food.
If the leaves are circular where the food is
it will go to the circular leaves.
That's called 'associative memory'.
Programmed computers have no associative memory.
They follow a pattern.
If you look at a phonograph record
with a microscope, you'll see zigzags
cut in the vinyl record.
Those zigzags are representations of the voice of a person.
While the record is playing
if it's, say, Caruso singing
it can't deviate from those patterns.
Robots that are programmed can't deviate from those programs
unless you build an alternative path alongside them.
An animal is different: show it variations, say a slight ellipse instead of a circle.
The animal will touch that thing
thinking it's a circle, because it's not that discriminating,
and it gets burned slightly
so it will never touch that one again.
That's what the animal has that the robot doesn't have.
If the robot touches something and it doesn't reinforce it...
How do you reinforce a robot?
If he gets stung, that wouldn't bother him at all
but a robot can learn to respond to different figures.
When he sees a triangle and presses a button, he gets lubricated
but he doesn't feel good when he gets lubricated
so there's no reason to retain that action.
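The contrast being drawn here, that an action is only retained when it is followed by a felt reward, can be sketched as a minimal operant-conditioning loop. This is a hypothetical illustration (the lever names and the `learn` function are mine, not from the talk):

```python
# Minimal sketch of reward-driven retention (hypothetical names): an action's
# tendency to recur grows only when it is reinforced; an unrewarded action
# changes nothing -- like lubricating a robot that "doesn't feel good."
import random

def learn(levers, rewards, trials=200, seed=0):
    """Tally each lever's tendency to be pressed when only rewarded
    presses raise the likelihood of pressing again."""
    rng = random.Random(seed)
    tendency = {lever: 1.0 for lever in levers}  # start with no preference
    for _ in range(trials):
        pick = rng.choices(levers, weights=[tendency[l] for l in levers])[0]
        if rewards.get(pick, 0) > 0:
            tendency[pick] += 1.0  # reinforced: more likely next time
        # an unrewarded press leaves its tendency untouched
    return tendency

result = learn(["water", "food", "nothing"], {"water": 1, "food": 1})
# The unrewarded lever's tendency stays at its starting value.
```

After enough trials the rewarded levers dominate, while the lever that delivers nothing is pressed no more often than at the start, which is the "no reason to retain that action" point.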
It's only when a human touches something
and they feel good touching it, that they repeat it.
A robot cannot touch something and say "Hey, that feels good."
They can reach out, do that and pull back
but they can't make anything of it. Do you understand that?
The reason I say "Do you understand that?"
is because there's so much conflict today
about robots and people: Will robots take over?
Not if they're programmed not to take over.
If they're programmed to take over, they can only shoot
a guy in a certain uniform.
If the guy stays put in a given area
the robot can walk over, unless you equip the robot
with eyes that follow anything that moves and shoot it.
Is a robot an assassin? No, it's programmed to shoot.
That's quite different. Human beings have,
some people believe, 10 to 15 billion neurons.
A robot has hundreds of thousands of associative sets, not billions.
If you learn that a cup gives you water
anything that looks like a cup might hold water.
We can deviate from our programming.
Our programming appears rigid, but alongside of it
is associative memory: I touched that and I felt pain.
I touched the other thing and I didn't feel pain. I got something.
A robot never looks at a thing and says "That's interesting!"
If you were to float in midair in front of a robot
it wouldn't say "Now that is interesting!"
It can't do that, it can only do what it's programmed to do.
A man can see things
and be programmed and compare it to something else.
Is that very clear, or do you want to question anything there?
That's a major difference between programmed behavior
and human programming.
Humans have a lot of associations prior to any given program
so if something reminds him of another association
he can deviate.
That's why people walk out of here when I speak
with different interpretations.
(Roxanne) Kurzweil talks about using nanotechnology and
implanting something in the head
like a second brain that may be so fast
that it could take over the other brain in the person.
This is something he's raised. - You can do that
but it doesn't give them leverage to wonder about that.
I've never seen that happen. A man could say that.
A man could look at an event, and say
"That's strange, the way that paper holds up that speaker."
A robot does not do that. It looks at the speaker.
It doesn't even look at it and say "That looks like a speaker."
Unless you put a speaker in front of the robot and say
"That's a speaker" so when its eye sees it, it says "That's a speaker."
When you turn it sideways, it doesn't know what that is.
When you turn it sideways and say "That's also a speaker"
and you rotate the speaker in many positions
so the robot has associations with the shape
in different positions, he can call it a speaker.
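The point about the speaker, that the robot only "knows" the exact labeled views it was shown, can be sketched as a bare lookup table. This is an illustrative sketch (the view tuples and function names are hypothetical):

```python
# Sketch of view-by-view labeling (hypothetical data): the robot recognizes
# only the exact views it was taught; an unseen orientation is unknown.

known_views = {}  # maps a sensed view to a taught label

def teach(view, label):
    known_views[view] = label

def identify(view):
    # No generalization: either this exact view was taught, or it's unknown.
    return known_views.get(view, "unknown")

teach(("speaker", "front"), "speaker")
teach(("speaker", "side"), "speaker")

print(identify(("speaker", "front")))   # taught, so recognized
print(identify(("speaker", "tilted")))  # never shown this view
```

Every new orientation has to be taught explicitly; nothing in the table lets the robot say "that looks like a speaker" about a view it was never given.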
You can teach it to call an orange an orange, but if you cut it in half
it can't call it an orange.
It doesn't say "It looks like half an orange." Do you understand that?
(Roxanne) This thing that Kurzweil is putting out like 'singularity'
is when you start to implant things in people's heads
that have so much more calculating ability, or...
- If it is not connected to the other neurons
it won't do anything. -It couldn't take over? -No
unless it's connected.
(Joel) He's proposing that it is connected somehow.
There is some interface between... -If there is an interface
organic neurons respond at a certain rate of speed.
Anything beyond that rate, they can't respond to.
Electronic systems travel almost at the speed of light.
Neural associations are relatively slow.
If you try to speed up digestion of food in a human
the digestive acids flow at a certain rate.
If you were to triple the rate, it might digest
portions of the intestines.
Do you understand what I mean?
For instance, let's take a bear and put it in a room this size
with a bunch of objects sticking out (they have to stick out)
40 of them. A bear might learn how to use
12 of them but not 40.
It won't remember 40. It doesn't have the neuronal
capacity to remember that many. A bear could remember
because when a bear walks through an environment: "This bush has berries."
He remembers where the bush is
"This area has animals that I can eat."
A bear can build up maybe thousands of associations
but first you have to study the range of the animal.
How many levers an animal can remember
will tell you what its outside response will be.
If you find a bear that can learn to work 47 levers
(a little beyond that, or a little less)
then you know what the bear can respond to in the environment:
roughly 40 different systems.
A human can generate associations
with thousands of things in the environment.
If a human learns to eat certain food
and he has to climb a tree to get it
if you put that food at the base of the tree he won't climb the tree.
A robot will. Do you understand that?
If you program a robot to climb a tree to get the apple
it'll do that.
If you put the apple on the ground, does the robot go "Ha!
That simplifies things"?
No, not unless you build something special into the robot
to handle unforeseen variables
and that's what people don't know how to do yet.
They don't know how to program a robot to say "What have we here?"
because the words "What have we here?" don't mean anything to a robot.
It means something to a human being.
(Roxanne) The roboticists and Kurzweil are bringing up
when the robot does become connected to the environment
and does have enough information that it would
surpass and overtake people.
- No, a robot does not
ask questions. A robot does not say
"I've been here before. I've seen that before."
They don't have enough neurons to build
all kinds of associations.
(Roxanne) Do you think it's possible that they can do that
if they use biological.... -If it's programmed, no.
But if a robot is self-programming
then the reasons for a robot's actions
are very different from those of human systems.
When a human puts something into his mouth, it tastes good
while if a robot does it, nothing. There's no reward.
What's a reward to a robot?
Sitting down, does it say "Whew, I'm so tired,
and now I feel better"? He doesn't feel.
When he sits down, he doesn't say
"It's good to have a chair in my area."
He doesn't give a *** about those things. If the light gets so bright
that the eyes of the robot turn off, he says "I can't see"
(if you wire him that way) but he doesn't turn down the light
unless you wire it that way.
He turns down the light when it gets bright
not because it's bright, but
because he has senses that turn off the light.
Do you understand that difference?
(Roxanne) You were explaining this last night in terms of
humans have to have experience to react...
Yes, that means a robot doesn't seek experience.
A robot doesn't want to know why some tires wear out faster than others.
He doesn't take a microscope and look at the rubber.
A robot is not equipped that way.
They don't have pleasure and pain.
If they had pleasure and pain
they would have preferences.
Do you understand? If a robot cuts
wood with a rotary saw:
All the wood is shoved in there automatically; he cuts it.
If you put a human in there, he'll cut him too.
He doesn't think "That's a person, I don't want to cut that."
The robot cannot do anything unless it's programmed to do it.
It could be programmed to cut wood but it will cut anything else
shoved in there. If you interfere with its programming
it doesn't say "Wait a while, you're interfering with the programming!"
If radar picks up fog in San Francisco
the airplane might fly above the weather.
The airplane moves up based on what's out there.
If it's fog, it moves up, but it doesn't move up to avoid the fog!
That's human projection.
A robot moves up because there was fog ahead.
Its sensors bounce a signal back to something
that controls the elevators and makes it move up.
If it's raining, the robot may open an umbrella
above itself. But when the raindrops hit the umbrella
they make contact with two terminals
so there's a flow across. That opens the umbrella.
But the robot says "It's raining. I'm going to get wet. I'll open it."
No, none of that. Do you understand that?
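The umbrella behavior described above is a wired reflex: raindrops bridge two terminals, current flows, and that flow triggers the actuator. A minimal sketch of that reflex, with hypothetical names:

```python
# Hypothetical sketch of the reflex described above: the controller reacts
# only to whether current flows across the two terminals -- it has no
# concept of "rain" or "getting wet."

def umbrella_controller(terminals_bridged: bool) -> str:
    if terminals_bridged:
        return "open_umbrella"  # current flow fires the actuator
    return "idle"

print(umbrella_controller(True))   # open_umbrella
print(umbrella_controller(False))  # idle
```

The whole behavior reduces to one sensor reading driving one actuator; nowhere is there a representation of rain, wetness, or a reason to stay dry.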
I'm talking about robotics today.
When people say "Do you think robots will take over?"
There's no basis for it if they're programmed a certain way.