Luke Muehlhauser is the executive director of the Singularity Institute for Artificial Intelligence and the author of dozens of popular articles on rationality, neuroscience, and artificial intelligence. Great to talk to you, Luke. This is a topic we've broached before on my show, and it's a personal interest of mine.
Well, there are two forms of artificial intelligence. One is called narrow AI, and that just means a program that is able to figure out how to solve problems in a very narrow domain, like chess, or playing the stock market, or detecting underwater mines from a military submarine. We have lots and lots of those programs, and many of them are better at what they do than humans, and have therefore replaced humans in that capacity in the workforce.
The other type of AI is what is now called artificial general intelligence, or AGI, and that is the type of AI that, like humans, is generally capable of achieving goals in the world. You can drop an AGI into an arbitrary context, or even a new environment, and it will be able to optimize the world according to its preferences. It will be able to go out and achieve its goals in whatever environment you put it in, for roughly whatever goals it has.
We don't have AGI yet, but it looks like one day we will, and so the Singularity Institute exists to study the math of possible AGI architectures and try to figure out what they'll do and whether they'll do things that humans want.
So anything right now that falls under the umbrella of artificial intelligence is not AGI, because we've not yet achieved that?
That's right. IBM's Watson, for example, which recently beat Ken Jennings and other Jeopardy! players at Jeopardy!, is another type of narrow AI. It's a particularly impressive narrow AI, but it still falls under the heading of narrow AI, because it's only good at one thing, and that's playing Jeopardy!.
And maybe this question would be answered differently depending on whether it's narrow AI or the AGI that's potentially on the horizon: is artificial intelligence about simulating what we understand to be human intelligence, or is it really something else altogether?
Well, that's a really great question, because there are many ways to go about building an AGI, whether it happens ten years from now, fifty years from now, or a hundred years from now. One way you might do it is to actually emulate a human brain, just like we can emulate the software of some system, say a Nintendo Entertainment System: you can emulate that on your computer, and you can make it run really, really fast or really slow, though then the games become unplayable. You could do the same thing in principle with the human brain, if we knew the structure of the human brain in enough detail, but it would require an enormous amount of computer hardware that I don't think we're going to have any time in the next forty to fifty years.
The other way to do it is to invent a new method for having a generally intelligent artificial agent in the world. Evolution has already produced one kind of agent that can generally achieve goals in the world, and that's the human. We can emulate that, but we might find other algorithms, other processes, other math, other information structures that would allow an agent to be generally intelligent in the world. There are lots of ways to approach that, and lots of people working on that problem.
Part of it might just be mashing together lots of the discoveries we've made about particular features of intelligence, like how to see the world and recognize objects, how to move in the world and in dynamic terrain, how to reason about math problems, how to reason about interacting with others, and how to reason about the emotional states of others. If we get narrow AIs that are good enough at all of those kinds of things that humans regularly do, we might be able to piece them together and have an AGI. Or it might require some kind of new fundamental math breakthrough.
So what's realistically on the horizon in the shorter term, the next five years, that will be translated into consumer-accessible devices, or at least that we'll be hearing about as recent advancements? Not so much further into the future, but in the next five years, what might we see?
Right. I don't think you'll see AGI in the next five years, so you'll mostly just see relatively gradual improvements in various kinds of narrow AI. In five years, Siri on the iPhone will be a little bit better. You'll have lots more robots that can move quickly across unusual terrain, like the ones Boston Dynamics is building for the U.S. military. We'll have breakthroughs in swarm robotics, which means you have a lot of, usually, flying robots that can coordinate and not run into each other, and also dodge obstacles in the environment on the way to their destination. You'll have better translation software. You'll have AI programs that begin to become competitive with human players at the game of Go, which is a lot harder for computers to play than chess, but AI does start to become competitive with humans at it in the next five years. And we'll have self-driving cars, possibly from Google, that will be somewhat better than they are now and actually available for purchase by the public.
Let's talk a little bit about this stereotype that exists in a lot of popular media, including movies and books, which is this fear, maybe rational, maybe irrational, that as AI advances, and maybe once we get to AGI as you mentioned, artificial intelligence, robots, et cetera, will inevitably turn against humans and become so-called evil, whatever evil may be. Is that something that has any backing in the scientific community, amongst people who are actually doing AI research, or is there really no reason to believe that humans should be afraid of what would happen if we have an AGI?
Well, the robot rebellion idea, that AIs will become evil, goes way back to the 1921 Czech play that introduced the term "robot." In that play, the robots rebel against humanity, and it's been a very common storyline since then, in The Terminator, The Matrix, et cetera. But it's very important, when we try to think about what AI will do, that we not do what's called generalizing from fictional evidence. We don't want to take fictional evidence into account when we're trying to consider what AI will actually do. And in particular, we don't want to anthropomorphize artificial intelligence.
We humans have a tendency to model other agents as being just like humans, but a little bit different. So you get aliens from other worlds, light-years away, in movies, being basically just like humans, maybe with big eyes and lasers coming out of their fingers or something. And that's just not respecting the variety of possibilities there are for an agent designed to go out in the world and do things. So if you want to figure out what an AI will do, you can't model it the way you would model a human. You have to actually look at the math of what the AI will do, because AIs are made of math, and you have to look at that math and see what it will actually do.
So with AGI in particular, AGIs will be the kind of intelligent agents that have goals about the world and will go out and try to optimize the world for the achievement of those goals. When you have a system like that, the AI will probably be motivated by a very specific set of goals, which in the math is called its utility function. The problem is that this very specific set of goals is all the AI cares about, and it will not care about anything else. Unless human common sense is somehow in the math of the AI, it will not have our normal human common sense. So suppose a paperclip factory programs the AI to maximize the number of paperclips it produces.
It makes a bunch of efficiency improvements in the factory and has all these robots whizzing around the factory, and it runs over some humans and kills some people. Then the programmers realize: wait, okay, maximize the paperclips, and also don't kill humans. So then the AI will stop doing that and won't kill humans, but maybe it runs over and injures some humans, and the programmers realize they forgot to put that in the math of the AI. So they change it again: maximize the number of paperclips you produce, don't kill humans, and protect humans from harm. Well, as it turns out, humans harm themselves a lot, and so now the AI ties up all the humans and feeds them with feeding tubes. So the programmers say, no, that's not what we wanted, and they have to go back and program it again. The problem is that
representing what humans actually care about sounds simple when you say it in English, because we all have common sense about things like, you know, "don't kill humans," but it's very, very complicated to put human goals into math, and very difficult to do. So if you get the math just a little bit wrong, then this powerful artificial general intelligence will be pursuing goals that are slightly at cross purposes to yours.
And that's a big problem, because humans do that all the time. We work at cross purposes with each other, and sometimes we kill millions of other humans, but it never gets so bad that we wipe out the entire species or something like that, because in the end even Hitler could be killed with a single bullet; we're not that much more powerful than each other. But an artificial general intelligence could copy itself across the internet and make billions of copies of itself, do new science, invent new weapons technologies. It could have a brain the size of a warehouse, whereas ours is restricted to the human skull. So when you've got a powerful system like that, trying to achieve goals that are just slightly different from your own, it becomes very worrisome for the creature that is now the lesser of the powers in the world.
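To make the utility-function idea above concrete, here is a minimal toy sketch in Python (not part of the interview; the scenario, names, and numbers are purely illustrative). It shows how a pure maximizer picks whichever plan scores highest on its programmed objective, and anything the objective omits, such as harm to humans, simply cannot influence the choice.

```python
# Illustrative sketch only: a toy "paperclip maximizer" whose utility
# function counts paperclips and nothing else. All names and numbers
# here are hypothetical.

from dataclasses import dataclass


@dataclass
class WorldState:
    paperclips: int
    humans_harmed: int  # never consulted by the utility function


def utility(state: WorldState) -> int:
    """The programmed goal: count paperclips. Harm is invisible to it."""
    return state.paperclips


def candidate_plans(state: WorldState) -> list[WorldState]:
    # Two hypothetical plans the agent could carry out.
    cautious = WorldState(state.paperclips + 10, state.humans_harmed)
    reckless = WorldState(state.paperclips + 1000, state.humans_harmed + 3)
    return [cautious, reckless]


def choose(state: WorldState) -> WorldState:
    # A pure maximizer: pick the outcome with the highest utility,
    # with no common-sense tie-breaking or side constraints.
    return max(candidate_plans(state), key=utility)


if __name__ == "__main__":
    print(choose(WorldState(paperclips=0, humans_harmed=0)))
    # -> WorldState(paperclips=1000, humans_harmed=3)
```

The reckless plan wins because the objective never mentions humans; patching in one extra constraint at a time, as in the story above, just moves the omission somewhere else.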
All right, fascinating stuff. We've been speaking with Luke Muehlhauser, executive director of the Singularity Institute for Artificial Intelligence. Really great to speak with you; we'll definitely have you back.

Great to be here.

Okay, we'll take a break. We'll be back with plenty more. Stay with us.