>> Good afternoon! Distinguished guests, college faculty, staff,
and students and friends. It's a pleasure to welcome you to the
Third Annual Eugene H. Fram Chair in Applied Critical
Thinking lecture. And this is actually becoming a significant
fall tradition here, and I think it's a wonderful way for us to
start the academic year. As you might recall, the Fram Chair in
Applied Critical Thinking was established in 2012 by an RIT
alumnus, who wishes to remain anonymous, but he established it
as a way to honor his former teacher, Dr. Eugene Fram, the J.
Warren McClure Professor Emeritus of Marketing in the
Saunders College of Business, where he served on our faculty
for, get this, 51 years. And he's still here to tell the tale.
Gene has traveled all the way from California to be with us
today. Gene, would you please stand to be recognized.
[ Applause ]
I see you out there somewhere.
[ Applause ]
Critical thinking remains one of the
most important and widely discussed
topics in contemporary higher education. And frankly, I
believe it is essential if our democratic society is to
flourish, if our students are to become actively engaged global
citizens, and if they are to become life-long learners; and
frankly, if they don't do those things, they won't live truly
meaningful lives. In a deeper and more profound sense,
critical thinking and critical reflection can help them to
understand themselves, the world and their place within it. And I
should actually replace the "them" with "us." Just this last
week, Inside Higher Ed, probably the most active online
higher education website, published an in-depth article on
the Fram Chair and our specific institutional efforts to date.
Under Professor Sheffield's leadership, it has been another
very busy and productive year for him, but much important work
remains to be done. He has continued to lead faculty
workshops and to sponsor related programming. A preliminary draft
plan for integrating critical thinking across our
undergraduate curriculum has been written and presented to
key stakeholders, including the Provost's Council, the Fram Chair
Faculty Advisory Group and several academic departments. He
has also met with many faculty, students and staff individually
to discuss their teaching pedagogy and to suggest ways in
which to make critical thinking a more intentional learning
outcome. When RIT hosted the second annual meeting of the
Assessment Network of New York this past April, he organized
and led a panel on the assessment of critical thinking:
challenges, opportunities, risks and rewards, which included
several regional and national experts. This past July, he led
an RIT team to the American Association of Colleges and
Universities Faculty Institute on Integrative Learning across
the departments (that's a big mouthful) in order to discuss his
plan and to seek expert feedback and guidance. And in recent weeks,
Chip has been networking and discussing critical thinking
with a number of influential international scholars. Now, many
of you will remember, if you attended, last year's Fram lecture
by the noted NYU sociologist, Dr. Richard Arum. You'll be pleased
to know that he is coming back to RIT on May the 7th, 2015 to
discuss his newest book, "Aspiring Adults Adrift". It
sounds like me actually. It is already receiving a great deal
of critical attention so please mark your calendars. And I
couldn't be more delighted that Chip persuaded Dr. Bell, a
Vice President at Intel and Director of User Experience
Research at Intel Labs, to give this year's lecture. Chip, I'd
like to ask you to come forward and formally introduce our
speaker. Chip.
[ Applause ]
>> Thanks very much, Bill.
Over 20 years ago, on a beautiful autumn day,
not unlike the present, at the majestic and sprawling grounds
of the Pew Family estate, known as Glenmede, which served as
Bryn Mawr's graduate student housing, I first met Genevieve.
I was ensconced in my first semester of graduate study for the
PhD, and my dear friend, Phillip Jay Kent, an Australian from
Sydney introduced us. His subsequent death left a
tremendous void for all of us who knew him. She and I spoke
only very briefly. It was nothing more than a very quick
exchange before she went on her way, yet I've never forgotten
that fleeting encounter. Phillip remarked to me afterwards in his
characteristic way, "She's off to Stanford." Sorry, I'm having
trouble reading here. "She's off to Stanford now for more
graduate study and you better watch her. She's something
special, a real martyr. Tough as nails, fearless and smart as a
whip." I followed his sage advice and I watched her
meteoric rise with great interest. I'm immensely grateful
to her for agreeing to be with us here today, Genevieve, thank
you. I'm not finished. I have more, I'm sorry. Dr. Bell's work
at Intel reminds me of an old Swedish expression, "Bara döda
fiskar följer strömmen," which translates roughly as "only dead
fish follow the stream." In other words, you shouldn't just
go with the flow or follow the herd. You should strive to be
different, to have the courage to follow your own path, to
listen to your convictions and to prize eccentricity, dissent,
constructive criticism and difference. In other words,
originality is the key to life, and imitation is death. Often
described as a gadfly, a contrarian, a dissenter and an
outspoken social scientist, a female in the traditionally
male-dominated field of corporate engineering, Dr. Bell has always
been nothing but fearless. Her tweets under the name feraldata
are followed by a growing worldwide audience. Feraldata is
a perfect moniker for her electronic presence; after all,
she spent much of her childhood down under in the remote
Australian outback, the daughter of a renowned cultural
anthropologist, an activist, a feminist and a sometime
politician. The wild and the domestic, the colonized and the
unexplored, the sacred and the profane, the cosmopolitan and
the provincial, the most complex technology and the simplest oral
narratives all play a vital role in Dr. Bell's world view. I
think Australians are deeply aware of their place on the
world map, as a contemporary vestige of a collapsed European
empire and in many ways the archetypal antipodes. I'll jump
ahead. Seventy-five years ago today, Freud died in
London, shortly after arriving as a refugee from Vienna.
Freud, like Copernicus, Darwin and that other notable refugee,
Karl Marx, posited an entirely new paradigm for understanding
human beings, our sense of self and our relationship to the
world. Several scholars have recently argued that we
currently stand on the cusp of yet another comparable
technological revolution, or a paradigm shift,
the age of big data and the info-sphere:
smartphones, wearable smart devices, social
media, customizable personal robots, the internet, the cloud,
ubiquitous computing, search and sort algorithms, as well as
affective computing and the internet of things. These are no longer the
stuff of science fiction. Dr. Bell's talk today of course will
touch on a number of these issues and also provide
important insight into future possibilities. Dr. Bell.
[ Applause ]
>> Wow, way to make a
graceless entrance on my part. It is my extraordinary privilege
and pleasure to be here. It is always an honor to be at the
universities of America. I consider myself to be a
product of a liberal arts education on the East Coast, as I've
just been reminded, quite some time ago. Thank you for that.
But I thought what I would do this afternoon was talk a little
bit about a piece of speculative work that I have been doing, and
it's fascinating to be reminded that it is 75 years since Freud's
death, because I think in some ways, he would resonate with
this topic. I entitled this lecture "Making Life" because
I'm very interested in the ways in which technology has served
both literally and figuratively to shape our imaginations. I'm
also really interested right now in the topic of robots. I've
just been learning to design for robots. I'm very obsessed with
robots because I find them fascinating as a proxy for a
certain set of conversations about what is technically
complicated but also always the subject at the center of our
imaginations. So those were a number of gracious introductions
for me. I think probably it is best to say two other things, I
am indeed the child of an anthropologist, but I'm also the
child of an engineer. My father was an engineer as were both of
my grandfathers. And whilst I grew up living with indigenous
people, I also grew up dismantling engines. So, I have
a certain affinity for the mechanical that I tend not to
talk about very publicly. My father would be saddened by that
I'm sure. I was lucky in my childhood to have moved to
Central Australia in the late 1970s and lived there in the
early 1980s with communities of people who still remember their
country before Europeans came. And my childhood was full of
stories about what that country was like before fences and
cattle and whitefellas. And most of my childhood was spent
as a result without shoes, not speaking English, and never in
school. I know it's a dreadful thing to say at a university.
And I hasten to add I wasn't homeschooled, I simply wasn't
schooled at all. My long division has never recovered
from this, but I know how to get water out of frogs, and that's
not something you can say about everyone. It also is in some
ways unexpected given such a peripatetic childhood that I
would end up in the United States and in American industry.
That was never my chosen path. My mother, as Professor
Sheffield rightly points out, was an academic, but she was
also an activist and a firm believer in social justice and
she raised my brother and me to believe very strongly in the
notion that if you could see a better world, you are morally
obligated to help bring it into existence, through your
actions, and your intellect, and your heart and your passion, and,
if needs be, your life; you should seek to make that better
world be. And that's a hard charge for a small child, it's
no better as an adult. But that notion, that one was morally
obligated to make a better world is something that has shaped my
entire intellectual and personal life. So I came to the United
States to go to university, I stayed for graduate school and
in the late 1990s, I was teaching at Stanford, Native
American studies and anthropology. And in March of
1998, I met a man in a bar in Palo Alto and he changed my
life. I hasten to add that's not career advice. For those of you
who are still students in the room, the advice here is not to
hang out and meet strange men in bars, although it sometimes
works to one's advantage. This man changed my life not because
I married him, not because I had his children, not even because I
had an affair with him, but because instead he asked me a
very simple question. He asked me what I did. And I said I was
an anthropologist, he turned out to be a serial entrepreneur in
Silicon Valley and didn't know what that was. So I explained it
to him patiently. He then said, "What are you doing with that?"
And I said somewhat proudly I will admit that I was a tenure
track professor at Stanford University, and he looked at me
and said, "Surely, you could do more." And that was an
extraordinary thing to be told, as someone who was raised to,
you know, believe in the university system. And I left at
that point, that was really enough of that conversation. So
imagine my surprise when he called me at home the next day
because my mother, like I'm sure some of the mothers in this
room, had cautioned me not to give my number to strange men in
bars, and I hadn't. And this was in 1998, before Google, before
LinkedIn and Facebook, before a white box on the internet into
which you could have typed my name. And instead, he'd done it
the old fashioned way. He called every anthropology department in
the Bay Area looking for me by description: she's redheaded,
Australian, loud. To the infinite credit or discredit of
the secretary of the Stanford Anthropology Department, she
said, "Oh, you mean Genevieve, would you like her home phone
number?" And so here I have Bob on the phone saying, "You seem
interesting," and I'm saying, "But you don't." And then he
said words that for some of you will, much like me, haunt you for
life: he offered to buy me a meal, and when you're fresh out
of graduate school, the prospect of free food is still a
motivating factor. So I had lunch with him, ultimately he
introduced me to the people at Intel, who interviewed me and
made me a job offer. I turned them down, six times, because I
couldn't understand what it was that the job would be. It didn't
have a description, it didn't really have a title, it mostly
came with the invitation you seem interesting, which as far
as I was concerned wasn't really a job description. And every
time I pushed them and said, "What would I be doing," they said,
"Well, we don't know yet." And that was not necessarily
comforting, but sometime about 16 years ago almost to the day,
I woke up in my bed in Menlo Park and I realized I was
facing a decision. I could choose to stay in the university
system and pursue tenure, and I knew what that would take, and I
knew just how good I was going to have to be for the intervening
six or seven years before I accomplished that mission, and I
wasn't sure if in so doing I would have lived up to my
mother's expectations or my own. And I realized that there was
this offer on the table that I was perhaps misunderstanding.
And that a company like Intel was in the middle of the largest
technological transformation of my lifetime, a technological
transformation that stood the possibility of shaping the way
we did everything, not just the way we, you know,
engaged with information, but how we engage with each other and
with money and with God, and with love, and our bodies, and
all of that seemed kind of amazing. And for those of you
who were still children in 1998, I know it sounds odd to imagine
that it was ever up for grabs that the internet was not going
to be this thing, but there was a brief moment in the late '90s
where it could have gone either way. But here was Intel saying,
"You should come join us." And I realize that I was being given
the invitation and the opportunity of a lifetime, that
I was being asked to sit in the middle of the most exciting
conversation as someone who could be a different kind of
voice, because they knew I wasn't an engineer, they knew I
was an anthropologist, they knew I was slightly difficult because
truthfully when they interviewed me, their last question was, "Is
there anything else we should know about you?" And I said,
"Yes, I'm a radical feminist and prone to, you know,
neo-Marxism." And my now colleagues said to me, "Will we
like that?"
[ Laughter ]
And I could quite honestly
say probably not for the first six
months, which turned out to be accurate. So they knew what they
were getting. I didn't really know what I was really walking
into, but on September 9th, 1998, I joined Intel. And I
turned up on my first day of the job. My new boss said, "We're so
pleased you're here because we need your help with two things."
And I remember thinking to myself, you've had six months
and there was no job description, and now you're
limiting my scope of work to two things. I wasn't happy, and I
said diligently, "What two things are they?" Because you
know, you like to make a good impression on the first day of
the job. My new boss says, "Well, we need your help with
women." I'm like, "OK, which women?" My new boss says, "Well,
all of them." "All 3.2 billion," I said. And she said, "Yes, that
would be excellent." Like, "What is it you imagine I will do with
3.2 billion women," I inquired. And my new boss said, "It would
be great if you could tell us what they want." So in my
notebook of the day I wrote down, women all and underlined
it a lot and tried to imagine what was the project you would
do that would explain two things: one, that women all
wasn't actually a meaningful category, and two, some data
that said something interesting about women all, and those of
you who are researchers or who have a life ahead of you in
research, those are the best moments there are, the moment
when the research isn't yet clear, but the question is.
That's like a moment of Catholic grace, at least for me. But then
I realized she said there were two things. And if
thing number one is women all, it is horrifying to
contemplate what thing number two might be. I think I secretly
hoped the answer would be men. Because then I'd know what was
in scope, right? But no, my new boss said, "Listen Genevieve, we
have this ROW problem and we could use your help." And I had
to confess that I didn't know what ROW stood for, and my new
boss said, "That's rest of world." Rest of world, and I
said, "Where's world?" Because that seemed to be an obvious
next question and she said, "That's America." I'm like, "OK
good." So then to recap, my new job will be explaining women and
everyone who doesn't live in America, to everyone in the
company. "Yes," said my boss, and I'm like, "OK." And I went back
to my desk. I will confess at this point, I think I went back
to my desk in a kind of mixed state of fear, terror and a
vague nagging sense I might have made a bad decision, because
when you're told to explain to everyone who isn't in the
building, to everyone who is in the building, that is an
awesomely frightening task. But it's also been my job for 16
years and it has been a job I have loved more than any other,
and I consider myself to be an extraordinarily privileged and
lucky person that that has been my job. Because I get to spend
my time in people's homes all over the world getting a sense
of what makes them tick, what they're passionate about, what
frustrates them, what they want for themselves, their kids,
their families, their countries, and think about how to use all
of that to shape next generation technology. And that's kind of a
lovely job to have and there is amazing work in that job, but
all of that is the stuff for another talk. In this talk, what
I want to do is tell you about the other piece of my job.
Because when I'm not actually looking at what people are doing
today, I think critically about what are the stories we tell
ourselves about the future, because those stories are
incredible insights into the way our minds work, and I think
there is a call for all of us, particularly I think here in an
applied critical thinking program, a call to be critical
about those stories and to be able to interrogate them. So
part of my job in addition to thinking about people is also
thinking about what I would call here the sociotechnical
imagination. So, the ways in which we imagine the
intersection of people and tech. There's lots of ways I could
illustrate that, as Professor Sheffield suggested earlier, I'm
a bit manic on Twitter at the moment so I thought I'd use just
a couple of examples from Twitter to talk about what I
mean here. William Gibson, many of you may know, is a preeminent
science fiction writer. His work helped shape the way we think
about the internet, and data and all sorts of things. But
recently on Twitter, he posted that he had woken up from a
dream, which had taken place entirely in Google Map street
views. So here is a man whose imagination shaped the world
finding technology, in turn shaping his imagination. And
there's something to be said here about the interplay between
technology and the ways we think about the world. This is not a
straight singular line, right? It is always complicated and it
is a constant interplay. Because that didn't necessarily resonate
for everyone, I realized there was a better example I could
pull from Twitter which comes from my favorite follower on
Twitter, someone who is tweeting as a robotic vacuum cleaner,
under the handle SelfAwareROOMBA. Now, many of
you may not know this, but the ROOMBA is the most highly
adopted computational robot in the world. There are 10 million
of them on the planet, that means there are more ROOMBAs
than any other kind of robot. Just pause and think about that
for a minute. The robot uprising is not going to come with the
terminator destroying you. It is going to come with your robotic
vacuum cleaner disconnecting your wireless router. We're
going to experience the apocalypse with a lot of dirt,
not necessarily, you know, death and destruction. Nonetheless,
this robotic vacuum cleaner has a delightful inner life and
spends a lot of time thinking about the things around it and
imagining the world in which it lives. At the moment, it is
convinced that the household appliances are talking to it,
which is excellent. It is also, unsurprisingly for a vacuum
cleaner, obsessed with dirt. So it dreams about toast, and dust,
and glitter, and leaves, and dog food, in no particular order.
Why it is that we want to imagine or that someone would go
to the trouble of creating a Twitter account, and then
tweeting as a vacuum cleaner, tells you something about the
nature of our relationship to technology, that we secretly
wish they had inner lives and that those inner lives had
something to tell us about the nature of the world, or in this
case, toast. There are other ways of thinking about this.
This is probably my favorite piece of video on YouTube. It
was posted about two years ago, and it features a Furby in
dialogue with Apple Siri. And this will be fun to watch how it
is signed. Because the Furby goes, "Eh, eh, eh, eh, eh!"
Makes the Furby sound. And the Apple Siri says, "I cannot find
Graham in your address book." And the Furby goes [singing]
"Lala-la-lala!" And the Siri says, "Would you like me to look
up Shell Oil." And there are 45 more seconds of that and it's
kind of fab. There's a very nice young artist in Portland who
made this video and he knows I am obsessed with it and he is
always very sweet about it. He brings me Furbys as gifts because
he says my impersonations are good, but could get better. I
thought this was so delightful that I brought it into work one
day and tried to have a conversation with my engineers about it,
because I said, "Listen, I think there's something really
important going on in here," and they did not see it. So I had to
explain it to them. I said, "Listen, I think this is about a
genealogy, granddaddy talking thing to grandbaby talking
thing, you know, a talking thing from 20 years ago and a
contemporary talking thing." And then I pushed it one step
further and said, "I actually think this is like a tree of
humanity, this is a diagram of evolution from a thing that talks
to a thing that knows how to both talk and listen," and the
listening is important, right, because the listening starts to
suggest a different kind of model of engagement. Because if
a thing can listen, it suggests that maybe it will think, and if
it is listening, it suggests perhaps it is listening to you
and maybe what it will do is have a relationship with you. At
this point, the engineers with whom I work stopped me and said,
"Listen, here's the deal. If this object speaks and listens,
and thinks by itself, you know what happens next." And I'm
like, "Yes, we have a relationship," and they said,
"No, it kills us." And somehow, my colleagues had gone from the
Furby to the Siri to the Terminator, like boom-boom-boom.
And I thought what is it about our anxieties about things
coming to life that so shape our imagination that we have to
imagine that any technology that gets close to us has ill intent.
And I started to think about that, right, as an
anthropologist, as someone who studies the ways we think about
culture and I realized a couple of things. One is that as human
beings over, I would say, at least the last three to five thousand
years, we have had a fascination with stories about things coming
to life. In the Greek and Roman traditions, those were the
prerogatives of gods. Gods could bring things to life, not always
things that were supposed to be brought to life, sometimes
bringing themselves to life in odd bodies and getting up to no
good. In the Christian tradition, we know that the
foundational stories are about a god bringing human beings to
life, bringing us into existence. In most of the other
major world traditions, there are stories about a god figure
making human life. And those stories are hugely important,
right, because they start to suggest who it is that gets to
make life and who doesn't, because there are lots of people
who attempt to make life who shouldn't. And in most of the
stories we tell ourselves, those don't end well, whether
it is the benign stuff of Disney like the "Sorcerer's
Apprentice" and "Pinocchio" or the earlier, less benign
stories. Whether it is the alchemist
attempting to make homunculi, or the various other
stories we have told about men making things come to life, the
moral is usually the same: if you're not a god, making
something come to life is stepping out of the order of
things and there will always be consequences. In some ways the
most important of those stories, the one that shapes our
sociotechnical imagination today is actually the story of the
golem, that comes to us out of Jewish mysticism and out of the
Kabbalah, and is a story told in many ways; the most
dominant one is told about Rabbi Loew
in Prague in the 1600s, who carves a creature out
of mud and brings him to life to look after the Jews in Prague
and he is a creature who can only do god's work and the
rabbi's work. He can't do ordinary tasks. When he does,
nothing good comes of it, think "Sorcerer's Apprentice" kind of
activity. But he is seen as being this extraordinary figure,
right, carved out of mud and clay, brought about by god's
words, slammed into his forehead and given this incredibly
important role of protecting the Jewish community. But
that story of men bringing something to life, of a creature
brought to life from other sorts of material, never
goes away, and it haunts our imaginations through to the
current day in many incarnations that you will recognize and I
will touch on later. What is fascinating, however, is by the
1700s, we stopped just telling stories about making life
because a whole series of new technologies start to stabilize
that make it possible to at least start to make things look
like they're alive. So whether it is early experiments with
electricity, whether it is early experiments with vivisection and
autopsies, where you start to be able to trigger nerves, there
are a whole set of things that start to happen in the early
1700s that transform the way we think about bringing things to
life because they start to make life not just a narrative but a
possibility. Possibly the kind of first blush of this, the
moment that is really significant, comes as there is a
surplus of watchmakers and watch parts, a kind of classic
thing, right: watches take off, a lot of people flock, people
send their sons to become watchmakers, and there is a
glut in the market, and watchmakers who don't know what
to do with themselves. This is to overly simplify a fairly long
period of history. Nonetheless, you have people who are good at
tinkering with mechanical things and realize that the same thing
that makes a watch tick could make something else tick too.
And there is a period of time in the 1600s and 1700s, even into the
1800s, where people made things we now know as automatons,
small things that wound up and did stuff. And there's many
varieties of them that run the gamut from small dancing
children to large sorts of objects that do all manner of
things. My favorite of them and arguably the most important of
them is by a man named Jacques de Vaucanson, and it came to
life in 1736 in Paris. It is called "Le Canard Digérateur" or
The Digesting Duck. I know that doesn't sound particularly
significant or you may be wondering why would a
duck shape robotic history, but it did. So, Vaucanson was an
incredibly talented mechanical engineer, before we knew that
was what the word was, right? And he was also really
interested in simulacra, or the verisimilitude of things being
like the thing that they resembled, and he set about to
make a duck, that was really duck-like, like he wanted it to
be like as ducky as you could make it. And clearly if you were
living in France in the 1700s, lots of people had encountered
ducks, so it's a pretty high bar, right, you know, what
this duck needs to be. And so, this duck had 400 mechanical
parts in it, it was about this big, it could waddle, so foot
over foot, it actually walked. Its beak clattered, that was
good. Vaucanson decided that this duck should digest because
as you do, so you could feed this duck. And his first pass at
it, he made a digestive tract of metal. But when you put water in
it, it rusted. So, Vaucanson became the first person in
Europe to use vulcanized rubber for anything, effectively for
the intestinal tract of a wind up duck but nonetheless, maybe
not an honorable start, but a good start. And so you should be
able to see where this is going. So, this duck walks, this duck
eats, this duck digests, and there's only one thing left for
this duck to do. Now, in 1736, it was very hard to fake
digestion, so this duck had a trick up its sleeve or possibly
somewhere else. Because this duck actually had a pre-cached
reservoir of duck poop in the bottom of the duck, which was
collected every night by someone, I imagine an intern or
a postdoc. Because those are the kind of tasks we all got given.
And so, as this duck would walk across the stages of Europe, it
delighted people with its extraordinary resemblance to
ducks because it did everything they knew ducks did, like all of
it. Walking, eating, digesting, pooing. I mean, it was like a
full wonder. In fact, so spectacular was this duck that
it, you know, made its way around Europe; it was on many of
the courts and stages of Europe. [Inaudible] said of this duck
that without this duck, there would be no glory for France,
which may say more about [inaudible] France than it does
about the duck. Nonetheless, this was understood as being a
transformative moment, a moment where you could take technology
of the day and make something come to life, something that
resembled life enough that, you know, had Freud been around
then, he might have called it uncanny and truthfully, I'm sure
these were the things he was meditating upon when he thought
about things as being uncanny. But lest you think that
verisimilitude is the only path to life here, there is a
counterexample I want to bring you from Japan, from the late
Edo period, and a Japanese imperial mechanic by the name of
Takan (T-A-K-A-N), who took the very same
mechanical object, the watch, deconstructed it and
made a different kind of automaton. They called them
karakuri, K-A-R-A-K-U-R-I, karakuri. But you could just call
them automatons because that would be easier. They were never
meant to look like life-like things, right. So this
particular karakuri is about 12 inches high; it works by putting
tea in a tea cup. When you put tea in the tea cup, it runs
across the table top. When you pick up the tea cup, it bows and
runs backwards. There was nothing that did that. There's
not like, you know, some small animal in Japan that is being
animated to deliver tea. This is someone's notion that if you could
bring something to life, why did it have to be human things;
maybe it could be something else. And so the idea here is
that not all stories about making life are about making
human life, and that there are always possibilities that making
life doesn't necessarily have to be from verisimilitude, but it
could be from the sense in this case of grace, wonder, beauty or
perhaps, you know, ritual. Nonetheless, if you were making
things by the 1800s, you start telling stories about the
consequences of making things, as it becomes clear that the next
generation of technology, moving now from watches to electricity,
is going to be another game-changing technology. And a
young woman named Mary Shelley wrote a work that in some ways
has shaped our imaginations ever since. She wrote a story about a
doctor named Frankenstein who made a man out of body parts and
brought him to life with electricity. Where the golem was
brought to life by the shem on his forehead, Frankenstein's
monster is brought to life by electricity. But the genealogy is clear,
these are in some ways the same story. And the monster here who
goes nameless, lumbers through life, desperately trying to work
out how to be human is ultimately rejected by us and in
a fit of rage turns on us, and then the book ends. Of course,
"then the book ends" is a misdirection because this book
hasn't been out of print for 200 years, it has been the subject
of more plays and movies and television shows and moments in
popular culture, such that I now know that there is a sign
explicitly for Frankenstein and you all know what it is, the
[inaudible] right here, we all know that image right? It's
indelible and the story has stayed in some ways to shape our
imaginations and is the legacy of the golem story, right. It
also forms the basis of why it is that James Cameron could have
a multiyear franchise about the terminators that also resonated
because there is something here in the ethos, the zeitgeist,
about what it means to bring things to life and what the
consequences are. Doesn't ever end well is what you should take
out of this. What is fascinating is despite the fact we all know
that, we keep building stuff. So you know, OK, so Shelley says,
"Ah! Bring it to life with electricity, nothing good will
come of it." And people go, "Oh, let's try bringing it to life
with electricity." OK, good. So, in the back half of the 1800s, from
the late 1800s into the early 1900s, people started
experimenting with the next generation of technologies. At
this point, electricity is still not stabilized enough out of
Leiden jars and galvanic batteries, but steam engines
have come along and boy, are they excellent. And you can
power all kinds of things with a steam engine. Mostly, it should
be said mechanical elephants because you could hide the steam
engines inside their bellies so having them lumber along wasn't
so bad. But there was a whole period of these things called
walking men or steam men, which were this first attempt to
animate the body. Now of course, you always had a steam engine
dragging behind you, so it wasn't a very good trick. But
you can see people are trying to work out, how do we harness
electricity to make something interesting and how do we
harness steam and how do we make life. And of course, in some
ways, all of those technologies become foundational,
electricity, steam, mechanical, mechanized parts. But also by
the time we entered the 20th century, we're talking about
radio and movies, and the beginning of the telegraph, the
beginning of mass production, there are many things going on.
And a hundred years ago this year, the arc of all of that is
transformed by World War I, which arrives and starts to ask
some questions about what is the nature of technology, what will
it do, what will it bring, what does it mean to imagine that all
of those technologies that had seemed so speculative and
spectacular before 1914 were also now suddenly repurposed to
danger and death. And coming out of World War I,
there were a series of conversations that were sparked
about the nature of technology, the nature of social
organization, should we think about democracy or socialism or
Marxism or utopian communities. Did we have to think differently
about labor, about the roles of women, about the notion of the
relationship between the citizen and the nation state? And all of
those conversations were incredibly rich and incredibly
fertile, particularly in the arts. And in 1920, there was a
Czechoslovakian playwright and writer by the name of Karel Capek, who
was working on his second play. He'd written some novels, he'd
written some short stories, he'd watched most of his friends die
in World War I, he had been highly politicized by that and
he started writing a play that was his response to everything
he saw around him, and that play premiered in Prague in January
of 1921. The play was called "Rossum's Universal Robots", and
it brought into existence for the very first time the word
"robot," because up until then, the word robot didn't exist,
it's a made-up word, it's a borrowed word, from the Czech
word robota, which means serf, as in feudal
lords and serfs, so a power relationship, right. And he
thought about using other words and his brother encouraged him
to pass over a few other words in favor of this word
because he knew it had a political context, it had a
power context, right. And so this play debuts in Prague. The
narrative is familiar, we would all recognize it. There is a
man, his name is Rossum. He has a factory in which he makes
robots. The robots are a biological mechanical interface.
They're designed to do drudgery tasks and they live about 25
years and they're not capable of happiness. The arc of the play
is there are many robots, then there are more robots, people
become less, the robots eventually get really cranky
about the fact that they die at the age of 25 and don't
tend to be happy, and they go find their maker and demand
longevity. We should recognize the story because it's come up a
few times since. There are ways in which this play should have
never gone any further, I mean, Prague, 1920, 1921, not
exactly the epicenter of you know, many things. However,
Capek was well known in the US. This play was seen as being
important and in October of 1922, it debuted on Broadway to
mixed reviews. The New York Times said that the robots
didn't raise goose flesh the way Frankenstein did. So, they
understood robots are not so scary monsters, not very
effective and they thought the play was a bit banal. That will
be the New York Times. The New York Post thought it was
excellent and everyone should go see it at once, the paper of
record not always right as it turns out. The play had a very
long run. It was very successful. It was the talk of
the town, and in 1923, it moved from New York to London on the
west end. In 1924, it was in Tokyo, in 1927, '28, it became a
piece of broadcast radio science fiction and in 1939, it was the
first piece of science fiction to ever be broadcast on
television. Clearly, it resonated, right? There was
something about this play, about the story that held people's
imaginations. I would argue that it partly worked because we now
knew what robots were; it might have been a new
word, but the idea had all this baggage that now had somewhere new
to land, right, and everything just kind of went, "Oh! What,
automatons? They're really robots. Walking men? They're
really robots. Golem? Really a robot. Frankenstein? Probably a
robot." Boom, boom, boom, boom, boom. And you know, there are
lots of descriptions of robots, but in the play itself, it talks
about this notion that it's flawless and better designed
than a human being is designed. So there's already a kind of an
aspirational sense here of these things being better than humans.
From the first moment the word is articulated, the idea is that
it is better. And of course, once that gauntlet is thrown
down, you know, there are people all over the world going,
"Right, I'm going to build a robot." And here is the first
one. 1929, a man named W.H. Richards, who is the head of the
Model Engineering Society in London, has his annual
conference. And he invites the Duke of York to come open the
conference, and the Duke of York is busy, so he builds a robot. As
a child of the British commonwealth, I would like to
point out that substituting a royal for a robot is really
interesting. And possibly a little naughty. So, he builds
this robot. The robot is six feet tall. He looks a bit like a
medieval knight. He has bright blue eyes. He has RUR emblazoned
across his chest, i.e., Rossum's Universal Robot, just in case
you didn't get what it was. He is very clearly articulating
what he is. He sits on a battery because there was no other way
to power him. And he bows incessantly, he's a very polite
robot, and oh, because it's England, he's called Eric. So,
Eric--meet Eric the robot, first humanoid robot after the play.
He tours. Both the New York Times and the Post thought he
was terribly debonair, and polite, and charming in an
English way. I don't even know what that means, but there you
go, first robot. Second robot, a year later in Osaka, in Japan,
he has a complicated name but in English it means, "a robot that
follows the laws of nature," which is interesting. He was
built by a biologist. Unlike Eric, who is run off a
battery, this robot is powered by air, pneumatic air vents
basically, so there's air running up through him that
makes him move. He's 10 feet tall, he has a pen and he
writes, and he basically will write things for you, right? You
can ask him a question and he will write you an answer. He
goes on tour in 1930 to Germany and is never heard of again. I
don't know how you lose a 10-foot high
pneumatically-operated robot, but he's missing, so if anyone
finds him, that would be excellent. Time passes and the
Americans have to get into this game too as you can imagine.
There are a number of versions, but arguably, the kind of pivotal
moment comes with this robot here, whose name is Elektro. He
was built by Westinghouse. He first debuted in America in 1939
at the World's Fair in New York. He is of course taller than the
English robot, so he's seven feet tall. He's powered by
electricity and those things that look like spurs are
actually what drags the extension cord behind him. He
could speak up to 800 words. He had a voice command system in
his belly. He could do a little dance. He told appallingly bad
jokes, and because--you have to remember, this was the 1930s, he had
bellows in his head that let him smoke cigarettes, which he did
all the time. He was routinely photographed with Johnny
Weissmuller, i.e., Tarzan. He was routinely photographed with
starlets and he was in a series of movies in the 1930s, late
1930s. He came back to the World's Fair in 1940 with his
pet dog Sparko, who could wag his tail. And I suspect had
World War II not summoned the United States abruptly, you
might have seen more of these things. But, the
commencement--well, the American joining of World War II, those
being two different things, when World War II started and
when you turned up, in that period, no more Elektro. And
sadly, much like his Japanese counterpart, he disappeared for
a bit. His last known public appearance was in 1955 in a
Hollywood movie called "Sex Kittens Go To College", which I
know it sounds like soft ***, but it's not. It is
actually in the beach blanket bingo format of movies, so
really just, you know, sorority girls and a robot, because, as
you do. And then for 40 years, he existed mostly with his body in
a box in Pittsburgh and his head on the coffee table of a frat
house because the mustache was good for opening beer bottles.
He and all of his body parts were rejoined in 2010. But he's
kind of delightful, right? So you have this early period, the
word exists, a whole world of technologists embraced the idea
that they probably ought to make this happen. But of course,
they're not the only people playing with this, right, you
know Hollywood says, "Oh, that robot thing, that looks like a
really good idea." And Hollywood and this play are coterminous.
So, some of the very first movies are about robots, right.
You know, whether it's Buck Rogers, whether it's Yul
Brynner, who may well be a robot, in "Westworld", whether
it's, you know, Jane Fonda's "Barbarella", "Stepford Wives",
there are many robot stories out there, right. And they have many
different arcs and trajectories and types. Clearly in the early
days, the robots are like cowboys as they attempt to
locate them in a narrative genre because science fiction hasn't
really mainstreamed yet. By the time science fiction turns up,
robots are just part and parcel of the scenery. And usually we
know how it goes, but the imagination here runs far ahead
of the technical challenges, right. It is much easier to make
a robot walk in Hollywood than it is in a laboratory. So you
have these kind of interesting moves that happen, right.
And ultimately that leaves us in the post-war period
in this really interesting position of what is it going to
take to make a robot real and how are you going to do it. And
my sense is that this actually turns on four things. And while
these are true about robots, these are also true about most
technologies. These four questions are residual questions
that are as much about what it takes to make a robot real as
they are about what it takes to take any technology and make it
real. The first question you have to answer when you think
about making a robot or any piece of technology is what is
it going to do. Is this robot a substitute human being? Is it
just doing dangerous tasks? Is it just doing drudgery, which is
what Capek surely wanted? You know, is it going to be only a
partial human? Or maybe just a partial bit of a body? So the
first kind of robotic human-like things that turned up in this
country were mostly robotic arms in car factories in the '50s,
'60s and '70s, right? It wasn't a whole body, but an arm was a
useful metaphor for thinking about how you move things. But
there is always the question and I think, again, it is a question
not just for robots but for technology about what are you
doing? What is the thing up to? Why is it doing that? And for
whom? Who benefits from that action? Right? Who benefits from
the production of the object? The second question is also a
question about, what is the form or in the case of robots, what
will the body look like? I have a colleague in Melbourne
who has composed an extraordinary website that is a
catalogue of every humanoid robot form he has ever managed
to find anywhere on the planet. This is what the internet was
invented for as far as I'm concerned and it is a remarkable
treasure trove that he has created. When you go through his
list of robots, however, there's something that is really quite
startling. About halfway down the list in the 1940s, early
'50s, there is a robot that was built in Chicago and next to its
name, it says in parentheses negroid. If you look down this
list, you realize there are a couple other robots that are
flagged as female, which then starts to ask this really
interesting question. What are all the other robots? Are they
men? Are they white? What does it say about class and caste and
race and gender, that you have to mark a robot as black and a
different robot as female, but none of the others need
classification. So it becomes a fascinating question, right,
about what are the bodies that we are building? What is being
naturalized and normalized in that move? And what will that
mean? Not to mention, technically speaking, it is
really hard to make something walk. Every time--when you walk
out [inaudible] and you go up the stairs, focus on every piece
of your body that is moving because let me tell you,
mechanizing that is like one of the hardest technical challenges
imaginable. So even if you could hold in abeyance questions about
gender and race, you are stuck with some really interesting
affordances of the human body that become really quite
complicated. And say you could solve the question of purpose
and the question of form, form and function. I think you
are left with this really interesting question about how
much autonomy, how much independence or agency do you
grant in this case the robot, but I would say in the case of
technology, any technical object. How much can it act on
its own? When I was in Tokyo recently, I was with some of my
colleagues and I saw this sign and I made them stop so I could
photograph it. And then I asked them what it said. And it says
"robot zone." OK, good. What does it say in Japanese that I
can't read? And they're like, "Oh," it says, "Robot zone,
autonomous robots two meters in from the curb." Like whose
robots are they, I inquired. And they said, "Well, they're
autonomous." Like, "OK, how did they get here?" They're like,
"Autonomously?" I'm like, "Ah, what are they doing?" They said,
"Being autonomous." "Aren't you concerned?" "No, that's why
they're two meters in from the curb." I'm like, "Aren't you
worried the robots will do something?" They're like, "No
they're two meters in from the curb." Like, "Yes, but whose
robot?" Like, "They're autonomous," like "But aren't
you worried?" "That's what the sign is for." At which point you
realize you're having one of those conversations that is
ethnographic in the extreme,
where I have an assumption and they
have an assumption and they are not lined up assumptions. And
I finally said to my colleagues "Aren't you concerned that
something bad will happen?" And like, "That is what the sign
is for, to let us know that there are robots two meters in from
the curb." Like, "Aren't you worried?" And my colleagues
finally just looked at me and said, "Genevieve, what is your
problem?" I'm like, "OK," I said, "if we were in America,
people might be concerned that the robots would not be two
meters in from the curb and would try to kill us." And my
colleagues just laughed. "That's just American science fiction.
Here, the robots are our friends. And we know that
they're two meters in from the curbs." OK fine. And then you
realize that what it means to think about agency, to think
about affordances, to think about what it means to grant
something the ability to do something on its own is going to
mean something different in different cultural contexts. That
what it means to imagine subjectivity or power or
autonomy, while we know how to engineer some of those things
technically, the cultural, regulatory, and moral
consequences of those are conversations we are yet to have
and possibly urgently should have.
And they're not as simple as just saying,
"Well, Isaac Asimov had three laws, so the robots will be
fine," because trust me, that is not going to get it
done. And even if you solve these problems, even if you can
work out what should the body be, what should the purpose be,
and how much autonomy are you granting, there is a fourth and
I would argue deeply philosophical question that all
of that raises, which is the question about what will be its
inner life? People talk about the singularity, [inaudible]
moment when computational power will eclipse us in intelligence
and we will just in some ways be absorbed into the machine.
Ray Kurzweil writes about artificial intelligence exceeding human
intelligence at a particular date, and what does he know? So
we will be either vulnerable or replaceable. My suspicion is
that there's a different way to think about that question.
Because it presupposes a notion about human consciousness that
is linked inexorably to an idea about intelligence and
inexorably to an idea about brains, which is not the only
place of consciousness, right, consciousness also in some ways
involves things that are more intangible, faith, love, soul,
things that are harder to contemplate. But not out of
bounds. I think many of us, when we imagine what the inner life
of a robot might be, can't get very far away from Hollywood. Because
Hollywood has taught us what is in the inner life of robots and
there are only two things. Find John Connor and kill him,
relentlessly, for at least 25 years; and then if you want to
imagine that a robot might have a poetic soul, there's only one
other place you can go, and that's to "I've seen things you
people wouldn't believe. I've seen attack ships on fire off
the shoulder of Orion," because then you go to "Blade Runner."
Because those are the only two examples we really have of what
might be the inner life of a robot, right. But the good news
is people have been thinking about this for a really long
time, and in the Japanese robotic tradition there is an
extraordinary book, published 30, or actually 40, years ago this
year. And this man, who was one of the principal robot makers of
the last hundred years, argues that in fact rather than imagining
that the consequence of giving agency to robots is our
inevitable death, he says that the consequence of giving agency
to robots might be the rise of a consciousness that follows a
different arc. He suggests that robots, because they are capable
of infinite patience, might actually accomplish Buddhahood
before people do. And he says every time you put down a robot
and suggest that its inner impulse is to kill you, what you
are secretly doing is externalizing your own fears
about the human race. And that much of the anxieties we put on
robots are really just the unexamined and unexaminable
anxieties we have about the world we have created for
ourselves. And in some ways coming from a Japanese man whose
country had been bombed with nuclear weapons, it's easy to
imagine that he might want to think about the
arc of human horror in a particular kind of way. I
think my argument here however is really to say, as we think
about any technology, whether it is a robot, whether it is an
algorithm, whether it's any other technical object, there are
those questions that you have to ask every time. They are my
offer of critical questions. What is its purpose? What is its
form? What is its level of agency? And what will be the
consequences of that? Because it turns out for me at least when I
think about the politics and business of making technology, I
realize that it's not as simple as just making technology. We
are also always and already in the business of making culture.
Because you can't set about to make a piece of technology
without intersecting with 2,000 years of stories about
what it means to make life. And every time we bring something to
life, whether it's something as simple as Facebook or as
complicated as a self-driving car, it's the same set of
questions. What's it for? What will it look like? How much
authority will we grant it? And what will it
become and become to us over time? And as I think about what
it means to drive a conversation forward, to drive a critical
conversation forward, because I think that is the right kind, and
I think about the kinds of technologies that are on the
horizon, the internet of things, big data, autonomous vehicles,
drones, self-reproducing algorithms, and just more social
networking technology. These are always the questions. What's it
for? What will it look like? What are we going to let it do?
And what will it say about us when we have finished all of
that. And with that, I want to say thank you.
[ Applause ]
[Inaudible discussion]
I just disappear and not come back. I
will give you one [inaudible]. Thank you! Professor Sheffield,
have a chair.
>> Thank you.
>> You're very welcome.
[ Pause ]
I shall summon my own chair.
>> Can everyone hear us, hopefully?
Genevieve, thank you--
>> You're welcome.
>> --for that fantastic, very rich and
incredibly stimulating presentation.
>> How can you go wrong with robots?
>> Right. We wanted to break a little bit
this year from our format that we've had the last couple of
years and actually, instead of having formal respondents, which
we'd have and they had been superb, to have a bit more of a
dialogue between the two of us and then we certainly are very
mindful of the time, and want to open up the remaining time for some
questions from all of you who've been so patient, especially with
my long introduction. Your talk was just such a rich and
wonderful presentation, but it got me thinking about this
distinction between the orient and the occident, the east and
the west, and what an extraordinary difference there
is. The western notion of fear and I think this eastern notion
of as you say perhaps even grace or imputing or allowing for the
possibility of values and it reminded me of the sort of
notion of the self and the other. And perhaps, how do we
engage with that other? And is there a difference then across
this east-west faultline for someone who spent such an
enormous amount of time in--
>>Asia.
>> --Japan and Asia and
Eurasia and Australia. One of the big questions I've often
thought about in thinking about critical thinking is this
tendency to take a very I think almost American imperialistic
view or a very European view of critical thinking itself--
>> Yeah.
>> --and I mean, even reading a book, you know,
reading--what makes us think we read left to right or right to
left, these conventions that are adopted. But certainly
the notion of authority is often very, very
different in Asia, having had Korean students who are so
respectful at times and I do generalize, but when it comes to
this question of critical thinking, global critical
thinking or how we think?
>> Wow, that's a lot of questions.
So I'm going to answer the one I feel like answering--
>> OK, absolutely.
>> The privilege of just having done that. So I
think there's a couple of things going on in the fear space that
are instructive here. When you look at the relationship between
the appearances of new technologies, particularly in
the post-enlightenment west, and human responses to it, the
places where the greatest anxiety comes are where technologies
come closer and closer to being like us in the west, right? But
the like us piece is really important. So people like Alan
Turing and the artificial intelligence community who set
the bar of humanness as being about cognition, and you know,
it's usually going back to Descartes, "I think therefore I
am," you know, we have understood in the western
tradition that intellect and reason are the hallmarks of
humanity and as things get closer to suggesting that they
can think for themselves or think, it raises this troubling
question of what are we? I mean, what are we left with if they
can think too and surely, they will think that we are very
boring and then they will kill us because that seems to be the
arc until you get to Spike Jonze, right, where the story is
now they will find us boring, and break our hearts and dump us
for 700 other people, which is an interesting kind of
repurposing of that tale. I think what's fascinating is you
look at lots of other cultures, many other cultural traditions.
There isn't a bright line between what has sentience and
what doesn't. So many things are allowed to be alive, many things
are allowed to have some limited form of agency, some limited
form of self-awareness or consciousness. And when you
imagine that there is a range of possibilities of things that
could be alive that could be thinking, you are far less
frightened when something else proposes to join that, that
collection, right? It is more disturbing to say, "Only we can
think, so if you think, ah, all bets are off." As opposed to
saying, we have many things that think in a different set of
spectrums, so adding to that collection, not so troubling.
>> Right.
>> And I think what you have to imagine sitting under
that are different philosophical impulses, different ideas about
the role of rational thought and cognition, and as a
result, different ideas about what gets to be in the
conversation as alive versus-- >> Right. >> --what isn't?
Right? And I think, you know, we've had a very particular set
of intellectual possibilities in the west that are
not the same elsewhere. >> Right. >> And it's probably--I
mean, I also hate to generalize but I think there's something
very interesting about the karakuri in Japan in the Edo
period, the puppetry that happens at the same time where
all those objects are seen as being alive, right? When the big
puppets die, they used to have to bury--physically bury their
heads in the cemetery to make sure that they went quietly. And
so this notion of things that were alive that weren't human.
And to get to a contemporary period where there is an
enormous set of possibilities about what robots are allowed to
be and do in the Japanese tradition is very different than
here. And my suspicion is there are underlying culture
philosophical thoughts that make for great differences. So I
guess in some ways, the answers--the long-winded answer
to the question is, my starting point whenever I do any work is
to always ask the question, how cultural is the thing I'm
looking at, right? You know, chances of it being a human
universal truth is slim, so how cultural is it and what are the
other possibilities of different world views and how do you
explicate those? How do you make those visible so that other
people can also think about automatons and karakuri as doing
two different kinds of work and it being identical technology,
but doing two different kinds of cultural work. So to me,
that sort of, I don't know, trying to open up the
conversation is always important. >> Thank you. I think
that's a really wonderful way of helping me reconceive it, you
know. I had another question that I think kind of piggybacks upon
this, I mean, you just mentioned this idea of the internet of
things. It's a wonderful term and I don't know that everyone
here is familiar with what we mean by the idea of an internet
of things and I just wondered if you could briefly maybe
elaborate on it. Very briefly, but also even furthermore
perhaps say why you deem this to be such an important idea? >> Is
it bad to ask an anthropologist to explain technology and I
apologize to the nice people from IBM who know better than I
do in this room as I give this bad answer. So think of the
internet as having been a technological infrastructure
that was built with a series of nodal points through which
information moved in different kinds of ways. The ends of those
nodal points have mostly been things we could see and engage
with, that had operating systems that we interacted with,
servers, computers, tablets, phones. The internet of things
suggests the next iteration and generation of that where the
things that hang off the end are less necessarily objects that we
would program and more objects that are sensing the world and
helping drive different kinds of decisions on that basis. So
things that are smart, connected, potentially collecting
information and providing analytics on it. So doing a
level of interpretation but frequently that stuff is
happening somewhere else. Many of these objects will be tiny
and embedded-- >> Right. >> --but they will ladder up to
bigger systems-- >> Right. >> --to what we think of as being
things like smart homes, smart cities, transportation systems,
irrigation systems, many kinds of versions of that, right?
Ultimately, though, what it also is is the next
generation of the conversation we are having about the
internet. So it becomes a place where we have conversations
about more information will let us build better cities. More
information will let us be better environmental stewards
because we'll know how much resources we are expending. You
know, these--the stories we tell around the internet of things,
which is still terribly nascent. I think for me as a social
scientist, there are a couple of things that are always missing
from those stories. They kind of fall into three categories,
right? There are no people. Remember, we talk about the
internet of things as just that, the internet and things, and
there are no people, and you think, well, there are people who are
building it, there are people who are going to install it,
there are people who will have to fix it, there are people who
will regulate it, there are people who are going to be the
objects and the subjects of it, and those are all different
people. >> Right. >> And that's a lot of people already that no
one is talking about. >> Right. >> And then you have this
problem of where it is happening. It's one thing to say
smart city, it's a different thing to say smart London, smart
Jakarta, smart Rochester, New York, smart Cleveland or
Shanghai or Addis Ababa. Because each one of those cities is
going to require a different sort of intervention, there'll
be different built infrastructure, there'll be
different regulations, there'll be different habitats, different
ways people occupy those sites. And so, the specificity is going
to matter, right? If you are making a city smart because you
want to regulate traffic, that is really different [inaudible]
for making a city smart because you want to regulate the
population. Very different technological infrastructures.
And then the last thing is every time we say the internet of
things, it's just the sort of floating thing, and sitting
inside that category, you have toothbrushes and telephone
poles. All they have in common is that they are long with a
pointed thing on the top, right? But what it means to make a
toothbrush smart and connected and frankly, why, is a very
different question than what it means to make a telephone pole
or a light pole smart and connected and why. And so
sitting inside this, there's a series of questions we haven't
yet worked out how to ask about what those things are going to
be and what they need to do and what all the rituals are about
them and the landscape in which they exist isn't even. So you
know, connecting your toothbrush so that it doesn't work without
a gamification engine and your cellphone in the bathroom,
there's three of them on the market right now, is very
different than connecting a traffic light, right? And so
part of what makes it interesting to me as a
researcher and as a scholar is that there is tremendous
richness in all of that, but it is waiting to be excavated. >>
It's a wonderful response, I-- >> [inaudible] I don't know how to
answer a question quickly. >> Well, you did a great job of
proving that. One other question I've really wanted to
ask you is why the ferality of data--what is gained, or what
is the advantage, of conceiving of data-- >> As feral? >> --as
feral. >> So, I come from a country. I'm a settler in that
country and in the 200 plus years of Australian history, we
have routinely brought things to my country and assumed we could
control them. Bunnies. Bunnies in Australia? In Australia,
this. Bunnies. >> Thank you. >> You're welcome. Bunnies.
^M01:14:00 [ Laughter ] ^M01:14:02 Camels? We can do
this. Bunnies and camels and goats and cane toads and like
there's just a lot of stuff, right? >> Right. >> And every
time we brought it to Australia, we went, "We can control it,"
and every time it got there, it literally jumped the fence and
ran away. And two hundred years later, we are dealing with the
consequences of all of that jumping the fence and running
away. And I have always thought there was something
theoretically powerful in the notion of feral, that it is in
some ways like the Frankenstein story, right. It's a story
about-- >> Yup. >> And I think there's something about
imagining that we can record everything, or have everything
recorded, and that it won't end up out of control in some way.
That seems to me delightfully and terrifyingly naïve, and
we have every example in the history of at least the last 200
years to suggest that arguing you can control something
because you put guardrails around it is to set yourself up
to not have the right conversation. >> Right. >> And
so for me, when I've used the tag feral data and when I have
talked about feralness as an analytic category, it has
probably been about trying to bring into the theoretical framework
a thing that is very clear from my [inaudible] experience in my
homeland, but also a way of starting to ask some different
questions about, you know, what does it mean to think about
domesticating something? What does it mean to think about the
moments of losing control and frankly, sitting inside feral,
there are multiple options, right? In Australia, we all knew
the rabbits had gone feral because you couldn't miss them.
They're everywhere, it was like a lot of bunnies, which is like
bunnies everywhere. When the camels went feral in Australia,
no one noticed, which was interesting because camels are
much bigger than bunnies. It's true they are, it's a scale
problem. But the camels went feral in a place where there
weren't that many people, mostly Aboriginal people. And so for 50
years, Australia didn't know it had camels, so this kind
of--they got forgotten, right? There were camels, then trucks
came along, we got rid of the camels. The camels went into
the hinterland and for 50 years, the camels did what camels do, which
is have no native predators and have a lot of children. >> Right.
>> And so round about 1990, Australia woke up and went-- >>
Uh-oh. >> We have 300,000 camels and they live to be 60 and they
have a life, you know, they had babies from the time they're 18
months old until they die and they have at least one a year
and two in good years, and that's a population that grows really,
really big. It's an excellent [inaudible], if you will, you
know, in the business world. It's not good if you want to
control the-- >> Right. >> --the--you know, the camel
population. But for me what was fascinating about two examples,
right, is that the feralness of some things will be immediately
apparent because it happened clearly and in front of you and
you will know it is happening. >> Right. >> And I think in some
ways, you know, we've had a few of those moments, right, where
we have seen technologies that is feral and we know it looks
like, I mean, you know. You can make an argument that
smartphones are in some ways feral in our hands right there
everywhere, you see them all the time. I think the moment of
realizing that data was feral is more like the camels. It went on
for a long time and suddenly, we went, "Oh my god, why are all
these fences falling over and the water tanks don't stay
upright and why is this hill broken," and then you go, "Oh
god, we have 350,000 camels we didn't know about." >> Right. >>
And at one level, I would say if pushed and I can't believe I'm
going to say it on a camera, that Snowden was the moment of
discovering we had camels, was that, you know, there was
something about that moment in July of last year of realizing
that there was an inordinate amount of data and tracking that
was happening that hadn't necessarily been something
people understood, maybe they did vaguely in the back of their
heads, but I think it's a bit like the camel problem, right?
You wake up one morning and go, "Oh, eek," and then you have
this problem of how do you deal with 350,000 camels or the moral
equivalent, which in this case would be a rather great amount
of data. >> Yeah. Thank you. >> You're welcome. >> We should
open up certainly what time remains to questions from our
very patient and wonderful audience. I'm completely blinded
by the light. >> If there is an audience, it's almost news to
me. >> Does anyone have a question they would like to-- >>
I promise I am capable of a yes-no answer. [Inaudible] in
some circumstances. >> Oh yes, hi. ^M01:18:17 [ Inaudible
Remark ] ^M01:18:39 >> That is a good question. Is a smartphone a
robot? I have a colleague who says that any algorithm
surrounded by an object is a robot. I'm not convinced that
smartphones are there yet, but I think there are some things that
are experiments around the edges that start to suggest that it's
possible. They may not be able to move themselves, but at the
moment that Cortana, Google Plus, the Firefly SDK, Siri,
all of those are a range of personal assistant technologies
that sit on various phone platforms, right, that propose
to know about you, your preferences, your activities.
They are reading both the hard sensing data from your
technologies--physically where you are, a little bit about your
orientation, as in stationary or moving--and then also an ability
to read some of your social feeds, so your calendar, maybe
your email, maybe your preferences. When those things
start to combine and your phone could start to say to you things
like, you know, there's some stuff that's been done for
you, and some stuff we should talk about, and some stuff you
need to make some decisions about. That becomes more
interesting. I don't think it's a robot yet but the potential of
it being artificially intelligent, absolutely.
^M01:20:00 Now it's also the case that the robot that was
pictured here at the very end is a project we've been doing in
our labs. You can also follow that on Twitter. 21st century
robot. 21c robot. And it was a developmental platform. We were
actually really interested in how do you engage the next
generation of innovators with programming technology and
effectively at the moment, you can make Jimmy work by sticking
a cellphone in his head. So at that point, your cellphone would
be a robot, but it would be a stretch. >> It's a wonderful
question, thank you. Are there questions there? [Inaudible].
Yes. ^M01:20:43 [ Inaudible Remark ] ^M01:20:48 >> Can you
get a microphone please? >> I can repeat the question if you
like? >> It's coming. >> I'll just repeat it, [inaudible].
That's OK. >> Yes. Hello, so my question was do you think
someday people will have affection for robots, like I
have seen some robots that are pets, people use them as
pets, so do you think that's a possibility in the future? >> I
think it's already happening. I think it depends on how you
regard affection, right? If the statistics are to be believed,
50 percent of Americans who have smartphones sleep with them less
than four feet away from their bodies. I'm willing to bet there
aren't that many things that you sleep within four feet of,
and know where they are at all times, that you don't
have some kind of affection for. We have a mild anxiety when the
battery dies. When the lights blink, we want to respond. I
think as human beings, we have routinely had affection for
things that were not necessarily traditionally animated. I, you
know, can think of many examples, B.B. King loved all of
his guitars and called them all Louise. >> Lucille. >> Lucille,
thank you. It's like Louise, Lucille, all of them, every
single one. Most knights, you know, named their swords.
People had names for their horses, people had longstanding
relationships with their cars. I suspect there are some with
bicycles. My suspicion is we have had affection for many
things. What that will look like with the next generation of
technology things is interesting to contemplate, right?
Particularly if there is any capacity for those objects to
return our affection in some way. So you know, I could argue,
if pushed, but you know, my relationship with Amazon is
actually quite fun. When I turn up on Amazon, they go, "Genevieve,
you're here. Here's all these things we think you should like
and buy." Now the buy part isn't necessarily a good relationship,
but it's remarkably like the relationship one has with one's
children. Here's the stuff I want, now buy it for me. So at
some level it's like, yeah, kind of get that, I get that dynamic,
right? And Amazon knows it's me and has clearly thought about
me. Now, it's an algorithm, it's not Amazon, but I am perfectly
capable of anthropomorphizing that website and imagining I
have a relationship with it. I also suspect that as human
beings, we frequently imagine relationships with other people
that also aren't true, and we attribute a level of affection
and care and possibly, the reciprocation of those things
that aren't always true. So imagining that there will be an
emotional landscape in which objects also fit seems almost
inevitable, and particularly when you imagine what some of
those objects are, I certainly have colleagues who are deeply
and profoundly grateful for their robotic vacuum cleaners
because they vacuum things and they don't ask to be told what a
good job they did afterwards. They just go back into a corner
and plug themselves in, which is very different than managing
other people who might vacuum in your house. Notably your spouse.
So there it is: I know people who are very grateful for
their vacuum cleaners. I also know that as we move into a
world where certain kinds of robotic care objects may appear,
it's possible to imagine those will also be relationships of
affect. >> Right. So you were saying before about feral data
versus domesticated data? OK. You were talking about feral
data versus domesticated data, would you say that robots now
are domesticated and where would you see that they could become
feral, where they aren't already? >> Wow. Gosh, that's such a good
question and one I hadn't thought of. I love you, young
man, that is an excellent question. That is most
excellent-- >> A plus for the day. >> Yeah, well possibly, for
the week. I suspect part of why I think the SelfAwareROOMBA is
so funny is because it is the beginning of feral. I mean,
there was a moment last October when that vacuum cleaner decided
to celebrate Halloween by breaking out of the house and
running down the street. And there was something about the
notion of a runaway vacuum cleaner that was both delightful
but also possibly [inaudible] and I wondered what that would
be like. I think one of the fears that is always played on
here is that in fact, these objects will be feral, you know,
that they will run wild and because feral in some ways is
also just another code for [inaudible] and self-determining
and not bound by the rules on the fence, I think there is
something about that that is troubling, right? For my
colleagues who work building robots, one of the hardest
things they have to build for is for mixed environments, right?
So it's fine putting robots with other robots. When you put
robots with humans, it gets really complicated. So you
actually have to work out how to manage those interfaces and I
think in some ways in the short term, it's those interfaces that
become more interesting than what will happen with the robot
uprising. Because again, frankly, I'm increasingly
convinced the robot uprising might just be the vacuum
cleaners which would be excellent but it will happen
down here. ^M01:26:10 [ Laughter ] ^M01:26:11 [ Inaudible Remark
] ^M01:26:13 >> Exactly. >> Yeah, exactly. Well, there were a
couple of really interesting incidents that got made more of
in the press than I think was necessary about robotic vacuum
cleaners that, in the press's language, killed themselves, which
is interesting--that construction is already
interesting, right? What does it mean to have a vacuum cleaner
that decides it's so miserable? But in one case it appeared to
have climbed onto a stove and set fire to itself. I think that
was probably not true, but it was an interesting kind of moment.
And there are some lovely design experiments. I have a colleague
of mine in Europe who's just finished a project that was
a--you'd like it. It was a design intervention called
Addicted Toasters, and he imagined a world of the
internet of things that was toasters. So, you know, rather
than telephone poles, toasters, and in a world of the quantified
self. So, you know, when we can measure how far we walk, how
much we eat, you know, we're starting to metricize ourselves.
These toasters were quantifying themselves and so they knew how
much toast was put in them by comparison to all the other
toasters around them and so they became sad if they didn't get
enough toast. This is a--Let me be clear, this is a vision
video, not reality, your toasters are not sad yet. And
they would sit there, lonely, making the little buttons go up
and down, hoping that someone would put toast in them until
eventually they sent out a distress call and asked to be
removed from the house in which they found themselves, which is
very different than feral but I wonder if there isn't this space
that will mediate those two things, where it may not be so
clear--domestic and feral may not be your only options,
there may be a third ground. But I really do seriously have to go
think about that because I'm not sure. But--Yeah, I like
questions that I don't know the answer to so thank you. That was
lovely. >> You mentioned earlier the-- >> Where is that voice
coming from? >> Over here. >> Thank you. >> Yeah. The example
was brought up of robotic pets and you can think of, you know,
the affection that someone has for their pet. And in general,
pets have shorter lives than we do. How will the equation of
affection change in people's perceptions when we recognize
that the robot will outlive us? How will that affect the
affection relationship? >> Wow. I don't know about you but I
don't have any technology that proposes to outlive me. The
operating systems break, they get corrupted, the power charge
doesn't work anymore. I mean there are some interesting
technical problems you'd have to solve before one might
imagine a digital object would outlast you. That said, I think
there is some really interesting pieces about what does it mean
to start to imagine affection for things that don't necessarily
change? I mean, you know, phones are an interesting one, right? Your
phone gets old, you replace it with a new phone. Does your
affection last with the old object or the new object that
has all your stuff in it? So, robot dog dies, you get new
robot dog that has same personality as old robot dog, is
it the same dog or a different dog? It gets a hardware and
software upgrade, how do you feel about that? I happen to
load the push upgrade on my last laptop, it made me really cranky
and I wanted to go backwards. You know, what that will look
like with your pets would be interesting, right? So, one of
the things I think we need to remember is that whilst it is
terribly seductive to imagine a world in which these robots will
exist, we have to solve a series of deep technical problems. Some of
those deep technical problems are years in the resolution--
power, just for instance. When do you have to plug something
in? Where is the power coming from? Power, connectivity,
agency, viruses, hardware and software updates, all of that
stuff isn't going to be miraculously solved. It's years
in the making to get some of those things right. And in fact
I suspect one of the impediments to certain kinds of
technological progress is that there are some other really
prosaic things we need to solve that are kind of deeply
mainstream. ^M01:30:05 That said, I have just
remembered--your question makes me think of it. There is a
lovely feral robot project, [inaudible] somewhere in my
head, by a woman named Natalie Jeremijenko. Do you know
Natalie? She's amazing. Performance artist and
technologist out of Yale, I think at the moment, Natalie.
And she took a bunch of Aibo dogs, so the Sony Aibo dogs. She
hacked their hardware and made them run as a pack. >> Well. >>
And put sensors in their noses so that they would react to
certain kinds of biochemicals and then she set them loose in
Central Park. Natalie Jeremijenko. >> How do you spell
her last name? >> Oh god. >> J-E-M [inaudible]. >> Just look
up feral robot dog and if you don't get to Natalie, I would be
surprised. ^M01:30:52 [ Laughter ] ^M01:30:54 But it starts with
a J and it has a lot of Is and Ms and Ns in it, Jeremijenko. >>
Yes sir? A couple more questions, I think, to the back? >>
You talked about artificial intelligence and you said that
they can--at some point, they will be reproducing themselves.
Is that a term that you used? >> At some point, they will be-- >>
At one point, they'll be reproducing or like recreating
themselves. >> Reproducing? >> Yes. So, the question I have is
do you think that at one point robots would be working on their
own and how do we regulate that? >> That is the question. Well
that is certainly one of the questions. So, in the world of
programming, there is certainly a move to what would be thought
of as either autonomous or self-reproducing algorithms. So,
you program them once and they then go on and create the next
generation of things, right? Derivatives basically. That is
hard to do and hard to do successfully. Not out of the
question and certainly a lot of work is going on in that space.
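A toy sketch of what "program them once and they create the next generation" can mean, under the loosest possible reading: a parameterized rule that derives mutated copies of itself. Everything here (`make_rule`, `reproduce`, the mutation step) is a hypothetical illustration, not production machinery for derivatives or any real system.

```python
# Toy sketch of a 'self-reproducing' algorithm in the sense described
# above: a rule is written once and then generates derivative rules
# itself. All names and the mutation scheme are invented for illustration.
import random

def make_rule(threshold):
    """A trivial classifier rule parameterized by a single threshold."""
    def rule(x):
        return x > threshold
    rule.threshold = threshold  # keep the parameter inspectable
    return rule

def reproduce(rule, generations, rng):
    """Each generation derives a mutated copy of its parent's parameter."""
    lineage = [rule]
    for _ in range(generations):
        parent = lineage[-1]
        # The 'reproduction' step: a small random perturbation of the
        # parent's parameter yields the child rule.
        child = make_rule(parent.threshold + rng.uniform(-0.1, 0.1))
        lineage.append(child)
    return lineage

rng = random.Random(0)  # seeded so the run is repeatable
lineage = reproduce(make_rule(0.5), generations=3, rng=rng)
print(len(lineage))  # 4: the original rule plus three derivatives
```

The hard part the speaker alludes to is not the mechanics above but making the derived generations remain correct and useful without a human in the loop.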
In the robot space, one of the challenges for all sorts of
robots, right? So whether we're talking about humanoid physical
looking robots or things like self-driving cars, there are
some really interesting technical questions but there
are also philosophical and moral questions that might need to be
surfaced, right, that have to do with how do you program a
decision making tree into a technology object so that it
knows how to respond to certain kinds of situations. So,
Asimov, you know, the famous science fiction writer basically
says there are just three laws of robotics. They should do no
harm to people, they should do no harm to their maintenance
officer--I never remember them because I don't think they're
relevant, they're just not helpful. But there are three rules.
They're not big enough in scope to be useful. The challenge is
going to be much more about how do we think about what is the
nature of the human machine interface? What is the value of
human life? If you say a self-driving vehicle needs to
privilege life first, whose life is it privileging? The life of
the driver, the life of the passenger, the life of a person
outside the vehicle? How do you decide which life you might save
first because you probably can't save all of them? Where does
property sit inside all of that depending on whose self-driving
vehicle it might be? Is the vehicle unoccupied but there are
people outside of it? If there are people inside of it and you
have to make a split second decision, which many of us do
when driving, you know, are you saving yourself, your kids, your
parents, your wife, your husband, your partner? Are you
saving the pedestrians outside, are you worried about the
expensive property because the self-driving vehicle is actually
powered by an insurance agency? There's going to be a series of
questions there, right, that mean there are a series of
decisions that you will need to have thought through in advance
to hardwire into the decision-making properties. And those, I
would argue, aren't just technical questions, right?
>> OK.
>> Those are philosophical questions, those are moral
questions, those are legal questions and my suspicion is
they are going to manifest themselves differently in
different legislative frameworks, in different
cultural frameworks, in different regulatory frameworks.
And if history is any indicator, some of those questions may not
be raised in a timely enough manner and we will come back and
have to have them later. And we will ask ourselves why we didn't
think about it sooner which makes me sound a bit Cassandra.
I don't quite mean it that way. [ Inaudible Remark ]
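The point about thinking decisions through in advance can be made concrete with a deliberately crude sketch: a priority ordering hardwired into a controller. The ordering below is an arbitrary assumption for illustration (it is exactly the moral and legal choice the speaker says someone has to make in advance), not a recommendation or any vendor's actual policy.

```python
# Purely hypothetical sketch of why the 'whose life?' question must be
# decided before deployment: a priority ordering hardwired into a
# self-driving controller. The ordering is an assumption for
# illustration only.

PRIORITY = ["pedestrian", "passenger", "driver", "property"]

def choose_protected(parties):
    """Return the party the controller protects first, per the
    hardwired ordering. Choosing that ordering is a moral, legal,
    and cultural decision, not a technical one."""
    for category in PRIORITY:
        if category in parties:
            return category
    return None  # nothing at risk in this scenario

print(choose_protected(["property", "passenger"]))  # "passenger"
```

Different legislative and cultural frameworks would plausibly demand different `PRIORITY` lists, which is the speaker's point: the code is trivial, the ordering is not.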
>> I think we have time for probably one final
question. The questions are superb and I regret the length,
I guess, at which we're trying to answer them.
>> Lovely. Your questions are great.
She takes too long to answer. [ Inaudible Remark ]
>> I like that. That was good. Yes?
>> I have a deep interest in transportation and sort of, you
know, the things side of that. And I want to know how you view
the changeover from focusing on the data that comes from automobiles
to the people that are interacting in the space with
automobiles. How--where would you take that data and turn that
into a way to sort of create, I guess, a closed-loop
transportation system that sort of eliminates the inefficiencies
that, I guess, the human aspect of transportation today
now creates?
>> Wow, an interesting question. I think
one of the things about notions of making the transportation
system--so, efficiency of transportation systems,
technology therein. I think one of the things that has been
true for at least 60 years, maybe 80, is we have imagined
what it would be like to make a safer transportation system, and
it is a perennial kind of constant set of aspirations, I
guess, for want of a better word. There was a publication in
Time Magazine of advertising collateral from a California
lobby group in 1958 that imagined the world in which you
needed more electricity because clearly they were lobbying for
electricity, which is excellent. And they made some big
calls about the future, right? So this is 1958. In 1958, they say,
in the future--which they imagined to be 10 years later,
just over the time horizon--there will be a box outside your
door that heats your house in winter and cools it in summer.
Check, you know, they called it, you know, centrally--centrally,
yeah, good on them. You would have an oven in your kitchen
that would cook food in minutes rather than hours. Microwaves,
check. That your lights would come on automatically when it
got dark and your blinds would shade when the sun came in.
Yeah, home automation, not so much.
They imagined that television would stop
being a box that sat on the floor and become a panel
window on your wall, not bad. They also said there would be
self-driving cars because it would make us safer. You will
notice they were right on many of those things and
spectacularly unable to deliver on that particular piece. My
suspicion is because one of the things about cars is that it is
not as straightforward as saying they are objects that get us
from point A to point B. They are symbols, they are cultural
objects, they are about moving us from one social situation to
another social situation. They are also nested inside this
incredible complex of car manufacturers, regulatory
bodies, insurance companies, people who levy taxes on
roads.
>> Liabilities.
>> Liabilities, exactly. I mean this is an
incredible kind of ecosystem, and my sense is that
one of the challenges in that is that to move that whole
ecosystem forward is actually quite tricky. It took Ralph
Nader an extraordinarily long period of time in this country
to make seatbelt wearing a standard. And it's hard to
imagine that was an argument. It's like, put on the seatbelt
and not die. No, we think that's a bad idea. Really? And there
are a whole series of things that happened in this country
around the deployment of airbags and all kinds of other auxiliary
safety features that were built into American cars because getting
people to mandate seatbelt laws was almost impossible and yet
that seemed to be a very good idea, and the data was firmly on
its side. So I think part of the challenge here is one that says
it's not simply a matter of everyone agreeing that human
beings are dangerous when they are behind the wheel. That data
has been true since cars were invented. And there are
particular categories of human beings who are more dangerous,
mostly young people between the ages of 17 and 25 and within
that mostly young men, sorry. You know, and if we just ban all
of you from the road, we'd be better off but that's not going
to happen either. So there's a piece that says we don't
disagree about the data. We don't necessarily disagree about
what the pathways are to resolve it. But the
number of stakeholders who have to be brought along is actually
quite complicated. So, much like I said about the robots, every
time we propose to make a technology intervention, we're
also making a cultural one. One of the challenges here is
precisely that, which suggests to me you may find the adoption of
self-driving cars or semi-autonomous cars--because that
seems to be where this is heading--you may find that happens sooner
somewhere other than here, where either there are fewer moving
parts or an easier capacity to corral them all into some kind of
common thing. But I will say the thing that's
fascinating about it is since the first moment cars appeared,
the discourse about their lack of safety and humans' craziness
went with it. They've always been twins, right?
They're inseparable at some level. It is a machine that
always comes with an anxiety about its cause and consequence.
>> Absolutely. Thank you so very much.
>> My pleasure.
[ Applause ]