HAYES RAFFLE: Hello. Hi, I'm Hayes Raffle.
I'm here today with three colleagues
I've been working with a long time, Bob Ryskamp, Emmet
Connolly, and Alex Faaborg.
Actually, the four of us go back a long time in different ways.
Alex and I were at the MIT Media Lab
together about 10 years ago.
And Bob and Emmet were on the watch design team,
I think, about three years ago, doing the very early design
iterations of Android Wear.
And now Alex and Emmet work together on the Wear team,
and Bob and I have been working on Glass for about three years.
And you guys are sort of getting four for the price of one.
We're here to share with you some of the things
that we've learned along the way because over the last three
or four years, we've been finding
our way through this new space of wearables
and trying to understand what it means, what works,
and what direction it makes sense to go in this new space.
And so I wanted to start with a little bit of philosophy, some
of the things that we've found useful
as we've worked through this space.
And a couple of things that have been most helpful for me
are some of the things that Sergey and Larry have
talked about over the years.
Larry talks about how technology should do the hard work,
and you should have a chance to live, have a good life,
and get on with it.
And that's great.
And you see that in products like Google Search
that try and get you what you need and get out of the way
as fast as possible.
And Sergey talks about how computing
needs to be more comfortable.
And on their surface, these might actually
seem like different statements.
One is about form.
The other one's about function.
But they're really the same idea.
And the idea is that computing should start to disappear.
It should fade into the background of our lives.
It should be ephemeral, not the foreground
of our attention all the time because life is beautiful.
It's full of beautiful people and puppies
and sunsets and things that are just really lovely.
And computing shouldn't be taking us away from that.
It should be helping to bring us closer to it, if anything.
And so one of the phrases that we
use a lot when we're doing design is we
talk about the world being the experience.
The user experience, the experience with the product,
could never possibly compete with the beautiful things in the world.
At best, it can provide timely information and help and things
that can help you be connected to others
and be connected to the moment that you're in.
And you see that in lots of aspects
of the design of the products that we're building.
You see it in the placement of the Glass display.
It's not in front of you.
It's off on the periphery.
You see that even in the font weights,
very thin, light strokes with a black background
which shows up as transparent on the Glass display.
You see that in some of the use cases, some of my favorites,
this one of Sebastian Thrun swinging his son around
in a circle right after we got the camera
working on Glass a couple years ago
and showing how technology could really help transport us
into that moment of joy that he was
having with his three-year-old.
This is what it means when we say,
"The world is the experience."
And now my background actually is in fine art.
And the thing that I care most about when
I do my work is creating a sense of empathy between people,
getting people closer together, because while we talk
so much at Google about the user experience
and designing for the user, none of us live in isolation.
We live in a world surrounded by people that we care about.
And how can technology help to bring us closer to others?
And I want to show you a couple examples
that I think are starting to get in that direction.
This is a picture of a journalist named Tim Pool.
Tim works for "Vice Magazine."
And about a year ago, he was in the streets of Istanbul
documenting the riots that were happening there
and broadcasting live from Glass to a huge viewer base
that he has.
And Tim's been doing this for a long time
with cameras on his shoulder, with his cell phone,
now with Glass.
And I think for me the transformative thing that's
happening is that Glass is allowing him
to interview people in a more intimate way.
When he talks to people in the street with Glass,
they're not talking to his camera.
They're talking to him.
And what that means is that when you're
watching that broadcast from far away
and trying to understand what's happening in Istanbul,
you're that much closer to the action.
You're that much closer to the way
that people are feeling in the streets there.
There's another thing that we think a lot
about with wearables and this idea of being connected.
Wearables themselves are very, very intimate.
You're wearing them on your body.
In fact, these glasses are so specially designed for me,
if you put them on, you probably can't see through them
because I have my prescription on them.
They're very hard to share.
And by the same token, the way that we design the experience
for wearables needs to be very personal.
It needs to reflect the things about me as the wearer
that I find important.
It needs to be about the closest people, the people
that I care about who are close both in time and in space,
whether it's my family or you who are in this room with me.
It's about surfacing the information from the people
you care about in a way that feels personal to you.
It's about being able to bring your niece to her grandmother's
birthday party to see 100 candles get blown out,
even if she couldn't be there in person.
Again, this is what we mean by, "The world is the experience."
Now one of the tactics that we talk about, how do you do this?
And we're going to talk about some different ways today.
But the first one that I want to talk about
is called micro interactions.
And the idea with micro interactions
is that wearables are on the periphery.
You're not designing the windshield.
You're designing what goes in the rear view mirror
because whatever is happening in front of the user in their life
is going to be demanding.
It's going to be happening fast.
And it's going to need most of their attention.
And whatever you're giving them needs to be very glanceable.
And this idea of glanceability is really
about getting to the essence of the information
that that person needs in the time and the place
that they are.
And so what does that mean in how you design software?
Well, if you look at driving directions on Glass, when
you ask for directions, for the most part,
the screen is turned off.
But before you have to turn, it'll
turn on and tell you, turn right on Greenwich Avenue
in 100 feet, show you a map of where you need to go,
and then after you've completed the turn, disappear again.
This is glanceable.
This is what it means to be the rear view mirror, not the windshield.
It means when you're designing a messaging app for the watch,
how minimal can you make it?
What is the simplest amount of information
that the user needs to get the task completed?
Here's two designs.
And the big difference between them is that one of them
has twice as much information as the other one.
On the left, six pieces of information for a user
to look at, on the right only three.
That difference is about 900 milliseconds
of attention for someone who's focused.
Now 900 milliseconds, why would you care about that?
Well, you have to remember this person's wearing it
on their wrist.
They might be running to the next meeting.
Which one of these would you rather
have if you're running to the next session in room seven?
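That six-versus-three comparison can be sketched as a back-of-the-envelope glance-cost model. The 300 ms-per-item figure is simply the talk's 900 ms difference spread across the three extra items, and the card contents are invented for illustration:

```python
# Rough glance-cost model for a wearable card. The per-item figure is a
# loose hypothesis derived from the talk (900 ms / 3 extra items), not a
# measured constant; card field names are made up.

MS_PER_ITEM = 300  # hypothetical attention cost per piece of information

def glance_cost_ms(card_items):
    """Estimate how long a focused user needs to read a card."""
    return len(card_items) * MS_PER_ITEM

# The two message-card designs from the talk:
busy_card = ["sender name", "avatar", "timestamp",
             "subject line", "message preview", "unread count"]  # six items
minimal_card = ["sender name", "message preview", "reply action"]  # three items

# Six items versus three: about 900 ms more attention for someone focused.
assert glance_cost_ms(busy_card) - glance_cost_ms(minimal_card) == 900
```

The point of a model this crude is only to make trade-offs visible while sketching: every field you add to a card has a cost you can count.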
We've even played around with the idea: could we
put an emulator like this into the developer tools
so that you could play with this?
But I think the real idea, and Timothy mentioned earlier,
is that none of this stuff is particularly intuitive.
It's taken us a long time to get to where we're at today.
And there's certainly a long way to go still.
And the way that we make progress
is by continually testing on the device in the context
where we mean to use these things.
This is sort of a paradigm shift, I think,
because if you look at the way that people use their phones
today, people get out their phone,
and then they do their task.
And then they get distracted by a lot of other things
that are there.
They get lost in their phone, and they
are taken out of the world.
And what we're trying to do with wearables is actually
get to a place where people have the same utility
and benefit from computing, but there's actually
less computing in their life because we've
designed experiences for them that
are much more compact so they can get in and out,
get their goal completed.
Put another way, I think our job now
as designers and developers is to create experiences that
are as short as possible, as fast as possible,
to complete the task that the user needs to do.
So we're going to talk more about different strategies
that we've developed and ideas we have to do that.
And I want to turn over the stage now
to Bob, who's going to talk about some of the things
he's learned about voice.
Bob's been wearing computers since they weren't quite so
wearable, and so he has a lot of perspective on it.
BOB RYSKAMP: Thanks, Hayes.
So ever since the bike helmet days,
we've been designing our wearable products
to help you be more engaged with the real world.
And one way that we've done that is through creating
natural language voice interfaces for our products.
We've tried a lot of different interaction techniques
for Wear and for Glass.
And one thing we found is that when you can speak naturally
to a device, just like you and I could talk,
it makes your interactions much faster.
It makes it much easier to stay connected
to the people you're with and the places you're in.
And for wearables, this is even more important
than for some other devices.
For instance, Hayes and I both love to go cycling.
And this is a fantastic place in the world to go cycling.
It's a sport that's all about the outdoors,
but it's also got a bunch of just amazing technology
that you can really get into.
And it's an example of an activity where,
like Hayes said, the world is the experience.
And it's very, very important to pay attention to it.
And I also love my phone.
But that same gorgeous phone interface
that works so well when I'm sitting and standing
doesn't work as well when I'm active.
Maybe my hands are sweaty.
I'm trying to ride at the same time,
trying to hit those small icons on a screen.
Maybe the sun is causing some glare.
So we felt that Glass and wearables
could be great to use while cycling.
You can keep your hands on the handlebars.
You can keep your eyes on the road.
It works with your voice.
But when we first started designing the interfaces
for Glass, we tried to use a lot of the same interaction
and visual design patterns that we
knew from designing for phones and desktops.
We had a screen.
We put it roughly in the middle of your perspective.
You'd first try to choose an app to run,
and then you'd select a few features.
You'd choose an option.
Then maybe finally you could view individual items.
You could input some information.
And with all of those steps, we found
that we weren't really getting a different experience
from the phone.
You still have to think about, what's
the structure of my operating system?
And what features does each app have?
And where do I input that information into those fields?
Now this is a very powerful system,
which is why we use it on desktops and phones.
But it didn't feel appropriate for a wearable device.
And when you step back and think about it,
when you're engaged with the real world,
your interactions aren't like that.
If you're cycling with a group of friends,
you're not opening menu options and clicking buttons.
You're looking at people, and you're talking to them.
You're seeing things, you point at them.
You reach for things and hold them.
These are all very natural interactions.
You don't need a manual to tell you how to do them.
And so we wanted as much as possible to make
our experiences on wearables more like that.
For instance, we designed the messaging experience on Glass
to be as close as possible to the way you'd
talk to a close friend.
OK, Glass, send a message to Jane Williams.
It was great to see you today.
We even show a photo of the person in the background
while you're talking.
Overall, we wanted to make it feel
like that face to face conversation,
except now you could do it across time and space.
Now this cuts out a whole bunch of the decisions and steps
required to do that same action on a phone or a desktop.
And it turns out to be tremendously faster
than pulling your phone out and manipulating all those options.
That's because when we designed the messaging Glassware,
we chose just a single experience,
sending one message to a single contact.
And we designed a unique voice command and a unique flow
for just that experience.
Now if you want to look at an older message
or you want to edit your contacts,
you want to do any of the other things you think
of belong a messaging application,
you can do those in other ways in other places.
But this experience is very singular and very focused.
You do just one thing at a time, and you see just one thing
at a time.
And with Android Wear, we wanted to bring that same kind
of experience to a lot more devices.
As you saw in today's keynote, if you
have a car service or a ride sharing application installed,
you can simply speak a command.
OK, Google, call me a car.
It's a simple natural language command.
It instantly gets a car headed your way.
Again, it's almost like you're able to talk across
the city directly to the driver.
So one thing you can do to make your wearable interface more
natural is to carefully design that voice experience.
Don't just port over the structure of your app
from your mobile device or your desktop.
Think very carefully about, what's
that individual short experience that people want to have?
And break up that big app into individual flows.
And then you can design and craft one single voice action
just for that flow and just get that person
that perfect experience and make that as much like normal
speech as possible.
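The "one voice action per flow" idea can be sketched as a tiny dispatcher: one utterance pattern is registered for the single experience of sending one message to one contact. The pattern syntax, function names, and responses here are all hypothetical, not a real Glass or Wear API:

```python
# Sketch of "one voice action per focused flow". Instead of porting a whole
# messaging app, we register exactly one natural-language command for the
# single experience of sending one message to one contact.
import re

FLOWS = {}

def voice_action(pattern):
    """Register a handler for one natural-language command (hypothetical)."""
    def register(handler):
        FLOWS[pattern] = (re.compile(pattern, re.IGNORECASE), handler)
        return handler
    return register

@voice_action(r"send a message to (?P<contact>[\w ]+)")
def send_message(contact):
    # The whole flow does just this one thing: compose to one contact.
    return f"composing message to {contact}"

def handle_utterance(utterance):
    """Route a spoken command to the one flow that matches it."""
    for regex, handler in FLOWS.values():
        match = regex.search(utterance)
        if match:
            return handler(**match.groupdict())
    return None  # everything else lives in other ways, in other places

assert handle_utterance("OK Glass, send a message to Jane Williams") == \
    "composing message to Jane Williams"
```

Editing old messages or managing contacts deliberately has no voice pattern here; keeping the flow singular is what makes it faster than the phone.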
We believe that natural language speech, when
it's connected to all the amazing services
that all of you are building, is going
to make interacting with wearables even easier
and faster than using phones and desktops.
But this example of calling for a car
also does something else to improve the experience.
And that's use knowledge about your context and where you are.
So I'll next hand over to Emmet, who
is one of the founding designers of the Wear project.
And he'll walk you through how we've
been designing using context.
Emmet and I worked in Zurich together
while he was working on some of these early ideas,
so I got to see all of his crazy embarrassing early prototypes.
EMMET CONNOLLY: Hi, everyone.
So Bob talked about how speaking a command
can be one of the fastest and easiest ways of actually
performing an action.
And I'm going to talk about one way that's
potentially even faster than that.
And that's to not even speak a command at all,
to just have the right information appear
automatically based solely on the context
that the user is in.
So let's rewind for a minute.
This is a wooden prototype of the Palm Pilot
that Jeff Hawkins made when they were first designing
and developing it, one of the first portable computers.
It's a little stylus, a chopstick stylus, there.
I love that.
So before they ever started building anything,
Jeff used to carry this around with him every day.
And if someone said to him, hey, are
you free at 3 o'clock today, or whatever,
he would pull out his little wooden computer
and tap away on it with his chopstick
and pretend to actually check if he was free.
He would do this every day for months.
And what he was actually doing was
trying to figure out what it was like to use a device like this,
a new type of device at the time, on a day to day basis
and in a regular day to day context.
He was trying to figure out what kind of UI
might feel right for this new form factor.
And in retrospect, he was trying to avoid a common mistake.
It seems like that very often when these new devices come
along, the general reaction seems
to be to take the dominant paradigm, UI paradigm,
of the day and just slap it on these new devices.
And the truth is, that never really works out all that well.
You have these new types of devices,
and they're often really begging for some new interface ideas.
And to a certain extent, we've seen this play out
with the early wearables market as well.
A lot of these devices are really
just taking the grid of apps and putting it on this tiny screen.
And again, we see that this doesn't often really work out all that well.
To start with, these are really tiny tap targets.
And so especially on a moving target, it's hard to hit.
You can't see very many of these icons at once,
so it's hard to build up a spatial memory of even
where everything is located.
And it can just take a lot of swiping
to go through all these screens and actually access
what it is that you're trying to access
to even start with your action.
So we took a step back from this.
And we tried to think about, what
if we didn't require any input at all, and instead
just had the right information show up at the right time.
And this is what we came up with.
So some of you might notice the subtle detail in this photograph.
Yes, it's just a phone strapped to a wrist.
But there is something interesting happening here.
This is an actual prototype that we built.
And there's no grid of apps here.
There is just one simple clear piece
of information showing up at a time.
And if another piece of information comes along,
and that's more important for the user to know about,
then we'll show that instead.
So we kept thinking about this.
We thought, what if there's more than one piece of information
that's useful to know about?
Maybe we could arrange these simple screens as a row
or as a group of cards, and you could just
rank them and have the most important stuff
appear at the top.
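That ranked row of cards can be sketched in a few lines: each card carries an importance score, and the stream simply sorts so the most important card surfaces first. The scores and card titles below are invented for illustration:

```python
# Minimal sketch of the ranked card stream described in the talk: simple,
# single-purpose cards, ordered so the most important one appears at the top.
from dataclasses import dataclass, field

@dataclass(order=True)
class Card:
    importance: float                    # ranking key (hypothetical scale 0..1)
    title: str = field(compare=False)    # what the single screen shows

def build_stream(cards):
    """Return cards sorted with the most important first."""
    return sorted(cards, reverse=True)

stream = build_stream([
    Card(0.4, "Weather this afternoon"),
    Card(0.9, "Leave now for your 3pm meeting"),
    Card(0.6, "Package out for delivery"),
])

# The user just swipes through this clear, ordered stream.
assert stream[0].title == "Leave now for your 3pm meeting"
```

Re-ranking as new information arrives is what lets a more important card replace the current one, as in the phone-on-a-wrist prototype.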
And you see this thinking today reflected
in the philosophy of the Android Wear UI
and also of the Glass UI.
In both devices, there's just this targeted, relevant piece
of information shown at a time.
And in both devices, the way that you interact with them
is the same.
You're just swiping through this really clear stream of cards.
And it feels like roughly the right level
of interaction for these devices in terms of the ergonomics,
visually how they appear, and just the overall interaction.
So it's kind of a nice UI model for wearables.
But we still have this problem of,
how do we know when to show the right piece of information?
When do we put this information in front of users?
Well, there's something special about these devices.
They're packed with these sensors.
They're aware of their situation and state.
They're aware of the context that they're being used in.
For example, your application can
know where the user is, potentially
where they're headed.
And then you can ask yourself, what's nearby?
What might be useful for the user to know about?
Your app obviously knows what time of day it is
and what date it is.
And the user may have granted permission
to access their events and what's
important to them coming up.
There is of course the identity of the user, their patterns,
their preferences, their habits.
And then these devices also have motion sensors.
And because they're worn on the body,
we can translate this raw motion information into activities.
And we actually have APIs that are available to you
as developers that can sense these simple activities
that you can pattern match against, things like walking,
cycling, driving, and so on.
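The idea of pattern-matching raw motion into named activities can be illustrated with a toy classifier. The feature and thresholds below are invented; the real activity recognition APIs report activities like walking or cycling directly, so you would subscribe to those rather than write this yourself:

```python
# Toy illustration of turning raw motion data into a named activity,
# in the spirit of the activity recognition APIs the talk mentions.
# The single feature (spread of accelerometer magnitude) and the
# thresholds are hypothetical, chosen only to make the idea concrete.
import statistics

def classify_activity(accel_magnitudes):
    """Guess an activity from accelerometer magnitude samples (m/s^2)."""
    spread = statistics.pstdev(accel_magnitudes)
    if spread < 0.5:
        return "still"      # barely any variation: device at rest
    if spread < 3.0:
        return "walking"    # moderate, rhythmic variation
    return "running"        # large swings in acceleration

assert classify_activity([9.8, 9.81, 9.79, 9.8]) == "still"
assert classify_activity([8.0, 11.5, 9.0, 12.0]) == "walking"
```

An app never needs this level of detail: it just pattern-matches against the activity labels the platform provides.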
These devices are connected, of course, and often
to other nearby devices.
So you can start to ask questions
like, what is the phone doing right now?
Is the TV being used?
Maybe there's music streaming to the speaker.
Maybe the thermostat knows something.
And these are all signals that add into this context.
And finally there are additional sensors,
some of them built into the Wear device itself.
And there are also nearby devices
that can provide other information, perhaps
like a precise location from Bluetooth beacons.
like a precise location from Bluetooth beacons.
So the real interesting thing happens
when we take the combined total of all of this sensor data
and put it together into one single rich picture
of the user's situation, the scenario that they're
in right now.
So as developers, we can look at this situation
and we can ask ourselves the question,
how can we present the user with useful information
that will help them?
And for you guys as developers, what you can do
is define detailed contextual trigger conditions
and have your app show up at precisely the right time
based on those trigger conditions.
So let's look at a practical example now.
So first we'll look at a typical interaction as it exists today.
Let's say you're going for a run.
You probably pull out your phone,
then you launch a running app, maybe tap in some goals.
You might decide to switch over to a music app,
queue up that album that you had been listening to earlier,
switch back to the running app, tap Start
so that you can get going, strap the phone onto your arm.
And off you go.
Fairly typical interaction that we're probably used to.
Now we'll try and redesign this experience for wearables
using context to drive the interaction.
So in this case, assume you're using an Android Wear device.
It's a pleasant Sunday afternoon.
You're at the head of your running trail
that maybe you run at most Sundays.
You probably just stretch out, plug in your headphones,
and start running.
So based just on the simple inputs from our sensors,
the things that we can detect, things
like the time, the location, your habits, physical movement,
even what the headphone jack is doing,
it really looks like this person is going for a run right now.
So why shouldn't we do the obvious thing
and present them with helpful information?
In this case, it would be something like start tracking
their run and perhaps also offer to pick up playing that album.
So the user didn't have to do very much at all here.
They just acted as they normally would.
And the technology does the right thing automatically.
It was like Hayes said.
The world is the experience, and the technology just
adapts to what the user is doing.
So again, rather than asking the user
to manually tell the device what they want to do and then
have to manage state on an ongoing basis,
on wearables we're going to do something much simpler.
We're going to do all the heavy lifting for them based
on context and present them with just the right information
at the right time.
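The running example boils down to fusing a few independent signals into one trigger condition that surfaces a card. The signal names and the card text below are hypothetical, a sketch of the reasoning rather than any real Wear API:

```python
# Sketch of the Sunday-run example: combine simple sensor signals into one
# contextual trigger. Signal names are invented for illustration.

def looks_like_a_run(context):
    """All signals together paint one picture: this person is going running."""
    return (
        context["at_known_trailhead"]          # location + the user's habits
        and context["headphones_plugged_in"]   # headphone jack state
        and context["recent_activity"] == "walking"  # motion sensors
    )

def run_card(context):
    """Surface the helpful card only when the trigger condition holds."""
    if looks_like_a_run(context):
        return ["Start tracking your run?", "Pick up that album?"]
    return []

sunday_afternoon = {
    "at_known_trailhead": True,
    "headphones_plugged_in": True,
    "recent_activity": "walking",
}

assert run_card(sunday_afternoon) == ["Start tracking your run?",
                                      "Pick up that album?"]
```

No single signal is conclusive on its own; it's the combined picture that justifies doing the obvious thing automatically.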
So that was a simple example.
Next up, Alex is going to talk you through some more examples
and also introduce you to some design tools that
will help you apply this thinking
to your own applications.
ALEX FAABORG: All right.
So that was a lot of information.
And this is a new form factor, so that
means thinking about building entirely new types of software.
But it also means a lot of opportunities
to build breakthrough applications
because this is a new form factor
and we're just getting started.
So first let's summarize.
Hayes talked about how the world is the experience, sunsets
and puppies and these things that we
care about more than having to interact with devices.
And how do we achieve the world being the experience?
Well, we achieve it by having the user be in the world more.
And we can do that through micro interactions
so that you're more present and engaged in the real world.
But you're also more connected virtually
because you have more check-ins.
It's just your engagement with the technology is shorter.
How do we make those engagements shorter?
How do we achieve micro interactions?
Well, the two core components, voice, as Bob talked about,
and context, as Emmet talked about.
So now you're thinking, OK.
That sounds cool, but where do I actually start?
I want to build an application.
We've got this OS that's built around voice and context,
but how do I translate my app?
How do I build an entirely new app for this platform?
And of course it's good to start sketching, but at this stage,
you're just looking at a blank piece of paper,
and you're kind of lost.
So I want you to consider two thought experiments
to kind of ground your thinking in how
to approach wearable applications.
So the first one's about voice.
Now imagine that you are your app.
And the only way that the user can communicate with you
is through voice.
So as the app, you're sitting in a room.
It's a very nice white room.
And the only thing in the room is a pedestal,
and there's this red telephone on it.
And when the user needs something,
that telephone's going to ring.
Then you're going to answer it, and it's the user.
And they're going to say exactly what they need from the app.
So what's the first thing that the user
says when you pick up that phone?
What's the range of calls that you're
expecting to get throughout the day?
How does the user phrase the request?
And the great thing about building
on top of Google's voice recognition system
is that we're building out all of the capabilities.
We're actually handling all of the transcription
and natural language processing and grammars.
And all you have to do is just subscribe
to a particular intent.
But then the question is, which intents do you want?
So as we're building up the system,
we really definitely want to hear from you
about the applications you're trying to build
and which voice intents you're interested in so we can start
working on those and getting those into the system
so that your apps can subscribe to them.
So you can go to this form, just fill it out and tell us
what you're interested in the voice recognition
system being capable of.
And we'll show some examples of what it can currently do.
So the second thought experiment is about context.
And as Emmet said, even faster than voice
is the application being able to anticipate
the information that you need.
So for here, I want you to think about this moment
where a surgeon reaches out their hand
and immediately, without ever having to look away,
the tool that they need is placed in that hand.
And what's interesting about context
is that contextual cards aren't meant to be surprises.
Users are going to reach out their hand for your app
at various times.
They're going to adapt to the system
as much as the system's adapting to them.
And they're expecting the app to be there.
So imagine that you're with the user
and you have the app ready.
And you're ready to give the user the app at any moment.
When do you expect the user to reach their hand out?
What's going on in that situation?
What's the environment?
And then as you think about that,
you can think about how you can build the contextual trigger
conditions so that the card is there
on their device at just the right moment.
So we're almost ready to start sketching our app,
but we still have this blank piece of paper.
And one thing that's really useful for sketching
applications is, of course, stencils.
This is a stencil that was made for Android phone applications.
And it's really great.
You have all the patterns, and you can quickly
sketch out all of your screens.
So then the question is, well, what
does a stencil look like for wearable applications?
So we started playing around with that.
And we haven't actually built it,
but here's a picture of what we think it would look like.
So what's sort of interesting is of course voice, right?
We're just hoping to have something with speech bubbles,
where you can draw speech bubbles, sort of sketch what
you expect the user to say.
And then next we have context, all the contextual trigger
conditions that Emmet was talking about.
And only then do we move on to then sketching
the actual UI on the watch, the card that appears
or the screen that's a result of the voice action.
So let's look at some examples of voice.
The talk's been pretty high level,
but I want to run through some very specific examples,
a few apps that are actually already available.
So imagine you're going furniture shopping,
and you see a new couch that you're
interested in that you want to remember.
You can just say, OK, Google, take a note.
And apps can subscribe to the take a note intent.
So in this case, Evernote is the user's favorite note
application, but it could be any number of note applications
that the user likes to use.
Say you're going for the run, back to Emmet's example.
And as you're running, you're curious
what your heart rate is.
So you can just say, OK, Google, what's my heart rate?
And if the device has the sensor,
that'll be available just with a quick voice command.
And this is another intent that we currently support.
Then at the end of the run, if you want to stop recording,
you can just say, OK, Google, stop running.
And apps can subscribe to that intent.
So we have all of these various natural language processing
grammars built up for all the different ways
that users can say these things.
But from the app side, all they have to think about
is they're subscribing to stop a run or take a note
or what the basic intents are.
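The split described here, where the platform owns the many phrasings and the app subscribes to one canonical intent, can be sketched like this. The intent names and phrase lists are illustrative, not the real Wear intent identifiers:

```python
# Sketch of the intent layer: the platform maintains the grammars (all the
# ways users phrase a request) and apps subscribe to one canonical intent.
# Intent names and phrases here are hypothetical.

PHRASES = {
    "TAKE_NOTE": ["take a note", "note to self"],
    "STOP_RUN": ["stop running", "stop my run", "end my run"],
}

SUBSCRIBERS = {}

def subscribe(intent_name, handler):
    """An app registers for a canonical intent, not for raw speech."""
    SUBSCRIBERS[intent_name] = handler

def dispatch(utterance):
    """Platform side: match speech against the grammars, fire the intent."""
    text = utterance.lower()
    for intent_name, phrases in PHRASES.items():
        if any(p in text for p in phrases):
            handler = SUBSCRIBERS.get(intent_name)
            return handler(text) if handler else None
    return None

subscribe("STOP_RUN", lambda text: "run saved")
assert dispatch("OK Google, stop running") == "run saved"
```

The app's side of this is deliberately tiny: it never sees the transcription or the grammar, only the intent it asked for.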
Let's look at some examples for context.
There's a few really good ones on Glass
that are already shipping.
This is LynxFit.
Developers are actually in the room.
They did an awesome job.
[? Huzzah, Bob ?] might say.
It's so good.
So how this works is it's going to use motion sensors
to actually watch you do the workout.
And it guides you by speaking to you
and showing you quick little video clips.
And it really works like a personal trainer would
in real life, in that it's giving you instructions
and it's actually observing your motion.
And this is great.
I mean, it doesn't get more contextual than this.
It's like actually recording each motion.
Another example of context is Field Trip,
where this shows you information about your surroundings.
In this case, the user's interested in history,
so they're seeing some historical information.
This one's pretty crazy.
It uses a special basketball that
has sensors that can sense your shot style.
And it gives you feedback on the shot on Glass.
This is also kind of a good example of the world being
the experience because, really, the experience
here is you're shooting a basketball.
But this is just giving you some additional data
to help you have a better shot.
Similar example with golf.
This is Swingbyte, which connects
to a sensor on your club.
And this logs all sorts of really useful data
as you're playing.
In some ways, it's kind of like a caddy,
but it's even more accurate than a caddy
because it has really detailed information.
So let's look at some examples on Wear.
So imagine your friend has a Pinterest board of the best
gummy candy in North America.
And of course, you're going to subscribe to that.
So you subscribed to it a long time ago.
You've since forgotten.
But as you're traveling and you're walking around,
Pinterest fires, and it says, hey,
one of the pins that you're interested in
is actually walking distance.
You can go check it out because it knew that that was something
that you were interested in.
And this is currently available.
Another great application from Trulia, this one's really cool.
So as you're going to open houses, when you're
near the property it's going to show information
about the property.
It gives you actions like you can call the agent.
You can quickly favorite the property.
And again, this is very much the world is the experience.
It's kind of like you're right clicking on the world.
It's like the world has a contextual menu where
you can just say, like, favorite this property, which
I think this one's really cool.
So let's look at a few more hypothetical examples.
So imagine you're at home.
Your device detects that you're at home, gives you
controls for your thermostat.
Imagine you're going skiing and the conditions are pretty icy.
But since you're inside the grounds of the ski resort,
your device could just tell you which
lifts are running, which trails have been groomed,
all the contextual information that you want right then.
Imagine you're at the airport.
And as you're scrolling through your cards,
you see that the airline that you're on
is providing information on how many miles you've acquired
and how you're doing.
You're staying at a hotel, and the hotel chain
recognizes that you're on one of their properties.
They can give you quick access to actions
like requesting a late check-out.
Imagine you're at a conference.
A social network could tell you, hey, here's
some friends that you have that you haven't actually
seen in a long time.
But they're also at the conference,
if you guys want to meet up.
If you're at a restaurant, a nutritional application
could detect which restaurant chain you're in,
quickly look up nutritional information for that chain
and provide suggestions of the healthiest items on the menu.
Say you're getting the oil changed in your car.
You could have an assistant application
that recognizes that and just offers
to set a reminder for another six months.
You're at the zoo, and you have a watch that automatically
knows when the penguins are going to be fed.
Imagine if you're able to ask real-time questions of other people there.
And people using the service could
choose if they wanted to respond.
And these wouldn't be interruptive.
But if you saw one come by as you were using the device
and you felt like responding, you could help them out.
So questions that you could otherwise never search
for, you could use, like saying, are there
any picnic tables free, and getting an answer to that.
Imagine you're using a car sharing service.
And as you approach the vehicle, you
get a quick action to unlock the car.
So what's sort of interesting about all those examples
is that the UI wasn't actually that complicated.
The UI's usually just a card or a button.
And when you look at the stencil,
there's not much of the stencil devoted to the UI.
This isn't about sketching a variety of different UI layouts.
What the stencil is about is the user and their world.
It's about what the user's going to say.
It's about what the contextual trigger conditions are.
Even with our mock-ups, we focused more
on sort of the background of the scene
than when the device came up.
So what's really important when thinking
about these wearable applications
is thinking about the world and what the user needs.
And we ran through a bunch of them,
but this brings us back to our overall notion of the world
being the experience.
It's really being the thing that drives the use
cases for these wearable applications.
And the world's a big place.
So there's really a tremendous amount
of opportunity for really interesting applications built
in this space by designers and developers.
Google's crafting the infrastructure with APIs
for voice and context that you guys can build on.
And this is in the same spirit as our initial work on Google
Now, but we're opening up the platform
to the entire ecosystem for contextual cards and voice actions.
And really, together we'll be able to build things
that are really far beyond anything
that we could build on our own.
So a quick announcement, which you
may have heard in the last session.
If you're watching this on video,
check out Timothy's session, Wearable Computing at Google.
Wear notifications are going to start
appearing on Glass in the next few months, which
will make life much easier for developers because they can
develop for both devices simultaneously.
And this will get you access to pages, stacks,
voice replies, and actions.
And with that, we'd love to take your questions.
BOB RYSKAMP: We've got microphones in the center
if you want to step up to those.
Give people a second.
AUDIENCE: I have a question about the Wear.
And [? blocking ?] out the SDK doesn't
allow you to modify the notifications
that it can post from your phone APK.
Is there a reason why the design has been done in such a way
that the developer doesn't fully control
the layout of the notification?
You have to actually build an APK for the Wear
in order to design a layout that uses the full screen.
You're sort of bound in that tiny square
of the notification.
What was the reasoning behind that?
ALEX FAABORG: So the nice thing about developers
using templates is then as new form factors come out,
those notifications can be automatically adapted
to the new form factor.
Even for Wear, we have square and circular devices, right?
So that's pretty useful.
So it provides less work for the developer
when you're using one of those templates
because you just know you're sort of guaranteed
for all future form factors.
But you can of course create an activity view
and control every pixel if you want.
Then it's more overhead for testing on the new devices
and making sure everything's working correctly.
AUDIENCE: Is there a plan in the future where
you would allow developers to completely custom design
the layout of the notifications?
ALEX FAABORG: Yeah, that actually launched today.
The Wear SDK lets you do activity views
inside cards, where you can do the full UI.
ALEX FAABORG: No problem.
AUDIENCE: My name is Jonathan.
I want to ask when will we see some more
sensors on the Android Wear devices?
I mean, temperature, moisture, some more.
I mean, the LG doesn't have a heart rate sensor.
And most of the really interesting applications
would be around more sensors.
ALEX FAABORG: Yeah.
Well, I think as people are seeing the types of apps
that developers want to build and the ecosystem's growing,
we're going to see a lot of innovation in this space.
And one of the great things about Wear
is we're going to have lots of devices.
So that will enable competition in the marketplace for people
to add sensors and have really cool use cases.
AUDIENCE: Any known watches coming out
with multiple sensors?
ALEX FAABORG: I can't-- I'm not going to preannounce other devices.
ALEX FAABORG: Theoretically.
AUDIENCE: Thank you all for the presentation.
So you've been mentioning all these things
where smartwatches and smartglasses are similar
and how Android Wear is going to work with both Glass and Wear.
So my question is exactly the opposite.
Where do you see Glass and smartwatches being totally different?
And this is for any of you.
What's the feature where a smartwatch makes sense
but Glass doesn't, and vice versa?
ALEX FAABORG: You want to take that, Bob?
BOB RYSKAMP: Sure.
I think as Timothy Jordan said in the last session,
a lot of it does come down to the individual user,
what people's preferences are.
I think one thing I've found while working
on wearable devices is they're much more like other things you
wear like shoes and socks and shirts and jackets and hats
and actually less like phones and tablets in a lot of ways,
in that you might one day wear one,
another day wear another, depending
on what you plan to do that day.
For example, myself, I love to wear the watch around my house.
It frees me from my phone, which can be downstairs
and I can still get my notifications.
But I love to wear Glass when I'm cycling or when I'm playing
with my one-and-a-half-year-old.
It's the world's best baby camera.
So I'll take one off, put one on depending
on what I want to do that day.
I think that's probably the future of wearable technology,
is that sort of use case, very flexible.
And people get to choose and customize for themselves.
ALEX FAABORG: Another thing to consider as an app developer is
if the user has to maintain eye contact with something,
then Glass is definitely better.
So, like, the basketball example works really well on Glass,
but I don't think you'd necessarily
want to be looking down at your watch while playing basketball.
So there's some significant sort of form factor differences
to consider for your specific case.
AUDIENCE: My name's Julie Stanford,
and I run a UX design agency called Sliced Bread where
we do a lot of interactive prototyping
like the kind you showed.
And we use a lot of HTML and jQuery to do quick Wizard of Oz prototypes.
And I'm wondering if Android Wear is
going to support doing that type of quick prototyping
without having to go through creating a back end
and so on, or if there's just some quick way
to do rapid prototyping on that platform.
ALEX FAABORG: Well, it should.
It doesn't currently.
AUDIENCE: But it will?
ALEX FAABORG: Yeah.
Well, I mean it should in theory.
It's still pretty early.
The team's been really focused on getting a product out
before we've been able to do more robust things like helping
prototypers and stuff.
AUDIENCE: So you can't just, like, create something in HTML
and quickly show it to see how it might work?
ALEX FAABORG: No, it doesn't have a rendering engine.
BOB RYSKAMP: But I would say, don't let that discourage you.
Strap a bicycle helmet to your head or a phone to your wrist.
AUDIENCE: Oh, I'm not discouraged.
BOB RYSKAMP: We're all designers.
And the real important thing is that you can just
wear it and try it in any way possible.
ALEX FAABORG: You could also just draw out
a smaller area of a phone screen and have people read that.
ALEX FAABORG: I mean, that stuff works OK.
AUDIENCE: Just a quick question.
I'm just thinking about on average
most people have, like, 50 apps.
And there's a lot of competing apps
and a lot of competing information.
So let's say, where should I eat tonight?
Which card comes up?
Something from Foursquare, something from Yelp?
Or, where should I stay tonight?
Does Airbnb give me a suggestion or Hotel Tonight?
I'm just wondering how wearables tackle that,
or I mean Android Wear?
ALEX FAABORG: So the first time you
say it, if you have multiple apps installed
or after you've installed a new application,
the user has a menu of choices they
can choose for the application at that moment.
In the future, it's going to default to the one
that you previously selected.
But it gives you a moment to pause and choose
one of the others, which is kind of nice for voice
because you can say a command, just drop your arm,
and it's just going to happen.
Or you could say a command and then quickly pause it and say,
now I want to switch over to this other service.
Also in the companion app, you can
set which defaults are associated with everything.
AUDIENCE: And in terms of the passive cards
and the ranking of the order, is that determined
by Google in terms of what's most relevant to you as you
kind of scroll through the different decks of cards?
ALEX FAABORG: At the moment, it's
determined by a number of signals that the developers are
providing, priority levels.
In the future we're looking at ranking based off
of how contextual things are.
So it's kind of working hand in hand with the developers
to understand what's important about their card
and how it fits in.
AUDIENCE: My name is Conrad, and I work at the University of Washington.
I'm a grad student there.
And a number of my colleagues are working on accessibility,
and they're all really excited about wearables
because they're adding more modalities for people
that are blind or that can't hear.
And it's all really exciting.
And so I'm wondering, is your team
working on any accessibility technologies with wearables?
Like for instance, you have a screen,
and you can put Talkback, which is on Android phones,
on to watches as well.
Are you working on anything like that?
ALEX FAABORG: Yeah.
I don't want to talk too much about what we're working on
in the future, but I think something
that is really exciting is, particularly
with Glass, the voice feedback.
And building natural voice interactions, of course,
are incredibly accessible, and they also
benefit everyone, just as sidewalk ramps benefit everyone.
So even if it's done initially for accessibility,
it's super powerful to give everyone access to it.
HAYES RAFFLE: And I think on the Glass platform
we've seen most innovation for accessibility
from third parties, a lot of it actually from academic sectors,
people doing really innovative stuff.
I think that from our point of view,
we're trying to design more towards what
I'd call universal access, which is how to make stuff that's
as usable as possible for everybody.
But a lot of these ideas transfer over
to the accessibility space.
And of course there's some that don't and some custom work
that needs to be done.
But the developer community's been amazing and productive
in that space so far with Glass.
And I expect to see more of that with Wear
as the platform emerges.
AUDIENCE: All right.
I have a question regarding context.
Is it possible to provide custom contexts?
So for example, if I can actually
have an algorithm that determines whether I'm
in a noisy room or in a quiet place,
or any other type of custom context,
is that something that I can actually
use to trigger certain mechanisms in wearables?
EMMET CONNOLLY: Yeah.
So from the user's point of view,
we'd prefer to keep it really simple,
where they didn't need to do a lot of setting up of trigger
conditions and so on, just have things to appear.
But I think you might be asking from an app developer's
point of view.
And we would absolutely encourage
you to use every single signal possible to really
focus and target content to the users.
So all of those examples that you specified, I think,
are great signals and really paint
that picture of what the user is doing.
AUDIENCE: So I'm asking if I can actually
do a custom context to create a trigger, a custom trigger.
ALEX FAABORG: Yeah.
So it somewhat depends on the hardware being used.
So, like, for the microphone, there's permissions surrounding that.
Also there's battery implications.
And the other aspect is we're trying
to build systems that are pretty robust that everyone can
benefit from, like with activity detection.
It's really hard to do machine learning on accelerometer data
to figure out the difference between biking and driving.
So by us providing that to all the developers, then
they get all of that learning for free.
But for the things that you can't access,
yeah, we totally encourage you guys
to build your own models of context.
It'll give you a leg up in the market against your competitors
if you're more contextual.
EMMET CONNOLLY: And raw sensor data is available to you.
So in the accelerometer case, for example,
if you don't want to use our models, you don't have to.
ALEX FAABORG: Yeah.
If you have a better model, then yeah.
AUDIENCE: Could that be triggered?
I mean, I can't use that to trigger anything, right,
raw accelerometer data?
Accelerometer data, I can't use that to trigger something
from the background, right?
BOB RYSKAMP: It seems like maybe we can follow up afterward?
We can talk in more depth.
ALEX FAABORG: Generally, you can put a card into the stream
whenever you feel that it's contextual.
BOB RYSKAMP: We'll take just one more question, thanks.
AUDIENCE: Well, this is crazy.
You just stole my question.
But I'll add a little bit more.
You guys have been working on this space for a while now.
Do you have any sort of conceptual frameworks
for thinking about multiplexing contextual data?
You can often be overwhelmed by a particular signal
or have a particular signal that is not represented as well.
Is there any sort of, more towards the theoretical side
of, like, where to go to combine these things together?
ALEX FAABORG: It would be a really good problem
for us to have because right now we're figuring out context
and then matching that against all the available sources
of information that have cards that can match.
At the moment Android Wear is just launching now,
so we don't have a tremendous number of applications.
But it'd be a great problem for us to have,
that suddenly there's too many contextual cards
for your environment.
And then we'd have to think about ranking.
And that's sort of fundamentally a search problem.
So I think we're hopefully pretty well
equipped to tackle it.
I'm looking forward to that being something we can work on.
BOB RYSKAMP: So go forth and give us that problem.
ALEX FAABORG: Yes, please.
BOB RYSKAMP: Thanks, everyone.
HAYES RAFFLE: Yeah, thanks very much, everybody.