PAUL SAXMAN: Welcome, everybody, to our talk on
building second screen apps that integrate with Google TV.
My name is Paul Saxman, and I'm a senior developer
advocate at Google on the Google TV team.
And with me today is--
DAVE FISHER: I'm Dave Fisher.
I'm actually a faculty member.
I work at a college in Indiana, and I'm on my
sabbatical.
I've been working at Google for the last year or so.
And in the fall, I'm going to go back to teaching.
PAUL SAXMAN: Cool.
So Paul Saxman, Dr. Fisher.
And if you guys want to follow along today, we got the Go
link up on the top there to actually get to the slides.
They're going to stay there indefinitely, too, so if you
want to check them out afterwards, you can
go check them out.
So you're all probably familiar with this type of
TV-viewing experience here.
For the sake of this conversation, we're going to
call him Mr. Leanback.
I can see some of you today are kind of in the leanback
mode as well, probably because it's second day, Google I/O,
you guys had your lunch.
So this is a fairly common TV-viewing kind of
arrangement.
And so Mr. Leanback, let's say he comes home from a long day
at work, or he's been outside with activities.
And so he grabs his favorite beverage, he grabs his remote
control, and then he goes into what we call the leanback
mode, which is approximately 45 degrees from the floor.
If you're any more than 45 or any less than a 45-degree
angle from the floor, you're no longer in leanback mode.
That's laydown mode.
And we're not going to talk about laydown mode today.
So the thing about the leanback mode or what Mr.
Leanback, the way he's watching TV, is that this is a
fairly common way to watch television.
This is exactly why, when we talk about developing apps for
Google TV, we give you UI guidelines about kind of
toning down the user interface.
Put less information.
Make the interface a lot easier to use.
Support D-pad navigation, because obviously Mr. Leanback
is not going to want to put his drink down to use the
remote two-handed.
And this is also why, when we designed Google TV, we put
apps on there, or we provided services like Quick Search and
the TV and Movies application, which allows users to actually
really quickly and easily get access to content.
Because they don't necessarily need the overload of
navigating through a list of 200+ channels to find what
they want to watch.
They can just search for it or actually launch the app and
browse for it.
However, there's a lot of data coming out recently that
actually paints a slightly different picture of the way
that we interact with TVs.
This is something from Nielsen.
It was actually just published the end of last year.
And they found that about 90% of tablet owners and
smartphone owners actually use their devices while watching
television.
And the way they did this actually, they broke it down
into how often they use their device.
They said, do they do it monthly, daily, weekly,
multiple times daily.
And it was actually heavily skewed towards daily usage.
So most users, 90% of users or more, actually use their
devices while they're watching TV.
And I'm sure that's probably the same with most of us.
So the other interesting thing, too, about smartphones
now, is that as more people have smartphones and tablet
devices, they're actually designed for pushing
information to the user as opposed to, let's say, other
devices like computers or laptops, where it's more,
the user actually is driving.
So smartphones, for example, if you're watching a movie and
you get a text message that comes in, or a Google
Talk message, or an email that comes in, you're more likely
nowadays to pick up your phone and start interacting with it
while you're watching TV.
So this whole idea of like multi-screen or second screen
interaction is kind of new and very much on the rise.
So we actually conducted a study, very recently.
This was at the beginning of this year.
And we surveyed people that actually have and actively use
Google TV devices.
And asked them, do you have certain devices, other
connected devices in your home?
So we found that actually 92% of these homes have laptops.
And a very large number, 90%, also have smartphones.
And actually, the large majority of them also have
desktop and tablet computers as well.
So we see there's actually quite a few--
given this data, with the data in the previous slide, we can
see that the probability of somebody actually having and
using a second screen device in their living room while
they're watching TV is actually very, very high.
I mean, it's well into the majority of people.
So there's actually--
what you may think of now is this situation here.
We're going to call her Ms. Multi-screen.
So not only does Ms. Multi-screen have her snack
and her remote control, but also now in the living room,
she has her laptop and smartphone.
And most likely, actually, has a tablet computer as well.
And this whole idea of like kind of multi-screen
interaction, or actually, what we'll call just multitasking,
for lack of a better word, is that this isn't a new idea.
I mean, people have been doing this for quite a few years.
Just a few years back, instead of a smartphone, she probably
had a feature phone.
Or even a few years before that, she might have had a
cordless phone.
Or maybe if you're like my family, you might have had a
corded phone, but you had one of those 20-foot curly cables
on there that was kind of stretched out, just so that
you could sit in the living room while you were
talking on the phone.
And so the interesting thing about this, though, is that
really, portable devices are really on the rise.
So smartphones are on the rise.
We see that laptop computer usage, people are
transitioning from desktop computers to laptop computers.
So really, this scenario of kind of the multi-screen or
multi-devices in the living room is pretty much here.
So in the same study that we conducted before about what
percentage of users actually have other devices, we also
asked them how many devices they have.
And we found some pretty interesting numbers.
On average, they have about two laptops.
Which is pretty surprising, because most of these homes,
they're about two and a half to three people.
So you can say that either most of the adults in the home
or most people in the home actually have access to a
laptop computer.
Same thing with smartphones.
Desktops and tablets, these are actually one per home, on
average, for people that actively
use Google TV devices.
And we actually found that there's a median of six
connected devices in these homes.
And when I first read that, I was kind of shocked.
I was like, six devices connected in the living room--
or not just in the living room, but in their homes--
is pretty high.
But when I started to count all the connected devices that
I have in my home, it ended up a lot higher than six.
It was actually around 12 to 15, depending on whether I had
my work devices at home as well.
So yeah.
There's a number of connected devices that people can use to
connect to their TVs.
And this number, this median number of six, it doesn't just
include smartphones and tablets, desktops, and
laptops, but it also includes things like set-top boxes,
media devices, and that type of thing.
So I challenge you to actually count the number of connected
devices you have in your homes.
I think you've probably got about four of them here at
Google I/O, so you can add those to your number.
And you'll probably come out to a pretty
astronomical number.
So given all these numbers that I've presented, this is
kind of maybe a slightly more typical arrangement for being
in the living room.
I mean Ms. Multi-screen was a little bit of an exaggeration.
We're going to call this Team More Typical.
And what we kind of expect to see in the living room these
days is that--
you have to remember that the TV-viewing experience is a
social experience.
So chances are there's probably
multiple people in there.
It's not always the case where people are having a shared
TV-viewing experience, but it commonly is the case.
You can definitely expect that there's one or
more laptops available.
For a group this size, there's probably a couple.
People don't always take their laptops over to their friend's
house if they're going to watch a TV show, but that
certainly does happen if they're maybe doing some study
work, and then they're switching over
to watching TV later.
Or, if you're like me, when I was in school, watching TV and
using the computer at the same time, which my
parents always hated.
But you also kind of expect that people have at least one
tablet per household, for people that are Google TV
users and owners.
And for a group this size, probably approximately three
smartphones.
So there's a number of different types of
configurations, a number of different ways of people can
use devices to interact with the TV.
So really, the opportunity for developers like yourselves is
that people really want to do more than just watch TV.
I mean, they really want to start interacting with TV.
And the way they can interact with the TV is they can use
their physical remote control, which nowadays,
they're pretty elaborate.
They have keyboards on them.
They have D-pads.
Some of them have touchpads or gyroscopic sensors.
Or you can actually give them the ability to actually use
their devices that they have and that they're using in
their homes to actually interact with the TV.
So start thinking about how they'd use their smartphones,
their tablets, their laptop computers.
A number of different things they can do to interact with
TV nowadays.
The other thing is that you can also take advantage of the
fact that both of these markets--
the Smart TV market and the second screen, or let's say,
the handheld portable device markets-- are actually very,
very much on the rise.
As we heard yesterday, there's I think 400 million Android
activations now.
That's a million activations a day.
So the number of people that actually have access to second
screen devices in the home is constantly going up and is
going up at a pretty rapid rate.
And the other thing is that Smart TVs
are also on the rise.
It's projected that by about 2015, there's going to be 500
million LCD TVs that have shipped with internet
connectivity.
And that is just LCD TVs that are shipped with connectivity.
It doesn't include devices that actually are connected to
the internet with a set-top box or another device.
So the number of Smart TV devices or connected-TV
devices on the market in the next few years is definitely
going up, so this is definitely a great time to
take advantage of this kind of arrangement.
So to kind of pique your creativity, I'll give you a
few examples of developers that are building second
screen apps now, or applications that are second
screen apps.
For example, this one here, this is the Able remote.
The developer of Able remote, he took the Google TV remote
control application that we launched
open-source last year.
He added a number of really, really great features.
It still actually is a universal remote control.
It has all universal remote control functionality, like
the original application.
But he added things like you can favorite channels and
actually quickly change channels using the device.
So if you're like me, you have 200 channels, but you really
only watch about 5.
This makes it kind of brainless to actually get to
your favorite shows.
He has the same functionality with applications and
websites, as well.
So if you have a favorite application on the device, if
you have a favorite website on the device, you can actually
get to it in just a few clicks on your handheld device.
And then he also built in some really interesting integration
with Google Music.
It's a widget on your handheld device that can actually
control the playback on Google Music.
The Peel Smart Remote app--
they're actually in the Sandbox today, so if you
haven't had a chance to see their application,
now's a good time.
The Peel Smart Remote app is a TV and movies discovery app
with some social ties built in.
So the integration, the way that it integrates with Google
TV, like the Able remote, they actually give you the ability
to control your TV with your handheld device.
So if you've actually launched into a VOD service on Google
TV, like Netflix, you actually can get play controls.
And you can actually get navigation controls as well.
So if you're in the application, you can navigate
around, get more information about
the show you're watching.
You can actually navigate out of the application as well and
use the Peel app to navigate the Google TV interface.
On the left-hand side, this is actually their phone
application.
On the right-hand side is the tablet version.
Trivialist is actually a little bit different.
So Trivialist is not a remote control application.
So on the right-hand side is actually the TV application.
So what they've done is they've built a trivia
application.
They're putting that on Google TV devices and then putting
those Google TV devices in sports bars.
You go into a sports bar.
If you don't already have the Trivialist app, you can see
that you have the opportunity to download it.
You put it on your smartphone, and then you can actually play
trivia with other people at the sports bar.
The interesting thing about this application on the
technological side is that they're not actually
communicating from the handheld device to
the Google TV device.
They're actually using the cloud.
So any command, or when you make decisions on the phone,
it actually sends it up to the cloud.
And when it's time to change the question, actually that's
being pushed down to your phone as well.
So the communication's not direct.
And obviously, the reason for that is that not all sports
bars have open networks, so you can't really rely on
phone-to-device or phone-to-TV communication directly.
And last but not least, MOVL.
MOVL actually was at Google I/O last year with us.
They're also at Google I/O with us this year.
I think they're in the Google TV Lounge now.
So they actually have Android and web-based multi-screen
applications and APIs that integrate with Google TV.
The applications that--
on the right-hand side here, this is the Poker Fun game,
which is really cool.
You can play poker with multiple of your friends in
your living room, or actually--
I think it actually works now with people in other living
rooms as well.
And you have your personal experience on your handheld
device, and the TV actually has a shared experience, which
is the poker table.
They have WeDraw and WeTeli as well.
These work with their APIs, the Cloud Connect and the
Direct Connect platforms or APIs, for both cloud-based and
direct communication with the Google TV device.
And they also have what they call the Kontrol TV platform
or the controller, which actually puts all the apps
into one handheld control.
So the goal for us today, so now that you hopefully are
kind of inspired to build second screen apps, we're
going to teach you what you need to know to start building
these applications.
And to do that, we're first going to teach you how to
share data, basically share any data, between a second
screen and a first screen device.
Then we're going to talk a little bit about the Anymote
Protocol and Library for sending input events--
input events specifically--
from the second screen to the first screen or to Google TV.
And then we're going to show you how to implement a Chrome
extension using Anymote for actually
controlling Google TV.
And that's Dave's specialty.
So on to the technical side of things.
So like I said, I'm going to tell you how to share any data
between second screen and first screen
or Google TV devices.
And to do that, I'm going to do a quick demo to kind of
show you what I mean.
This demo, what we did is we put together a few demos for
using sensor information.
So the handheld device actually becomes kind of a
sensor proxy for the TV.
So what we'll do is we're going to bring up the sensor
application on the TV.
These are just demos.
I mean, they're very simple applications.
But hopefully it'll kind of give you an idea of what's
possible when you can just basically pass any data
between the two devices.
So this is called the remote sensor data.
So basically, we took some of the sensor demos that we had
for Android, and we ported them to Google TV.
This is actually the Colored Cube example that you can get
for your Android devices.
And so what I'm going to do now is I'm going to pair my
handheld device with the Google TV.
This pairing--
it's a little bit dark, but I hope you can see it.
So the pairing process can be automatic.
There's technologies that can make it automatic.
Let's see if I'm paired already.
Nope.
Find Google TVs.
So what I'm going to do is--
since we don't have auto-pairing on this network,
I'm actually just going to enter the IP
address real quick.
And this is kind of a one-time process, because next time I
try to connect, it should automatically be there.
And we conveniently put the IP address on the display so that
you can quickly launch this.
So now I'm going to--
on the phone, I'm going to launch the
Colored Cube interface.
You probably recognize this from Android phones as one of
the Android samples.
And what we're doing now is basically, the handheld device
has become a proxy.
So any of the sensor commands on the handheld device are
just sent directly across to the television.
So if I pick this up, you won't be
able to see it anymore.
But as I rotate the phone around, it
actually rotates the cube.
And the first time I actually ran this demo, I was a little
bit confused, because as I rotate the phone, let's say,
to the left, actually, the cube rotates to the right.
It doesn't seem-- it seems like the perspective's a
little bit backwards.
But what it's actually doing is on the phone, since you're
looking into the screen, when you rotate the handheld
device, you're actually kind of rotating your perspective
of the cube.
And we actually maintain that type of perspective.
But you'd actually think when I rotate the phone down, you
might want to rotate the cube down.
Just a caveat if you try to do this yourself.
I'll give one more quick demo, and then we'll move on.
We also took the Sensor Graph demo.
And let's see if I can click that.
So this is also one of the Android examples.
We also ported it to TV.
And this is kind of to show that we're taking pretty much
the main sensors--
I think we're taking the accelerometer, the gyroscope,
and the orientation sensors--
and we're just actually passing all that information
to the TV and rendering it here.
So you see, if I pick up the phone, if I start to rotate
it, move it around--
all that information is being sent
back, pretty much real-time.
We haven't necessarily benchmarked it, because a lot
of it depends really on the nature of the network.
But with the technology we're using, I mean, it's really
low-level UDP communication, you can pretty much guarantee
it's real-time communication.
And actually, as the phone goes to sleep, it actually
sends the disconnect command.
So that's why we just saw it go away there.
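The low-level UDP communication mentioned above is what keeps the sensor stream feeling real-time: datagrams skip TCP's connection setup and retransmission. As a rough illustration only (this is not the demo's code, and `UdpSketch` is a made-up name), a one-way datagram hop in stdlib Java looks like this:

```java
import java.net.*;

public class UdpSketch {

    // Receiver side (the TV): block until one datagram arrives and
    // return its payload as a string.
    static String receiveOne(DatagramSocket socket) throws Exception {
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);  // blocks until a datagram arrives
        return new String(packet.getData(), 0, packet.getLength(), "UTF-8");
    }

    // Sender side (the phone): fire a single datagram at host:port.
    // No connection setup, no delivery guarantee -- fine for sensor
    // readings where the next sample supersedes the last one anyway.
    static void send(String host, int port, String message) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = message.getBytes("UTF-8");
            socket.send(new DatagramPacket(
                    data, data.length, InetAddress.getByName(host), port));
        }
    }
}
```

The trade-off is that lost or reordered packets are simply dropped, which is acceptable for a continuous sensor stream but not for commands that must arrive exactly once.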
So now we'll talk a little bit of how you can actually
implement something like that on Google TV.
So back to the slides--
oh, thanks Dave.
OK, we've got the demo.
So this is the do-it-yourself version, so if you wanted to
actually build pretty much that same
application from scratch.
Like I said, Google TV devices all broadcast an
Anymote TCP local service using mDNS.
So if you want to find Google TV devices on somebody's home
network, all you really do is you search for this
_anymote._tcp.local by using mDNS.
Once you actually find it, you can actually extract out the
name of the device, the IP address, and the port.
The port is actually for the Anymote service, which we'll
talk about a little bit later.
But this is actually a quick and easy way to actually get
access to the device or find devices on the network.
So the example code here, this is actually
using the JmDNS library.
I would highly recommend that you use the
library to do this.
Otherwise, you'd actually have to build an mDNS client from
scratch, which could be a lot of fun, but it could also be a
lot of work.
So starting a service on a device in Android world, or in
Java in general, you can rely on the java.net libraries.
It's fairly straightforward to open up a server socket, bind
it to a port, and actually start communication.
So here, we have an example.
We're starting the server socket on port 1337.
We have a loop here, because what ends up happening is you
block for communication.
Once the connection is made, you unblock.
Once the socket is closed, or you finish your communication,
then you can go back to a reset state, so you're waiting
for communication again.
So like I said, you open a socket that blocks until the
connection is accepted.
And then you have to implement a couple of methods for
reading data and writing data from the
input and output streams.
Likewise, to connect to the service using the java.net
libraries is very straightforward.
You open up your socket.
You pass it the IP address and the port.
You write and read your data.
Writing and reading data, you're streaming data back and
forth using the input and output streams.
You close your socket.
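The server and client flow described above can be sketched with the stdlib java.net classes. This is a minimal illustration under my own naming, not the talk's actual demo code; the class `SocketSketch` and its methods are made up for the example, and the echo payload is just a stand-in for real messages.

```java
import java.io.*;
import java.net.*;

public class SocketSketch {

    // Server side: block on accept(), echo one line back to the
    // first client that connects, then exit the thread.
    static Thread startEchoServer(ServerSocket server) {
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();  // blocks until a connection is made
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        t.start();
        return t;
    }

    // Client side: open a socket with the IP address and port,
    // write a line, read the reply, close the socket.
    static String sendAndReceive(String host, int port, String message)
            throws IOException {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);  // 0 = any free port; the talk used 1337
        Thread t = startEchoServer(server);
        String reply = sendAndReceive("127.0.0.1", server.getLocalPort(), "hello");
        t.join();
        server.close();
        System.out.println(reply);  // prints "echo: hello"
    }
}
```

Note that this toy version skips everything the next slides warn about: threading off the UI thread, exception handling, and the port-already-in-use case.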
When you actually-- if you want to port this to an
Android application and productize it, though, things
get a lot more--
we'll say serious.
So when you open up the socket communication, the port may
already be used.
Don't take this slide too seriously.
I actually went through and tried this, built this
application out from scratch using the sensor APIs.
So these are actually real to-dos, but we'll get to the
point of why I put this slide up here in a minute.
But you have to worry about things--
is the port already used?
How do you handle the exceptions?
Because there's a lot of exceptions that can be thrown
at various places.
With Android, you can't do any network on the UI thread, so
you obviously have to spawn off threads.
And in this situation, you probably want to spawn off a
couple of threads.
One thread to actually do the network off of the UI thread
and another thread to actually do I/O so that you can open up
your communication for a second device to communicate
at the same time.
Because they both use the same socket.
And a little bit more.
So another really important thing to remember is
that reading and writing data--
if you want to read and write structured data, you need some
way to actually serialize and deserialize that data.
And that actually has to be the same on both sides.
You have to have kind of a mirroring of the serialization
and deserialization.
So if you're using one technology to serialize the
data on the handheld device, you need to use the same
technology or similar technology to deserialize it.
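The mirroring requirement above can be made concrete with the stdlib DataOutputStream/DataInputStream pair: whatever field order the writer uses, the reader has to consume in exactly the same order. This is a hypothetical sketch (the library itself uses protocol buffers for this); the `SensorCodec` name and the field layout are mine, loosely modeled on a sensor event.

```java
import java.io.*;

public class SensorCodec {

    // Serializer: the writer's field order (accuracy, timestamp,
    // count, values)...
    static byte[] encode(int accuracy, long timestamp, float[] values)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(accuracy);
        out.writeLong(timestamp);
        out.writeInt(values.length);
        for (float v : values) out.writeFloat(v);
        return bytes.toByteArray();
    }

    // ...must be mirrored exactly by the reader's field order,
    // or the stream decodes to garbage.
    static float[] decodeValues(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        in.readInt();   // accuracy
        in.readLong();  // timestamp
        float[] values = new float[in.readInt()];
        for (int i = 0; i < values.length; i++) values[i] = in.readFloat();
        return values;
    }
}
```

Protocol buffers solve the same problem, but with a shared schema file instead of two hand-maintained methods that can silently drift apart.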
Likewise with actually opening up the socket connection.
When you port this to a production Android device,
there's a lot more work that needs to go into it.
The biggest one is actually, sometimes mDNS doesn't work.
For example, a lot of corporate
networks, they block mDNS.
Most users' home networks don't, however.
But you may want to give the user the ability to enter an
IP address.
This requires you to open up a dialog or some
type of user interface.
You have to start the threads, et cetera, to actually do the
communication.
So there's a lot more work.
Socket communication is a lot of fun.
Like I said, I went through building an application from
scratch that does everything in socket communication.
I had something that was reasonably well working after
a couple hundred lines of code.
When I started to put a little bit more of the Android
framework in there about the UI to allow people to actually
manually enter, I was up to maybe about 500 lines of code.
It took me a couple of days to actually get it in good
working order.
So to kind of help developers-- all of you-- not have to go
through that every time, what we've done is we've put
together the Google TV Data Sharing Library
for Android developers.
We're in the process of launching this now.
It's not quite out there, but it will be out there in a
couple days.
We're waiting for one last approval from open sourcing.
So what the Google TV Data Sharing Library does is it
really simplifies for Android developers the process of
finding and sharing data with Google TV.
It's built on a client/server infrastructure or
architecture.
And both libraries are actually in one package.
And they actually work almost the same way.
So actually, I'll go over a little bit the code, but the
code works on both sides relatively the same way.
So the library actually deals with things like the service
discovery and pairing.
It includes the UI.
The library's open source, so if you don't like the UI the
way it is, you can obviously tweak it.
It deals with the socket communication, and that
includes the threading and the exception handling.
That also deals with the data serialization and
deserialization using protocol buffers, which is a really
lightweight and language-agnostic protocol for
deserializing and serializing data.
It's part of the way that we actually get the really
low-latency communication.
And in this library, we baked in a few things, like the
reference messages, which include registration.
That tells you when the device connected, disconnected, and
pinging the device.
Sensor data, so the sensor demo that I gave you, we have
the protocol buffers in there for sending sensor data.
And it's not the entire suite, because it seems like there's
a new sensor almost every day with Android, but we got most
of the main ones.
And if your favorite sensor's not in there, you're
absolutely free to add it in there.
And generic data, that's basically just strings.
So if you have serialized data on your second screen device
and you want to send, for example, JSON or XML, and you
want to send that to your first screen device, you can
actually just package it into a string, send it across, and
actually deserialize your JSON or XML or whatever your
favorite string-based data serialization format is.
So really, implementing this is pretty straightforward.
And there's actually going to be a lot of code in the next
few sections.
This is an advanced session, so hopefully you're ready
for a little bit of code.
So you actually implement this CallbackListener, just two
simple methods, one for dealing with errors and one
for dealing with the data coming back.
Oh, and by the way, don't worry about
copying all this down.
This is all going to go out in our documentation, so it's
mostly cut-and-paste.
And I think most of it's actually in an activity that
you can implement or extend yourself, what we'll call the
data-sharing activity, which will simplify things.
But if you want to use this code in the most flexible way,
you want to implement this activity.
So you want to implement a service connection so that you
can attach your CallbackListener, which was
your activity, to the data-sharing client.
And that's basically this line right here.
So once the service is connected, the service is
actually what does the communication, which you don't
have to implement.
But once the service is connected, you attach your
ClientListener to the data service, and that's how you
get these callbacks on error and response data.
And likewise, when the service is disconnected, you want to
shut things down so you're no longer listening for results.
And then you start the service, and this actually
gets things going.
So this actually kicks off the pairing process, which
includes things like the user interface
for doing the pairing.
It does the whole mDNS scanning the network as well,
and will give the user interface if there's multiple
devices on the network.
So that's pretty much it.
And from that point on, it's just a matter of actually
sending your messages in your code.
So protocol buffers, the way it works is you take a
protocol buffer definition, you convert that into a Java
class, if you're programming Java.
But it is language-agnostic, so you can basically turn a
protocol buffer definition into an interface implementation
in pretty much any programming language.
But you create your protocol buffer, you add your data to
the protocol buffer message, and then you just ship it,
which is the last line here.
So one last thing.
So this is actually what the protocol buffer looks like.
And for the sensor data, we actually made the protocol
buffer look almost exactly like the Android SensorEvent
and sensor classes.
So the SensorEvent in Android has accuracy, the sensor
information, the timestamp, and the values.
We basically did the same thing.
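A SensorEvent-shaped message along those lines might look roughly like this in a .proto file. This is a hypothetical sketch in proto2 syntax, not the library's actual definition; the message and field names here just mirror the android.hardware.SensorEvent fields named above.

```proto
// Hypothetical sketch, not the library's real schema.
message SensorData {
  optional int32 accuracy = 1;    // mirrors SensorEvent.accuracy
  optional int32 sensor_type = 2; // identifies which sensor fired
  optional int64 timestamp = 3;   // mirrors SensorEvent.timestamp
  repeated float values = 4;      // mirrors SensorEvent.values[]
}
```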
So on the receiver side, it essentially looks like you're
getting a sensor event.
Instead of actually an Android sensor event, that's going to
be a data-sharing library event.
And then you actually can wire that into your user interface,
pretty much like you would with a normal sensor event.
Like I said, this Data Sharing Library, it's actually going
out relatively soon, if it's not out already.
I didn't check my email in the last couple of hours.
But it's going out at
code.google.com/p/googletv-data-sharing.
The link will be on the Google TV developer's documentation.
And the two samples that I showed you a moment ago,
they're also both on there as well, so you can actually run
those out of the code repository.
Take them, extend them, do whatever you want with them.
They're pretty much your guys' to play with.
So that's actually sharing any data between a second screen
and a first screen device or Google TV.
And now I'll talk a little bit about Anymote, which is a
little different paradigm.
So the Anymote protocol is actually what we use for the
Google TV Android remote control and the iOS remote
control that are available for Google TV now.
So basically, the idea is that you turn your handheld device
into a remote control for sending key events, touch and
mouse events, and Android intents.
And we'll talk a little bit more about how that works.
Dave's example really kind of captures the
benefit of all these.
But Anymote itself is a protocol, or a
specification, that defines how apps can actually securely
send these types of events from a second screen device to
a first screen device on a user's home network.
Every Google TV device has the Anymote service running, so
you don't have to worry about implementing anything on the
Google TV side, like you did with the sensor information or
the Data Sharing Library.
So Anymote, actually, the service is
running on Google TV.
You're just responsible for what goes on the
second screen device.
And the interesting thing about the Anymote protocol,
the way it works on Google TV, is that whichever app is in
the foreground actually receives the Anymote events.
So basically, if you're familiar with the Android
remote control, I can use that for navigating pretty much
anything on the device, or actually controlling any
application on the device.
And this is actually really unique in the Android world,
because generally, applications cannot send key
events or touch events to another application.
So this allows you to actually build an application on the
second screen that will actually send these key events
and touch events to another application on
the Google TV device.
So again, this is a client/server infrastructure,
so the second screen is the client.
The first screen is the server.
Discovery is also via mDNS.
That's the point of the Anymote mDNS service that we're
broadcasting.
The pairing protocol is secure.
And the reason for security is that since you're sending
mouse events, touch events, and intents from the second
screen to the first screen, you don't want just anybody
sending that data across your network and controlling your
TV without some kind of authentication and pairing process.
The other thing is, too, is that if you're using your
handheld device and you're sending key events, you don't
want other people on your network, necessarily, sniffing
those key events.
Because you could be entering a password or sending
confidential information from your second screen to your
first screen.
So there's an authentication and pairing dance that
basically involves the second screen sending a request to
Google TV, saying hey, I want to pair.
The Google TV displays a challenge to
the user on the screen.
The user enters the challenge into the second screen device.
The pairing happens.
And then actually, Google TV sends back a TLS certificate
that the second screen device can use for
encrypting its messages.
And then from that point on, once you have the certificate,
you use it for just sending your events.
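Boiled down to code, that dance is a simple challenge check followed by holding on to a credential. The sketch below is only an illustration in plain Java; the class and method names, and the placeholder "certificate" string, are ours, not part of the pairing protocol or any real library.

```java
// Illustrative sketch of the Anymote pairing handshake.
// All names here are hypothetical stand-ins, not the real library API.
public class PairingSketch {

    // 1. The second screen asks to pair; Google TV displays a challenge code.
    // 2. The user reads the code off the TV and types it into the second screen.
    // 3. If the codes match, the TV hands back a certificate the client keeps
    //    and reuses for every future (encrypted) session.
    public static String pair(String displayedChallenge, String userEntry) {
        if (!displayedChallenge.equalsIgnoreCase(userEntry)) {
            return null; // pairing rejected
        }
        // In the real protocol the TV returns a TLS certificate here;
        // a placeholder token stands in for it in this sketch.
        return "certificate-for-" + displayedChallenge.toLowerCase();
    }
}
```

The key point the sketch captures is that the challenge proves physical presence in front of the TV, and the certificate is what you store so the user never has to pair again.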
I'm going to give you another quick demo, just to show you
how the pairing process works.
If you have the Google TV Android remote control, you've
probably seen this before.
But it's a pretty straightforward process from a
user's perspective.
Make sure I'm on all the appropriate networks.
I unfortunately can't show the auto-pairing, just because I'm
not using a standard network.
So I'm actually going to cancel on the auto-pairing.
I'm going to do a manual pairing, which means that--
oops, I have to get out of this demo.
DAVE FISHER: Do you want me to bring it up for you?
PAUL SAXMAN: Yes, please.
Another cool thing about this technology is that you can
have any number of devices connected to your TV.
So actually, Dave is bringing up all this information using
his laptop, which he will talk about.
DAVE FISHER: Don't look.
This is what I'm going to talk about.
PAUL SAXMAN: All right.
So I'm going to do a manual pairing.
Like I said, the manual pairing generally isn't
required on a user's home network.
It's just because we're in kind of a funky network
configuration here.
But this is really the discovery process, where I'm
actually finding the devices on the network.
I'm going to go ahead and connect.
And then this is the pairing process itself.
I'm sorry, I actually am kind of in the front there.
So the handheld device has actually sent the command to
say, I want to pair.
Google TV actually displays this to the user.
I got a really easy one this time.
I enter that in the phone, I hit my pairing, and then
everything's done.
The user is paired to the device.
So now when I actually use this device here to actually
navigate around, that's actually sent across
to the Google TV.
I can go back.
I can go home.
In the Google TV Android remote control, we
have pretty much everything for keyboard
input and touch events.
All the Android keys, et cetera, are captured on
the device.
Anybody implementing this can actually send all the same
events across as well.
So that's pretty much it for that.
So if we go back to slides--
So as I said, Anymote is a specification.
If you want to learn about how Anymote works, you can
actually go to our developer site for TV remote.
It talks about the specification.
And there's actually quite a few libraries and
open source code out there.
Discovery is generally handled by the second screen
application itself.
That's not baked into a library.
But you can use something like JmDNS.
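To make the discovery step concrete, here is a rough sketch of what a library like JmDNS does for you: send a multicast DNS PTR query for the broadcast service type and then listen for answers. The `_anymote._tcp.local` service name is our assumption about what Google TV advertises; check the actual broadcast with an mDNS browser before relying on it.

```java
import java.io.ByteArrayOutputStream;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Hand-rolled mDNS PTR query, roughly what JmDNS issues for you.
// The "_anymote._tcp.local" service type is an assumption here.
public class AnymoteDiscoverySketch {

    // Encode a DNS question for the given dotted name (QTYPE=PTR, QCLASS=IN).
    public static byte[] buildPtrQuery(String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0); out.write(0);               // transaction id = 0
        out.write(0); out.write(0);               // flags: standard query
        out.write(0); out.write(1);               // QDCOUNT = 1
        for (int i = 0; i < 6; i++) out.write(0); // answer/authority/additional = 0
        for (String label : name.split("\\.")) {  // QNAME as length-prefixed labels
            out.write(label.length());
            for (char c : label.toCharArray()) out.write(c);
        }
        out.write(0);                             // root label terminator
        out.write(0); out.write(12);              // QTYPE = PTR
        out.write(0); out.write(1);               // QCLASS = IN
        return out.toByteArray();
    }

    // Send the query to the well-known mDNS multicast group and port.
    public static void sendQuery(String serviceType) throws Exception {
        byte[] query = buildPtrQuery(serviceType);
        try (MulticastSocket socket = new MulticastSocket()) {
            InetAddress group = InetAddress.getByName("224.0.0.251");
            socket.send(new DatagramPacket(query, query.length, group, 5353));
            // Parsing the PTR/SRV answers (the TV's name, host, and port)
            // is omitted; JmDNS handles that bookkeeping for you.
        }
    }
}
```

In practice you would just use JmDNS and register a service listener; the point of the sketch is only that discovery is ordinary multicast DNS, nothing Anymote-specific.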
Pairing and authentication--
we have a reference implementation in Java that
works on Android, and there's also a C++ implementation that
we released very recently.
And for sending events, you actually use the Anymote
protocol reference implementation, which also has
Java and C++ versions.
The libraries both use protocol buffers, the same way
the sensor demo did.
Here's all the URLs.
The important thing, though, is that there's a lot
that you'd have to do to actually get started.
We recognized this.
We actually launched this code, the reference
implementations, last year.
But there's still kind of a high barrier
to implementing it.
So what we did very recently, as well, is we made the Google
TV Anymote Library for Android developers, which works really
similar to the other implementation
that I showed you.
Basically, the major steps are: you implement the
ClientListener, you open up the service connection, you
bind to the AnymoteClientService, and you
start sending events.
I'm just going to kind of speed through this a little
bit so Dave has enough time to give his demo.
So the code is very similar to before.
You create an activity.
You implement the ClientListener.
This time you have three methods to implement.
The most important one, obviously, though, is the
onConnected, where you actually get an AnymoteSender.
That's the class that you use for sending the events.
Same as before, you have a service connection so you know
when the Anymote service has started.
And that point is when you actually pass your
ClientListener to the service so that you can
receive those callbacks.
Then you're responsible for actually starting the pairing intent.
You can do that in your onCreate method.
You can do that whenever you want, basically.
You can have a pairing button in your application.
It's up to you.
And for sending events, basically what you get is you
use this AnymoteSender to send things like key presses.
This one, I think, is really cool.
By the way, this is actually the Android key events, so you
just tell it what Android key event you received, and you
can pass that along.
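Put together, a minimal sketch of that flow might look like the code below. The interfaces here are hand-written stand-ins that only mirror the library's shape (a ClientListener whose onConnected hands you an AnymoteSender), so the sketch compiles on its own; consult the Google TV Anymote Library source for the real signatures.

```java
// Stand-in interfaces mirroring the Anymote Library's shape; the real
// classes live in the Google TV Anymote Library and differ in detail.
interface AnymoteSender {
    boolean sendKeyPress(int androidKeyCode); // press + release
}

interface ClientListener {
    void onConnected(AnymoteSender sender);
    void onDisconnected();
    void onConnectionError();
}

// An activity-like object: it remembers the sender handed to it on
// connect, and relays Android key codes to the TV through it.
class RemoteScreen implements ClientListener {
    private AnymoteSender sender;

    @Override public void onConnected(AnymoteSender sender) {
        this.sender = sender;
    }
    @Override public void onDisconnected() { sender = null; }
    @Override public void onConnectionError() { sender = null; }

    // Forward a D-pad center press (Android KEYCODE_DPAD_CENTER is 23).
    boolean pressDpadCenter() {
        return sender != null && sender.sendKeyPress(23);
    }
}
```

Once connected, you just forward whatever Android key codes your UI produces; the TouchHandler class described next plays the analogous role for touch events.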
The TouchHandler is actually a really cool class.
Basically, you pass it any view, and it turns that view
into a touch surface.
So I can use it to send touch events across
to the Google TV.
You can send intents.
And this is actually a really powerful
feature of the protocol.
It allows you to start up applications or actually go to
URLs, so you can send view intents for loading up Chrome,
for example, or going to the Android market.
You can start applications, like the YouTube application,
and tune it to a particular video.
Or you can do things like start your own application.
So if you've gone through the pairing process and you have
an application on Google TV that you want to control, you
can send that intent, bring your own application up on the
device, and then start sending the input events across.
Like I said, this library is also open source.
It's also available for you guys to use.
Hopefully you saw that it really simplifies how Anymote works.
We have another sample with the Anymote library, for
playing Blackjack.
Pretty simple, but you guys are free to take it
and run with it.
And with that, I'm going to pass it on to Dave.
DAVE FISHER: Cool.
PAUL SAXMAN: Thank you, Dave.
DAVE FISHER: Four minutes, huh?
Nah.
Twenty minutes.
All right, so what I'm excited about, Paul's been telling you
about how you can use phones, how you can use tablets.
My world is about the laptop, right?
So this actually fits me really well as a user.
I'm sitting there, I'm watching TV, I've almost
always got my laptop out.
Maybe that's not a good thing, but that's how it is, right?
So I'm going to talk to you about Anymote as the
communication mechanism from your laptop.
And we're going to do it via Chrome extensions.
So first off, just to kind of make sure we're all on the
same page, Chrome extensions.
Who here has made a Chrome extension before?
All right, like 10 of us, all right, I raised my hand.
How many people have used a Chrome extension before?
All right, excellent.
So I know what my audience is.
You're not really Chrome extension developers.
Hopefully this'll be your first one.
So just a little background on what they are.
You can download them from the Chrome web store.
They add functionality to Chrome.
There's a great developer site where you can learn more
about making them.
A couple things they can do-- they can be browser actions.
A browser action is basically like a small icon.
So these little icons up here are browser actions, which is
a little separate program.
Or they can be content scripts, which add things to
the page you're looking at.
So there's two things I want to talk about today, one
user-facing and one developer-facing.
First, we're working on a Chrome extension for
communication.
Not out or available yet.
We're still working on it, kind of brainstorming ideas.
I asked if I could give a demo today, and they said, go nuts.
So what this extension does is I've already
paired with this TV.
So I did the pairing code that Paul showed just a second ago.
It's using Anymote.
I can send key events.
So if I wanted to send the search command, you can see
that the search box will open up.
If I hit the Back button, if I hit the Home button--
it's got all the functionality that a remote
does with the buttons.
It's also using your keyboard, which is really nice.
So the arrow keys are probably the most useful things.
All of Google TV is D-pad navigable, so the
arrow keys are huge.
You can also type.
So if I say, Hello.
Well, I can't type.
But I could type, if I knew how.
So those are kind of the basic keypress events
that you can do.
But since it lives in Chrome, it actually travels along with
you as you surf the web.
So in this example, I'm on a Google+ page.
If I'm looking at things on this Google+ page--
here I got sent a link to some funny site.
You decide if it's funny or not.
And what I could do is, if I wanted to share with the other
people that are in the room with me what I just found, I
can actually fling things to the TV.
What it's actually doing is it's sending an intent that
opens up a web page, which opens up Chrome on the TV.
In addition to being able to use your keyboard and to send
these special keys, you can also use your mouse.
So you've got access to your keyboard and mouse, which I
think is really nice.
Other things it can do-- go back to the Google+ page.
We can look at a page, and we can say, is there anything on
here that they might want to have on their TV.
So here's a YouTube video.
So the best screen in your house is probably your TV.
So what we do is we look at the page, and we say, is there
anything on this page that a user would be interested in
having on their TV?
And we give you links to make those things quicker.
So here we found this YouTube video on our page.
When we send it to the TV, it just plays.
You can share it with anybody in the room,
which is kind of neat.
It actually works wherever you go.
So here, we've got YouTube videos in a Gmail.
You can see that it's found those, and I
can send those over.
Didn't want to stop with just YouTube videos, though.
We actually wanted to make this kind of like it's a
companion that travels along with you as you
surf the web, right?
That you can send things to the TV.
So here if you're looking at a page that's related to a TV or
movie entity, what we can do is we can say, we found some
TV programs on this site.
And if you would like, you can actually send those over to
the TV as well.
It's going to open up the TV and Movies app.
And then from the TV and Movies app, you can do things
like you can save it to your queue if you think you want to
watch it later.
Or if you want to watch it now, you can see what
options there are.
If you had Netflix, it would show up in this list.
Things like that.
So that's the extension that we're playing with.
So what we're doing is, in making this tool, we're going to
open-source some things so that you can do your own.
So really, what the developer-facing announcement
is about is how can you make your own extension.
So we're going to show you the tools that you can use.
But first, why?
Why would you want to do this?
I've broken it up into a few categories.
Maybe you're a web developer, and you want to get your
content to Google TV easier.
Maybe you're an Android developer.
How many Android developers do we have?
All right.
That's my audience.
That's what I thought.
If you want to talk to your Android app, this is a way you
can make a custom remote control.
Or maybe you just have some other idea.
So I mean, if you're a web developer, you can make things
that you can fling.
If you're an Android developer, you can make a
custom remote or something that goes with your app, so
your Chrome extension is kind of like a
partner to it.
Or if you have just some other idea, there's all kinds of
things you could do.
Your computer knows a lot, right?
And it can share this information with your TV.
So how?
How do you make this happen?
We've got a code.google.com project that
you should go visit.
It's just called google-tv-chrome-extensions.
There's an example in there for how you go ahead and do
this pairing and communication.
But there's actually one more thing.
So I said I'm a college professor.
I couldn't resist the opportunity to give you a
homework assignment.
So you've all been assigned a homework assignment, and that
is the AnymoteLearningExercise.
So the idea is you'll learn how to use this Anymote
communication.
And so there are two extensions.
One is the example, which is finished.
And then the other is the learning exercise, which is
missing all the most important pieces, right?
So it's the same extension, but it's missing everything
that's useful.
And we've broken it up into a couple of different exercises
to make it easier for you to learn, just kind of go
through each step.
Let's take a quick look at the example.
So in the example here, what I've got is another
browser action.
What I'm going to do is I'm going to create a plug-in,
which I'll talk about more in a little bit, that's kind of
initializing this communication.
Things you can do with Anymote, is you can find TVs
on the network.
Here we've used discovery.
We found this TV.
I've listed by IP address.
We show you how to begin the pairing process, so you can
see a pairing code popped up here.
So dbe1.
And so now I'm paired to this TV.
You pair one time, right?
So if you use an extension every day for a month, you
pair once at the start of the month, or forever, and that's
the one and only time you're going to need to pair.
And then to actually communicate, you open an
Anymote session.
Anymote, you would have to open every day, right?
So every day, you would start up a new Anymote session.
This would do the communication.
And we've got examples in here for how you can send keys,
like the Home key, the Search key.
How you can send data, so here I send the data "Hello World,"
just because it's kind of a Hello World app.
How you can do YouTube flings, which would be opening up the
YouTube app.
I should define the word fling.
Fling really just means to send an Android intent.
It doesn't mean Chrome.
It just means to send an Android intent.
So here I did open Chrome, but you could open YouTube.
You could open the TV player.
Really powerful.
And then the last one here is ping.
This is just to test the health of the connection.
So that's the solution.
And these are the steps that you kind of went through in
the solution.
I should show you the starting point as well.
Way less exciting.
So the starting point, if you click on any button, it just
says, I'm not implemented, so that's what you should do.
Not nearly as exciting.
Extensions are made up of basic web components, HTML,
CSS, JavaScript.
So you can see here that we've got an HTML file.
This is the HTML that's going to get loaded if you click
that button.
So another way to think about it is that it's really
like a local bookmark.
Some CSS, some JavaScript.
Then the other thing that extensions have
is a manifest file.
You're Android developers, so you know the manifest file.
In extensions, it's JSON instead of XML.
Same idea, though.
You're configuring, hey, what does this application do?
So in this manifest file, I'm saying things like, I'm a
browser action, so that's why I've got the icon.
When you click me, you open this popup.html.
The other really important thing is that this extension
uses a plug-in.
So depending on your platform, it's going to load an
appropriate plug-in.
And it'll choose this based on your system.
You don't have to worry about it.
So really, there's one plug-in.
It will just pick the right one.
When we first realized we were going to have to use a plug-in,
because Anymote requires a level of security, with SSL
communication, we were initially bummed, because we'd
rather stick with the web technologies.
But it turned out really nice for developers.
So we were forced to make a really well-encapsulated
module that's really easy to use.
So that's what we're giving you today.
So really, this learning exercise, it's all about how
do you use the plug-in, right?
So that's the main thing.
And it's ready to go.
You can just steal it, and you can use it.
So really, using this plug-in--
so this plug-in comes down to 20 functions.
So if you can call these 20 functions, you can do
everything that you need to do.
Paul--
[LAUGHS]
DAVE FISHER: Awesome.
Paul talked a lot about network
communication, things like this.
That's really exciting stuff.
You should learn how to do it, but you don't have to.
So this makes it really easy for you.
And so we're going to use these 20 functions that are in
the plug-ins to implement these buttons.
That's what the learning exercise is all about, right?
We're not going to go through all of the
learning exercise today.
I'd hate to steal your fun.
We're just going to show you one of the steps so you can
see how the flow works, right?
This is going to increase the number of people
that actually do it.
It just went from 10% to like 15%.
So we're going to show you one button, the
initialization button.
What needs to happen in initialization is you need to
get this plug-in, which was written in C++ and compiled,
accessible in JavaScript.
So that's kind of the first step.
And then once you make it accessible in JavaScript,
you're going to make three clients, for discovery,
pairing, and Anymote.
So it's a really natural breakdown.
We'll go through the code extremely quickly.
So right now, that button just says, hey, I'm not
implemented.
What you're going to have to do to implement it is you're
going to have to create an embed--
this is kind of our bridge--
of a certain type.
So you add that to the DOM.
The embed, by the way, we've actually made visible.
So when you click on this initialize button, the embed
is this white square.
So that is the embed.
Usually you don't make them visible, but we did for
learning purposes.
And then you've got an embed element which essentially is
the plug-in.
You've got one crazy function to call on it.
That crazy function brings you into JavaScript-land.
So pretty simple.
Kind of a crazy method name, but pretty simple.
Then you need to initialize.
So the initialization step, there's an init, and then the
main thing that happens in initialization is your
certificates.
Your certificate is how you're identified to the TV.
Once you're paired, it's that certificate which says, hey, I
trust this device.
It's used for encryption.
Some boilerplate stuff in here.
You don't have to worry about the details.
But once you've got that certificate made, and once
you're paired, you want to hang on to that certificate.
And then initialization is ready to go.
So you make these three clients,
discovery, pairing, Anymote.
Say it a couple times there.
So we're going to stop with the exercise here, but that's
kind of the first step.
If you were to do discovery, pairing, and Anymote, there
are some API functions for discovery, to start the
pairing process, to send the response, to start the Anymote
session, and to stop it.
And then the most important slide is, once you're
connected, what can you send?
What can you tell the TV?
Key events, obvious.
Mouse events, which I showed you.
Mouse wheel events, for scrolling.
You can actually send arbitrary data types.
Right now that means sending a string.
That's how I did the Hello World thing.
Fling, I'll come back to in a second.
Ping is just to test the health.
Without a doubt, my favorite is
definitely send fling, right?
Just because if you think about how Android works, how
communication in Android works, I mean, if you can send
an intent somewhere, you can do a lot, right?
And this gives us the ability to send intents.
All kinds of intents you can send.
I showed you the code here for flinging the page that you're
currently on.
Probably more fun than that though, is if you wanted to
fling YouTube, you can send a URI that will open YouTube
specifically.
If you wanted to open the TV player, you could send this.
You could actually send parameters, like channel and
things like that, on the TV player.
If you had an app that you wanted people to download on
the TV, you could send a link to Market to open up your app.
So it's like, hey, download the other half
of this tool, right?
So that one, I think, is useful for people.
And then it's kind of crazy, but you're passing a string.
But you can send almost any intent.
You can send extras, URIs.
You can set the category.
It is kind of a crazy string format.
There's a function called toUri, which creates the string
if you're on an Android platform.
And really, what the receiving end does is call parseUri.
So you can send anything you want.
You just have to know how to format the string.
I've got some advice for how you do that in a
doc that I've made.
But if you were developing your own app, you would know
your package name, right?
You would know the names of the activities
that you want to launch.
And you can explicitly call those things,
which is really great.
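As an illustration, the helper below assembles strings in the shape Android's `Intent.toUri()` produces and `Intent.parseUri()` consumes. The URI grammar (the `#Intent;...;end` fragment, `component=`, `S.` string extras) is Android's; the helper functions themselves are hypothetical, not part of the plug-in API, and real values would need URI-encoding.

```java
// Build strings in Android's Intent.toUri() format, which the Google TV
// end reconstitutes with Intent.parseUri(). These helpers are
// illustrative only; the URI grammar itself is Android's.
public class IntentUriSketch {

    // A plain VIEW intent for a web page: "fling" the URL to Chrome.
    // toUri() folds the URL's scheme into a scheme= parameter.
    public static String viewUri(String host, String pathAndQuery) {
        return "intent://" + host + pathAndQuery
                + "#Intent;scheme=http;action=android.intent.action.VIEW;end";
    }

    // Launch a specific activity in your own app by component name,
    // with one string extra (S.<name>=<value> in the grammar).
    // Values are kept simple here; real ones must be URI-encoded.
    public static String launchUri(String pkg, String activity,
                                   String extraName, String extraValue) {
        return "intent:#Intent;"
                + "component=" + pkg + "/" + activity + ";"
                + "S." + extraName + "=" + extraValue + ";"
                + "end";
    }
}
```

The second form is the "download the other half of this tool" pattern: once your app is installed on the TV, you can bring up its exact activity by component name and start sending it input.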
So these are the 20 functions that you have to learn to use
this thing.
I wanted to definitely give credit to Dave Hawkey.
He's my 20% hero.
He wrote the C++ code that all this is built on.
He also open-sourced all the C++ code, which is really good
if you're a C++ developer.
It's all out there, everything that he used to make it.
C++ works on a lot of platforms.
I'd love to see somebody run with it with iOS or to do
their own thing with the C++ code.
And then your next step is to go out there and do the
learning exercise.
Anybody going to do it?
All right, show of hands.
Come on.
Everyone in the room just raised their hand.
This is great!
So that's all we've got.
Thank you for coming.
[APPLAUSE]