CHRISTIAN KURZKE: Welcome, everyone, to our session.
So if some people are coming in, I know I'm
competing with lunch.
So I really appreciate that you guys made it away from lunch early to join us here.
We are going to talk today about how to get
content to Google TV.
And if you guys have been here in the earlier Google TV
session, you know that there is nothing that people love
more than getting content on their television.
So I'm really excited you guys are here.
And let's just say you have a big audience for your
applications.
So that's really good.
Let me introduce myself.
I'm Christian Kurzke.
I'm a developer advocate with the Google TV team.
And with me today we have Andrew.
ANDREW JEON: Hi, my name is Andrew Jeon.
I'm a platform engineering manager on the Google TV team, covering A/V and general platform features.
CHRISTIAN KURZKE: And Mark.
MARK LINDNER: I'm Mark Lindner.
I'm a tech lead on Google TV's core applications and
frameworks.
CHRISTIAN KURZKE: Great.
So those are the guys who actually make the magic happen
on the platform, so really cool.
So hopefully by now, you've all seen the new devices, the
new Google TV devices.
So if you haven't, go check them out in our sandbox right
out in the second floor here.
I just want to highlight a few things.
So these are the new-generation devices from Sony, from Vizio, and from LG.
What they all have in common, they are ARM-based devices.
And I'm really excited.
I know a lot of people outside of the US have been asking us
the last year when can they finally get their
hands on a Google TV.
And those, or at least some of them, will be available
internationally.
So Sony has announced they're going international.
So keep in mind, when we talk about Google TV devices, we'll focus on all of those devices.
And the features that we talk about today are mostly available on the ARM-based devices, including the international ones.
So what I want you to keep in mind is how people interact
with television.
And we talk a lot about content in this session.
And it's really streaming entertainment, streaming
movies, videos.
It's really the big thing in the living room.
People like to be entertained while they're on the bus,
maybe on their cell phone, but they definitely love their
content on television at home.
There was also another session earlier today by Osama, and he
was talking about how to create a
beautiful user interface.
Of course, you need that, too.
And then we'll have Paul talk about second screen.
But the centerpiece in the living room is really content.
It's all about content, content, content.
So what I'm going to cover or what we are all going to cover
in this session is how do you create beautiful content
applications for Google TV?
Some of the key features for this is you
want streaming content.
Usually people want their content at their fingertips.
They want it now.
They want access to a huge library of content.
How can you ensure that they get the best quality possible?
How can you do it securely so that you as a content owner
feel secure that content doesn't fall into bad hands?
And how can you create a beautiful, integrated
experience for your viewers?
So let's get started and take a look at streaming.
So what is streaming really all about?
And I'm sure you guys are all developers.
You've written code.
You know what streaming is.
A lot of people, basically when you do music streaming,
you just take an MP3 URL, you have--
[COUGH]
CHRISTIAN KURZKE: Oops.
That's all right.
We'll keep you muted a little.
So the HTTP streaming of music, it's not really streaming in the same sense as video.
And for music, it's actually simple because you can just
download and playback at the same time.
You basically have enough bandwidth on all devices
nowadays to just stream perfect high-quality music.
For video, bandwidth is still the constraint.
We're still in an age where we don't have gigabits to every
house, although I've heard people are working on that.
But what we really look for in streaming video is how can we
optimize the bandwidth that your viewers or your users
have to get the best possible content?
And so this is just a snapshot from one of my lab tools.
But it shows basically when you play back video, you
always get bursts of high bandwidth, and you basically
load, you buffer.
And in this particular scenario, we don't really utilize the bandwidth fully, because there are a lot of times when we don't download any data even though we could, so we're not really delivering the best possible experience to the viewer.
And so if you encode at too low a bit rate, you're leaving quality on the table.
On the other hand, if you have a higher bit rate encoding than your viewers' home network connection can handle, then you actually get this whole reloading, re-buffering.
And let me tell you, there is nothing people hate more in their living room than staring at a spinning loading indicator in the middle of a movie.
So we want to make sure you can do better than that.
The other thing that I want to put out there is a lot of
people have heard about variable bit rate.
And they're like, oh, yeah, that's the solution.
We need variable bit rate.
Variable bit rate is OK.
It gives you the best possible perceived encoding for your
video stream, but it's actually still one stream.
After you're done encoding variable bit rate stream, you
end up with one stream with an average bit rate of--
pick one--
3 megabits, 5 megabits, 7 megabits.
But this is not adaptive at run time to the network conditions that your viewers or your users have.
So in an ideal world, if we had all the computing
resources in the world, we could encode a stream on the
fly for each network connection.
The challenge then is the network speed also changes.
If you watch a two-hour movie, maybe your kids in the back play a game, or maybe your neighbor turns on the microwave and jams your Wi-Fi signal-- your bandwidth might fluctuate, even throughout the movie.
So what you need to do is you need to be able to actually
dynamically adapt to the bandwidth that is available to
you while you play back.
So how can we do that?
The key word for this is "adaptive bit rate streaming."
And it's not just variable, but it's actually adaptive.
That means we are actually monitoring the playback
bandwidth as we play back the movie, and we adjust
accordingly.
So how do we do this technically?
We don't encode the video for each one of the viewers, for
each one of you, because that would be a lot of encoding.
So usually we have a set of pre-recorded,
pre-encoded bit rates.
So we have a low bit rate, a medium bit rate, somewhere between 5 and 10 different quality streams.
And then we have a manifest-- an XML descriptor or a playlist-- which describes all this.
And then in the playback, you basically point to that manifest.
The playback stack knows that if there is headroom above the bit rate that is currently playing, it can switch to the higher quality stream.
If it notices the buffer is getting shorter and shorter,
it temporarily can switch to a lower quality stream.
On the end, your users and your viewers on the screen see
the best possible content that they can with their given
network connection.
There's nothing we can do.
If they have a 2-megabit network connection in their
home, they will see 2-megabit quality.
But I'll talk to you later how you can actually at least
detect this on your end.
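To make that concrete, here is a minimal, hypothetical HLS master playlist of the kind the playback stack points at; the bit rates and paths are invented for illustration. Each EXT-X-STREAM-INF entry advertises one pre-encoded variant, and the player switches among them at run time.

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=500000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=960x540
medium/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=7000000,RESOLUTION=1920x1080
full/index.m3u8
```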
So there are various standards for this.
So I didn't invent this.
We didn't invent this.
People have been doing this.
There is HTTP Live Streaming, HLS.
There is MPEG-DASH.
There is smooth streaming.
And believe it or not, there's a lot more.
Because this is cutting edge stuff, and it's really pushing
the boundaries.
And companies who do this for a living, they all believe or
know that they have the best possible solution for this.
So they all tweak the standard just a little bit and push it
a little bit one way or another.
So we'll talk about that, too.
So in Google TV, what you can do is our playback stack, our
media player, it actually supports playback of adaptive
bit rate streams currently in HTTP Live Streaming format.
So if you, for example, in your code, you just use MediaPlayer and you set the data source to be an m3u8 file-- the manifest that I explained-- then we will play back an adaptive bit rate stream.
We also support Widevine.
So that's basically out in Google TV.
If you have one of our boxes today, you can do this.
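In code, that can be as little as the following sketch; the manifest URL is hypothetical, and error handling is trimmed for brevity.

```java
import java.io.IOException;

import android.media.MediaPlayer;
import android.view.SurfaceHolder;

// A minimal sketch of adaptive playback: point MediaPlayer at an
// .m3u8 manifest and the platform stack monitors bandwidth and
// switches variants on its own.
void playAdaptiveStream(SurfaceHolder holder) throws IOException {
    MediaPlayer player = new MediaPlayer();
    player.setDataSource("http://example.com/movie/master.m3u8"); // hypothetical URL
    player.setDisplay(holder); // render video into an existing surface
    player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mp.start(); // begin playback once the stream is prepared
        }
    });
    player.prepareAsync(); // prepare without blocking the UI thread
}
```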
And now I actually want to hand over to Andrew, and he
can tell us a little bit more about what the engineers have
been up to recently.
ANDREW JEON: Thanks, Christian.
So as Christian mentioned, the Android framework has been supporting HLS and the Widevine media format.
But as Christian mentioned, various techniques have been
introduced by multiple companies.
From the beginning, there was HLS by Apple; Apple did a beautiful job of coming up with this idea.
And then later on, Microsoft realized that it has lots of issues, so they extended the protocol and came up with something called the Smooth Streaming protocol.
And we realized that there are a lot of content providers who
are using Microsoft format.
And we recognized that, and we thought it is very important
to support it from our platform as a default without
having to do any special thing.
So now we are announcing a Google TV platform which
supports Smooth Streaming protocol from the underlying
platform layer without having to do anything from the
developer's end from that Java application side.
So actually, from the developer's perspective, in order to play Smooth Streaming-encoded content, simply create a MediaPlayer, as you can see, and just hand off a data source URL which points to the ISM manifest URL.
Then the underlying MediaPlayer will handle playback gracefully.
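A sketch of what that might look like; the .ism manifest URL is a made-up placeholder.

```java
// Smooth Streaming goes through the same MediaPlayer path as HLS;
// the platform recognizes the manifest and handles the protocol.
MediaPlayer player = new MediaPlayer();
player.setDataSource("http://example.com/video.ism/Manifest"); // hypothetical URL
player.prepareAsync();
```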
And then later on, some content providers got together
and realized that, oh, smooth streaming is good, but it's a
little bit geared toward Microsoft, as we can see.
So they came up with a standard called MPEG-DASH, and
a lot of Hollywood studios are supporting it.
So we are also actively working on supporting that
from the platform layer.
But right now, we have HLS and the Widevine media format and smooth streaming.
Also, there is something called progressive file
download, basically downloading a file.
And as it gets buffered, you can actually read it, and then
play back, which is not adaptive streaming.
However, even if we keep adding these popular streaming
protocols, we can't actually support
everything in the world.
Because as Christian mentioned, there are a lot of content providers who have legacy content in their back end, so it takes time to re-encode to a certain standard, or they are already locked into something which we don't support.
So there are a lot of custom streaming servers.
So how can we solve this problem?
So we thought about it.
And the reasons people use a custom streaming protocol are the following.
First of all, they may have a special need, or they wanted
to add special features to differentiate themselves from
competitors.
Or they are so innovative that they adopt futuristic features even though the draft specification is not really done.
Or some of them are locked into very old technology.
So in this case, what can we do?
So we came up with an idea of, let's say, if we can create an
API which can enable developers to write their own
streaming protocol and content processing, that would be very
nice so that we don't have to embed thousands of streaming
protocols in our platform.
So from the API's perspective, as we can see, the MediaPlayer interface in Android expects a data source which is a URI or a String or a FileDescriptor, which means that the underlying player implementation must know what that means.
So in order to overcome this limitation, we introduced
something called MediaSource API.
So what this API does is from the Java layer, as an Android
application developer, if you have a custom streaming
protocol, you can just write Java code to handle streaming,
basically getting the data chunk from
your streaming server.
And then if you encoded this content in a container which is not standard or something we don't support, you can extract the data from your container yourself in Java.
The extraction of the audio-visual stream yields an audio elementary stream and a video elementary stream.
And then you can just push down these two streaming
sources to the underlying media player.
And then what's going to happen is, although we delegate the streaming handling and the container parsing to you, once an elementary stream is pushed down to the underlying MediaPlayer framework, it can still use the hardware-accelerated codec, rendering, audio-video synchronization, and even PTS mapping and all kinds of stuff.
So we basically separated the portion that developers can
adapt to their own proprietary streaming and still use
hardware features.
So how it works is basically we have a class in our API package called GtvMediaPlayer, which is a subclass of the standard Android MediaPlayer class.
And we added an API called setMediaSource, and then it
expects a class, which is extended from either
PullMediaSource or PushMediaSource.
The reason why we have two different media sources is the use case.
Let's say if you are trying to stream live content, usually
live content is time sensitive.
So we have to display the content as it arrives.
It shouldn't actually delay or buffer too much.
In that case, PushMediaSource works better.
But in most cases of on-demand streaming, you can keep the
buffer to keep the quality high.
So in that case you can use PullMediaSource.
Both PullMediaSource and PushMediaSource inherit from the MediaSource interface and implement basic functionality for you.
And this is a skeleton of the interface.
As you can see, there are multiple methods that will be invoked by the underlying media player to drive the actual streaming pipeline.
So if you take a look at the sample, for example, if you want to implement your own MediaSource using the PullMediaSource interface, PullMediaSource actually implements everything defined in the MediaSource interface for you for convenience, so you don't have to do anything special.
There are only a few APIs you need to implement.
So, for example, here, the dequeue-access-unit callback: an access unit is a chunk of data you want to pass down to the lower-layer media framework.
And this event is called when the underlying media framework has decoded and displayed all the content from its buffer.
So you can keep pulling the data and
keep it in your queue.
And then your queued data will be de-queued by this event.
So that's very simple.
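As a rough sketch, a custom source might look like the following. The callback name, the AccessUnit type, and the wiring shown here are illustrative guesses at the API, not its real signatures-- the released sample code is the authority.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a custom streaming source built on PullMediaSource.
// Names other than PullMediaSource, GtvMediaPlayer, and
// setMediaSource are hypothetical.
public class MyCustomSource extends PullMediaSource {
    private final Queue<AccessUnit> queue = new ArrayDeque<AccessUnit>();

    // Called by the underlying player when it has drained its buffer
    // and wants the next chunk of demuxed elementary-stream data.
    @Override
    public AccessUnit onDequeueAccessUnit() {
        fillQueueFromServer();
        return queue.poll();
    }

    private void fillQueueFromServer() {
        // 1. Pull the next chunk over your proprietary protocol.
        // 2. Parse your custom container in Java.
        // 3. Enqueue the extracted audio/video access units.
    }
}

// Usage, e.g. inside your activity:
//   GtvMediaPlayer player = new GtvMediaPlayer();
//   player.setMediaSource(new MyCustomSource());
```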
So now let's look at the actual software.
CHRISTIAN KURZKE: So here's one example application.
We were working with a company you may have heard of--
Sirius XM.
They do satellite radio.
And they also stream their content over the internet using a custom protocol, because they have very unique needs for encoding their media.
And initially, they came to us, and they're like, well, we would love to work with you, but how do we support our custom streaming technology?
And so we worked with them and explained the MediaSource API.
And they were actually demoing yesterday on our devices.
And I think the quote says it pretty fittingly.
They were really able to very quickly develop this
application using the new MediaSource API.
And the best part was they did not have to
develop any native code.
They could do everything in the Java layer,
in the Android layer.
So this was really very easy for them to put their software
onto Google TV.
So besides all the cool stuff that Andrew has talked about,
we also have a few new features in general.
So the GtvMediaPlayer class, it also supports multiple
audio tracks, so if you have multiple languages for your
video format.
And, of course, it supports closed captioning and subtitles.
So we actually have Timed Text Markup Language (TTML) support.
And we have also a widget that you can customize for closed
captioning.
So here's an example of what it looks like.
And this is very basic.
This is just basically rendering TTML on top of a
video stream.
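For reference, a TTML document for a caption like the red italic one in that sample might look roughly like this; the timings, styling, and text are invented for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:tts="http://www.w3.org/ns/ttml#styling">
  <body>
    <div>
      <!-- One caption, visible from second 1 to second 4 -->
      <p begin="00:00:01.000" end="00:00:04.000"
         tts:color="red" tts:fontStyle="italic">
        Welcome to Google TV
      </p>
    </div>
  </body>
</tt>
```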
So now you've seen it's really easy to actually
stream to Google TV.
If you have your content encoded in any of the common
formats like HLS or Widevine or now also PlayReady, you can
easily do this.
If you have special needs, we have APIs for you to implement
support for that as well.
So now the question is, how can you make sure the viewers really get the best possible quality?
And how can you actively monitor what the viewers actually see?
And with that I'm going to hand it back to Andrew, and we
can look at some of the APIs that we came up with.
ANDREW JEON: So let's say you use all these APIs to support your streaming protocol and your container format.
And then you use the multi-track audio APIs to support multiple audio tracks and the subtitles.
And then now, you open up a service.
So a lot of users are watching your content.
And the next thing you are interested in is, what was the quality the user was experiencing?
The fact is that adaptive streaming adapts to fluctuating bandwidth.
So at some point in the video playback, the playback quality
may be worse.
And at some point, it could be better.
So how can you actually monitor what the actual experience was?
Let's say you have five tracks: 200 kilobits, 500 kilobits, 1 megabit, 3 megabits, 7 megabits, for example.
So say a user paid $5 to watch this movie.
One user was watching all the way from the beginning to the end at 500 kilobits.
And then the other user, who has a relatively better
bandwidth, has been using 1 megabit.
And then at some point in another part of town, another
user was able to watch with 7 megabits.
Is it fair to charge the same amount?
So you may want to do it slightly differently.
Or let's say a user had an issue with the ISP, so they
couldn't watch the movie in high quality.
So they are claiming, oh, please give me another day.
I need to watch it again with better quality, or
something like that.
But if you don't have a mechanism to measure what was
actually experienced, there will be a big issue.
So we paid attention to this issue.
And then the biggest--
there are two things we need to care about.
One is actual network bandwidth during the playback,
how the network was fluctuating.
At the same time, even if the network is sufficient, that doesn't mean the user was able to watch the video as it was encoded and supposed to be displayed.
Let's say the video has been encoded at 30 frames per second, but for some reason the user was running something in the background, so the hardware slowed down and half of the frames were dropped in the middle.
So even if you have full bandwidth for the content, if the device was not capable, the actual experience will be affected.
So once we have a set of APIs to measure these two things,
then developers can actually ensure customer satisfaction.
And by monitoring all the experience, you can decide
whether to refund or extend the playback time or do
whatever to satisfy your customer.
And also, if the application itself has access to this
information in real time, you can make a good judgment.
Let's say bandwidth goes down too much and has been dropping too often; then you can actually recommend that the user view it later.
Or if the bandwidth hasn't been able to go up to full 1080p quality for a long period of time, you can suggest to the user, oh, you have an issue with 1080p playback, so the user will know rather than just complain.
So this is the set of APIs we came up with to measure these things.
First of all, actual frames-per-second data can be measured.
From your Java application, you can kick off a monitoring heartbeat to the underlying media player.
If you want to measure frames per second, the underlying media player works with the low-level video SDK to measure the actual frames per second in the renderer, so it can return how many frames per second have been displayed to the user.
And network bandwidth.
As the bandwidth goes up and down, we can report it periodically-- every second, or much less often if you prefer.
And when the network bandwidth changes, we can report the actual bandwidth measured right before that point up to the application.
And adaptive streaming uses buffering, and the buffer actually changes in size.
Let's say at some point a 500-kilobit stream was being played; then the buffer doesn't have to be long, because each chunk is smaller, so the buffer can stay small.
And if quality goes all the way up to 1080p, then the buffer size may get larger to keep, let's say, up to 10 seconds.
But 10 seconds of high-quality video requires more memory.
And the buffer size and the actual filling ratio are very important information.
Let's say I allocated 30 megabytes and 10 megabytes are filled; that's a good indicator of the quality for the upcoming seconds.
But if we only provide the buffer size and the fill level in bytes, there is a lot of work for developers to do, because depending on the encoding quality, you need to calculate how many seconds of video are being buffered.
So we added another API which can return the buffered media playback duration.
Regardless of the size of the buffer, regardless of the encoding, if you just want to know how many seconds of video remain in the buffer, you can know that: 2 seconds remaining, 10 seconds remaining, 0 seconds remaining.
So we thought that would be very useful.
And then audio information.
Right now we don't really have a lot of use cases for audio information, because usually audio is encoded at a single bit rate.
But in the future, if a developer chooses to vary the audio encoding-- let's say from 10 K to 44 K, depending on the network bandwidth-- to squeeze every last bit out of the network, then you can measure that as well.
And in the future, a developer might want to change it on the server side: if the bandwidth is good enough, send out 7.1-channel audio; if the bandwidth goes down a little, 5.1; and if the bandwidth is very limited, only two channels.
So we don't really have a use case like that today, but we
thought that would be very useful information.
So there's an API to return that.
The default implementation and the default set of error codes in Android are quite limited, so we added a bunch of extra error codes to monitor various error situations.
And the good thing about this is once you start monitoring
all these variables, you can use the Analytics framework to
send this data back to the Google server, and you can
monitor that.
So from the coding point of view, there are two ways to surface this API.
One is using existing Android events, for example the OnInfoListener event; the other is adding our own events.
So, for example, in the case of OnInfoListener, this
example shows you how to monitor network bandwidth
changes and frame per second and dropped frames.
So basically, the same event handler you usually use for standard Android video playback, OnInfoListener, can receive the event.
And then there are extra "what" types defined-- for example, video info network bandwidth, video info FPS, or dropped frames.
And you can write a switch statement to check that.
And then what is the type of the data in the accompanying extra value?
For example, for network bandwidth, the extra will be kilobits per second as an integer.
And the frames per second is also an integer.
So you can either log it, or send it to your server, or send it to Analytics.
You can do whatever you want.
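A sketch of that pattern; OnInfoListener itself is the standard Android callback, but the three info constants below are placeholders for the Google TV-specific values, which ship with the API package.

```java
import android.media.MediaPlayer;
import android.util.Log;

// Placeholder constants -- the real values come with the GTV API package.
static final int INFO_NETWORK_BANDWIDTH = 0x1001; // hypothetical
static final int INFO_FPS               = 0x1002; // hypothetical
static final int INFO_DROPPED_FRAMES    = 0x1003; // hypothetical

void attachQosListener(MediaPlayer player) {
    player.setOnInfoListener(new MediaPlayer.OnInfoListener() {
        @Override
        public boolean onInfo(MediaPlayer mp, int what, int extra) {
            switch (what) {
                case INFO_NETWORK_BANDWIDTH:
                    Log.d("QoS", "Bandwidth: " + extra + " kbps");
                    break;
                case INFO_FPS:
                    Log.d("QoS", "Rendered: " + extra + " fps");
                    break;
                case INFO_DROPPED_FRAMES:
                    Log.d("QoS", "Dropped frames: " + extra);
                    break;
            }
            return true; // handled; this is also where you'd report to Analytics
        }
    });
}
```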
So all of this sounds good, but if it's really difficult
to use, then that's not going to be a good thing for
developers.
So when we release this API as a package, we will also open
source some of the sample code that demonstrates these APIs.
For example, we do support smooth streaming in the underlying media framework, but in order to demonstrate the power of the MediaSource API, we actually implemented the full-blown Smooth Streaming protocol in Java and then used the MediaSource API to play it.
So you will have a full-blown Smooth Streaming protocol implementation in Java, and you can refer to it or change it or whatever you want.
And along with the MediaSource API, we support PlayReady DRM.
So if you have a PlayReady DRM server, you can send the license request and receive the license, and there are certain conventions to parse that out and send the encrypted key portion back to the underlying framework to load the session key.
So when you send down the audio/video elementary streams, the underlying media framework, the DRM framework, will know how to decrypt and display.
And multi-track audio, the TTML-based closed captioning rendering, and the actual rendering code itself will be open sourced.
So the sample Christian showed you, with the little red and italic text-- that code is a sample.
It doesn't support the full-blown TTML spec.
But if you are using a rare feature of TTML on your server side, you can extend the rendering widget to handle that.
And full-blown QoS API demonstration will be part of
the sample.
So that you can just take a look and then use it in your
application.
OK, now I will hand it off to Christian.
CHRISTIAN KURZKE: Thanks, Andrew.
So I think this is really awesome.
So basically, this is a summary of a lot of the
features that Andrew's team has been
working on in the platform.
And I think it's exciting for developers who create Google
TV applications that talk to servers to stream content to
their app and to show really high-quality videos in their
application.
And it gives you all the necessary APIs to monitor,
make sure the video is displayed properly, and so on.
So actually, he already advanced--
so wait, there's more.
So this is actually something I want to tell you.
It's actually a session going on right in parallel, which is
unfortunate for people who are here live.
Fortunately, all the sessions are recorded, so please check
it out when you have a chance afterwards.
So YouTube has announced that there are now content APIs.
And if you don't have servers to host all of your content,
you can use the YouTube servers and the YouTube
infrastructure.
And we know quite a bit about how to stream high-quality
content, so this is really actually pretty exciting.
So you don't need all of the infrastructure back-end stuff.
You could just use ours, and you can focus on creating the
app for playing back this content.
So there is an entire session.
It's called the YouTube Player API and the
YouTube Content API.
It's basically going on right now as we speak.
Now we've talked about how to get content there.
And I'm just going to talk a little bit quickly about how
to make sure your content actually stays secure and gets
delivered securely.
And in the interest of time, you all know what a DRM is supposed to do.
I just want to highlight: just because you're streaming your content over HTTPS does not make it a DRM.
DRM is a lot more.
It's license management.
Here is sort of a typical use case where, for example, a
Google TV application, it would talk to a commerce site
where you browse through a library of movies.
You purchase a movie.
You have all those transactions.
And then you use Media Player to actually
request the video stream.
The video stream comes back with a license request, and then you deal with a license server, and so on and so forth.
And again, just like streaming, there's a whole set
of industry standard protocols.
And Android, actually, in Honeycomb, Version 3,
introduced the DRM framework, which is really powerful.
It is extensible.
It's the standard Android DRM framework, which, of course,
works on Google TV.
So today we do support Widevine, which is a very
commonly used DRM system.
And actually, we have been working on some plug-ins.
And Andrew can talk a little bit more about the new stuff
that his team has been working for DRM.
ANDREW JEON: So we actually already mentioned PlayReady, so it's kind of funny that we are announcing it again here.
But in addition to Widevine, we added PlayReady DRM as a default to all of the second-generation ARM-based Google TV platforms.
And the way it's supported is it's still using Android DRM
framework as is.
But if you look at the Android DRM API, it's so powerful.
So you can almost feel like you can do anything.
But the problem is it's too powerful.
So it's not really well defined for each specific DRM.
So in the case of Widevine, the way Widevine is accessed is through that same API; you need to know the ID-and-key-pair convention to operate the underlying Widevine DRM.
So we took the idea of how Widevine was supported and
then applied it to PlayReady.
So once we added the PlayReady DRM in the underlying platform, there are certain interactions to work with PlayReady.
So we came up with a convention of ID-and-key pairs to send all the parameters to the PlayReady DRM in the underlying platform to operate it.
So right now it is already built into the Google TV second generation.
The feature set it supports is basic license acquisition and license management, and it can be extended to support a customized license server from your application.
So let's say you have your own license server.
Then you can talk to your license server: basically, call the API to get the license request text, send that off to your license server, receive the license, publish the license into the underlying framework, and then send down the video data so that it can be decrypted and then played back.
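As a sketch, that handshake could go through the standard android.drm framework roughly as follows; the PlayReady MIME string, the key names, and the helper parameters are assumptions, not the documented convention-- the released sample code spells out the real one.

```java
import android.content.Context;
import android.drm.DrmInfo;
import android.drm.DrmInfoRequest;
import android.drm.DrmManagerClient;

// Hypothetical sketch of PlayReady license acquisition via the
// standard Android DRM framework.
void acquireLicense(Context context, String playReadyHeader, byte[] licenseBlob) {
    DrmManagerClient drm = new DrmManagerClient(context);

    // 1. Ask the framework for the license-request text.
    DrmInfoRequest req = new DrmInfoRequest(
            DrmInfoRequest.TYPE_RIGHTS_ACQUISITION_INFO,
            "video/vnd.ms-playready");          // assumed MIME type
    req.put("Header", playReadyHeader);         // key name hypothetical
    DrmInfo challenge = drm.acquireDrmInfo(req);

    // 2. POST challenge.getData() to your own license server (not shown)
    //    and receive the license blob back as licenseBlob.

    // 3. Publish the license so the elementary streams can be decrypted
    //    on the trusted video path.
    challenge.put("LicenseResponse", licenseBlob); // key name hypothetical
    drm.processDrmInfo(challenge);
}
```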
And we also--
it is already there, but we wanted to mention that even if
there's a DRM, if the DRM is implemented in the CPU,
basically if all the DRM code is running in the CPU, then
it's not really secure.
Because if a malicious user or hacker can access the platform
and then gain access to the route, then everything in the
main CPU can potentially be accessed.
So we introduced something called TVP, the Trusted Video Path, which means that once the data is decrypted, that decrypted data cannot be accessed by the main CPU.
Whatever piece of software is running on the main CPU will not have access to the decrypted data.
There are various techniques-- it depends on each SoC.
The chip vendor has multiple choices; they make their own choice and apply their own design.
But the data is kept in a separate section of memory so that it is protected.
And once the data is decrypted into that section, the way the video pipeline works from decryption to decoding to rendering is through handles.
Let's say you've got a buffer with encrypted data, and you send that buffer to the DRM security module.
The security module will decrypt it and keep it in the secure zone, and then return a handle which points to that buffer, which is handed off to the decoder.
The decoder implementation has access to the secure area, so it can get to the data and decode it.
And the uncompressed data will also be stored in the secure region, and another handle is returned to point to that and sent to the rendering piece-- usually the FRC in the TV case, or the HDMI output in the set-top-box case-- which sends it off to the display.
So we can ensure video protection.
And this feature is integrated with smooth streaming and Widevine.
So if you are using smooth streaming or Widevine, or any other custom streaming protocol with the MediaSource API and PlayReady, the content will travel the trusted video path.
So just to summarize, we have an Android DRM framework which
enables you to access the DRM feature from the Java
application.
And once data is decrypted, it is protected through the trusted video path all the way to the display unit, whether the device is a TV that supports HDMI input or a buddy box that streams content in but sends data out over HDMI.
We always protect the data inside of the box using the trusted video path.
And whenever data goes in from an external box or goes out to an external box, we use HDCP content protection to secure the content.
Now I will hand off to Christian again.
CHRISTIAN KURZKE: Thanks, Andrew.
Yeah.
I think this is really exciting.
So we have now a full set of Java APIs.
And I really want to stress this.
There is no native code required.
You can just stay in the comfort of the Android virtual machine, and you can deliver super high-quality content with adaptive streaming.
You can even support your custom streaming protocols.
We have support for smooth streaming and
Widevine and HLS.
And we have a full DRM system, which will keep your content
secure all the way end to end.
So now, how can we make this convenient and
integrated to use?
So our vision-- and our team is working really hard on this-- is to make Google TV sort of the user interface to your entertainment, to your living room.
It's really the way you get to content in your living room, whether from broadcast television, satellite, or other set-top boxes connected over HDMI, or streamed directly over the internet and brought to the television.
So let's first look at-- and I'll browse through this.
There was an earlier talk where we did a
lot about user research.
So we really looked at how people find content, what they want to watch on their television.
And usually, people, if they know what they want, then they
search for it.
If they don't know exactly what they want-- they just know roughly what they're in the mood for-- they can browse.
And then how do we get the content?
So there is a different ways.
So one is the back end.
You may have heard that we sort of search
stuff on the internet.
So we index videos, site maps, and
everything on the internet.
But also on the client side, we actually talk with devices.
And we have basically the client side on
the Google TV box.
We talk with both physical and also virtual devices.
So quick graphic, what it looks like.
So basically, what we do is we can retrieve a channel list or
a list of DVR recordings directly from your set-top
box, if it supports it.
And we have a lot of devices that are currently
integrated this way.
And that's how we generate basically the TV and movies
application which aggregates all the content that's
available from your-- in this case-- satellite provider.
Maybe it includes also DVR recordings that you have made.
And it also can--
I'll show later how we can get content from the
internet into this.
And when you change channels, then you just send the command
back to the satellite box.
So when the user searches, this is what our search
experience looks like.
This is what our browsing experience looks like.
And when you pick a movie and you want to play back a movie,
you basically get a list of sources.
So, for example, this movie that I picked, it may already
be on my local DVR.
It may be available from DISH, or I can get it from Amazon.
And the question that I always get is, well, how can I get my
stuff in here?
I am the new and exciting video provider, how
can I get in there?
And we'll talk a little bit about that.
So just to clarify some of the terms.
We have one-way pairing, which is the more traditional
infrared blasting way of talking with devices.
And then when we talk about pairing, we talk more about
the two-way case, where we not just change channels on a
device, but we also read information back.
And I'm actually going to hand over to Mark in a second, and
he can explain how we do this, how we integrate one device.
So the use case that we're trying to solve now is, OK,
you have Google TV at home.
You have a new device that you bought, or as a manufacturer, you're building a new device.
How can you integrate it with Google TV?
So, Mark.
MARK LINDNER: So we're going to talk about how you build your own media device.
And a media device is basically similar to a device driver: it's a software component that communicates with some external device.
some external device.
It's packaged as an APK, and you can install it by downloading it from the Google Play store.
And a media device really consists of three basic
components.
There's what we call the Media Device Controller Service,
which is a standard Android service.
And the service hosts one or more actual media device
controllers.
There's a setup activity, which is used to initially
pair your device with Google TV.
And finally, there's a settings activity in case your
device has any user configurable options that can
be changed at any time.
So this is kind of an overview of our
media devices framework.
We have the TV Player Application, which
communicates with the Media Devices Service.
And the Media Devices Service is a standard component of the
Google TV platform.
The main job of the Media Devices Service is to
coordinate access to media devices, and for each session
with a media device, it maintains the instance of the
Google TV Media Player.
So all this stuff that's in gray is stuff that's already
implemented.
And the white portion is what you would implement to create
a new media device.
So essentially, we have a Media Device Controller
Service, which hosts the device controller instance.
And that, in turn, talks to the actual device.
So the Device Controller Service, as we said, is a
standard Android service.
It interfaces to one or more media devices.
The things it does is notify the system when the device
comes online and goes offline.
It notifies the framework when the channel is changed on the
device and so on.
It can also report information from the device, like its
channel lineup, its list of DVR recordings if it has
them, and so on.
So when we implement a new Media Device Controller
Service, our framework provides a
base class for this.
So you don't have to build it from first principles.
We sub-class abstract device controller service, and we
fill in some details.
The main things we do here are: we construct our device object-- the device object basically describes the device's attributes and capabilities and so on-- and we construct our device controller instance for that device.
So here's a simple example of some hypothetical device with
some various options enabled.
And the final thing we can do here is register a timer to call us back at fixed intervals, so we can check for channel updates in case our device is the type that has a channel lineup that can change over time.
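A sketch of what such a service might look like; the base-class name, the Device builder, and the timer helper are illustrative stand-ins for the framework's real API.

```java
// Hypothetical names throughout -- the framework's base class defines
// the real ones; this only illustrates the shape of the service.
public class MyBoxControllerService extends AbstractDeviceControllerService {
    @Override
    public DeviceController onCreateDeviceController() {
        // Describe the device's attributes and capabilities.
        Device device = new Device.Builder("MyBox")
                .setHasChannelLineup(true)   // exposes a channel list
                .setHasDvr(false)            // no DVR recordings
                .build();
        // Optionally poll the device for lineup changes, e.g. hourly.
        registerLineupUpdateTimer(60 * 60 * 1000 /* ms */);
        return new MyBoxController(device);
    }
}
```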
So now the final thing we implement is the device
controller.
This is the actual component that talks directly to your
device through whatever
protocol that device publishes.
So one thing we do is we handle user key press events,
like if you press channel up, channel down, fast forward on
your remote, that gets passed to the device controller for
processing.
And we also handle tune requests.
So if we get one of our TV URIs to tune to a channel or a
DVR recording or whatnot, that gets processed as well.
CHRISTIAN KURZKE: Actually, I just want to quickly
highlight one thing.
So just to make sure you don't miss the
last part of the slide.
So basically, when you have this device controller, you
can do tune requests to local URLs, TV URLs, or you can go
to like HTTP URLs or other URLs using streaming services.
So that's where basically it ties back to the first part of
the talk where Andrew explained how you can just use
the GtvMediaPlayer to play back streaming media.
So the video can come either from your local DVR over HDMI in, or it can come through a streaming protocol from basically anywhere in the cloud.
So this is basically a very elegant way to tie the
streaming together and make it transparent, look just like a
local device connected in your living room.
MARK LINDNER: So as we mentioned before, the Media
Device Session holds the actual media player instance.
So as Christian described, for a physical device, that's just
connected to an HDMI port.
The device controller would tell the session, yes, I want to tune to HDMI port number two, or whatever that happens to be.
But if this is a virtual controller that's representing
a virtual device that's streaming media over the
internet, it would provide the appropriate URL or data source
to the media player as necessary.
So when we implement a device controller, again, our framework provides a base class, and we subclass DeviceController.
And the main two things you would do here are implement the method performAction, which handles things like channel up, channel down, or other user actions from the remote, and tuneToChannel, where, given a channel number, we need to tune to that channel.
So in this case, we would translate the channel number
to an appropriate video URI for the media player to play,
and then we would call a base class method to notify the
framework that we want the media player instance in that
media device session to start playing that stream.
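A sketch of such a controller; performAction and tuneToChannel follow the description above, but the exact signatures, the Device type, and the notifyPlayUri helper are assumptions.

```java
// Hypothetical sketch of a device controller for a virtual device.
public class MyBoxController extends DeviceController {
    public MyBoxController(Device device) {
        super(device);
    }

    @Override
    public void performAction(int action) {
        // Remote key presses (channel up/down, fast forward, ...)
        // arrive here; translate them into your device's own protocol.
        sendToDevice(action);
    }

    @Override
    public void tuneToChannel(String channelNumber) {
        // For a virtual device, map the channel to a stream URL and
        // ask the session's media player to play it.
        String uri = "http://example.com/live/" + channelNumber + ".m3u8"; // hypothetical
        notifyPlayUri(uri); // assumed base-class call to start playback
    }

    private void sendToDevice(int action) {
        // Speak your device's control protocol here.
    }
}
```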
So in addition to those things, the typical things you
would implement are a setup or a pairing activity.
This is just an example of the pairing activity from the DISH DVR integration that we have on our existing devices.
So in this example activity, we send a command to the DVR
saying we want to pair.
The DVR displays the confirmation code on the video
stream, and then the user has to enter that code to
accomplish the pairing.
And then finally, if you have any user configurable settings
for your device, you can implement a settings activity.
This is just a standard Android preferences activity that gets installed into Google TV's settings application.
And this is, again, an example of DISH Network's-- the DISH DVR's-- settings, where we can configure things like the behavior of the fast forward and rewind keys, and so on.
And we can also display some status information about our
device like the IP address and so on.
And back to Christian.
CHRISTIAN KURZKE: Thank you.
Thank you, Mark.
So basically, just to summarize this API: first of all, you have a device that is connected to the Google TV-- and "connected" can be interpreted very loosely.
It can be an infrared protocol that you use to speak to your device.
Maybe it's a legacy device.
Or maybe you have a more advanced two-way
communication-capable device.
Usually they come with like an ethernet port.
If you know how to control your DVR device or your media device, you can write Java-layer code-- we call it a media device APK-- which is basically sort of like a driver that talks to your device.
And you can stay all in Java; no native code required.
And you can create virtual devices that get their content over streaming protocols, using all of our adaptive bit rate streaming and all the DRM that we have, and bring it to the user in an integrated way.
And then the setup and settings activities, they will
do device-specific things.
So if you need to negotiate an IP address or if you need to
negotiate a pairing pin, you can do that.
Or for things like a remote service, you might have to log in.
If you need to set maybe a user account and password and
so on, you can do that as well.
So basically, what that means is we really have one
interface to all of your content on potentially all of
the devices in your living room and all of the devices or
all of the content that's out there in the cloud.
So what that means for you as a developer-- and I think one of the biggest opportunities is really creating applications that bring content to television-- is you have access to all the content out there over streaming protocols.
You can use HTTP Live Streaming, and you can now also use smooth streaming.
And if that's not enough, you can implement your own
streaming protocol that matches your potentially
legacy streaming servers or your CDNs.
We are compatible with the industry-leading
standard DRM solutions.
We have support for Widevine and, as Andrew pointed out,
now also for smooth streaming and PlayReady.
We know there's a lot of existing content out there.
And it's a huge investment for a company to build up a
library of hundreds of thousands of titles.
So we want to make it possible for you to adapt or create a custom application to talk to your existing back ends.
And with all of this, even in the Java layer, we have a DRM trusted video path, so you can be assured that your video is really handled in the hardware-secured path.
So that's pretty powerful.
And then to make it easy for the user, you can integrate,
just as a virtual or a real media device, so users have
the exact same experience flipping channels through
their satellite box as they would have flipping channels
through maybe your virtual application.
So if you're really interested in learning more about this,
we have, of course, our Google TV Google+ page.
I encourage you to stay in touch.
We keep it updated with new information.
And of course, as developers, come to our
developers.google.com/tv page.
And I just want to thank you all.
And we'll take questions; there are two microphones.
If you have questions, you're more than
welcome to ask us now.
Otherwise, we'll be hanging around the sandbox throughout
the day, so you can come by and you can play firsthand
with some of the devices and can ask us questions.
AUDIENCE: Hello.
So are there plans to bring these features to the main
Android code base?
CHRISTIAN KURZKE: Sorry?
AUDIENCE: Are there plans to bring smooth streaming and the
other PlayReady features to the main Android code base?
CHRISTIAN KURZKE: Do we bring smooth streaming to the main
Android code base?
Andrew, do you have an answer for that?
ANDREW JEON: It actually depends on the Android team's road map and their future road map decisions.
But we just came up with this, so we haven't had a chance to
actually discuss when it's going to be suitable for them
to adopt yet.
So right now, it's only for Google TV.
CHRISTIAN KURZKE: Yes.
AUDIENCE: I have a similar question,
but from the browser.
So from the Chromium browser.
CHRISTIAN KURZKE: So will the streaming APIs--
or will it be possible in the Chromium browser to use the
DRM frameworks?
ANDREW JEON: There is a draft encrypted media protocol, which is in draft form and being actively worked on.
But it will be different from the DRM API in Android.
But there are a lot of working groups in our company and also in other companies that are working together on it; it's not available to the general public yet.
It's actually still in the works.
AUDIENCE: So it's the W3C-- what is it?
The CMB--
ANDREW JEON: Yes.
CHRISTIAN KURZKE: So is that the Chrome browser for Google
TV, or the Chrome browser--
ANDREW JEON: General, general.
Including all of them, yeah.
CHRISTIAN KURZKE: Yes.
AUDIENCE: Will information from the remote devices like the DVRs-- will it also pass copy control information as well once the stream is decrypted?
CHRISTIAN KURZKE: You mean the copy information from--
AUDIENCE: CCI.
CHRISTIAN KURZKE: CCI.
The content will come in through the HDMI channel.
I assume you mean with the HDCP security bit?
AUDIENCE: Correct.
CHRISTIAN KURZKE: So that is basically just an HDMI
pass-through.
So we would just control the playback, for example, start
the playback.
But then once the content plays back over HDMI in, we
would just basically pass through the device.
You guys can correct me.
ANDREW JEON: So we don't touch CCI.
If the CCI is 1, we keep it 1.
If it's 0, we keep it 0.
And we don't do anything else.
CHRISTIAN KURZKE: Yes?
AUDIENCE: Are there any plans to improve the control over
video content being displayed?
For example, the overlay or position TV windows, or that
sort of thing?
CHRISTIAN KURZKE: So overlays and positioning the TV window
inside of an application.
It is a frequent request.
ANDREW JEON: It is actually technically possible if you
use something called Media Device View as an object.
But we don't really have a concrete widget or a
component-type of infrastructure readily
available for you to use at this point.
But that's something we are thinking about, but we don't
really have a concrete timing yet.
CHRISTIAN KURZKE: We know it's a frequent request for
developers to take the live TV or basically take the HDMI in
and embed it in their application.
It's one of those things we want to get right, and we're still working on the proper APIs that we feel confident we can support going forward.
With overlays, we've seen Google TV applications that actually overlay over the current screen using things like toasts or toast widgets with custom views and so on.
So there is Android capability to do that.
Next question?
AUDIENCE: Any time window when you will be releasing the
sample code?
ANDREW JEON: It will be in the later part of this year, when our software update for the second-generation devices is rolled out to the market.
We are working with multiple OEMs, so even when we are done, it's not immediately available on all the devices.
It takes some time.
A little bit later part of this year.
But it will be in this year.
AUDIENCE: Thank you.
CHRISTIAN KURZKE: So basically, all the APIs that
we talk about in this session, they will come with the next
software update to the ARM devices.
So today in the sandbox, it's not quite there yet, but we're
working on basically getting the next-generation software
update out.
And as soon as this launches, we will have all the APIs, the
documentation, and all the stuff on our developer page.
So stay tuned for that.
AUDIENCE: You know, we've still got to deal with this 4:3 and 16:9 issue; some of the live content comes in 4:3.
Some of the devices-- I think I've seen one-- give you a button, and then you just stretch it to 16:9.
Is there any way we can deal with that situation in software, in an app?
CHRISTIAN KURZKE: So changing the aspect ratio
of the video input?
AUDIENCE: Correct.
4:3 to whatever--
16:9.
So press a button, like we normally do on regular TVs with the remote.
ANDREW JEON: So our platform has an API for OEMs to be able
to implement the features to change the aspect ratio.
So some OEMs do have a feature to change, some OEMs don't.
But if that's critical, we can actually strongly recommend
all of our device makers to follow this API standard to
enable aspect ratio change.
AUDIENCE: The problem with that is that they don't really implement it.
My question was, is there any way we can handle that in software, put it in our APK, and then just tell the users, in our own way, OK, press this button and it will stretch the image?
ANDREW JEON: We will take that as a request, a feature request.
AUDIENCE: Sorry, I didn't get that.
ANDREW JEON: We can take--
CHRISTIAN KURZKE: We'll look into this.
We'll note it.
I think the answer is today there
is no such API available.
But I think it's something to look into.
AUDIENCE: Yeah, I see great demand.
I mean, people see the black bars on both sides.
CHRISTIAN KURZKE: So I think if you have feature requests
or any ideas, also reach out to us on our developer page.
We have what we call an Issue Tracker, where you can submit
ideas and things you want to have implemented.
Or you can actually reach out to us on our Google+ page, and
we can start a discussion on it.
AUDIENCE: All right, thank you.
CHRISTIAN KURZKE: Thank you.
All right.
I think we're just on time.
Thank you everyone very much, and enjoy the rest of your Google I/O.