JAREK WILKIEWICZ: Hello, everyone.
I'm going to go ahead and get started.
Welcome to our session.
Thank you very much for coming.
I realize there's free drinks after this, so we're going to
try to keep you entertained until then.
We'll talk about supercharging your mobile game with YouTube.
And I would like to introduce my
co-presenter Corey from Unity.
He is working on Unity implementation with a number
of partners.
So I wanted to start with this photo that I
found on the internet.
As you can tell, it was taken a while ago.
There's a few really excited mobile gamers.
This photo was taken at a gaming convention in
Vancouver, Canada.
The question that popped into my mind when I saw this is,
what would it take to excite and energize your gamers, your
players, your customers equally today?
Well, if you look at some stats, there was a study done
two years ago.
And 95% of gamers that spend significant time playing games
watch user generated content on YouTube.
So these are gaming clips.
Actually, the number for trailers is 94%.
So more people watch user generated content than the
highly produced gaming trailers.
In fact, there's several companies already taking
advantage of this trend.
So, some of these titles that I'll show you will probably be
familiar to you.
Trial Xtreme 3 and FIFA 13 have video upload capabilities so you
can share your gameplay directly from the game, and
then the granddaddy of them all, Talking Tom.
This is an application that has really taken great
advantage of video sharing.
So in that app, you can create a video-- it's both a nice
virtual-pet type of app, as well as a
self-expression platform.
So this is what we'll talk about today.
We will show you how you can take a sample Unity game that
we built for this purpose, and then integrate the YouTube APIs in
order to share the gameplay with the world.
YouTube has one billion users, so there's a billion potential
customers out there.
And then we'll show you both the upload and playback
capability--
how that can be integrated into a Unity game.
So really, the only thing you have to do as a game developer
is focus on building a great game.
That's it.
So first I wanted to show you a little demo of the game that
we have built for this purpose.
So here's my game, and the objective of this game is to
shoot these gas cans as fast as possible.
And I'm pretty bad at it, as you can tell, but I'll try to
do my best.
One more.
Oh.
I got 12 points.
All right, time is up.
So the next step--
I just had this wonderful score of 12 points.
I would like to watch myself play this
game, so here's a replay.
Pretty amazing.
I rock.
So what I do next is, well, I definitely want to share this
gameplay on YouTube.
So I'm going to hit the YouTube button and let this
processing take place.
What this means to you is, imagine you were playing an
awesome game--
some sort of warfare simulation and with your best
friend, and you just blew up his or her Abrams tank.
You can share that achievement on YouTube, and then claim the
bragging rights for the rest of the week.
Now, I would like to introduce Corey, who will take you
through the process of creating games with Unity, and
then he will discuss some of the plugin opportunities
that exist within the Unity framework.
After that, I'll take you through the
actual integration between the YouTube APIs and Unity.
So over to you, Corey.
COREY JOHNSON: Thank you.
Hello, everyone.
My name's Corey Johnson.
I'm a field engineer at Unity Technologies.
This is Unity circa last fall.
There I am, just in case you were doubting me.
I'm going to take you through what Unity is, what we're
about, and then kind of give you an overview of our editor,
just so that we have some context when Jarek talks about
his plugin later.
I apologize if you already know Unity.
We're going to stay high level, but I just want to go
really, really fast.
And if you don't know Unity, I apologize, because we're going
to go really, really fast.
But myself and my illustrious colleagues are in the sandbox
on the third floor in the Android section, so please
feel free to follow up.
So what is Unity?
Well, we're a cross platform engine.
We come with an editor, which is a tool to
create 2D and 3D content.
We believe we have the best tool in the industry.
We believe that we give you a rapid learning curve and rapid
iteration times.
We have a mantra of build once, deploy anywhere, meaning
we're, again, multi-platform.
Our mission statement is to democratize game development,
and that means that we want anybody out there who wants to
build a game-- whether an artist or an engineer-- to be
able to build a game.
And that attitude has led to our tooling, and it also led
to an enormous community, which I'm going to talk about
in a second.
When I say everywhere, I'm just going to
quickly define that.
We're currently at 13 different platforms, and more
on the way.
We even have Union, which helps get your content onto
platforms that you may not even know
you want to be on yet.
So as you can see, we're fairly prolific, and if you
want to be there, we're probably there for you.
A little bit about our community.
1.8 million people use the product.
400,000 monthly active users.
5 million hours of creation every month.
Enormous.
From the hobbyist all the way up to triple A studios.
One of the things we did to democratize this is build an
asset store, which is a marketplace for our users to
share content, whether it be tools that they built in our
customizable editor that we just didn't get to, or if it's
awesome artwork that people need.
So like, for me, I'm an engineer.
I can't do art, but I can go in and buy awesome
environments or a knight to run around my game.
One of the features that we use is called Native plugins.
We're going to talk a lot about it today, so I just want
to give you an overview of what they are.
So Unity uses Mono for scripting, so that means you
can run your game scripts across platform.
But what plugins let you do is call native code to whatever
platform you're on from your game scripts.
For this talk, we're going to focus only on Android, and I
will point out this is a pro feature.
So first, there's two ways to do plugins.
The first is native and the second is Java.
They're actually interchangeable.
I'm going to talk about Native first, and then a little bit
about Java.
So here is an example of a minimal plugin written in C.
All it does is return a value.
All I need to do is put this C file and build it into a .so
file using the Android NDK, and I need to place it in my
project plugins folder, which I'm going to
show you in a minute.
To call that code-- and that code can be doing, obviously,
a lot more complex things--
all I need to do is, from my game code, use InteropServices,
annotate my function definition, and
then call my code.
It's that simple.
For Java plugins, it's a little bit different.
We use the JNI to interact with your Java
class that you build.
You can build a .jar file using Eclipse and the Android
development toolkit.
Again, you just build a .jar that contains all your
classes, make sure you check the Is Library button, and
then place that inside your plugins folder.
Because you have to do some work with the JNI to discover
your methods and then invoke them,
we provide some wrapper classes: AndroidJNI and
AndroidJNIHelper.
And on top of that, we build another layer of helpers,
AndroidJavaObject and AndroidJavaClass, which
allow you to not only automate the whole process, but we also
cache those lookups so subsequent
calls are a lot faster.
So instead of showing you Java code for the plugins, I'm
actually just going to show you an example here.
And then Jarek's example is going to show
a lot of Java code.
So on the slide, the two lines that you see that are not
commented out are what you need to do to make a call to
get this hash code string.
All the commented out lines are what you would do if you
had to do that natively, so it just kind of gives you an
example of how much work we save for you and how much
prettier your code can look.
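To make the Java side concrete, here is a sketch of what such a plugin class could look like. The class and method names are illustrative, not from the actual demo; the point is only that any plain Java class packaged into the .jar can be constructed and invoked from a Unity game script via AndroidJavaObject.

```java
// Hypothetical minimal Java plugin class; the name and methods are
// illustrative, not from the demo. Once built into a .jar in the
// plugins folder, Unity scripts can reach it through AndroidJavaObject.
public class PluginBridge {
    // A simple static value a game script might fetch.
    public static String getGreeting() {
        return "hello from java";
    }

    // Instance state works too: AndroidJavaObject can construct this
    // class and call methods on the resulting instance.
    private int counter = 0;

    public int increment() {
        return ++counter;
    }
}
```

On the Unity side, the call would look something like `new AndroidJavaObject("PluginBridge").Call<int>("increment")`, which is the two-line pattern shown on the slide.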
That's about it with plugins.
What you do with them is up to you.
Here's some pro tips.
When you're dealing with Android--
and obviously, you have the whole world of the native
platform at your fingertips--
you can do things like add activities and do things with
stuff that we don't normally request permissions for.
So in order to use that, obviously, you need those in
your Android manifest.
Now, usually we generate one for you, but you can take that
and modify it, add whatever you need, place that in your
plugins folder, and just drop it right next to it, and we
will automatically use that one instead.
That way, you don't have to rebuild it every time.
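As an illustration of that customized manifest (the package name, permissions, and activity below are placeholders, not from the demo), the file dropped next to the plugins folder might look like this:

```xml
<!-- Illustrative fragment only; package name, permissions, and the
     activity are placeholders. Unity picks this file up instead of
     generating its own. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.game">
  <!-- Extra permissions the auto-generated manifest would not include. -->
  <uses-permission android:name="android.permission.INTERNET" />
  <uses-permission android:name="android.permission.GET_ACCOUNTS" />
  <application>
    <!-- An additional activity contributed by the plugin. -->
    <activity android:name="com.example.game.UploadActivity" />
  </application>
</manifest>
```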
One thing, whether it's for yourself, whether it's for
future users, customers, whatever, teammates--
you saw there was some work, annotating your function
definitions, making the JNI calls.
It's really easy to just drop in another code file that does
all that work for you and just exposes the raw APIs you want
your users of this--
the plug-in-- to actually see.
We are dealing with managed code in
your user game scripts.
When you go down to the native level, obviously you're
native, and we have to marshal that data-- any data that you
want to deal with-- back and forth.
So be aware that there is some penalty costs to this, so you
may have to weigh the benefits of where you
want to do the work.
We're talking a lot about video
today, for obvious reasons.
I just want to point out some function calls that we're
going to use later, and what they do, and
explain a little bit.
The first is OnRenderImage.
This is a call back to you, so you can basically react to
whenever all the rendering is finished
for your render target.
This means that everything is done, and in this case,
normally, what we would do is Michael Bay up the screen like
here, and add some bloom effects and lens flares.
All we need to do to get a replay is
just copy the frame.
Similarly, OnAudioFilterRead gives you a chance--
every time we read a piece of audio data that we're going to
send into the audio processor, you get a chance to customize it.
You can squelch it, make yourself sound like Darth
Vader, whatever.
Again, in this case, all we're doing is using that
opportunity to copy that data so that way, we
can encode it later.
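On the plugin side, those copied PCM chunks have to be queued somewhere until the encoder consumes them. Here is a minimal sketch of that buffering in plain Java (class and method names are hypothetical; the demo's actual plumbing may differ):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical buffer sitting between the audio capture callback and
// the encoder. The callback hands over float PCM samples; we stash
// copies so the callback returns quickly and encoding happens later.
public class PcmBuffer {
    private final Queue<float[]> chunks = new ArrayDeque<>();
    private int totalSamples = 0;

    // Called from the capture callback. We copy, because the engine
    // reuses its buffer after the callback returns.
    public synchronized void push(float[] samples) {
        chunks.add(samples.clone());
        totalSamples += samples.length;
    }

    // Called from the encoder side: drain one chunk, or null if empty.
    public synchronized float[] poll() {
        float[] c = chunks.poll();
        if (c != null) totalSamples -= c.length;
        return c;
    }

    public synchronized int samplesQueued() {
        return totalSamples;
    }
}
```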
When you're dealing with a plugin that deals with your
frame time, you're going to want to be able to sync up
your plugin work when you're going to get the frame versus
when Unity's rendering, because you don't want to grab
any frame data in the middle of when it's writing to the
frame buffer.
WaitForEndOfFrame is an object you can yield to in
your co-routines, and that will make sure that when you
come back and that code executes, you're at the end of
the current frame.
And that just helps keep all the syncing together.
I'm going to switch over to the editor now.
Awesome.
So this is Unity Editor.
This is where you're going to build your game.
The first thing I'm going to talk about is our project
window right here in the middle.
The project window is literally a mirror of what's
actually on disk.
It's all your models, meshes, textures,
sound files, et cetera.
You can literally go and look in on the file system, and
it's going to look exactly the same.
You'll see here I have this plugins directory.
Plugins is one of our reserved words.
We look for that folder and then treat things in there as
plugins, so we know how to build them
into your final game.
And then you just say, I want an Android.
And as you can see here, we have a
custom Android manifest.
And then we have a bunch of libs that Jarek's going to
talk about later.
So not everything in there is going to end up in your final
build of your game.
On the left here, we have a hierarchy window, which you
can see here to the right of my screen.
This represents everything that's shown in our
scene view.
So that's like my wall here.
I can move it up and down and break his game, and do all
sorts of fun stuff.
And here's where you build your layout of your game and
all that stuff.
So we are a component based engine, which means that
everything in our game engine is an object with components
that add behavior.
So over on the right, we have the inspector window.
The inspector window shows us all the components on there.
So you can see that currently, I have the wall selected.
We can see that it has a mesh, it's a mesh render, it has
some animation to it, it has a box collider.
And all these do is tell the engine different things, and
have different widgets where we can customize how our
engine reacts to these objects.
If I select our main camera, we'll see that we have a bunch
of scripts attached to it.
Now, to get custom functionality into your game,
you create scripts.
Our scripts, again, are Mono, so it can be C#,
JavaScript, or Boo.
We ship with MonoDevelop, so you have everything you need
to develop a game already installed.
Inside here, we have--
it looks like I picked the audio one.
That's fine.
Inside here, we have the OnAudioFilterRead, and you can
see all we're doing is checking to make sure that
recording is active and then passing that data to a convert
and write function.
Similarly, for video we have OnRenderImage, where all we do
is a little bit of logging, and we make a
copy of our frame buffer.
I mentioned earlier that we have rapid iteration times.
At any time, you can hit the Play button located at the top
of the screen there.
And we go ahead and allow you to play your
game, and test it out.
So I can test out the force of the tennis balls that we're
shooting out, et cetera, et cetera.
Awesome.
So that's a little bit of context, so when Jarek's in there later
showing screenshots, you kind of know what you're
looking at.
I'm going to pass it back to him now.
JAREK WILKIEWICZ: OK, thank you, Corey.
OK, so Corey took you through the process of creating a game
with Unity.
And if you're a Unity developer, then really, the
only thing that might be new for you is the way you
integrate plugin capabilities.
If you haven't tried Unity yet, I encourage
you to try it out.
This is a lot of fun.
Now, what I'll cover next is, OK, now how can we integrate
video upload and video playback capability into the
game to really take advantage of the opportunity out there
that we highlighted earlier?
So first, let me switch back to my demo.
And the gameplay that I was showing you earlier when I
scored an amazing amount of points, I'm actually
ready to share it.
I think it's really awesome.
I'm just going to hit Upload.
And at this point, we're actually uploading the video
to YouTube.
I'm actually generating a notification to show the
status of the upload.
While that is taking place, let me walk you through what
actually happens.
So for this, we are using WebM as a video container, VP8
codec, and Vorbis codec for video and audio respectively.
Tomorrow morning, there's a talk about VP8, so if you
would like to learn more, I highly encourage
you to check it out.
And we're also launching VP9.
There's another talk about the next generation of this
technology.
But let's just take a step by step look at
what is involved here.
So we started with this awesome Unity game, and then
what we actually need to get out of the game engine is the
audio and the video frames.
So this is what is illustrated on this diagram.
We're getting the video frames, passing them onto the
VP8 encoder.
We're getting the audio frames, passing through the
Vorbis encoder, and then use the WebM container in order to
create a video file for us.
Once that file is created, we use the YouTube data API.
In this case, we use the YouTube data API version 3 to
upload the resulting video to YouTube.
The YouTube data API is RESTful.
It's fairly straightforward to use.
For Android, we use the Java client libraries, so you don't
actually have to write any HTTP or
JSON parsing code.
Now, in our example, we built a Unity plugin in order to
take care of the video upload from Unity
and the video encoding.
Couple of hints while developing the plugin.
Corey mentioned the Android manifest is something that
Unity generates by default, since it
actually creates the APK.
However, if you're building any additional activities in
your code, or need additional permissions, you need to merge
that into the manifest created by Unity.
So the way to do that is, the first time you build the game,
you will get an Android manifest in the temp staging area.
You then edit that and copy it over into the
plugins directory, and you're all set.
Another thing to pay attention to is that plugins are
actually required to be built as Java Android libraries, so
you actually need to be careful about resource merge.
And really, the only thing that is somewhat inconvenient
is that if you're used to ID-based resource lookup in your code
directly, inside of the plugin, while you're running in Unity,
you should instead look the resources up programmatically.
Other than that, it's just regular Android development.
Now, Corey mentioned some of the methods involved in video
and audio capture, so I'm just going to quickly go through
what we did for this demo.
So we actually attach two scripts to the camera--
one script responsible for video capture, the other
script responsible for audio capture.
So let's start with the audio.
We are actually capturing the audio at 24 kHz frequency, so
this is the raw PCM data that you're going to be receiving
in your application.
You can actually configure that using
AudioSettings.outputSampleRate.
So this is something that you can configure in Unity.
And for video, I'm just using 10 frames per
second right now.
Unity has a nifty feature.
You can actually set the replay frame rate, and this is
different than the actual target frame rate at which
your game runs, precisely for this use case.
So if you want to record a video of the game, you can
actually set the capture frame rate to whatever makes
sense, and you do operate within a number of
constraints that we'll touch upon a little later.
So for audio recording-- and Corey highlighted that when he
pulled up MonoDevelop to show you a snippet of the code--
we use OnAudioFilterRead.
So this is the callback that is invoked by Unity at the
frequency that you define using the mechanics from the
previous slide.
And what's passed to you is really raw PCM audio, and this
is what we'll pass on to the encoder.
Then for video--
and this is, again, something that Corey highlighted--
the continuation here, yield return new WaitForEndOfFrame(),
allows you to be notified when frame rendering
is fully completed, at which point you can actually turn
around and read the complete frame back.
And there's a number of approaches to this.
We experimented with a few of them.
They all have different tradeoffs.
Here's one.
So this is the approach where I'm actually using Texture2D
to read back pixels at a resolution defined by you, so
you can actually generate a video at a lower resolution
than the original.
And this is, again, if you want the video file to be
relatively small, so it can be shared from
mobile devices quickly.
You can actually read the frames at smaller resolution.
Another approach is to use glReadPixels, so you can read
it directly from the frame buffer using the OpenGL
glReadPixels call.
And this is an example of another implementation that I
did using Java.
So that is actually invoked from within the Java plug-in
to obtain the frame buffer.
So that's the capture step.
Once the frames are captured, so the raw audio, raw video,
we need to encode it.
And for that, we're actually using WebM and VP8
plus Vorbis for audio.
So let's talk a little bit about how that is done.
And again, just referring back to our reference diagram,
we're actually at that stage right there in the middle
where we obtain the raw frames, we are passing it down
through the encoder.
So one thing that is very useful on Android is our WebM
engineers actually have created JNI bindings for the
encoder, both the Vorbis encoder as well as the VP8
encoder, and you can fetch that from code.google.com.
That is a JNI wrapper.
It has dependencies on a few of the native libraries, but
this is what actually provides you a really nice performance,
and this is what we used in our application.
So a couple of things to consider when you're actually
using this.
So think of this as a kind of nice object-oriented way to do
a video encoding.
So if you're not comfortable writing C code, but you're an
Android developer, you know Java, well, now you can encode
video, and you can do that very easily.
So let's just walk through the basic building blocks.
Audio encoder, there's a couple of classes-- the
encoder configuration and the actual encoder; a video
encoder, again, the configuration and the actual
encoder; and then the WebM muxing capabilities, so that
you can write the compressed audio and video frames to a
container, and then end up with a single file.
And here's an example of how this looks in practice.
So we start with the byte array, which is basically the
PCM audio that was given to us by Unity as a part of the
script attached to a camera, and then similarly for video,
we read that video from the frame buffer.
And then we construct--
ultimately, what we want is an audio frame and encoded
packet, which basically represents the encoded audio
and the encoded video.
And the way that is obtained is we actually pass the raw
audio bytes to the Vorbis encoder, pass the raw video
bytes to the VP8 encoder, and then we use the muxer to save
the audio and the video frames.
That takes care also of synchronization, so you don't
really have to do any work in that area.
And what you get back is a video file.
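One way to picture the synchronization the muxer handles for you: each encoded packet carries a presentation timestamp derived from how much media precedes it, and the muxer interleaves packets by timestamp. A hedged sketch of that bookkeeping (pure Java, not the actual wrapper API):

```java
// Illustrative timestamp bookkeeping for interleaving audio and video
// packets; the real WebM muxer wrapper does this internally.
public class MediaClock {
    private final int sampleRate; // e.g. 24000 Hz audio, as in the demo
    private final int frameRate;  // e.g. 10 fps video, as in the demo

    public MediaClock(int sampleRate, int frameRate) {
        this.sampleRate = sampleRate;
        this.frameRate = frameRate;
    }

    // Timestamp in milliseconds of the audio packet starting at the
    // given absolute sample index.
    public long audioTimestampMs(long samplesWritten) {
        return samplesWritten * 1000L / sampleRate;
    }

    // Timestamp in milliseconds of the nth video frame.
    public long videoTimestampMs(long frameIndex) {
        return frameIndex * 1000L / frameRate;
    }
}
```

With matching timestamps, one second of audio (24,000 samples at 24 kHz) lines up with ten video frames at 10 fps, which is exactly the alignment the muxer maintains for you.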
So here's an overview, again, of the objects
that we just discussed.
So encoder configurations, both for video and audio, the
actual encoders, the tracks representing the audio and the
video parts of the WebM file, and the muxer.
So this is kind of the way I see it as an object-oriented
way of doing video encoding.
And I really like it, because I think it's very
approachable.
So for those of you that would like to do a little bit more
work in this area, but have been intimidated by having to
deal with a bunch of native code and large libraries, this
provides an abstraction that is very easy to use.
So once we have the video and audio, the only thing
remaining is the actual upload.
And the way we do that is using the YouTube data APIs.
I'm using the YouTube data API version 3.
So in order to access that, you need to register your
project in the developer console.
One thing to note is we are uploading the video into the
user's account.
Therefore, we must obtain permission from the
user for our application to act on his or her behalf.
And the way that takes place, there's a little
bit of magic involved.
It actually works very nicely.
The only thing you need to know about is that when you
register your project in the developer console, you need to
supply a fingerprint for the key that you're going to be
signing the application with, and you have to tell Unity to
use a specific keystore or a specific key to sign your APK.
And once you do that, everything
just works like magic.
So on the server side, our API back end knows
that, hey, this is your application that is generating
these requests, so you don't actually have to change any
code for this.
And then for the upload process, a
couple of hints: don't block the user
waiting for the upload.
You can just do it as a service, and then use a
notification to indicate the progress.
And then, use resumable uploads, because if you lose
connectivity, which is, in fact, very likely at
[INAUDIBLE]
right now, this should actually resume automatically.
So here's one video that I created a little earlier.
Let's just play it.
I'm going to switch to the mobile device.
So remember the notification that I was popping up as a
part of the video upload right now indicates that my video
has been successfully uploaded.
Watch the video on YouTube.
So when I click it--
let me just select it--
I can see the gameplay right in my game.
And I'm going to tell you a little more about how that is
implemented as well.
But before that, just a quick note on authorization.
I mentioned to you that we use OAuth 2.0 in order to upload
the video on behalf of the user.
And to implement that in your Android application, you can
use Google Play services.
So we're using GoogleAuthUtil, which makes it very
straightforward to implement the auth flow.
And really, the only thing you need is to remember about the
scope that you're going to be requesting from the user.
So in our case, I'm using YouTube Read Only, because I
need that in order to check on the upload status.
And I'm using YouTube Upload, which is going to grant my
application the right to upload videos.
And that is reflected here by this popup that is generated
by the Auth for Google Play Services.
And then I was just showing you a while ago how the
playback in application playback works.
So the reason why that is useful is earlier today, you
have learned about the new capabilities that will be
launching in the gaming area.
So personally, I think leaderboards and achievements
are very cool, but I want a video to prove it.
So one use case that you can implement is, if you
have the upload capability, you could associate or keep
track of these achievements and leaderboards, and show the most
interesting footage from your users in the app itself.
And for that, you can use the YouTube Android Player API.
So this is an API that we launched a few months back,
and as I was showing you earlier, it has the capability
of high quality in-app video playback.
There's very little work that is required in making that
happen in your application.
Really, all you need to do is drop in this library here,
YouTube Android Player API.
This is a small client library that actually relies on our
YouTube app to do the actual video playback.
So it's very robust.
It has all the capabilities that you see in the YouTube
app, and you can make that available inside of your own
app that is running inside of a Unity game using the plugin
capability.
A couple of things to note.
The YouTube Android Player API requires a developer key,
which is slightly different from the credentials
that we used for the uploads.
And then we may prompt you to actually upgrade the YouTube
app on the user's device.
So if the app is out of date, because the user hasn't
updated it, or it's not set for automatic updates, we
actually generate this error:
SERVICE_VERSION_UPDATE_REQUIRED.
Typically, that doesn't happen, but this is how we can
make sure that the latest capabilities or bug fixes are
actually made available to the user when they try to use our
YouTube API to do video playback inside
of your Unity game.
And then our transcoding pipeline takes a little while,
especially if it's high quality video, and we
transcode it to a number of different formats.
YouTube data API v3 has this nice capability.
You can actually check the processing progress in order
to find out, is this video ready to be shown on all the
platforms that YouTube supports?
So I highly encourage you to actually do that before you
do, say, social sharing.
So I was requesting G+ permission in my auth flow a
few slides earlier, and the use case there is, once I'm
done with the uploading, I want to
share it with the world.
But share it only once processing progress indicates
that the video has been completely processed, because
that means it will play on every single type of device
that YouTube supports, all the transcodes have completed.
So a few links.
If you'd like to learn more about our APIs, go to
youtube.com/dev.
This is the link to the repository that hosts all the
JNI wrappers for WebM, VP8, Vorbis, and libyuv, which
allows you to do RGB-to-YUV conversion-- again, a
little detail that we don't get into too
much during this talk.
And then we are working on polishing up this demo app,
and then we're planning to open source it as well, so you
can try it out yourself.
So we hope that this capability
will excite your users.
Now it's 2013, so I figured someone playing a tablet game
would be an appropriate way to conclude this presentation.
And if you guys have any questions, please
come up to the mic.
We have a few minutes left.
AUDIENCE: So this would do a record as the user is playing
the game, right?
So what's the impact on the performance that you expect on
the device?
Because gaming already consumes significant horse
power on the platform.
And then if you're going to record at the same time.
JAREK WILKIEWICZ: So the question is, what is the
performance impact of this capability?
Actually, the way this demo app is implemented, we are
using an approach where the gameplay recording is not
happening at the same time as the game.
So I am actually doing a replay using the Unity
capability where you can actually tell it to render
frames at certain speeds.
So there is no impact while you play the game.
However, when you're actually ready to share it, depending
on the device and depending on how you configured it-- for
example, if you want HD, if you want high quality video,
because those are parameters you can pass to the encoder--
then the actual rendering of the frames may or may not keep
up with the frame rate that you get when
you're playing the game.
So there's an additional step that involves basically
rendering the frames one by one, and at low resolution,
low frame rate.
Right now, because I'm using a pretty expensive way to fetch
the frames from the frame buffer, at low resolution, I
can keep up with the frame rate of the game.
At high resolutions, I can't.
However, that doesn't really impact the gameplay, but it
impacts the replay.
Ideally, what we would want--
and we have a couple of partners that do that for
other platforms--
in fact, there's one in the sandbox, and one of them spoke
earlier today at the--
as you see, raising his hand.
So there's other approaches to that.
They don't currently work on Android, so that's an
alternative that can be used.
Hopefully we'll get to that same level of performance, but
the impact right now is zero, because the step of rendering
the gameplay is distinct from the gameplay itself.
AUDIENCE: So you also said that VP8 and going to VP9,
again, does the developer kind of pick the codec there?
If it's going to be VP8, VP9, or is it YouTube that decides?
JAREK WILKIEWICZ: Yes.
So the question is, what does it mean now that
we have VP9, VP8?
So the good news is, you don't really care.
So frankly, as long as you give us the content in any of
the formats that we support for YouTube ingestion, we
actually turn around and transcode it and everything
imaginable that is required by all the
devices that we support.
So we chose VP8-- and we're going to get into more
detail tomorrow-- because it is an open source, royalty-free
codec.
You don't have to pay anybody any money.
You don't have to pay any royalties.
You can do whatever you want with it.
And it's a very liberal license.
And it's open source.
So we find that this is a good fit for these types of
applications.
The way YouTube works is, once you upload the video, it will
actually take care of transcoding into the formats
that are supported by different devices.
So if a target device for playback only supports H.264,
we'll use that transcode.
Then our Android player API, which I demonstrated earlier
for playback, uses one of these transcodes.
So it really is totally transparent to the developer.
The only thing you have to know is all these codecs have
specific requirements.
For example, VP8 requires a YUV
representation of the data.
That's why we have libyuv also wrapped, so you can take the
RGB data that you get from the frame buffer, convert it to
YUV, and pass it into VP8.
So there's a little bit of knowledge that is required,
but I would say it's pretty minimal.
AUDIENCE: So you answered part of my question with respect to
the transcoding.
Now, for recording, I assume it would also be H.264?
Because most of the devices, they don't necessarily have
support for VP8 HD record.
JAREK WILKIEWICZ: Yeah.
So we are using a software encoder right now.
So the way this application is built is the actual VP8 codec,
Vorbis codec WebM container is shipped as a native library
wrapped through JNI and packaged in the Android plugin
that is integrated with Unity.
So everything required to render the video is included
in the game itself.
Android has the capability to actually use the underlying
hardware encoder, and that is typically H.264.
Though, and again, this is something that tomorrow's
session is going to go into in more detail, the world there
is also becoming more attractive.
But the nice thing about this approach is there's no
dependency on the specific device or a specific version
of Android, because it's just C code that you compile using
NDK, and then the JNI wrappers.
And it's also very small.
AUDIENCE: Thank you.
JAREK WILKIEWICZ: All right, so we have 20
more seconds left.
If there's one more question.
AUDIENCE: From the beginning.
I just want to ask, can we record the normal Java screen?
I mean, the application screen in Android, without
[INAUDIBLE]?
JAREK WILKIEWICZ: Yeah.
Yeah, so the question is, can you record the
application's screen?
And I believe the answer is, for an arbitrary app, the
answer is no.
And typically, it's because of sandboxing
requirements and whatnot.
So that's why this is something that has to be built
into the application itself.
And this is really how for PC games--
AUDIENCE: What are these mechanisms for [INAUDIBLE]?
JAREK WILKIEWICZ: Yeah.
So in this session, we actually described how that
can be done with Unity.
And the mechanism for that is Unity has integration points
to obtain the raw audio and the raw video frames.
And this is what is actually passed into our encoder and
uploaded to YouTube.
But it's application specific.
So Unity has this capability.
It relies on the GL capability.
So this is something that is app specific.
And if this is not clear, I'll hang out after this.
We can go in more detail.
All right so.
I think we're out of time.
Thank you very much for coming, and
please rate our session.