JAMES HAWKINS: My name is James Hawkins.
I'm the tech lead of the Chrome team in Los Angeles.
And for the past year, we've been working very closely with
the Google+ Photos team on a new product that's the Google+
Photos Chrome app.
So this is a Chrome packaged app.
And if you'll remember, packaged apps were announced a
little over a year ago.
And they're a key piece of Chrome's app platform.
Packaged apps give you extended
permissions, so you can do a little bit more.
You can have a more native look and feel with your app.
It's a step above what you can do with the
web platform today.
So we're going to talk about Google+ Photos Chrome app.
Actually, instead of talking about it,
let's cut to the demo.
So this is Google+ Photos, the Chrome app.
This is my personal account.
I've got all my photos loaded up into it.
We're doing some nice scrolling.
And if you note, I'm on a Pixel right here.
So we obviously have touchscreen capabilities to do
scrolling with this.
OK, so we're viewing some photos.
Let's say I want to go in and check out a particular photo.
All right, I've got this photo, and I've got my nice
information, my tags, and details.
Very neat.
But I just went and used my camera and took a bunch of
pictures, and I've got it on an SD card.
I'd like to upload these to the Cloud.
I want to share with my friends.
So let's go ahead and do that.
Close this.
So here we go.
The app was already open.
And it's already started importing photos, and it's
going to start uploading them.
We can view the ones that are on this SD card.
Cool.
So these are the pictures that I just took.
Got a little notification that we had some photos copied, and
they're being uploaded right now.
We got a notification that we're finding the best photos,
and there we go.
What you just saw is something that you heard
about in the keynote.
This is autocuration.
So in the back end, Google said, we're looking at your
photos, we want to see what are the best photos of the
ones you just took.
Let's make it very simple for you to share the best photos
that you have with your friends, your
families, et cetera.
For example, this photo has been deemed to be
underexposed.
Cool.
All right.
That's fine.
I'm not too picky.
I just want the best photos to go up.
So we want to share these.
Cool.
I've shared it with my family.
Cool new photos.
And so these are the photos that I just shared.
We can change the name.
No, it's '13.
It is early after all.
Cool.
So this is Google+ Photos.
You've got the autocuration magic behind the scenes.
You've got a very slick interface.
Very fast viewing of all your photos,
just really nice scrolling.
And we're going to talk a bit today about what it took to
develop this product as a Chrome
packaged app on the platform.
AUDIENCE: Is it available in the marketplace?
JAMES HAWKINS: This is not available yet.
It'll be available soon.
So we started developing this product with a
set of goals in mind.
What did we want to accomplish?
When we started development, the only way to view photos in
Chrome OS was through the file manager, and even then, that
was not the best experience.
You could do some minimal editing.
You couldn't really share very easily.
And the viewing experience was just a stopgap.
That's not to say it's not good.
But we wanted to do something better.
We wanted a better photos experience for ChromeOS.
In addition, we wanted a place where the Google+ Photos team
could experiment with new UI, new back end functionality
like we're talking about with autocuration.
In fact, autocuration was not available to the public in the
Google+ desktop until we just announced it at I/O
a couple days ago.
And so this product was the only place where the
autocuration team could see autocuration live, see how it
worked out, get a lot of feedback on the feature.
And so this is going to continue to be one of the
places where that feature development happens like
sandboxed, if you will.
And coming from the Chrome team side of things, we wanted
to make sure that the platform had everything that we needed
in order to be very successful and very easy for people to
develop apps.
What's missing?
Could we identify it?
What are the bugs?
It's a very nascent platform, and a lot of the APIs
are, right now, still in development.
And we wanted to test them out, have a user for the
platform instead of giving it to you guys, and you guys have
to cut your teeth on it.
Which is not the best experience.
We wanted to do that ourselves to spare you guys some pain.
And as a part of that, as well, we are Chrome developers
in addition to doing some of this front end stuff.
So when we did find areas that were lacking in the platform,
we could go in and add those APIs that we were missing.
And I'll talk about one of those specifically later on.
So Chrome, as you know, has three core principles.
You want your UI to be extremely
quick, extremely fast.
You want animations to be very snappy and fluid.
When you have jank, you have stuttering, you have a really
bad user experience.
It's jarring for the user.
Keeping your app very simple, to me, it means
getting out of the way.
Get your UI out of the way of the user, out of the way of
what they're trying to do.
And sometimes that is actually the hardest thing to
accomplish.
Security, obviously, is extremely important.
You want to keep your users secure not only from your app
misbehaving but from malicious attackers that can use your
app to harm your user.
You want to make sure that that is not possible.
So we kept these principles in mind when we were
developing the app.
And the challenges that we faced when developing the app
I'm going to frame around these core principles.
Speed.
You've got to be fast.
Everybody wants to be fast.
One of the first problems we ran into was that, given a
client-side photo, we needed to know whether
to upload it or not.
You obviously don't want to waste bandwidth if the user
already has that photo in the Cloud.
And so the way we did that is to store SHA-512 hashes of
these photos in the Cloud when they were uploaded.
And so when we have a new photo to upload, we
calculate that hash and ask, hey, do you have
this photo or not?
And if you do, we don't need to upload it.
So we have a little demo here.
This is a quite large image.
It's about 12 megabytes, I think.
And this is just one example of what the user could be
uploading that we need to get the hash of.
And we found that the JavaScript implementation,
while we could optimize it as much as we possibly could, it
could never compare to a native implementation.
And we got that native implementation through Native
Client, which allows you to run C++
binaries in your web app.
It's a really amazing technology that is extremely
useful, especially for these use cases where you need to do
data processing, you want to get bare bones as close to the
metal as possible.
So I'm going to do a little demo.
We have hashing done in JavaScript.
And we have hashing done in Native Client.
And we're going to get the timings of those.
And we'll see what the difference is.
OK.
Almost three seconds compared to 200 milliseconds.
That is an order of magnitude difference.
That's amazing.
Not to mention the fact that Native Client doesn't block
the UI, the main thread.
Whereas this JavaScript implementation does.
So if you're taking three seconds to hash an image, your
user is doing nothing.
They're looking at your app not doing anything.
So Native Client saved us on this one.
And, in fact, with more native-like apps being
produced for the web, I think we're going to see Native
Client being used a tremendous amount.
One of the first big apps that was done with Native Client
was actually games.
That makes perfect sense.
You've got to have high, super high performance calculating
with a lot of data.
So actually, I want to go in, and let's look at this and see
how we hooked up with Native Client.
So we're going to go into the JavaScript.
And the meat of this is the NaCl hash function for the
NaCl implementation.
We'll ignore the JavaScript for now.
We're going to do document.getElementById('file_io').
So let's go see what that was.
That is, I'm going to zoom this up so you
guys can see it.
That's an embed of type application/x-nacl.
This file_io.nmf is a NaCl manifest file.
And it specifies what the binary is for--
I'm not going to open it up.
But it specifies what the binary is
that needs to be run.
We communicate with this with postMessage.
So we postMessage the file name, and inside the
implementation of the NaCl module, we take that file
name, do the hashing on it in C++, that's compiled, and send
a message back through the NaCl API to the app.
And we handle that message here.
So this is essentially how we were doing
it in Native Client.
It's actually really simple.
So this is what, maybe 10 lines total of code for
handling this.
And C++ code on the other side.
The meat of it is getting the NaCl module built, which there
are tons of tutorials out there.
It's not that hard.
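The wiring just described, an embed element plus postMessage in each direction, could be wrapped in a small Promise helper like the sketch below. The message shapes are assumptions for illustration; the real module defines its own protocol.

```javascript
// Sketch: wrap the NaCl embed's postMessage round trip in a Promise.
function hashViaNaCl(moduleEl, fileName) {
  return new Promise((resolve) => {
    function onMessage(event) {
      // The C++ module replies with the computed hash; detach and resolve.
      moduleEl.removeEventListener('message', onMessage);
      resolve(event.data);
    }
    moduleEl.addEventListener('message', onMessage);
    // Ask the C++ side to hash the named file.
    moduleEl.postMessage(fileName);
  });
}
```

In the page this would be called as something like `hashViaNaCl(document.getElementById('file_io'), name)`.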
Doesn't want to full screen.
OK, hold on.
Just one second.
Oh, yeah.
Thank you very much.
I was zoomed in.
So the next issue that we ran into is
the data store layer.
So with Photos, there are users that
have, say, 40,000 photos.
This is a case that we have to handle.
It's very difficult.
And the fact that this app needs to be native, needs to
act like a real app on your platform, you have to have
offline support, which means you need to store your users'
data locally to some degree.
You don't have to store everything but
at least the metadata.
For Photos, you would have maybe a link to where the file
is on the file system and the dimensions of the photo and
any other metadata that you have.
This needs to be stored somewhere in your data layer,
and we use IndexedDB to do this.
The problem that we ran into with our initial
implementation is that we were not using transactions in
IndexedDB appropriately.
And this was kind of killer because it's not immediately
obvious what you're doing wrong when your reads are
going really slowly unless you're very
familiar with the API.
So I'll give you a demo here.
We have writing 1,000 records, reading, removing them, et
cetera, and with transaction and with no transaction.
We'll see what the differences are.
OK, it's taking a long time.
And this is even worse.
This is two orders of magnitude worse without
transactions.
And it's very simple.
Let's go in and see what the transactions do.
So you open a transaction, we'll zoom this up.
And I thank you very much.
I will zoom back out when I'm done with this.
So we have a notification when the transaction is complete.
That is when we say I've done all of my 1,000 operations.
And you just do all of your operations at once using the
transaction object.
You get the object store out from the transaction.
You do your operation on the store.
And once this goes out of scope, the
transaction will be complete.
And we'll be done.
The implementation that doesn't use transaction has
one database transaction per call.
So every time you are doing a new call, you're creating a
new transaction.
Very inefficient.
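The batched approach can be sketched like this. It is a simplified illustration, assuming a hypothetical `photos` object store; error handling is minimal.

```javascript
// Sketch: write many records through ONE IndexedDB transaction instead of
// opening a new transaction per put().
function putAll(db, storeName, records) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(storeName, 'readwrite');
    const store = tx.objectStore(storeName);
    // All puts share the single transaction; it auto-commits once the last
    // request finishes and control returns to the event loop.
    for (const record of records) {
      store.put(record);
    }
    tx.oncomplete = () => resolve(records.length);
    tx.onerror = () => reject(tx.error);
  });
}
```

A caller would do `putAll(db, 'photos', metadataRecords)` once, rather than one transaction per record.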
So I think the biggest part of making this app very
performant is the scrolling performance.
And this was the biggest challenge for us to solve.
There's a lot of optimizations you have to do
all across the board.
You have to make sure that you don't have any leaks.
You have to make sure that you're doing the right thing
in order to be on the fast scroll path.
This needs to be very non-choppy.
Otherwise, if it is choppy, your users are jarred.
And they're just very confused.
It looks like not a native app.
It looks like a web app.
And so the solution for this is a fast scroll path.
The fast scroll path is where the GPU does the
scrolling for you.
You offload the processing by handing the GPU a chunk of
video memory: you've got everything rendered into it, and
whenever you scroll, you just ask the GPU to do the offset
for you.
This is common in games, graphics, whatever.
And we're starting to add this to the
browser in a lot of places.
Keep in mind that this is very preliminary, and the GPU team
is working very hard on this.
But there are a lot of things you have to keep in mind.
When you do scrolling, the element that you scroll has to
be the body element.
That's the first thing.
And I say that right off the bat because it's not obvious
that is a requirement or a constraint.
And you can set up your DOM, your HTML structure, in such a
way that it's very hard after the fact to move
scrolling to the body.
And we ran into that problem.
It was a total pain.
For example, let's go look at the app for a second.
We have our main content here in the middle.
And I'm going to go into the single photo
view to show you more.
In the toolbar on the top and this sidebar on the right, the
toolbar and the sidebar were siblings of the main content.
And this was all in one wrapper
that could be scrolled.
This doesn't work because you're not on the body.
And in order to be on the body element, we had to make the
toolbar and the sidebar fixed position.
And that has its own intricacies.
Like for example, the scroll bar goes all the way up
through the toolbar.
There's really not anything we can do about that right now.
And if that's the worst of our problems that gets us this
type of scrolling, this fast scrolling, then it's
definitely worth it.
You also want to make sure when you're handling the
scroll event, that you're not doing too much work.
This is when the browser is saying I'm about to scroll,
I'm about to change your UI, the page.
If you start doing a lot of processing, maybe start
loading a bunch of image elements, a lot of photo
elements, that inherently is going to make the scrolling a
lot worse, a lot slower.
So you want to offload your processing to a time that is
not in your body scrolling.
Perhaps queueing up and batching operations that could
be done later.
And we'll talk about that in one of our solutions.
One thing that we had to do was to reduce image loads.
So when you set the source attribute on an image element,
the browser has to decode the image, whether
it's in the viewport or not.
And we found that the decoding can be very expensive, and it
obviously scales with the size of the image.
So you want to question expensive
operations like that.
And setting image source is one of those expensive
operations.
So we talked about batching your heavy processing so that
you're not doing everything in your scroll.
One way to do this, and the best way right now, is to have
a callback to this method called requestAnimationFrame.
This method is a way for the browser to alert you, hey, I'm
about to do an animation, and you should do some heavy
processing in this time period, batch it all up.
Do everything at once because we're going to
swap everything out.
And this is a way for the browser to say this will be
less janky.
And so whenever we do scrolling, we call
requestAnimationFrame, and then we obviously need to load
more elements.
So in the requestAnimationFrame callback,
we do that image loading then.
Because that's when the browser says, do your heavy,
intensive processing.
And this is really great to have high
performance in your scrolling.
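The pattern of queueing work out of the scroll handler and flushing it in a requestAnimationFrame callback could look roughly like this. The setTimeout fallback exists only so the sketch also runs outside a browser; `queueWork` is a hypothetical name.

```javascript
// Sketch: defer expensive work (like setting img.src) out of the scroll
// handler and flush it, batched, in a requestAnimationFrame callback.
const raf = (typeof requestAnimationFrame === 'function')
  ? requestAnimationFrame
  : (cb) => setTimeout(cb, 16);  // non-browser fallback for this sketch

const pendingWork = [];
let flushScheduled = false;

function queueWork(fn) {
  pendingWork.push(fn);
  if (!flushScheduled) {
    flushScheduled = true;
    raf(() => {
      flushScheduled = false;
      // Run everything batched up since the last frame, all at once.
      pendingWork.splice(0).forEach((job) => job());
    });
  }
}
```

A scroll handler would then call something like `queueWork(() => { img.src = url; })` instead of touching the DOM directly mid-scroll.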
Another thing that I think is starting to be more well
known, but it is not that well known I think, and it's not
very necessarily intuitive, is that when you measure certain
properties on DOM elements, you can cause reflows, which
may cause repaints.
And painting is the one thing you want to
minimize at all costs.
If you're doing a bunch of paints, then you don't have
the benefit of the GPU doing the scroll for you.
Because you're just thrashing what the GPU had.
scrollTop, for example.
If you're reading scrollTop, not writing to it, if you read
it, you're going to cause a reflow, and you could
potentially repaint.
Another one is
getBoundingClientRect on elements.
And you know what, it's kind of iffy on some of these
whether it will cause a reflow or not.
There are several good articles, and I recommend
looking up reflow, HTML reflow, to get a lot of
information about this.
But this is something that you have to keep in mind.
This is the next stage of app development.
And this type of performance is what these apps are going
to be doing.
And you want your app to be this performant.
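One common way to avoid these accidental reflows is to batch DOM reads separately from DOM writes within a frame. This is a generic sketch of that idea, not the app's code; `measure`, `mutate`, and `flushFrame` are hypothetical names.

```javascript
// Sketch: batch layout-reading work (scrollTop, getBoundingClientRect)
// apart from layout-writing work, so reads never interleave with writes
// inside one frame.
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }
function mutate(fn) { writes.push(fn); }

function flushFrame() {
  // All reads first: layout is computed at most once for the whole batch.
  reads.splice(0).forEach((fn) => fn());
  // Then all writes: they invalidate layout, but nothing reads it again
  // until the next frame's read phase.
  writes.splice(0).forEach((fn) => fn());
}
```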
Garbage collection is nasty.
JavaScript is a fun language.
And, obviously, the way its memory management works is very
convenient in some ways, but garbage collection is not one
of them.
So you want to try to minimize garbage collection by
minimizing the pressure buildup on memory.
One of the ways that we do that in this Photos app is if
you consider it, you have your main viewport, and you have
image elements for each of the photos that the
user wants to see.
And we obviously have to preload before and
after the view port.
And that could be 40,000 DOM nodes.
That's too many.
That's going to cause a lot of garbage collection, a lot of
memory pressure that you don't need.
Consider that you can compress those nodes: given, say, a
1,024-by-768 region higher up that is not
visible in the viewport,
get rid of all those nodes.
You can store them somewhere, but at least keep them out of
the tree, and replace them with one div sized to
the exact same dimensions.
So that's like compressing what you had before.
And that will keep the structure of your document,
will not change the size of the entire body.
So the user won't even be aware that
you've taken nodes out.
A nifty little trick you can do.
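The trick reduces to keeping the replacement element exactly the size of what was removed. A minimal sketch, in which the row height, function names, and DOM shape are all assumptions for illustration:

```javascript
// Sketch: replace a run of offscreen photo rows with one placeholder div
// of the same total height, so the document's scroll height is unchanged.
function placeholderHeight(rowCount, rowHeightPx) {
  // The placeholder must occupy exactly the space the removed rows did.
  return rowCount * rowHeightPx;
}

function compressRows(container, rows, rowHeightPx, doc) {
  const placeholder = doc.createElement('div');
  placeholder.style.height = placeholderHeight(rows.length, rowHeightPx) + 'px';
  // Swap the first row for the placeholder, then drop the rest. Keep the
  // removed rows around if you need to restore them on scroll-back.
  container.replaceChild(placeholder, rows[0]);
  for (let i = 1; i < rows.length; i++) {
    container.removeChild(rows[i]);
  }
  return placeholder;
}
```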
It's really important.
Leaks are bad.
Everybody knows that.
But you've got to pay attention to it.
The profiler inside of Chrome itself can
really help with this.
Let's take a look real quick.
Nope.
So the Profiles panel here allows you to take a heap snapshot.
Obviously, the heap snapshot for this is not going to be
entirely interesting.
But it does show you some of the objects that are going on.
So we have like this slide deck object, and
there are two of them.
So I may question if I only have one set of slides, why do
I have two slide decks?
I don't know the answer to that.
Could be a leak.
We found, in many instances of just going through this, and
you can arrange things by, for example, dominators, which are
saying this thing is huge.
Obviously, you want to go through the native stuff at
the top that's not yours, but get down to where you're
allocating stuff and say this is too heavy.
This slide deck is too heavy, for example.
And that's just one way to say I need to go in here and start
looking for places where I may be leaking.
I may not be freeing references, removing
references, et cetera.
There are a lot of tools that can help with this.
In our experience, they're not that great; there aren't a
lot of tools right now that have a very low
false positive rate.
But it can point you in the right direction.
And it doesn't hurt to just run it every once in awhile
and say these are the known false positives.
These are the ones we need to fix.
Listeners can get very expensive.
We had an instance where you would start the app with no
photos, and we had 40,000 listeners.
What are they listening to?
I mean, that doesn't even make sense.
You need to make sure that things are being detached
appropriately.
They stop listening.
They can be dangerous not just in the memory footprint but
also in the processing footprint.
If these listeners are being fired on events, and they
don't need to be listening, then they're going to do some
processing that is unnecessary.
And that's going to hurt your performance as well.
And we already talked about profiling memory usage, so
we'll skip that.
So back to simplicity.
I truly think this is one of the most difficult things that
you have to solve in an app, how to get out of the way so
that the user can just see what they want to see, do what
they want to do, minimize your UI footprint, et cetera.
And the Chrome platform is really starting to take over
this and give you APIs that allow you to
get out of the way.
One of the big ones we had was with sign-in.
Signing into an app really feels like a web page that you
have to sign into.
Especially if you don't have the ability to sign in once
and have that persistent across multiple lifetimes,
multiple instances of the app, it just doesn't feel native.
And that's our goal: to feel native.
Also, if you're signing in, that's a step, that's a road
block to the user getting to do what they want to do.
And so we really needed a way around this.
And thankfully, the Chrome platform has a way.
There is this thing called the Identity API which allows you
to retrieve the OAuth2 token of the user that's
currently signed in.
Now this app is right now designed for ChromeOS.
But the Identity API works on all platforms.
So if the user is signed into the browser, you can get their
OAuth2 token for their Google account.
For example, in this app, I didn't have to sign in.
There is no sign in button anywhere.
When you start the app, I don't ask you to sign in, or
I'm not asked to sign in.
And that all comes from the Identity API and the fact that
on ChromeOS, specifically, you're always
signed into the browser.
So for an app on ChromeOS, you shouldn't have to sign in if
you're using Google Accounts.
Now you may not be using Google Accounts, and the
Identity API has a solution for that.
It's called launchWebAuthFlow.
And so with that API, you can pass in a third-party OAuth2
endpoint URL, and under the hood, all of
the workings happen.
A pop-up is shown that allows the user to login.
Now, obviously, that doesn't solve the issue of getting out
of the way and not having a dialog.
But if you're not using Google Accounts, which is a fair
point, you need a way to sign in the user to your app.
And that token can be persisted across
instances of the app.
So, for example, in this code sample here, we are calling
getAuthToken, and we're storing that token.
So for the lifetime of the app, we don't have to keep
calling getAuthToken, we just have the token around.
And we build our requests with this token.
For example, the Photos back end at Google.
We're going to do a request to read all the
photos of the user.
And so what we've done is we have the token stored.
We just keep using this same token for all of our requests
in the app.
It's really easy to use.
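A token-caching wrapper along those lines might look like this. `chrome.identity.getAuthToken` is the real API; the caching wrapper and header builder are illustrative names, and real code would also handle token expiry.

```javascript
// Sketch: fetch the signed-in user's OAuth2 token once via the Identity
// API and reuse it for every request during the app's lifetime.
let cachedToken = null;

function getToken() {
  return new Promise((resolve) => {
    if (cachedToken) {
      resolve(cachedToken);  // served from cache; no second API call
      return;
    }
    chrome.identity.getAuthToken({ interactive: false }, (token) => {
      cachedToken = token;
      resolve(token);
    });
  });
}

function authHeader(token) {
  // Value for the Authorization header on outgoing backend requests.
  return 'Bearer ' + token;
}
```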
So like I said, I had a camera, and I had
an SD card in it.
And I plugged it in, and the right thing just happened.
It's a Photos app.
I've installed this app.
I said I trust this app to do what I need it to do,
including the fact that it says it has
access to your media.
Currently, in the web today, and in most native apps as
well, you have to select the media specifically, say by the
file open dialog, which I consider the worst piece of UI
in browser history.
It's awful.
But everybody has to go through that.
The alternative is Flash, but even then, you still have to
do file directory browsing, et cetera.
We wanted to get rid of that.
The solution to this was the Media Galleries API, which is
like the pinnacle API in this app.
It's what makes this app pop.
It's what makes it real.
The Media Galleries API allows the developer to have access
to media devices, so SD cards, platform media, like my
pictures or the photos on a Mac.
And it has a seamless access.
You don't have to ask the user at run time, only at
installation, and only that one time.
So in this app, we have UI that shows up a notification
that says you just plugged in a card, we're
going to start uploading.
And you have the ability to stop the uploading, but that
action happens right away.
The user doesn't have to wait, and the user doesn't have to
do the file open dialog, which is just awful.
So here's a little code snapshot of how to use this.
The API call is getMediaFileSystems.
And if you're familiar with it, it returns a DOM file
system, which has its own file entries inside of that.
Its directory structure, you can read through this with the
file system API.
So this is really neat.
We just have this one little layer on top of the already
existing file system API that restricts what is shown to the
app to just media locations.
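The call described above could be wrapped like this. `getMediaFileSystems` is the real API; the Promise wrapper and what you do with each returned root are illustrative.

```javascript
// Sketch: enumerate media locations (SD cards, platform photo folders)
// via the Media Galleries API.
function listGalleries() {
  return new Promise((resolve) => {
    chrome.mediaGalleries.getMediaFileSystems(
        { interactive: 'if_needed' },
        (fileSystems) => {
          // Each entry is a restricted DOM FileSystem rooted at one
          // gallery; from here you read entries with the File System API.
          resolve(fileSystems.map((fs) => fs.root));
        });
  });
}
```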
For example, when I plugged in the SD card, it knew that it
had a DCIM folder and said, we think this is media.
It's a simple heuristic, but it really solves 90% of the
use cases that we're looking for.
If you plugged in an external drive, and it had a DCIM
directory, we would say that it probably has
photos on it as well.
And we would start loading that.
So the first time I saw the Pixel screen, it was detached
from the actual laptop.
It was not even put together yet.
It was this little piece.
And he said, you have to see this, one of the
developers of the Pixel.
And he showed it to me, and it was just mind blowing, the
picture that he showed on it.
I'd never seen something like this.
And then he started swiping it and moving things around, the
touch on this one screen, it was just amazing.
And so you have the ability when writing apps not just for
this but say for tablets as well, like high DPI tablets,
you obviously have MacBooks that have Retina Display.
You want to make sure that you're optimizing for these
form factors which are becoming
more and more prevalent.
And for the high resolution display, it's not hard.
You just provide 2x assets, high-resolution assets.
The API to do this is in CSS itself.
It's called -webkit-image-set.
And you can specify the multiplicative
factor, so 1x or 2x.
And you just pass in the right resource through that.
One thing to keep in mind is that you want to make sure to
set the background size of your asset to the low res.
So we have this close icon, and it's 32 pixels in the
low-res asset.
When we load the high-res asset, we still want it to be
32 pixels on the screen.
So that it doesn't blow up disproportionately.
We just have it more dense inside of those 32 pixels.
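Put together, the CSS looks roughly like this. A minimal sketch with hypothetical asset filenames:

```css
/* Sketch: serve a 2x asset on high-DPI screens. background-size pins the
   drawn size to the low-res dimensions (32px) so the denser asset is not
   rendered at double size. Filenames are assumptions. */
.close-button {
  background-image: -webkit-image-set(
      url('close.png') 1x,
      url('close_2x.png') 2x);
  background-size: 32px 32px;
}
```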
You want to make sure you do this especially if your user
is going to be on a high-resolution display
because on most browsers and definitely in Chrome, the
scaling algorithm for images to scale up to high resolution
is optimized for speed, not quality.
You'll get very blurry images, and your users are going to
get a headache after a while.
It looks really bad.
We had a regression one time where the high-resolution
assets were not being loaded properly.
And everybody was like, oh my gosh, this is awful.
Something is clearly wrong with this app.
And imagine if your user saw that, they're
not expecting it.
They just know something is wrong.
And on to touch support.
This is a really fun screen to touch.
And they always say, why would you touch this laptop screen?
People don't want to reach over their keyboard to do it.
But the more you use this, the more you start touching
everything around you.
Like you reach over everything.
I touch my work 32-inch monitor
expecting it to do something.
And it doesn't.
It's extremely frustrating.
More and more, displays are going to be touch, whether
it's a hybrid, whether it can be converted, et cetera.
Your app should really handle, if not touch gestures
specifically like dragging and touch start, touch end, to
consider what your user actions should be.
For example, consider the Photos view.
So we have this photo here.
I admit, I was in the wrong when we were having this
discussion.
I felt that we should have double-click to
activate these objects.
So if I wanted to go into the single photo view for this
photo, I should have to double-click.
And that was me coming from wanting to have a more native
experience.
And in native apps and in the file system browsing around
things, you double-click to activate for the most part.
And I thought that would be a more native experience.
But then we had the issue of well, what do you do when
you're tapping?
Do you double tap the screen?
No.
That doesn't work.
We could have it where you, with a mouse or a track pad,
that you double-click, and then with touch, you
single-click.
But then you have a very confusing UX, your user
doesn't know what to do.
And then they're going to start doing the wrong thing
the entire time.
They're going to start double touching the screen.
So the key takeaway for this one is just to consider that
tap support is very important.
Users are going to start using tap and touch screens.
And you don't necessarily have to have full touch support.
Just think about what your UX is going to be.
I know a lot of you are engineers, maybe some of you
are designers in UX.
But it's all our responsibility to create the
best looking app.
So on to security, that last principle.
This section, it can come off a little dry.
It's kind of difficult to deal with, but security is
extremely important for the user.
And there are a lot of things in the platform that at first
feel like you have to deal with, but at the end, assuming
you start designing your app the right way from the
beginning, you're much better off.
Your user is much safer.
So content security policy is essentially a whitelist and
blacklist that says your app has access to these resources,
these remote URLs, these domains, et cetera.
And by default, you have access to hardly
anything at all.
As you're adding a whitelist of resources your app should
have access to, consider your back-end server: you
obviously need access to that, and nothing else.
So if an attacker gets a hold of your app, something gets
injected, and starts making requests to some malicious
server, those requests are going to be denied because
they're blocked by content security policy.
The key thing here is to start designing your app from the
very beginning with content security policy in mind.
You can't load remote scripts.
You can't use eval.
And, for us, this was kind of a problem because we were
getting JSON objects back from the server, and so we had to
translate those into objects using eval, and
we couldn't do that.
So what we ended up doing was having a sandboxed iframe that
had the use of eval inside of it, and we passed the JSON
objects into this iframe.
It would do the eval, get the object back, and then post a
message back to the app, this object.
So if something malicious goes on, the eval is going wonky,
that sandbox iframe has no permissions whatsoever.
All it can do is communicate back and forth.
So you're pretty safe as far as that goes.
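The sandbox's side of that message exchange could be sketched as below. `parseWithEval` is an illustrative name; note that in current Chrome, `JSON.parse` would avoid the need for eval entirely, but this mirrors the workaround described here.

```javascript
// Sketch: the sandboxed page receives raw JSON text, evals it into an
// object (the sandbox is allowed to eval), and posts the object back.
function parseWithEval(jsonText) {
  // Parentheses force the text to parse as an expression, not a block.
  return eval('(' + jsonText + ')');
}

// Browser-only wiring; guarded so the sketch also loads outside a page.
if (typeof window !== 'undefined') {
  window.addEventListener('message', (event) => {
    event.source.postMessage(parseWithEval(event.data), '*');
  });
}
```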
Making requests.
So you can't just set an image's source to a remote URL.
You actually have to pull down the bits for that image.
So all of these photos, we have to pull them down from
the server.
XMLHttpRequest is the way we do this.
You just create your URL for the request, send it off.
You get your response, you should save it in your data
layer obviously.
And then once you get that, the data gets bubbled up
through your UI.
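That fetch can be sketched as a small Promise wrapper; the function name and error handling are illustrative.

```javascript
// Sketch: pull image bytes down over XMLHttpRequest, since CSP blocks
// pointing img.src at a remote URL directly.
function fetchImageBlob(url) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'blob';  // raw bytes, ready for createObjectURL
    xhr.onload = () => resolve(xhr.response);
    xhr.onerror = () => reject(new Error('request failed: ' + url));
    xhr.send();
  });
}
```

Once the blob arrives (and is saved in the data layer), `img.src = URL.createObjectURL(blob)` displays it from local memory.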
And another tip as far as thinking what you should do
from the beginning, designing from the beginning, resource
compilation comes in very handy with packaged apps.
We particularly use Closure, but there are a lot of tools
that you can use to do the same thing.
You're going to say, essentially, in your main HTML
of the packaged app, script src equals some local file.
You could list a bunch of local files, but it's a lot easier
to just have your code compiled, have it minified, et
cetera, and point at that one local file.
So to wrap it up, we look back at what our goals were, and I
think we really achieved those goals.
We identified holes in the platform,
APIs that were missing.
We added media gallery APIs.
We tested a lot of features from Google+ Photos.
We got autocuration working really well.
And we just pushed the platform to its limits.
How far can you go with this?
How hard is it for developers to use this?
I think the answer is not very hard.
The biggest thing you have to do is to understand your
constraints from the very beginning.
Understand that you don't have access to remote URLs.
Understand that if you want fast scrolling, you need to
scroll your body.
And, obviously, the platform is constantly changing, things
are being fixed.
I know I've reached out to the GPU team a lot to say, this is
not easy enough.
We have to fix this.
And just imagine the evolution that's happening right now.
You can see it at I/O. This platform is just moving
forward at the speed of light, the things that you're going
to be able to do.
It's getting a lot easier.
And it's getting a lot better.
So packaged apps, yeah, we definitely want that.
And I think the way to think about it in terms of how does
it relate to web apps is that you could have the core of
your app have functionality that is available
cross-platform, is not necessarily specific to
Chrome, doesn't have these extended APIs and permissions
that are requested.
And in doing that, you're going to have a lot safer app.
For example, you've got to use XMLHttpRequest, and you're
going to have content security policy support, all of that
from the get-go.
And that's just your core bundle.
Layered on top of this is all the goodies that you get from
the platform.
You get the Identity API, so use it.
If you have it, use it.
You get Media Galleries API.
Use it.
And if not, degrade gracefully.
And this is how if you build your app, whether it's for
Chrome packaged apps specifically, or if you're
going to deploy it on the web across multiple browsers, if
you do this, you can have one product that can work on
multiple platforms.
And on some platforms, it just has more functionality.
We've done that with Google+ Photos Chrome app.
And we think it's working out really well.
So I actually think we don't have time for questions.
We only have about a minute left.
But I will be at the Office Hours bar if you want to come
by and ask any questions you have.
It's been a real pleasure.
Thank you very much for attending.
And I hope to see you guys soon.
[APPLAUSE]