>>Richard: Welcome to the tech talk.
Thank you for coming, and today we'll be talking about optimizing every bit of your site serving
and Web pages using Page Speed.
I'm Richard Rabbat.
I’m a project manager at Google.
>>Bryan: And I'm Bryan McQuade.
I'm a software engineer working on Web performance.
>>Richard: Before we start, has anybody ever used Page Speed?
Can I see a show of hands?
Great, perfect.
This is the link to the Google Wave.
We encourage you to look at it.
There are live notes being taken there, and you can also post Moderator questions, IO-speed [inaudible].
And here's what you're gonna get from this talk, quickly.
This is a 101 kind of talk.
So we don't assume a lot of past knowledge in terms of understanding of Page Speed and performance.
So we'll cover some of the basics, but we'll also go into some more advanced details.
We're gonna cover a few things.
Most importantly, why you should be here, why performance affects your site, and why you should pay attention to it.
We're gonna make sure you become familiar with Page Speed and its new features.
And we'll also be talking about four new product features: export functionality in Page Speed, the SDK, the Apache module, as well as Page Speed for ads and analytics.
We're gonna spend some time talking about Web performance.
So for those of you that haven't seen Page Speed, I wanted to give you a brief look at the UI: basically, a bunch of rules and how a Web page has been doing against those rules.
We're gonna go over the details.
But since we're spending some time talking about Web performance, it's good for you guys to see the product first.
Web performance 101.
And here is why speed should matter to you.
We know from a lot of user studies that speed means more people viewing your site, more people coming back to your site.
Last year at the Velocity conference that's run by O'Reilly, we were fortunate enough to have a number of companies actually share some of their data on how performance affects traffic.
In those talks, Google described an experiment with a 400 millisecond latency increase.
So basically we took a bunch of people and served them more slowly by 400 milliseconds.
And that corresponded to about a 0.6% decrease in searches, which is very substantial for a company such as Google.
Yahoo did a similar experiment; it actually hit their traffic by 5% to 9%.
Shopzilla went a little bit further.
What they did is they basically re-architected the whole UI, and it contributed about a five-second latency decrease.
And they got a 12% revenue increase.
And not only that, it actually decreased their OPEX costs, because they needed less hardware to do the serving.
So these are important things you should worry about whenever you're developing your website.
So, Bryan.
>>Bryan: So, now that we've seen why Web speed is important, I'll do a bit more of a deep dive into the technical aspects.
Why don't we start with the building blocks of Web performance?
There are three categories you need to be thinking about when you're thinking about Web performance, or the end-to-end picture.
That is performance at the Web server, on the network, and in the client, the browser, as well.
So on the server, really the most important thing, or the only thing we really look at, is server processing time.
How long does it take your server to generate the response?
For a static resource, like a file stored on disk, you'd expect that to be close to zero.
But for a dynamic response, something in response to a user query, you might see increased processing time.
A little later in the talk we'll talk about some ways to mitigate the impact of that processing time.
But that's the primary factor at the Web server.
Then on the network, the two primary contributing factors are bandwidth and round trip time.
We'll dive into those a little bit more on future slides as well.
Finally, on the client, the browser, you're looking at parse time: how efficient is the browser at parsing HTML.
Resource fetch time: how efficient is the browser at finding and fetching resources.
We've seen a big improvement in browser efficiency in the last 12 months in terms of resource discovery and resource fetching.
Previously, browsers fetched JavaScript serially.
Now all modern browsers, in fact within the past 12 months, do parallel JavaScript fetches, which is a big win.
And we're continuing to see improvements there.
And then finally, the last two categories: layout and render, and JavaScript.
For most traditional Web pages out there, traditional being the pre-AJAX pages, these categories don't have as big an impact.
But if you've got a large DOM or a complex DOM, layout can actually be a significant time contributor.
And JavaScript, again, if you're using a JavaScript-heavy AJAX page, is potentially an important contributor as well.
So in those latter two cases there's actually another tool, and hopefully you got to attend the tech talk on it: Google Speed Tracer.
That does a nice job of giving you a timeline and drilling down into the specifics of time spent in layout and rendering, and time spent executing and parsing JavaScript.
I'd recommend checking out that Google Chrome extension.
So now that we've looked at the building blocks, why don't we look at an example page load, and see how those building blocks come together over the lifetime of the page load.
We'll look at a page load for a Google search request, a search query for Half Dome photos.
And what I'll show is, we've got three columns here.
Client and server: these are operations that happen either on the client or the server.
And then the third column is the render column: this is what the page looks like as a result of these different steps along the course of the page load.
So what we've done here is we've really slowed down the page load.
And, and we'll see the discrete steps that we go through, then in turn what that looks
like in the browser as a result.
So we'll understand how all these building blocks come together to actually display the
page for the user.
So first, the first thing the browser has to do every time you navigate to a Web page is potentially perform a DNS lookup.
That takes about a round trip time; in fact, in many cases it'll take longer than a round trip time because you'll hit multiple DNS caches along the way, but roughly speaking you're looking at one round trip time.
Then the TCP connection to the server: another round trip time.
And then finally, after those two round trip times, the client sends an HTTP request to the server, asking for that specific resource.
The server begins to process that query and starts sending back the response.
And at this point we've seen three round trip times pass.
So round trip time varies considerably depending on where you are and how well connected you are to the Internet.
But you're looking at anywhere from single-digit milliseconds on a local LAN, to 10 or 40 (I think the average is about 70 milliseconds), up to hundreds of milliseconds or even a second in the worst cases.
So minimizing round trip times is a really important part of optimizing your website.
So finally, once the response comes back after these three round trip times, the browser can begin to parse that content, and we start to see the page rendering on the screen.
Subsequently, more of the content comes back and the browser continues to parse.
In this case the browser's discovered that there are four image resources embedded in the response, so it goes back out to the network and begins to fetch those resources.
Each of those sub-fetches is potentially going to incur a DNS lookup and a TCP connection as well, so you're seeing additional latency there.
And then eventually we see these responses start to come back.
I'll mention, too, that the gray section is the off-screen portion of the page; the top portion is the part the user can actually see.
What we're seeing here is that the page is rendering the most important, user-visible content first.
And the yellow regions are the repainted regions during that last iteration of the load.
So what we start to see is the image responses come back and continue to fill in, and finally the page finishes rendering.
So this is how these different factors, DNS, TCP, client-side parsing, layout, and sub-resource fetches, come together during the lifecycle of the page load to load and render that page.
So in fact—
[ pause ]
Typically when you're performing a Google search, hopefully it feels like it loads like that [ snaps fingers ].
But in fact all these little discrete steps are happening along the way.
And understanding those, understanding how they come together can help to understand
how to optimize the page.
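The arithmetic behind those early steps is simple enough to sketch. Here's a back-of-the-envelope model in Python; the numbers are illustrative assumptions, not measurements from the talk:

```python
# Rough model of time to first byte for a cold page load, as described
# above: DNS lookup, TCP handshake, and the HTTP request/response each
# cost roughly one round trip before any bytes render.

def time_to_first_byte_ms(rtt_ms, server_time_ms):
    dns = rtt_ms     # DNS lookup: ~1 RTT (often more, with cache misses)
    tcp = rtt_ms     # TCP three-way handshake: ~1 RTT
    http = rtt_ms    # request goes out, first response bytes come back
    return dns + tcp + http + server_time_ms

# At the ~70 ms average RTT mentioned above, a cold fetch pays about
# 210 ms in round trips before server processing time is even counted.
print(time_to_first_byte_ms(70, 30))   # 240
print(time_to_first_byte_ms(100, 0))   # 300
```

This is why cutting round trips tends to matter more than shaving server time: the RTT term appears three times before the first byte arrives.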
So given that, Richard will summarize.
>>Richard: So, if you go away from this tech talk, you need to remember three things.
These are the three speed guidelines you should always worry about whenever you are developing a Web app.
The first one is: serve fewer bytes.
When you're going over the network, you want to try to minimize the number of bytes that you're sending, because they fit in packets and there are only so many round trips.
So one of the ways we suggest you do that is by compressing what you serve: enable gzip compression.
Obviously lots of people do it; some still don't.
If you have a Web host that is hosting your content and is not enabling compression, move to another one.
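To see why compression is such an easy win, here's a quick sketch using Python's standard gzip module on some repetitive, HTML-like content (the markup is invented for illustration):

```python
import gzip

# Repetitive text content like HTML compresses extremely well, which is
# why enabling gzip on the server is one of the cheapest byte savings.
html = b"<ul>" + b"".join(
    b"<li class='item'>Hello, world</li>" for _ in range(200)
) + b"</ul>"

compressed = gzip.compress(html)
print(len(html), len(compressed))  # compressed is a small fraction of the original
```

Real pages won't be this repetitive, but 60-80% savings on HTML, CSS, and JavaScript is common, and the browser decompresses transparently.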
Optimize images.
A lot of the images that come out of a camera are very wordy and verbose; there's a lot of meta information that's unnecessary.
Get rid of it with one of the many open source tools; you can see a bunch of them.
And also make sure that you're only sending the image at the right size and resolution.
That saves bytes on the wire, but it also saves processing time on the client side.
Get rid of all the content in the HTML, the JavaScript, and the style sheets that you've put there for development's sake.
All the comments are things that your browser doesn't care about, so get rid of them.
Use minification tools such as Closure Compiler, which is also an open source project.
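As a toy illustration of the idea (a real tool like Closure Compiler is far more careful about semantics; this sketch only handles CSS comments and whitespace, and shouldn't be used in production):

```python
import re

# Naive sketch of what minifiers do: strip /* ... */ comments and
# collapse runs of whitespace. Real minifiers also rename symbols,
# drop dead code, and handle many edge cases this ignores.
def naive_minify_css(css):
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    return css.strip()

src = """
/* development note: tweak colors later */
body {
    margin: 0;
    color: #333;
}
"""
print(naive_minify_css(src))  # body { margin: 0; color: #333; }
```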
And also, cache aggressively.
The way I think of it, the fastest serving is when you don't have to serve: when everything is in the cache.
So see if you can push things to the browser earlier that are gonna be needed a little bit later, so the user's not waiting.
So, serve fewer bytes.
Second, parallelize resource downloads.
Modern browsers use up to six parallel connections per hostname; try to make use of them all.
And we'll talk a little bit about one of the rules, optimizing the order of styles and scripts, which also helps with parallelism.
And don't shy away from promoting modern browsers.
Don't develop for the lowest common denominator; it doesn't help.
Push the envelope.
If you need to support older browsers, check the user agent and serve unoptimized content for that user agent.
For example, don't serve sprites to old browsers that don't support them, but do use image spriting when the user agent can support it.
So three things: serve fewer bytes, parallelize, and push the envelope in terms of browser support.
So I know it's been a few minutes since we started this talk, and people are anxious to see it.
Page Speed is a Firefox extension that runs in Firebug.
And we have about one million active users.
So for the people that haven't used it, download it and join the fun.
[pause]
This is our site, code.google.com/speed/page-speed.
And the way you're gonna use it is: this is the little Firebug icon, and you start it up.
Page Speed is an extension in Firebug, and it tells you about the new features that we have.
The first thing you're gonna do is analyze the performance of that page.
So I'm analyzing this page on the code site, and it gives me a bunch of rules.
The first thing you see is a score.
The score is something that we believe is a good indication, a good metric that you can use, and it's reliably reproducible.
So we're getting about 82 out of a hundred.
We think that's okay; it's an okay Web page.
And then you have a bunch of rules that executed, and each one is gonna tell you what the issue is.
So for example here, leverage browser caching.
A lot of this JavaScript has an expiry time of one hour.
You should look at the expiry time.
Do you really need it to be one hour, or can you push it out?
Can you put it at 24 hours, or seven days?
Seven days is a good time.
You'll make sure that anybody that comes back to your site can actually have it in the cache of the browser.
And obviously there are a number of rules here.
I encourage you to explore them, and I encourage you to also look at some of the documentation.
The easiest way to get to the documentation is just to click on the rule.
Once you see a rule and think, "I don't understand what this rule is," just click on the rule and you'll get a lot of documentation.
All the documentation is open source.
And we try to be very descriptive about what the problem is and how you can resolve it.
So, going back to our presentation.
Bryan?
>>Bryan: Yep. So let's look at one example of why speed-minded development matters.
For each of those Page Speed suggestions, why is it important that you adopt that suggestion and apply it to your site?
What is it doing and how is it making the site faster?
So we'll look at one specific example, which we talked a little bit about earlier, around parallelization.
So: the ordering of styles and scripts.
Here's an example I had of an HTML Web page.
What we've got is some interspersed CSS and JavaScript content.
It looks reasonable enough.
But in fact, in some browsers, intermixing CSS and JavaScript like this, some CSS, some JavaScript, then CSS, introduces additional serialization delays in the page load.
So what you get is: the first CSS and JavaScript files load, you'll get another delay on the next JavaScript file, and then the final CSS files load.
What you're looking at here is roughly 300 milliseconds in this example, if it's a hundred-millisecond round trip.
So it turns out that if you just reorder these things, putting all the CSS up front followed by the JavaScript, some browsers will fetch that content more efficiently and you'll be able to remove one of the round trip times.
So this is an example where it's an easy fix, an easy thing to do.
This suggestion, and all of our suggestions, shouldn't have any impact on the look and feel of your page.
As far as the user's concerned the page is exactly the same.
And what you get is you remove one of those round trip times and go from 300 milliseconds to 200 milliseconds without any other change in the page.
So over the last, we launched in June?
>>Richard: Yep.
>>Bryan: So it's been about a year.
And over the last year we've been working on a number of things.
We've added some new rules and fixed some others.
And we just wanted to talk about a few of them, just to give some examples.
So we added a rule called minimize request size within the last year.
The idea there is that each request the browser makes has some overhead.
And there are things you can do: reducing cookie size and reducing the length of the URL can in fact keep the request size small, so that it fits within a single TCP packet and is more likely to be transmitted efficiently and quickly over the network.
And that's especially important on mobile, because mobile tends to have high latency and asymmetric bandwidth, where you've got a slower uplink than downlink.
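As a rough sketch of the budget this rule is about, here's a hypothetical request-size check in Python. The 1460-byte figure is a typical TCP payload size with a common MSS, and the header set and values are made up for illustration:

```python
# Hypothetical sketch: estimate whether an HTTP GET request fits in a
# single TCP packet (~1460 bytes of payload with a typical MSS).
# Long URLs and fat cookies are the usual ways requests blow past this.
def request_size(url_path, host, cookies):
    request_line = f"GET {url_path} HTTP/1.1\r\n"
    headers = (
        f"Host: {host}\r\n"
        f"Cookie: {cookies}\r\n"
        "Accept: */*\r\n\r\n"
    )
    return len(request_line) + len(headers)

size = request_size("/search?q=half+dome", "www.example.com", "sid=" + "x" * 50)
print(size, size <= 1460)  # a small request like this fits in one packet
```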
Next, specify a cache validator is a new rule that we added pretty recently.
The idea there is that for static content, for static resources, once they do expire, so if you set an expiration of a week or a month or a year, it's possible for the browser to ask the server, "Hey, I have this resource. It's not fresh anymore; it's expired. Has it changed?"
And the server can say, "Nope, it hasn't changed."
The browser can then keep it for another week, for instance.
Using a cache validator allows you to do that; otherwise you have to download the entire resource again even if it hasn't changed.
So that's a rule we've added.
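That validator exchange can be sketched like this. The function names are our own, not part of Page Speed or any HTTP library; a real server would carry the validator in ETag and If-None-Match headers:

```python
import hashlib

# Sketch of cache revalidation: the server derives an ETag from the
# resource body; a revalidation request echoes it back, and if the
# resource is unchanged the server answers with a bodyless 304.
def make_etag(body):
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body, if_none_match=None):
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""    # Not Modified: nothing re-downloaded
    return 200, body       # full response

body = b"body { margin: 0; }"
status, _ = respond(body)                          # first fetch
status2, payload = respond(body, make_etag(body))  # revalidation
print(status, status2, len(payload))  # 200 304 0
```

The win is that the 304 response carries headers only, so an unchanged one-megabyte resource costs a round trip instead of a full transfer.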
Specify a character set early.
It turns out that if you're serving HTML content and you don't specify the character set, UTF-8 or Shift-JIS or whatever it might be, the browser has to guess what the character set is.
And in order to do that, it buffers content in memory before it actually starts parsing.
So the browser's downloading the content, and it's being served from the server as quickly as possible.
But the user's not seeing anything on the screen until the browser finishes buffering and analyzing all that content to guess the character set, which it could possibly guess wrong, and only at that point does it start rendering content.
So just specify your character set in the HTTP response headers: Content-Type: text/html; charset=whatever it might be.
It allows the browser to parse and render the content more efficiently as it arrives on the wire.
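For instance, the header value in question looks like the string below; the little parsing helper is ours, just to show what the browser extracts from it:

```python
# The fix is one response header: Content-Type with an explicit charset.
# charset_of() is a hypothetical helper illustrating what the browser
# pulls out of that header value.
def charset_of(content_type):
    for part in content_type.split(";"):
        part = part.strip()
        if part.lower().startswith("charset="):
            return part.split("=", 1)[1]
    return None  # no charset: the browser must buffer and sniff

print(charset_of("text/html; charset=UTF-8"))  # UTF-8
print(charset_of("text/html"))                 # None -> browser has to guess
```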
And then finally, minimize DNS lookups is a rule that we implemented initially based on analyzing some pages internally, a long time ago in fact.
And what we noticed is that for certain sites and for certain third-party content, it would flag resources that we felt we probably shouldn't be flagging.
So we spent some time recently looking at the algorithm and tuned it so that it's basically more accurate and gives more accurate recommendations.
So we just released Page Speed 1.8, which has a new implementation of minimize DNS lookups that is more accurate and less likely to give you incorrect suggestions.
So we're constantly tuning these rules, and we're constantly adding new rules, both as we find issues ourselves and from feedback from users.
We'll send you a link to the Page Speed discussion forum at the end of the talk.
So, go ahead Richard.
>>Richard: So, a bunch of new features.
And today we'll talk a little bit about the export functionality.
It's basically a beacon that you can send.
And I'll just go through the demo directly.
So let's go back here.
And, so we have, of course [laughs].
>>Bryan: Right.
>>Richard: So we have export functionality that will allow us to export—
>>Bryan: To our reload.
[pause]
>>Richard: Yeah. That's fine.
>>Bryan: Or switch browsers.
>>Richard: Yeah.
Yeah, it doesn't want to start.
>>Bryan: Switch to Safari.
Like I said, we're always finding and fixing problems, so—
>>Richard: Yeah.
So basically we have two export functionalities.
One sends the data back in JSON format.
And we also send the scores to showslow.com.
If you guys want to try it, if you have Page Speed running, just send it out.
You're gonna get a bit of a legal disclaimer.
We worked with an outside independent developer who maintains showslow.com.
And basically, it gives you a way of keeping track of your Page Speed score across time.
And in this case I'm showing an example of, I believe, Google.com and YouTube.com and Gmail.com, and measurements that we're sending to Show Slow.
So when you're doing development, you change your Web page to be more performant, and you can track the performance of your page across time.
So I encourage you to use this; it's a great functionality.
But don't hit showslow.com with too many beacons at the same time.
[pause]
>>Bryan: Do you want to show the site?
>>Richard: Sure.
>>Bryan: Do you want to go look at the—
>>Richard: Oh yeah.
So here's the actual site.
YSlow is a competitor to Page Speed; we encourage you to try out as many performance tools as are available.
And in this case, what we did earlier is we sent a bunch of beacons to showslow.com, and they're recorded right here so you can keep track of them.
And these are the comparisons.
So Google.com and YouTube.com are here, and you can see the performance of your page over time.
Okay, so let's go to the next feature.
Bryan?
>>Bryan: So, so one of the things we've been working on over the past year is the Page
Speed SDK.
So—
[pause]
At the time of our initial launch, Page Speed was entirely a JavaScript implementation, tightly coupled to Firefox APIs.
And what we found was that we wanted to reuse the Page Speed logic in other environments.
So one place early on where we said we'd like to provide this is in Google Webmaster Tools.
How many of you are familiar with Google Webmaster Tools?
Great.
So hopefully, maybe you've seen that there are Page Speed suggestions actually in the Webmaster Tools UI, in the Labs section.
And if you haven't used Webmaster Tools before, I would definitely encourage you to check it out.
It's a great resource with lots of helpful information for your website; assuming you have a website, you can sign up and learn about your site there.
So what we did over the last nine months is port rules from the JavaScript to a browser-independent library we've implemented in C++, which we're able to reuse in Page Speed for Firefox, in Webmaster Tools, and in other environments as well.
So you can now download that SDK and use it.
We've got builds set up for Linux and for Windows.
And if you want to build on Mac, I don't think it'll take much work.
So if you figure out what small changes to make to the makefile and get that to work, feel free to share that with us and we'd be more than happy to include it in our open source repository.
So I mentioned Webmaster Tools. Do you want to—
>>Richard: Yeah.
>>Bryan: —display that.
This is one of the places where Page Speed is available today.
And so here's an example of Webmaster Tools.
You can see that this is the YouTube area of Webmaster Tools, which we're able to see.
And it gives you some example feedback for some pages on your site.
So you can drill down and, for instance, see that these four rules have specific suggestions to help you tune and optimize the site.
Here for example is the rule combine external JavaScript, which you can learn more about in the documentation Richard showed earlier.
But the idea is that if you combine these two resources the browser will be able to
load the page more efficiently, at least in some browsers.
So now not only do you have access to Page Speed suggestions in the Firefox tool, you can just go to Google Webmaster Tools and get this information without having to install an extension, and without having to run it live on your site.
This data's just provided for you as part of the Google Webmaster Tools service.
Do you want to [inaudible]. [pause]
So in addition, we've worked with a couple of other tools as well.
This is the Page Speed for Firefox UI; Page Speed for Firefox is now driven off of the Page Speed SDK as well.
Gomez, a Web performance company we've been working with, also integrated the Page Speed SDK rule set, and they're providing that in their tools.
This is a pre-release.
They haven't actually launched this yet, but this is something that'll be coming soon.
And then [ coughs ] Steve Souders, excuse me, Steve Souders built a nice Web page where you can take a HAR file; a HAR file is an HTTP Archive file, sort of a new JSON format that lets you capture all the information about a page load: all the resource content, headers, timing information.
You post that into this Web page, it uses the Page Speed SDK, and it comes back and gives you a Page Speed score.
So these are just a few deployments.
We launched the SDK about a month ago.
And we've seen great uptake.
And, well, let's do a little bit of a deep dive into the SDK now [ clears throat ] to see how you might use it.
So if you want to use the Page Speed SDK, it's pretty straightforward.
You just need to choose an output formatter, how you want to present the results: plain text, HTML, JSON, et cetera.
Pick the Page Speed rules you'd like to run.
Specify a source for your input data, for instance HAR or some other input source.
And then just invoke the engine.
So let's look at a snippet of code that does that now.
[pause]
>>Bryan: So we're choosing to use a text formatter, just something that will print to standard out in this case.
And we're populating the core Page Speed rule set, the rules that you're familiar with in the tool.
We're going to use a HAR file as our input.
So this is an example HAR; HAR is a JSON format.
The dot-dot-dot would be a big blob of content that contains all the resource bodies and other things.
And then finally we invoke the Page Speed engine: pass it the rules, initialize, compute, and format the results.
And at this point the results will be printed to standard out on the console.
So let's look at that, actually.
One of the tools bundled with the Page Speed SDK is called HAR to Page Speed, which is actually the tool that powers the HAR to Page Speed website that Steve built.
Essentially, the code we just looked at is the core, the guts, of that tool.
It's got the ability to read a file from a command-line argument, but beyond that it's essentially what we just looked at.
And so you run it like this, very simple, right?
Now you're not in a browser anymore; you're on the command line, a different environment.
And you're able to get that same information, those same results, here on the console easily and quickly.
So potentially you could write an automation tool that uses something like this to automatically analyze HAR files over time, all without having to stand up a browser and run Page Speed and that sort of thing.
So we can learn specifically what we can do to speed up the Web page.
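Such an automation tool doesn't need the SDK to get started on the bookkeeping side. Here's a minimal Python sketch that walks a HAR file; the structure follows the HAR spec (a "log" object with an "entries" list, one per request), while the tiny inline capture is invented for illustration:

```python
import json

# Minimal sketch of automated HAR analysis: count the requests in a
# page load and total the response body bytes. A real tool would read
# the HAR from disk and track these numbers across builds.
har = json.loads("""
{"log": {"entries": [
  {"request": {"url": "http://example.com/"},
   "response": {"bodySize": 5120}},
  {"request": {"url": "http://example.com/app.js"},
   "response": {"bodySize": 20480}}
]}}
""")

entries = har["log"]["entries"]
total = sum(e["response"]["bodySize"] for e in entries)
print(len(entries), total)  # 2 25600
```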
So that's the Page Speed SDK.
Now let's look at another deployment of Page Speed technology that we've been working on.
It's very early in the lifecycle of the project.
But what we decided we wanted to do is shift as much as possible from telling Web developers what they can do to speed up their site to actually trying to do it for them.
So what we decided to do was implement an Apache module that encapsulates a lot of these Page Speed suggestions.
All you have to do is install this module on your Apache server, and then ideally you don't have to think about the problem anymore.
We just automatically optimize the images, the HTML, the CSS, and the JavaScript, combine resources, and extend caching lifetimes using a technique called resource fingerprinting, which is talked about in our documentation as well.
All these things are applied automatically, so you, or Web content hosters, don't have to go to the trouble of implementing them.
So this project is open source as well.
Like I said, it's early in the development cycle, so it's not ready for use yet.
But if you're interested, take a look at our code.google.com repository.
>>: [Inaudible] does it insert semicolons?
>>Richard: So the question is—
>>Bryan: Does it insert semicolons at the end of lines?
>>: Yes.
>>Bryan: I don't know actually.
I want to say, so the—
>>: [Inaudible]. Does it preserve the new lines, or what?
>>Bryan: I actually don't think it preserves the new lines, so it would need to insert
semicolons.
So that's actually a good example of a case where—
>>Richard: If you can repeat the question, because—
>>Bryan: Yeah.
So he asked if it inserts semicolons at the end of lines.
'Cause one of the things JavaScript minifiers do is they tend to remove new lines, and JavaScript new lines implicitly add a semicolon.
So if you just combine the two lines you can end up with JavaScript that breaks.
I want to say we do fix that, but I'd have to double check.
In any case, if you run into a problem, you can actually go to the same URL and file an issue, or post it on our discussion forum.
And we're always happy to accept code patches if you're interested in submitting patches, or we'll try to fix the issue ourselves for a subsequent release.
So here's an example: as the HTML flows through Apache, coming in unoptimized like this perhaps, we'll parse that HTML and perform some optimizations.
And what you end up with is HTML that's a little more minified and serves fewer resources.
So what we've done here is we've combined the two CSS resources, and we've combined the two JavaScript resources.
What you can't see here is that we would also have extended caching lifetimes and removed unnecessary white space along the way.
>>Richard: So, before you move on why don't you talk a little bit about the extension
of caching lifetime because it's quite interesting.
>>Bryan: Oh sure.
So, one thing that we'll often find is that caching lifetimes are either unspecified or set not very aggressively, on some sites anyway.
And developers are sometimes concerned that, well, if I extend it for a week or a year, what if I need to change that resource?
And so what we recommend is a technique we call fingerprinting, URL fingerprinting essentially, which looks at the actual content of the resource and embeds that fingerprint in the URL.
So what you're looking at here is /cache/someblob.css.
That name makes no sense, right?
What it actually is is part of an md5sum of the concatenation of a.css and b.css.
So now, because we've captured a fingerprint of the actual contents in the URL, we can use a really aggressive caching lifetime.
We can set this thing to not expire for a year.
And then if it does happen to change, well, the contents will change, so the fingerprint will change, and in turn the URL will change, right?
So the browser will know, "Oh, I have to go fetch this other resource, which has a different URL that's not in my cache."
So instead of specifying how long the browser should keep the resource, you basically expire the resource when it actually changes.
Instead of having to wait for that expiration time, you just change the URL, and the browser will download it as soon as the URL changes.
So that's a technique we use.
It's a bit of a fragile technique: you have to match up the content signatures with the actual URLs in the content, which you can do by hand, or if you use mod Page Speed, we'll do that for you automatically.
So that's mod Page Speed.
>>Richard: So Page Speed came out of Google Labs.
We spent a lot of time trying to understand optimization of the UI.
At Google we built a lot of these rules internally.
And after we released it as open source about a year ago, just like Bryan said, we
got a lot of good feedback.
One of the most important pieces of feedback is that a website doesn't usually come
from just one property.
For a publisher, for example, there's the content the reporter is writing, there are
the ad systems you're shipping so you can monetize your pages, and there are the
tracking analytics you need so you can keep track of the measurements and metrics
you care about.
So we spent a lot of time trying to understand how to address this.
Our approach is to give as much information back to the developer as possible.
And to do that, as a first step on third-party content, we started focusing on ads
and trackers.
And I will show the demo.
[pause]
>>Richard: So this is the YouTube page.
And I'm gonna start Firebug.
This is Page Speed.
And there is a filter option with these choices: analyze ads only, analyze trackers
only, content only, and the complete page.
The complete page is what you're used to when you run Page Speed, for those of you
that have run it.
And now what we're going to do is filter only ads and see what we get.
So I’m going to, first going to analyze the performance for the complete page.
I get a bunch of rules.
Obviously there are a lot of recommendations for just about every rule.
And then I'm gonna analyze the ads only.
All right, refresh the analysis.
You can see all these rules are not applicable anymore.
What we're looking at now is specifically the ad content.
We have a number of filters for what we think are ads.
And by the way, all the filters are open source, so we welcome suggestions for adding more.
We know we don't have a lot of coverage internationally today, so it would be good to
have more international coverage for ad systems and ads that we don't capture yet.
And you'll see DoubleClick is being served on YouTube, which is obviously an ad.
And we're going to give you recommendations about this ad.
The same thing happens for analytics, although I don't believe that YouTube has analytics
on its pages.
So what this will give you is enough information to understand how third party content is affecting
the performance of your pages.
We're going to extend it to gadgets.
We believe gadgets are becoming a big part of every Web page, and we think it's
important for every Web developer to understand the impact of all the content they
have: the impact of the things they have control over versus the things they don't,
so they can make the right decision.
Every development is a balance between adding more features and thinking about speed
as a feature, and we're sure you can find the balance there.
With this, we hope that giving all this information will also spur third-party
content providers to make sure their content is optimized, so that when it's served
from your Web pages it is fast and performant.
>>Bryan: And this is a feature we just added in—
>>Richard : So, we just released it this morning.
>>Bryan: Yeah.
>>Richard: We just pushed it out, it's in beta, ten percent of—
>>Bryan: Yeah.
>>Richard: —all of our users are getting it—
>>Bryan: [Inaudible].
>>Richard: —as of this morning.
>>Bryan: You may get it automatically if you have Page Speed installed, and if not you
can go to Page Speed download page and download the Page Speed beta and you'll get that feature
as part of that download.
>>Richard: So it's a new feature.
Give us feedback on the discussion list; that would be great.
And obviously I just covered this.
[pause]
Future work.
>>Bryan: Yep.
So looking forward, I'll talk about a few of the rules that we're thinking about adding
to the rule set over the coming months.
I'll talk about three rules.
The first is to recommend using chunked encoding.
So chunked encoding is a technique that allows you to send a page in pieces as opposed to
sending the whole thing after generating the whole thing.
And this actually relates to that bit I talked about at the beginning, where server
latency, or server processing time, can add to the page load.
Oftentimes if you use chunked encoding, you'll mitigate or even eliminate that, as
far as the user's concerned.
The assumption is that most dynamic pages, search results pages, user-customized
pages like email websites, etcetera, have a static bit of content at the head of the
page that takes nothing to compute.
It's essentially just a static string.
And the idea is that you send that as the first chunk of the response.
While you're doing that you start computing the actual dynamic data the user requested.
And what you have is in parallel you're sending that static data on the wire while you're
computing the user's result.
And then as soon as that dynamic content is generated and ready to serve, you serve it
right behind as a separate chunk.
Depending on the user's connection, it may just look like one consistent stream of
data that was never interrupted.
I should say the default behavior in HTTP is to specify the length of the response
in the response headers.
The response headers come before the entire response body.
So by default, you have to wait and buffer the entire dynamic response before you
start sending any of it.
Chunked encoding lets you do this in chunks: send that static header first and the
dynamic body afterwards.
And what we see is that this has been a big win for Google properties like search and
calendar that have fit that constraint of dynamic response with a static header.
So what we'll see oftentimes is that before implementing this kind of technique,
you'll have an HTTP waterfall chart, showing the timeline of the resources being
downloaded, that looks something like this.
You'll spend a lot of time downloading the HTML resource, and towards the end of
that download you'll start downloading the sub-resources declared in that content.
Once you enable chunking, one nice side effect is that external JavaScript and CSS
are oftentimes declared in that static chunk.
So by sending that static chunk much sooner, you pull those sub-resource fetches in
considerably and allow the browser to start downloading, parsing and applying those
resources much earlier in the page load.
So this is a useful technique for dynamic responses.
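For reference, the framing behind all of this is simple. Below is a sketch of the HTTP/1.1 chunked transfer-encoding wire format; real servers emit this for you (a Node.js server, for instance, switches to chunked transfer automatically when you call `res.write()` without setting a Content-Length), so this is purely illustrative.

```javascript
// Sketch of the HTTP/1.1 chunked transfer-encoding wire format.
// Each chunk is prefixed with its byte length in hex, and a
// zero-length chunk terminates the stream.
function encodeChunk(piece) {
  const len = Buffer.byteLength(piece, 'utf8').toString(16);
  return len + '\r\n' + piece + '\r\n';
}

function chunkedBody(pieces) {
  return pieces.map(encodeChunk).join('') + '0\r\n\r\n';
}

// pieces[0] is the precomputed static head of the page; pieces[1] is
// the dynamic content computed while the first chunk was already on
// the wire. The example markup is made up.
const staticHead = '<html><head><link rel="stylesheet" href="a.css">';
const dynamicBody = '<body>results</body></html>';
const wire = chunkedBody([staticHead, dynamicBody]);
```

The point of the technique is that the first chunk can be sent immediately, before the dynamic chunk has even been computed.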
So second, minimize the size of early loaded resources.
And the idea here is that browsers have become much more efficient at downloading resources,
specifically JavaScript.
A year ago, most browsers out there would download JavaScript serially.
If you had ten JavaScript files declared in a row, the browser would download
one, wait for it to finish, parse and execute it, move on, download the next one.
And you saw this stair step in the waterfall chart.
What we're seeing now, with all the major browser vendors having implemented this,
is that you get parallelized JavaScript fetches and much more efficient use of the
network.
But regardless, the browser can't show anything to the user until all of those
resources have been downloaded and all of the JavaScript has been parsed and
executed, CSS as well.
So the less you serve up front in the head of the page, and the more you can defer
until after content has been rendered, the faster that initial flash of content,
that initial time to first paint, will be, and the less time the user sits staring
at a blank screen.
You can actually accomplish this technique today using two of our rules in the Page
Speed rule set in Firefox: remove unused CSS, and defer loading of JavaScript, which
will help you understand which JavaScript and CSS are actually used on your page and
which are not needed until later.
What this new rule will do is streamline that process a little bit to make the
technique easier to apply.
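As a minimal sketch of the deferral pattern, one way to structure it is to partition scripts into a critical set kept in the head and a deferred set injected after first render. The script names here are made up, and which scripts are truly critical is exactly what the defer-loading rule helps you discover.

```javascript
// Sketch: split a page's scripts into a critical set, kept in <head>,
// and a deferred set, injected only after the page has rendered.
// Script names and the critical set are illustrative assumptions.
function partitionScripts(scripts, criticalSet) {
  return {
    head: scripts.filter((s) => criticalSet.has(s)),
    deferred: scripts.filter((s) => !criticalSet.has(s)),
  };
}

const { head, deferred } = partitionScripts(
  ['render.js', 'effects.js', 'analytics.js'],
  new Set(['render.js'])
);
// In the browser, the deferred set would be injected from a window
// "load" handler, for example:
//   window.addEventListener('load', () => {
//     for (const src of deferred) {
//       const tag = document.createElement('script');
//       tag.src = src;
//       document.body.appendChild(tag);
//     }
//   });
```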
And then finally, minimize fetches from JavaScript.
As browsers have become more efficient in the last 12 months and parallelized their
JavaScript fetches, what we've seen is that JavaScript fetched using JavaScript
still gets serialized.
So we pay a penalty for fetching JavaScript from JavaScript.
Sometimes JavaScript libraries, and you'll see this on a lot of major websites, will
do something like this.
Very straightforward: write a couple of script tags, and then use some JavaScript
library to load a couple more JavaScript resources.
It seems pretty reasonable.
And traditionally this actually had no latency impact; a year ago, in most browsers
it made no difference, because the scripts were going to be fetched serially whether
you fetched them using JavaScript or using a script tag.
But in a modern browser, the browser uses a speculative fetcher.
It parses ahead of the renderer, looks for tags, and says, "Oh, I found a script
tag. Okay, foo.js, I'll fetch that."
Parses ahead, parses ahead.
Hits a script tag.
It says, "Well I'm just a speculative fetcher, I don't actually execute JavaScript. So I
can't do anything about this one. Skip it."
Then eventually the renderer receives foo.js, common.js and effects.js.
Parses and executes those and says, "Okay, next I'll execute that script block," because
it executes scripts in order.
And what this ends up looking like is that stair stepping behavior that you saw in older
browsers, the serialized JavaScript fetches.
So if you've got JavaScript fetched in this way on your page, it's easy to just turn
those calls into script tags, and you'll go from serialized JavaScript fetching to
parallel JavaScript fetching, making your page display its contents sooner than it
otherwise would.
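The change itself is mechanical. As a toy illustration (the `loadScript` helper name is made up, and a regex is not a real HTML parser), rewriting loader calls into plain tags restores parallel fetching:

```javascript
// Toy illustration of the fix: rewrite JS-based loader calls into
// plain <script> tags that the speculative parser can see and fetch
// in parallel. `loadScript` is a hypothetical loader; this only
// shows the shape of the change, not a production transform.
function rewriteLoaderCalls(source) {
  return source.replace(
    /loadScript\('([^']+)'\);?/g,
    (match, src) => '<script src="' + src + '"></script>'
  );
}

const before = "loadScript('common.js');loadScript('effects.js');";
const after = rewriteLoaderCalls(before);
// after: '<script src="common.js"></script><script src="effects.js"></script>'
```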
So then finally, I'll let Richard talk.
>>Richard: So a lot of the development for Page Speed happened early on, when we
didn't have Chrome and we didn't have Developer Tools.
And this past year we've been focused a lot on the rules, the correctness of the
rules, and building the SDK.
We're going to be releasing a version of Page Speed for Chrome with integration with
Chrome Developer Tools, and we're hoping to get it out by the end of this year.
We know it's a very highly requested feature, and we apologize for not having been
able to do it earlier.
But Developer Tools is now such a complete developer environment for us that we're
going to be landing in Chrome this year.
So where can you get more information?
A bunch of places.
So we have a very developed website, thanks to our wonderful tech writer.
Everything is at code.google.com/speed/page-speed.
All our development is done in the open source; we don't do any development in a
sandbox somewhere.
Do contribute; even just asking for features and filing bugs is great.
And there's a pretty active mailing list you can subscribe to and help us make the
product better.
Tell us about success stories using Page Speed, or about problems where Page Speed
didn't perform well, so we can make it better.
Thank you.
So let's see if we have anything on, on moderator to cover.
>>Bryan: And if you have questions.
>>Richard: And if you have questions, please the microphone is there.
[ Audience clapping ]
>>Richard: Thank you.