ARVIND JAIN: All right.
Welcome everyone.
My name is Arvind Jain, and I have Satish Kambala here with
me. We both work at Google on making the web fast.
And we're very excited to be here today to talk about how
to measure the speed of your websites in the real world.
So, before we start, why should we measure the speed of
our websites in the first place?
The answer, to me, is obvious.
I don't like to visit slow websites.
I like to shop from responsive and fast websites.
I like to read news, rather than waiting for the news
piece to appear.
And I'm sure you're all like me.
Nobody likes loading slow web pages.
At Google, we actually went a bit further.
We ran an experiment to understand the relationship
between speed of our service and its usage.
So in this experiment what we did was we inserted an extra
delay in serving our search results to our users.
And then waited to see, does it actually impact usage?
Are the users using our service more, or less?
And the answer was actually very clear.
We saw a significant drop in the usage of Google search,
even after adding a delay as little as a few hundred
milliseconds.
And I think that applies to all the websites out there.
A slow website results in lower usage, less revenue, and
obviously poor user experience.
So we all need to make sure that all websites are fast and
responsive to our users.
Of course, you can't make things faster if you don't
know how to measure it.
So that's the topic of our talk today.
How do you measure the speed of a
website in the real world?
So we will talk about three things today.
We'll start by talking about the HTML5 web timing standard,
which makes speed measurement a web standard.
After that we'll talk about how Google Analytics uses the
HTML5 web timing standard in its Site Speed feature to make
real world speed data available to its publishers.
And finally, based on the data that we collect in Google
Analytics, we want to share some statistics on how fast
the internet is today.
All right.
So HTML5 web timing API.
This is a new standard which we're working on in the W3C.
Google and a bunch of other companies are involved in
standardizing this as we speak.
Until very recently, it was not possible for us to measure
the end-to-end user-perceived time it takes
to load a web page.
It was just not possible, because a lot of time is
actually spent doing things like the DNS resolution
for your website, connecting to the website,
sending a request for the web page, and then
starting to receive the response.
And JavaScript actually has no visibility into all of this
time, or where it is being spent.
So this standard actually allows us to collect that
end-to-end time that it takes to load a web page.
And you can do this from any browser, irrespective of what
device you're on.
Whether you're on a mobile phone, or whether you're on a
laptop, or a notebook, it doesn't matter.
All browsers that support this standard will allow you to
collect speed data in a consistent manner.
The specification consists of three different standards.
Navigation timing, resource timing, and user timing.
Before I go into the details of this, first let's take a
look at what happens when you load a web page in a browser.
So in this waterfall view we're loading
the YouTube's homepage.
And this figure shows all the different things that happen
when you load that web page.
The very first thing that the browser does is connect to
your web server and download the root web page.
In this case, the YouTube homepage.
index.html.
Once that root page comes back, the browser
starts to parse it.
It executes any scripts that it finds on the page, and it
discovers new resources that are required by this page.
And it will start downloading them as well.
And this process continues until all the scripts on the
page have been executed and all the individual HTTP
objects have been downloaded, for example images or CSS
files or scripts.
And once all of that is done, that is when the page is
considered fully loaded.
In this chart, it's right at the end of the chart.
The navigation timing API is the first standard; it
allows you to capture this end-to-end time, starting from
when the user navigates to the page to when the
page is fully loaded.
In addition, it also gives you very detailed timing
statistics for the root web page.
For example, how long did it take to do the DNS resolution
for youtube.com in this example?
How long does it take to establish the TCP connection
to the web host?
And so on and so forth.
Here is a list of all the different timing attributes
that are available to you, via a simple JavaScript API.
The list of attributes contains network level
attributes.
For example, how long does it take to do the DNS resolution, or
how long does it take to make the TCP
connection to your web server?
It also includes HTTP level attributes, like how long does
it take to download the response,
the actual web response?
And finally it also contains document level attributes,
like how long does it take to fully render the web page?
All of these attributes are available via a single
JavaScript object, called window.performance.timing.
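As a rough sketch, those attributes can be read straight off that object. The attribute names below come from the Navigation Timing specification; the helper function and its metric names are our own, written as a plain function so it can also be run against a recorded timing snapshot:

```javascript
// Derive a few metrics from a Navigation Timing object such as
// window.performance.timing (all values are millisecond timestamps).
function navigationMetrics(t) {
  return {
    // End-to-end user-perceived load: navigation start to load event end.
    pageLoadMs: t.loadEventEnd - t.navigationStart,
    // DNS resolution time for the root page.
    dnsMs: t.domainLookupEnd - t.domainLookupStart,
    // TCP connection establishment time.
    connectMs: t.connectEnd - t.connectStart,
    // Time from sending the request to receiving the first byte.
    ttfbMs: t.responseStart - t.requestStart
  };
}

// In a supporting browser you would call:
//   navigationMetrics(window.performance.timing);
```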
Satish will show us a demo of navigation timing in action.
SATISH KAMBALA: Hi all.
Here I have my Chrome browser open.
Like any normal user, I'm going to a website.
Let's visit news.com.
I'm waiting for the page to be loaded.
Still loading.
Now the page load has completed.
In this browser, I'm opening the JavaScript console by going
to the wrench icon, then Tools
and JavaScript Console.
So here let's see whether [? event load ?] is
available or not.
Window.performance.timing.
So you can see this object and all the attributes that were
shown in the diagram.
So just look at some timings from this object.
Storing [INAUDIBLE]
variable.
So loadEventEnd tells us when the load finished, measured
since navigationStart.
And I can see that this page has taken about
4.3 seconds to load.
Next, responseEnd minus responseStart.
So this represents the time for downloading the HTML of
the root page, which is very fast.
So the main HTML is fast.
But as you can see in the screen capture, the overall page
has taken a lot of time to load.
So this data is available for any website, in all the
browsers that support navigation timing API.
So [INAUDIBLE] for the demo.
ARVIND JAIN: Thank you Satish.
So as you can see, navigation timing allows you to capture
the end-to-end page load time for your web page, as well as
detailed timing stats for the root page only.
And as you saw in the previous
example, it took less than 100 milliseconds to load the main
root page, which is because we always serve YouTube from a
location close to you.
So it doesn't take that much time to actually
serve the root page.
Most of the time is actually spent in downloading all of
the resources.
And this is true for most web pages out there today.
On average, I think web pages have about 50 or so
resources per page.
It's not atypical for popular web pages to have even more
than 100 resources.
And most of the time is actually spent downloading
those resources.
And that's where our second specification comes into play.
This is the resource timing specification.
This allows you to collect timing information for every
individual resource on the page.
And just like navigation timing, you have a variety of
different attributes available for every individual resource
on the page.
These include network level timing stats, like the time to do
the DNS resolution for the particular resource and the time
it took to do the TCP connect for the resource, as well as
HTTP level attributes like the time to
download the actual resource.
This data is available for every single resource that's
on the page.
It could be an image or a script or object, iframe, svg.
All the different object types.
And just like navigation timing, this information is
available through the JavaScript API.
And we'll take a look at an example to see how that works.
So in this particular example what we're going to do is, for
this given web page, we want to see how long it takes to
download every image on the page.
So the first thing that we do is we call this method called
getEntriesByType on the window.performance object.
This gives us a list of all the individual resources that
are on the page.
Well, actually, it gives you a list of all the resource
timing objects that the page has saved for you.
And next we're going through that list and checking for the
resources which are of the type image, using the
initiator type field.
And all we do in this case is just alert the time it took to
actually download that resource, which is the
difference between the responseEnd and
the startTime attributes.
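The loop just described can be sketched like this. The entry attribute names follow the Resource Timing draft; the helper function is our own, taking the entry list as a parameter so the filtering logic can be shown on its own:

```javascript
// Given a list of Resource Timing entries, return the download time
// of every image resource on the page, keyed by URL.
function imageDownloadTimes(entries) {
  var times = {};
  for (var i = 0; i < entries.length; i++) {
    var r = entries[i];
    // initiatorType tells us what kind of element requested the resource.
    if (r.initiatorType === 'img') {
      // responseEnd - startTime is the total time to fetch this resource.
      times[r.name] = r.responseEnd - r.startTime;
    }
  }
  return times;
}

// In a supporting browser:
//   imageDownloadTimes(window.performance.getEntriesByType('resource'));
```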
So with navigation timing and resource timing, you can
capture detailed timing statistics for the overall web
page and for all the individual
resources on the page.
But as you all know, the web is changing.
And [INAUDIBLE], we can't really think of just resources
when we want to capture timing.
Many web pages are now applications.
And developers need the ability to measure the time it
takes to do logical operations on their web pages.
And many times those logical operations do not necessarily
correspond to HTTP level resources.
And that's where the user timings API comes into play.
User timing allows you to capture the time it takes you
to do a specific logical operation on the page.
It provides a buffering mechanism for you to capture
any arbitrary start point and end point in the application.
And on top of that, it also provides you with a high
precision timer, which is useful when you do detailed and
fine-grained timing measurements.
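A minimal sketch of that mark-and-measure pattern, using the calls from the User Timing specification (the names 'results-start' and so on are our own placeholders):

```javascript
// Mark two arbitrary points in the application...
performance.mark('results-start');
// ... application work happens here, e.g. rendering search results ...
performance.mark('results-end');

// ...then measure between them. The measurement is buffered by the browser.
performance.measure('render-results', 'results-start', 'results-end');

// Read the buffered entry back; duration is in high-resolution milliseconds.
var entry = performance.getEntriesByName('render-results')[0];
console.log('render-results took ' + entry.duration + ' ms');
```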
So collectively, these three APIs, navigation timing,
resource timing, and user timing, they allow you to do
detailed timing instrumentation for your web
pages, and collect this data from the real world.
These standards are not all implemented
yet, by all the browsers.
Navigation timing is available in most modern browsers now,
and resource timing and user timing implementations are
also expected soon.
So with this we want to switch gears.
And now Satish is going to talk about how we use the web
timing API in Google Analytics to provide real world speed
data to our publishers.
SATISH KAMBALA: Thanks Arvind.
So it's great to have all this data right inside the browser.
But think about every website owner having to change their
website to read the data from the browser, send it back to
their servers, process it, and then view the reports.
Basically, getting a view of real world web performance
that way is a very tedious process.
So that's where we offer a service called Google
Analytics Site Speed, which does this automatically for you,
and provides this data without you doing any work.
So how many of you have used Google Analytics before?
OK.
I think most of you know it.
So as you know it's a service that lets you measure the user
engagement on your website.
Typically, to set up Google Analytics on your website, you
get an ID, and add a couple of lines of script that download
the Analytics script asynchronously.
They look like this.
You provide your Analytics ID via setAccount, and also call
trackPageview, which actually starts collecting user
engagement information on your website.
To get Site Speed set up, you don't need to do anything.
Analytics, by default, collects speed data for your
site at one percent sampling.
So if you have a large website, you can actually get
a substantial [INAUDIBLE]
for your site load time with this sampling percentage.
But if your site is small, like if you have 100 or 1,000
pageviews per day, we provide an API to increase the sample
rate for the speed collection.
This API is setSiteSpeedSampleRate.
You should call that API before calling trackPageview.
So it collects more speed data.
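A sketch of that ordering with the classic asynchronous ga.js command queue; 'UA-XXXXX-Y' is a placeholder account ID, and 50 is just an example sample rate:

```javascript
// Classic asynchronous ga.js command queue.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-Y']);
// Raise the Site Speed sample from the 1% default, e.g. to 50% of visits.
// This must be pushed before _trackPageview, since the sampling decision
// is made when the pageview is tracked.
_gaq.push(['_setSiteSpeedSampleRate', 50]);
_gaq.push(['_trackPageview']);
```

The snippet that asynchronously loads ga.js itself then goes below this queue, as in the standard setup.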
One thing I want to mention here about the sampling is we
do the sampling on a visit by visit basis.
So if a user comes to your website, and if he falls in
the sample, we track speed data for all the
pages in that session.
So that when you go back to the reports later, you can
analyze the user behavior on a session by session basis.
So we don't sample per page, we sample on a session by
session level.
So before seeing how you can see all this data, let's go
back to the diagram that we saw earlier
for navigation timing.
So the top bar here, it shows the page view process that you
saw earlier.
The bottom bar maps those attributes to the timings that
you can actually represent.
So the first thing is the redirection time.
If your page has some redirections before the final
landing page is reached, that is captured in
the redirection time.
Mostly, if you're getting referrals from search
engines or other websites, you can see how fast they're
sending users to your site by looking at this time.
DNS and connection times, they collect the network
attributes, as shown in the navigation timing.
And the other one, server response time, is an
important metric.
This measures the time from the moment the browser sends a
request to your backend until the first byte
arrives at the browser.
So it represents the time your backend takes to respond to
the user request, including the network time for the first
byte to reach the browser.
So if you want to understand your backend performance from
the real world users, you can actually look at this time.
And page download time is the time to
download the entire content.
So starting from the first byte to the last byte.
So this varies based on your site content and user
connection speed.
Frontend time is the time, after your root page is
downloaded, that the browser takes to parse the
HTML, execute the JavaScript, download any additional
resources that are there, like you were seeing earlier, and
basically render the whole page and show it to the user,
until the page loads.
I've basically scaled this diagram to show the relative
significance of these timings, based on what we generally
saw in our data.
But depending upon your site and your users, these
proportions can vary.
Let's look at how you can see all these reports, using a
demo in Analytics.
So I have Google Analytics open.
The URL is google.com/analytics/web.
At the top you can see several reports.
So Site Speed reports are available in the standard
reporting section.
Then you go to the content, and you can see Site Speed
here, which has all these reports.
So it has three reports.
Overview, page timings, and user timings.
Let's look at overview report now.
So I am hiding this so that you can see the full view.
So Analytics by default shows data for the past one month.
So overview report gives a quick summary of your website
performance for the past one month.
So it shows you all the timings that I explained
earlier, starting with average page load time and all the
other metrics that we saw.
You can actually change this graph to show any
metric that you want.
And overview also gives you shortcuts to see the speed
data on common dimensions like browser,
country, and per page.
Coming to the page, your website might have many pages,
and overall site speed load time might not give you enough
insight to improve your website.
So that's where page timings report comes in.
It lets you look at your speed data on a page by page basis.
So the page timings report shows a graph similar to the
above, and it also shows you a table.
And this table gives you a list of page load times for
the top 10 pages.
So we show the average page load time alongside pageviews,
sorted by pageviews.
So we can actually see the top 10 pages for your website and
see how they're performing in terms of loading.
In addition to the page load time, we also show some
important metrics, like bounce rate and percent exit.
So these metrics actually vary from page to page.
So if your page is taking a long time, people usually
abandon it before the page loads, and go to
another site.
So these metrics are important on a page by page basis.
We show them alongside the page load time, so you can get
a view of how this is affecting the other metrics.
So in this analytics reporting, we
have three tabs here.
Explorer, Performance, and Map Overlay.
So below the Explorer tab we have another section called
Technical, which shows the other metrics, basically.
Page load time is the first metric to see overall.
And if you are caring about a specific metric, like server
response time, you can actually come to the technical
section and then see them.
This is similar to the site usage.
It has a graph, and you can change to any metric.
And it also gives the table of pageviews and all the other
timing metrics.
So you might be wondering: in all these reports I've shown
until now, I'm only showing average page load time.
But the latency distribution is not a normal distribution,
so the average does not represent your site's
performance for all users.
So if you have some users who are experiencing large page
load times, that can actually skew your average and might
actually give you a wrong picture of your website load.
So we do some filtering, like
removing extreme outliers.
If there's a load time of more than 10 minutes, we
remove it from these calculations.
But if there are some users who take one or two minutes,
that can actually skew your average.
So that's where this Performance
tab comes into picture.
So Performance tab shows you the latency distribution for
your site, across different speed buckets.
So in this diagram I can see my latency distribution.
The first bucket is zero to one second.
10% of the users are experiencing less than one
second page load time, which is pretty good.
And 40% of users are experiencing
one to three seconds.
So I can actually see from this, even though my average
page load time is 3.3 seconds, around 80% of the users are
experiencing latency below three seconds.
So I feel pretty confident, OK, my site is doing OK.
But there are some users who are experiencing
more than one minute.
So I might want to look at them and then see whether I
can do something for them.
Finally the Map Overlay here gives you the site performance
data on a region by region basis.
And why is region important?
Because most of the time you host your website in one
region, but you might get users from different locations
in the world.
So you not only want to look at the user experience based
on speed buckets, but you want to see how users in other
regions are experiencing your website.
So here, similar [INAUDIBLE] reports.
We show the top 10 countries your visitors come from,
and show the page load time for those countries.
And the map gives a quick summary.
It has color gradients.
A dark color shows that the people in that
country are experiencing high latency.
A light color shows they are fast.
So we just went through an overview of all these reports
and analytics.
But I wanted to give an example of how my friend used
these reports to see his latency has changed as he made
improvements to his website.
So he's a good blogger.
And also his [INAUDIBLE], so he keeps making performance
improvements to his site.
And let's see how he has seen the speed change over time.
So he had a New Year's resolution
to improve his blog.
So around the end of December, he changed his blogging
provider, moving from one platform to another platform.
So here I am comparing January data
with December.
So the blue lines represent January, the new provider, and
the orange lines represent December.
So it shows that the average page load time has improved
by 25% after this change.
So in January it shows that there is a significant
improvement.
So we can see that the change has really worked for him.
So there is an improvement in the average page load time.
Not only that, there is an improvement in page views.
Which could be coincidental or as a result of speed.
But it's a good thing for him to change the provider.
So this is just a change in the average.
Let's see how the distribution has changed.
So the orange bars, that is before the change.
You can see that they're scattered over one to three
second buckets, and three to seven seconds.
So people are evenly distributed.
Some people are even experiencing
seven to thirteen seconds.
But after the change, that color has
shifted to the left side.
Most of the users are now seeing zero to one seconds,
one to three second buckets.
And some users are still in the three to
seven bucket range.
So this gives me an idea: OK, not only has the average
improved, but most of the users have also shifted to the
faster buckets.
So one thing is, his old hosting provider used to
generate the web pages dynamically.
So there's a database query happening when
somebody goes to his website.
And the new provider actually precomputes the pages and
then stores them statically.
So because this is related to the backend time, let's see
how the server response time has changed.
So if you look at the orange bars, which is the old hosting
provider, he's generating the pages dynamically.
His distribution is bimodal.
So there is a set of users in the 0.1 to 0.5 seconds, and
there is another set of users in the one
to two second bucket.
So this can be explained by saying if the queries in the
backend are actually cached, they are getting the pages
faster and they're falling in this bucket, 0.1
to 0.5 second buckets.
Or if the query is not cached, they're actually falling into
the second bucket.
After his shift, because the pages are generated
statically, they are moved to the faster buckets.
So that can be seen in this distribution.
So these reports actually help you understand how these
changes are happening as you make site improvements.
One interesting thing: if you see here, in the first
half of January he saw varying page load times.
So he did some investigation, and he found that in the Asian
region the [INAUDIBLE]
was erratic and it was timing out.
That actually caused this increase.
So he added another to-do: fix the social
widgets on his blog.
So he told me that he has fixed it recently.
So let's see whether that made a difference.
So here is the average page load time
data from May to June.
There is an annotation that he added.
So he tells me that on May 25th he changed his site such
that he deferred the loading of all social widgets on his
page until after the page actually loads.
So as you can see, after this the average page load time is
hovering around two to three seconds.
Compared to eight to nine seconds before, that is a big
improvement.
So you can observe all these improvements in these reports.
Let's get back to the slides.
So this is just a summary of what we have seen so far.
You have an overview report, page timings, there is a
technical section, the histograms, map overlay.
So the page timings and overview reports give you, by
default, without doing any work, your site's page load latency.
But as you learned earlier, most of the user interaction
happens after the page loads.
Because nowadays the websites are complex and they're mostly
behaving like applications, you want to measure the time
after the page actually loads.
That's where user timing comes in.
It lets you define custom timings.
You can define any action you want in your website,
and measure its timing with this.
So if you have Ajax actions that are happening, you can
measure those timings.
If you want to get detailed performance measurements, like
on your page you want to measure how much time each
widget is taking to load, or how much time CSS is taking to
load, you can do that with the detailed instrumentation.
Or if you have a site like a news website, where they're
not concerned about the page load time, but about how long
it takes to show the main article to the user, they'd want
to measure that user-perceived load time and report it
to Analytics.
So user timings allows you to do all of that.
And the way to do it is using an API called trackTiming.
So this API has five parameters.
The first three of them are required,
the other two are optional.
Let's go through one by one.
So the first parameter is category.
What we have noticed, even inside Google, where we
try to make all our apps fast,
is that most people start instrumenting,
and then keep adding a lot of variables, and they
pile up over time.
So we have made it mandatory to specify a category as the
first parameter, so you can group your variables into
separate logical groups.
So for example, if I take the YouTube website, I can define
categories like watch page, homepage, search page,
settings page.
So at a high level, I can see which page I am
tracking the latency for.
The second parameter is a variable, which is something like
the JavaScript load time or [INAUDIBLE]
action load time.
And the third is the time itself.
We provide one millisecond resolution.
So you can pass the time to trackTiming at a millisecond
resolution.
And you can see all of that in the reports.
The other two optional parameters are label and
sample rate.
So here a label is any arbitrary text that you can
attach to your user timing data.
So the way to use labels is if you're doing some
experimentation.
Google web search, for example, runs thousands of
experiments every day, and you may want to see how one
experiment is performing against others, or how your
experiment is changing the speed data.
So you can attach your experiment as a label, and see
the data for a particular category and
variable along with it.
So this is very powerful, because you don't want to do
some speed change without knowing how
much it actually improves.
So you can do an experiment and attach some label and see
how it performs, and then make it 100%.
The sample rate here is similar to the Site Speed
sample rate.
So the reason we provide the sample rate API is that if
you send too many hits to Google Analytics, the reports
will take a long time to load when you
look at them.
And we also do some sampling while generating the reports.
So if your site is small, you should definitely increase the
sample rate.
But if your site is large, like more than one million or
10 million pageviews, you should keep the default sample rate.
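Putting the five parameters together, a call might look like this; the category, variable, label, and sample rate values are made-up examples, and `_trackTiming` is the classic ga.js command name:

```javascript
// category, variable, time in ms, then the optional label and sample rate.
var _gaq = _gaq || [];
_gaq.push(['_trackTiming', 'search page', 'results-load', 1250, 'exp-A', 20]);
```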
Let's see how this track timing API can be
used with an example.
So this example tries to measure the time to load a
[INAUDIBLE]
that is loaded through Ajax.
So I defined a class called TimeTracker.
So I'm using this class as a helper class that stores the
category, variable and other bookkeeping for me.
[INAUDIBLE]
class.
And when I make xhr request, before I send the request I
record the start time.
And after I get the response and validate that it's a
success, I measure the end time.
And then I kind of send the data to Google Analytics by
calling this trackTiming.
So the main things are: you just record your start time,
record your end time, and then just make a call to
trackTiming.
It's as simple as that.
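Here is our reconstruction of that example. The TimeTracker class name comes from the talk; the method names, the 'load-widget' variable, and the '/widget' URL are our own assumptions:

```javascript
// Helper that stores the category/variable and does the time bookkeeping.
function TimeTracker(category, variable) {
  this.category = category;
  this.variable = variable;
}
TimeTracker.prototype.recordStart = function () {
  this.startTime = new Date().getTime();
};
TimeTracker.prototype.recordEnd = function () {
  this.endTime = new Date().getTime();
};
TimeTracker.prototype.elapsed = function () {
  return this.endTime - this.startTime;
};

// Measure an Ajax fetch and report it to Google Analytics (browser-only;
// assumes the standard _gaq queue is already set up on the page).
function loadWidget() {
  var tracker = new TimeTracker('homepage', 'load-widget');
  tracker.recordStart();
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      tracker.recordEnd();
      _gaq.push(['_trackTiming', tracker.category, tracker.variable,
                 tracker.elapsed()]);
    }
  };
  xhr.open('GET', '/widget', true);
  xhr.send();
}
```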
So it's very simple to do these user timings using
trackTiming API, but this is also very powerful.
You can use this trackTiming API to measure any timing.
For example, you have a web page, and you want to measure
how long the user takes before he interacts
with your web page.
So you can measure the time using this trackTiming API.
So at this point you may be wondering, you have user
timing specification in the browser, and we have user
timings in the Google Analytics.
How do these two work together?
These two actually complement each other.
Because if you notice in this example, I am creating a class
called TimeTracker that is doing a lot of
bookkeeping for me.
Storing the category, variable, recording the start,
recording the end.
And if you noticed, user timings API actually does the
same thing for you.
So it provides an API to measure the start and end
measures, and [INAUDIBLE] through them.
So the advantage of that is you don't need to pass these
variables around.
I don't need to create an object and pass it around my
JavaScript code, which kind of makes the whole code look
uglier and less readable, because I'm passing performance
related data throughout.
But if you have it in the browser, at any point in your
code you can say start measurement for a particular
name, variable name, and [INAUDIBLE] measure, and then
report it using the track timing.
So user timing API can be used for buffering the data, and
track timing can be used to send the data to Analytics.
So you can make them work together.
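A sketch of that combination: the browser buffers the measurement via the User Timing API, and _trackTiming sends it to Analytics. The 'social-widgets' names and the 'homepage' category are our own placeholders:

```javascript
// Let the browser do the bookkeeping with the User Timing API...
performance.mark('social-widgets-start');
// ... the widgets finish loading somewhere here ...
performance.mark('social-widgets-end');
performance.measure('social-widgets',
                    'social-widgets-start', 'social-widgets-end');

// ...then report the buffered measurement to Analytics with _trackTiming.
// (_gaq is the command queue set up by the standard tracking snippet.)
var _gaq = _gaq || [];
var m = performance.getEntriesByName('social-widgets')[0];
_gaq.push(['_trackTiming', 'homepage', 'social-widgets',
           Math.round(m.duration)]);
```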
So I'm not going to demo the user timings report, because
it's very similar to the page timings.
You have the same three tabs, Explorer, Performance, and Map
Overlay tabs.
And if you see on the bottom, we have a dimension to slice
and dice the data by.
So like I was saying, we provide several ways to look
at user timings data.
We have a category at a high level, we have a label as a
tagging mechanism.
So you can look at a particular variable data and
compare how it is doing across different categories.
Like how long my social widgets are taking to load on
my homepage, compared to the watch page.
Or you can see how a particular variable is doing
across different labels.
You can plot them together and see which one is better.
So you can do that powerful slice and dice using these
dimensions.
And in the Performance tab, we have a single set of buckets
that are provided for all the timings.
These buckets are designed in such a way that we can measure
granular timings, like JavaScript things that usually
are in the range of 10 milliseconds, things that
happen on the client side.
Or you can also measure large load times that
take a lot of minutes.
We provide buckets starting with 0.1 seconds, up to one
minute or more.
At a high level, we show some coarse buckets first.
But if you want to dig deeper we provide sub-buckets, so you
can expand those buckets and then see
how those are varying.
So until now we have seen how page timings and user timings
give you a complete tool set for measuring your website
performance.
But that's not all that Google Analytics can do for you.
It can do even more.
You can create custom reports.
By custom reports I mean you can create a table of any
metric that you care about, along any dimension that you
are really interested in.
For example here, let's take a look at this table.
Visitor Caching Info.
I'm plotting my average page load time based
on the visitor type.
Visitor type is a standard dimension that is available in
the Analytics.
So I can see my returning visitors are experiencing
faster pages compared to new visitors.
This is obvious because some of the resources on my page
might be cached, so they see faster load times.
So this way you can actually plot your latency versus some
revenue metrics, or some goals that you're interested in.
You can create any custom report that
you want using Analytics.
Not only that, you can actually create dashboards.
This is a simple dashboard which has all these four
tables that I created.
So these dashboards can be accessed in the home
[INAUDIBLE].
So the standard reporting is where we saw the reports.
So the dashboards can be seen in this home page.
And then we provide an API to expose all the data that you
have seen in the reports through the API.
So if you don't like the reports, you can actually pull
the data using the API and draw your own visualizations.
You can plot whatever you want in whatever way you want, on
your web page server, on your websites.
You don't need to come to Google
Analytics to see this data.
Another powerful feature of Google
Analytics is advanced segments.
So what advance segments let you do is see any reporting
Google Analytics for a segment of users.
You can define any segment of users the way you want.
You can create the segments very
easily, in multiple steps.
Here I'm again using some default segments that are
already created.
So there is a segment called search traffic, which shows
the data for traffic coming from the search results.
And I have a direct traffic segment, which shows data for people who visit my site directly.
And I am seeing the technical section of all the metrics
across these two segments.
You can do this segmenting across any reports: user timings, page timings, or any other [INAUDIBLE] metrics.
And these reports are good; you can see all the data.
But you won't be looking at the reports 24 hours a day.
If you are speed-conscious, you want to be notified when something changes significantly in your website performance.
So that's why Analytics allows you to set up alerts.
So you can set alerts to be notified when something happens to your site, and it automatically sends you an email or an SMS.
Whatever you set up.
Here's an example.
I'm asking it to alert me when my average page load time increases by more than 10% compared to the same day in the previous week.
Because your page view traffic and page load time graphs vary based on the day of the week.
On weekends you might have more or less traffic, depending on your content.
So for the same day of the previous week, if my page load
time changes by more than 10% I want to be sent
an email or an SMS.
This is what the resulting email looks like.
So it shows me that my average page load time has
increased by 67%.
Not only does it give me an alert, it also gives me some
reasons why this alert could be happening.
This is, again, based on the correlation.
So this may not be the actual reason, but you can actually
take this as a starting point and debug why
it could have happened.
So in this case, people from India, New Delhi city, are
experiencing high load times, which caused
my average to increase.
So if I have a backend in that region, I can go and see if there is a problem with my setup, or if there's a network issue in that region.
I can do that kind of analysis.
So far we have seen how Google Analytics gives you several ways to see the performance of your website.
But if you have some apps--
native apps, like iOS apps or Android apps--
there is a good talk tomorrow on how Analytics helps you measure the end-to-end value of your apps.
You should attend that.
It's at 11:30 tomorrow morning.
So measuring your speed is the first step of improving your
website speed.
Because this talk is about measurement, I'm not going to go into how you can optimize your website.
But I'll give you some pointers to some tools that I
frequently use.
WebPagetest is a nice tool to see your website's performance in a lab scenario.
So you can simulate a user, using a browser, coming from a
location, and see how your page loads.
It gives you waterfall charts.
In fact, one of the waterfall charts that we were showing earlier was taken from WebPagetest.
You can also see a filmstrip of the web page loading process.
It takes snapshots of your web page at several points during the load.
So if it takes 10 seconds to load, that'll be like 50 snapshots, from zero to ten seconds.
And you can actually use that filmstrip to see when the content of the page gets loaded.
So this is a very good tool for analyzing your web page
and how it behaves.
And then there's Chrome Developer Tools, which I used earlier to show the data in the window.performance.timing object.
It is also very powerful for analyzing your web pages.
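As a small sketch of what you can do with that object in the console, the useful numbers come from subtracting its millisecond timestamps. The timing values below are hard-coded samples, since the real object only exists inside a browser.

```javascript
// Hedged sketch: derive common metrics from Navigation Timing.
// In DevTools you would read window.performance.timing directly;
// this hard-coded object stands in for it so the sketch runs anywhere.
var timing = {
  navigationStart: 1340000000000,
  fetchStart:      1340000000005,
  responseStart:   1340000000350,
  loadEventEnd:    1340000002100
};

// Time to first byte and full page load, relative to navigationStart.
var ttfb = timing.responseStart - timing.navigationStart;     // 350 ms
var pageLoad = timing.loadEventEnd - timing.navigationStart;  // 2100 ms
```

These are the same subtractions Analytics performs when it reports page load time for you.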
There was a talk about it Tuesday morning.
If you missed it, the talk was recorded, so you can go and watch it on YouTube later.
So analysis is good for understanding how your website is doing, and optimization is the next step.
We have a tool called Page Speed Insights that analyzes your web page and gives you suggestions to improve your speed, based on the best practices that we have learned over the years.
So you don't need to do anything.
If you go to that link and enter your website, it gives you [INAUDIBLE] suggestions, ordered by priority.
You can then go ahead and change your website to implement those suggestions.
But there are tools which actually do this suggestion
implementation automatically for you.
So we have an Apache module called mod_pagespeed.
So if you use that module, it implements those page speed
suggestions automatically.
It rewrites your web page, and then serves it.
So you get the benefit without much work.
And we also have Page Speed Service.
If you point your domain name to the service, it does the serving as well as the rewriting for you.
So you get the benefits of Google infrastructure, where your website will be served from a location close to the user, as well as these best practices applied to your website content.
If you have a Google App Engine app, we have made it
even easier for you.
There is a simple checkbox where you can enable Page Speed Service for your App Engine app.
And it will do what Page Speed Service does for your App Engine app automatically.
And if you want to learn more about App Engine, there is a session tomorrow morning at 11:30, in room [? 304, ?] I guess.
You can learn more there.
So with this I will hand over to Arvind, who has some
interesting data, and will tell you how fast the web is
from our point of view.
ARVIND JAIN: Thank you Satish.
So Google Analytics is a really good tool to measure
the speed of your website in the real world.
And I think more importantly than that, it actually allows
you to understand the relationship between speed and
other business metrics that you care about.
For example, how much revenue you make, or how many page views you get for your website.
So it's a great tool.
You should work on making your site fast, and then you can convince yourself: look at the data in Google Analytics and see how it actually improves your other business metrics.
Google Analytics has lots and lots of users.
We have millions of websites that
actually use Google Analytics.
And from that we've been collecting a lot of data on
the performance of these websites.
If you look in aggregate, that's a good enough data set
for us to proxy the performance of the entire web.
So that's what we did.
We did a study on one [INAUDIBLE] of Site Speed data from Google Analytics.
This was done using data from publishers who actually
allowed us to do these kinds of studies, by opting into
aggregate analysis.
And these are the results from that study.
So first of all, the web is not fast.
I mean I think this is one of the things that we all need to
work on, make sure that our websites are really fast.
On desktop it takes well over six seconds, on average, to
load a web page.
That's a lot of time.
Just imagine sitting for six seconds, not doing anything,
just waiting for the page to load.
On mobile, it's even worse.
It takes well over 10 seconds on average to load a web page.
And by the way, this mobile data is from Android ICS
builds only.
So these are really the best phones that are out there.
And even those are quite slow.
The other chart shows the different histograms.
But the story remains the same.
The web is quite slow as of today.
This is another interesting aggregation.
Here we show how fast the web is for all
these different countries.
There's actually a link to a heat map here, where you can see the performance of the internet in any given country.
The data is sort of obvious.
It shows that the web is fast in countries where you would
expect it to be.
I think it's sort of well-known that countries like
Japan and Sweden, they have really good broadband
connections to the home.
Often in excess of 100 megabits per second.
And that sort of shows in this data.
But why is that important to you?
One of the things-- we at Google really care about this data.
And we want to make sure that our web services have the same
level of responsiveness, no matter where the user is.
So if the user is coming from a slow connection, or if a
user is coming from a really fast connection, we want to
make sure that we provide them with the same responsive
experience, even if that means that we have to sacrifice on
some features.
So we would often go and cut some features from our
products, so that we can still maintain those low page load
times for our users.
As an example, for Google Maps we can serve them low-resolution tiles.
For Google search, we can actually remove some of the
features which are not commonly needed, or we can
defer loading them and make them available later.
Gmail, again, has the same kind of experience, where for slow connections we would actually show a more traditional, web 1.0 style email interface, where you get to see your emails really fast, but you won't get all the cool features, like Hangouts and whatnot.
So that's sort of the design philosophy that we follow.
And we often notice that that is actually very useful, and
helps us keep those users in those countries where the
connections are slow.
And that's something for you to consider as
well, for your websites.
This is yet another aggregation.
This shows the speed of the web across
different industry verticals.
And sort of interesting--
entertainment sites, as an example, tend to be fairly
rich, with a lot of images.
And that sort of materializes into slow page load times.
And the same question emerges again: why should you care about this?
Well, it's actually good to know where your site stacks up, as compared to your competitors, for example.
So you can go to Google Analytics, look at how fast
your website is, and then compare the performance of
your website with these aggregates.
For example, you can find out whether your site is typical,
or slower, or faster than other websites in your region.
Or whether it's slower or faster compared to other websites in the same business as yours.
So this is all we have today.
This concludes our presentation.
Thank you all for coming.
And we would love to take any questions that you might have.
AUDIENCE: Question about the custom metrics
that you can collect.
I saw in your example you were measuring the beginning and
end of a particular operation.
Is there a way to just capture how far you are from the very
beginning of when the web page began to load?
Say that you're loading some resources via Ajax shortly
after the page loads, so that it's comparable to--
it's on the same timeline as the rest of the performance
events, this long to get to this step,
this step, this step.
SATISH KAMBALA: So the Navigation Timing API has this fetchStart attribute, which tells you when the page started to fetch.
So you can base all of your event times with respect to fetchStart, and compare them one by one.
AUDIENCE: OK.
Cool, thanks.
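A minimal sketch of that suggestion: record absolute timestamps for your own events and subtract fetchStart so they land on the same timeline as the page load. The timestamps here are hard-coded samples standing in for window.performance.timing.fetchStart and Date.now().

```javascript
// Hedged sketch: place custom events on the page-load timeline by
// expressing them relative to fetchStart. In a browser, fetchStart
// comes from window.performance.timing.fetchStart; this is a sample value.
var fetchStart = 1340000000005;

function sinceFetchStart(timestampMs) {
  return timestampMs - fetchStart;
}

// e.g. a hypothetical Ajax resource whose load finished at this timestamp:
var ajaxDone = 1340000001505;
sinceFetchStart(ajaxDone);  // 1500 ms after the page began to fetch
```

With every event expressed as an offset from fetchStart, your Ajax steps become directly comparable to the built-in navigation timing milestones.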
AUDIENCE: Ads are often a major issue with web
performance.
And they're also very variable.
Is it possible to remove those from those timings, so you can check your pages outside of ads, in case the ad stuff is not something that you're able to control?
ARVIND JAIN: Yes.
I think that's a difficult thing to do.
Because when you load a web page, your ads are actually
just one of the 100 resources, for example,
that have been loaded.
And it's really hard to tell what would happen if those ads
were not there.
You can certainly look at the--
there are other tools.
For example you can use WebPagetest, or you can use Dev Tools, to look at the waterfall and see whether your ads are getting in the way of your actual content loading.
So that's one of the ways to actually see whether ads make
things worse.
Now the other thing to do is you can actually measure the
performance of your ads by themselves.
For example, you can use user timings to extract the time from the start of an ad request to when the ad request actually comes back.
That tells you how long the ads actually take.
And if your ads are blocking, like if they actually block the content of the web page, that'll show the actual impact of those ads on the web page's performance.
So you can actually look at the data, and you can manually
definitely try to figure out the actual impact of ads on
the web pages.
The right thing, in my opinion, is to actually use
ads properly.
For many ad networks, and definitely all Google ad products these days, the recommended way to include them is to load them asynchronously, so that they don't get in the way of your web page's performance.
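A sketch of that asynchronous pattern is below. The ad script URL is a placeholder, and the tiny `document` stub exists only so the snippet can run outside a browser, where the real DOM would provide it.

```javascript
// Hedged sketch of the standard async script-loading pattern, so an ad
// script does not block the rest of the page from rendering.
// Minimal DOM stand-in (in a real page, the browser supplies document):
var inserted = [];
var document = {
  createElement: function (tag) { return { tagName: tag }; },
  getElementsByTagName: function () {
    return [{ parentNode: { insertBefore: function (el) { inserted.push(el); } } }];
  }
};

(function () {
  var ad = document.createElement('script');
  ad.async = true;                                  // key bit: don't block parsing
  ad.src = 'https://ads.example.com/show_ads.js';   // placeholder ad script URL
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ad, s);
})();
```

Because the script element is created and inserted with `async` set, the browser fetches and executes it without holding up the rest of the page's content.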
Seems like that's it.
So thank you again.