STEVE: So welcome, everybody, back.
This is the dreaded after-lunch event
when you're all in a carbohydrate coma.
So hopefully you'll all wake up with the fierce debate
that is about to be ignited here.
The inventor of the Onslyde system,
Wesley, assures me it's going to be working.
So please put your hand up and give an opinion.
You don't have to stand up just to ask a question; stand up and give an opinion.
We want to hear what you think as well.
Let me quickly introduce you to the panel.
Actually, we'll start from this side.
Pat Meenan from Google, the inventor of Web Page Test,
basically the best page load performance
testing tool out there.
Wesley Hales from Shape Security,
front-end developer, inventor of Onslyde and loadreport.js.
We have Luke Blaney from the FT, did a lot of work
on speeding up the FT way back and is a big fan of Varnish,
apparently.
We have Peter Hedenskog, from Cybercom Group,
creator of sitespeed.io and Browsertime.
And last, but not least, we have Andy Davies, from NCC Group,
Web Performance Velocity speaker, and the person who's
going to kick off with the introduction.
Andy.
ANDY DAVIES: Yes.
OK, so Steve's already introduced me, but I'm Andy,
and I'm frustrated.
And the reason I'm frustrated is, the web is too slow.
Or rather, too many sites are too slow.
Because we've been talking about why
we need to make web pages faster for a long time.
Steve Souders first invented the concept back in 2006, 2007,
he wrote his book, and we understand
how to make sites faster.
And this is one of the reasons why I really get frustrated.
We know that to make sites faster,
we need to minimize latency.
Whether that's using a CDN to move our content closer
to our users, whether it's speeding up our back-ends so
the time to first byte is quicker.
We know latency doesn't change.
It's governed by the speed of light.
So we have to move things closer to reduce it.
The other thing we need to do is cut down
the number of round trips we make,
because every round trip is bounded by how quickly we can
make it, or the latency involved.
And we do this by turning on gzip to compress stuff.
We use minification.
We merge resources together in a build system.
And one of the reasons we do all this
is there's a tension between how we build sites,
how we break them down into modular components,
and the best way to get them to the browser,
so the browser can load the page quickly and render it quickly.
So we can minimize latency, we can reduce the number of round
trips, and we can minimize blocking,
because some of our resources block.
If we've got CSS, we have to wait for the CSS
before we can render the page.
If we've got JavaScript, we have to wait for the JavaScript
to execute before we can move on.
We have web fonts, and depending on how we load web fonts,
sometimes we have to wait for them, sometimes we don't.
The other way of making sites load fast
is to maximize the value we get out the first round trip.
So when you hear Paul Irish on stage
at Fluent conference talking about, put in the first 15K,
this is effectively what he's talking about.
He's talking about, turn the initial, TCP initial congestion
window up to 10.
That way you get roughly 14.8K,
depending on how big your TCP segments are,
and push that out to the browser.
Everything you need to render the page on that first hit.
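The arithmetic behind that figure can be sketched directly (this assumes a typical 1460-byte MSS, i.e. a 1500-byte MTU minus IP and TCP headers; actual values vary by network):

```javascript
// Back-of-envelope: how much can the server send in the first round trip?
const initcwnd = 10;  // initial congestion window, in segments
const mss = 1460;     // assumed maximum segment size, in bytes

const firstRoundTripBytes = initcwnd * mss;
console.log(`${firstRoundTripBytes} bytes available before the first ACK`);
```

If the HTML and critical CSS fit inside that window, the browser can start rendering after a single round trip.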
So that's what the [INAUDIBLE] [? might ?]
do on the mobile site.
They have the idea of content, which is
the HTML and CSS needed to render the page.
They have enhancements, which is where
they insert JavaScript, so swiping.
And then they have the idea of leftovers, so advertising, analytics.
So we can make our pages faster.
But we seem to want to leave it to the browsers to do it.
And browsers are doing a great job,
and have done a great job over the years
of helping us make our pages faster.
The HTTP standard recommends we have only
two TCP connections to our server,
but we've moved on from that.
Typically browsers have four, or six, or sometimes even eight.
They open TCP connections in advance,
speculating that we're going to request something
from the same server,
so the connection is there, ready for us to use.
We have the preloader, which, while we're
busy waiting for CSS, or blocked on JavaScript to execute,
the preloader will go looking through the rest of the page.
Picking up the resources the page
will need to complete, prioritizing
them and downloading them in the most optimal order.
We've got faster JS engines, we've
got new image formats, faster layout engines,
browsers are doing a great job.
We also have new protocols.
HTTP doesn't fit very well on top of TCP.
So we have SPDY now, we have HTTP/2 coming.
And they will help improve the performance of our sites.
On some tests I did, I got a 30% uplift in performance
just by switching to SPDY, and that
includes the TLS negotiation overhead.
HTTP/2 may get rid of some of our build steps.
It may reduce the need to merge stuff.
We're still going to have the challenge,
though, that we have people on HTTP/1 as well as HTTP/2.
And we're going to have to work out how to optimize both.
But despite all these improvements, [INAUDIBLE]
across, we keep adding more and more stuff to our pages.
Our pages are getting fatter.
We're relying on browsers and networks
to overcome the performance hurdle.
And perhaps more worrying, we're
including more blocking resources.
Now, the number of times I see a tweet where somebody says,
I hate web fonts, because this is the experience I get,
with a page with no text on it.
And yet they've put web fonts on their own site.
So we're making our pages more and more complex
and delivering more and more of a challenge.
We can automate some of this optimization.
So we talk about merging stuff and image optimization.
We can use things like mod_pagespeed,
or Akamai's FEO service to take care of some of these
optimizations for us, to simplify our build systems.
But OK, so why aren't we getting faster?
And my view is, we don't measure enough.
We've got great tools.
This is sitespeed.io that Peter wrote.
We've got things like Web Page Test.
We can measure in the visitor's browser.
So we can measure the page level.
We can measure individual resources.
We can tag the page so we can measure
when something we're interested in appears.
But there's a lot of data, and we
need to move beyond which pages are slow to why are they slow.
This is a waterfall in Web Page Test.
There's actually some interesting things
in this waterfall.
The time to first byte is always about 200 milliseconds.
And this is because I ran the test
from Dulles in Virginia instead of the UK by mistake.
But I ended up looking at this for a while,
working out what was wrong with it, and it took me a while.
I seem destined to be a human pattern
matcher for network waterfalls for the rest of my life,
to help make the web faster.
We need to move on to how do we fix it.
And we know the order browsers need resources in
to be able to render a page and get it to users,
but we don't really have the tools to help us get there.
And finally, we think of performance
as a technical issue and it's not.
Or I would argue, there are technical aspects to it,
but we need to go back and think about performance
as an aspect of user experience.
We go to Fluent, or Velocity, or Edge Conference,
and talk about page load performance,
but we need to fit it in to the rest of the user experience
picture.
We'll A/B test whether a button should be green or blue,
but will we A/B test how our performance improves
if we remove our A/B framework, or our fonts?
We don't generally do it.
We need to design for performance.
If it's a user experience aspect, we
need to design it into the way we build sites.
It's just another constraint, like time, or budget, or design.
Clearleft and Tim Kadlec put forward
the idea of a performance budget:
you decide how long your page should take in slow network
conditions.
How big should it be?
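A performance budget can be enforced mechanically; here is a minimal sketch of a build-time check (the budget numbers and field names are invented for illustration):

```javascript
// Hypothetical performance-budget check that could run in a build pipeline.
const budget = {
  totalKilobytes: 300,   // max page weight
  requests: 40,          // max round trips
  loadSecondsOn3G: 5     // target load time on a slow network
};

function checkBudget(measured, budget) {
  const failures = [];
  if (measured.totalKilobytes > budget.totalKilobytes) failures.push('page weight');
  if (measured.requests > budget.requests) failures.push('request count');
  if (measured.loadSecondsOn3G > budget.loadSecondsOn3G) failures.push('load time');
  return failures; // an empty array means the build passes
}

console.log(checkBudget({ totalKilobytes: 350, requests: 38, loadSecondsOn3G: 6 }, budget));
```

Failing the build when the budget is blown is what turns performance from an afterthought into a design constraint.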
And we need to, as well as technical solutions,
we need to go and look for the human solutions.
And we've come a long way.
We've got much faster browsers.
We've got better networks.
But we need better tools.
We need to fit performance and look at it in holistic ways,
in the way we build websites.
And we also need to be careful about new technologies.
We've got HTTP/2 coming.
We don't really know, we know some of the performance
improvements it makes, but we don't
know what other impacts may come with it.
We've got web components that we talked about this morning,
and things like the potential blocking effect of
rel="import".
And we need to work out if we deploy web components
on a large scale, in a blocking format, what
are the issues it brings?
And now I believe our moderator will put to us your questions.
STEVE: So Andy's frustrated because he's
going to spend the rest of his career
as a human pattern matcher.
I don't think that fits in a tweet,
but it would be really good if it did.
OK so, kickoff, first question is basically
one on responsive web design from Peter O'Shaughnessy.
PETER O'SHAUGHNESSY: Hi.
Using branched loading, the Guardian
have made their new website responsive, but 42% smaller
on mobile than on desktop.
Does this end the performance arguments
against responsive web design?
Are there still cases when a separate mobile site is best?
STEVE: You're the front-end developer representative, Wes.
You want to [? touch that ?] with this one?
WESLEY HALES: It really depends on the goals
of the organization, I guess.
I mean, you can have a separate team sometimes.
Like when I was at CNN, we had an entire separate team working
on a separate mobile website for CNN,
and the teams were completely divided.
It was a really uncomfortable situation,
not being able to work across teams.
It was just the way they had siloed it off.
So like I said, it depends on the company, what
your goals are, but it does make sense for like CRUD,
if you have a heavy client-side application, single page,
whatever, then you would not want
to try to scale that down to put on mobile.
I mean, chances are, unless you're
trying to achieve the same thing on mobile,
but sometimes developers just build for desktop first
in a lot of cases.
STEVE: Pat, you're looking at web sites all the time,
Web Page Test.
PAT MEENAN: And I think it's going to get interesting.
We're probably going to see this a lot today.
Depending on how extreme you're trying to get,
like when we start talking about deliver
the above-the-fold content in the first 15K, right.
If you're going to try and do that on mobile,
the first 15K is fairly easy to get your one
image and your story or whatever.
What you're trying to deliver for your first
15K of your desktop or your responsive site
is going to be very different.
So I think it's going to be really
hard to do a responsive, like uber-optimized site,
that scales for both mobile and desktop.
You may be able to do well enough in both cases.
ANDY DAVIES: I think part of the question
is, we know it's 42% smaller now,
but how small could it be if it was catered just
for that device.
And it's a compromise.
Responsive design is about building
a site that works on as wide a variety of devices as possible
at an achievable cost and effort.
If you look at the studies, people
build really small mobile sites.
You can build a mobile site that's
tens of K, whereas how big is the Guardian site, Patrick?
PATRICK HAMMOND: 700K on the mobile site.
ANDY DAVIES: Yeah, so the 42% is still a huge chunk.
So it's a work-in-progress as to whether the responsive
argument has gone away.
STEVE: Luke, FT, you're in the same business, publishing.
LUKE BLANEY: Yeah, we still have the separate mobile site,
and we have a web app as well.
But I think that's more like an internal legacy sort of thing.
A lot of these things--
STEVE: It's an organizational constraint, not
a technology one.
LUKE BLANEY: Yeah, it's not a technical thing.
And on a lot of these things, yeah, if you're starting fresh,
you do it completely different.
But if you've got this big massive website that's already
there, just going from that to say,
we're going to snap our fingers and make it immediately
responsive, that's--
STEVE: A lot of inertia.
LUKE BLANEY: Yeah, there's a lot of that.
I think eventually, yeah, it would be great to get there.
And I think from a performance point of view as well.
And like, say if you're supporting
every individual browser, you can make something really
performant in one browser, and make
it work in that one use case really well.
The more and more things you support,
the more compromises you have to make.
That goes for just desktop projects,
for example, you can optimize to Chrome,
and say I'm going to make this work really well in Chrome,
and not care about IE.
But every time you support one extra thing,
you're going to have to make compromises.
They might be small, they might be big.
And the same goes for supporting both mobile and desktop.
ANDY DAVIES: I think it depends on
if it's good enough for your audience.
And it's the YouTube example of, when they shrunk YouTube,
they got new audiences.
It's whether the Guardian feel that 700K on mobile
fits their audience.
PAT MEENAN: Well, and I wouldn't necessarily
even look at the 700K number, right that's all in.
It's what does it take to deliver
your initial experience, the visual experience, right,
and focus on that.
If you can get that small enough on both the mobile
and the desktop sites with one delivery,
then you're in a lot better shape working with that.
STEVE: So we'll take a question from Guy in a minute.
But just, it comes back to, does it
end the performance arguments against responsive web design?
I think we're coming to the conclusion that that answer is
no, there's still some arguments for and against.
So Guy, you had a question?
GUY: Well, I wanted to comment on that.
I think it's just fundamentally harder
to make a responsive website faster.
That's just the reality.
So I just ran a test looking at the top 5,000 websites.
If you compare the responsive websites
against the m.dot sites among those top 5,000 sites,
the responsive websites are about three times bigger on mobile
than the m.dot sites.
It's just that m.dot naturally lends
itself to be more lightweight and fast, while in responsive,
you need to do a lot of work.
Possibly, eventually, you can get to the same performance,
but I think we're very far from the point where it's
just as easy.
PAT MEENAN: You can also, I mean it's
not unusual for the m.dots, especially the legacy ones,
not to be doing advertising tracking and all sorts
of other things that the business gets when they're
doing a responsive one, too.
So you sort of---
GUY: It's true, and correlation is not causation, and all that.
But the reality is, looking at it anecdotally,
for pretty much all of the newly launched websites, it's hard.
Most of them have done very little.
At least today, we've done a good job with images:
responsive images are being tackled much more
frequently, but still, at the end of the day,
there's just a lot more excess, as compared
to if you were to do something dedicated.
ANDY DAVIES: I think one of the interesting things
is, what do we need in the way of tools or in browser features
to be able to build sites in the same way the Guardian have
built their sites.
To make that easy for everybody, rather than
needing developers with the Guardian's skill set.
PAT MEENAN: And I think that's probably
the topic I'm going to touch on most through the day.
It is damn hard to build a fast site,
and we need it to be easy.
We need it to be the default case.
We need it to be, especially like with components,
you just drop them on and they're fast, right?
Not, you have to figure out how to cache in local storage,
or figure out how to plug in service workers
to cache it if you're offline, and not
to fetch it if they're not.
STEVE: We have two seconds left.
Andrew?
Quick comment?
ANDREW BATES: I think it's a really frustrating trade-off
because you have, on the one hand, as Luke says,
you can optimize one particular user agent very well.
As you try to introduce more and more,
I think it becomes more and more of a challenge,
and I think ultimately, that's not a scalable challenge.
Because, we'll talk later in Future Web
about things like wearables and non-conventional devices
and TVs and that kind of thing.
Is responsive web design going to deliver one single solution
to all of those things, and be performant as well?
I think that's probably very unlikely.
STEVE: We need to move on to the new topic.
The next question is actually from Andrew Bates.
That's called a seamless segue, people.
[LAUGHTER]
ANDREW BATES: OK, here we go.
So do concatenation and spriting become anti-patterns
with the advent of HTTP/2?
If so, when?
STEVE: Andy's got an entire Velocity presentation on this,
so I'll defer to you.
ANDY DAVIES: I think what we're aware of is what we're
doing when we concatenate and sprite stuff.
And we're merging resources together.
So we're merging, say, JavaScript together
that has potentially got different rates of change.
And we're making them more cacheable together.
So if we can split them out, then we
can cache them individually.
So hopefully they live longer in the cache.
When do they become anti-patterns?
It's an interesting...
I'm going to pass on that for now.
I'll come back to that.
Somebody else pick it up, and I'll--.
LUKE BLANEY: I think you could argue at the moment,
spriting already is an anti-pattern
in some circumstances.
If you're doing it wrong, forcing users
to download a whole sprite when they just need one icon,
that's just a waste of performance for everybody.
So doing it the wrong way can already be an anti-pattern.
And I think HTTP/2 just makes that more obvious
rather than completely changing the ball game.
STEVE: Peter?
PETER HEDENSKOG: It will be hard for us
as developers when we need to serve both HTTP/1.1 and HTTP/2.
So I mean, we need to find the best way to do it.
WESLEY HALES: Well we're kind of doing it now,
I mean, I know I am.
At least with this site.
It runs SPDY [INAUDIBLE] web, but I don't really
care about older browsers right now.
It's a side project.
So I mean, I could afford to--.
But I mean, the way developers are developing sites today,
you've got a lot of controllers or modules,
you have a lot of different JavaScript files
that you kind of divide out to organize your application
on the development side.
But it's a tough question to answer.
Like how do you support both SPDY today and the older
browsers that don't support it.
I mean, it's almost like, I don't know, why would you
not concatenate everything?
That's going to save your older--
But it doesn't matter on the newer browsers.
ANDY DAVIES: I think the challenge is actually
when you need it on the page.
In that, if you've got a web font that's referenced in CSS,
for example, the browser has to download the page,
so we don't know anything about the other resources
until we download the page.
Then we start parsing the CSS and parsing the DOM
and building a render tree, to decide we need a web font.
And then we have to download the web font
and wait for it to arrive.
And the question becomes, instead
of concatenating or in-lining stuff in CSS,
is, can we push those resources using HTTP/2.
To say, can we push the font object early
so the font gets there earlier, so we
can render the page more quickly.
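The push idea Andy describes could look something like this hypothetical nginx fragment (nginx added the `http2_push` directive in 1.13.9; the file paths here are invented):

```nginx
# Hypothetical: push the web font and CSS alongside the HTML,
# so the browser doesn't have to discover the font via the CSS first.
server {
    listen 443 ssl http2;

    location = /index.html {
        http2_push /fonts/headline.woff2;
        http2_push /css/main.css;
    }
}
```

The trade-off is that pushed resources may already be in the client's cache, so push needs to be applied selectively.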
PAT MEENAN: Well, and I don't think it's even just pushing.
I mean, the big problem we're going
to have is knowing when you have to support both, right?
But assuming you don't have to support both,
it's the granularity you get by not having
to concatenate all of that stuff, right?
You'll add all sorts of JavaScript
into your main JavaScript file because it's
needed on three pages or whatever, and all of a sudden,
you're bloating it, and when you break it out
and you don't start concatenating,
you can granularly just pull down
what these individual pages need.
On the browser side, the browsers
won't parse and evaluate the JavaScript until it's complete,
so breaking it down into little chunks is nice for the browser
as well.
Same thing goes for the sprite, you'll
have like three images on there that
are needed for some random page somewhere else,
and you need to rebuild the whole sprite any time
you change one of those.
LUKE BLANEY: I think one thing that
can help with this is rather than using a sprite
or whatever, is, if you have something
that your client-side application can understand
the individual parts, like on the FT web app.
What we do for downloading images
is we actually use JSON, and have all these JSON images.
Then, we can, the client-side code
can cache each of these images separately.
And it can handle them separately
if we need to clear the cache or anything like that.
We don't have to download the entire sprite again,
because it understands each of these individual resources.
It's only for the network bit that we're actually
concatenating them, and then we split them up again.
PAT MEENAN: Right.
But this is all JSON data URIs, local storage,
back to not making it easy for people to do the right thing.
STEVE: We had a question from the audience.
Jonathan Fielding?
JONATHAN FIELDING: With what you were just talking about,
would you say that with HTTP 1 and HTTP/2,
you will have to have different [? application ?] assets
and it will detect on the server side?
So that people who are already getting HTTP 1,
they get sprites still, and using the old anti-pattern's
[? ways, ?] and then with HTTP/2,
where gzip [INAUDIBLE] caching form the best it can?
By caching individual images?
STEVE: So effectively, you're saying,
it's a bit, the same argument.
You're going to end up with the m.dot version, which
is going to be the HTTP/2 version.
I'm going to have a load balancer that sends somebody
the HTTP/2 pool and somebody still in the HTTP 1.1 pool.
WESLEY HALES: I think you're not going
to be penalized at all if you continue
your old ways of development.
They are anti-patterns, and they do cause us,
as developers, more frustration, right?
Or more work, essentially.
But I don't think you're going to be,
actually I know you're not going to be penalized
for concatenating versus not concatenating on the HTTP/2
side.
ANDY DAVIES: Well, you get penalized in cache terms,
potentially.
LUKE BLANEY: But no more so than you get when there's a block.
WESLEY HALES: But SPDY pushes straight to the cache,
though, right?
I mean, it pushes straight--.
Yes, there will be, possibly a larger download, but I mean,
once SPDY, or HTTP/2, once it opens the connection,
it will push directly to the cache,
even if you have caching disabled in your browser.
STEVE: So during this, I think the issue, if I'm clear,
is, during this intervening period,
when you have to support both protocols,
are things going to be painful?
WESLEY HALES: Yeah, it's really, it's more for the developers,
I think.
It's more for us to have better workflows
and not have to jump through so many hoops,
and that's kind of what HTTP/2 will bring.
ANDY DAVIES: Automate it.
PAT MEENAN: I mean, you'll have to decide at some point.
Do you have server-side logic that detects and spits
out the HTML, because fundamentally it
needs to be in the HTML.
Differently for SPDY or HTTP/2 versus HTTP 1.
Or do you look at your traffic mix, and you go, OK,
now we've got 70% of our traffic coming in that's SPDY
or HTTP/2-capable.
It'll be slower for the older browsers, or the smaller group,
but it doesn't break.
WESLEY HALES: Right.
It's worth it on the business side.
PAT MEENAN: It's, you decide at what point do I cut over?
ANDY DAVIES: Or, you use something
like ModPageSpeed, that's protocol-aware.
So it will do the optimizations for HTTP 1
and it will do different optimizations for HTTP/2.
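The cut-over logic the panel describes could be sketched as a server-side branch on the negotiated protocol. This is a hypothetical sketch: the protocol strings follow ALPN/NPN conventions ("h2", "spdy/3.1"), and the asset names are invented:

```javascript
// Hypothetical: concatenated bundles for HTTP/1.x clients,
// granular files for SPDY / HTTP/2 clients.
function assetsFor(negotiatedProtocol) {
  const multiplexed = negotiatedProtocol === 'h2' || /^spdy/.test(negotiatedProtocol);
  if (multiplexed) {
    // Small, individually cacheable files; the protocol multiplexes them cheaply.
    return ['core.js', 'carousel.js', 'comments.js', 'base.css', 'article.css'];
  }
  // On HTTP/1.x, saving round trips matters more than cache granularity.
  return ['bundle.min.js', 'bundle.min.css'];
}

console.log(assetsFor('h2'));
console.log(assetsFor('http/1.1'));
```

In practice the branch would live in the templating layer, since the choice fundamentally has to be reflected in the HTML that gets served.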
WESLEY HALES: I know Google had a report out last November
about using SPDY on Maps and Drive,
and a few different properties.
And they observed like, around a 30% increase on all those,
so, I mean, at least we have some larger entities
leading the way there.
STEVE: We need to move to the next topic,
but just on the HTTP/2.
I mean, the timeline, the specification
is not even due to be ratified until November.
You've got to get all of the web server--
PAT MEENAN: Effectively, SPDY's out there, right?
STEVE: SPDY's out there now.
PAT MEENAN: IE 11, Chrome, Firefox, it's effectively,
all the capabilities are already out there.
So it's, what's your traffic mix,
what's your server-side support, and what's your--
STEVE: So if you're expecting HTTP/2
to answer all your questions, it's probably a way away,
but you can start playing with some of the ideas
and the technology with SPDY.
Next question.
Patrick Hammond.
PATRICK HAMMOND: Hi.
So [INAUDIBLE], as I said in a [INAUDIBLE] last year,
that we need to move past the on-page events and metri--
sorry, the on-load event and metric.
And a lot of us are moving enhancements
to past that load event as well.
What is the new golden metric, or is there one?
STEVE: Peter, I'll throw that one to you.
You make a tool which measures performance.
PETER HEDENSKOG: I mean, it is Speed Index
that we have in Web Page Test, but we
want to move it to the other tools
and be able to use it in RUM also.
I mean, we need to know when the content,
the above-the-fold content, is in the browser.
Or how do you guys say it?
PAT MEENAN: Yeah I mean, it's fundamentally,
if you own the site you're trying to measure,
instrument it.
You are the one who knows what you care about.
So put on load handlers for your above-the-fold images,
for example.
Tag your ads so you know when they load.
And then beacon all of that stuff back.
I mean, nothing is ever going to beat custom instrumentation.
Doing it generically, that's when
you start to get into difficult cases, right?
If you're a RUM service offering something to everyone
when their above-the-fold content is complete,
that's a much harder, currently unsolved problem.
Yeah, I mean, for synthetic, I like Speed Index, obviously.
But you really do need to move
beyond the technical point when everything finishes,
because there is so much stuff on pages these days that's
not user-visible.
All of the ads tracking, the analytics,
there's even your A/B testing platform, all sorts of stuff.
The single-page apps that scroll down forever.
Trying to figure out a generic complete load time
metric for pages is tough.
ANDY DAVIES: I try to encourage people
to target start render time.
So when the visitor actually starts
to see something in the browser.
My preferred option after that is Speed Index,
which measures when the viewport is complete, visually complete.
PAT MEENAN: And for what it's worth,
there is start render in RUM from IE and Chrome;
it's kind of buried in different places.
window.performance.timing.msFirstPaint is IE's.
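A hedged sketch of reading that field in RUM code (the helper name and the null fallback are mine; IE exposes msFirstPaint on `performance.timing` as an epoch timestamp):

```javascript
// Hypothetical helper: derive a start-render time, in ms, from IE's RUM field.
function startRenderMs(timing) {
  if (timing.msFirstPaint && timing.navigationStart) {
    return timing.msFirstPaint - timing.navigationStart;
  }
  return null; // this browser doesn't expose a first-paint field here
}

// Simulated timing object, since this sketch isn't running in IE:
console.log(startRenderMs({ navigationStart: 1400000000000, msFirstPaint: 1400000000850 }));
```

In a browser you would pass `window.performance.timing`, then beacon the result back.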
STEVE: But is it real, [INAUDIBLE]
if it has a wide screen?
PAT MEENAN: Right.
So you'll want to make sure-- it's real for some sites
and it's not real for others.
So you want to actually test the pages you're
looking at first, to see if it's actually
a useful metric for you, before you start
basing any decisions on it.
STEVE: So, quick audience poll.
Who realistically, in their websites,
are sort of using the Page.onload event
as their most common page load metric?
No, yes, no, not many?
ANDY DAVIES: The first question to start
with is who actually measures their page load times?
STEVE: OK, so only about half the audience is actually
measuring their page load times?
Who's actually doing custom instrumentation,
so they know exactly when they're above the fold?
OK, and who doesn't work for the Guardian and the FT?
So OK, Christian, Christopher Emery, JWT?
CHRISTOPHER EMERY: Yep.
My question was actually touching
on the meat-- We're going on about using
the page.onload event as the kind of de facto, that's
when the page is ready.
I mean, I work for an advertising company
and I've kind of come to peace with that,
but actually, it brings with it a lot of, actually, knowledge.
And actually, in advertising, they've
actually dealt with this problem about 10 years ago.
And actually, when you're building adverts, specifically
digital ones, you have polite load.
So the idea is the minimum viable content
that you need on the page to get it functional,
without distracting the user.
[INAUDIBLE] passive experiences, when
they dealt with [INAUDIBLE] page.
So the restrictions inherent in that platform
mean that we have to be as efficient as possible.
And actually, now, when we're doing a lot of sites, now,
we've done actually, some very effective responsive sites,
we take the same thing.
There's a polite load.
We actually defer everything else that we
need until about 3 seconds after the page load event.
The idea -- that's enough time for someone to click on a menu
button, and click deeper into the site, should they want to.
So we're not actually forcing them to wait.
And this is a strategy that we're using right now,
and it actually works quite effectively.
And it means that-- it kind of blurs the line-- like the page
isn't ready, that page.onload, but it's actually usable,
and actually, above the fold, everything looks the same.
As soon as the JavaScript fires, after 3 seconds,
we then hook into that, and then all the carousels
start working, all the interactive videos and things
like that fire up.
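The polite-load pattern Emery describes might be sketched like this (the function name and the injectable scheduler are hypothetical, chosen so the pattern can be exercised outside a browser):

```javascript
// Hypothetical "polite load": run non-essential enhancements ~3s after onload,
// leaving the window free for the user to navigate away first.
function politeLoad(enhancements, { schedule = setTimeout, delayMs = 3000 } = {}) {
  schedule(() => enhancements.forEach(enhance => enhance()), delayMs);
}

// In a page you would wire this to the load event, e.g.:
//   window.addEventListener('load', () => politeLoad([initCarousels, initVideos]));
politeLoad(
  [() => console.log('carousels ready'), () => console.log('videos ready')],
  { schedule: fn => fn(), delayMs: 0 } // run immediately for this demo
);
```

Above the fold, the page looks identical either way; the delay only affects when the interactive extras wake up.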
STEVE: Patrick, you got a comment on that?
PATRICK HAMMOND: Just, that's great, that's good to hear,
from an advertising company that's actually doing that.
UNKNOWN SPEAKER: Yeah, I was about to say.
STEVE: It means it's not going to be
easy to get out of the room alive.
PATRICK HAMMOND: On that note, something,
going back to measurement and metrics.
Something that we've been doing with adverts,
and now that Resource Priorities API and Timing APIs is here,
we're starting to have discussions
with our advertising suppliers.
That's so that they can set the Timing-Allow-Origin
flags, so that then we
will be able to beacon and measure.
And on our RUM [? drives ?] have the statistical data,
for the load time of our investment.
We'll know when things are going bad or good on that side.
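The cross-origin measurement being described relies on the third party (the ad server) sending a Timing-Allow-Origin header; a hypothetical nginx fragment, with an invented origin:

```nginx
# Hypothetical: allow the publisher's RUM code to read full
# Resource Timing details for ad resources served cross-origin.
location /ads/ {
    add_header Timing-Allow-Origin "https://www.example-publisher.com";
}
```

Without this header, the Resource Timing API hides most timing fields for cross-origin resources, so publishers can't see how their ad suppliers are performing.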
Anyway, it's very pleasing to hear advertising companies talk
about [INAUDIBLE].
STEVE: And contrary to sort of, popular opinion,
you generally do want to get your ads loaded early,
so your users will see them, click on them,
and you will make money.
So it's really important that you actually
know when your ads are loading, though.
And I've talked to several companies that just stopped
loading ads so that they'll get faster page load
times, and that's the wrong answer, right?
It's how do you get your ads, not competing
with your content, but both loading visually quickly
for the users, because at the end of the day,
you still want to make money.
UNKNOWN AUDIENCE #1: [INAUDIBLE] interested [INAUDIBLE]
get warning signs from companies like you,
but you are obviously not allowed to talk about it.
STEVE: We can talk a little bit, but, yeah.
UNKNOWN AUDIENCE #1: This is like the same problem
with Flash.
Flash solved other problems that HTML 5 has,
but the information never got out.
And we probably have a lot of performance stuff in the ad
space that was never talked about because
of competitive advantage over other app providers.
CHRISTOPHER EMERY: If I can just do a follow-on point?
What I mean, actually, the previous point
we were talking about, having a responsive, or actually,
a mobile versus a desktop site?
We're actually fine, because we're
building, kind of, campaign sites.
So we're doing it for brand, it's like 3 to 6 months,
it lives, and then it kind of dies.
A lot of those have
a lot of money thrown at them, both in media
spend targeted at them as well as, kind of, organic search.
We find it's kind of a tricky road.
We're actually finding a lot of times
that having a dedicated mobile site works.
Because at the end of the day, there's
like a path to purchase that we kind of want the user to do.
And the desktop might have that rich, immersive, kind of,
interactive visual video experience,
whatever that might be.
But actually, on the, when it comes to mobile,
the use case is different.
So, actually, we're finding a lot of times,
even though we push for responsive over time,
realistically mobile gives you that direct path to purchase.
There's ad dollars behind it, you've
got to get the customer's product sold at the end of it.
STEVE: So to summarize: is there a new golden metric?
No, it's the one you roll yourself using User Timing
and the Navigation Timing API.
UNKNOWN SPEAKER: And beacon it back somewhere
where you actually look at it.
Don't just instrument the page.
STEVE: A bit like Bitcoin mining; you've
got to mine your own gold.
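Mining your own gold with the User Timing API, as the panel suggests, might look something like this — the metric name and the `/rum` endpoint are illustrative assumptions, not anything specified in the discussion:

```javascript
// A minimal roll-your-own metric using the User Timing API.
// The metric name and '/rum' endpoint are illustrative assumptions,
// not anything the panel specified.
function markStart(name) {
  performance.mark(name + ':start');
}

function markEnd(name) {
  performance.mark(name + ':end');
  performance.measure(name, name + ':start', name + ':end');
  const entry = performance.getEntriesByName(name, 'measure').pop();
  // Beacon it back somewhere you will actually look at it --
  // don't just instrument the page.
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/rum', JSON.stringify({
      name: name,
      duration: entry.duration,
    }));
  }
  return entry.duration;
}

// Usage: wrap whatever moment matters to your business, e.g.
// markStart('article-visible'); ... markEnd('article-visible');
```

Where `sendBeacon` isn't available, a tracking-pixel or XHR request does the same job.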
Right.
The next question is from Patrick Hammond.
PATRICK HAMMOND: It's me again.
It's actually, this is a very similar question,
but it's in a different light.
So how, we're now very well-equipped to measure
our initial page load performance
with great tools like Web Page Test and things like RUM.
But yet, we're seeing a rise in large-scale, long-living,
single-page applications.
So do we need new tools to measure these,
and new metrics and new visualizations,
for the long-living applications that we're
seeing on the web today?
STEVE: So performance measurement
in single-page apps.
User timing.
PAT MEENAN: Well,
I mean, it's back to: instrument
it and figure it out.
I mean, you've got-- hopefully the browsers are giving you
all of the primitives you need to understand what's going on,
especially in the single-page apps.
As you swap in content, as you scroll down and do stuff.
If you're not getting the primitives you need,
I know we were talking, like at Velocity Summit a few weeks
ago, or months ago at this point,
about possibly adding load event timing handlers or first paint
timing handlers, to images and stuff like that.
To all elements, so that you can get the primitives.
But it really does come down to, you
know what you want to do with your app.
Time it, beacon it back, and if you can't get at what you need,
let us know.
WESLEY HALES: I think it becomes more of a rendering thing, too,
right?
I mean, your page is already set up,
so it's about image decoding, and making AJAX requests,
and getting the images and decoding those.
And any other transforms you're doing to the page,
and that's really where it comes into Jank
and other kinds of measurements as well.
STEVE: It looks like there's probably
some APM vendors in the house that
would have a conversation about-- if you've
got a single-page app that's doing lots of AJAX requests
to the back end, that measurement is also going
to become more inherently related to the measurement
of the round-trip time for those back-end AJAX requests.
And you've got to start tying those two things tighter
together for those single-page apps.
LUKE BLANEY: I think it also depends--
if you're waiting on all those AJAX requests, like in our web
app.
This is because we've gone for — we
want something that works offline.
We try and make it when you click stuff,
that you're never waiting for an AJAX request, right?
We want all the content to be there upfront and in advance.
So it really depends on your site,
whether you're doing these click-and-wait for the content
to come, or whether you're doing something
else in the background.
I think that affects what you need to measure as well.
WESLEY HALES: So, you guys are putting everything
in local storage, right?
LUKE BLANEY: Local storage and WebSQL or SP
depending on the right [INAUDIBLE].
WESLEY HALES: So what is like, your limit?
Is there a limit on the content you download, like,
how do you--?
LUKE BLANEY: We actually have, like, it's a bit complicated.
There's different modes, and like, when you first
load, we try and just get the basics.
So just the article content, we don't
bother with all of the images and stuff.
Because, well, particularly old ones.
The user has to accept a prompt to allow, like, 50MB of storage
or whatever, and then we'll go and start
doing all the images and stuff.
But we try to keep all that in the background where possible.
And it should, we try not to get in the way of the user.
They should just be able to navigate the app and not care.
WESLEY HALES: So it comes down to rendering at that point.
ANDY DAVIES: But are you just generating another problem?
When they run out of local storage and they
have to go to actually start managing local storage
themselves.
STEVE: So if we come back to the question about do
we need new tools, new metrics, and new visualizations.
You're saying if a single-page app measuring
the rendering of that thing that I've done becomes critical,
do we have the tools?
I mean, Peter, you make tools.
Do we have the tools to measure that right now?
And I think the answer is probably not.
PAT MEENAN: Actually we do, but they're manual.
I mean, you have the timeline and dev
tools of rendering and painting events,
but RequestAnimationFrame is probably your best friend.
But back to, it's not easy.
I mean, figuring out if your single-page is behaving
smoothly on the client side in RUM
by hooking together a bunch of [? RAF ?] calls,
and looking for jank and that kind of stuff.
It's doable, but it's complicated.
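A rough sketch of what Pat describes — chaining `requestAnimationFrame` calls and flagging frames that blow the budget. The 50ms threshold is an assumed cutoff, not a standard, and the analysis is a pure function so the browser hookup stays separate:

```javascript
// Sketch of RAF-based smoothness measurement: collect frame
// timestamps and count gaps that exceed a frame budget. The 50ms
// budget is an assumed threshold, not a standard.
function countLongFrames(timestamps, budgetMs = 50) {
  let count = 0;
  for (let i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] > budgetMs) count++;
  }
  return count;
}

// Browser-side hookup: chain requestAnimationFrame calls, then
// periodically beacon countLongFrames(frames) back as RUM data.
if (typeof requestAnimationFrame === 'function') {
  const frames = [];
  const tick = (ts) => {
    frames.push(ts);
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```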
STEVE: The answer to that is yes, we probably
do need new tools and metrics.
LUKE BLANEY: I think, you get to a point where,
to be honest, sometimes it's best just having
a manual tester using it and going, that's a bit sluggish.
There's some things that-- [INAUDIBLE]
it's not the answer that you necessarily want,
but having somebody--.
WESLEY HALES: But then you are limited
by CPU and whatever your system is running.
LUKE BLANEY: Yeah, but that's what
the user is going to do at the end of the day.
That's [INAUDIBLE]
STEVE: We've got a hand up over here.
UNKNOWN AUDIENCE #2: [INAUDIBLE] images?
Is it worth having maybe, an attribute
for a tag which would be like, above the fold,
or which would basically give the browser a hint that this
is your main content that you want to prioritize.
Or maybe, below the fold, you'd have a lazyload attribute.
UNKNOWN SPEAKER: There's a combination--
LUKE BLANEY: It depends on the browser
what's going to be above the fold.
You don't know, whenever you're sending [INAUDIBLE] stuff down,
like, where is that fold going to be?
WESLEY HALES: There is no fold in the browser.
UNKNOWN AUDIENCE #2: Say, like on BBC,
you'd have all your main images for your stories,
and then you've got your sub-stories.
So, you basically want to prioritize [INAUDIBLE].
PAT MEENAN: Yeah, so that's not IE-specific.
IE was the first one to implement it.
It was a W3C spec for lazy load and, I think,
postpone is the other one.
Or, I'll probably get the naming wrong,
but there are sort of two of them.
One is, don't load this until later,
and one is, I care less about this.
And it's sort of the opposite of saying, this is important.
But you'd have to tag all of your content.
And it's kind of an attempt to eliminate the JS lazy-load
implementations for images while still letting the browser
know about all the content and giving it priority hints.
And so the above-the-fold, important stuff
is basically the stuff on your page
that doesn't have lazyload on it.
But you should start seeing it come out to the other browsers
as part of the same group that did the performance specs.
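As a hedged sketch, feature-detecting the two attributes Pat mentions might look like this — attribute names per the W3C Resource Priorities draft, with IE the first implementation at the time:

```javascript
// Hypothetical feature detection for the Resource Priorities
// attributes (lazyload/postpone, per the W3C draft spec).
// Returns null when not running in a browser.
function resourcePriorityHints() {
  if (typeof document === 'undefined') return null;
  const img = document.createElement('img');
  return {
    lazyload: 'lazyload' in img, // defer until more important work is done
    postpone: 'postpone' in img, // don't load until visible
  };
}
```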
STEVE: Got about a minute left.
I'm going to take one last question
from [INAUDIBLE] in the back there.
If you can run the mike, or, yell very loudly, all right?
UNKNOWN SPEAKER: Yeah well for the video.
UNKNOWN AUDIENCE #3: So, about the single-page app topic,
do we need new tools and new visualizations?
Tooling-wise, so there's a couple of RUM tools
we worked into this.
There are commercial ones, [INAUDIBLE]
but [INAUDIBLE] the frameworks, and I
think the story is a two-fold one.
First of all, we need definitely support from framework vendors,
because that's how we made it possible.
We have to interact with them.
Because a lot of their basic framework logic we have to follow.
Just like, typing in a keyword that goes back with an XHR
to the server, updates a table and then renders it
on the page.
It comes down to framework instrumentation
that you would have to do, or frameworks
would build this instrumentation right in.
That's the part that definitely sits,
I think, with the framework providers.
And they have to do it because they are the ones who know best
how the frameworks actually work.
The other piece to it is what we
are really missing: with this framework
instrumentation, [INAUDIBLE] to the point,
like, we updated the DOM this way — knowing
when certain elements have been painted on the screen.
That would then be something I'd like to see in the browser.
The question is where we fit this instrumentation
of frameworks: on the one hand, frameworks
should build it in; for those that have not built it in,
you would also have to have some means
to instrument the browser at runtime,
because that's what you have to do today.
Because RUM tools are often not able to instrument it
while it disseminates through the browser, [INAUDIBLE]
and re-processing it.
But it's a twofold story, a framework
story to a great extent.
And there are two solutions to this,
and it's also a story, really, when
it comes to rendering, that really ties down
to the browser.
STEVE: Got to move on to the next topic,
but just to summarize that, I think
there was a good point made there.
The sort of, emphasis, may be shifting to the framework
vendors to really help people understand
the performance and the timing instrumentation
within that framework.
Plus, there's also the point you're
making about the browser vendors.
But we've got to move on.
Rich Howard?
RICH HOWARD: So my question to both the panel
and to the audience is, what role
will automated front-end optimization tools
play in our increasingly complex world?
Do they add yet another complicated layer
of abstraction or will they become a necessity?
STEVE: So FEO tools like, mod_pagespeed
or Riverbed Stingray Aptimizer.
You're a fan of Varnish, that's a reverse proxy
type [INAUDIBLE].
LUKE BLANEY: I think, I would classify Varnish
as different than those sorts of things,
because Varnish behaves like an HTTP proxy.
By default it'll follow HTTP headers
and it won't do anything weird.
With a lot of these other ones, I get very
hesitant about using things that are just
going to do magic in front of my code, right?
WESLEY HALES: I think it's such a large education
process for front-end developers as well.
I mean, especially those that might just
be entry-level to mid-level.
It's, a lot of companies are trying
to take care of things like performance and security
with appliances.
And plug-ins and things that go along with the web server.
So I don't know if it's needed or not.
I mean I don't know if web developers should just
know the rules automatically, or if they need the help to--
STEVE: Peter, first?
PETER HEDENSKOG: Yeah, I mean, as a developer,
I want to know what's happening, so I
don't like these kinds of tools, because I don't know,
like I say, the magic things about it.
It makes me insecure, actually.
STEVE: But as an operations manager,
who's been waiting for developers to speed up
the website for ages.
If I can sling $50,000 at Riverbed,
and it makes my site faster, that's
the cost of one developer, well, why wouldn't I do that?
Give Andy [INAUDIBLE] a [? comment. ?] [INAUDIBLE]
ANDY DAVIES: We're trying to optimize
for different browsers that behave in different ways.
Different devices over different network conditions.
I think for a certain number of cust-- people,
I think it's the only way to go.
I think, because we're trading off
the cost of employing developers who may not
be doing a great job anyway, because there
are some great developers in the world
and there are some very average developers.
And we see it in the way that pages behave.
And it's, if I can deploy an automated device that
will speed up that site and give those visitors a better
experience, then why not?
STEVE: I'll take a point from Perry,
and can somebody refresh Onslyde,
because I think it's crashed.
[AUDIENCE]: No.
We were so close.
[LAUGHTER]
STEVE: So, Perry, stand up and wait for the mike.
Let's go.
PERRY: So all I was going to say is
I think I agree with Steve and with Andy,
because I think we're getting asked as developers, ops,
webops, whatever we do.
We're getting asked to do more and more and more with less
and less and less time to do it.
And it's actually the business that's going to dictate this,
I think.
And I think these kind of automated tools and devices
are going to be the only way that we're
going to be able to keep up.
So that's going to be HTTP/2 versus HTTP/1.1.
We're already struggling, with just
dealing with what we have to deal with now.
It's only going to get worse.
We're talking about later on, there's
going to be new devices, wearable technologies, TVs
to concern ourselves with.
I just think we're going to have to do it.
WESLEY HALES: And for those of us
that are not in the performance consulting ring, I guess,
it's hard to get time in companies
to work on performance to make your pages faster.
It's hard to sell a lot of bosses
on doing that as developers who work on an application,
or a site that's driven by whatever advertising
dollars, whatever you want to call it.
Someone else's budget, basically, and they
don't want to allot time for that performance boost.
I can speak to that personally.
PAT MEENAN: And I think there's some class of optimizations
that you're going to start getting
more comfortable with handing off.
The one that comes to mind, in particular,
is image transcoding and supporting WebP.
There's a whole bunch of things that you're not
going to want to have to rebuild, an image server,
and you'll start pushing off to appliances or a service
to do for you.
Because you're not going to want to maintain libraries
of 10 different image formats as we
get, like, WebP, JPEG XR, and whatever
else comes down the pike.
LUKE BLANEY: I think there's a difference between making
a conscious decision as a developer, saying,
I want to farm this off to somebody else who's going
to look after my images, and having just some appliance
stuck in front of all your code that does whatever it wants
because it thinks it's better.
We had problems with mobile operators doing
this sort of thing, where they're going,
like, oh, that bit of JavaScript,
we can optimize that for you.
STEVE: The person who asked this question works for Vodafone.
LUKE BLANEY: Well, in the early days
of the web, that was one of the things that caught us
off guard.
Because operators think that they know things better than
you.
But if you're a developer who knows what they're doing,
sometimes you know better than the appliance,
and the appliance just gets in your way.
STEVE: I think, will they become a necessity in our increasingly
complex world, I guess the question is,
do you see the world-- we've talked about components,
we've talked about HTTP/2.
I don't see the world getting less complex,
so there's only so much complexity
that you can deal with as a developer,
surely, before somebody's going to say, let's hand
this off to an automated process that does it better.
I'm sure Guy would say that Akamai,
the next-generation CDNs--
GUY: Yes, so I'm totally biased, here, I
work for Akamai [INAUDIBLE], but I do think that, two things.
One, as opposed to the carrier proxies,
the intent here is to be something
that's an extension of your platform.
So it's to save you time.
It operates based on your instructions.
Granted, instead of writing code, you're checking a box.
But you're tuning it and you're configuring it to your needs.
So, not quite as simple as a Varnish HTTP configuration,
but not that much different from Varnish ESI,
or Varnish's kind of elaborate caching policies.
So, that's just sort of a general thought.
And yeah, and I think the point I come back
to every time we talk to users of this,
or people considering using it, is
if you can automate it, why do it manually?
So the exact range of whether the tools were good enough
or not good enough — what is your level of comfort?
That's something that as an industry
we still need to evolve and improve.
But I think at the end of the day
if you have a way to do it automatically
and it encompasses knowledge and takes away complexity,
the only reason not to do so is that you're not
used to it, right, and you can overcome that.
STEVE: 30 seconds if you want it.
CHRISTOPHER EMERY: I just want to add on
since I know what you're saying.
It's great to [INAUDIBLE] yourself
to try and act as an extension of the platform.
But actually, in my experience with the kind of clients
I'm dealing with, our MSAs don't actually
give us free rein on our platform.
We have to play by their rules.
So a lot of times, what is in our control
are things like the build tools, optimizing things down
on our end with the [INAUDIBLE] before we get onto that
platform.
So a lot of times, like I say, sticking to the basics
effectively is often the best approach.
STEVE: Go over here.
One last, very quick question.
Or very quick comment.
UNKNOWN AUDIENCE #4: I'm interested in the hardware
side of this, because I work for a hardware company.
For me, load time is important, yes,
you need to get stuff down as quickly as possible,
but what's more important is how quickly
the user can see this stuff.
And that goes through the hardware.
You were discussing stuff like, batching is bad,
but do you consider the hardware underlying as well.
Batching is really good for the GPU.
PAT MEENAN: So batching — a GPU doesn't
know how to deal with a sprite, though, right?
As far as the browser's concerned,
the GPU deals with the sprite in a whole bunch
of different places and clips it,
and doesn't deal with it any better than individual images.
STEVE: So I'm going to have, I'm going
to have to cut that one off, and you
can take it offline, because we've
got about five minutes left and we still have a question to go.
From [? Paul ?] Lewis, which actually ties in very neatly
with something that was just mentioned, actually.
PAUL LEWIS: My question got tweaked.
Which is the topic of the day, so I
thought I'd fight fire with fire and tweak it back.
How should teams balance branding and personality
against performance.
I guess I have web fonts and images and such in mind,
and how can they meaningfully measure
the benefits of branding versus the benefit of speed?
And I hashtagged #perfmatters on that.
STEVE: Hashtag #perfmatters, always good.
I mean, you started to talk about this in the answer
to the previous question.
It comes up again, do you think — the designers want
their cool web fonts; they don't necessarily want performance.
WESLEY HALES: Right, it's the same argument
as the mobile versus desktop side, I guess.
I mean, you have to measure, the business knows what it wants.
Whether it's a brochure site, or whether it doesn't want
its users to drop off because the page load time is too high.
Or whether there's a lot of jank going on in the page,
because they can't scroll down the page and their user base
drops because of that.
So it's about, do you want better performance
or do you want a better-looking site?
I guess there's a balance somewhere in between there,
but right now, you really have to focus on one or the other,
I think.
STEVE: So who's had this conversation
with their business?
So who wants to stand up and reach for the microphone
to say, we've had this conversation with our business.
And this is how we tackled it.
Oh, Patrick, go on, you know you want to, Or next, go on.
Tag team it.
UNKNOWN AUDIENCE #5: Yeah, we've had this conversation
quite a lot, actually.
STEVE: So what organization?
UNKNOWN AUDIENCE #5: The Guardian, again.
So designers like their beautiful web fonts,
but mobile users don't like to wait for the text to show up.
So what we've found is that maybe we
could find a compromise.
Maybe you can have the really, really iconic fonts in there
on mobile, but the other fonts, maybe you
can load them only on desktops.
So that's the kind of compromise we came up with.
ANDY DAVIES: Can I add a supplementary point?
Yeah, because the other thing lots of people
do — we talk about fonts, and how many people
just chuck Open Sans or another font on the page
without considering all the font glyphs in it?
So even using web fonts,
there are optimization options:
take out the glyphs we're not using.
Take Open Sans off Google —
if you start stripping it down to just
the font glyphs you need for English,
it ends up being a 60% reduction in size.
So you end up with a much smaller font
that's quicker to download,
that you can embed as a data URI,
so you avoid the round trip.
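The subset-and-inline step Andy describes could be sketched in a Node build script like this — the function name and MIME type are illustrative assumptions, and the font bytes would come from your own subsetting tool:

```javascript
// Build-step sketch (Node): embed a subset font as a data URI so the
// CSS avoids an extra round trip. 'font/woff' is an assumed MIME
// type; older servers used application/font-woff.
function toDataUri(fontBytes, mime = 'font/woff') {
  const b64 = Buffer.from(fontBytes).toString('base64');
  return `data:${mime};base64,${b64}`;
}

// The result gets dropped into an @font-face src: url(...) in your CSS.
```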
It's a choice.
For some brands — as a brand, measure
what impact performance is having,
and test whether you can take stuff out of the page
and still keep your brand quality, and whether that
has an impact on visits or behavior.
STEVE: I think maybe the panel is still
missing the question, which is, how do you get your business
to engage in this conversation?
And I think the answer has to be,
you've got to be able to measure the performance
and tie the performance back to the measurement of your site.
Perry gave a presentation at Velocity this year,
where he gave some great examples.
Where they were tying it into the analytics,
they tied it into their Adobe Omniture,
so the marketing people could see
very clearly that conversion rate changed at this point
in time.
So step one is, measure your performance.
Step two, measure the performance with money.
Step three, start doing A/B testing of slow
versus fast, or pretty versus not-so-pretty.
PAT MEENAN: And the key thing there
is to make sure your performance data is
in with the business metrics.
Having them completely separate,
where you can't, sort of, correlate
the two with each other,
becomes a really big problem.
It becomes difficult to say the extra two seconds
is costing you 10,000 users a day, kind of thing.
But if you want to have a conversation,
just strip out all the web fonts.
They'll come find you.
UNKNOWN SPEAKER: If you're trying
to figure out how to start the conversation.
CHRISTOPHER EMERY: I just want to say, with regards to this.
I've actually had this exact conversation with our clients.
And because the type of company I'm in,
we actually have a lot of media money
in terms of TV ads pointing at web properties.
And we actually find that a lot of the conversation
about performance is actually being talked about
by the clients, actually out of fear.
And it's actually two sides of the coin.
We're talking page load performance.
And if the server can't handle the load coming in,
it's pointless about fonts and stuff like that.
So, a lot of our conversations actually start off with,
OK, well, what's the media buy?
OK, we are expecting-- it's going
to be on how many TV channels?
Stuff like that, who are they driving to,
we have to keep the server up.
So there's that part of it.
And then on top of that, it becomes
a path to purchase with regards to buying a product;
then it becomes, OK, well, how fast can they do that?
Not so much if there are different parts
of the site that load a little bit slower — that's OK.
It depends on that critical path, in terms
of how fast they can get to, kind of, click the buy button.
So in terms of the really big brands are talking performance,
but it kind of comes from a fear factor
that they don't want the site to go down,
they don't want to be a laughingstock.
And they don't want to lose their, kind of, value.
It was the performance metrics:
how many visitors versus how many people purchase.
STEVE: We're going to have to wrap it up there,
I'm sorry not to be able to get to your question.
But we've actually reached the end of the session.
I think the message there is, play the fear, uncertainty,
and doubt card.
And your site will crash if you put all this stuff on it.
Which is not an argument I like from an operations
point of view, because it's my fault if the site goes down.
But anyway, thanks very much, guys.
Thanks to everybody on the panel for contributing.
Thanks, great contributions from the floor.
And, yeah, well done.