GUY PODJARNY: Hi.
We've got Ilya Grigorik, who's in developer
relations on Chrome, and who has specifically promoted WebP
and, in general, many, many different performance topics.
So I'll let Kornel introduce the topic here.
KORNEL LESINSKI: Hello everyone.
I'm going to start with an obligatory HTTP Archive
statistic.
Since 2010 the amount of bytes on a typical website used
for images has tripled.
And it'll probably continue to grow
as larger and larger screens become more affordable.
However, the situation is not as terrible as it seems,
because average connection speed is also
growing all across the board.
In the last three years broadband speed
in the United States has doubled.
And in China it has more than tripled.
So on one hand if you do absolutely nothing
with your images they'll load faster by 20, 30% every year,
but on the other hand we'll be using up all the bandwidth
that we possibly can.
Now looking at it the other way, where
are those bytes coming from?
It's probably Save for Web.
Adobe nailed this interface 15 years ago.
And it's probably still the most popular tool.
However, this is a manual workflow.
It requires authors to know specifics
of image formats, which are the best,
and tune all the settings.
And as we get new image formats, new optimizations,
the workflow becomes even more tedious and complex.
Fortunately, new tools are coming.
For example, Adobe Generator can automatically
export all the layers of a Photoshop file
and optimize them using the latest tools.
There are also tools like mod_pagespeed or Akamai's
front-end optimization proxy that will automatically
compress images, optimized for each browser specifically.
We have more new stuff.
With the picture element, now in the latest browsers,
we can adapt images to screen depth, screen size, aspect
ratio, and we can also use new formats
with a graceful fallback.
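As a concrete sketch of that format fallback (the file names here are invented for illustration, not from the talk), a small helper that emits picture markup offering WebP with a JPEG fallback could look like this:

```python
def picture_markup(basename, alt):
    """Build <picture> markup that offers a WebP source with a JPEG
    fallback.  Browsers that understand image/webp pick the <source>;
    older browsers fall through to the plain <img>.  File names are
    illustrative only."""
    return (
        "<picture>"
        f'<source srcset="{basename}.webp" type="image/webp">'
        f'<img src="{basename}.jpg" alt="{alt}">'
        "</picture>"
    )

html = picture_markup("hero", "A hero image")
```

The type attribute is what lets a browser skip a format it cannot decode without downloading it first.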
HTTP/2 is going to make delivery of images
really, really interesting.
With HTTP/1, browsers have to delay image requests
to let CSS and JavaScript load first.
However, in HTTP/2 the server has control over this
and can send the first 5 or 10% of an image file, which
contains the image size and the first progressive layer,
then let CSS load, then send the rest of the data.
This way the browser, when it does the very first
layout, the very first paint of the page,
can already put in some rough version of the images.
Next, new formats.
Take PNG: it's been 17 years and GIF is still doing great.
Why are we so bad at getting rid of old formats?
PNG with GIF-like transparency has worked
since IE4-- that would be like 147 Chrome versions ago.
And you might be thinking, oh, those are probably animated
GIFs, but, no.
With an average size of just seven kilobytes,
all those GIFs, like a quarter of images on the web,
could be replaced with a better format already today
or 17 years ago.
Now the really new formats-- it's an interesting situation.
There's WebP, which is a very clever hack--
it's like one frame of a video adapted into an image format,
so it's very good at saving low quality images.
Google is promoting the format very heavily
and is [INAUDIBLE] this.
However, other vendors are still unconvinced.
Microsoft developed the JPEG XR, extended range, format,
which is aiming to be a very high quality
format that could potentially replace
the raw formats in digital cameras.
However, for the use cases on the web
the compression is not that impressive.
And Apple has been supporting JPEG 2000 for a while.
The format itself is as old as it seems.
And yet it failed to get significant traction.
Arguments against JPEG 2000 are the same
as arguments against the other formats.
That is, anything younger than 20 years
could be subject to submarine patents.
And the large corporations that are already
harassed by patent trolls don't want to take extra risk.
Newer formats are more computationally complex,
so could be slower to decode.
And, finally, it's a matter of opinion
whether the gain of 20 or 30% of the file size
is worth the pain of adopting the new format in all
the browsers, image editing tools, image viewers,
native apps, and everything else.
Mozilla has studied some of the new formats
and they've concluded it's not worth it.
They decided to stick with JPEG, and improve
the compression of JPEG instead.
So the mozjpeg encoder has been released.
It narrows the compression gap with newer formats a little bit.
However, because of backwards compatibility,
it cannot add an alpha channel.
It works with all browsers, but it's limited to what
JPEG can already do.
So for an alpha channel we have to resort to hacks
like lossy PNG encoders or masking with SVG,
unless we have a newer format.
And there are even newer formats, a generation
ahead, that may or may not arrive in the future.
So H.264 is the most popular video codec on the web today.
And its successor could potentially
be used as a static image format.
In tests it looks really good.
However, it's a non-free codec.
It's patented, so it's a big problem
for open source software.
The VPx family of codecs is being developed and extended,
and it's looking really good.
And there's an experimental new codec from Xiph and Mozilla
called Daala.
But it's still too early to say whether this will be successful
or not.
And a surprise.
JPEG extensions are being worked on.
This is a way of adding new features
to the old JPEG in a way that's hidden from old decoders.
So an old browser will see a boring old JPEG.
But new browsers that adopt the extensions
could support JPEG with an alpha channel, JPEG with better
dynamic range, and all the new features.
And this work is ongoing and could
be finished within a year.
So that's where we are currently.
GUY PODJARNY: Thanks for that.
So I guess kicking off that, the first question deals exactly
with these new image formats, and one that's constantly
on my mind, which is, what's the end game with these new image
formats?
Does one of these formats prevail
and we all agree that it's better than the others?
Do all browsers support all the formats?
Or do we just need to cope, to deal with it,
and learn to live with this fragmentation?
I guess, maybe, Ilya, I'll start with you.
I bug you about this question every now and then.
Should Chrome support JPEG XR and IE support WebP?
And have everybody do everything?
ILYA GRIGORIK: Ideally everybody would
implement everybody else's formats.
There are a lot of gotchas with doing that.
There are technical reasons, there are
political reasons, anything to do with patents.
It's a very complicated subject.
I think we need to get away from trying to design one image
format that will rule them all.
And instead accept that there is experimentation.
There will be different formats to back.
Why is it so hard to develop a new format?
Why did it take us 10 years to deploy PNGs on the web?
That was a very painful process.
So what do we need to fix in the ecosystem, in the platform,
in the browsers, and everywhere else
to enable this sort of thing?
So there are gotchas with deploying new formats,
like more fragmentation.
Developers have to deal with all of these extra settings.
I think a lot of that can be automated.
So I don't think we're going to see a future where everybody's
going to support everybody else's format.
GUY PODJARNY: So you think, basically,
we have to live with that fragmentation?
ILYA GRIGORIK: Yeah, I think we need to remove some barriers.
Today it's just very hard.
I think we're going to get to some of the discussion
around saving different formats, and what
happens when I can do something in browser A,
but I can't do it in browser B, and my operating system
can't preview it.
So we just need to work out those kinks,
but I think once that's fixed, it just becomes much easier.
GUY PODJARNY: And Kornel, do you think
that when you look out into the future,
one of these backward compatible image formats is the way to go?
KORNEL LESINSKI: I don't know the future,
but looking at the web's past, you
can see that the web resists
backwards incompatible changes.
So we had XHTML, but that didn't work.
Now we have HTML5, which is built
on backwards compatibility.
XForms didn't catch on.
We had JPEG 2000 for a while.
That didn't catch on.
And GIF is still alive.
We have video in the browser that's
15 times better than GIF and hardware accelerated,
but animated GIFs are still everywhere,
because they work everywhere.
So I think the existing formats, even though they're technically
not the best, they have a huge advantage because
of the network effect.
Because JPEG works everywhere.
We don't even realize how much pain there
is: it has to be supported not only by the top four browsers,
it has to be supported by your desktop operating system,
by your mobile app, by your Twitter client,
by the website where you upload your avatar.
Those decoders have to be everywhere.
GUY PODJARNY: So what's the solution for that?
So we have that problem-- and I guess Ilya you touched on this
as well-- do you think-- is it picture element,
and then kind of smarter--
YOAV WEISS: I think that the picture type switching use case
is part of the solution.
Another part of that would be adding Key header support
for the Accept header.
Assuming that all browsers properly advertise their newer
formats in their Accept headers,
you could actually cache it better using the Key header.
And so yeah.
Hope for the best.
Hope that everyone will converge.
And plan for the worst.
Plan for fragmentation.
GUY PODJARNY: So I think we're moving toward this as
if we need to live with it, because Chrome will always
try to out-innovate IE, and vice versa.
And generally competition is good,
and it's moving us along.
Do you think that-- is that realistic for somebody
building a website?
I mean, it's all good and well on the technology front, but--
ANN ROBSON: Yeah.
I'd like to speak to the question, which is,
will we have all these different image formats,
or will one prevail?
And I think, I mean, I agree that we
should be open to new formats and support them and new ideas.
But Kornel's right.
We have these image formats and they're like cockroaches.
You know GIFs, animated GIFs, aren't going away.
And I think that we discovered in the last few years
that we haven't actually appreciated
what we've had quite enough.
And that JPEG is actually a really good image format.
And there's a lot we can do with JPEG.
So I feel like we should probably respect it a bit more.
It's a 20-year-old format, but-- what is it Kornel?
I think somebody was calling it the alien
from the future, or something?
KORNEL LESINSKI: Alien technology from the future.
Yes.
ANN ROBSON: Yeah.
KORNEL LESINSKI: That quote is in the context
of how hard it is to beat JPEG, even though it's
an old format designed for computers
that had 25 megahertz CPUs.
Now we have codecs designed for computers with gigahertz CPUs,
and we're not beating JPEG by a large margin, maybe 20, 30%,
maybe 40% in very expensive, experimental codecs.
But we don't have a format that is 10 times better.
So for its age, JPEG really nailed compression.
GUY PODJARNY: Can we get a mic to Jack?
ANN ROBSON: And I think that's really important.
Because we have JPEG and we can improve it.
So I think that-- what is it?
mozjpeg is there.
Mozilla is actually working on improving the JPEG encoding.
And I don't think anybody has done this for a long time.
We have just had JPEG and we haven't
thought about improving what we have and what already works.
So that's something that we should focus on.
I do want to have all these good ideas,
and have people creating new image formats,
and try to support them, and figure out
ways we can support them.
But at the same time, we want to move forward
and we haven't moved forward with image formats very much.
So I feel like we should think about what's important to us,
and agree on that, and move forward.
And I think that-- sorry for taking so much time-- I
think that one of the things that's important to me
is progressive scans.
I think that the image format that we invest in,
that we spend our time talking about, and implementing,
and serving, should be one that supports progressive scans.
GUY PODJARNY: If you can stand up for the AV.
AUDIENCE: So speaking from a browser point of view,
I agree with Ilya, that we're never
going to see full support across all the browsers for all
these different formats.
But I don't see why they really need
to if we can somehow figure out a way for a new element,
or something like that, to basically-- you just say,
here's the resource that I want, regardless of some extension
on the end of it.
And just have the browser send along its accepted abilities,
of what it can have returned to it.
Is that something that is on the horizon?
Or is that something that anybody's
had interest in doing?
YOAV WEISS: It's already here.
ILYA GRIGORIK: Yeah, so that exists.
That's content negotiation.
And you send an Accept header that says,
I support these formats.
And the server then picks the right format,
the optimal format, perhaps, for that particular client.
So that works today.
You can--
AUDIENCE: Is it for a specific content [INAUDIBLE]?
Like I could get WebP versus JPEG and [INAUDIBLE]?
ILYA GRIGORIK: Yes.
Yes, so that works today.
Chrome will send an Accept header that says,
I support WebP.
So in fact, that's how we recommend you deploy WebP.
And now you can also use Picture where you can manually
specify all the different variants,
if you're willing to do that.
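A minimal server-side sketch of that Accept-based negotiation follows. The parsing is deliberately simplified; a real server would also honor q-values and set Vary (or Key) so caches keep the variants apart:

```python
def negotiate_image_format(accept_header):
    """Pick a response format from the client's Accept header.
    Clients that advertise image/webp (as Chrome does) get WebP;
    everyone else gets JPEG."""
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    return "image/webp" if "image/webp" in accepted else "image/jpeg"

# A Chrome-style Accept header negotiates to WebP...
chrome = negotiate_image_format("image/webp,image/*,*/*;q=0.8")
# ...while a client that does not list WebP gets JPEG.
other = negotiate_image_format("image/png,image/*;q=0.8")
```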
GUY PODJARNY: So just maybe to push back a little bit--
so Kornel, you pointed out, hey, it's just 20 to 30%,
but image bytes on the web have grown by two
or three-fold in just two or three years.
And by all indications, generally, that trajectory
is up and to the right.
Saving 20 or 30% is not an insubstantial amount.
Should somebody make an effort, if we're
doing all these different-- if we're
going towards the path of fragmentation,
is it worthwhile?
KORNEL LESINSKI: It's a matter of trade-offs.
I don't think a new format will advance the industry by more
than one year, because connection speeds grow by 20, 30% so--
GUY PODJARNY: Arguably--
[INTERPOSING VOICES]
KORNEL LESINSKI: With a new format we
can do this year what we would be able to do anyway next year.
So I think if we're looking for a completely new format we
should also look for something that current formats completely
cannot do.
Because what we have is the same thing
we had 20 years ago, but better compressed.
But how about new formats that support different kinds
of alpha channel, that can do additive blending,
like all the blending modes in Photoshop,
or hybrid pixel-vector formats?
ANN ROBSON: Also, I think, stereo images.
We never do 3D or think about 3D,
but our eyes are actually seeing different pictures
all the time.
Why not support that in the future?
Stereo cameras may become a big hit in a year.
ILYA GRIGORIK: So, maybe a little bit of
pushback on the idea that bandwidth is increasing
and hence everything will be solved.
That's not entirely true, because bandwidth
is only one part of it.
We still have the round trips.
And those round trips can only carry so much data.
So even the fact that you have the latest LTE connection
on your phone doesn't mean you can just push a 10 megabyte
image immediately.
GUY PODJARNY: Yeah.
Right.
Also we have [INAUDIBLE] which is--
ILYA GRIGORIK: Right.
In fact, a 10 megabit connection is about one megabyte a second.
So it'll take 10 seconds to download that image.
And according to some of the best practices that we're
pushing, we're saying that your page should load in one second.
So what about all the other resources?
So I don't think it's completely fair.
It's certainly true that bandwidth is increasing,
but that's not really enough of a justification
to say these 20, 30% don't matter,
or that these two things will always go up at the same rate.
GUY PODJARNY: Let's try to move to the next question, which
is actually along these lines.
So let's say we want to live with these different multiple
image formats, what do you think-- well I
guess I'll read the question.
Given the need to serve a different image depending
on factors like display-- so a small image
to a small screen-- browser
capabilities for image formats,
and network conditions-- potentially a lower
quality image in poor conditions-- should we
be advocating different URLs for all
of these different situations?
Or one dynamic URL that adjusts?
And, I guess, if the latter, what's the right approach?
Is it picture?
Is it content negotiation?
YOAV WEISS: Ideally, I would say the best approach is
the approach that works for the developer that's
maintaining the website.
Currently there are problems with the single URL approach,
mostly related to caching, mostly related
to the lack of Key header support.
So this is something we will have to work on in order
to enable cacheable single-URL images.
But after that it depends on-- it's a development preference.
GUY PODJARNY: Yeah, but still, what do we recommend?
Everything depends.
There is no golden answer.
But should we try to advocate towards pushing people
towards one dynamic URL and to put the effort
behind that key header?
Or should we be advocating towards the picture element?
YOAV WEISS: I think we should enable both
because the use cases are different.
A single URL means you have a smart backend CDN module,
something that you control on the server side that
pushes the right format to the right browser.
Multiple URLs give the control to the site's author.
So you can do that.
You can basically support multiple image formats
in markup.
These are different audiences.
GUY PODJARNY: So you think we have to have them both?
YOAV WEISS: I think we have to have both.
ILYA GRIGORIK: I think the practical answer is, today
if you want to make it work and work well, you probably
have to end up erring on the side of dedicated URLs.
Just because your infrastructure in between your CDN,
your whatever, is not configured,
is not flexible enough to allow all of that.
But I would like to fix that such
that we move towards the world where
you don't have to do that.
You shouldn't have to have a unique URL for each--
a hundred URLs for the same damn image,
because one happens to be in a different format, one
is scaled slightly differently,
one is cropped this way or that way.
GUY PODJARNY: Why is that?
I mean, what do you think is the primary motivation,
I guess, for having one URL versus-- like today we're
used to having one URL and we don't need that complexity to
[INAUDIBLE].
ILYA GRIGORIK: To me it's just automation.
I just want to move the world towards more automation.
I shouldn't have to think about negotiating
the right image based on DPR.
We have to do that today.
And I think in the long term, that's
just too much complexity.
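The DPR negotiation Ilya would rather automate amounts to a small calculation. A sketch, where the breakpoint widths are invented for the example:

```python
def pick_variant_width(css_width, dpr,
                       available=(320, 640, 960, 1280, 1920)):
    """Choose the smallest pre-rendered image variant that covers
    css_width * dpr device pixels, falling back to the largest
    variant we have.  Breakpoints are hypothetical."""
    needed = css_width * dpr
    for width in sorted(available):
        if width >= needed:
            return width
    return max(available)

pick_variant_width(320, 2)  # 640 device pixels: the 640 variant covers it
pick_variant_width(400, 3)  # 1200 device pixels: the 1280 variant
```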
YOAV WEISS: Yeah, you can also automate it with a build step,
assuming-- but I think we agree.
ANN ROBSON: Oh, just that I think that we'd screw it up.
We wouldn't do it quite right if we did it ourselves, and, yeah.
GUY PODJARNY: So you think we should
strive towards that single URL, but we should also
have automated tools, have a generator,
or make some decisions, or help us
make the decisions about which image
to send at any given time.
ANN ROBSON: Yes.
ILYA GRIGORIK: We're already doing a lot of the stuff
by encoding.
Any sizable application already has an image resizing,
manipulation-something server.
You have one source of origin, or source of truth,
and then that one thing gets rescaled to all
the different variants.
And right now we end up encoding a lot of information
in the URL, even though a lot of that metadata
is already available in the image tag or the picture tag.
You're saying, my width is this,
and then you duplicate the width in the URL as well.
We can just get rid of a lot of that,
just make it much simpler.
GUY PODJARNY: Well, if I can actually
pull in something from a question we had further down.
I mean, when we talk
about multiple images versus one image,
one use case that comes to mind is
the notion of a single URL that can be fetched by any client,
and then maybe subsequently opened
by your operating system and things like that.
If we focus for now on sharing a link on Twitter
and having anybody be able to [INAUDIBLE]--
do you think that basically eliminates the possibility
of making it picture element based?
ILYA GRIGORIK: So this-- can I jump in?
This is actually a good argument against having a dedicated URL.
So let's say you have a dot JPEG, and a dot WebP,
and a dot something else.
And the dot WebP is only configured to serve WebP.
I see that because I accessed that thing in my browser
that understands WebP.
I copy that.
I paste it into my email.
And then you open it in some other browser and you--
what then?
You can't open it.
Now it's a broken image.
So really that URL, even though it says it's a dot WebP,
needs to understand the fact that there
are other clients that may not understand it and serve
a different asset.
So we're back to the same one URL.
GUY PODJARNY: So given that, why do
we need MIME type switching in picture?
YOAV WEISS: You could also, for example,
have the browser expose a save-shareable-link option--
expose the JPEG URL when sharing stuff.
You can Save As even though you viewed the WebP.
If you're doing Save As-- save as a shareable format--
you're saving to disk as JPEG.
ILYA GRIGORIK: So Save As is a little bit of a different case
though.
So I agree.
Now we're into the discussion of saving a safe version.
Whatever that means.
And perhaps this is something that browsers should
do, like you viewing this thing in whatever format,
and when you're right clicking and doing Save As,
should we give you a PNG?
It may be not good for size, but at least it
preserves the quality and you can open it everywhere.
Should we do that?
Maybe.
Maybe not.
That doesn't really address copying the URL
and pasting it into an email.
[INTERPOSING VOICES]
YOAV WEISS: --copy the URL.
Yes, I agree.
This is a downside.
GUY PODJARNY: Let's take a thought from the audience.
AUDIENCE: Is this on?
So in regards to having a single URL and it not working
if you copied to another-- if you
go to WebP in Firefox, or IE, or whatever--
isn't that true of any proprietary format though?
MHTML, for example, didn't work in Chrome
for the first 10 versions of Chrome,
although it always worked in IE.
Just because something doesn't work everywhere,
why is that necessarily a bad thing?
And how is that-- so if I try and curl that image
and there's no content negotiation automatically
in curl, without that, what would you serve?
Or how does that fit into this when you want a single URL?
ILYA GRIGORIK: So you're right.
It's nothing new.
I don't think that's a good experience overall,
because people tend to like to share photos and--
ANN ROBSON: Exactly.
That's all.
I mean, people get pissed off.
They're just going to get angry at your site,
at the site that provided this image that they couldn't share
the way they're used to sharing.
KORNEL LESINSKI: It just has to work.
Plus, having one URL with negotiation of the format
might actually be the easiest option for the developer,
if they install some kind of server-side
software like mod_pagespeed that will just do it automatically.
So this is good for users because URLs just work.
This is the best in terms of bandwidth,
because every browser will get the best format it can.
And it's convenient for the developer,
because they just put one format on the server,
it could even be a PNG, and then don't worry about compression.
GUY PODJARNY: Should we-- can we get a mic
to wesbrock and Lucas.
And in the meantime, it's worth noting to highlight a point
that Yoav made before, that we do need to handle caches.
And that if we are going to have a single URL that's dynamic,
it's important that we make sure,
whether it's your CDN, or your server side caches,
or whatever, support that type of flexibility.
GUY PODJARNY: Can we just-- sorry--
sorry let me just take a--
YOAV WEISS: Just one second point.
We have had the same situation for video for a while.
So I don't know if this is a big problem for video,
but it's the same problem.
AUDIENCE: I really like this concept
of having the Save As attach to a different file,
that you can source after you're starting with a dynamic URL,
because we've been talking so far about the developer,
and the developer facing the end user.
But one thing that we haven't been talking about
has been the content creator.
And if I'm talking about my friend's image
that he took in Cabo, maybe it doesn't really
matter if all of the attributions and metadata
that we're pulling out of that file
to make our website's performance is attached.
But if this is a Nobel winning image.
It just came straight out of Pulitzer's.
You're going to want to keep all of that metadata
when someone saves it, because you're not
going to lose that attribution.
So just in support of the single URL
allowing you to have multiple options, because then you
get to serve all three, if not more, constituencies.
GUY PODJARNY: Can we get Lucas?
He has been waiting for--
KORNEL LESINSKI: So in the meantime, actually, a
single URL for the metadata use case
is not that helpful, because we don't have any way
to tell the browser whether you want the version
with metadata or without.
Maybe that's something actually for the picture element,
where you could specify different resolutions,
some for the web and some special version
that's original, or lossless or metadata-rich one.
ILYA GRIGORIK: But you could tackle
that with the same mechanisms.
Because basically what you're asking for is,
you have an asset, you want to right-click and use--
or just Save As.
And in your Accept header you could
advertise, I want the full fidelity thing,
whatever that means.
Perhaps your proxy resized it to my viewport.
I want the full thing.
I want the high resolution image.
GUY PODJARNY: Let me take the two, Lucas and Mark,
and then we'll move on to the next topic.
AUDIENCE: So content negotiation is not a new thing.
And the ability to have a single URL
has been around for probably 10 years, I imagine.
And in my experience it's tough sledding
to sell that to a development team.
So I think there's a lot of elegance
to content negotiation, or the idea of having a single URL,
that it presents itself differently.
But I think that's a dedicated project that
probably needs to be taken on.
I imagine Mark Nottingham, for example,
and other people who have gotten close to the problems there.
But it's a separate issue.
I think that the web developer community has spoken clearly
against it.
I don't quite understand why, because I
appreciate the elegance of it.
But I think the voice of the developer community
is pretty clear.
ILYA GRIGORIK: So I'm not convinced by that argument
because we have GZIP.
And you don't upload a dot HTML dot GZ, or CSS dot GZ.
That's content negotiation.
There is a GZIP version and some other version.
It's the same thing as saying, here's a PNG which is high
fidelity, versus a compressed JPEG.
So it works.
We use it every day.
I think there are problems in our stack,
in terms of the caches, and all the rest, where we don't have
enough granularity to be able to precisely target
particular assets.
So we end up doing things like, oh, vary on
Accept, but the Accept header happens
to be this very highly fragmented, high entropy
string, which basically fragments the cache.
So we've got to fix those things.
That's what Yoav was referencing with the Key spec, which
is something that Mark is working on.
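The cache fragmentation Ilya describes is easy to demonstrate: a cache that varies on the raw Accept string stores one entry per distinct header, while keying on the one capability that matters collapses them. The normalization rule below is my own illustration, not what the Key specification actually defines:

```python
def naive_key(url, accept):
    """Vary: Accept -- every distinct Accept string is its own entry."""
    return (url, accept)

def normalized_key(url, accept):
    """Key-style idea: vary only on whether WebP is accepted."""
    return (url, "webp" if "image/webp" in accept else "no-webp")

headers = [
    "image/webp,image/*,*/*;q=0.8",  # Chrome-style
    "image/webp,*/*;q=0.9",          # another WebP-capable client
    "image/png,image/*;q=0.8",       # no WebP support
]
naive = {naive_key("/hero.img", h) for h in headers}          # 3 entries
collapsed = {normalized_key("/hero.img", h) for h in headers}  # 2 entries
```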
GUY PODJARNY: Let me take Mark.
One last note, and then we're really
way over time on this one.
AUDIENCE: So-- oh jeez.
Yeah, I think 10 years ago, 15 years ago the community
was fairly anti, but I think I agree with Ilya.
The tools are getting there.
There's a lot more intermediaries doing things
with [INAUDIBLE].
It's getting easier.
My question was about Key.
It sounds like you guys are depending on it, more or less.
It's not done.
And there are actually some hard problems inside of it.
But we've also not seen a lot of browser interest in it yet.
We need browser engagement from those guys.
And we need to actually implement it.
Unless Yoav pulls this trick again and raises 15 grand,
somebody needs to go and write the code.
GUY PODJARNY: That's a fair point.
I'm actually going to pause here,
because we're almost two questions in there.
Let's talk about the next one.
So when we look at these different image formats,
it seems like every entity has their own different image
formats.
So the question came from Patty: when
will we have a test benchmark for image performance?
And if we have one, what should it include?
ANN ROBSON: No good reason.
Definitely, we should have a benchmark
for image performance.
I think that we should have a benchmark for image compression
as well.
We don't even have a-- well, I guess we do, the Lenna.
There's this image from 1973 that people still use.
So I think that we could update this.
And we need to understand what these lossy image
formats do to our images.
So I think that we should have a set of images
that we use that are good examples of how images get
degraded when they get encoded.
So we should do that, and then also for image performance,
definitely.
We should think about that.
When we're considering these things
we should figure out exactly what we need.
GUY PODJARNY: Let's take, actually, Jonas's opinion
and then go back to panel.
AUDIENCE: Yeah, I was actually-- when
we were on the first topic, I was
going to suggest exactly this, that one
of the big reasons Mozilla has not gotten behind WebP
is that there are plainly disagreements
about how much better it is compared to JPEG.
And I have no idea who is right about this.
But I think having a benchmark that people can agree
is sort of representative of today's web,
would provide a lot more information than what
we have to go on right now.
Because people just disagree on how many percent better
this is versus that.
And this is not just WebP versus JPEG,
it's between all of these formats.
GUY PODJARNY: What would you include?
So in that debate back then, one of the, for instance, questions
was around whether you should use the existing images
on the web as the source, or whatever
it is you were going to encode to,
versus the pristine image that might have been your original.
Maybe there are other debates.
What would you include in that type of image?
And that type of a benchmark?
AUDIENCE: I think compare-- I think
what you want to do-- the thing that actually matters
is how fast you can download the image that
is being displayed in the browser.
I don't care how well you can compress
the 10 megabyte original image,
and how much smaller you can make that.
What actually matters is the smaller images that we
download and render on web pages-- how many bytes smaller
can we make them?
One of the really hard things is there
are four different ways of measuring image quality.
And I have no idea what the different algorithms are.
I don't know if we need to measure all of them
in this benchmark, or if we just need to as a community
agree on one or two that are sort of more
representative for what the human eye actually appreciates.
But I think we need a discussion around what we
should actually measure, because a bad benchmark--
this is something we see a lot in JavaScript-- a bad benchmark
can do just as much harm as a good benchmark does good.
GUY PODJARNY: Now, Kornel, you deal with this.
You benchmark your own tools all the time.
KORNEL LESINSKI: Yes, that's a big problem
for my work on PNG compressors.
Although there are standard test suites for photo-like images,
like the popular Kodak image suite,
there is no test suite that includes
images with an alpha channel.
So for testing how well the alpha channel compresses,
I have to steal images from the web.
But this is not a set of images that I could share with anybody
to let them compare their results with mine.
There's also a big problem of judging actual quality.
So we have machine algorithms that sort of emulate
how the eyes see distortions in images,
but this is very imperfect.
But it's also very sensitive.
The algorithm can detect a half-percent change
in image quality, which is needed
when you're developing your codec.
When you tweak something and make it a half percent better,
you want to keep that change.
But if you ask a human, is this half percent better
than the other image, they will not be able to tell you.
GUY PODJARNY: Which is similar to the,
which algorithm do we use for quality?
What's the bar?
What's the threshold?
KORNEL LESINSKI: Yes, and since there
are different implementations of different approximations of how
people judge quality, there are different opinions.
And if you use one algorithm, then
somebody developing a different codec will tell you,
no you've used the wrong algorithm.
You should be benchmarking using my algorithm on my images.
ILYA GRIGORIK: So I think this is also one of the reasons
why we have so much disagreement in all the different formats,
is because everybody uses their own test set.
So when we say images, do we mean photos?
High res photos?
Do we mean the PNGs, and the Alpha Channel,
and all the rest?
GUY PODJARNY: And animated GIFs.
ILYA GRIGORIK: And animated GIFs.
Right.
So one of the things that the WebP team has been pushing
for all this time is, we always say we're optimized for the web.
In the sense that, we try to grab images off the web
as they are being used today, not just
a bunch of really high def raw formats off my camera.
That's a use case.
That's a totally valid use case, but it
is not representative of the entire web.
And we try to optimize for those.
In fact, we see a lot better compression and gains
on the long tail of weird images.
The stuff we put in PNGs is bizarre.
KORNEL LESINSKI: Yeah.
The problem for me is, I don't run Google Images,
so I don't have access to that set.
ILYA GRIGORIK: Well, we don't actually
have good access to that set either.
But one idea would be to say, great,
let's go to the HTTP Archive, download all the images,
and just re-compress all of them with our algorithms.
But then you discover that you're
re-compressing artifacts of other formats.
So really, ideally we would have the origin
or the original asset.
And then we would have a test against all
the different formats.
But that's very hard to come by.
YOAV WEISS: That's one of the criticisms I heard regarding
WebP-- the fact that the current numbers are achieved
by re-compressing images that introduce their own bias
and their own artifacts.
So--
ILYA GRIGORIK: So, and this is funny,
because we actually had this really long discussion--
the WebP team with the Google+ team--
because they really care about photos.
They want to deliver really beautiful photos.
And at one point we found, talking to the product managers,
that we were effectively trying to replicate JPEG compression
artifacts.
The formats are different.
You're going to get different artifacts.
But they really liked the JPEG artifacts.
GUY PODJARNY: Which once again--
ILYA GRIGORIK: Because it's like,
that's what we're used to seeing.
But look this is so--
GUY PODJARNY: Similarity.
Do you need to be similar to the original,
or similar to the codec?
Let's take Wesley's point.
AUDIENCE: Can you hear me?
There we go.
Going back to the comment that was made earlier
about file size-- does that really
matter when the decoder may take longer,
say for instance WebP versus JPEG?
WebP's going to take longer to decode
that image based on the CPU and what
system resources that computer has.
So how do you get a benchmark for that kind of performance
and are you guys concerned about that?
ILYA GRIGORIK: Yeah.
So definitely a big concern.
So in speaking about WebP, it's definitely an area
that we're looking to improve.
So we're landing new incremental decoding improvements
and all the rest.
There is hardware support that will come down the road.
It's kind of a chicken and egg problem.
And despite that, we do see that when we run test studies--
eBay actually did a study where they
converted all their images to WebP.
And they compared decoding time versus the delivery time.
Because the images were shipped much faster, even though
you're decoding them longer, they still showed up faster.
So you can quantify these things.
You can use something like measuring how much of the image
is rendered at each point in time. It is quantifiable.
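The trade off Ilya describes can be sketched with simple arithmetic. This is a rough, hypothetical model-- the function name and every number here are made up for illustration-- treating time-to-display as transfer time plus decode time:

```python
def total_display_time_ms(size_bytes, bandwidth_bytes_per_s, decode_ms):
    # Rough model: time to show an image = transfer time + decode time.
    # Ignores latency, parallel downloads, and progressive rendering.
    transfer_ms = size_bytes / bandwidth_bytes_per_s * 1000
    return transfer_ms + decode_ms

# Hypothetical numbers: a 100 KB JPEG versus a 70 KB WebP that takes
# 10 ms longer to decode, on a ~8 Mbps (1 MB/s) connection.
jpeg_ms = total_display_time_ms(100_000, 1_000_000, decode_ms=20)  # 120.0
webp_ms = total_display_time_ms(70_000, 1_000_000, decode_ms=30)   # 100.0
print(webp_ms < jpeg_ms)  # True: the smaller file wins despite slower decode
```

On a fast enough connection the inequality flips-- transfer savings shrink while the decode penalty stays fixed-- which is why this is worth measuring rather than assuming.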
GUY PODJARNY: Let's get David's point and move
to the next topic.
David.
AUDIENCE: I guess the one other thing
to think about is that there's this final size versus quality
trade off.
And what these new formats are doing
is sort of very slightly shifting
that curve in the file size versus quality trade off.
But I guess the other question the developers, I think,
should think about is, are they at the right point
on that curve.
Are people making the file size versus quality
trade off that they want to be making?
ILYA GRIGORIK: Yes.
OK.
This is a big topic and an under-explored topic.
So we're pushing new formats that are getting 10, 15, 20,
30% improvement in file size.
But there is the other side of this equation, which
is all those formats have the quality slider, which
the developers and designers are just not using.
Because we don't have the right tooling
to expose that sort of thing.
And it's like a quality 40 or 50 on a WebP
is completely different from a quality 50 on JPEG.
In fact, it's a different output,
even if you use different JPEG compressors.
Each one introduces its own artifacts.
So I think we could do a way, way better job
by providing some better tooling and visualization to when
you're saving these images, or automation.
How do you find the right point on that curve, where
you balance how much the image has
degraded against the savings?
YOAV WEISS: The thing is there are--
I'm not sure we have the ideal tools,
but we have tools like imgmin that Kornel worked on.
It was based on some previous metric.
He added SSIM, which is a more standard visual image metric,
in order to binary search for the ideal quality.
So a developer can define, I want a 5% quality loss,
or something much more quantifiable
than the quality setting, which is--
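The binary-search idea can be sketched as follows. This is a hypothetical illustration, not any real tool's code: `distortion_at(q)` stands in for "encode at quality q and measure perceptual distortion (e.g. 1 − SSIM) against the original," and is assumed to fall as quality rises.

```python
def find_quality(distortion_at, target, lo=1, hi=100):
    # Find the lowest quality setting whose measured distortion stays
    # at or below `target` -- the smallest file that still looks OK.
    # Assumes distortion_at(q) is non-increasing as q grows.
    best = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if distortion_at(mid) <= target:
            best = mid      # acceptable: try an even lower quality
            hi = mid - 1
        else:
            lo = mid + 1    # too degraded: need a higher quality
    return best

# Stub metric for illustration: distortion falls linearly with quality.
# A real tool would re-encode the image and run SSIM at each probe.
quality = find_quality(lambda q: (100 - q) / 100, target=0.05)
print(quality)  # 95
```

Each probe costs a full re-encode plus a metric pass-- around seven probes for a 1-100 range-- which is why this approach is expensive to run on the fly.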
GUY PODJARNY: We're kind of going over.
But I think that actually shows up.
We can probably discuss this a little bit more
in the next question, which is, what
should be our strategy for managing
these multi-resolution images.
So a part of it is about, maybe, just the image,
the straight tooling.
But as dealing with that image becomes more complex--
as we mentioned before, there's already a certain bias in favor
of automation-- if we're going to use the single URL, what
should be the strategy?
Should people try and pre-generate
these ahead of time?
Because a lot of these tools around the binary search,
for instance, are very hard to do real time, or not
practical at all sometimes to do in real time.
What's the right move?
Kornel, what do you think about that?
KORNEL LESINSKI: So generating multiple resolution images
by hand is super boring.
So we definitely, definitely should
have tools that automate this.
And when we have a tool that automatically generates
different sizes, we can no longer
ask the author to tweak this quality slider themselves.
So I want to completely eliminate the quality slider
from all [INAUDIBLE] for images.
It's a complete lie.
It's not actual quality you're getting.
It's just an arbitrary mathematical formula
for throwing away bits of data from the file.
And it makes some files ugly, while other files
can tolerate more compression.
And we should use the CPU power and the algorithms
that we have to try to get the right quality.
Even if the algorithm is not perfect,
it's probably still going to be better than many authors who
have their favorite number they always
use for all their images.
GUY PODJARNY: Ann, what would you want to do?
What's your optimal-- you need to support these 17
different image variants and optimize them on your site.
ANN ROBSON: I mean, automatic.
I'm not going to do that by hand.
GUY PODJARNY: Automatic real time?
Or automatic build time?
Like automatic in the delivery?
ANN ROBSON: Yeah, I am not sure.
I think probably in delivery.
Although it'd be nice to check them, right?
I think that's amazing what Kornel
is saying, that we should get rid of the sliders.
I mean, he's saying that.
He's saying we should get rid of the quality
sliders in our images that we're optimizing.
Wow.
I think that's right.
I think we as humans-- the magic numbers
that we come up with don't work for all of the photos.
And that maybe we can automatically figure out
what's the best, exactly the best
quality, to set a certain photo at.
That's cool.
GUY PODJARNY: I think I cut you off
before Yoav, but [INAUDIBLE].
YOAV WEISS: I think that the question of offline
versus online-- so offline will always get you better ratios,
but you can't always do that.
So offline when you can, and otherwise-- I mean,
even if you compress on the fly, you should probably cache it,
and you should probably try to re-compress
the original after the fact.
But we're talking about a fairly complex back end.
As we said, the imgmin stuff, the binary search--
re-compressing the image multiple times is CPU consuming.
That's not something you can do on the fly.
GUY PODJARNY: Let's take a question from Mark.
AUDIENCE: So I just wanted to provide
a little color for that.
We do image generation at PhotoShelter.
We deal with a lot of image delivery,
from very high resolution files to non-lossy files,
raws, et cetera.
And we have to generate web-friendly versions.
So we do generate the multi-res image variants on the fly.
But I just want to tie together with the previous compression
discussion and how difficult it is to benchmark.
And even with algorithmic ways to analyze quality,
it doesn't really capture the subjective,
the perceptual aspect of it.
And we basically change our image pipeline, not too often,
every few years.
But when we do it takes a whole year
to decide what trade offs are worthwhile.
And we did find that this question
about JPEG quality-- we're using JPEG.
We're not using any-- because it's
the only thing that works universally.
But it gets complicated, because the multi-resolution,
the compression, and the rescaling
all interact.
So we have found that doing
that file size and quality trade off also depends on the size
at which you're delivering the image.
On a larger resolution image
you can actually get away with more compression and kind
of counterbalance.
So I think that is very important, obviously.
I'm not saying that any of these image formats
are trying to be prescriptive, but you can only
get so far trying to optimize the algorithm and the format.
Because you need to leave a certain amount of control
to the developer and the delivery mechanism.
Because we did find that we have to fine tune it significantly.
And now with high DPI displays, you
can get away with even more compression to deliver an even
larger asset, in a way that's not perceptually noticeable.
So that's just a little feedback.
GUY PODJARNY: OK.
Thanks.
That's good input.
So we only have a few more minutes,
let me actually skip a couple questions
and switch to the crystal ball question at the end.
Which is, one of the reasons for the growth of images
has been retina displays, and now we have a batch of new 3X,
or kind-of-3X, screens in iOS, which Android
has been doing for a while.
What would you guess-- what's
your prediction for the future?
Are we done at 2X, 3X?
Is there ever a done, or is it on to virtual reality
and 3D images?
You had an opinion about this before.
ANN ROBSON: Yeah, my opinion was I don't care,
because we're not doing what we can right now with 2X.
So I think that at a certain point
we can't really tell the difference.
Like there's not going to be such a jump from 2X to 3X.
It's not really going to be 3X.
I don't think that-- I think that-- I don't know--
I'm on the side delivering worse images faster,
than delivering perfect images.
GUY PODJARNY: It's not about the density
but more about, we need to educate our users
to try to get progressive image.
First get something then--
ANN ROBSON: Definitely.
Definitely.
Let's focus on that first.
But I think it's exciting.
I'd love to see a high retina image on the web.
I think it's really beautiful.
But I'm not sure how much more we could push it.
GUY PODJARNY: And Kornel, did have some opinions on this?
KORNEL LESINSKI: Well, we are upgrading our displays,
but we're not upgrading our eyes yet.
And the eyes are the ultimate benchmark.
So I think--
GUY PODJARNY: We should work on upgrading our eyes.
KORNEL LESINSKI: Yes, we should go for cyborgs.
But before that, I think we'll settle on something
like 2X or 3X, because it just doesn't make any sense
to put more pixels.
GUY PODJARNY: What do you think Ilya of that?
ILYA GRIGORIK: If I remember the marketing correctly,
retina was supposed to be defined
as the point where you can't see the difference anymore.
So when they come up with Retina HD, you're just like--
I can't see the difference.
GUY PODJARNY: But practically speaking, that's fine for us,
maybe, performance nerds here saying that.
But the reality is that they did ship 3X.
And would they ship 4X?
And would our designers insist that they want to use those?
YOAV WEISS: I think we need data on how much of a difference
does it make if we send out 2X images, to 3X,
4X whatever-X displays, and have hard data so
that if the designers come with 4X images,
we can throw them out the door.
And there was at Velocity last week--
this week-- research regarding progressive JPEGs that
was done, basically, by looking at how people react
to images displayed on the screen.
And regardless of the result, which is somewhat surprising,
I think we need to do the same for intentionally bad images.
What happens if we display 1X images on a 3X display?
How bad is it, as far as user experience goes?
Same for 2X images, et cetera.
We need data.
GUY PODJARNY: We need to switch to the data.
Yeah, the results there were not favorable
to progressive images, despite everybody's hypothesis
in the room ahead of time.
So I think-- OK we still have five more minutes.
So I misestimated the times here.
So I guess we switch to Andrew's question
from here, which is those are all technical questions,
but if we talk a little bit about a use case question.
Move one back.
How important is it that an image
format we use on a website is one
that a user can save and view in a different spot?
I mean, we touched on this a little bit
because we talked about the multi URL,
so they can open it everywhere. But does it
matter that the OS can display it?
Or are we done with browsers?
ILYA GRIGORIK: So I think it really
depends on the use case of it.
Ideally, it should be viewable.
So if I save it, I should be able to view it
with whatever software that I have, by default.
That would be nice.
We do have some examples where that
doesn't appear to be a problem.
So, for example, Opera and Chrome,
we have the compression proxies, which
have been transcoding all images to WebP for years.
And users are happy.
And as far as I'm aware, I have not heard anybody scream
about saving out images as WebPs,
which is what you would get,
because there isn't a safe save or anything else.
So that's an example.
There are plenty of big sites that
also use WebP, eBay, OKCupid, all those sites.
It doesn't appear to be a problem.
Is that a good answer?
I'm not convinced that it is, but--
GUY PODJARNY: But you don't think it's blocker.
ILYA GRIGORIK: No.
GUY PODJARNY: You think our primary conversations should
remain in browser-land and the OS would catch up.
ILYA GRIGORIK: If your use case actually
involves downloading an image, you have a photo gallery
and an actual "save as" is not a right click--
I'm not even sure how many nontechnical users know
to right click and "save as."
They're probably fishing for the Download button
somewhere in the UI.
That button should do something smart,
like not save the fancy new format.
It should save the safe format.
Can browsers do a better job?
Probably.
GUY PODJARNY: And Ann do you think that when
you deal with the social sharing platform--
ANN ROBSON: Yeah, I think it's very important
to be able to not break the image.
But yeah, and I think Ilya was getting
on something really kind of interesting.
It depends on what kind of site you have.
If you have your own photo gallery,
I think it's important for you to be
able to "save as" right click, drag to your desktop.
There's a kind of cool hack, I think only in Chrome, where
when you drag to your desktop you
can deliver a different high res image.
That's kind of cool.
But yeah, definitely.
I don't like the idea.
I love right clicking.
I love like taking images offline and sharing them.
GUY PODJARNY: So it's important to be
able to do the Save As we talked about before,
but then if you'd need to convert to something that's
a little bit more universal.
ANN ROBSON: Yes.
KORNEL LESINSKI: But people do share images.
They remix images.
All your LOLCat images, they were saved from somewhere.
So WebP can work as sort of a DRM
for photos, where you save it to desktop,
and then you try to open it.
Oh you don't have the right plug in.
GUY PODJARNY: Can we get Margaret a mic?
AUDIENCE: Hi.
I work on Firefox for Android.
And we actually have UI telemetry
to see what people use in the browser.
And actually the Save Image context menu,
in Firefox at least, is one of the highest menu items
that people use.
And we kind of wonder what kind of images
they're saving necessarily.
But I just want to throw out that data.
That we do have data that people use that a lot
so you probably wouldn't want to break that.
ANN ROBSON: Excellent.
GUY PODJARNY: Interesting.
YOAV WEISS: So safe sharing, safe saving
should probably be a thing.
Either on the client side, with picture-based markup
for multiple images, or on the server side, because we can play
with the Accept headers.
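Server-side negotiation over the Accept header can be sketched like this. A minimal, hypothetical example-- the function name is made up, and a real implementation should also honor q-values and send `Vary: Accept` so caches keep the variants apart:

```python
def pick_image_format(accept_header):
    # Serve WebP only to clients that explicitly advertise support;
    # everyone else gets the universally safe JPEG.
    accepted = {part.split(';')[0].strip()
                for part in accept_header.split(',')}
    return 'image/webp' if 'image/webp' in accepted else 'image/jpeg'

print(pick_image_format('image/webp,image/*,*/*;q=0.8'))       # image/webp
print(pick_image_format('image/png,image/*;q=0.8,*/*;q=0.5'))  # image/jpeg
```

The same check decides what a Download button should hand out: a client that never advertised `image/webp` gets the safe format.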
ANN ROBSON: I think that this panel actually
has an action item.
GUY PODJARNY: I think there are good items.
So we talked about-- I think we agreed
that we want a single URL, not multiple images,
that we need to create a test benchmark,
although we haven't talked about how we actually
go about doing that-- one that everybody agrees upon.
KORNEL LESINSKI: So maybe if you have really good images that
you cannot compress well, send them to [INAUDIBLE].
GUY PODJARNY: Just email them to Kornel or put them
in DropBox or something, because the email is going
to be too big, and compress them before.
No, don't--
KORNEL LESINSKI: No, don't.
PNG is good.
ANN ROBSON: I think we should seriously
try to get a set of images that are going
to be our test benchmark and do this.
Kornel or one of us should take the responsibility for this.
GUY PODJARNY: So on that note, let
me use the last minute we have here
to talk about building that benchmark.
Where do you think that type of effort should come from?
Should that be Mozilla, Chrome, like one of the browser vendors
that's switching the image formats, doing it?
Should it be a standards--
ILYA GRIGORIK: It's a moving target.
Ideally, I'd like to have some sort of a process,
sort of like the HTTP Archive, where we just say,
here's the latest set of images.
And perhaps our job is to define some sort of filters
on all those images, to say these are the safe ones to use.
Because I also don't want to freeze that
set, because the web evolves.
All of a sudden we are serving 2X and 3X images.
And perhaps there are different algorithms
that do better at high resolution images.
So that process--
ANN ROBSON: Yeah, it's interesting.
It's like, what is the average kind of content also
that is being served on the internet?
How do we even get that?
But that's important because--
ILYA GRIGORIK: It turns out the most popular image on the web
is a one by one pixel.
It's like, let's compress that.
ANN ROBSON: Done.
GUY PODJARNY: Well, I think we're done.
Thank you very much.