MIKE MAHEMOFF: Yes, so as Sam said, I'm Mike.
Right now, I'm actually working on Player FM, which is
very much around audio.
So audio is really the aspect of media that's been most
interesting to me.
And as well as that, it is actually a responsive web app.
So images for any web app are going to be important.
And I think something that I focus a lot on is the concept
of developer experience, and looking at developers as users
as well, and basically understanding what it's like
for a developer to perform any particular task.
And what are the complications they're having?
For me, just working as a one-man band, my
time is very valuable.
And so a lot of this too is not just about pure
functionality--
what sort of media capabilities have we got?
But it's also about making it really straightforward for
developers to do it without having to spend too much time.
I know most of my images are kind of
responsive to the user.
They look responsive, but they're loading way too much
data, just because I don't have the time to do
it the right way.
So those are the kinds of things I hope we can focus on
as well today.
So do you want to go around, and we'll just really
briefly-- we're pretty short on time.
So just say who you are and what you do.
JUSTIN UBERTI: Hi, I'm Justin Uberti.
I'm a tech lead for Chrome.
And I am one of the founders of WebRTC.
JASON GRIGSBY: Hi, I'm Jason Grigsby.
I'm one of the co-founders of Cloud Four.
And I've spent way too much time looking
at responsive images.
JOHN MELLOR: Hi, I'm John Mellor.
I work for Chrome for Android in London.
I've worked on viewports and those kinds of things.
And I like to work on images as well.
ANDRE BEHRENS: My name is Andre Behrens.
I work for the "New York Times" on the Chrome app.
And my job title is developer advocate.
My real title is troublemaker.
And I have complained to the Chrome
team a lot about images.
MIKE MAHEMOFF: So I thought actually Andre might be a good
person to start.
Because you've got some impressively scary stats about
how many images you have to deal with at "New York Times."
Do you want to talk a little bit about the challenges you
face and how you actually deal with them?
ANDRE BEHRENS: The challenge we face is that we publish 300
articles every day--
300, 300 articles every day.
And if you're making something that's supposed to show the
whole "New York Times," that means you have to show 300.
So to give an example of how awful that is, I used to have
a section that came from our "Times" wire feed that would
do the last 24 hours of articles.
And it crashed the app, so I took it out.
It just couldn't load that much stuff.
And we like to show these nice, big, beautiful images
that we pay people a lot of money to take-- and
that they put their lives on the line to get,
a lot of the time.
And those big, beautiful images in it, in order to fit
into our apps, the way we used to do it on the website was
the whole thing was structured kind of around the images.
This is how big it can be, because this is how
big the image is.
And so we would just have people manually cutting them
whenever they could, pulling from this giant database.
And now we're trying to move into a more fluid style, both
on our website and in some of our apps.
And a big reckoning call was when the iPad came out.
And they were saying, well, we don't want to have to look.
Because we would have images for certain things in
different sizes.
They didn't fit together.
So we made people put in the big ones.
So now you're doing a thing where you're
resizing all your images--
big performance problem.
And then, retina stuff came out, and these
stupid pixels came out.
And now the main image we use-- that's the main one to
work with-- is called jumbo.
And that's 1024 by 768, which is full screen on an old iPad.
Then, the retinas came out, so we had to double the size.
So they are 2048 on a side.
And they take a long time to download.
And they take a long time to render.
And they affect everything else.
And if you are casually resizing them, which a lot of
our developers will be inclined to do in this sort of
lazy way, it slows everything down.
And it doesn't make sense why it's taking so long to load.
And it's just a problem.
MIKE MAHEMOFF: And do you have any particular techniques that
you're using to pick up on that?
ANDRE BEHRENS: We are considering a fairly insane
one for the Chrome app.
We actually built an enormous Amazon-backed socket connected
system that has multiple balancing layers.
And I raised the crazy idea, well, why don't we just draw
every image at every pixel size--
1024, 1023, 1022, 1021, 1020.
And I went to the guys, and I said, why don't we do it?
And they were like, storage is cheap.
So we might try that--
no promises.
But I don't think that's really realistic for the
average web developer.
MIKE MAHEMOFF: Yeah, well it sort of leads to responsive
images-- one sec.
You can come up and ask a question after, actually, if
you want to come up now.
ROB: We're doing that now.
MIKE MAHEMOFF: Oh, OK, you're already doing that technique
of drawing images at every dimension.
ROB: Whatever's requested.
MIKE MAHEMOFF: Whatever's requested, you'll just pull it
out and probably cache it on demand, right?
Yeah, OK, and who are you?
Can you say?
ROB: I'm sorry.
Rob from "Financial Times."
MIKE MAHEMOFF: Rob from "Financial Times," OK, so it
sort of leads to responsive images, which you guys are
working on pretty closely.
Do either of you want to comment?
JOHN MELLOR: I'll start by making a distinction.
There's fixed width images, like maybe a logo or an icon,
where the main problem is resolution switching.
You want to load your retina image or
your non-retina image.
And it's not just 1x and 2x.
So you've got more ones.
But there's actually a simpler problem.
And the really hard problem tends to be with flexible
width images.
You want the image to be--
say its width is 100% in CSS.
It needs to match the size of the page for whatever device
you're loading it on.
And doing [INAUDIBLE] mark-up is really
tricky at the moment.
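The distinction John draws can be sketched in markup. The file names and the 2x srcset syntax here are illustrative, reflecting proposals whose support varied at the time:

```html
<!-- Fixed-width image: only resolution switching is needed,
     which srcset's x descriptors can express directly. -->
<img src="logo-1x.png" srcset="logo-2x.png 2x" width="200" alt="Logo">

<!-- Flexible-width image: the rendered size depends on the layout,
     which the preparser doesn't know when it first sees the tag. -->
<style>
  article img { width: 100%; height: auto; }
</style>
<img src="hero-large.jpg" alt="Hero photo">
```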
JASON GRIGSBY: Yeah, so the way that I tend to think about
it is, we've got a big conflict between what the
browser wants, which is to be able to start requesting
assets as soon as it can recognize the assets in the
document, versus what people want to do in responsive
design, which is to have the elements on the page, and
particular images, basically size based on the element that
it actually is, like the size of the element in the page.
And to me, it's sort of akin to when I've gone on vacations
with members of our family who want to script every part of
the vacation.
And then, you've got other people who don't want to plan
ahead at all.
And those are incredibly tense vacations.
And that's what's going on from sort of an imaging
perspective, where we've got this conflict between what
responsive design people generally want to do, which is
to wait until the layout is determined by the viewport
size and other characteristics of the device, and what the
browser wants to do, which is to start selecting assets as
soon as it can.
JUSTIN UBERTI: One of the things I think is interesting
is that for a long time, we were sort of told that JPEG
was fine, that JPEG was a standard algorithm.
We should all use that.
If you save 20%, it doesn't really matter.
Now, with retina size images, saving 20 or 30% is actually
translating into real user experience benefit.
And so with WebP, where we can get an additional 30, 40%
compression on images, that makes a real difference in
terms of page load when the actual vast majority of the
bytes is coming from images.
MIKE MAHEMOFF: OK, but what are the actual practical
techniques you're seeing, for instance, Jason, in terms of--
are people using JavaScript?
Are people finding it just doesn't work in practice?
JASON GRIGSBY: So I think the first answer is that there
isn't a solution that isn't sort of a hack, right?
What people are doing right now is picking which
compromise they want to use.
So what we've been advising people--
and I just recently consulted a company that has over
800,000 images.
And they've got a very similar process.
Images are coming in all the time.
They're dealing with this stuff.
And essentially, what we're looking at is, one, they've
been hand cutting everything.
They're moving to a system that will automate resizing on
the server.
And then, they're actually using Picturefill.
So they're using the JavaScript library to make
decisions like what the picture element would do if
the picture element were to come out.
But they're centralizing that into a single function so that
when it changes, whatever the new format will be, whatever
the standard ends up being, they can
quickly change that out.
Because it's clear that whatever you implement now
will be deprecated.
We just don't know what it's going to be
deprecated in favor of.
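A minimal sketch of the centralization Jason describes: every responsive-image decision goes through one function, so swapping in whatever standard eventually ships only touches this code. The breakpoints and URL scheme here are invented:

```javascript
// Hypothetical central chooser for image URLs. All call sites use this
// one function; replace its body when the standard settles.
function responsiveImageUrl(basePath, layoutWidth, devicePixelRatio) {
  var targetWidth = layoutWidth * (devicePixelRatio || 1);
  // Snap up to the nearest size the server pre-generates.
  var sizes = [320, 640, 1024, 2048];
  for (var i = 0; i < sizes.length; i++) {
    if (sizes[i] >= targetWidth) {
      return basePath + '-' + sizes[i] + '.jpg';
    }
  }
  // Nothing big enough: serve the largest we have.
  return basePath + '-' + sizes[sizes.length - 1] + '.jpg';
}
```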
JOHN MELLOR: So it's a balance between quality, performance,
and simplicity.
There aren't really any simple approaches at the moment for
flexible width images.
So that's out, anyway.
But then, you want the images to load fast.
And we also want to load high quality.
And it's kind of a fundamental conflict between these.
So in terms of techniques with flexible width images, there's
JavaScript.
But JavaScript techniques tend to load images only when the
DOM is fully loaded, at the DOMContentLoaded event.
And by this point, typically about a third of the page's
content, including all the resources, has already
finished downloading.
And the page would be ready to display if it weren't for the
images, which is holding everything back.
And so it's really too late to be requesting
images at that point.
And the images need to be somehow in the markup before
that happens.
MIKE MAHEMOFF: So what's the solution?
It sounds almost like it can't be solved.
JOHN MELLOR: Yeah, so there are some server-side
solutions, and you can do device detection, which isn't
great, because device databases suck.
You can do things like setting cookies or changing the base
href for your page at the top of your head.
And then, any images requested via a relative URL will be served
from an appropriate directory in your server.
And you'll set up base href based on the screen width and
the device pixel ratio, for example.
So there are techniques you can use today
that sort of work.
But yeah, we need better solutions in the future.
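The base-href trick John mentions might look like this. The directory names and breakpoints are invented; the mapping depends on what sizes your server actually keeps:

```javascript
// Map the device's effective width to a pre-generated image directory.
function imageDirectory(screenWidth, devicePixelRatio) {
  var effectiveWidth = screenWidth * (devicePixelRatio || 1);
  if (effectiveWidth <= 480) return '/img/small/';
  if (effectiveWidth <= 1024) return '/img/medium/';
  return '/img/large/';
}

// In the page <head>, before any <img> tags, you'd emit something like:
//   document.write('<base href="' +
//     imageDirectory(screen.width, window.devicePixelRatio) + '">');
// so relative image URLs resolve into the right directory.
```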
MIKE MAHEMOFF: And what's happening with that, the
picture element?
Is that something that's realistically going to be
coming out this year, for instance?
JOHN MELLOR: The picture element's under
discussion a lot.
There's [INAUDIBLE]
you can go read.
I think it doesn't seem like it at the moment.
And everyone's still hashing out use cases
for it, and so on.
It also doesn't really solve this problem directly.
Because you still get this fundamental conflict--
well, not conflict.
But to actually put a picture element that will load the
right image on various different devices or different
resolutions, you need 10 different rules for every
single image, like all the widths times all the device
pixel ratios.
And it's really hard to actually--
well, it is not a great solution [INAUDIBLE].
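To make that explosion concrete, a single flexible-width image under the proposals being discussed could look roughly like this. The syntax and breakpoints are illustrative, not final:

```html
<picture>
  <source media="(min-width: 1024px) and (min-resolution: 2dppx)"
          src="hero-2048.jpg">
  <source media="(min-width: 1024px)" src="hero-1024.jpg">
  <source media="(min-width: 640px) and (min-resolution: 2dppx)"
          src="hero-1280.jpg">
  <source media="(min-width: 640px)" src="hero-640.jpg">
  <source media="(min-resolution: 2dppx)" src="hero-960.jpg">
  <source src="hero-480.jpg">
  <!-- Fallback for browsers without picture support -->
  <img src="hero-480.jpg" alt="Hero photo">
</picture>
```

And that's one image, before you add more breakpoints or formats.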
MIKE MAHEMOFF: OK, does anyone else want to comment about the
whole issue of images before we move on to
WebRTC, and so on?
Or any questions, by the way, feel free.
ANDRE BEHRENS: Just to add on, the same problem with the
performance, the developers are not used to dealing with
this at all.
Because literally the easiest thing in the world to do in a
web page, is put an image referred to a URL and have it
show up, and you do nothing.
And now I'm coming to you--
OK, so we need about 30 different sizes.
And we have these different stop points in
our responsive design.
We're going to put these in here.
And you have to think about putting this here.
And maybe we're changing formats.
And nobody on our team has ever thought about this in
their life ever until right now.
They have no experience with it.
JASON GRIGSBY: I think one of the things that--
so I've spent a lot of time working with the responsive
images community group and talking about the different
use cases, whether it's the art direction use case or the
resolution switching use case.
And all of this has been in an effort to sort of create that
balance between what the preparser wants and what
responsive designers want.
And I was talking yesterday about how, increasingly, I'm
beginning to wonder if it is solvable, or if we're going to
end up in a situation where we have to choose
between one or the other.
And so Estelle has this really, really amazing
technique using SVG where you can do media queries inside
SVG, and SVG also does raster.
And so basically, you can have an SVG image that is basically
an image bundle that is responsive.
SVG isn't supported across--
it isn't on Android 2.3.
And there are some other sort of weird quirks around SVG,
because SVG is sort of weird generally.
But it's promising.
And more importantly to me, it's like the moment I saw the
clown car technique, which has an awesome name.
I found myself thinking that if I had access to this, I
would totally use it.
Because it's actually the problem that I want to solve.
I want to make decisions about the size of images and the
sources of images based on the size of the images in the
page, not based on device width, not based on viewport
width, not based on all these.
All those other things are basically like wearing boxing
gloves and trying to pick a pencil up
off the ground, right?
They're crude implements.
And so when I saw the clown car technique, I'm not sure if
that actually will end up being something that we, as
web designers, end up using.
But if it actually worked, I don't think that the preparser
would have a choice.
I think web designers would adopt it, and just move
forward, and say, figure it out, browsers.
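The clown car technique Jason describes works roughly like this: the page embeds an SVG as an object, and media queries inside the SVG respond to the SVG's own viewport, i.e. the element's layout size, to pick a raster. A simplified sketch, with made-up file names:

```xml
<!-- responsive.svg: embedded in the page as
     <object data="responsive.svg" type="image/svg+xml"></object> -->
<svg xmlns="http://www.w3.org/2000/svg"
     viewBox="0 0 300 200" preserveAspectRatio="xMidYMid meet">
  <title>Responsive image bundle</title>
  <style>
    svg { background-size: 100% 100%; }
    @media screen and (max-width: 400px) {
      svg { background-image: url(photo-small.jpg); }
    }
    @media screen and (min-width: 401px) {
      svg { background-image: url(photo-large.jpg); }
    }
  </style>
</svg>
```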
MIKE MAHEMOFF: Right, but that would need the browsers to
support that format, really.
JASON GRIGSBY: Well, the format
works in a lot of ways--
nearly all modern browsers.
So that's not as much of an issue as it is that, by doing
so, then you are in fact causing the problems that John
was talking about, where the images can't be requested
until much later.
JOHN MELLOR: So there's really two things on the horizon that
I see as possible solutions to this.
One is the Client Hints header, proposed by [INAUDIBLE], in
fact, where the browser would provide information as to its
device pixel ratio and device width with every HTTP request.
And this would allow servers--
you would just specify a single image in your markup,
and the server would send you an appropriate image based on
your device using this information without any device
databases and that kind of [INAUDIBLE].
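A server-side sketch of the selection John describes, assuming the request carries a device width and pixel ratio as hints; the available widths here are invented:

```javascript
// Given hinted device width and pixel ratio, pick the smallest
// pre-generated image wide enough to look sharp.
function selectImageWidth(deviceWidth, devicePixelRatio, availableWidths) {
  var needed = deviceWidth * devicePixelRatio;
  var candidates = availableWidths.filter(function (w) {
    return w >= needed;
  });
  // Smallest image that still covers the needed width,
  // else the biggest we have.
  return candidates.length ? Math.min.apply(null, candidates)
                           : Math.max.apply(null, availableWidths);
}
```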
The other, and perhaps the most promising but least
certain solution, is to use a progressive image format
coupled with smarter browsers.
The idea is web developers would save one ultra high
resolution image in a progressive format, link to
that in their website, and then the browser would only
load as many bytes as it needs.
So initially, the browser would start loading all the
images on a page in parallel, because it knows it's going to
need at least some data for all those images.
But then, based on the layout size of the image, it'll then
decide when to stop downloading images.
And maybe even if you pinch zoom in, it'll load some more
images at that point.
So actually Simon Jordan has got a prototype of this.
We're looking into it--
no promises yet.
MIKE MAHEMOFF: Cool, so we should move onto WebRTC.
Well, I think Sam asked an interesting question, which is
kind of the standard one.
Do you want to ask that question again?
SAM: Yeah, just really, I see WebRTC with mobile as the
natural home of WebRTC.
And I'm just wondering where we can take this beyond the
kind of obvious application--
video chat apps and so on-- so obviously directed at Justin.
JUSTIN UBERTI: So I think there's a bunch of use cases
that people are starting to realize where WebRTC is not
just making a phone call.
It's being used to kind of take media that's on one
device and push it onto another.
And one example is that you have this very small screen
for a phone.
And wouldn't it be nice if you could take that and shoot that
somewhere else?
Another thing is just that I think a lot of carriers who
typically had very, very long cycles to update any sort of
phone or device that's on their network now see this
idea to be able to push in a web app as a way to provide a
lot more value added services, and do this in a way where it
prevents them from being kind of just like a pipe.
And one of the things that's a challenge to get from where we
are now to there is the fact that right now, if you're a
web app, your traffic is competing with, as I like to
say, cat videos.
You want to send voice traffic or video traffic, and you're
just another packet on the internet.
But carriers have the ability to do
prioritization for traffic.
They do this already for voice.
The voice calls that you make are sent over a separate thing
than all the actual 3G and 4G data.
And to be able to expose that QoS stuff to these voice
communication apps is what's going to be able to make it on
a level playing field and allow this web communication
to be the future of telephony and communications.
MIKE MAHEMOFF: Sort of related to that, the whole issue of
performance and bandwidth, is also peer to peer.
And that's also potentially a big part of WebRTC, right?
JUSTIN UBERTI: Sure, and I think that you can think of
things like video distribution.
One of the historical challenges for over the top
video sites is that every time you have a new subscriber,
that's an additional amount of bandwidth you
need to send down.
And if you compare that to the broadcast model where they put
up the towers, there's a fixed cost, and then they have as
many subscribers as they want.
Moving to over the top video is a lot more costly.
But if we think about--
for popular videos, if you could then sort of get some of
the data from peers who are down your street or something
like that, what could that do to the actual data
distribution cost?
And WebRTC makes peer to peer possible for just
a regular web app.
And I think you're going to see some very interesting
transformations based on that.
MIKE MAHEMOFF: Yeah, I heard someone from BitTorrent
talking about this as such a tragedy that for each extra
user as a video user, you sort of actually have to pay for
that bandwidth.
Whereas from someone like BitTorrent's point of view, or
a peer to peer network, it's actually a benefit for every
extra user.
It's the exact opposite.
So it is a real tragedy that it doesn't happen.
And I think he was explaining that it was to do with
analytics and content owners wanting to be able to track
this sort of thing and license it.
But I think those solutions are also still possible with
WebRTC, aren't they?
JUSTIN UBERTI: Yeah, that's definitely true.
I mean, I think that just because you're not sending the
actual bytes of data down to the client doesn't mean you
can't still collect analytics, doesn't mean you can't still
collect play counts and that sort of thing.
There still has to be some master tracker to know where
you get your data from.
And so you can still keep all those things in control, but
you'll send 1/1,000 of the data.
You don't need to actually send a two
megabit 720p stream.
MIKE MAHEMOFF: So we've got a question from Matt.
Is Matt here, Matt Lockier?
OK, you can read your own question then.
MATT LOCKIER: OK, oh yeah, it was just sort of--
oh, this is kind of unrelated really to this section.
Did I ask another one?
Oh yeah, what's the status of the audio API?
Also, will it kill my battery if I'm analyzing for a really,
really long time, like if I'm just listening and doing
distortion and trying to provide cool things on mobile?
JUSTIN UBERTI: So we're trying to get the Web Audio API lit
up on mobile.
And Ray, who's in the back of the room, would be the person
who has the best answer on that.
But one of the great things about Web Audio is that the
heavy lifting of the actual audio processing is done sort
of inside the browser, not necessarily by JavaScript.
So it can be done very, very efficiently.
And one of things I expect that we'll see going forward
on mobile is that more and more stuff will get offloaded
to hardware.
And so if there are cases where we're seeing the stuff
really used for hotword detection, or things like that
for speech, if it's killing battery, we'll find some way
to make it efficient so that your phone lasts all day.
MIKE MAHEMOFF: Also just thinking about video a bit,
I've got a question here from Rayhan, who's a friend of mine
in London, actually.
And he's basically got a start-up to
do with real estate.
And he was wondering about how he could use the camera and
actually sort of overlay things, or do programmatic
transformations on top of it as part of the stream that
gets recorded and played back or uploaded.
JUSTIN UBERTI: Does he mean in the user interface, or for
actually sort of decorating the video as it's captured?
MIKE MAHEMOFF: Yeah, potentially uploading that or
streaming that.
JUSTIN UBERTI: Right, so I mentioned Web Audio is now
something that can be used to actually post-process the
audio after it's captured.
And we're adding a similar sort of API for video.
This is kind of all pre-standard stuff, but
something where, within your actual web application, you
can get the video, then process it.
If you wanted to do some sort of special effect to it, put
on a silly hat or whatever, you could then take that,
process it in JavaScript, and then when it's sent out to
whatever the other peer is that's going to then consume
it on the server, or maybe just your friend on the other
side of the country, they'll see the post-processed video.
And you could also imagine just generating a video
stream, too.
Perhaps it's not a traditional sort of video chat example.
But maybe you want some way to check in on some sort of
sensor network or something.
And the way it does so is through some video output.
And you could create this actual thing from some API.
MIKE MAHEMOFF: JavaScript.
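One way to approximate the pipeline Justin describes today is to route camera frames through a canvas. This is a hedged sketch: getUserMedia was vendor-prefixed at the time, and feeding the processed canvas back into a stream is exactly the pre-standard part he mentions:

```html
<video id="cam" autoplay muted></video>
<canvas id="fx" width="640" height="480"></canvas>
<script>
  // Capture the camera, redraw each frame through a canvas, and apply
  // an effect before the frames are displayed or recorded.
  navigator.getUserMedia({ video: true }, function (stream) {
    var video = document.getElementById('cam');
    var canvas = document.getElementById('fx');
    var ctx = canvas.getContext('2d');
    video.src = URL.createObjectURL(stream);
    (function draw() {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      // Post-process here: overlays, filters, the silly hat, etc.
      ctx.fillText('LIVE', 10, 20);
      requestAnimationFrame(draw);
    })();
    // Turning the canvas back into a stream for a peer connection is
    // the part that was not yet standardized.
  }, function (err) { console.error(err); });
</script>
```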
JASON GRIGSBY: Just on the subject of video, I've
actually got a request.
And that's to get the adaptive streaming
stuff sorted out ASAP.
For people who are attempting to do responsive design, from
a responsive design perspective,
you'd like to have--
in the same way in which images are a problem, source
video is an issue, right?
So you can provide different codecs.
But ideally, what you'd like is, you'd like different
sources based on bit rate, based on
the size of the screen.
It doesn't make sense for somebody on a phone to
download a 1080p video.
And at the moment, we've got HTTP live streaming on iOS.
But we don't yet have implementations on the other
mobile devices so that we can do the same sort of stuff.
And so I know that there's a lot of work going into it.
I hope to see it on devices soon.
Because if you end up in the video space, it becomes very
problematic.
And you basically have to, again, sort of make JavaScript
changes in order to make it happen.
MIKE MAHEMOFF: OK, yeah, so then that becomes a challenge
of you'd have to actually have all those videos, right?
So again, you might have to do something like Andre was
talking about of actually generating those videos on the
fly, which could be quite scary.
ANDRE BEHRENS: We're doing more of that, too.
MIKE MAHEMOFF: Getting into that, yeah.
JASON GRIGSBY: The days of "Save for Web" for video or
for images are over.
You need some way to automate it on the server side.
MIKE MAHEMOFF: It leaves us some interesting denial of
service possibilities, doesn't it?
If you generate every pixel, or every quality level, then
you have to generate a new whole video for that.
JUSTIN UBERTI: You don't have to generate every sort of
combination.
I think the matrix can still be fairly sparse.
Because really, you see this sort of linear progression of
bit rate goes up with resolution.
And so if you watch Netflix, what you'll see is that they
start sending you the lowest quality thing while they're
still figuring out what bandwidth you
can actually support.
And as you actually say, oh, I can support 240p, 360p,
they'll keep ramping you up.
In Chrome, we do this through the media source API, which is
the ability to sort of feed in blocks of video.
And those blocks can keep going up in size.
As each block comes in, the actual entire video--
you don't have to get the whole thing each time.
You just get each little chunk.
And each chunk can be in a different resolution.
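A minimal sketch of that Media Source flow. The segment URLs and codec string are invented, and the API was still prefixed and in flux at the time:

```html
<video id="player" autoplay></video>
<script>
  // Feed the video element chunks of media by hand; each appended
  // segment can come from a different bitrate/resolution rendition.
  var mediaSource = new MediaSource();
  var video = document.getElementById('player');
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function () {
    var buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
    var segments = ['seg-240p-0.webm', 'seg-360p-1.webm', 'seg-720p-2.webm'];
    var i = 0;
    buffer.addEventListener('updateend', fetchNext);
    function fetchNext() {
      if (i >= segments.length) { mediaSource.endOfStream(); return; }
      var xhr = new XMLHttpRequest();
      xhr.open('GET', segments[i++]);
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        buffer.appendBuffer(new Uint8Array(xhr.response));
      };
      xhr.send();
    }
    fetchNext();
  });
</script>
```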
MIKE MAHEMOFF: That makes sense, actually--
sort of like the media query
breakpoints you'd end up with.
So just last question before we wrap up is
about Blink, actually.
Also Matt asked, what are we going to see about
fragmentation?
And what are we going to see from Blink in general?
This is for all the media types we've been discussing.
JASON GRIGSBY: I believe that Blink is--
no, I can't speak to this.
JOHN MELLOR: Blink's goal is to work closer to [INAUDIBLE]
community.
And so we're going to try and avoid [INAUDIBLE].
We're also trying to add more testing with the W3C, and so on.
And so we're trying to make sure that the implementations
of different browsers are interoperable.
And so there's no particular media angle on this.
It's kind of hard.
JUSTIN UBERTI: I think just one of the interesting
questions is, what will Blink mean for WebKit?
Because there are a lot of Chromium
people working on WebKit.
And now that they're not there, what will that mean for
the velocity of WebKit?
I don't know the answer to that.
MIKE MAHEMOFF: Very much an open question, so--
yeah, Bruce.
BRUCE: Can I ask a question?
MIKE MAHEMOFF: Yeah, go on-- one quick question, yeah.
BRUCE: About WebP--
hi, Bruce from Opera.
At Opera, we like WebP.
And we've used it for a long time in the Presto
incarnation.
We've got something called Opera Turbo to speed up the
desktop on slower networks.
And that transcodes images on the fly to
WebP on slow networks.
That's still faster than
rendering the JPEG or whatever.
WebP is Google's format.
What plans are there to popularize it and make it more
widespread?
I know that Firefox's initial
opposition seems to be softening.
I wonder if Microsoft's opposition might soften now
that the WebM format is clarified
vis-a-vis the MPEG LA.
But what are you going to do to allow it to fall back on
browsers that don't support WebP, for example?
JOHN MELLOR: It's a good question.
Ilya, do you want to give them more on this, perhaps?
ILYA: So part of the answer is, we have a session on WebP
tomorrow where we're going to cover exactly that.
But the short of it is, I think we can make the
negotiation of WebP much simpler.
So today, it's actually hard to deploy WebP, right?
Either you have to use a JavaScript fallback, or
you need to use a user agent sniffing solution, and neither
one of those is good.
So we're actually fixing a bunch of things in Chrome to
make that easier.
So you guys actually--
Opera has been advertising image/webp in the Accept header.
We're landing that in Chrome now-- it's in Canary.
That's going to fix a lot of things.
It makes it very simple.
Just recently, we actually had an integration
with an image CDN.
So CDN Connect--
cool service which does all the imagery sizing and
everything on the fly.
It'll also detect that you're advertising the right accept
header and serve the appropriate image.
So that's one example.
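A minimal server-side sketch of that Accept-header check. The path scheme is invented; the point is just that the server, not the markup, picks the format:

```javascript
// Serve WebP only to clients that advertise it in the Accept header,
// as Chrome and Opera do with "image/webp".
function shouldServeWebP(acceptHeader) {
  return typeof acceptHeader === 'string' &&
         acceptHeader.indexOf('image/webp') !== -1;
}

function imagePathFor(basePath, acceptHeader) {
  return basePath + (shouldServeWebP(acceptHeader) ? '.webp' : '.jpg');
}
```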
Another example is something-- so I'm sure you guys have
heard that Facebook was
experimenting with WebP, right?
And they ran into a couple of issues where users would save
the image, but then they can't open it.
So we're fixing that as well.
Right now, we've registered Chrome to
be the default viewer.
So at least you can double click on it
and view the image.
Now, can you open it in Photoshop?
You need a plug-in for that.
And this kind of thing takes time.
One kind of crazy and interesting idea that we're
also experimenting with is, what if you had a safe Save
option, which is, when you right click, and you click,
and you go Save As, we save, I don't know, a JPEG or
something, which is transcoded to some format that we know
that works?
And this is not specific to WebP, right?
I hope that we have more image formats in the future.
And if we can have the right mechanisms to address these
issues, whether that's accept header and a save and all the
rest, that'll help everybody.
BRUCE: Well, there's a level four CSS module whose name
escapes me, because it's typically snappily titled.
And instead of saying background-image: url(blah.png),
you can say background-image, and then image().
And in there, you can give a
list of images.
So you can have WebP, then comma JPEG.
And that would make developers' lives a lot easier
if that were turned on in Blink.
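The notation Bruce describes, likely image() from the CSS Image Values drafts, looks roughly like this; support was scarce at the time:

```css
/* Fallback list: the browser uses the first format it supports. */
.hero {
  background-image: image("hero.webp", "hero.jpg");
}

/* Without image(), you serve only the lowest common denominator. */
.hero-fallback {
  background-image: url("hero.jpg");
}
```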
ILYA: So that's really interesting.
I actually wanted to ask--
I know that we're running out of time.
But I want to ask you guys about this.
So I think that solutions like picture tag make the whole
situation much worse and much more complicated.
Negotiating which format, or which compression level, like
2x versus 1x--
you don't compress by hand.
You don't have an index.html and an
index.html.gz on your server.
The server just does that for you.
Negotiating 1x versus 2x versus 2.5x
should be the same thing.
Are we actually doing the developers a favor by giving
them a picture tag, which is 15 lines long, because you
have a combinatorial explosion of image formats times DPI
resolutions times art directions times
whatever?
It just seems like a step backwards.
ANDRE BEHRENS: I am strongly in favor of not having to
think about things.
JASON GRIGSBY: Is it a favor?
No.
Part of the reason--
I look at this like whether it's picture or SRC set or
whatever it is, continue to advance those.
Because if people don't like them, sooner or later,
somebody will be motivated to come out
with something better.
And so the problem is truly a problem.
And it's a problem for the clients that we consult with.
It's a problem on the projects we work on.
So we've got to continue to try to find better solutions.
ILYA: So this is interesting.
Because when I talk to W3C and other communities, they
actually tell me, hey, no, we really want a markup solution.
We don't want to rely on the server to do this.
JASON GRIGSBY: Sorry, I think that it would be strange to
have a fundamental element of HTML that required the server.
There's no precedent for that.
ILYA: But 1x and 2x is just a compression setting.
You don't specify a [INAUDIBLE]
link for--
go to this page, it's compressed.
JASON GRIGSBY: Correct, yes--
I don't know, maybe.
MIKE MAHEMOFF: All right, we've got to finish up.
So just in 30 seconds or less, maybe you can just give out
one best practice that you'd like to see more
developers be doing.
JUSTIN UBERTI: I'd like to see people take advantage of the
peer to peer APIs that WebRTC will be providing.
It's sort of at no cost, the ability to get bulk data much
more cheaply.
JASON GRIGSBY: We know that 96% or 94% of responsive
designs are bloated crap.
And it would be really nice if people built stuff mobile
first and performant when it came to responsive design.
JOHN MELLOR: Be careful when you use polyfills for
things like the picture element, and so on.
Don't just assume they'll perform well.
Actually measure that, and cross your fingers for the best
in the future.
ANDRE BEHRENS: I think I would say, notice there's a problem.
Open DevTools--
notice that not dealing with it is a problem, and notice
that the solutions are a problem, and make noise.
Because that's where solutions come from.
MIKE MAHEMOFF: Get on the Bug Tracker.
All right, thank you.
[APPLAUSE]