Diffie: I'm Whitfield Diffie.
I did, uh, one good hour of work in my life,
just about 34 years ago.
The spring of 1975.
Uh, created something called public-key cryptography.
And I've been making a living off of it ever since.
[laughter]
Um, some other people have made an even better living
off of it ever since,
uh, I'm happy to say.
And I, uh...
I get presented with my panel.
I find two of them I've known for quite some time
and two of them I haven't.
So let me just introduce them as best I can.
Uh, Steve Crocker is somebody I've actually known
even longer than Vint Cerf.
I met him when he was at MIT, '67, something like that,
before I went to Stanford.
And he, uh, I think he's had his, you know, sort of fingers
on the pulse of ARPA's neck ever since then.
And, uh, been working in security
for quite a number of years.
One of a couple of people I'm going to introduce
whose activities move too fast for me to keep track of them.
So I'll let you all say how I misrepresented you
when you get the floor yourselves.
Um, the next two are internal here
and probably better known, um...
to many of you than they are to me.
Um, Chris DiBona's described as "browser security,"
which I assume means Chrome.
And I'm told Chrome goes a long way out of its way
to, uh, to get isolation among various things
that need to be isolated.
Eric Grosse seems to be bipartite.
He's described as "application and network security."
As a practical, developmental matter,
I see those as two different things.
And maybe they should be harmonized.
But I'm curious as to how they get unified in one person.
But I hear he comes from Bell Labs.
And so, you know, universe of my youth.
That means he can do no wrong.
And finally, Howard Schmidt.
I can't keep track of him either.
But he's been at Microsoft, at the White House, at eBay.
But then eBay sold him.
But I've forgotten who they sold him to.
And, uh... man: [speaking indistinctly]
Diffie: the on-- [laughter]
No, that's exactly what I was thinking.
I mean, you know, it's like, uh, Jay Leno quoted,
uh, Schwarzenegger, saying about the budget crisis,
[as Schwarzenegger] "You know, we have no choice now.
"We are going to have to sell Senate seats right here
in California."
Uh, so selling off this personnel
is the, uh, last resort
of the unscrupulous corporation.
Uh, Anita, why don't you all come up.
Uh, two out of four of us have slides.
Two and a half out of five of us have ties.
I have mine in my pocket.
Um, and I was told, I guess,
that Chris DiBona's slides coming before
Eric Grosse's is the only guide
to what the order ought to be here.
So, uh, why don't we set you off in alphabetical order.
Steve, why don't you go ahead and...
Actually, I-I got-- let me have the mike.
I got carried away by introductions.
So I think I'll also take a second
to make an opening statement,
uh, since after I let other people loose,
I may not get a chance.
I think that the internet is unique
in a combination of things.
First, if you compare it--
I think lots about-- you know,
there are important things to learn from history.
But you can also learn a lot that's misleading from history.
And I think in security,
we've absorbed a lot that is very misleading.
And one critical thing
about the internet is that it's a network
intended for friends to talk to foes.
If you look at a traditional
military command and control network,
which is where traditional security work was done,
you have a network meant for friends to talk to friends.
And when you add the technology that we've developed,
which was indeed intended for a diverse community,
to the easier problem of a less diverse community,
you can get quite good results.
But we still aren't ready for prime time
in a truly diverse community.
Now, you might answer that by saying,
"But wait a moment.
"There have been networks before
"that didn't just connect friends.
"Look at the telephone system.
For that matter, look at the mail."
And that much is true.
But the key point about the internet in my mind
is how much it is its own housekeeper.
So the internet is a matter of a lot of code running
from microcode to applications.
And that code is developed and updated and maintained
over the network itself.
And that makes it--
that raises the security standard required
in an environment that makes it harder
to achieve that security standard.
So with that, I will, uh, let my--
incidentally, I'm talking this way
because I was at a two-day celebration
of, uh, of 100 years of British intelligence
in a conference facility in which the common room
had a ceiling about this high.
And you were shouting at each other.
You know, you know, spies get together and conspire,
but you have to shout to do it.
So I apologize.
At any event, I let loose the rest of the panel
who can correct me.
Crocker: I guess this is-- this is live.
Thank you very much.
Um, uh, you caused me to flash back, Whit.
Uh, so we met, my recollection is,
when you came into my office and asked if you could borrow
one of my, uh, math books
when we were grad students at MIT.
And I remember being, uh, pleasantly surprised
that I had anything that would interest anybody
who was, uh, already at MIT
since I felt overwhelmed by the environment.
Uh, the other thing you mentioned
was having a, um-- what--how'd you put it?
Uh, fingers on the neck of, uh, of...
Diffie: ARPA. Crocker: of ARPA.
Uh, uh, they con--
Diffie: That's the pulse, not the...
Crocker: Yeah. Yeah.
So that--that-- that was the question
is this--is this the fingers around the neck
or is this finding out whether DARPA's still alive?
Which is, uh, in some respects,
an open question.
Um, so I'm h--I'm here in a very narrow capacity
with respect to security.
I've been working on what looked like
an easy, straightforward problem.
It actually originated when Vint called up one day
and said, "We got a serious problem."
He described the basic issues
of cache poisoning.
It had been demonstrated in a way
that rocketed up through the higher levels
of the government, to the Presidential Science Advisor.
And we started an urgent project right away to fix the problem.
It was obvious what to do.
Add cryptographic signatures to the DNS system.
Early 1990s.
And, um, it's a little sad and embarrassing
as to what the long path has been for that.
The good news is that, uh, the specs
are pretty much done.
And we're now in the deployment process,
which is long and, uh, equally painful.
So I want to give you a very brief walkthrough of this.
Um, I've been cautioned
that I have far too many slides,
but I intend to move along real fast.
So, uh, uh, recap, status, and current issues.
Here's, um, a picture of what DNS looks like.
And there are two sides to this.
To the left of the cloud is the provisioning side,
putting the information together,
making it available.
And on the right side is the, uh, active part,
the reason why it's there,
asking the questions and getting the answers back.
And here is, um, what we'll call
the Google effect, um, perhaps,
or the--the web effect.
There's a rather surprisingly large number of, uh, queries
for DNS just in looking at a single page these days.
This is, you know, you think you will look up something,
get an answer back,
and then you go and get that information.
So that's the reference for a single page
off of, uh, cnn.com.
Those are DNS lookups, not the content lookups.
Here's where things can go wrong.
Uh, the arrows all point to places
where it's possible to tinker with
and subvert the information.
The black arrows are, uh, places where you need
insider access.
Uh, the red arrows, unfortunately,
are the places where the attacks can come
from the outside.
And that's the area that we want to, uh, uh, fix.
So, uh, in ultra brief,
this is not the place for a, uh, tutorial.
And I suspect, um, you know, 95% of the people
in this room can, uh, can build this stuff.
Um, so you have, uh, zone administrators
sign records.
End systems have to check them.
And if the bad guys want to insert false answers
in the middle in one fashion or another,
either on the fly or by poisoning the cache
or whatever,
that gets detected and, in principle, discarded.
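[Editor's note: the flow Crocker describes can be sketched as below. Real DNSSEC uses public-key RRSIG/DNSKEY records; the HMAC here is only a stand-in for illustration, and the key and names are hypothetical.]

```python
import hmac
import hashlib

# Toy model of the DNSSEC flow: the zone administrator signs a record,
# a validating resolver re-checks the signature, and a tampered answer
# (e.g. a poisoned cache entry) fails the check and is discarded.
ZONE_KEY = b"example-zone-signing-key"  # hypothetical key material

def sign_record(name: str, rdata: str) -> str:
    """Zone administrator: attach a signature to a record."""
    msg = f"{name}|{rdata}".encode()
    return hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()

def validate_record(name: str, rdata: str, sig: str) -> bool:
    """Validating resolver: recompute and compare the signature."""
    return hmac.compare_digest(sign_record(name, rdata), sig)

sig = sign_record("www.example.com", "192.0.2.10")
assert validate_record("www.example.com", "192.0.2.10", sig)        # genuine answer accepted
assert not validate_record("www.example.com", "203.0.113.66", sig)  # forged answer discarded
```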
So one of the big questions that comes up,
not in the inner circles,
but at the next layers out, is,
"Why should I adopt this?
Is this going to solve all my problems?"
And the answer is the usual,
uh, "it does part of what you need
and it won't do everything."
Is it necessary? Yes. Is it sufficient? No.
It won't solve DDoS attacks.
It won't solve identity theft.
It won't solve a bunch of other things.
But it is a base layer that a lot of things
are going to be built on top of.
And it's important to, uh, implement it.
Uh, and so--so, uh, getting down a little bit,
there's a couple of chicken and egg problems.
One is that in order for it to work, you have to have zones signed
and you have to have validating resolvers
check those signatures.
Why should I put it into my resolver
if there's nothing signed?
Why should I sign anything if there's nobody checking it?
And until you get on the right side
of that cycle, it's not virtuous.
Um, so we have to get zones signed.
We have to get signatures checked.
Uh, as it turns out, it is easier to put the pressure
on getting things signed.
And so, in fact, the place we are
is that we have more things signed
and we are in an earlier, but not zero, stage
of getting signatures checked.
There's the status.
Um, there are a handful of--
a growing handful, I should say--
of top level domains that are signed
and in operation.
If you're not familiar
with all those two letter acronyms,
that's Sweden, Brazil, Bulgaria, Puerto Rico,
the Czech Republic, and Thailand,
uh, for country code, top level domains,
and .MUSEUM and .GOV,
which is the U.S. government internal, um, uh, one--
one of the U.S. government's internal ones.
.MIL is the other.
Coming soon, Canada, UK, uh, Portugal,
.US, uh, Australia, India, uh, Malaysia,
um, Switzerland all testing,
uh, .ORG, .EDU, .MIL, .NET, and .COM.
All have announced, uh, various degrees
of, uh, development and, uh, uh, likely to come.
And the root is very much under discussion.
And, um, I-I--there's just a huge amount going on there.
And I'll say a bit more at the tail end.
Dan Kaminsky's work last year,
uh, is probably the strongest and best thing
that ever happened with respect to DNSSEC
despite all of the years of effort
that we've put into it.
Uh, we have to raise him as the poster child for DNSSEC.
Um, the U.S. government has purview
over .GOV and .MIL, of course,
and over .US and .EDU
under, uh, various contracts.
And it has, uh, I think--
uh, uh, talk about fingers on the neck,
um, fingers around the neck of the root.
Um, there are government regulations in place
saying that .GOV has to be signed,
and below .GOV, all of the subordinate zones,
by the end of this year.
Um, .MIL is coming.
There's, uh, f--federal standards
that are not only forcing the adoption
inside the U.S. government,
but will have a cascading impact on vendors.
And that is, um-- one would have thought
that the power of the purse of the U.S. federal government
would have waned in the I.T. area
and, uh, could not control anything.
But it still has a quite, uh, powerful, um, uh, effect.
On the validation side--
the checking of signatures--
ISPs are beginning to operate validating resolvers.
Telia in Sweden rolled out, uh, in parallel
with, uh, the, uh, Swedish registry
when they rolled out the first, um, uh, full-up
operating, top level domain that was signed.
Uh, Comcast in the U.S. has been a leader,
as has the University of California, Berkeley,
and I think there are, uh, a bunch of other places
that are beginning to run validating recursive resolvers.
The list will grow.
Um, what are the issues?
Um, well, are there performance problems?
And the answer is "not really."
Yes, it is true that signed answers are longer
than unsigned answers.
Yes, it, uh, takes longer to check.
Uh, well, you have to spend time
checking the signatures computationally,
and bandwidth utilization, and so forth.
These are small potatoes.
One can make them look very large
by saying, oh, look, the signed answers
are three times bigger than unsigned answers.
That's out of a single-digit percentage
of the total bandwidth that gets consumed,
perhaps, at best.
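[Editor's note: a back-of-envelope check of the "small potatoes" point. The 5% DNS share of total traffic is an illustrative assumption, not a measurement from the talk.]

```python
# If DNS is a small slice of total bandwidth, even tripling the size of
# DNS answers grows overall traffic only modestly.
dns_share = 0.05      # assumed fraction of total bandwidth that is DNS
signed_growth = 3.0   # signed answers ~3x the size of unsigned ones

new_dns_share = dns_share * signed_growth
total_traffic_growth = (1 - dns_share) + new_dns_share  # non-DNS traffic unchanged
print(f"Total bandwidth grows by {100 * (total_traffic_growth - 1):.0f}%")
```

So a headline-grabbing "3x bigger answers" translates to roughly a 10% increase in total traffic under these assumptions, which is the speaker's point.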
Um, we did discover, uh, that some of these,
uh, small routers and firewalls
have trouble processing signed answers.
Um, and, uh, there's a report reference there.
Uh, we did some careful experiments
and documented the tests.
Uh, Comcast redid those and had a summit meeting,
um, several months ago,
that impressed the vendors.
And I think that over time, we will get that problem,
uh, fixed up.
All right, so what isn't in good shape?
We need, uh, a rather complete, predictable response model.
What happens if a signature doesn't check?
What happens if you ask for a signature
and it isn't supplied?
Those two questions don't have crisp, clean,
fully, uh, implemented answers, uh, throughout all
of the, um, uh, end user software.
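[Editor's note: the "predictable response model" being asked for can be sketched as a policy function. The three states loosely mirror RFC 4035's validation outcomes; the function and names are hypothetical, not from any particular resolver.]

```python
from enum import Enum

class Validation(Enum):
    SECURE = "secure"      # signature present and verified
    INSECURE = "insecure"  # zone not signed; no signature expected
    BOGUS = "bogus"        # signature expected but missing or invalid

def resolver_response(answer, state: Validation):
    """Hand the application a predictable result for each validation state."""
    if state is Validation.BOGUS:
        return None        # hard-fail: never surface a bogus answer
    return answer          # secure and (legacy) insecure answers pass through

assert resolver_response("192.0.2.10", Validation.SECURE) == "192.0.2.10"
assert resolver_response("192.0.2.10", Validation.BOGUS) is None
```

The open question in the talk is exactly where this policy should live and what end-user software should do when the answer comes back as `None`.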
Um, not even Microsoft is fully on board.
Nobody wants pop-ups saying "a problem's been discovered."
Okay. Not really.
Um, and I don't appreciate you asking
at this particular moment 'cause I can't do anything
with the question, right?
Um...
man: Can they afford it?
Crocker: It--it's--it's just completely the wrong thing.
Um, we need products and services
that make this easy.
On the product side, Secure64 and other companies
are producing, um, uh, appliances.
On the services side, Afilias has, um, brought out
a managed DNSSEC service.
I expect that UltraDNS run by Neustar,
uh, will be along momentarily.
And, um, I think this will move along nicely
so that, uh, it is not going to be necessary
that you have to become an expert
at all the different aspects of generating keys
and managing the key rollovers and--and all of that
if you want to outsource it.
If you want to do it, that's fine.
It's all well documented.
Um...
on the, uh, signing side,
we have to have, uh, um, tools
and, uh, other mechanisms available
to make it easy.
One of the chicken and egg problems that we're dealing with
is not only the, uh, the checking versus signing,
but within the signing community,
you have the root, you have top level domains,
you have enterprises below that.
So a point that I should've mentioned
is that DNSSEC depends upon a chain of signatures
from the bottom up.
So if an enterprise has signed its own zone,
its signature is vouched for
by the top level domain that it's registered under,
and the top level domain signature
is vouched for by the root signature,
and the root key is then made available broadly.
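[Editor's note: the bottom-up chain just described can be sketched as a walk from a zone to the configured trust anchor. The data structure is a hypothetical simplification; real DNSSEC links zones with DS/DNSKEY record pairs.]

```python
# Child zone -> parent that vouches for it. A missing link models a
# "hole in the tree" that breaks the chain of trust.
DELEGATIONS = {
    "example.com.": "com.",
    "com.": ".",
}
TRUST_ANCHOR = "."  # the resolver is configured with only the root key

def chain_is_trusted(zone: str) -> bool:
    """Walk parent links until the trust anchor is reached."""
    while zone != TRUST_ANCHOR:
        parent = DELEGATIONS.get(zone)
        if parent is None:   # hole in the tree: chain broken
            return False
        zone = parent
    return True

assert chain_is_trusted("example.com.")
assert not chain_is_trusted("unsigned.org.")
```

Trust Anchor Repositories, mentioned below, are essentially a workaround for holes in this walk: extra configured anchors that tie together the loose ends.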
Would that it would all be done. That would be great.
We're in a, um, a state where the tree is full of holes.
Or less charitably,
there's only a few places that are filled in on the tree
and it's mostly holes.
How do you operate in this incomplete state
which we're going to be in in some degree
for a very long time?
You have to have a way of tying together
the loose ends.
Trust Anchor Repositories is the term of art.
There are a bunch of things that are not perfectly
worked out about all of that.
Uh, and then you have some big players like Google
and Akamai and others
that play very, very fancy games with DNS.
And when you bring those fancy games together
with the design of DNSSEC,
there are some technical challenges there.
And I have been hoping and, uh, having discussions
on the side that Google, Akamai, and others
will take a, uh, forward posture in this.
And, uh, and the results are encouraging.
But I'll leave it to--to them to make their own announcements.
Applications.
What do you do with all this besides getting to take comfort
in the fact that the domain name
translates to the right address?
Um, well, the interesting thing
is that once you do all of this,
you actually have a platform
that other things can be built on top of.
Should we mention that? Would that scare people?
Or should we mention that and excite people?
An open question.
Uh, meanwhile, off in another part of the space,
the email people are working on, uh, DKIM--
domain keys, uh, for checking email origin.
And those keys are stored in DNS,
but not tied to the DNS, uh, signature process yet.
Although it looks obvious
that they should do so.
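[Editor's note: DKIM illustrates the "platform" point because the verifier fetches the sender's public key from DNS itself, at a well-known name built from the selector in the DKIM-Signature header. The sketch below only builds that query name; the selector and domain are hypothetical examples.]

```python
def dkim_key_name(selector: str, domain: str) -> str:
    """Construct the DNS name where a DKIM public key is published
    (a TXT record at <selector>._domainkey.<domain>)."""
    return f"{selector}._domainkey.{domain}"

name = dkim_key_name("mail2009", "example.com")
print(name)  # mail2009._domainkey.example.com
```

Without DNSSEC, that TXT lookup is itself spoofable, which is why tying DKIM keys into the DNS signature chain is the natural next step the speaker describes.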
And there are plenty of other,
uh, public key infrastructure apps
that could and will exist.
Um, I want to mention a, um, a thing
that is quite, uh, new.
Uh, in the last few days, we put together
a, uh, a symposium looking at the specific technical issues
of what it's going to mean to roll out a signed root.
Um, the, uh, email address there is for submitting an application
or requesting information.
Um, here's some topics that we'll be looking at.
It'll be a small, um, symposium
about maybe the size of this room or smaller
and the results will be made public.
And that's it.
I have in this slide deck if it's distributed
some backup slides on other things.
And, uh, but thank you.
Yeah.
Diffie: I was looking terrified at that 20 out of 36.
[laughter]
We're going to have to throw this guy off the stage,
I mean, somewhat--it's a little like you just read,
um, Magister Ludi.
It suddenly ends in the middle of something and tricks you.
Plus it has a long series of appendices
so you still think you're in the middle
of the book [makes duck-like noise].
Um...
Um, how shall we handle this?
I don't know how it has been handled.
man: [speaking indistinctly]
Diffie: Questions in the middle.
Sure.
man: Yeah, one of the obvious questions--
and it sort of goes to the discussion with Eric.
Uh, one of the next steps after you get to what you think
is the right site is actually,
uh, using certs to verify that you're there
and setting up an encrypted pipe.
Google, along with a bunch of other players,
has done a marvelous job in the last 18, 24 months
at the CA/Browser Forum in coming up with that solution.
And there's no reason in the world
why, uh, basically EV certs shouldn't be rolled out
worldwide to attack the cyber security problem.
In fact, what's terribly embarrassing
is even within the United States government--
I'd love to see Eric do this next time he's in Washington--
is try going to a, uh, U.S. government agency web site
and see what you get in terms of a cert.
See how many are--are using EV certs.
Bad news is most of them are broken.
So the government agencies themselves
are not even using the technology.
So what--one of the obvious next steps
is to capitalize on what Google,
the browser vendors, the CA vendors have done
in this--producing this marvelous specification
and just getting it rolled out worldwide.
Diffie: I'll give you each a microphone.
man: No, no. I-I-I have nothing to add to that.
So I'll just pass it.
man: Eric Schmidt. Eric, I can answer.
Shall I answer?
man: EV certs. [speaking indistinctly]
man: [speaking indistinctly]
man: Hi, I'm Andre Broida.
Yeah, the problem that we have with EV certs
is they have much higher latency.
So it actually takes more time to check EV certs.
man: Anyone else want [indistinct]?
[laughter]
man: [speaking indistinctly]
man: [speaking indistinctly]
DiBona: I know they're dim.
Okay, so, um, Vint asked me to speak, uh, broadly
about browsers.
And I asked him if he would mind specifically
if I sort of told the story of security
behind the Android handset.
Um, and--and there's a fellow here from Chrome
if you want to deep dive into sort of sandbox design
and the rest.
So if you have any questions,
I'm sure he can answer those.
Isn't that great how I can volunteer people
in the audience?
Um, so, yeah, uh, I'm Chris.
Uh, for the purposes of this discussion,
I guess the background that matters
is my time at the state department
as a cryptographer.
And after that, um, working on the Entrust CA,
uh, when Tandem used the Entrust CA
for the Singaporean government.
Um, so as you know, uh, Android is a Linux based device.
And there are a couple of cell phones,
uh, that have been out, actually, for very many years
that use Linux, uh, as the base for the, uh, for the device.
And there are some really interesting things
that you get for free with Linux
that are pretty awesome on a cell phone.
Uh, the Linux security model is the bog-standard
UNIX one: users, and groups, and user IDs,
and the concept of root.
And--and it's something
that we're all very, very comfortable with.
But it's also pretty solid.
It's pretty well thought out.
In a lot of ways, Linux, uh, distinguishes itself
by trying not to innovate too much in security
and thus make things insecure.
Um, and we understand it.
So it's--it's easy enough to lock down.
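[Editor's note: the "bog-standard" check being referred to can be modeled in a few lines: access is decided by comparing the caller's uid/gid against the file's owner, group, and mode bits. This mirrors the kernel's logic in spirit only; it is not how you would check permissions in practice.]

```python
import stat

def may_read(uid: int, gid: int, file_uid: int, file_gid: int, mode: int) -> bool:
    """Simplified UNIX read-permission check: owner, then group, then other."""
    if uid == 0:                       # root bypasses permission checks
        return True
    if uid == file_uid:
        return bool(mode & stat.S_IRUSR)
    if gid == file_gid:
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)

assert may_read(1000, 1000, 1000, 1000, 0o600)      # owner may read
assert not may_read(1001, 1001, 1000, 1000, 0o600)  # others may not
assert may_read(0, 0, 1000, 1000, 0o600)            # root always may
```

Android's sandbox, described later in the talk, leans on exactly this machinery by giving each application its own user ID.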
Um, Linux in general has a problem
in that people are aggressively advancing subsystems.
And so sometimes you'll have security problems.
And--and our SREs, and sysadmins,
and our production network planners
here at Google
are constantly trying to ensure the security
of the Google network.
And--and you see this across the internet as a whole.
And it's both been a pretty good business
for Red Hat to help people keep up to date on these things,
but also it's--it's been a very big challenge,
uh, considering, I think, that 30-plus percent
of the internet is running on top of Linux servers.
Um, but it's not significantly broken.
And that's great.
And I think the problem for some of the people
who have tried to fix security,
uh, with regard to large scale
and fairly complex role-based systems--
where, for instance, Netscape can talk to Netscape
only within its own file space, but not go outside it,
like you would maybe program with the security-enhanced
Linux model, uh, which came out of the NSA--
is that it's very complex to manage
for large installations.
And--and it's had other-- other issues.
And, specifically, if you look at Linux,
uh, in--in--in an environment where the battery
is--is--is actually quite precious,
um, when you start adding these things,
you can actually destroy the battery profile.
For instance, in the new update of Android
that's coming out, Cupcake,
we had to take out the netfilter
and iptables layer of the operating system.
Now, a lot of people say we're doing this
because, uh, we want to break tethering
'cause that makes certain partners happier,
I think.
Uh, but in reality, it was destroying
about 25% of the Wi-Fi network stack's efficiency.
And it was also hurting battery life.
And I wouldn't say significantly,
but it was single percentage points.
So you have all these really interesting trade-offs
that actually affect more than just,
uh, what people think.
So that's pretty interesting.
Now cellular phones are really interesting
as a--as a threat space.
Um, you know, you can-- we've seen, actually, already--
not on--not on our device, thankfully,
but on other devices,
um, uh, tricks to make people call, you know,
976-BABE and things like that
that cost, you know, a penny or a dollar
per minute more than their normal charges,
uh, to actually make money directly.
Uh, and--and that was, uh, fairly popular
about three or four years ago-- those kinds of exploits
against, uh, Symbian devices and others.
Uh, there's also social engineering techniques
where you would SMS somebody with a number
and they go, "Oh, I have to call back my mother."
And they're actually not calling back their mother.
Um, so those were social engineering attacks.
You can't really fix that, uh, uh, in my opinion.
Um, but there are other things that--in a lot of ways,
as phones become more like computers,
they have the same problems.
They have a lot of our contact information.
They can be used as platforms to send spam.
Um, and there are other things
where you can break your phone
and you can break other people's phones.
And that's no good.
So another thing that's very, very unique
to cell phone networks and--and even to a degree
certain cell phone towers
is that, unfortunately, the cell phone towers
are way more brittle than you might expect sometimes.
When we were developing the Android phone,
we--we had a bad habit of sending malformed packets
'cause we were really bringing up the phone.
Uh, we were sending unexpected packets,
and this was actually bringing down
the local cell phone towers' data model,
um, which is kind of shocking if you think about it.
SMS and voice were still very, very reliable.
But you could destroy the network stack on the tower
if you send the wrong kind of packet.
And you say, "well, wait a second,
"how can you create an open operating system
"that can do that?
You know, is that allowed?"
And, uh, it's allowed. It's just tricky, right?
But then a lot of people were worried
about, um, people getting root to their devices
and thus causing those kinds of problems
before the--the manufacturers of the cell phone towers
could put--push out pa-- patches that would keep it
from happening.
And--and another thing that people
were really worried about
and have been worried about historically
when you give users too much access
to the internals of a machine like this
is that they'll manipulate radio power
and thus destroy the local spectra's picture of itself.
And--and that's been pretty interesting too
in how the--the industry has moved around that.
Um, so--so we try to address this
at a lot of levels.
We address this at the hardware, the OS level,
as well as the application layer.
Um, so at the hardware, we have,
uh, a reserved partition,
which is where the operating system exists.
Um, and we sign the binaries, uh, for the phones that ship
as part of a carrier.
So for instance, the T-Mobile phone
has signed binaries, reserved partitions,
and users and developers can't have access to that area.
And I'll say "can't,"
but I'm also very familiar with the root exploits
that are out there.
Um, we also send over-the-air updates.
And we have to ensure that those over-the-air updates
are trustworthy, so that when they're installed,
they actually work:
they're complete, they are what we think they are,
and they do the right kinds of things for the phone.
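[Editor's note: the update property described above can be sketched as an integrity check before install. A plain SHA-256 digest stands in for the real signed-image scheme, whose details the talk does not give; the payload and names are hypothetical.]

```python
import hashlib

def package_digest(payload: bytes) -> str:
    """Vendor side: compute the digest published alongside the update."""
    return hashlib.sha256(payload).hexdigest()

def install_update(payload: bytes, expected_digest: str) -> bool:
    """Device side: refuse to install anything failing the integrity check."""
    return hashlib.sha256(payload).hexdigest() == expected_digest

update = b"system-image-cupcake"
digest = package_digest(update)
assert install_update(update, digest)             # intact update installs
assert not install_update(update + b"X", digest)  # tampered update refused
```

A real scheme signs the digest with a vendor key so the device can also verify *who* produced the update, not just that it arrived intact.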
And then another thing that's really interesting
about modern, uh, smartphones
is if you look at the system
on chips that these are based on,
um, there are really two computers in there.
There's two ARM chips.
Uh, one runs what we consider to be the cell phone,
the operating system,
and the other part runs-- and I mentioned this earlier--
runs the radio, right?
And the radio deals with all the issues
around cell tower hand off,
intercompany, intercountry hand off,
and--and radio power levels.
And it's actually very, very difficult
to access that part of the--of the chip.
And that's very locked down
in--in the case of the G1 by the manufacturer Qualcomm.
And it's actually more complex
than the main operating system.
And they communicate with each other, as I mentioned,
through the AT command set.
But there's a price you pay for that.
One of the things that a lot of people talk about
is, like, "well, what I really want to write
"is a program that listens to the PCM stream
"that goes to your ear
"and if somebody says, 'Let's meet tomorrow,'
it'll automatically put something in your calendar."
And it's, like, "Yeah, we would like that too."
It's actually impossible for us to get
that PCM stream though.
And--and we're working with Qualcomm
and figuring out ways of doing that.
And they're, like, "But we did it on this one phone
we got from China back in 1992."
It's, like, "Yeah, that was in 1992
and things have changed."
So--what's that? Oh, thanks.
Um, s--yeah.
Well, see, this is the funny thing.
They're, like, "You know,
"but before we would just unplug the cable
"and then we'd plug into the other exchange
and we could listen."
And it's, like, "well, we don't have party lines any more."
So, um, so, yeah.
And--and it's funny
'cause there is a phone out there that actually ships,
uh, using the SELinux security model.
And it just, like, boggles my mind
that somebody would want to cram that into this device.
Not this device,
but, uh, a similar one that's based on this.
It's funny too--um, I know we're not going to do
a collection of anecdotes here.
But, uh, Linux phones have been shipping in China
for, I think, over six years now.
And--and it's been, uh, kind of stagnant
since we brought out Android.
But, you know, um, so, uh, and another thing
we wanted to do is we wanted to create an environment
where you can run applications
that are written by third parties
and they could talk to each other
in sane ways and not actually break the phone.
And so we had to make a decision to give the users enough rope
to hang themselves with.
So if you have a G1-- and we can actually talk
about providing one for everyone.
I could bring over a pallet or whatever.
Um, you can install software and it'll say,
"Do you want it to be able to access
"your contact list, the network,
these different parts of the phone?"
Then the user will say, "Well, sure."
And then it installs.
But that actually allows a user to kind of screw up
and to break their own phone
if they download the wrong package
that does the wrong thing.
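[Editor's note: the install-time consent model just described can be sketched as below: the app declares what it wants, the user approves the whole set up front, and anything undeclared is refused at run time. Class and permission names are illustrative, not the real Android manifest constants.]

```python
class App:
    def __init__(self, name: str, requested: set):
        self.name = name
        self.requested = set(requested)  # declared at packaging time
        self.granted = set()

    def install(self, user_approves: bool) -> bool:
        """All-or-nothing grant at install time, as the talk describes."""
        if user_approves:
            self.granted = set(self.requested)
        return user_approves

    def access(self, permission: str) -> bool:
        """Run-time check: undeclared capabilities are never available."""
        return permission in self.granted

app = App("DialerPlus", {"READ_CONTACTS", "NETWORK"})
app.install(user_approves=True)
assert app.access("READ_CONTACTS")
assert not app.access("SEND_SMS")  # never declared, never granted
```

The "enough rope" trade-off is visible here: the system enforces the declared set, but it is the user who decides whether the set is reasonable.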
Um, so--so we tried to think of ways
of minimizing that danger.
So the virtual machine that exists on the phone,
uh, for running the applications,
uh, has a very specific sort of model.
Uh, whenever you start an application,
it actually starts with a new, unique, uh, user ID.
So it--it helps to keep these things
sort of in their own little holes.
And then even if they call, uh, native methods,
uh, and they have native code, uh, if that's being run
within that-- that user ID context,
and then they talk to each other through known interfaces
through the virtual machine
where we can ostensibly protect the user.
And that's been pretty successful to date.
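[Editor's note: the per-application sandboxing just described can be sketched as a UID allocator: each installed package gets its own unique Linux user ID, so the kernel's ordinary permission checks keep apps in "their own little holes." The UID range and allocator here are illustrative, not Android's real ones.]

```python
import itertools

_next_uid = itertools.count(10001)  # hypothetical per-app UID range
_app_uids = {}

def install_app(package: str) -> int:
    """Assign every newly installed package its own unique UID;
    reinstalling keeps the same identity."""
    if package not in _app_uids:
        _app_uids[package] = next(_next_uid)
    return _app_uids[package]

uid_a = install_app("com.example.mail")
uid_b = install_app("com.example.game")
assert uid_a != uid_b                            # apps never share a UID
assert install_app("com.example.mail") == uid_a  # stable across reinstall
```

Because native code invoked by an app also runs under that app's UID, the same kernel-level isolation applies to it, which is the point made above about native methods.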
We've had about, you know, 15,000--15,000?--
maybe a bit more, uh, applications written
and submitted and/or released in the wild.
You can install applications from the web.
You can install applications pretty much from anywhere
if you click the "unknown sources" button.
And that's even true on the carrier-based appliance.
So--and we're actually pretty proud of the model.
It's--it's working pretty well so far.
Um, and then you've got the--the app level
and the browser level.
Now, browsers on phones are really, really interesting
because, um, when you-- when you think
of a browser in a phone--
it actually identifies itself as Chrome Mobile
and they share a number of components,
but it's actually a different team,
although we're merging them now--
um, the browsers can't really take over the phone.
They can cause problems inside the rendering engine
that take too long to render.
It can reduce the efficiency of the phone,
maybe burn up a bit of battery.
But the way that they're created now
is they really are just applications
on top of the phones.
So even in--in some of the cases where we had a WebKit bug,
uh, where a person sort of jumped out of the browser,
it would really just crash your browser.
And that's not the end of the world.
Um, so by sort of sandboxing these applications
and keeping them out of the rest of the phone,
it works pretty well.
And yet, we want to give them access
to the services of the phone so they can actually be useful.
So that's sort of something we--we have to deal with
an awful lot.
One of the things we're doing as a company,
we've released a package called Native Client.
And what Native Client allows you to do
is basically to ship x86 code as part of a web page.
Um, a lot of you are thinking right now about ActiveX
and the disaster that that was from a security perspective,
and you're right.
It's something we're obviously very sensitive to
and we have a contest to see if we've done it right.
Um, they're doing, basically,
subsetting of the x86 instruction set.
So it's all pretty interesting,
uh, despite the fact it totally breaks the idea
of HTML, but, anyways...
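The "subsetting" DiBona mentions can be sketched as a static validator: code is accepted only if every instruction is on a whitelist and control flow can only land on aligned boundaries. Real Native Client validates actual x86 bytes with aligned instruction bundles; this symbolic version, with invented opcodes and a made-up bundle size check, only illustrates the idea.

```python
# Toy static validator in the spirit of Native Client: accept a program
# only if every instruction is whitelisted and every jump targets an
# aligned "bundle" boundary. Purely symbolic; real NaCl checks machine code.

BUNDLE = 32
ALLOWED = {"mov", "add", "sub", "jmp"}  # no direct syscall instructions

def validate(program):
    """program: list of (address, opcode, operand) tuples."""
    for addr, op, operand in program:
        if op not in ALLOWED:
            return False, f"forbidden opcode {op!r} at {addr:#x}"
        if op == "jmp" and operand % BUNDLE != 0:
            return False, f"unaligned jump target {operand:#x} at {addr:#x}"
    return True, "ok"

good = [(0x00, "mov", 1), (0x04, "add", 2), (0x08, "jmp", 0x40)]
bad  = [(0x00, "int", 0x80)]             # direct syscall: rejected
print(validate(good))
print(validate(bad))
```

Because validation happens before the code ever runs, the untrusted x86 payload never gets a chance to reach instructions outside the sandboxed subset.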
Um, the funny thing is, though,
for all of our good intentions,
we're still screwing up.
Um, this is natural.
And--and so don't take it as I'm saying
"We're all a bunch of screw ups."
Um, so, like, for instance, when we first shipped it,
one of the things we had left over from development
was that while the phone was booting,
you could--you could punch in a command
and hit return and it would execute that command as root.
Um, which isn't so secure on a commercial device
when you ship millions of them.
So after we had shipped about 150,000,
somebody basically found out if they typed in "reboot"
while it was booting, it would reboot.
Okay. How about that? So they typed in rm -rf *...
[laughter]
and it bricked their phone.
They're, like, "I bricked my phone. Whoo-hoo!"
[laughter]
You know, yeah, okay.
But, you know, but people who are familiar with Linux
are, like, "Oh, that's no good."
And so it became like a "Ha ha, look at Google.
Ha ha."
And we're, like, "ugh," you know.
So--so w--we've made those mistakes.
Uh, we've made other mistakes.
We had a Telnet daemon running as user 0--as root.
Hmm. Uh, that's no good.
And then, um, you know, so we--we--we fixed those
and pushed out those fixes ver--fairly quickly
in my opinion.
So, as it always is in security,
a lot of it is how humans react when they screw up,
because we always, always screw up.
Um, and then you have the issues
that--that are brought about by people who want
to have market uniqueness or want to modify
how the phone works
so that it works within their business model.
And that's caused interesting questions
around security.
And, well, you know, "We don't know
that we want them to be able to download ringtones."
And for us, ringtones are really a file.
You know?
Do you really want to say that some files
can exist and some can't?
How do you approach that?
And--and the reality is--
and we all know as cryptologists--
that these are kind of artificial distinctions.
And thus our offenses are preventable.
So, yeah, the other thing that I like to point out
is for all the worry about radios being exploited
and all the rest,
for the--for the devices where the radios
were actually available--
for instance, in the, uh, Linksys WRT54G routers--
um, the number of people who actually changed the power
was so slight as to be unnoticeable
and it never became a problem beyond the theoretical.
So--and we haven't seen that yet
with the phone.
We've shipped a little over a million phones.
So that's the story and I'm happy
to answer any questions about that.
If--was that the kind of thing you were looking for
from--from me on that?
So, yeah, okay. Cool.
man: Let's find somebody who doesn't think
what you said was accurate.
DiBona: Yes! Yes.
man: [speaking indistinctly]
DiBona: No. No. Good. Good.
You know, you're--you're our standby.
We need a microphone. You should just keep it.
man: [speaking indistinctly]
DiBona: Yeah. Really. [woman laughs]
man: I'm playing a Vint--Vint-- the Vint role. [laughs]
[DiBona laughs] Uh...
Diffie: Go back to [speaking indistinctly].
man: It, um, and one of the obvious ones
is, uh, are you doing anything or planning to do anything
with respect to SIM chips?
DiBona: What do you mean with respect to SIM chips?
man: Well, interaction with leveraging them.
DiBona: Yeah, I mean, you can obviously pop them out
and pop them back in.
So--so this is the funny thing.
So when a carrier ships a phone,
um, they have a concept
of which SIMs they want to lock you into.
So we ship a regular dev phone
where you can pick whatever you want.
You put in the APN-- uh, uh, the network information
that you want and that's fine.
Um, one of the battles we chose to fight with Android
was to have an open source cell phone operating system.
We knew that--
we know that actually the best cell phone
for the world in our opinion is one that allows
for a free and fair internet, right?
A free and fair cell phone marketplace.
We also know that if you look at all of the marketplaces,
especially in the United States,
that's going to be slow going.
So we decided let's--let's start with an open marketplace
if we can get there.
Certainly an open source phone operating system.
And sort of raise the bar
of what people can at least expect
from a quality perspective on a cell phone.
And then let's fight those battles
as we come to them over the years.
It's certainly not going to happen overnight.
I mean, if--if you look at the-- like, T-Mobile, for instance,
they--they have a policy-- a lot of people don't know this.
But if you've had a T-Mobile phone for six months
and a T-Mobile account with a two-year contract,
for instance,
you can call them up and they'll give you
the unlock code for the SIM.
I mean, they've got you.
They alr--they already know they've got you.
You've been there for nine months.
You're past the part where you can return it
in California or whatever.
They're happy to give you the unlock code.
So there's a lot of reasons why T-Mobile
is sort of a good first partner with us.
So...
But, yeah.
I mean, we're not going to win every battle
right out of the gate.
And so to a degree, we've defeated ourselves.
man: [speaking indistinctly]
DiBona: Hi. man: Hi.
So, um, I know you just said
you were trying to build a cell phone operating system,
not a PDA operating system.
DiBona: [laughs] But it is, right?
man: But let me ask-- let me ask the sort of,
you know, next generation PDA question.
DiBona: Sure.
man: There are a number of people who are interested
in making their PDAs smarter
and actually having their PDAs--
I mean, the cognitive radio's only one example
of cases where you would imagine there's an app on the phone
generating new code that it wishes to load
into the, you know, execution stream
of at least some of the communications behavior
of the phone or some of the applications behavior.
It's customizing to its user.
It's learning. It's adapting.
How do you envision that in this world
in which you're trying to s-- you know, you got an app,
which you've secured.
But it's generating new code, which...
DiBona: Yeah. man: Right?
Or do we just say "hopeless"?
Or, I mean, I've heard people talking about...
DiBona: No, no. I-I--actually, I think it's quite hopeful.
You know, I mean, you know, so for instance,
there are certain apps right now
that have come out where you're dialing a number,
but what it's really doing is calling your Skype account
and having it dial your number.
Um, and, you know, so we have apps like that right now
which are, in my mind, fairly innovative choices.
And there's a SIP client now where if you dial a number,
you're really dialing a SIP, um, uh, server.
And--and that was something we knew could work
and we made a--we made certain accommodations.
But they had to write some native code
and it sort of worked their way through the system
to make it work.
Um, the thing is, we try to make these things available.
And we say, "Here's the phone.
Here's access to the different parts."
Um, and we know that people are going to do things
that we'll consider fairly specious.
Um, and our only real leverage there
is we'll say, "Well, listen,
"we can't put that in the marketplace.
"Uh, we can't, you know, ship that out officially.
"But if they want to install it--
if people want to install this, they're going to do it."
So how much are we really going to fight that?
How much is that really a problem?
So... man: [speaking indistinctly]
DiBona: Well, so, I think we enable it
an awful lot, um, by basically having
an environment where people can create
general purpose programs.
They can exist in the background.
They're not super sandboxed the way that Apple does it.
Um, and so people will certainly abuse that for evil.
But who cares?
You know, I mean, we feel that it's better
to--for the sophisticated users, right, who are going to click
that "unknown sources" box,
that's a significant event, if you think about it.
I mean, how many people of the million plus phones
we ship actually will want to hit "unknown sources"
and allow that to happen, right?
These two guys over here,
but they're obviously UNIX people.
So...[laughs] But, yeah.
So--so we've enabled it to a great degree.
I mean, you can do an awful lot with native calls.
So, I mean, it's Linux, right, so--at that point.
So it's pretty-- pretty cool.
But we have another speaker or two.
I don't want to, you know, step on them.
Diffie: I think that actually is a good idea.
Let's, uh, get on Eric Grosse.
I'm just going to stay here since I was,
you know, told I was in the way of the video
and stole Steve's face recognition.
Grosse: Okay. Vint asked me to speak for five minutes
or maybe a little more about what kind of attacks
we see at Google.
You know, network and application layer attacks.
I guess that's the context.
Of course, that's a bit of a challenge, right?
We do see a lot of attacks all day long.
So covering everything in five minutes is a challenge.
I wish like the earlier session we had more numbers
in this panel.
But security and attacks in particular
are a really murky topic.
Uh, if you go to the RSA proceedings,
you can see our team, uh, Steve Weiss
and--and John Corflan gave our best view of metrics
on security attacks that are seen
both on the outside and--and within Google.
But that's pretty hard to calibrate
against my anecdotal gut feeling
for what the bad guys are trying to do to us.
So let me just go over my perception
of what the main attacks are that--that we face.
I would categorize those into two kinds.
There are the attacks
that are in some sense not really our fault.
They may be-- they're our problem
because our--if our users are attacked, we care.
But they're the classic kinds of attacks
that happen all over the place.
And they're somewhat outside of our control.
That's a kind of thing that we still care about.
But there's another kind of attack
that's squarely ours to control.
Some place where we've made a bug.
And those are the ones, of course,
we have to spend more time on.
It's interesting though that the actual data breaches,
you know, the actual loss of user information,
which is my pa-- you know, my obsession--
I'm trying to make sure that never ever happens--
tends to happen more in the first category
of attacks than in the second category.
So we don't actually put our energies mostly
where the main problem is.
And it's sort of an interesting question: "Why is that?"
I-I think that's pretty well established in risk analysis
the whole world over.
There are-- people perceive the risk
as much, uh, more disturbing when it's outside
of their control.
So...
So in the first category of attacks,
you know, what are-- what are the things
that, uh, really do wind up losing people's data?
The--the prime one by far that we observe is malware.
It's just rampant out there.
It's really a problem.
It's a problem for us even as a corporation.
But it's especially a problem for our users.
When we look at why someone's mail
got leaked to the internet or something,
in every case we've investigated
through our logs,
it's turned out that it's because a bad guy
was able to somehow steal their password
and just logged in from some distant address
that that individual had never been at.
And there was no password guessing involved
or anything of that sort.
They just had used a password to log in.
So that's the number one problem.
If you're looking for, you know, what's the main security issue
to work on, that's--that's absolutely it.
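The signal Grosse describes, a stolen password used from "some distant address that that individual had never been at," is the core of login-anomaly detection. Here is a toy sketch of that one signal: flag a login when it comes from a network the account has never used. The /24 grouping and class names are invented for illustration; real systems weigh many more features.

```python
# Toy login-anomaly detector: a login is suspicious when it comes from
# a network prefix the account has never logged in from before.

from collections import defaultdict

class LoginHistory:
    def __init__(self):
        self.seen = defaultdict(set)   # user -> set of /24 prefixes

    @staticmethod
    def prefix(ip):
        return ip.rsplit(".", 1)[0]    # crude /24 grouping

    def record(self, user, ip):
        self.seen[user].add(self.prefix(ip))

    def is_suspicious(self, user, ip):
        history = self.seen[user]
        # New accounts have no baseline, so nothing is flagged yet.
        return bool(history) and self.prefix(ip) not in history

h = LoginHistory()
for ip in ["203.0.113.5", "203.0.113.9", "198.51.100.2"]:
    h.record("alice", ip)
print(h.is_suspicious("alice", "203.0.113.77"))  # False: known network
print(h.is_suspicious("alice", "192.0.2.1"))     # True: never seen before
```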
We try to address that in various ways.
We have a fair chunk of my team
that, uh, that looks at anti-malware
and anti-phishing approaches.
As we crawl the web,
we use various techniques
to try to guess which content we've downloaded
might be malware, and then run it
in some, um, virtual environment to see what it does
when you execute it.
And--and if it clearly is installing software
and changing registry entries and clearly is bad,
then we'll flag those as malware.
Right, so if you do a search, actually,
a fair fraction of the time,
you'll get back a result
that says you don't really want to go there because it'll down--
you know, it'll install bad things on your machine.
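The crawl-and-detonate pipeline described above, run the sample, watch what it does, flag it if the behavior is clearly bad, reduces to matching observed behaviors against rules. This toy sketch supplies the behavior log directly; the behavior names and rule set are invented, where a real system would collect them from an instrumented virtual machine.

```python
# Toy behavioral classifier: a sample is flagged as malware when its
# observed behaviors match known-bad rules. Behavior logs here are
# supplied directly; real systems record them from a sandboxed run.

SUSPICIOUS = {"write_registry_run_key", "drop_executable", "disable_av"}

def classify(behavior_log):
    hits = SUSPICIOUS.intersection(behavior_log)
    return ("malware", sorted(hits)) if hits else ("clean", [])

benign  = ["open_window", "fetch_image"]
dropper = ["fetch_image", "drop_executable", "write_registry_run_key"]
print(classify(benign))   # ('clean', [])
print(classify(dropper))
```

The false-positive risk Grosse mentions next (the half-hour when "everything on the internet was malware") is exactly why real pipelines layer several independent checks before denouncing a site.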
So--so that's sort of our-- our first issue.
Uh, this malware is stealing credentials off your disk.
It's modifying files.
It's executing code,
which may be running, of course,
inside an intranet and with your permission.
So it's really nasty.
And we do what we can to--to protect you from it.
But it's fundamentally your machine, not ours,
so there are limits to what we can do.
We can't even denounce everything that we find
because it would be terrible if we ever were,
uh, falsely denouncing somebody.
So we try to err on the side of being cautious.
We've made mistakes.
In January--I-I hear a chuckle over here.
In January, you may recall a period of about half an hour
when we decided that everything on the internet was malware.
No. [laughs]
Maybe that's not far wrong.
But, uh, but that--that was a little excessive.
So, uh, you know, so we learned some good lessons
about how many layers of checking one needs.
You know, fortunately,
we had multiple layers of checking.
And some of them caught it.
But one--in one place, it had slipped through.
So...
man: That was so efficient.
Then we can mark everything on the internet malware
[speaking indistinctly]
Grosse: Yes. Yes. [laughs]
Good.
Uh, let's--
man: But we knew in that moment
things that were actually bad were marked bad.
[laughter]
Grosse: The only time it was true.
Diffie: Just like the broken clock
is right twice a day.
[speaking indistinctly]
Grosse: The--the other thing--
and this won't surprise you either--
is spam, right?
There's email spam coming in.
You know, I've-- I switched to--to Gmail
before I came to Google because it just was doing
good spam filtering for me.
And I was having a hard time even hiring that service
from anyone else.
So filtering spam on the way in is--is a tough problem.
But when you've got lots of data to work from,
it--it gets a little easier.
We also try to work hard to make sure we don't get used
to spam others, right?
We got a lot of mail infrastructure.
So--so there's that. There's other kinds of spam.
Blogs and people trying to play games
with web page ranking and so forth.
So those are all classic kinds of problems.
At the network level--
so when we talk about DNSSEC,
that--now we get
into sort of more technical attacks, right?
And we absolutely are seeing those.
This is a very live problem.
As recently as a couple weekends ago,
you know, we saw that .PR was up there
as having a signed re--
but actually, .PR had gotten, uh, compromised.
So if you tried to go to google.com.pr,
you got sent to a malicious site off in Germany somewhere
that, uh, was definitely not Google.
You know, they were pretending to have defaced the Google page.
No. Actually, they compromised .PR.
So, yes, DNS is a problem.
We see poisoning attacks around the world.
Uh, if you're sending mail to somebody at Google,
you have to think about, "Well, did somebody
"along the chain of that SMTP store-and-forward,
uh, compromise a DNS?"
Then the mail will get routed somewhere else, right?
So DNS is very important.
We would love to see security there.
And DNSSEC has a way to distribute signed records
with the traffic, so that's good.
We see occasionally BGP attacks
that--that e-- that effect us.
You remember an incident with Pakistan.
What's the fourth?
We see network layer attacks
even at a place like this.
There will be a conference and somebody will hijack DHCP.
And you--you know, you pop open a browser session
and try to go someplace.
And if you click through a certificate warning,
bingo, they've got you, right?
So definitely be worried about the network.
It's a hostile environment.
And then finally, there's, uh, at the network layer,
there's denial of service attacks
that we have constantly here.
Uh, I think that our-- we are reasonably effective
at fending those off.
But it's nev--it's--
it's a never-ending, uh, game against these.
man: What did you call it? Grosse: Denial of service.
I'm sorry. man: Dental service.
Grosse: [laughs] D--denial of service.
Distributed denial of service. Like pulling teeth.
Yeah.
S--so our--the main issue really with defense there
is minimizing the collateral damage.
And minimizing the chance that someone can take Google's
vast array of servers, and network capacity,
and so forth,
and turn it against either ourselves
or, even worse, against other people.
So that's--there's a lot to be done on DoS.
And--and that's another, I would say like malware,
is another very fertile area for research.
So love to talk with people about that.
Uh, surprisingly, insider attacks
are not high on this list.
We tend not to see that very much.
And we're looking constantly for that around here.
I don't know where the "80% of all attacks
are insider attacks" comes from.
That certainly is not consistent with what I see.
But malware looks a lot like an insider attack.
So from a perfs-- from the point of view
of detection, it may as well be the same thing.
So I-I said those were-- there were two categories.
So let's get to the second category.
The things where our team spends all its time.
There are a lot of things one can do wrong
when you're writing web applications, right?
There's--there's, of course,
all--there's all the classic things
we have to worry about just like any enterprise.
Making sure the perimeter's secure.
Making sure that we evolve towards not just network
segmentation defense,
but defenses that are much more focused
on--on, uh, access controls right close to the data.
As we were, you know, hearing this morning,
I-I'm completely in tune with that.
But from a--from a company like Google's perspective,
uh, most of our energy goes into trying to make sure
that the engineers writing software
and innovating at a high rate have the right tools,
and education, and backup from security reviews,
and various external audits, and responses to outside reports
against things like cross-site scripting.
So yesterday, I think, it was The New York Times,
there was an article
talking about, you know, "what is cross-site scripting?"
and giving an example in line
which turned into a cross-site scripting attack.
So the--it's ironic.
It's--the stuff is very, very tricky.
Essentially, you have to make sure
that you--every time you're going to put out a page
that you're sending back to the user's browser,
you know the context that the bytes are in,
so you know whether it's JavaScript or what,
and you do the proper kind of escaping for that context
and never ever screw up.
Keep track of what's coming from the user
and make sure it's escaped.
So we're getting better as time goes by
at using template systems with auto-escaping built in.
But that's still a major, uh, source of attacks.
We don't actually see any cases
where actual user data's been lost that way,
but it would be very easy for that to happen.
So we focus a lot of attention on that.
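The context-dependence Grosse describes is the whole trick: the same untrusted string needs different escaping depending on whether it lands in HTML text or inside a JavaScript string literal. A minimal sketch using only the standard library; the function names are made up, and auto-escaping templates exist precisely to do this bookkeeping so humans don't have to.

```python
# Context-aware escaping: one payload, two contexts, two escapers.

import html
import json

def escape_for_html(untrusted):
    # Neutralize <, >, &, and quotes for an HTML text/attribute context.
    return html.escape(untrusted, quote=True)

def escape_for_js_string(untrusted):
    # JSON yields a valid JS string literal; additionally escape "<"
    # so "</script>" cannot terminate the enclosing script block.
    return json.dumps(untrusted).replace("<", "\\u003c")

payload = "<script>alert(1)</script>"
print(escape_for_html(payload))
print("var q = " + escape_for_js_string(payload) + ";")
```

Using the HTML escaper in the JavaScript context, or vice versa, is exactly the "never ever screw up" failure mode: each escaper is only safe in the context it was written for.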
Uh, cross-site request forgeries,
another good example, right,
where the bad guys can--can send something to you
that causes your browser to make a request
back to the server with all of your credentials,
cookies, or whatever-- whatever the mechanism is
by which you're authorized,
and trick your browser into making some change
on the server.
Uh, in our case,
changing--setting up a special custom filter
on your account, right?
That was a kind of thing that one could do
for a short period of time.
We found out about it and fixed that instantly,
of course.
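The standard defense against the forgery just described is to tie every state-changing request to a secret token bound to the session, something a cross-site attacker can trigger requests with but cannot read or guess. A minimal sketch; the key handling and names are simplified for illustration.

```python
# Sketch of a CSRF token: an HMAC over the session ID, embedded in our
# own forms and verified on every state-changing request.

import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)   # per-deployment secret

def issue_token(session_id):
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id, token):
    # Constant-time comparison avoids leaking the token byte by byte.
    return hmac.compare_digest(issue_token(session_id), token)

sid = "session-abc123"
tok = issue_token(sid)
print(check_token(sid, tok))             # True: token came from our form
print(check_token(sid, "forged-token"))  # False: cross-site request fails
```

The browser happily attaches cookies to the attacker-triggered request, but the attacker's page cannot read the token out of our form, so the forged request fails the check.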
Uh, we care about these things a lot.
So those are the nature of web application vulnerabilities
that we're constantly fighting.
It feels like there's some progress.
Ten years ago, we would've been talking
about buffer overflow attacks
or format string vulnerabilities.
And those don't come up so often any more.
So we're making some progress,
but it doesn't feel like we're getting close
to, uh, zero.
It just feels like we're-- we're pushing the ball along
and we got a different set of things to worry about.
Um, other things that you have to worry about
if you're a company like Google
is understanding so much about the browser
that you know what subtle things a browser may do
to try to interpret your content, right?
So the browsers-- the web grew up in this world
where you would be very relaxed about what kind of errors
could be in the input.
The browser would just do its best job
to render it anyway no matter,
you know, what kind of errors were there.
As security people, we're trying to reverse engineer
that process.
We--we publish a browser security handbook
that details, for every one of the major browsers,
every one of the different weird features--
you know, character sets, and content sniffing,
and how they handle cookies--
how all of this works, so that our
engineers can look through and come up with a scheme
that will work on all these browsers
without holes.
That's very challenging.
There's another comparable set--
it's--it's an easier task, but there's a comparable set
of things about proxies.
If you don't get exactly the right cache header controls,
some proxy out in the middle of nowhere
will save up some content
that was intended for just a specific user.
So that's tricky.
So you have to know your-- your application environment.
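The proxy hazard just described comes down to cache headers: per-user content must carry directives forbidding shared caches from storing it, or a proxy in the middle may serve one user's page to another. A minimal sketch; the function and the exact policy split are illustrative.

```python
# Sketch of cache-header hygiene: public content is cacheable, per-user
# content must never be stored by a shared proxy.

def cache_headers(content_is_private):
    if content_is_private:
        # Per-user data: no shared cache may keep a copy.
        return {"Cache-Control": "private, no-store"}
    # Public, static content: let proxies and browsers cache it.
    return {"Cache-Control": "public, max-age=3600"}

print(cache_headers(True))   # e.g. a mail inbox page
print(cache_headers(False))  # e.g. a public video thumbnail
```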
And I guess, finally, I'd say, uh, there--
another part of network technology
that's really important--
and it comes to the EV certs-- is SSL.
I just don't understand why SSL is not more widely used.
Sure, we should worry about the certs,
but first, before everything else,
at least switch to SSL.
So, you know, in Gmail, you--if you care about security,
you should reach into those settings
and say "only use SSL."
Because there are people out there
taking these hijacked networks
and trying to trick your browser
into using HTTP even though you started out with SSL.
They'll try to subtly come in on the side.
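The downgrade attack Grosse describes is countered by refusing plaintext entirely: redirect any HTTP request to HTTPS, and send a Strict-Transport-Security header (the later HSTS standard, RFC 6797) so the browser itself refuses to be tricked back onto HTTP. The request/response dictionaries here are simplified stand-ins for a real web framework.

```python
# Sketch of an SSL-only posture: plaintext requests get a permanent
# redirect; secure responses carry an HSTS header so the browser pins
# itself to HTTPS for future visits.

def handle(request):
    if request["scheme"] != "https":
        return {
            "status": 301,
            "headers": {"Location": "https://" + request["host"] + request["path"]},
        }
    return {
        "status": 200,
        "headers": {"Strict-Transport-Security": "max-age=31536000; includeSubDomains"},
    }

print(handle({"scheme": "http", "host": "mail.example.com", "path": "/inbox"}))
print(handle({"scheme": "https", "host": "mail.example.com", "path": "/inbox"})["status"])
```

Once the browser has seen the HSTS header, a hostile network can no longer "subtly come in on the side" with an HTTP link, because the browser upgrades it before any packet leaves the machine.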
So, yes, we--we want to--
so we've ramped up quite a lot of capacity
here at Google.
And the--the next step on that front
is really to work more closely with the browsers.
And of--and of course,
the fact we have a browser helps.
But it doesn't give us sort of monopoly control
or something.
This has to be a collaborative effort
with--with all the people out there
to find ways to reduce the number of round trips
when we're starting up SSL.
That's really essential
because latency does really matter to people.
That's probably the main thing today
that would prevent us just from saying,
"Let's just use SSL all the time for everything," right?
It's--it--it does actually have a--an impact on people.
So...
Okay, uh, I-I just about stop there.
I-I--you--you have a very nice paper out there,
uh, by Tom Leighton on how the network's evolving
to push more content out close to the edge.
And we completely agree with that.
The one security proviso I have is that we have pretty strong
physical security around our data centers.
We're pretty comfortable about that.
We also are going down the same path
of--of putting caches out close to the user,
'cause that's--that's, again, the way
you're going to get speed.
And that's perfectly sensible and easy for public content.
Let's say YouTube videos. Perfect fit.
For your Gmail content, not so sure.
Providing the physical security out there--
the sort of guarantees
that no one's going to get to your data--
I'm not comfortable with.
So if the world's switching all to SSL,
we have a bit of a problem
about how we're going to do caching
up close to the edge.
So I'll leave you with that. Thanks.
Diffie: Well, [speaking indistinct].
Grosse: [laughs]
man: O--one obvious one is, um, as you're seeing
these incidents, are you capturing them,
um, in IODEF or the MITRE CVE suite
and are you reporting vulnerabilities to anybody
or is that basically an in-house, uh, exercise?
Grosse: So, um, that's an interesting question.
Traditionally, Google was focused completely
on a server-based thing
where the attitude was "as soon as there's a problem,
"we'll just patch it on our server
and no user even knows or needs to care about that."
So it wasn't as essential to have all of those things
reported with a CVE number
so that enterprise security officers could know
whether they checked off that--that vulnerability patch.
As we get more client software out there--Chrome, say--
it does make more sense to report those things.
Even though with Chrome, we're taking a very aggressive
"we patch everybody--
we're not giving you choices," right?
Which is maybe a bit controversial.
But we are convinced this is the only way forward
in security.
I mean, the--the time scale for the bad guys
to look at a patch, reverse engineer it,
start an exploit,
and--and actually put users at risk
means you--you--we can't afford this "let the enterprise
cogitate on it for a month."
That--that just doesn't work any more.
So...
Uh...
man: I just wanted to ask about, um, uh, DDoS.
And, I mean, you mentioned it.
But can you just say a little bit more
about the--the frequency, the scale?
Um, I mean, it's--it's partly--
it's a--it's a problem that is a consequence
of, you know, the architecture or the infrastructure.
Grosse: Yes.
man: And, um, and, uh, so it would be really useful
to--to understand a little bit more
about, you know, how frequently, how big,
um, what you would like to see happen
to--to mitigate this fact.
Grosse: All right.
So we get all kinds of attacks, right?
There's both UDP and TCP.
Enough are TCP that lots of the classic problems
about trying to figure out who it is,
you know, which IP address it's really coming from
is not such a problem.
But you--you can't say anything absolute.
Yes, there are still UDP attacks that we care about.
And--and so,
yeah, we're interested in that as well.
But what--what I would really like to see is the following.
We currently do all our DoS defense
within our own environment.
And that, fundamentally, is the wrong approach,
I believe.
The right approach, I believe,
is to detect the attack here
and to push the filters as close to the source as possible.
So one of--one of the things that I was doing at Bell Labs
before I came here was to build a little box
that could go into the carrier's network,
ideally close to those DSL lines,
and a protocol that allowed--
even during the network congestion times
that occur when there's a DoS attack--
allowed some upstream control
from an--an enterprise like Google
to the carrier's filtering box saying
"If there's any packet coming from that IP address to me,
"throw it away because I promise you
"I will throw it away.
"You may as well save your network bandwidth
over there."
And there are other, you know, other approaches
that have suggested pushing that upstream
through BGP and so forth.
But, uh, you know, it's a bit challenging
to make that work in--in network congestion times.
Uh, that's what I'd really like to see.
The trouble is it needs cooperation
between a lot of different players
and so it's hard to bring that to market.
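The upstream-filtering idea Grosse sketches, detect the attack at the victim, push a drop rule toward the source, needs a message the carrier can trust even during congestion. Here is a toy version of such a signed filter request; the message format, field names, and shared-key scheme are entirely invented for illustration, not any deployed protocol.

```python
# Toy signed filter-request: the victim asks the carrier to drop a
# source near its origin; the carrier verifies the request before
# installing the filter. Format and keys are invented for illustration.

import hashlib
import hmac
import json

SHARED_KEY = b"victim-carrier-provisioned-key"   # pre-provisioned secret

def make_filter_request(victim, attacker_ip, ttl_seconds):
    body = json.dumps(
        {"victim": victim, "drop_src": attacker_ip, "ttl": ttl_seconds},
        sort_keys=True,
    )
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def carrier_accepts(msg):
    expected = hmac.new(SHARED_KEY, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

req = make_filter_request("gateway.example.net", "198.51.100.7", ttl_seconds=600)
print(carrier_accepts(req))        # True: install the filter upstream
req["body"] = req["body"].replace("198.51.100.7", "203.0.113.1")
print(carrier_accepts(req))        # False: tampered request is refused
```

The TTL matters: filters expire on their own, so a forged or stale request can't silence a source forever, and the victim must keep renewing the drop only while the attack lasts.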
Diffie: Um, we have-- we've only got 25 more minutes
and I--we've heard from four geeks.
And I think it's time we should hear
from Eric Schmidt and get a, uh...
Grosse: Howard. Howard. Howard.
Diffie: Howard. Howard Schmidt. Sorry about that.
[laughter]
Grosse: And--and I'll be around for--so hold your--
you know, come catch me during a break
if you have more questions.
man: Well, I-I just want to add three letters
and two numbers to the answer that he just gave.
That's BCP38. Please.
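BCP 38 (RFC 2827) is ingress filtering: an access network drops any outbound packet whose source address is not from its own assigned prefixes, which stops customers from emitting spoofed DoS traffic in the first place. The check fits in one function; the prefixes below are documentation examples.

```python
# BCP 38 in one function: permit an outbound packet only if its source
# address belongs to one of the access network's own prefixes.

import ipaddress

CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # example prefix

def ingress_permits(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(ingress_permits("203.0.113.50"))  # True: legitimate customer source
print(ingress_permits("8.8.8.8"))       # False: spoofed source, dropped
```

If every access network did this, the source addresses of attack traffic would be honest, which is exactly what makes the upstream-filtering approach workable.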
man: [sneezes] 'Scuse me.
Schmidt: Thank you for the promotion.
I appreciate it.
Diffie: Well--well--well, it's a good sign.
Schmidt: Yeah, we all get sent to the Schmidt house
one time or another.
Uh, anyway, uh, thanks for the opportunity.
The--the good thing about being last
is all the really important stuff that was said,
uh, but I get to sort of opine on it
and--and, uh, have a few pieces of it.
I want to start out by the comment
about Android specifically.
Uh, and there was, I mean, the question
about EV certs and expenses and stuff.
There was a, uh, really neat quote I found one time
that says "There is scarcely anything in the world
"that some man cannot make a little worse
"and sell a little bit more cheaply.
"The person who buys that on price alone
is this man's lawful prey."
Uh, and if we ever see anything going on the internet today,
that therein is one of those things
that we see on a regular basis.
Uh, the other thing I want to touch on briefly
was a comment that Eric had made--
the oth--the real Eric when he was here--
um, about criminality on the networks and stuff.
My first movement in this area
was back in the days of bulletin board systems.
RB--long live RBBS.
Uh, but when we-- I was running
a-a combination CPM users group,
ham radio, and a, uh, uh, packet radio
bulletin board systems on, I think, Commodore 64s
or something at the time,
and started to get notifications
about these bulletin board systems
used for distribution of child ***,
which is generally pretty much unacceptable.
Uh, and having been as they referred to us
at the time, "a geek with a gun,"
uh, being law enforcement,
it started sort of down this path
of where I am today.
Uh, but--but basically-- [clears throat]
listening to the previous speakers
and a lot of the things that are going on,
I want to touch on a few things.
One of the things--
particularly when we start looking
in the-- in the development world--
and this has--this has been a-a bone of contention
for a long, long time--
uh, my first recollection
of a, uh, buffer overrun incident
was long before we ever saw anything
around the Morris worm or anything else
of that era.
Uh, it was basically a system operator
or their assistant manager who was trying to get access
to a system and couldn't,
found out how he could break out of a routine,
run some arbitrary code, and get the access he needed.
So it wasn't even malicious.
Uh, but yet today, we see--
and Eric had mentioned about, you know,
buffer overruns and these sort of things.
Even though they may not be relevant
to getting attacked,
they still seem to be an attack vector
that is being inadvertently built
into code that we're writing today.
Uh, and from an international perspective--
which is what Vint asked me to--to sort of focus on
the best I can--
what we're seeing now,
particularly in some developing nations,
the hand-me-downs that we have,
uh, are winding up in these environments.
And I think of some of the, uh, developing countries
where some of the old banking systems
that we have gotten rid of years ago
are now those systems that are the core basis
of what they're using to do their financial tracks--
uh, transactions.
So the...
ability that we have today to withstand
some of these threats with modern systems
is irrelevant to them
because they don't even have the capabilities there.
But yet they're a part of that global, financial system
that we're in today.
So the--and--and one of the things
that I think is really interesting
from a D.O.D. perspective--
I've seen this with M.O.D. in the UK.
Seen it with some of the NATO countries as well.
And the--the number's pretty consistent.
That the successful intrusions into their systems,
whether it's for exfiltration of data,
whether it's to in--installation of malware,
or whatever the motivation may be--
and it varies from one end of the spectrum to the other--
is running at about 75% plus based on known vulnerabilities
in software that have not been mitigated
for 90 days or more, that continue to allow someone
access that could've been patched,
should've been patched,
but for a multitude of reasons were not patched,
which gave that--that foothold getting into the system.
And so, you know, once again,
with Eric talking about some of the things
that are going on that we have been fixing,
there's one fundamental thing that we have not fixed,
uh, and that's the whole issue
of the software development lifecycle.
Uh, we're getting better at it.
Uh, we're seeing reductions.
But invariably, we'll still see the same issue.
Uh, the other thing which--which touches directly
to this subject as--as well is looking at,
uh, the mobile community.
Now, a number of us have been concerned about this
for quite a while.
And--and I realize the work--
where is--oh, he left on the, uh, Android.
But, yeah, Chris-- oh, there he is.
I'm sorry, Chris, you--you--you were off your blue ball
and I didn't recognize you.
Sorry.
Uh, but, you know, when you talk about,
you know, the code signing and the things that go into it,
especially when the spectrum has opened up
to third-party applications that basically run
under the context of a--
of a root, uh, basis,
that now are trusted environments
that then spawn some child process--
those are the issues that we have little control over.
Uh, and I think of, you know, App World on the Blackberry,
uh, iPhone and-- and all the App Store
going on there,
all these things going out there,
uh, and the progression that we've had.
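[Editor's note: the verify-before-install step behind the app-store code signing discussed here can be sketched. Real platforms use asymmetric signatures--the vendor signs with a private key and devices verify with the public key--but as a self-contained, stdlib-only stand-in, a keyed HMAC preserves the structure: payload, signature, constant-time check. The key and package names are hypothetical.]

```python
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-vendor-key"  # stands in for the vendor's key pair

def sign_package(package: bytes) -> bytes:
    """Produce the signature shipped alongside the package."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()

def install(package: bytes, signature: bytes) -> str:
    """Refuse to install unless the signature matches the payload."""
    expected = hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: bad signature"
    return "installed"

app = b"app-binary-v1"
sig = sign_package(app)
print(install(app, sig))                 # installed
print(install(app + b"-tamper", sig))    # rejected: bad signature
```

The speaker's caveat is exactly the limit of this check: a valid signature proves where the package came from, not that the code inside is free of exploitable vulnerabilities.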
And--and I'll give you a-a personal illustration here.
RSA last year--Infosecurity last year--
were the first two times, ever since I can remember
lugging a laptop around
to some sort of a show
'cause wireless may have been available somewhere, where
I did not take a laptop to the show floor with me.
Everything I wanted to do, could do,
I could do on one of two mobiles that I had.
Whether it was booking hotels, changing flights,
doing online banking,
uh, finding what the bid was for me when eBay got rid of me,
or whatever the case may be, uh, I could do it
on a mobile device.
And every time I turn around,
I'm looking to install yet another application
that gives me the ability to do something else
that presumably is signed,
that presumably, uh, is, in many cases,
from a valid source.
But it doesn't have to be from a third-party,
malicious malware, uh, generating system--
it basically can still have that same vulnerability,
that can still be exploited, that still gives someone access.
Now, from an international perspective,
that's magnified much greater than what we have here
because this has been a way of life,
uh, in Asian countries in particular,
uh, and growing increasingly in Europe,
where rarely do you see someone not walking around
with a mobile device hanging around their neck
and doing everything they need to do with it.
So if you start looking at that natural progression
and the thing that many of us have been championing
for a couple years now,
uh, is as we evolve here,
there's always a complaint about fighting last--
the last battle.
And we've learned an awful lot
from fighting those battles.
And we moved from the--the browsers,
and the internet mail connectors,
and the servers, to the desktops,
and then the da--desktops to the laptops.
And now we're moving this entire environment
to the mobile environment
and we have to start anticipating these things.
The other thing that--that I-I think
is really relevant--
and--and it's really interesting how we in industry--
and--and my first real insight into this
was when I, uh, left the White House
and went to eBay,
uh, and looking at the broad spectrum of things--
looking at, you know, the whole issue
about protecting our assets,
our customer data, HR data, all--financial data,
everything we needed to look at--
and when it came down to the discussion,
"But what about the 100--"
no, I think at that time,
it was, like, 110, 114 million end users--
"but security's their problem.
It's not our problem."
And I can guarantee, every time somebody became a victim
of a phishing email
or had some money lost out of the bank
and went to their local congressman,
it started to become our problem.
And so the concept of not only taking care
of our resources, our assets,
to protect them for whatever reason--
and generally good reasons, you know,
to protect our intellectual property,
our shareholder value, you name it.
[clears throat]
But we also have to remember that the end user
now becomes our responsibility
because they have no one else to blame
but, you know, the--the browser issue
that I'm dealing with now that'll allow someone
to get access to my computer 'cause I didn't update.
Uh, and--and therein lies one of the issues.
And speaking of that,
when we--when we get into that issue as well--
uh, talking briefly about the browser issues
and the browser's, uh, ability to do things.
A few years ago, uh, as we started
to roll out EV certs, which I'm a tremendous fan of--
so, you know, want to make that very, very clear.
Uh, and as a matter of fact, we just did a paper
from the Online Trust Alliance on EV certs.
And notwithstanding some of the technical issues--
taking longer to resolve,
longer to--to, uh, you know,
get the re--revocation information back,
et cetera, et cetera--
it's still much better than standing out there
in a snowstorm naked.
I mean, a little bit of extra protection is there.
So, uh, but--but with this concept,
one of the issues was that there was a vulnerability--
and if I remember correctly, it was IE6 and below--
that allowed you to actually embed a graphic image
over--anywhere on the page,
spe--specifically over the URL, uh, bar.
So what happened, you could actually go out there,
go to a malicious site,
and the overlay would be a green extended
va--validation indicator that you're at the le--
legitimate site.
Those are the sort of things that, once again,