GTAC - Day 1 Hugh Thompson
October 26, 2011
>>James Whittaker: We had to have a token founder. And Kevin crushed it. So we went
out looking for security keynotes. And we got nothing short of the best of the bunch.
This guy, Hugh Thompson, has literally given a talk at every testing and security conference
on the planet. Okay. We should be good. And he's nailed them all.
In fact, for four years, I held the record at STAR -- maybe some of you all have been
to STAR -- for the best presentation of all time, number one.
And he was the one that knocked me from my podium a few years ago. So I have a lot of
reasons not to like this guy. I have a lot of reasons to like this guy as
well, because at one point, he was my student, many, many years ago. And then years later,
I turn on the TV, and I'm watching MSNBC, and Lou Dobbs is giving his show, and all
of a sudden this guy walked on to Lou Dobbs's show and gets interviewed by Lou Dobbs about
some virus or something nasty that happened to the world. So the display is -- Try it
again. >>> (Off mike.)
>>> A Windows user. >>James Whittaker: Well, that is the old joke;
right? There's an electrical engineer, a mechanical engineer, and a software engineer in a car.
The car breaks down. The mechanical engineer gets out and starts checking hoses and stuff.
And the electrical engineer gets out and checks the battery. And the software engineer rolls
down all the windows and rolls them back up again.
[ Laughter ] >>James Whittaker: Okay. So -- and then Hugh
also is a program chair of the -- this might even be one of the largest technical conferences
in the world -- the RSA show that -- is it still in San Francisco?
>>> Yes. >>James Whittaker: Or did it outgrow -- and
so I'm really pleased that Hugh is here to join us. And I won't introduce him any longer
now that the equipment's working. Hugh Thompson.
[ Applause ] >>Hugh Thompson: Thanks for that introduction,
I think. I think, man. I haven't quite known whether to thank you yet or not for that.
But it's a real pleasure to be here at GTAC. Thanks, James, so much for inviting me to
come. You know, it's great to be able to talk to the testing community.
And I know this is the token security talk. But I'll do my best to represent our kind
of community of NO, which is what the security community is usually known as. But I'll try
to represent us well. So this is a talk, really, about risk. So
when you add a feature to a piece of software, to a hardware system, you get utility, but
utility trades off with risk in a very, very interesting way. So this is really a talk
about looking at the features that you add and trying to understand the risks that come
from them. And how is a bad guy going to look at this thing?
So not how the user's going to interact with it and how much they love it and enjoy it
and the great things that they're going to say on their blog or their Twitter feed. But
how is an attacker going to view it? And to set the stage, I want to share a personal
security story with you. So I'm from Nassau, Bahamas. I don't know
if anybody's ever been there, beautiful vacation spot, if you haven't. We need your tourist
dollars. Even though it's a down economy, consider us.
And there's been two key events in Bahamian history, at least from my perspective. The
first was our independence from the British in 1973, which I'm sure you guys can relate
to. There was actually some applause for that, which is a little troubling.
And then the second, which is just as important to me personally, was the introduction of
soda machines in the high schools. Dude, this was, like, the pivotal event in Bahamian history.
We had always seen soda machines on pirated reruns of Beverly Hills 90210, that kind of
stuff. When the government announced they were buying a bunch of them, everyone was
really excited. This was about three weeks after they installed
the first soda machines, me and two other guys are kind of hanging around one. And one
of the guys was my best friend all through high school who later became a priest, which
may be relevant shortly. The second guy, though -- I'm just going to
say it up-front -- we did not know him very well. So I just want to throw that out there
early. Not a friend. Just a guy we just kind of, you know, casually knew.
We're standing around this thing, and it's got a big handwritten sign on it that says,
"U.S. quarters only." Now, to give you a little background, the
Bahamas has its own currency, the Bahamian dollar that's artificially pegged one to one
to the U.S. dollar which was a good idea until a couple of years ago. The way things work
over there is, if you buy something in U.S. or Bahamian, you'll get mixed change. So you
can use either currency interchangeably. Now, the government had purchased these machines
very cheap from the U.S., so they'd only take U.S. quarters. And none of us had a U.S. quarter.
And so we asked a question that any of you would naturally ask in that kind of circumstance:
What else can we put into this machine to force it to give us a soda?
[ Laughter ] >>Hugh Thompson: Now, we tried legitimate
things at the beginning. You know, we put a Bahamian quarter in. No dice. That didn't
work. We tried a U.S. nickel even, kind of didn't register.
So then we essentially started to fuzz it, like a guy, you know, pried a washer out of
a chair, threw that in. Nothing was happening. Another guy tried a pencil. But there was
a user interface problem there. That didn't work out very well.
Another guy tried saltwater. It turns out that was just messy. So that didn't cut it.
And eventually, we got down to using a Bahamian ten-cent piece.
Now, if you've never seen one of these, they're kind of interesting. They're corrugated around
the side. It sort of looks nothing like a U.S. quarter. But when we drop this thing
in, the machine instantly registers 25 cents. Now, we are understandably pretty excited
about this. You know, it's a pretty substantial discount on the sodas. So we look for as many
ten-cent pieces as we can find. We put another one in. It says 50 cents. Another one, 75
cents. A last one in, it says a buck. I hit the button, and we get a soda.
And, you know, so we got 13, 14 sodas. And on the fifteenth soda, when we went to hit
the button, the unthinkable happened, worst-case scenario, the red light comes on. This thing
is out of sodas. Now, not willing to take a loss on our fifteenth soda, we hit the coin
return button. [ Laughter ]
>>Hugh Thompson: Now, given the wording, you would expect coin return to mean we have just
given this machine some coins, and now it will return those coins from whence they came
kind of thing. It turns out, though, that the way it was implemented
as a feature, it takes coins from the bottom of the machine, slightly more mechanically
efficient than returning the coins you had just put in. We hit coin return, and we get one
U.S. dollar worth of coins. Now, at this point, the three of us react
very differently. [ Laughter ]
>>Hugh Thompson: So personally, you know, I'm all for discount sodas, but I'm like,
dude, now we're into money laundering. So I followed what's known as the responsible
disclosure method, and I said, "Dude, you know, we just have to tell the principal.
This is what you do when you find something really bad, you go and tell the company, hey,
look." Okay. So at least that's the way the story
goes. So the second guy, who later turned out to
be the priest, you know, I said it might be relevant, this guy was more of a gregarious
kind of fellow. And he took the full disclosure model. So he's like, "Man, we've got to tell
all our friends." Right? [ Laughter ]
>>Hugh Thompson: So that's reasonable. I thought that was reasonable.
And then the third guy, again, I just want to emphasize, he was not a friend, not a close
associate. I don't know where he is now. This guy followed a very interesting model that's
now known as nondisclosure. He said -- and he was very forceful, so we listened to him
for a while. But he said, "Don't tell anybody. We're going to make us some money." Right?
So, again, it was a very convincing argument at the time. So the next day, I see this guy
walking up the driveway to school, and he's jingling like an ice cream truck, you know,
just loaded with these ten-cent pieces. So he wiped out that machine. The guy who
wasn't supposed to tell anybody told his cousin in another school. It escalated into this
issue that became known as sodagate. Not a lot happens in the Bahamas. It's easy to make
the papers. But let's look at this from a vulnerability
perspective. What went wrong? When the machine measured and asked the question,
"Is this a U.S. quarter?" it did it in two ways. It looked first at the diameter of the coin.
Right? So it took the diameter of the coin, measured it with some type of caliper. And
then it looked at the weight of the coin. And it turned out, just as a matter of coincidence,
if you measure a Bahamian ten-cent piece, the size is almost equal.
If you measure the weight, again, almost equal, indiscernible by the equipment that they were
using. Now, think about this feature, because this
is a choice of how they decided to implement it. Think about this from a risk perspective.
When they built this machine, they built it in the context of the U.S.; right? That was
their deployment model. We're going to sell these things within the U.S.
When they did that, I'm sure they probably went to Home Depot or whatever thing like
that existed back then, and bought any round thing that they could buy that cost less than
a quarter and threw it in, and that was their testing; right? Dude, if it costs more than
a quarter, why do we care? If it costs less than a quarter, it's kind of an interesting
case. But they had this assumption in mind but didn't
track that assumption through the life span of the product. Right? So it seemed like a
good idea at the time. But now they're selling into a different market. It's going into a
country where every single user in their pocket has a weapon that they can use against this
thing. A very, very different risk model. So I want you to keep this soda machine in
the back of your mind, 'cause we're going to refer to it again a little bit later.
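To make the flaw concrete, here is a minimal sketch of the kind of check the machine was doing. The U.S. quarter really is 24.26 mm and 5.670 g; the tolerance and the Bahamian coin's measurements below are illustrative assumptions, not the vendor's real numbers. The point is that two coarse physical measurements define an equivalence class much larger than "U.S. quarter."

```python
# Hypothetical reconstruction of the soda machine's coin check, for
# illustration only. Real U.S. quarter: 24.26 mm diameter, 5.670 g.
# The tolerance and the Bahamian ten-cent figures are assumptions.

QUARTER_DIAMETER_MM = 24.26
QUARTER_WEIGHT_G = 5.67
TOLERANCE = 0.05  # cheap caliper and scale: accept anything within 5%

def accepts_as_quarter(diameter_mm: float, weight_g: float) -> bool:
    """Accept any coin whose diameter and weight both fall in tolerance."""
    close_diameter = abs(diameter_mm - QUARTER_DIAMETER_MM) / QUARTER_DIAMETER_MM <= TOLERANCE
    close_weight = abs(weight_g - QUARTER_WEIGHT_G) / QUARTER_WEIGHT_G <= TOLERANCE
    return close_diameter and close_weight

print(accepts_as_quarter(23.5, 5.5))   # True: a coin like the Bahamian ten cents slips through
print(accepts_as_quarter(21.21, 5.0))  # False: a U.S. nickel is rejected
```

The deployment assumption, that nothing cheaper than a quarter fits this envelope, lived outside the check itself, so nothing failed when the machine moved to a country where a ten-cent coin fit it perfectly.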
Because I think it gets to the heart of where security problems are today. We still have
buffer overflows. We still have SQL injection problems. Cross-site scripting's still a big
issue. But more than that, we're getting into this very interesting place where we built
a set of assumptions around features and who will use those features and how they will
use them. And attackers are starting to use those against us.
But before I go there, I wanted to make a quick public service announcement, because
this is, as I mentioned, a talk about assessing risk.
And the announcement is, there is a current shark crisis that I've labeled Sharkmageddon.
I don't know if anybody has been following this. But a report came out earlier this year
that said the number of worldwide shark attacks is up 25%. Dude, 25% increase. That's crazy.
You know, I said I was from the Bahamas. I'm looking at this. At any one time, I probably
have four family members in the ocean, you know. Do I call somebody? Do I have a cousin
blow a coconut? How -- or a conch shell? How big an issue is this thing? I decided to investigate
further. MSNBC, front-page story, shark attacks rose
by 25% across the globe. Look at this very scary picture of a shark which emphasizes
the problem. Los Angeles Times. U.S. led the world in shark
attacks last year. Again, panic, serious concern.
Now, let's take a look behind the numbers that got us to this point of panic earlier
this year. The number of shark attacks --
[ Laughter ] >>Hugh Thompson: -- worldwide went from 63
to 79. Yes, that is a 25% increase. I will give you that. But the chances of you personally
getting attacked by a shark are, like, minuscule. Like, the chances of you getting maimed by
a coconut and then, like, some kind of rabid squirrel attacking you in combination are
much greater than your actually getting attacked by a shark.
And I think for that reason, we're in a really interesting state in risk management.
We either don't assess risk at all, or when we assess it, we assess it on very bad or
incomplete data. So we're going to go back to that in a second.
Oh, yeah, actually, much of the increase was due to two very angry sharks. I thought that
was kind of interesting. One shark can make a difference.
[ Laughter ] >>Hugh Thompson: If you take nothing else
away -- No. Now, this is another interesting risk case.
Here was a fellow who was on vacation, didn't realize he was taking a risk, but took a photo
at one might say an unfortunate angle. [ Laughter ]
>>Hugh Thompson: Somebody put it online and labeled this guy "the thumb man" because of
his striking -- striking resemblance to a human thumb. Right?
Very interesting. Again, you know, very poor risk assessment. Didn't quite understand,
you know, what was going to happen. But I want you to consider this thumb man
for a moment, because he is going to be kind of our inspiration for the rest of the talk,
I feel. Because the thumb man was not defeated by this incident. In fact, the thumb man owned
his thumb likeness and created thumbman.net. He went Web 2.0 entrepreneurial, you know,
I'm going to deploy. He started selling thumbman merchandise. I own a thumbman tee shirt.
And I have a discount code, if you're interested, come see me later, after the talk.
He overcame this kind of unfortunate risk choice and turned it into an advantage. And
I think that that's what we can do today, with some of the risk choices that we're making,
particularly around sharing data. And I'm going to get into some very specific
examples of that. Risk is a really tricky thing. So we looked
at the case of the sharks where we think we're assessing risk, but we're actually assessing
it very inappropriately and messaging it very inappropriately.
This happens often in the security space. We rely on fear, uncertainty and doubt often
to convince somebody that you should do something from a security perspective.
If we don't rely on fear, uncertainty, and doubt, we tend to go back and rely on some
set of data which is grossly incomplete and truly misrepresents what the real risk is.
So I'm going to talk a little bit about how we may be able to get better risk assessments.
And I think the biggest problem with security is that it's difficult to measure what you
prevented, even retrospectively. So I talked to my mom last night. James knows my mom. I probably
talk to her every -- Oh, you have thumbman.net? Dude, I got the
discount code. Don't buy without the discount code. You're only hurting yourself.
So every time I talk to my mom, when we end the call, she says, "Son, did you take your
vitamins?" And last night, she says, "Son, did you pack your vitamins," which I guess
is a small variation on that. And she's never built for me the right business case to convince
me to take my vitamins. 'Cause, look, I take the vitamins, and sometimes I still get sick.
I don't take the vitamins, and sometimes I still get sick. It's very difficult for me
to quantify how much benefit that prevention gave me.
So we're going to look at some methods, I think, that may help to give some more clarity
around that. But before we get there, I want to share another
story with you. I think this is before I met you, James, and before I met Ibrahim, who's
speaking tomorrow, Roussi, who was here earlier. It was definitely during a misspent period
in my college career, I was really young. Everybody kind of makes some unfortunate choices
during those times. And for me, you know, I hung out in kind of
a sort of geeky crowd, and I say that with pride. And it was a very important time, I
think, in the formation of technology, because it was when the world's first talking Barney
doll was introduced to the market. Now, we had seen this talking Barney doll.
And we had heard that some people had gone out and were able to actually reprogram the
EPROM that it was sort of based off. So we went to the toy store, we bought one of these
Barneys. And we played around with it for probably eight hours or so, and had it say
key phrases from Star Trek. So instead of, like, the classic, "I love you," "You're great."
"Your mom's so cool," that kind of stuff. It would say, "Damn it, Jim, I'm a doctor,
not a miracle worker." And we had it say some really inappropriate
things, like, "I think your mom has a knife," that kind of stuff. But I didn't sanction
that at all. That was the other guys. So then we're playing around with this thing
for awhile, and that was kind of entertaining, and then somebody in the group says, "Dude,
let's return it back to the store and then watch people interact."
And I'm like, "No way. We're not going to do it. Absolutely not."
So four hours later we returned it and watched people. And then -- and then we felt really
bad about it, so we bought it back. One of them still has it actually. But keep that
Barney in mind for a second because what it got the group thinking about is who do you
trust? And I'll get back to that in one second. So you've got the Barney. Now it's two weeks
later, same group of guys. And we used to get together every Saturday and play a game
called Whist. I don't know if anyone has heard of Whist. It's very similar to Bridge. It's
a partner game where you play against two other folks.
And this other team had been crushing us for probably the last month and a half, and I
knew they weren't that good. Finally we accused them of cheating. "Dude, you guys are cheating.
The stack is marked." And they said, "Look, next week what we'll
do is we'll go together and buy the card deck together as a group from 7-11," which was
right on the corner, a trusted third-party authority, unless you're getting a hot dog,
but other than that a generally trusted third party. So we went together, we bought the deck, we
played the game and again our team got completely crushed.
And later they felt guilty and sort of told us what they did.
And what they had done is the day before they went to 7-11, they bought all the card decks,
they marked them and returned them back to the store, inspired by our Barney incident.
Now, I bring that story up because it was an interesting trust relationship there, right,
that was inappropriate. We trusted 7-11 as a third party. We truly thought that that
was an unbiased third party that didn't have any sort of skin in the game favoring us or
anybody else. What's interesting about that is that today
users make very interesting trust assumptions that are inappropriate.
So I teach the software security class at Columbia University and we always have the
students do very interesting projects. So this year we made up a term -- and this was
right around January -- called context reflux. So you may have never heard of this term because
it totally doesn't exist, but we made the definition of it to be the cost of context
switching. So if you're moving from one task to another,
that cost of changing is context reflux, or at least that's our BS definition of it.
So I asked them to, quote, "seed it into the internet."
So one guy puts it on his blog, another guy creates a Wikipedia entry, a third guy creates
like a horrible one-slide YouTube video about context reflux. And within a couple of days
we had the first two pages of Google search, a previously nonexistent term, and now it
has a definition. And the definitions are all consistent.
So then probably, I would say, a month later we had this RSA conference, and for the closing
keynote one of the things that we did was get up in front of the crowd and -- I don't
know, maybe there's 10,000 people there or so. And we played a game of Balderdash. I
don't know if anybody has ever played that game. It's where one person has a legitimate
definition of a term and the other people have fake definitions, and they're trying
to sell you on the fact that their definition is correct.
So there's not really a definition of this thing, so I went up and kind of sold people
on my definition, which is the one that is out here on the net.
And I had a buddy go out and try and vigorously sell -- there was some kind of gastrointestinal
thing. I can't remember exactly. Which makes sense because it's reflux related.
And it was fascinating. You could see people in the crowd, first thing they do is pull
out their phone, type it in to search and go and see what Google says. There's a trust
relationship there. And this is how you choose to optimize sites.
And it's not wrong, it's just a natural function of taking data and presenting it to the user.
But I'll tell you how attackers see this, which is very interesting. The spam and phishing
email I'm starting to get says, "For security reasons we are not including a link." And that
feeds into the type of security awareness training that they're getting inside of big
companies. Look, if you get a link don't click on it,
it could be really dangerous. So the email says -- and confronts that and
says we're not including a link. So instead of a link, what we want you to do is we want
you to go and we want you to search for this term and find our website. It will come right
up. And then go there and download whatever tool you need to download.
Very, very interesting. So they're laundering trust through a third party.
And I want you to think about that for a little bit because I think that's where the future
of attackers are heading. They're looking for vulnerabilities in trust relationships.
Things that people are inappropriately looking at one way when they actually mean something
else. So what was really cool, and we don't have
it anymore, but we also owned suggest, so if you typed in "context r," ours was
the first thing, but we don't have it anymore. I forgot what beat us out. I'll check it out
after this. So with this in mind I want to introduce this
concept of gateway data because the internet now is all about data. The data that people
choose to volunteer, the data that others volunteer about us. And really the data that
institutions have about us that are now becoming searchable.
I don't know if anybody here has an ancestry.com account, but it's incredible to see the kind
of data, biographical data, that you can now mine, search, categorize and find out about
a person over time and their family and their history, and that's a really interesting thing.
So gateway data is data that seems harmless. It's the name of your pet, the fact that it's
your cousin Sal's birthday, but when used properly can facilitate access to highly sensitive
information. So I think there's three types of this gateway
data. One is direct use. This is data that's convertible directly into access. A password,
for example, is convertible directly to access. It fails the definition, though, because it's something
that you hold close, but for many people the name of the place they went to high school
actually gives access through password reset. Very, very interesting. So that's direct use
gateway data. I'll talk about the other two in a second,
but on the topic of direct use, let's think about how this is used today.
Think about standard password reset questions. Now, this is another one of these,
like the Coke machines, something that was a good idea from a risk perspective a long
time ago, and now the risk climate has changed. Think about that for a second.
I'll tell you a quick story. This was about three years ago, I was under a vicious deadline
to get this privacy article out for Scientific American, and I just hadn't written it yet.
And now it's like closing in on D-day and my wife tells me that she's having this dinner
party and it's with people I hardly know, and I'm like, man, I'm hoping my dentist is
available for an appointment during that time so I have something else to do, but instead
I thought this would be a very interesting experimental group to do something with and
then write the article based on. So as people came in I asked them would you
mind, with your permission and under your supervision, if we sit down together and I
try and get into all your accounts online. Not through hacking, through password reset.
So a lot of people said no. There was one very vivid gesture that I can still kind of
see if I close my eyes. Very creative actually. But a couple of people, a couple of people
said yes. I did it with three folks. For two of them, within an hour I was able to get
into every account they cared about. Bank account, first place to start. What do they
do? In many cases they ask you to answer a biographical question and then they send you
a reset email. So they don't just let you reset your account.
So the next place you go is that email account. How do you even know where it got sent? You
mine online and you look for any old resumes this person may have, what email is associated
with it there. You look at other sources that may tell you,
for example, their college email account. And depending on the school that you went
to, that email account may still be valid. And it was fascinating to see because it turned
out in those two cases, and probably like 30 other times I've done it since then, that
-- sanctioned, sanctioned. [ Laughter ]
>>Hugh Thompson: This is being recorded. That your identity online precariously rides
on a set of biographical questions that is asked on your oldest email account.
So think about the chain of trust, for example. A really interesting experiment to try on
yourselves and your own account. How does your bank reset? Probably sends it to an email
account. How does that email account reset? May send it to another email account. Eventually
you will get to an email account that doesn't know any other email accounts and it will
ask you very simple questions. Questions like what city were you born in? Questions like
what was your grandfather's occupation? I had a student this past semester do a comparison
between the password reset questions of the top five free email providers and ancestry.com,
the stuff that you could find instantly on ancestry, and it was a 30% overlap. Really,
really interesting. So this choice -- and it was a design choice.
This choice of using biographical data to reset your password was like a good idea 20
years ago. Like, who knew this stuff except for your
close friends and family? And they could do a lot worse stuff to you than reset your password.
But now you're so knowable at a distance by somebody you've never met that it's no longer
an appropriate risk choice for many people in many cases.
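His chain-of-trust experiment is easy to model. The sketch below uses an invented account graph, but it shows the shape of the problem: follow the "resets via" links and you always bottom out at an account whose only guard is biographical questions.

```python
# Invented account graph modeling the reset chain of trust. Following
# "resets via" links always terminates at a root account protected only
# by biographical security questions.

RESETS_VIA = {
    "bank.example.com": "work@example.com",            # bank emails a reset link
    "work@example.com": "college@alumni.example.edu",  # provider falls back to an older email
    "college@alumni.example.edu": None,                # root: "What city were you born in?"
}

def reset_chain(account: str) -> list:
    """Walk reset fallbacks until an account has no email fallback left."""
    chain = [account]
    while RESETS_VIA.get(chain[-1]) is not None:
        chain.append(RESETS_VIA[chain[-1]])
    return chain

print(" -> ".join(reset_chain("bank.example.com")))
# bank.example.com -> work@example.com -> college@alumni.example.edu
# Answer the root's biographical questions (often findable on a site
# like ancestry.com) and you can walk the chain back to the bank.
```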
Category number 2, amplification gateway data. So here's where things get more interesting.
Instead of something like talking about your cat Fluffy online that you can translate into
access through password reset -- "what's the name of your favorite pet?" This is data that
when you bounce it off a person will get you more sensitive information.
Like, for example, if somebody called you up on the phone and said, "Hey, this is like
your bank or something" -- maybe they would say it more eloquently.
[ Laughter ] >>Hugh Thompson: "For security reasons, can
you tell me your Social Security number?" I don't think many people would -- unfortunately
there are some people that would just give it automatically, but I don't think most people
would do it. But what if the bank called and said or the
person pretending to be the bank called and said, "Hey, look, this is your bank. For security
reasons I am going to give you the first five digits of your Social Security number and
I need for you to give me the last four. And that way I can confirm to you that I'm your
bank and you confirm to me that you're you." And it turns out that a lot of people will
actually give it in that circumstance. Now, the first five digits of somebody's Social
Security number are based on things like where they were born. There's some wonderful research
by a guy named Alessandro Acquisti at Carnegie Mellon, fascinating paper on how to predict
those numbers. But forget about that for a second. If you have access to LexisNexis, for example,
you can get the first five digits of someone's Social Security number very, very easily.
So it's really the last four that interesting things hinge on.
Another very interesting thing about Social Security numbers is that they're being used
in an authentication context where that was never the original intention of how those
numbers should be used. When I went to college, my Social Security
number was my student number. I wrote it everywhere. It was on all kinds of pieces of paper.
I would say for many of you that it was probably the same situation.
Now a Social Security number is sacrosanct. If you ever expose it to anybody, now you
fall prey to the breach notification laws. So it's really interesting if you take a piece
of data that didn't used to be sensitive and you suddenly make it sensitive, we don't have
a mechanism to do that today. Everything is sticky. Once it was not sensitive and now
it is, it's not a tenable sort of situation. Here's another interesting example. So the
way that many banks authenticate to you that this is actual email from the bank is they
put the last four digits of your account number in the email.
How many people have gotten an email from their bank that does this, out of curiosity?
I would say about 40% of the crowd. So there's a really great researcher, he's
at PayPal now, he was at Indiana University at the time he did it, Markus Jakobsson, who
did a study where he sent a bunch of people these kinds of e-mails that had the legitimate
last four digits and then he sent another group of people the same email, but it had
the first four digits of their credit card number.
Almost no difference in between those two groups from a trust perspective, but those
numbers actually represent very, very different things from a sender's perspective.
The first four digits of somebody's credit card number are basically public. They just
say what issuer issued the credit card number. So if you've got a Discover card, it's like
the same for everybody. If you've got a Visa or MasterCard, it depends on the issuing bank.
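A quick sketch of why those leading digits carry no secret at all: they are the issuer identification number, shared by every card on that network. The prefix table below is the well-known, simplified mapping, not anything bank-specific or complete.

```python
# Why the first digits of a card number are effectively public: they are
# the issuer identification number (IIN), shared by every card on that
# network. Simplified, well-known prefixes; not a complete table.

def card_network(pan: str) -> str:
    """Identify the card network from the leading digits alone."""
    if pan.startswith("4"):
        return "Visa"
    if pan[:2] in {"51", "52", "53", "54", "55"}:
        return "MasterCard"
    if pan[:2] in {"34", "37"}:
        return "American Express"
    if pan.startswith("6011") or pan.startswith("65"):
        return "Discover"
    return "unknown"

# Millions of cardholders share this prefix, so an email that quotes it
# back proves nothing about the sender:
print(card_network("4111111111111111"))  # Visa
```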
So this again is a type of data that when you bounce it off a person is interpreted
as a higher level of trust than it's truly meant to be.
Now, the third category I think is the one that is going to plague us in the future.
It's collective intelligence gateway data. So it's data that seems to be totally useless.
Even if you put it in the context of password reset, it doesn't do anything really interesting.
If you told it to somebody in an email, it doesn't sound very interesting from a trust
perspective, but when you take it and you cross-reference it with a bunch of other stuff
about a group, you can tell really interesting things.
There's a couple of types of data that fall into this category. One is location, where
you are. Now, you know, most people don't think twice
about telling somebody where they are. In fact, a lot of the tools that we use now regularly,
Twitter, for example, include a geotag. When we take a picture, many of the
phones, certainly my iPhone, automatically encode it by default. So I can find out where somebody
is and where they've gone and I don't think most people think that's a huge secret.
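For a sense of how little tooling this takes, here is a sketch that pulls the GPS block out of a photo's EXIF data using Pillow. The file name is made up, and it assumes the geotag hasn't been stripped.

```python
# Sketch: extract the GPS block a phone camera embeds in a photo's EXIF
# data. Requires Pillow (pip install Pillow); "vacation.jpg" is a
# stand-in for any geotagged photo.

from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_info(path: str) -> dict:
    """Return the raw GPS EXIF fields from a photo, if any are present."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(gps_info("vacation.jpg"))
# e.g. {'GPSLatitudeRef': 'N', 'GPSLatitude': (37.0, 25.0, 8.4), ...}
# Scripted over a feed of photos or tweets, this becomes the movement
# history of a person -- or of a whole M&A team.
```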
But now think about this again through the lens of the attacker, and he may be a bad
guy or a competitor. What an attacker can do is look at the movement
of groups. For example, I would love to find out where the mergers and acquisitions team
of HP, for example, has gone for the last month. That would be really fascinating. Or
where has the salesforce of my competitors been moving over the last month or the last
couple of weeks. That's really interesting data. That's incredibly sensitive data to
the company, but it's now the kind of thing that's just a matter of tooling to be able to
aggregate. The data is out there; it's
just a tooling question around aggregation. Here's a tweet that's not very interesting
when you look at it, "Flying to Bentonville, Arkansas for a quick trip and meetings straight
through the day." Let me ask you a question, if you're flying
to Bentonville, Arkansas for all-day meetings, dude, who are you meeting with?
Wal-Mart! Who else is in Bentonville?! Really interesting.
So you've just told the world that you guys are courting Wal-Mart, or maybe you already
have a relationship with Wal-Mart. That's really interesting again from a competitor's
standpoint. We're starting to see tools in the research
community, the security community, good and bad, that are focused on mining big data and
aggregating this data in really interesting ways.
This was a cool project from a couple of years ago. Many of you may remember this, pleaserobme.com.
Does anybody remember this? Anybody ever go there?
They called it a robbery opportunity portal. [ Laughter ]
>>Hugh Thompson: So it kept track based on things that you tweet, are you away from home?
That's all it cared about, were you away from the home? And if you were, that's a fascinating
piece of data to a physical burglar who may want to break into your home. Really, really
interesting. There's another really cool project appropriately
named Creepy. I don't know if anybody has ever run into this one, but the name really
kind of lives up to it. It allows you to track this geolocation data
over an extended period of time. We've seen some tools pop up in the attacker
community, for example, that are mining personal tidbits of data and automatically integrating
them into hyper focused and hyper personalized phishing attacks.
Think about that, for example. If you started to get hyper personalized phishing attacks,
how vulnerable or not vulnerable do you think you personally are to that kind of threat?
Think about the choices that you make when you see an email. You look at who sent it.
I don't know how many people here routinely use PGP, but it's nowhere near what we thought
it would be 10 years ago. So you make trust choices often just by the
content. The problem with phishing e-mails now is they're starting to get
boring. I miss the days when there's like a guy who has 10 million bucks and is willing
to give me two million if I'll just help him get it out of the country. Those were great!
They were horribly spelled, there's always money involved. It's like wonderful times.
Now those same types of e-mails that we're starting to see from a company perspective
are very tailored, very personal and they look just like the boring work e-mails that
you routinely get. Really, really interesting, and they're built because of data that's publicly
available. Related to this topic, the ability for people
to make good choices about risk and security, I think it was never great to begin with,
but it's starting to erode because of the data that's out there. And I think we often
create processes that expect too much of users. They expect users to make very fine-grained
trust choices. This is an example -- I don't know if anybody
has been to New York recently, but you kind of live and die by the MetroCard. This is
the thing that gets you into the subway. And they have interesting things on the back
of 'em. Some have like a message, "if you see something, say something. Report a crime,"
that kind of stuff. One of them on the back has emergency instructions.
So this is what to do if there's an emergency inside the subway.
So it's a little blurry, so I'll read them. Number one, "Notify train crew or police
if you see someone in distress or notice unlawful or suspicious behavior." That's pretty good.
That's certainly a good thing. Number two, "Do not pull the emergency cord!"
Let me ask you a question -- [ Laughter ].
>>Hugh Thompson: -- if you are in the middle of an emergency in an enclosed subway, are
you going to, A, start reading instructions on the back of your card or B, pull a huge
cord that says emergency on it? [ Laughter ]
>>Hugh Thompson: Very interesting. Making some bad choices about what the user can or
can't do. This is another very interesting one. So a
friend of mine recently went to buy a purse at a very high end store and she finds one
that's just perfect. Exactly what she wanted, but didn't have a price tag. So she figured
it was something that somebody else probably had returned, they didn't like it for whatever
reason. So she went up to the counter to check out. They look up the price, she buys the
purse -- I think officially it's a wallet. And she takes it back home, starts to transfer
all of her stuff over, and she opens it up and she finds something fascinating inside.
This piece of paper with admin and server passwords written on it.
So this is somebody who not only made the bad choice of writing their passwords down,
but also put it in a purse that they then returned back to the store.
Now, I bring this case up because it's an interesting one and one that we shouldn't
be surprised about. In fact, it's one that we should expect of users.
And we do interesting things in the security space; we do this to ourselves all the time.
We crank up a dial on security forcing users to do even worse things than we were trying
to prevent. For example, hey look, people are picking
bad passwords. They're choosing "password" as their password or they're choosing like
their favorite bunny's name or something like that. So we want to deal with this problem.
So what we're going to do is we're going to enact a 12-character password policy. It's
got to have at least three special characters in it, one foreign language character, and
it can't be similar to the last 78 passwords you just set.
[ Laughter ] >>Hugh Thompson: This is going to be good.
This is going to solve our password problem. Well, what happens when you crank up a dial
like that in isolation? It's not psychologically acceptable to users. They're going to do things
to work around it. They're going to write stuff down and put it in a purse! That's like
natural. So I think it's really interesting when we
start to think about the user's experience. So my mom is just such a great sort of user
case in general. So she -- [ Laughter ]
>>Hugh Thompson: It's interesting to watch her online and then on her mobile phone.
So online if she gets to a site, for example, that says, "Hey, look, the thing you're about
to do is really dangerous. You should definitely not do it, like seriously, seriously don't
do it." She's always going to hit okay because she just wants to see what the site does.
[ Laughter ] >>Hugh Thompson: She doesn't even read the
message, just hits okay. And it's fascinating to see her install stuff
on her android phone. Hey, here's an application. It wants to access
your payment cards, it wants to check the security log, it wants da, da, da. Okay. She
doesn't even look at it. All she wants to do is be able to shoot the bird towards the
pig and get done whatever she needs to get done.
[ Laughter ] >>Hugh Thompson: And what we're doing is we're
outsourcing security choices, complex security choices, to a user we know is not going to
make good choices. That's crazy when you think of that from an
ecosystem perspective. It's a good idea in isolation. We want to inform the user. We
want to tell them what they're agreeing to. In practice if you look at it in a macro group
it's like a really scary scenario because you know in practice most users are going
to make the bad choice. So the question I guess I pose from a security
perspective is how are we going to help the user make better choices in general?
And I think it's designing systems that are easy to use securely and difficult to use
insecurely. Because historically we've done the opposite. We've designed systems that
are super easy for a person to use insecurely, but very difficult for them to use in a secure
way. In fact, they don't even think about that
trade-off usually when they do something. Now, don't be deceived by this slide saying
summary because I still have quite a bit left to go.
[ Laughter ] >>Hugh Thompson: I want to share with you
another very interesting story, and this happened -- I don't know, I guess this happened about
five years ago. And the most amazing types of things happen to me on planes. I can tell
you some other stories later, but this one was about five years ago. I sit on this plane,
I'm sitting next to a guy and, you know, I don't know how it happened, but like almost
immediately we get into this argument around whether "Star Trek Deep Space 9" should have
been included in the "Star Trek" family of television shows.
And I was adamantly against "Deep Space 9." I really didn't like it. And he was a huge
"Deep Space 9" fan and we just locked heads on it. And finally we agreed to disagree.
And finally I asked him what kind of business are you in? And he said I.T. and I said I'm
not surprised. So we started to talk a little bit more and
he had been hired as an admin at a manufacturing company 10 years before this. So think about
it, that's like 15 years ago. So he gets hired at this company, he does
a quick assessment and he realizes there's no backup and recovery disks for a critical
system that is operating floor equipment. So first thing he does is create a set of
backup recovery disks. And back then it was old five-and-a-quarter-inch disks. I don't
know if anybody even remembers those, back when floppy disks were really floppy, or when
they even existed. So he builds a set of disks, there's 10 of
them. He walks over to the secretary, admin's office, and says, "Can you please label these
disks and put them in a fireproof safe?" They should have taken them off site, but it's
fine, it's better than where they were. She says no problem.
Six months later something horrible happens. There was a power spike, the system goes down.
He has to take a secondary system and bring it online.
So he goes to her office and says, "Hey, can I get those recovery disks?"
She gives him the disks, he puts the first one in and he sees the two scariest words
you could ever see as a system administrator: "media error."
So that disk is totally gone. He puts in the second disk, "media error."
The whole stack is gone. So he spends the next three days, no sleep,
rebuilding this thing from scratch. Finally, it's up and running. The machinery
is working. He goes and he makes another set of backup recovery disks, takes it to her
office. He walks over to the coffeepot, is sipping a cup, thinking about a career change.
You know, I think all of us have been at that point.
She sees he's very upset and she says, "I'll do it immediately."
So she takes the first disk. She puts it on the desk. She takes a label out of the drawer.
She sticks it on the disk. And then she takes the disk and she shoves it in the typewriter,
crank, crank, crank. And then she starts typing, "B" for backup, whack, it hits. "A"!
So now he realizes every disk he's ever given her is gone.
[ Laughter ] >>Hugh Thompson: Now, this is a very interesting
moment. In the days that followed, the big question there was, was this her fault? Very,
very interesting question if you think about it.
She'd been given a task. She didn't know how to use the tools that she'd been given in
a safe and secure way. She didn't know anything about disks. She used a typewriter, all right?
Her process was, if it's thin enough to fit in the typewriter, put it in the typewriter. Right?
It's CMM level 5. It's a repeatable process, it worked consistently.
[ Laughter ] >>Hugh Thompson: There's never been any complaints
before, you know. This was just fantastic; right?
But what she'd been given is, she'd been given this new technology, this new thing, but no
instructions on how to use it safely and securely. She didn't understand that what she did was
about to ruin those disks. She didn't know. Was it her fault? I think that's a position
we're putting users into today. We're making them -- we're forcing them to make really
interesting decisions that they're not equipped to make. We haven't educated them on the ramifications
of those choices. Really interesting. Now, think about what
came afterwards, after the five-and-a-quarter-inch disk. Does anybody remember? Go way back.
The three and a half. What was the difference between those two things? It was hard. It
had a hard shell. It still had the same floppy disk inside of it. But some amount of knowledge
on how to treat that thing right was built into the product itself. You knew not to bend
that thing. Dude, the plastic's going to break. It was knowledge built into the system. It
was easier to use in an appropriate way than the five and a quarter inch.
And I think that's a model we need to start striving for in the security space.
Let's make it easier and let's make it more discoverable for people to make good choices.
So the key things, I hope, if you reflect on this later, or maybe you totally won't
and I'm so glad it's dinnertime type of thing, but security and privacy really has become
one of the most interesting areas, I truly think, in software development and test. Because
there's so many interesting security ramifications to the stuff we do. And I'll just tell you,
over this past 12 months, in the security space, we've just been shaken. I'm sure you've
been following the headlines. But we've seen the rise of hacktivism in a very interesting
and material way. And we've also seen the rise of at least what the industry is calling
advanced persistent threats. So these are very targeted attacks by very smart groups
of people on the other end. And they are exploiting these trust problems
that we've had for a long time but frankly weren't an issue on scale. And now they're
becoming an issue on scale. So we need to really consider the types of
choices that we're making users make today. I think that's going to be so critical not
just today. But if you think about the data exhaust that people are creating, think about
the interesting ways it might be mined in the future. And we have to also think,
which really makes the space exciting, that we have an active adversary on the other end.
And these guys are smart. This is not your grandmother's hacker kind of thing. These
guys are smart and motivated. There's this old saying which I'm sure you've
heard a million times. You don't need to be faster than the bear. You just need to be
faster than the slower guy. That assumes that you're dealing with a hungry bear; right?
If you're dealing with a hungry bear, the algorithm is really simple. Dude, just bring
a slow guy with you every time you go camping. It's a simple algorithm. You can follow it.
It leads to a very safe outcome. But today, we're not just dealing with hungry
bears. Hungry bears are the cybercriminals of yesterday. If it's more difficult to get
to you than somebody else, they'll go after somebody else. But now we've seen the rise,
especially in the last 12 months, of the angry bear. So this is somebody who will go after
you personally, either because they feel that you've done something wrong or because you
have very interesting and unique capabilities that others are using. They're going after
the supply chain. And the angry bear, they're going to run past
the slow guy. They may just slash him for fun on the way. But they're going to keep
going. And they're going to go after you. So it's a really interesting time.
And thanks so much for taking the time to be here this afternoon. I really appreciate
it. And thanks, James, for having me. [ Applause ]
>>James Whittaker: And as it turns out, we have time for some questions. So Bonnie has
the mike. >>Hugh Thompson: Avoid the sharks at all costs.
The statistics are real. Is it the discount code? Thumbman discount
code? >>> 4738 -- So thank you for a very interesting
and stimulating talk. There's been research in usable security for
ten years. Carnegie Mellon has an entire graduate program in it. You are sitting here and telling
us, make security easier to use so we'll use it.
How do we do that? >>Hugh Thompson: I don't know.
[ Laughter ] >>Hugh Thompson: Next question.
No, no. You bring up a very interesting point. So the folks at Carnegie Mellon, Lorrie Cranor,
Alessandro Acquisti is there, too, who is looking at the behavioral stuff with security. I mean,
there's a lot of people there who have been thinking about this for a while. But the truth
is, it's been very difficult to make progress. And I think the reason for it is that security
typically trades off against other things that we really like.
So you crank up security, in some cases, it cranks down performance or it cranks down
functionality, or it cranks down a user's ability to kind of intuit the space.
I'll tell you one thing that has me sort of optimistic. And this is some research that
a guy I mentioned before, Markus Jakobsson, did, I think that was just about a year ago.
And so his research was on friendly fraud. So imagine, you know, you've got a device,
like an iPad, for example, or Google tablet or some other device, and you're passing it
around to others. You're looking at a photo album, you're -- those devices are a lot more
social than they have been in the past. Typically, you wouldn't take your laptop and kind of
pass it around the same way you might one of these other devices. So I think it's important
to think about that socialness of the devices. So what he did is, to be able to pay for something
-- because there was a kind of interesting rash of somebody taking your phone or taking
your device, you're auto logged into some site that you can buy stuff, like a headless
Mr. T doll, for example, from, like, eBay. And you can, like, buy your buddy ten headless
Mr. T dolls by hitting a button and saying one click and buy it and ship it out.
But what he did was change the payment mechanism so that instead of clicking the button, you
had to drag cash from a wallet into a cash register. It's amazing, just that one change,
how it reframed it to the user, how it felt different to them. And the friendly fraud
cases, at least in his study, went down in an interesting way.
So that kind of research gives me a lot of hope. And I really think that from a security
perspective, it's about signal mapping. So my mom, you know -- I promise
this is the last time I'm going to pick on mom. I love her so much. If we walk down a bad
neighborhood in New York, we both know we're in a bad neighborhood. We see graffiti on
the walls. There's gunshot holes in the side of the building, there's huge locks. So we
know those signals. We have natively grown up with them. These are signals we're in a
bad spot; we need to get out. The problem with the Internet and with technology
is that most people aren't attuned to the signals of danger. And I think the better
that we can do signal mapping in that way, even tying it into something that we already
know is dangerous or know is problematic, I think that that's an area where we can make
some interesting progress. That's a very long-winded answer. Or very
long-winded expansion on "I don't know." But, hopefully, that's --
>>> Hi. I wanted to know if -- Well, I recently got Time Warner Cable in New York. And my
-- >>Hugh Thompson: Should have got FiOS.
>>> And my SSID was SBG6580, and my password was SBG6580DF6. And SBG6580 is the model number
of the access point. >>Hugh Thompson: Nice! That's good. That way,
you won't forget it. >>> Yeah, and I couldn't change it. The only
way I could change it was to go to 192.168.0.1 and type in "admin Motorola" for the default
password for the device and reset it to a secure password.
My question is, these companies like Time Warner Cable that are not interested in your
security, they don't give -- >>Hugh Thompson: Yep.
>>> -- you know, they don't care about your security. So what do you do about that?
>>Hugh Thompson: Great question. I -- and I am so sorry to have to bring mom
back into it. [ Laughter ]
>>Hugh Thompson: But there really is -- there really is an interesting kind of mom tie-in
to this. So she bought a wireless router. And, you
know, she bought it from the store. She plugged it in. She, you know, found it, and I'm walking
her through it over the phone. She finds it on the list of available wireless access points
and clicks on it, and it needs a password. Right? So this was one of the ones that, by
default, has the key on the back of it, and, you know, I'm like, look, I'm sure it's around
there somewhere. And it was -- the sticker wasn't on the back. Maybe it was on the box.
I don't even know where it was. So she returned it. Right? She's like, dude, I can't deal
with it. So she bought another one that, by default,
was open as a wireless access point. And I think that speaks directly to your story,
or to your kind of question, in that because security trades off against other things that
we prize and that are so visible, folks are just pushed to make those kinds of decisions.
You know, it's one thing to have it on your router. It's another very interesting thing
to have it on voting machines. And it turns out that the way that most tabulators
work for e-voting is very similar to your story. There's a hard-coded password. It's
the same for everybody. And it's security through obscurity. You think that nobody's
going to have access to that particular manual that has the systemic problem of exposing
everybody's password. And nobody ever changes it.
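The attendee's router makes the math vivid. If the default password is the model number, which is already broadcast as the SSID, plus a short suffix, the whole password space can be enumerated; the suffix alphabet and length here are assumptions based on his example.

```python
# Sketch of why a default password derived from the broadcast SSID is
# weak. His password was the model number plus a short suffix; the
# suffix format here is an assumption for illustration.

from itertools import product

def candidate_passwords(ssid: str, suffix_len: int = 3):
    """Yield every model-number password with a short hex suffix."""
    for combo in product("0123456789ABCDEF", repeat=suffix_len):
        yield ssid + "".join(combo)

# The attacker already sees "SBG6580" in the air, so the real secret is
# only the suffix: 16**3 = 4096 guesses covers the whole space.
print(sum(1 for _ in candidate_passwords("SBG6580")))  # 4096
```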
And it's been fascinating to even see in that context, where you're dealing with critical
data -- I mean, that's critical infrastructure stuff -- that we still haven't seen a change.
And the problem is that we can easily assess the usability of it. I'm sure Time Warner
can easily assess the kind of support benefit to them of just telling you, "Well, what's
your number?" Oh, it's underscore DF6 at the end of it and you're set.
There's a clear support benefit of it. But the user isn't pushing back on risk, because
that's not quantified in a very real way. So I think the way that we're going to get
past that problem is have users be able to assess security in much more interesting ways,
in a very different paradigm than they are today.
We're already seeing that happen at the business level. If you look at RFPs or RFIs that go
out for B2B and look how those have changed over the last five years, they didn't ask
anything about security before. And now you're starting to get very interesting, very probing
security questions. So it's happening in B2B. B2C, we just haven't
seen it yet. Mom is still hitting okay, because she wants to shoot that thing towards the
pig. She really hates the pig. Great question. We'll move it over there next.
>>> I'm curious if you're familiar with the electronic currency called bit coin, and if
-- your opinion on the security. >>Hugh Thompson: Very. The mysterious creator.
>>> Yeah. Satoshi. >>Hugh Thompson: What was the -- the question?
Like thoughts around it? >>> (Off mike.) Around risk, security (Off
mike.) >>Hugh Thompson: Well, you know, it's kind
of interesting. I mean, it's -- if you look at -- at the economy it's created on the back-end,
you know, you've got guys that are taking server clusters and just mining for coins
and trying to be -- which is a very interesting sort of egalitarian way to disburse it.
But from a security perspective, I think we're going to need something like that on a go-forward
basis. A big problem now with electronic currency is traceability. And a lot of people don't
think about that today, the traceability aspects of it. They just ignore it, meaning that they're
happy to kind of use their card and somebody in the sky knows that they bought this particular
product or -- because they're not -- they're not thinking about it actively.
But I think in the not-too-distant future, we're going to have, like, a privacy Armageddon
incident. I don't know what it is. I don't know how it's going to look. But I know it's
going to affect some senators, which is going to force a legislative change. And that's
going to be a very, very interesting and scary period for all of us.
And I really think that that's going to happen. I don't think Epsilon did it, even though
everybody was getting crazy emails from the hotel they didn't even realize they had ever
given their email address to, but I think we're going to see something happen in the
next few years that is going to affect some key policymakers in Washington and force privacy
to be a big issue. And getting back to your bit coin question,
I think at that point, people are going to be really freaked out that there's then traceability
from what they spent to the merchant and how it was used, and it's going to spawn the use
of more disruptive payment technologies that can disassociate those things electronically,
which I don't think we have a good mechanism for today.
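One classic construction that can disassociate a payer from a purchase is Chaum's blind signature, the idea behind pre-Bitcoin untraceable e-cash. A toy demo with insecure textbook-RSA parameters -- numbers this small are strictly for illustration:

    from math import gcd

    # Hypothetical bank key: textbook RSA with tiny primes, demo only.
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)                # private exponent (Python 3.8+)

    m = 1234                           # the "coin" the user wants signed
    r = 7                              # user's secret blinding factor
    assert gcd(r, n) == 1

    blinded = (m * pow(r, e, n)) % n   # the bank sees this, never m
    s_blind = pow(blinded, d, n)       # the bank signs blindly
    s = (s_blind * pow(r, -1, n)) % n  # the user strips the blinding

    assert s == pow(m, d, n)           # a valid signature on m itself,
    assert pow(s, e, n) == m           # yet the bank never saw m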
So I don't know if that at all kind of answers it.
>>> (Off mike.) >>Hugh Thompson: Yeah, I would use it. Yeah,
I would. And there's a couple reasons for it. One,
just 'cause I'm really curious about it and I think it's really neat.
But the second thing is, you know, there looks to be some really strong, if somewhat esoteric and difficult-to-decipher, crypto behind it. And I think that's kind
of fascinating. I think it's just such a great project to see kind of how it's evolving and
how it's disrupting the space. I think we're going to see more stuff like that, too.
>>James Whittaker: Okay. We are officially out of time.
I have no desire to stop this, though. I'm fascinated. I think everyone else is, too.
So let's just continue, if you still have voice. Can you continue with another ten minutes?
>>Hugh Thompson: I don't want to launch a denial of food attack, though, man. People
tend to remember that kind of stuff when you're the -- yeah.
>>> Okay. My name is (saying name). Thank you for an amazing presentation.
I have a couple of questions. The first is, what do you think about the future of security,
privacy on the Internet? And the second, how do you estimate the current
state of security/privacy on the Internet? >>Hugh Thompson: Okay.
We're doomed, and bad. [ Laughter ]
>>Hugh Thompson: One and two. No. Okay.
So I think the first one is kind of the future of security and privacy. And I think -- you
know, I'll tell you I'm very, very optimistic. And I'll tell you why. Every year at RSA Conference,
we run a competition of startups. So these are companies, you know, back-of-the-napkin
kind of stuff, and then they got some funding, and then they kind of moved to a stage where
they really have a product. And they compete. I don't know if anybody has ever seen Shark Tank -- or the real one, the British show that spawned it -- but a guy who's an inventor will come up, and there are some investors, and they just rip that guy apart. It's really interesting television. But that's sort of what we do in the security space. And fascinating companies have come out of it.
And there's companies that are starting to think about security differently, which I
think is so critical. I think the old legacy stuff that we've been relying on is eroding
very quickly, like signature-based technologies, for example.
We're in a very interesting state with signature-based technologies; right? Five years ago, when all the malware was stuff you'd seen before and it was just about protecting users on the desktop, those things were great. And they're still important. But most of the meaningful attacks we're seeing today are freshly compiled malware -- compiled, like, three hours before it was actually sent and deployed through a phishing email or a download link. So that's very disruptive to the current set
of security technologies. So I think that's one interesting path, is
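A hash-based signature check, reduced to a few lines of Python with made-up byte strings, shows why fresh compiles are so disruptive: flip one byte in the binary and the fingerprint no longer matches anything in the database.

    import hashlib

    def fingerprint(binary: bytes) -> str:
        return hashlib.sha256(binary).hexdigest()

    # "Signatures" in the simplest sense: hashes of samples seen before.
    known_sample = b"\x90\x90PAYLOAD-build-1"
    signatures = {fingerprint(known_sample)}

    fresh_build = b"\x90\x90PAYLOAD-build-2"  # recompiled hours ago

    print(fingerprint(known_sample) in signatures)  # True: caught
    print(fingerprint(fresh_build) in signatures)   # False: walks right past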
the sort of death or the reimagining of signatures. The second is on privacy.
I don't know what's going to happen. I think we're going to go down one of maybe three
roads. One is, we recalibrate as a society on what privacy means. People are so knowable
now in ways that they never were before. You know, you mentioned the Carnegie Mellon team.
I don't know if anybody saw, but there was a presentation at a conference called Black
Hat earlier this year where a group of students had built an app -- I think it was an iPhone app; I have nothing personal against Android; it may have been Android. This app would let you take a photo of someone at a distance. It would then cross-reference that photo with a database of pictures and figure out
who that person was before you even interacted with them.
That means you're forming an opinion of that person -- and they bring a legacy drag of history -- before you even shake their hand for the first time. From a societal perspective, it's going to be really interesting to see how we recalibrate to something like
that. The other thing on the privacy front is that
the trust choices we've been making naively for a long time are just going to be broken
in the future. Somebody sends me an email saying, "Hey, dude, great meeting you at GTAC.
Remember, I said I was going to send that video. Here's the link."
Okay. You know, that's kind of interesting. Maybe I did meet you at GTAC. Maybe you did
say that there was a video link. How am I going to know the difference anymore? All
right? Folks know I'm at GTAC. It's online. That could be a legitimate person or an illegitimate person. And I have no reasonable data to make an assessment between the two.
And how do you deal with something like that as a society? That's going to be really interesting,
especially when tooling makes that process accessible for everybody. Right? Right now,
there's some effort associated with it. I have to, like, really have an ax to grind
with Hugh to kind of send him this. But when the marginal cost of sending one
more hyperpersonalized email goes to zero, I think we're going to be in a really interesting
state. So I think we're going to need to have security
tooling that brings visibility to data even in an email.
Okay, when that thing pops up, I don't just want a virus scanner run on it. I want it to highlight for me the key pieces of data that are about me and available online to everybody, so that I can make a more reasonable choice. So I don't know if that even starts to get
sort of the future of security. But I think that we're going to need some disruptive trust
technologies now that make security a lot more visible to the user and make them make
better choices. Because right now, they're making choices to share data, which is great,
it's connecting people, it's bringing the world together. But it has implications that
we don't fully understand yet because we haven't seen the bad guy tools mainstreamed yet.
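To make that email-visibility idea concrete, a minimal sketch -- with a hypothetical list of facts about the recipient that anyone could look up online -- marking which details in a message carry no weight as proof of acquaintance:

    import re

    # Hypothetical: details about the recipient that are publicly findable.
    PUBLIC_FACTS = ["GTAC", "Hugh"]

    def annotate(email_body: str) -> str:
        """Mark each publicly knowable fact so the reader can see that a
        stranger could have written this line as easily as a friend."""
        for fact in PUBLIC_FACTS:
            email_body = re.sub(re.escape(fact), f"[public: {fact}]",
                                email_body, flags=re.IGNORECASE)
        return email_body

    print(annotate("Hey Hugh, great meeting you at GTAC! Here's the video link."))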
I don't know. But, on the other side, I wouldn't worry about the Sharks.
>>James Whittaker: That was like a hundred answers. So because you're such a long-winded
answerer, we're only going to have time for one more. And then don't leave, because we're
giving away free cool stuff immediately after this.
So one more. Bonnie, you have one over there? >>> Hi. I have a quick question.
First, companies like Google and Facebook have been warning users that, hey, you possibly could have been hacked, because someone from Nigeria logged into your Gmail and you don't usually go to Nigeria.
Do you think users are capable of handling that? I've seen this kind of thing in other places -- credit cards, for example. I got asked questions by the TSA on the way down here because my credit card company decided my flight purchase was suspicious.
>>Hugh Thompson: That's good, yeah. >>> I know, totally.
There's got to be an example of where that can go wrong and maybe go too far. When you let users know that they could possibly face a problem, do you think they're able to process it -- and, more importantly, able to figure out why they were hacked and then take steps to avoid it happening to them again? Because someone can be told they were hacked and still click on every link and log into Facebook from the Apple Store. Like, it's going to happen next week.
>>Hugh Thompson: That's a great question. I'll try to give a short-winded answer to
it. So about maybe four years ago, I started a
company called People Security that just does security education for enterprises. And I
give that preamble because that's one of the biggest challenges we faced, is how do you
go to a development org, for example, so how do you go to developers, how do you go to
testers, how do you go to architects and designers and make security relevant, and get them thinking about security with every incremental choice? But then how do you go to the general populace
of users inside that company and first make them care, which is, like, a really important
thing; but, second, make them change after something bad has happened?
And it's very interesting, at least the stuff that I've found -- and this is totally anecdotal
-- is that, one, if you show people vivid examples, then they will remember them. And
I think, first, that's a very interesting point to recognize. If you just tell people,
"Don't click on stuff. It's dangerous." Or, "Look, don't access it from a kiosk or the,
like, Nigerian hackers guys are going to get into your Facebook account." Those things
don't tend to be very effective, even though they're very true.
But if, indeed, you say, hey, let me walk you through this process, so I am the hacker.
I go into this space, and I do this, and I do this, and I do this.
The retention rate on something like that is much, much higher.
So for -- and I'm just bringing it now to kind of the developer domain.
Take software security, for example, which is so critical. I mean, I think the pressure on software developers to make more secure software has increased dramatically over the last few years. But for a software developer, you have to make it personal. You have to show them a vulnerability in their own type of code or their own type of system. And you
have to go all the way through from here's an interesting problem, here's how a bad guy
looks at it. And now let's exploit it and do horrible things with it. And you have to
show them those horrible things for them to get it on the back-end.
So I think it's just a vivid display of what the bad guy does to the system.
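Hugh doesn't name a specific flaw here, but SQL injection is the classic walk-through for this kind of training: show developers their own query pattern, then exploit it in front of them. A self-contained sketch using an in-memory SQLite database:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    def lookup_vulnerable(name: str):
        # String concatenation: attacker input becomes part of the SQL.
        query = f"SELECT secret FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def lookup_safe(name: str):
        # Parameterized query: input stays data, never code.
        return conn.execute(
            "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    evil = "x' OR '1'='1"
    print(lookup_vulnerable(evil))  # [('hunter2',)] -- every secret dumped
    print(lookup_safe(evil))        # [] -- nothing matches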
So just telling somebody, hey, look, we think your account was hacked, that's going to lead
to them making a series of sort of even more paranoid choices that may not be appropriate.
Like, they won't click on the link in a phishing email anymore, but they'll always fall for the, "Hey, for security reasons, we're not including the link; look us up on the Web."
It changes their behavior, but not necessarily
in a good, positive way. So, again, sorry. That was medium-winded.
Not quite long-winded. >>James Whittaker: But you are going to send
everyone an email with a download link for the presentation.
>>Hugh Thompson: Absolutely. The presentation will be sent via link. Trust me. It's good.
Trust me. [ Laughter ]
>>James Whittaker: Thank you, Hugh, very much. [ Applause ]