>> The topic for today is how to disclose or sell an exploit without getting in trouble.
I'm Jim Denaro, an intellectual property attorney based out of Washington, D.C. I focus my work
on security technologies. Before I went to law school, I used to spend far too much time
poking at Mac bugs on my PowerPC, and I figured there was no better way to keep doing
that than to do this. So here we go. Because I'm an attorney and this does have
some legal component to it, although this is not a law talk, really, I have to give the standard
disclaimer that this presentation is not legal advice about your specific situation or your
specific questions. Even if you ask a question, we're still talking about hypotheticals.
If we develop an attorney-client relationship, then we can give legal advice, but this
talk alone does not create an attorney-client relationship. We can maybe do that later. Just
as a quick overview, what we're trying to accomplish in the next 20 minutes: we're going to cover
the types of risks faced by researchers, some risk mitigation strategies
researchers can take to try to reduce those risks, some options for disclosing a
vulnerability that may carry less risk, and then some of the risks
associated with selling an exploit. The overall goal is to make yourself
a harder target. If someone asks, can I be sued if I do this, or if this happens,
the answer is always yes. You can always be sued by anybody for anything at any time.
The only question is who's going to win, and the goal is to make it more likely that you
will win, which disincentivizes someone from actually suing you in the first place.
So let's start out with some good examples of the kinds of research activities that might
get somebody in trouble. These are generally real-life
cases. You found out how to see other people's utility bills by changing the HTTP query string.
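(To make that query-string example concrete, here is a minimal hypothetical sketch; the host name `utility.example` and the `account` parameter are invented for illustration and are not drawn from any real case.)

```python
# Hypothetical illustration of an insecure direct object reference:
# the server keys the bill lookup entirely off a client-supplied
# query-string parameter, with no check that the logged-in user
# actually owns that account.
from urllib.parse import urlencode, urlparse, parse_qs

def bill_url(account_id: int) -> str:
    # The account number lives in the URL, where anyone can edit it.
    return "https://utility.example/bill?" + urlencode({"account": account_id})

mine = bill_url(1001)           # the researcher's own bill
someone_elses = bill_url(1002)  # one digit away from someone else's

# Nothing but the parameter value distinguishes the two requests.
assert parse_qs(urlparse(someone_elses).query)["account"] == ["1002"]
```

Whether sending that second request counts as exceeding authorized access is exactly the kind of question this talk is about.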
I talked to somebody at a party the other night who had figured out how to do exactly that,
and he was wondering what to do about it. You discover your neighbor's Wi-Fi is not protected;
how did you find that out? You broke the crypto that's protecting some media that you had.
It gets more serious: there's actual money at stake. Or maybe you wrote a better
remote access tool; that sounds like you might make a lot of money. Surprisingly enough,
many of the same risks apply whether you're just changing HTTP query strings or
actually taking apart a DVD. So in general we're talking about techniques, which I've
defined broadly here: everything from a technique that might
be used for denial-of-service attacks to something more akin to
investigatory web browsing.

First, when is there risk to a security researcher?
There are three general areas where the risks show up. One, there
can be a threat of legal action before you go to a conference or make a disclosure;
there are examples listed here. Two, you might be the recipient of a legal action seeking an
injunction barring you from disclosing something before a conference. At that point we've moved from
mere saber rattling to an actual lawsuit being filed against you. And three, there's
the possibility of a legal action being initiated against you after you make the disclosure.
These are all real examples. Declan McCullagh of CNET and his colleagues have written articles
about them, and I recommend them to you. Some of these seem to come up around Black Hat and DEF CON on a regular
basis. That's when it can happen.

Your number one concern is typically going to be the Computer
Fraud and Abuse Act. You've probably heard about it lately, perhaps here or at other
conferences. The main issue is that it prohibits access without authorization or exceeding
authorized access. The two times you're likely to run into possibly exceeding authorized
access, or acting without authorization, are in the investigatory phase of working on
whatever technique it is you've got, and when you actually create a
tool that performs the technique. You might have a problem, or the
tool might do the act that is prohibited. In light of how much everyone has talked about how
vague the Computer Fraud and Abuse Act's notion of authorization is, I've created
a handy checklist to figure out if you might have a Computer Fraud and Abuse Act problem.
(Laughter)
There we go. Are you connected to the Internet?
Probably. Are you accessing a remote system? Probably. Do you have permission to access
that system? This is the real hard question. It's really hard to know if you have permission.
If you saw a banner go by that said you don't have access, you probably don't have access.
But there are a lot of cases where it's not so clear, and that's really where you get
situations like Andrew Auernheimer's, where he was querying an API on a regular basis.
There was no banner or clear prohibition against doing that; it was a public-facing API, after
all. There are real risks in figuring out whether or not you have permission. But that's
really all it takes.

Unfortunately, it's not just about what you do. The Computer Fraud
and Abuse Act is also about what your friends do. I believe the risk of being caught up in a conspiracy
to violate the Computer Fraud and Abuse Act is most certainly enhanced by the prevalence
of social media today. If you're on Twitter or another easy-to-use social media platform,
and you're talking to your friends about how you might do something, or answering questions
about a technique you've developed, you're
starting to head down the road of conspiracy. Conspiracy typically does require an overt
act in order to really fulfill the conspiracy, and typically just discussing something with
someone does not. But if you start providing technical support for something that someone
else is doing, you're definitely increasing the risk of being caught up in a conspiracy
to violate the Computer Fraud and Abuse Act, if not violating it yourself. We've got examples
of where the Computer Fraud and Abuse Act has been applied, because that's how we see how
it's being applied: we look at examples and compare what we're doing to
things that happened in the past to other people, and see how close those comparisons
are. And since we're in Las Vegas, we absolutely have to talk about the case of
Nestor. Nestor was really into video poker, and he liked to play and play and play. He
got really good at it. He played so much that he discovered a bug in the video poker
software: he could play one type of game, bet a bunch of money in that game,
then switch to another game, and a multiplier would be applied to his bet, so when he won
he got an enormous payout. He figured out how to reproduce this bug, and he and his
friends were doing it and making a lot of money. Eventually, as these stories always
end, he got caught, and he was charged with violating the Computer Fraud and Abuse Act.
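(The bug described above, state from one game mode leaking into another, can be sketched in miniature. This toy class is an invented illustration of that class of bug, not actual video poker firmware.)

```python
# Toy illustration of the class of bug described above: a payout
# multiplier set in one game mode survives a switch to another mode.
class VideoPokerMachine:
    def __init__(self):
        self.bet = 0
        self.multiplier = 1

    def play(self, bet: int, multiplier: int = 1) -> None:
        # Some game variants apply a payout multiplier to the bet.
        self.bet = bet
        self.multiplier = multiplier

    def switch_game(self) -> None:
        # BUG: switching games clears the bet shown on screen but
        # forgets to reset the multiplier from the previous game.
        self.bet = 0
        # self.multiplier = 1   # <- the missing reset

    def payout(self, winning_bet: int) -> int:
        return winning_bet * self.multiplier

machine = VideoPokerMachine()
machine.play(bet=10, multiplier=8)  # qualify for a big multiplier
machine.switch_game()               # stale 8x multiplier survives
print(machine.payout(2))            # a 2-credit win pays out 16
```

The exploit is nothing more than pressing the buttons in an order the developers never tested, which is what made the unauthorized-access charge so strange.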
Under the Computer Fraud and Abuse Act, as we saw, it's mostly unauthorized access or exceeding the authorization
you had. And it's hard to imagine: he didn't access the firmware
or take the game apart. He sat there and pushed the buttons on the machine. How you could
exceed authorized access to a video poker machine is mind-boggling,
but those charges were assessed against him. Ultimately, the Department of Justice did
not pursue those charges; they went ahead with other fraud charges. But nonetheless, for
some period of time, he was facing the Computer Fraud and Abuse Act for doing exactly that.
It's also worth looking at the tragic case of Aaron Swartz, who spoofed his MAC address
to download journal articles. That was the Computer Fraud and Abuse Act. Andrew Auernheimer, who
allegedly conspired to run an automated script to plug in identifiers for iPads and get e-mail
addresses, didn't even do it himself. He's doing several years in federal prison for
that. It's also worth noting that the Department of Justice has said in its manual that conspiracy
to hack a honeypot can violate the Computer Fraud and Abuse Act. There's no end to the
sorts of things that can violate the Computer Fraud and Abuse Act, so you're looking at a
situation where the Computer Fraud and Abuse Act operates almost as an ex post facto law: the
Department of Justice is able to look at what you did after the fact, and if they don't
like it, or they don't like you for whatever reason (you may be trollish, for some reason),
you're likely to be on the wrong end of a Computer Fraud and Abuse Act prosecution.
And beyond the government, the company that is the target of the exploit can also pursue whoever
accessed its system without authorization. The question, then, is whether there's anything
we can do to reduce our chances of being on the wrong end of this type of lawsuit.
Well, let's take a quick look. We don't want to go too far into the statute; this is not a continuing
legal education conference. But let's look at the statute and see if there are key words
we can at least identify. Here we are: whoever having knowingly accessed a computer without
authorization. Another part: whoever intentionally accesses a computer without authorization.
So one of the things you can do is try to avoid unintentionally creating knowledge
and intent. It's a little bit hard to do this for yourself, since if you intend to do something,
you intend it; but you can avoid doing it in connection with other people. So, for example, I would suggest
that you do not direct information about how to use some kind of technique to someone
you suspect, or have reason to know, is likely to use it illegally. Be careful when providing
technical support for a new technique you've developed. If I were your lawyer, I would advise
you not to answer that tweet from someone asking how to make something
more effective, perhaps. The next slide is more detailed.
Here are some more approaches you might take. Don't provide information directly to individuals,
especially if you're not sure who they are or what they might be up to; consider just
posting things on a Web site only. Do not post information to forums where you suspect,
or where it is known, that illegal activity is generally promoted. If you publish
on your own Web site or have control of the post, consider disabling comments so you don't
end up with people discussing potentially illegal uses of your technique. Lastly, don't
maintain logs.
(Laughter)
That's enough of the Computer Fraud and Abuse Act
for now; there's not a lot you can do about it beyond just being careful. Let's move on
to temporary restraining orders. This is particularly timely, actually, because you may have read
the story about the VW Group and the Megamos Crypto encryption used in vehicle immobilizers.
European security researchers discovered a flaw
in the encryption used in the vehicle immobilizers on Porsche, Audi, and
Bentley vehicles. They were going to present it at a conference in Washington, D.C., in
a few weeks, and they got themselves slapped with a temporary restraining order preventing
them from making the disclosure at the conference. How did this happen, and how can we keep it
from happening again? We've seen it here at Black Hat and DEF CON; talks have been
stopped by temporary restraining orders. Take a quick look at the factors that courts
weigh when deciding whether to grant a temporary restraining order to prevent a
researcher from disclosing information about a vulnerability. Will the VW Group suffer
irreparable harm if the TRO is not issued...

Good evening, sir.
>> Who knows how this works? Is he a new speaker?
>> Yes.
>> It's really hard to be selected as a speaker at DEF CON. You need to present talks
to eventually have yourself up here, right? I've been drinking all day. A big round of
applause for Jim. All right. One more order of business: we need a new person, someone at their first
time at DEF CON. First hand up right there. Red shirt. We've got extra; let's get more.
Two people. First hand up over there. There we go. Cheers to our new speaker. Let's see
if he can pick up where he left off. We're going to work on new material for tomorrow.
Thank you.
>> JAMES DENARO: All right. So that was great. Thank you.
So, just a quick look at some of the factors the court is going to weigh when deciding
whether to grant a temporary restraining order to someone like the VW Group who wants
to prevent something from happening at a conference. Will they suffer irreparable harm?
They've got an embedded system; someone is going to explain how to break it, and that's impossible
to fix in any short period of time. Probably irreparable harm; money isn't going to fix
it. That factor goes in the VW Group's favor. Will there be harm to the researcher? Your paper got
delayed? You couldn't include some part of the algorithm and had to pare it back? It's hard
to see that as a huge harm to the researcher. You might feel bad about it, but compared to
the huge sums of money the VW Group is going to have to pay to fix this, that factor is not going
to favor the researcher either. Public interest: this is a fun one, because we might
think the public interest clearly favors disclosing the vulnerability so it can be fixed. The
court is probably going to go the other way on that and decide that the risk of all
these Porsches and Bentleys being stolen weighs much more heavily in
the public interest than having your obscure crypto talk go forward. The last factor is
the likelihood that the requester will ultimately prevail. This is the one we need to focus
on, because the VW Group has to have a cause of action; they can't just say, we don't like
it. They have to say, here's why you need to stop: you did something bad to us.
In the case of the VW Group with the Megamos case, and also in the case of the Cisco
disclosure, what we had was the use of copyrighted material, and that was the hook that got the
TRO issued. So the obvious advice is to avoid the use of copyrighted material.
If you include source code or object code from whatever it is you're working on,
that gives leverage to whoever wants to stop you from disclosing it. There is a
fair use exception if you use pieces of code, but that's a case-by-case analysis; you
can't just declare something fair use. It depends on how much you use and on other factors that
are very specific to what's actually going on in your case. So try to avoid it if
you can. It may not be possible, but avoid it to the extent you can. Also avoid dark-net
sources for wherever you're getting this stuff. In the Megamos case, the court talked about
the fact that the researchers obtained some information about how the Megamos system
worked through some sketchy channels. I don't recall the opinion saying exactly where they got it,
but it was some sort of BitTorrent, peer-to-peer type thing; wherever they got it, it wasn't from
the VW Group or Megamos. Another thing you want to do is be aware of pre-existing contractual
relationships that you, as a security researcher, might have with the target of whatever
you're working on. These contractual agreements could come in
the form of terms of service, end user license agreements, nondisclosure agreements, or employment
agreements. An end user license agreement might well have
provisions that prohibit reverse engineering the software, for example, and that might be exactly what
you're doing as part of your exploration into your technique. That could give leverage
to someone trying to stop you: oh, you've breached this. Nothing is for certain; it's
just an argument they have, and there's not much you can do about it. Pretty much every piece
of software you have is going to come with some agreement like this, assuming you've come by it
legitimately. There's not a whole lot you can do about that, but you can at least be
aware of the risk, if nothing else. How far you need to go to mitigate the risk depends
on the techniques you've used in your research. If you've done things that clearly look like
some of the examples of what has gotten people prison time, that's something
you need to be careful of, and maybe take more aggressive mitigation measures in order
to hide some of the information about what you're doing.
So, for example, if in the Megamos case no one had identified that it was the VW Group
whose crypto system had been compromised, the VW Group would not have
been able to go after a temporary restraining order against the researchers.
So perhaps there's an opportunity here for the conference-going community to create a
track where people could present things that get a little asterisk
next to them: this is something that had to be kept quiet. A confidential disclosure; trust
the review board. This is going to be really cool, but we can't tell you what it is,
because then you won't get to hear it. So maybe that's one approach. I'd like to talk
about some of the ways you might make a disclosure that are relatively less likely to get you
in trouble. You can obviously disclose to the responsible party. That's what we'd like
to do; that's what the responsible disclosure paradigm is all about: you have a problem
with their system, you tell them. Unfortunately, this is actually relatively high risk, and the risk scales with
the questionableness of whatever technique you used to find the
vulnerability. So, if you connected to a remote system you didn't have permission for,
and that's how you did it, it may not be a great idea to tell them about it, because if they
don't like it, they've got an action against you. If you've inconvenienced them, that's a problem
for you. You might think you're doing them a favor; they might not agree that you're
doing them a favor. If you're able to submit anonymously to the vendor,
or whoever the responsible party is, that's great. It depends how good your opsec is, I guess;
a lot of times you think you're anonymous but you're not as anonymous as you thought
or hoped you were. That's a risk you need to weigh yourself. If there's a bug bounty,
maybe you're at less risk. You can also disclose to a governmental authority, though perhaps you
don't believe it will ever get to the vendor that way. And again, if your techniques
were perhaps questionable, you might not necessarily want to be submitting the vulnerability to a
governmental authority; you may have an interest in keeping
your identity anonymous. You can try to submit anonymously to the government, but I
don't know how much we can really trust that any more.
Unfortunately, this is a legal talk, a somewhat legal talk, and you almost never get a legal
talk where someone will tell you something for sure: absolutely, 100%, you will not get
in trouble if you do this. But fortunately, we do have a case here where there is one group
of people who really don't have to worry about getting in trouble with the Computer Fraud
and Abuse Act when they disclose a vulnerability, and here they are. It's okay to disclose if you're
one of these people. Although she really should not have been hacking the palace computer,
we're not going to hold that against her. So we've been thinking about ways we might
be able to leverage opportunities for security researchers to make disclosures while keeping
the risk as low as possible, and we're working on creating a pilot program where attorney-client
privilege can be leveraged to hide the identity and the techniques used by a security researcher
in making a disclosure. The concept works like this: the researcher discloses the
vulnerability to a trusted third party, which would be an attorney.
Only to the attorney. It's critical that this be a completely confidential
disclosure, to maintain the confidentiality of the disclosure so that
outside entities can't get to it. The trusted third party does not publish
the vulnerability on behalf of the researcher; however, the trusted third party does disclose
the vulnerability to the responsible party, whoever has this vulnerability. The researcher
remains anonymous through the entire process. This is possibly of use if there's no better
option. It's a little bit cumbersome, and there are some side effects, chiefly that because the researcher
remains anonymous, the researcher doesn't get public credit for the research. But it is one
possible way for a researcher to be able to disclose and remain about as anonymous
as one could possibly get. This is a pilot program we're currently working on; we're
kicking the bugs out right now. If anyone is interested in talking to us further about
it, we definitely welcome your input, and please see me afterwards. We should now turn to selling
very quickly. Right now there is no law in the U.S. that prohibits the selling of an
exploit. That situation is probably likely to change in the not-too-distant future,
but for now there's not too much to worry about. Unless your techniques in developing
your exploit had problems, in which case you still have a problem, the fact of the sale itself
is not something that's going to get you in trouble. However, there's a lot of focus on
this market now. Here are some recent articles from May of 2013: the booming zero-day trade has
Washington experts worried. My favorite: "The U.S. Senate wants to control malware like
it's a missile." This stuff is dangerous. Every year, Congress has to pass the National
Defense Authorization Act, which sets the budget for DoD and includes a bunch of other stuff
that gets stuck in there. This year, well, for 2014, the Senate version (it hasn't
been passed yet; it's still in Congress) has provisions that seek to
begin the process of regulating the sale of exploits. The House version
doesn't have this; it's still just in the Senate. But you know, I think this is where
it's headed. The bill notes that the president shall establish a process for developing policy
to control the proliferation of cyber weapons through a whole series of possible actions:
export controls, law enforcement, financial measures, diplomatic engagement, and so on. The Senate
Armed Services Committee, which had the bill before it went to the rest of the Senate,
offered some commentary on it. They referred to dangerous software, a
global black market, a gray market; it starts to look really bad. But they note that there
needs to be a carve-out for dual-use software and pentesting tools. In
Europe, the European Parliament recently passed a directive; they're a little ahead of us.
This prohibition on the sale of "tools," as they call them, basically exploits, will be required
to be enacted by all of the member states in short order, and the provision prohibits
the production, sale, procurement for use, import, and distribution of tools that can
be used to commit certain enumerated offenses, which cover pretty much all the bad things you
can think of doing with a computer. However, there's a very important exception
for tools that are created for legitimate purposes, such as testing the reliability of
systems, and the directive further notes that in order to violate this law, you need
to show a direct intent that the tools be used to commit one of the
offenses. So in both cases, in the U.S. and in Europe, we're seeing this trend. It
really goes back to the definitional problem: how do we define what an exploit is, and how
do we make sure that legitimate tools can still be bought and sold? This is prospective; we
don't know what the laws will look like, but I would start thinking about this now. Think about
dual-use tools. If you write something, don't pitch it as the next greatest hack;
you're creating pentesting tools. This has gone on for a long time. Look
at software I'm sure you've all used: Copy II Plus on the Apple II, or Locksmith. Backup
software. The manuals for these programs had elaborate disclaimers: this is for
backing up your floppies, not for making illegal copies. And that is really
the conundrum, and that's where exploits will go. Some exploits will never be able to be
looked at as dual-use tools, for sure; if you have the
nuclear-missile equivalent of an exploit, it's hard to justify its pentesting
value. But for a lot of tools, perhaps that's where they should go. If you are selling:
Know your buyer, to the extent you can. I think regulation is just one bad outcome away: someone
in the U.S. is going to sell an exploit, it's going to go through some channel and get used
against some U.S. interest. We may not hear about it; it may be secret. But this will happen,
and then there will be a huge drive to stop it from happening again, quickly. It's for the same
reason that as soon as someone is murdered with a certain weapon, that weapon has to be banned;
that's the way laws get created, very reactionary, and I expect that to happen here. Maybe you
can prevent that from happening. Know your buyer. If you're selling something, don't sell
into a channel that's likely to run afoul of a United States embargo. Maybe your best bet
is to sell to the U.S. Ask for assurances from your buyer, so that you don't have knowledge
that it's going someplace it's not supposed to go. You can be lied to, and you can't control everything,
but at least you can get assurance that it's not going to be used in some illegitimate way.
Also, you can always use disclaimer language, and I have some nice examples of disclaimer language
here. This huge chunk of text at the top is actually from a software product that many
of you have probably used many times. It's good stuff; I've highlighted the best of the
operative language in it. If you're selling something, be sure to use disclaimer language
along these lines; it would help keep you from being charged as complicit
in any sort of illegal use to which the software might eventually be put.
And lastly, I'd just like to highlight this little paragraph at the bottom, which is actually
from the end user license agreement that comes with the Apple iTunes Store.
It requires that you agree you will not use these products for any prohibited purpose,
including the development, design, or manufacture of nuclear missiles or chemical or biological
weapons. My God. Words With Friends: that's dangerous stuff.
So thank you for coming. This is my contact information.
(Applause.)
We have time here for questions, so if people want to line up, I'm happy to entertain them
as best we can. There are definitely free speech issues, especially in the temporary
restraining order context. Second Amendment? Sorry. Come see me about that. Question back
here?
>> AUDIENCE: What about using a corporation to limit your liability for disclosure or selling?
Is that...
>> JAMES DENARO: Corporations can be held liable in many cases, even under the Computer
Fraud and Abuse Act. It hasn't happened yet, but a corporation could be held liable.
>> AUDIENCE: A question regarding full disclosure
versus responsible disclosure. When we do it, we do it via responsible disclosure:
we contact the vendor, we give them 30 days, we tell them our intent to publish, and then
we publish everything, the actual vulnerability and how to exploit it, so people can replicate it and
do whatever they want. In most cases the vendors get a hotfix in within a week, and if they
provide the hotfix within 30 days, we write it up and say: to fix it, install hotfix
whatever. Sometimes vendors will say they need more time, and maybe we'll negotiate a couple
of days, but sometimes they'll say, we're not going to fix it, and you can't publish it.
I won't explain what we do in that case. But Google recently published the fact that they plan
to disclose vulnerabilities within 7 days, right? A 7-day turnaround. So what
happens if a company like Google, I don't want to use the word threaten, but intends
to publish a vulnerability within the 7-day turnaround, and the company says, don't, or
we'll sue you. What happens?
>> JAMES DENARO: Google is at risk if they have some kind of obligation not to publish. It depends
on the specific circumstances. But in this case, if no law has been broken, then
Google could publish without any...
>> AUDIENCE: Take the case for me, for example, where I contact a vendor and say I've got
10 vulnerabilities I intend to publish, and they come back and say, I'm going to sue you.
Is that the same as with Google, where Google says, we're going to give you 7 days, not 30, and the company comes
back and says, Google, I'm going to sue you? Them suing Google is not the same as them suing me.
>> JAMES DENARO: That's the unfortunate part.
>> AUDIENCE: Is it a case of how good your legal team is, or how expensive your legal team is?
>> JAMES DENARO: Exactly.
>> AUDIENCE: How much do you charge?
>> Wherever you want to meet him to carry on; there are about five or six other people.
>> So this talk is over. We've got to get ready for the evening. Down the hallway,
he'll answer the rest of your questions. Unfortunately, there's no Q&A room, because it's been disassembled
too. Thank you, all.
(Applause.)