>> DANIEL SELIFONOV: So, thanks for coming. We are here to talk about full disk encryption,
why you are not as secure as you think you are. Oh, what just happened? I am missing
a slide. I will just say what it was. So how many of you encrypt the hard drives in your
computer? Just a show of hands. Oh, wow. Welcome to DEF CON. (Laughter).
>> DANIEL SELIFONOV: So I guess 90% of you at least. How many of you use open source
full disk encryption software as something that you can potentially audit? Not as many
of you. But TrueCrypt or, you know ‑‑ how many of you always fully shut down your
computer whenever you are leaving it unattended? Okay. More of you. I would say about 20%.
How many of you have ever left your computer unattended for more than a few hours? A lot
of hands should be up. >> You talking about on or off?
>> DANIEL SELIFONOV: Either on or off. (Laughter). >> DANIEL SELIFONOV: I mean I would be surprised
if you are not because I would have to ask are you like zombies that don't sleep and
then the other answer, of course, is anyone who leaves their computer unattended for more
than a few minutes, also pretty much everyone. So why do we encrypt our computers? It
is hard to find anybody talking about this, which is really weird, and I think it is really
important to articulate our motivations for adopting a particular security practice
‑‑ if we don't, we don't have a sensible goalpost to see how we are doing. There are
plenty of details in the documentation of full disk encryption software, but I argue
that we encrypt our computers because we want some control over our data: assurances about
the confidentiality and integrity of our data, that nobody is stealing our data or modifying
it without our knowing about it. We want self-determination over our data.
We want to be able to control what happens to it. There are situations where you have
liabilities for not maintaining the secrecy of your data: lawyers have attorney‑client
privilege, doctors have doctor‑patient confidentiality. And if you are leaking
data, there are companies which have to notify their customers ‑‑ oh, someone
left a laptop unencrypted in a van and it got broken in to and stolen, so your data
might be out there on the Internet. It is really all
about physical access to our computers that we want to protect against, because full disk encryption
does not do anything against someone who owns your machine remotely. But it also gets at a greater point:
if we want to build secure networks, if we want a secure Internet, we can't have
that unless we have endpoints that are secure. You can't build a secure network without
the foundation of secure endpoints. By and large we have figured out the theory side of
disk encryption. We know all the block cipher modes of operation. We know how
to derive keys from passwords securely. So mission accomplished, right? We can all stand
on an aircraft carrier, you know. (Laughter). >> DANIEL SELIFONOV: The answer is no, it
is not the whole story. There is still a hell of a lot of cleanup that you need to do. Even
if you have absolutely perfect cryptography ‑‑ even if you know it can't be broken in any
way ‑‑ you have to implement it on a real computer, where you don't have these nice black box
academic properties of your system. And so you don't attack the crypto when you are trying
to break someone's full disk encryption. You either attack the computer and trick the user
somehow, or you attack the user and convince them to give you the password, or get it by
some other means. And de facto use doesn't really match up with the security models of
the full disk encryption software. If you look at full disk encryption software,
it is focused on the disk. The actual documentation says they do not secure the data on your computer if someone
has ever manipulated it, or is manipulating it while it is running. Basically their security
model is: if it encrypts the disk correctly and decrypts the disk correctly, we have
done our job. I apologize for the text; you probably will not be able to read it very
well, so I will read it here. This is an exchange between the TrueCrypt developers
and a security researcher by the name of Joanna Rutkowska, where she brought up this attack
and tried to talk to them. This is what they said: "We never considered the feasibility of
hardware attacks; we have to assume the worst. Do you carry your laptop with you all
the time? How the user ensures physical security is not our problem." And she asks very directly:
why in the world do I need encryption then? Ignoring the feasibility of an attack doesn't
make it go away. We live in the real world, where we have these systems that we have to deal with.
We have to implement them. We have to use them. And there is no way that you can compare
a ten‑minute attack that you can conduct with just software on a flash drive to something
where you need to pull apart the hardware and manipulate the system that way. And regardless
of what they say, physical security and resistance to physical attack is in the scope of full
disk encryption. It doesn't matter what you disclaim in your security model. At the very
least, if they don't want to claim responsibility, they need to be very clear and unequivocal
about how easily this stuff can be broken. So this is a diagram ‑‑ sort of an
abstract system diagram of what is inside a modern computer, and
sort of what the boot process is, just so everyone is on the same page about what actually
happens here. The boot loader gets loaded from secondary storage and
gets copied into main memory through a data transfer. The boot loader then asks
the user for some sort of authentication credential, like a password or a key on a smart card or something
like that. That password is then transformed by some process into a key, which is then
stored in memory for the duration of the computer being active. The boot loader transfers
control over to the operating system, and then both the operating system and the key remain in
memory for the transparent encryption and decryption of the disk. This is a very
idealized view. It assumes that nobody is trying to screw with it, and there are different ways it
can be broken. So let's enumerate a few things that might go wrong if someone is trying to attack
you. (Lost audio) ‑‑ some other hardware component that you can attach to it, a PCI card or ExpressCard
or Thunderbolt, the new adapter that gives you naked access to the PCI bus. And
consider attacks where a screwdriver might be required, where you have to remove some
system component, and soldering attacks where you are either adding or modifying system
components in order to try to break these things. So one of the first types of attacks:
a compromised boot loader, also sometimes known as the evil maid attack. Since the
boot loader is unencrypted code that has to start executing as part of
the system boot process ‑‑ something you can bootstrap yourself with ‑‑ there are a few different ways you can attack it. You
can physically alter the boot loader on the storage device, or you can compromise the
BIOS and load a malicious BIOS that hooks the disk reading routines, in
a way that survives removing the hard drive. In any case, you modify the system so that when
the user enters their password it gets written to disk unencrypted. You can do something
similar at the operating system level. This is especially true if you are not using full
disk encryption: there is a whole operating system that somebody can manipulate. This
can also happen from a remote attack on the system ‑‑ someone gets root on the box and can now read
the key out of main memory. That key could then be either stored on the hard
drive in plain text for later acquisition by the attacker, or sent over the network
to a command and control system. Another possibility, of course, is capturing the user
input via key logger ‑‑ software, hardware, something exotic like a pinhole camera, or
maybe a microphone that records the sounds of them typing to figure out what keys
they pressed. This is kind of a hard attack to stop because it potentially includes components
outside of the system. I want to talk about the cold boot attack. If you had asked, five years
ago, even people who are very security savvy what the data retention properties, the
security properties, of main memory are, they would tell you that when it powers down you lose the
data very, very quickly. Then an excellent paper from Princeton in 2008 discovered that
actually, at room temperature, you are looking at several seconds with very,
very little data degradation in RAM. And if you cool it down to cryogenic temperatures
using an inverted can of air duster, there are several minutes where you get
very, very little degradation in main memory. And your key is in main memory, so if someone
pulls the modules out of your computer, they can attack your key
by finding where it sits in main memory in the clear. There have been
some attempts at resolving this in hardware ‑‑ memory modules need to be scrubbed ‑‑ but that
is not going to help you when somebody takes the module out and puts it in another computer, or in
a dedicated piece of hardware for extracting memory module contents. Any PCI device on your
computer has the ability to read and write the contents of any region of main memory.
They can touch anything. This was designed back when computers were much slower, when
we didn't want the CPU baby‑sitting every transfer between devices and main
memory. So devices gained this direct memory access capability: they can
be issued a command by the CPU, finish it on their own, and the data is in memory whenever you
need it. PCI devices can also be reprogrammed. A lot of these things have writable firmware
that you can just reflash to something hostile and use to compromise the operating system, or execute
any other form of attack that either modifies the OS or pulls out the key directly.
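To make the stakes concrete: once an attacker has a raw memory image ‑‑ whether from a cold boot or a DMA capture ‑‑ recovering the key is mechanical, because tools (such as the Princeton team's aeskeyfind) scan for byte runs that have the internal structure of an AES key schedule. The sketch below is illustrative, not from the talk; the helper names and the synthetic image are my own:

```python
# Sketch: locate AES-128 keys in a memory image by checking for valid
# key schedules -- the technique behind tools like aeskeyfind.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES polynomial 0x11B."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

# Build the AES S-box: multiplicative inverse, then the affine transform.
INV = [0] * 256
for x in range(1, 256):
    INV[x] = next(y for y in range(1, 256) if gf_mul(x, y) == 1)

def _affine(b):
    s = 0x63
    for i in range(5):
        s ^= ((b << i) | (b >> (8 - i))) & 0xFF
    return s

SBOX = [_affine(INV[x]) for x in range(256)]
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key):
    """AES-128 key expansion: 16-byte key -> 176-byte round key schedule."""
    w = list(key)
    for r in range(10):
        last = w[-4:]
        t = [SBOX[last[(i + 1) % 4]] for i in range(4)]  # RotWord + SubWord
        t[0] ^= RCON[r]
        for i in range(16):
            # each new byte = byte 16 back XOR (t for the first word,
            # else the byte 4 back)
            w.append(w[-16] ^ (t[i] if i < 4 else w[-4]))
    return w

def find_aes128_keys(image):
    """Return every offset where a full AES-128 key schedule sits in `image`."""
    return [off for off in range(len(image) - 175)
            if expand_key(image[off:off + 16]) == list(image[off:off + 176])]
```

Random data essentially never expands into a matching schedule, so a hit pinpoints the key bytes ‑‑ which is why keeping the key out of RAM entirely, as discussed next, matters.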
There is forensic capture hardware that is designed to do exactly this in criminal investigations:
you plug something into the computer and pull out the contents of memory. You can
do this over FireWire, over ExpressCard, and over Thunderbolt, the new
Apple adapter. These are basically external ports
into your internal system bus, which is very, very powerful. So wouldn't it be nice if we
could keep our key somewhere other than RAM, since we have demonstrated that RAM is not
terribly trustworthy from a security perspective? Is there any dedicated key storage or cryptographic
hardware? You can find cryptographic accelerators, and they are tamper resistant ‑‑ certificate
authorities have these things to hold their top secret keys ‑‑ but they are not really designed
for high throughput operations like disk encryption. Are there any other options?
Can we use the CPU as sort of a pseudo hardware crypto module? Can we compute something
like AES in the CPU using only CPU registers? Intel and AMD have rather excellent
instructions which take all the hard work of AES out of your hands ‑‑ just a single assembly
instruction per round. The question is then: can we store our key outside of main memory, and
can we actually perform this process without relying on main memory? The register file is another fairly large
store ‑‑ I don't know if you have tried adding up all the bits you have in registers, but
it is something like four kilobytes that we could dedicate to key storage and scratch space for our encryption
operations. One possibility is using the hardware debug registers. There are four of them, and
in 64‑bit mode each one holds a 64‑bit pointer, so that is 256 bits of potential
storage space that most people will never actually use. The advantage, of course, of
using debug registers is that they are privileged registers: only the operating system can access
them. And we get other nice benefits, like the fact that when the CPU is powered down,
either by shutting off the system or putting it in sleep mode, you lose all register contents.
You can't cold boot these. A researcher in Germany actually implemented this as TRESOR,
presented in 2011, and it is not any slower. How about instead of storing a
single disk key, we store a master key? This gets us into more of the crypto module space.
We can store a single master key, which never leaves the CPU after bootup, and then load and unload
wrapped versions of other keys as we need them for additional tasks. The problem is
this: we can have our keys stored outside of main memory, but the CPU
is ultimately still going to be executing the contents of memory. A DMA transfer or some
other manipulation can alter the operating system and dump the keys. Can we do anything about
the DMA attack? As it turns out, yes, we can. Recently, as part of new technologies
for enhancing server virtualization ‑‑ for performance reasons people like being able to attach a
network adapter directly to a virtual server, without it needing to go through the hypervisor ‑‑ new
hardware was developed so you can sandbox PCI devices: the IOMMU. So this is perfect. We can set
up IOMMU permissions to protect our operating system from arbitrary device access.
And again our friend in Germany has implemented a version of TRESOR on top of the BitVisor micro-hypervisor, and
it transparently does the disk encryption ‑‑ disk access is totally transparent to the
OS, the debug registers cannot be accessed by the OS, and it is secure from manipulation. As
it turns out, there are other secrets in memory too. We used to do
container encryption, and now we all do full disk encryption ‑‑
because it is very, very difficult to make sure you don't have accidental writes to, or
caching outside of, a container in a container encryption system. Now that we are re‑evaluating main memory
as not secure, not trustworthy, we need to treat it the same way. Think of things that are really
important, like SSH private keys, PGP keys, your password manager, or any top secret
documents that you are working on. So I had a very, very silly notion: can we encrypt
main memory, or at least most of the main memory where we are likely to keep secrets, so we
can at least minimize how much we are going to leak? Once again, surprisingly, the answer
again is yes. A proof of concept in 2010 by a researcher named Peter Peterson implemented
a RAM encryption solution. It wouldn't encrypt all of RAM. It would basically split
main memory into two components: a small fixed clear region, which would be unencrypted, and then a
larger sort of pseudo swap device where all the data was encrypted prior to being kept
in main memory. It was quite a bit slower in synthetic benchmarks, but in the real world,
when you ran something like a web browser benchmark, it actually did pretty well: 10% slower. I
think we can live with that. The problem with this proof of concept implementation is that it stored
the key for the encryption in main memory ‑‑ because where else would you put it? The author considered
using things like the TPM for bulk encryption operations, but those things are even slower
than dedicated hardware crypto systems. Totally unusable. But if we have the capability to use
the CPU as sort of a pseudo crypto module, it should be fast enough to do these things.
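Peterson's split design is easy to sketch: a small clear region, plus a pseudo swap device whose pages are encrypted on the way in and decrypted on the way out. The toy model below is mine, not Peterson's code, and it uses a SHA-256-based keystream as a stand-in for AES-CTR; a real implementation would use AES-NI, with the key held in CPU registers rather than a Python attribute:

```python
import hashlib
import os

class EncryptedSwap:
    """Toy model of a split-memory design: pages are XORed with a
    per-page keystream before being kept in (untrusted) main memory.
    SHA-256 in counter mode stands in for AES-CTR, for illustration only.
    """

    def __init__(self, key):
        self._key = key          # in the real design: never in main memory
        self._store = {}         # page number -> ciphertext

    def _keystream(self, page_no, length):
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                self._key + page_no.to_bytes(8, "big") +
                counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    def page_out(self, page_no, data):
        """Encrypt a page as it leaves the clear region."""
        ks = self._keystream(page_no, len(data))
        self._store[page_no] = bytes(a ^ b for a, b in zip(data, ks))

    def page_in(self, page_no):
        """Decrypt a page on its way back into the clear region."""
        ct = self._store[page_no]
        ks = self._keystream(page_no, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

swap = EncryptedSwap(os.urandom(16))
```

A cold boot or DMA capture of `_store` yields only ciphertext; the exposure is limited to whatever lives in the clear region (and, in this toy, the key itself ‑‑ which is exactly the gap the register-resident master key closes).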
Maybe we can use something like this. So let's say we have a system set
up: our keys are not in main memory, our code for manipulating our keys
doesn't touch main memory, main memory is encrypted, and most of our secrets are not going to leak. But how do
we actually get a system booted up into this state? We need to start from a turned-off
system, authenticate ourselves to it, and get the system up and running. How do we do
this in a trustworthy way? Because after all, someone could still modify the system software
to trick us into thinking that we are running this great new system when in reality it is
not doing anything. So one of the very important topics is being able to verify the integrity
of our computers. You, the user, have to verify that the computer has not been tampered with before
you can trust it enough to use it. The Trusted Platform Module has gotten a bad rap, but it has the capability
to measure your boot sequence in a couple of different ways, and
to control what data will be revealed by the TPM unless the system is in a particular configuration
state. So you can seal data to a particular software configuration that you are running
on your system. There are a couple of different implementation approaches to this, and there
is fancy cryptography to make it hard to get around. Maybe we can use this.
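The measurement-and-sealing semantics are simple to model: each boot stage is hashed into a platform configuration register by extend (new = H(old || measurement)), and a secret sealed to a PCR value can only be recovered if every measurement replays exactly. The Python below models only the semantics ‑‑ it is not a real TPM API, and a real TPM does this in hardware with a root key that never leaves the chip:

```python
import hashlib
import hmac
import os

def extend(pcr, measurement):
    """TPM extend: a PCR can only be advanced by hashing, never set."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot_pcr(stages):
    """Fold boot stage measurements (BIOS, loader, kernel...) into a PCR."""
    pcr = bytes(32)                      # PCRs start zeroed at reset
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

def seal(srk, pcr, secret):
    """Bind `secret` to a PCR value with a key derived from both."""
    pad = hashlib.pbkdf2_hmac("sha256", srk, pcr, 10_000, len(secret))
    ct = bytes(a ^ b for a, b in zip(secret, pad))
    tag = hmac.new(srk, pcr + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(srk, pcr, blob):
    """Release the secret only if the boot measurements replayed exactly."""
    ct, tag = blob
    expected = hmac.new(srk, pcr + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                      # wrong configuration: refuse
    pad = hashlib.pbkdf2_hmac("sha256", srk, pcr, 10_000, len(ct))
    return bytes(a ^ b for a, b in zip(ct, pad))
```

Swap one measurement ‑‑ an evil maid's boot loader, say ‑‑ and the PCR value diverges, so the sealed blob stays sealed.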
What is a TPM anyway? It was originally sort of hailed as digital rights management by media companies:
they would be able to remotely verify that your system was running some
approved configuration before they would let you run the software and unlock the key to
your video files. It ended up being impractical in practice. Nobody is even trying to use
it for this purpose anymore. I think a better way to think about it is as a smart card
that's fixed to your motherboard. It has physical attack countermeasures to prevent
someone from very easily getting access to the data that's stored in it. The only real
difference between it and a smart card is the ability to measure the boot state into
platform configuration registers. It is usually a separate chip on the motherboard, so there
are some security implications of that. There are some kind of fun bits, like monotonic counters.
There is a small nonvolatile memory range that you can use for, well, really whatever
you want ‑‑ it is not very big, like a kilobyte, but it can be useful. There is a tick counter
that keeps track of how long the system has been running since the last startup. And you can make it
do things on your behalf, which includes things like clearing itself if you feel the need
to. So we want to develop a protocol that a user can run against the computer, so
that they can verify that the computer has not been tampered with before they authenticate
themselves to the computer and begin using it. So what sort of things can we try sealing
to platform configuration registers that would be useful for this protocol? A couple of suggestions
that I have: the seed to a one-time password token; maybe some sort of unique image or animation,
like a photograph of you somewhere ‑‑ something unique,
not something that someone can easily find elsewhere ‑‑ and then, say, disable the video out
on your computer while you are doing this challenge‑response authentication. You
also want to seal part of the disk key, and there are a couple of reasons to do that. It
assures, within certain security assumptions, that the system is going
to boot into some approved software configuration. Ultimately that means that
anyone who wants to attack your system needs to do it either by breaking the TPM,
or within the sandbox that you have created for them. And this is not
cryptographically strong ‑‑ you are not going to have a protocol that allows a user
to perfectly securely authenticate a computer ‑‑ but unless you can do something like RSA encryption
in your head, it is never going to be perfect. So I mentioned that there is a self‑erase
TPM command that you can issue in software. And since the TPM requires
the system to be in a particular configuration
before it will release secrets, you can do something interesting like a self‑destruct.
You can develop the software and set up your protocol to limit, say, the number of times the computer
has been started up unsuccessfully; to time out once it has been waiting on the password screen for some period
of time; or to limit the number of password attempts, or the amount of time since
the computer was last started up ‑‑ maybe it has been in cold storage for a week or so.
And you can restrict access to the computer for a period: you are travelling to a foreign
country and you want to lock down your computer for the duration of the trip. You can do fun
things like leaving little canaries on the disk which appear to hold the critical values for
your policy but are just trip wires, while you are really using internal TPM values. And you can also create
a self‑destruct password ‑‑ a duress code ‑‑ that automatically issues this clear command.
And since the two options an attacker has are to break the TPM or to run your software,
you can kind of make them play by these rules, and you can actually do an effective self‑destruct.
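The lockout-and-duress policy being described here can be sketched as a small state machine. Everything in this sketch is a hypothetical stand-in: the `clear()` method models the TPM clear command, the failure counter models a TPM monotonic counter, and the threshold is arbitrary:

```python
class SelfDestructPolicy:
    """Toy model of a TPM-backed startup policy: too many failed unlock
    attempts, or a duress password, wipes the sealed key material."""

    def __init__(self, key, real_password, duress_password, max_failures=5):
        self._key = key              # stands in for the TPM-sealed secret
        self._real = real_password
        self._duress = duress_password
        self._failures = 0           # stands in for a monotonic counter
        self._max = max_failures

    def clear(self):
        """Models the TPM clear command: the sealed key is gone for good."""
        self._key = None

    def try_unlock(self, password):
        if self._key is None:
            return None              # already self-destructed
        if password == self._duress:
            self.clear()             # duress code: wipe immediately
            return None
        if password != self._real:
            self._failures += 1
            if self._failures >= self._max:
                self.clear()         # too many guesses: wipe
            return None
        self._failures = 0
        return self._key
```

The point of routing this through the TPM rather than plain software is exactly the sandbox argument above: the attacker either runs your policy code, or has to break the TPM itself.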
The TPM is intentionally designed to be very, very hard to copy ‑‑ basically you can't clone
it very easily. So you can use things like monotonic counters to protect against write blockers
and disk-restore replay attacks, and once the TPM clear command is issued it is game over
for the attacker ‑‑ and for you, if you still want access to your data.
There are some similarities to a system that Jacob Appelbaum discussed at the (inaudible).
He proposed using a remote network server for many of these functions,
but admitted it was going to be brittle and potentially difficult to use. Since
the TPM is an integrated system component, you can get a lot of these advantages by using
the TPM instead of a remote server. In a hybrid approach you could have a system set up, say
by an IT department, where you temporarily lock down a system and it only becomes
available when you plug it into the network and call your IT administrator to unlock
it again. So that is still a possibility. So I have sort of qualified all my statements
‑‑ the attacker can only do this. That is, of course, under the assumption that they cannot break
the TPM very easily. This is an optical microscope scan of a TPM-class smart
card done by Chris Tarnovsky, who spoke here last year. He has done some great
work in figuring out how hard these things are to break. He has enumerated
the countermeasures, figured out what it takes to break them, and gone and
done it and tested it. There are things like light detectors and active meshes and all sorts
of crazy circuit implementations to throw you off the track of what the chip is doing.
But if you spend enough time, have enough resources, and are careful enough, you can
actually get around most of these. You decapsulate
the chip, put it in an electron microscope workstation, and go wild: you find where the
unencrypted data bus is, glitch it, and get the thing to spill out all its secret
data. But even after you have done all the R&D ‑‑ and it is something that's going to take hours with an expensive
microscope, and months of R&D up front to figure out what the countermeasures
are on the chip ‑‑ only then can you break it without frying the one chip that is your attack target.
There are also more recent attacks. I mentioned earlier that the TPM is a separate chip
on the motherboard. It is very, very low in the system hierarchy, not up in the CPU
like the DRM enforcement in video game consoles, so if you manage to reset it you are
really not going to adversely affect the rest of the system that badly. It is usually a chip off
the LPC bus, which is itself sort of a legacy bus off the south
bridge or platform controller hub, and on modern systems the only things you are going
to find on it are the TPM and legacy BIOS keyboards. So if you find a way to
reset the low pin count bus, you will reset the TPM into a fresh-boot state.
You will lose your PS/2 keyboard ‑‑ not a big deal ‑‑ and you will be able to play back
a trusted boot sequence that the TPM has data sealed to, without actually
executing that boot sequence. A couple of attacks have tried to exploit this. I
have not seen any research on a successful attack against the newer Intel Trusted Execution Technology
version of the TPM activation, but it is likely still possible, so this is an area that probably
needs more research. Intercepting the LPC bus and what it is communicating to the CPU ‑‑
that's another way you can attack the TPM. So let's look at a blueprint for what I think
we should have for getting the system up from a cold boot state into a
running trustworthy configuration. There are a lot of vulnerable legacy components in our
PC architecture ‑‑ masking out CPU feature registers, for example; there are plenty of options
if you want to mess with people. So in my opinion you really want to get out of BIOS-controlled
real mode and into protected mode as soon as possible, and only then do
your measurement work. Once you get into this preboot environment it is just your own operating
system, like a Linux initial RAM disk, and then you start executing your protocol,
regardless of what someone has done at the BIOS level with interrupt tables. If
you know you are running on a Core i5, you know it is going to support things like the
no-execute bit and debug registers and other features that people might try to mask out in
the capability registers. So here is the runtime blueprint
‑‑ what we actually want the system to look like once we are in the running configuration.
There was the previous project, TreVisor, which implemented the security aspects of doing
disk encryption and having IOMMU protections on your main memory, but BitVisor is a specialty,
not very commonly used program. Xen is sort of the canonical open source hypervisor,
where there is a lot of security research going on and people making sure it is not broken.
In my opinion we should use something like Xen as the bare-level hardware interface, and then
use a Linux administrative domain to do your hardware initialization. Now in Xen, all
of your paravirtualized domains run in nonprivileged mode, in ring 3. They don't
have direct access to things like the debug registers, so that part is already
done. Xen exposes hypercalls that give you access to that sort of stuff,
but that is something you can disable in software. And so the approach I am taking
is that we will do the master key approach in the debug registers. We dedicate two
debug registers to store the master key ‑‑ it never leaves the CPU registers once derived
from the user credential ‑‑ and then we use the second two registers for virtual machine specifics:
they could serve as ordinary debug registers, or we could use them to encrypt main
memory. In this particular case we still need a few devices connected directly to
the administrative domain ‑‑ the keyboard, the TPM ‑‑ all that stuff needs to be directly accessible; you
can't apply IOMMU protections to it. But things like the network controller, the storage
controller, arbitrary devices on the PCI bus ‑‑ you can set up IOMMU protections on those so
they have absolutely zero access to your hypervisor memory space or administrative domain. You can do similar
things for network access by putting the network controller
into a dedicated virtual machine: the device gets mapped into that VM, with IOMMU
protections set up so that the device can only access the memory space of that virtual machine. You can do the same thing with the storage controller.
And then you actually run all of your applications in virtual machines that have absolutely zero
direct hardware access. Even if someone owns your web browser or sends you a malicious
PDF file, they don't get anything that would let them seriously compromise your disk encryption.
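The master-key arrangement described above ‑‑ one key that never leaves the CPU, with per-VM or per-volume keys living in untrusted memory only in wrapped (encrypted and authenticated) form ‑‑ can be sketched as follows. The construction here is an illustrative stand-in for a real key-wrap mode such as AES Key Wrap (RFC 3394), and it assumes volume keys of at most 32 bytes:

```python
import hashlib
import hmac
import os

def wrap_key(master, label, volume_key):
    """Encrypt-and-authenticate a volume key under the master key, so the
    wrapped blob can sit safely in (untrusted) main memory."""
    pad = hashlib.sha256(master + b"wrap:" + label).digest()
    ct = bytes(a ^ b for a, b in zip(volume_key, pad))
    tag = hmac.new(master, label + ct, hashlib.sha256).digest()
    return ct + tag

def unwrap_key(master, label, blob):
    """Recover a volume key; refuse if the blob was tampered with."""
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(master, label + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrapped key was tampered with")
    pad = hashlib.sha256(master + b"wrap:" + label).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

# In the real design the master key lives in DR0/DR1 and is only ever
# touched inside the hypervisor; a Python variable is just for the sketch.
master = os.urandom(32)
```

Each guest gets only its own unwrapped key, briefly, inside CPU registers; what an attacker can scrape out of RAM is the wrapped blob, which is useless without the master key.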
So I can't take the credit for that architecture design. It is actually based on the
design of an excellent project called Qubes OS. They basically describe
themselves as a pragmatic reformation of Xen: custom tools that do a lot of the stuff
I have talked about, implementing these nonprivileged guests and presenting a nice unified environment
that is actually a bunch of different virtual machines under the hood. I use it as the
base for my implementation, and all the crypto stuff is what I have added on
top of it. And so the tool I am releasing ‑‑ this is still really proof of concept, experimental
code ‑‑ I call Phalanx (phonetic). It is a patch to Xen that implements the
disk encryption scheme as I have described, with the master key in the first two debug registers.
And I have also implemented encrypted memory using zRAM, which does pretty much everything we need except the crypto itself. (No audio) The
nice thing about zRAM is that it gives you a bunch of the bits that you need to securely implement
things like AES counter mode, which is really great. Hardware-wise, you do
have a few system requirements. You need a system that supports the AES-NI instructions ‑‑
reasonably common, but not every system has them. Chances are if you have an Intel Core i5 or i7
it does, but there are oddballs, so check Intel ARK. You need hardware virtualization
extensions; these are very common as of 2006. An IOMMU is a little bit more complicated to
find if you are shopping for a computer ‑‑ it is not listed as a sticker specification.
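On Linux you can check the CPU-side requirements yourself rather than trusting stickers. The flag names below are the standard /proc/cpuinfo ones (`aes` for AES-NI, `vmx` for Intel VT-x, `svm` for the AMD equivalent); the helper itself is a small illustrative sketch:

```python
def missing_features(cpuinfo_text, wanted=("aes", "vmx")):
    """Return the wanted CPU flags absent from /proc/cpuinfo text.

    Note: IOMMU (VT-d) support is a chipset/BIOS feature, not a CPU flag,
    so it does not show up here -- check the kernel boot log for it.
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return [f for f in wanted if f not in flags]

# On a real machine:
#   print(missing_features(open("/proc/cpuinfo").read()))
```

An empty result means the CPU side is covered; the IOMMU and TPM still have to be confirmed separately, which is why the hardware compatibility list mentioned below is useful.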
There are a lot of people who should know better, but don't, about the differences between
VT-x and VT-d and so forth. You also want a system with a TPM, otherwise you can't implement this measured
boot at all. So usually you want to be looking at business class machines,
where you can verify this sort of stuff exists. If you look for Intel TXT support, it will have everything
that you need. The Qubes team keeps a great hardware compatibility list on their wiki,
which has details for a lot of systems that do this sort of stuff. So, security assumptions:
in order for the system to be secure, we make a few assumptions about the existing
components. The TPM is a very critical component for assuring the integrity of the boot: you need
to assume that there is no back door capable of dumping NVRAM, manipulating the monotonic
counters, or putting the system into a state where it is not trustworthy while we think it is. For
PCR-reset attacks, based on remarks by Tarnovsky, who has reverse engineered these
chips, we assume a bound of something like 12 hours if you want to pull off an attack. There are a few assumptions about
the CPU ‑‑ that it is not back doored and that the instructions are implemented correctly ‑‑ and these might not necessarily
be strong assumptions, because Intel could very easily back door these things and we
would have no way of finding out. And there are security assumptions about Xen. It has a good security record, but
nothing is perfect and occasionally there are security vulnerabilities; in the case of
Xen that is kind of a big deal, so you want to make sure it is secure. And so under those
security assumptions, let's put up a framework for a threat
model. We want to do a realistic threat assessment, where we realize that not every system is
threatened by every theoretical attack. I think a good analogy is safe security. Every safe
can eventually be broken; it is a question of how much time you have to reverse engineer it
and how much time you have to break it, but eventually it can be broken. So I think we
need to think about our systems in the same context: having physical security defenses measured in terms
of hours, rather than the minutes we have right now. And as always, if I screwed up, if
I omitted an assumption that you don't think holds: prove it, verify my claims, and show
whether I am right or wrong. So, expected security ‑‑ what this actually gets you: a
cold boot attack is not going to be effective against your keys, and what leaks from main
memory is going to be restricted to whatever you keep in the clear region. Hardware-based RAM acquisition
is not going to be effective, because devices are going to be IOMMU-sandboxed down to nothing:
you are not going to get application state or system state. And even if you manage to
extract the secrets out of the TPM, all that does is get you back to where we are
right now ‑‑ although that is easily broken, you are still not all the way down to zero. The
standing assumption here is that if you have a good security habit policy ‑‑ say, a reasonable
12 hours of no contact with your computer ‑‑ you should be okay. A couple of attack methods
remain; these are the main ones I would use if I were trying to break
into a system like this. Key loggers and friends are still going to
be very much not defended against ‑‑ you can mitigate this by using one-time
tokens. TPM attacks: NVRAM extraction, or the LPC bus ‑‑ find some way of tricking the TPM
into a configuration it thinks is trustworthy but is not. And RAM manipulation:
if you have something that looks like RAM, that pretends to be RAM, but is not RAM and
lets you manipulate it externally, there is nothing you can do, because the attacker would
be able to manipulate the contents of the system, no problem. You can also try transient pulse injection.
I will do a quick bit about legal notes. I am not your lawyer. As far as I know, having
a self‑destruct is not illegal yet. But
there has been no legal test case of this. It might be interesting to find out ‑‑ though I
am not sure I would want to be that test case either. TPM and strong encryption are
illegal in certain jurisdictions: you can't use a TPM in China or Russia. And in some countries,
like the United Kingdom, you have mandatory key disclosure ‑‑ you will go to prison if you
do not hand over a key. Future work and improvements: a production version, a stable version ‑‑ right
now it is not stable. If you put your computer to sleep, it will eat your data ‑‑ I am working
on it. Some other things might be fun to do in the future: OpenSSL integration is important,
some API that lets you swap secrets out of the contents of memory very,
very quickly so your exposure time is very small. You can all install it, and maybe we upstream
the patches to Linux and Xen ‑‑ and the goons are getting ready to kick me off the stage.
Conclusions ‑‑ I am almost done. The best security in the world goes unused if it is unusable.
The security model needs to account for realistic use patterns. And it is not just disk encryption: you need
to think about it from the whole system. It is challenging to do this, but I think it is
possible, and we should try. Thank you. (Applause.)