Isaac Kohane: Thank you very much. It's a -- why does it
say video input out of range on this thing -- anyway, it's a pleasure to be here. Thank
you very much for having invited me. It's really nice to be able to reflect now, 10
years out, on what a transformative event we're celebrating. And so I'm going to reflect
on where we are with respect to genetic privacy, and how we're going to get to an information
commons. And I use the terminology "information commons" specifically because it was a term
that an IOM committee that I had the pleasure of being part of described where we want to
go next in terms of measuring genomic variation and phenotypes in large populations. And it
was actually a particular pleasure to have some of the original members of the National
Academy of Science Group that called for the Human Genome Project in the first place. And
they were able to share how, back at that time, they were told that there were two reasons
-- one of two reasons -- why they could not actually follow their recommendation to go
with the Human Genome Project. One was that it was impossible to do, and the other
was that it was already being done. And --
[laughter]
Which was interesting that both were being said at the same time. And so, similarly,
in our report around precision medicine from IOM, we said we need to create this information
commons, but the privacy implications of it are significant. And therefore, another way
to think about this talk that I'm going to give you is to say, when you talk about perilous
privacy perspectives, which may promote parochial policies, pinching personal and public prerogatives.
The 10 Ps, remember them.
[laughter]
So let's start with publicity. There was a paper that recently appeared out of the Whitehead
Institute about how individuals from the 1,000 Genomes cohort had been reidentified
using publicly available data, Ancestry.com, and genealogies. And there were hundreds of
headlines, such as "Your Biggest Genetic Secrets Can Now Be Hacked, Stolen, and Used for Target
Marketing." Wow, think of all the *** those poor 1,000 Genome people had to buy.
[laughter]
Study highlights the risk of handing over your genome. Researchers found they could
tie people's identities to supposedly anonymous genetic data. If you contribute your genome sequence
anonymously to a scientific study, that data might still be linked back to you. Sounds
very worrisome.
Remember this? Groundhog Day? I've been inspired, actually, by President Obama's recent visit
to Israel, where he reached into the language of my ancestors to speak in Hebrew, and I'll
do the same now for my colleagues in Cambridge, Massachusetts. [speaks Hebrew] "There is nothing
new under the sun." And why do I say that? One of my colleagues and friends published,
in 1997, a study which I now reprise for you from a 2001 article. This is Latanya Sweeney,
who did this work as a graduate student, and as this article from 2001 notes, "Starting
with a birth date, sex, and zip code, computer privacy expert Latanya Sweeney, Ph.D., retrieved
health data of William Weld, former governor of Massachusetts, from a supposedly anonymous
database of state employee health insurance claims. Knowing Weld lived in Cambridge, Massachusetts,
she cross-linked her data with that community's publicly available voter registration records.
Only six people shared Weld's birthday, only three were men; of these, Weld was the only
man in this five-digit zip code." And then she was able to match this up with the supposedly
anonymous but public insurance records of public employees. And since he had very publicly
vomited during a public presentation, she was able to track that episode of gastroenteritis
that he had been hospitalized for, and she could see his whole record using public data.
So I think Latanya Sweeney made the point, which is, if there's enough data out there,
we can mash it together and reidentify anybody. We knew this; it had been shown multiple
times. So therefore, what does this last study mean, and what have we learned that is new?
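Sweeney's linkage attack can be sketched in a few lines. Every record below is invented for illustration; the whole trick is the join on the quasi-identifiers (birth date, sex, ZIP):

```python
# A minimal sketch of re-identification by linking quasi-identifiers.
# All records here are fabricated; only the technique is real.

# "Anonymized" insurance claims: no names, but quasi-identifiers remain.
claims = [
    {"dob": "1945-07-31", "sex": "M", "zip": "02138", "diagnosis": "gastroenteritis"},
    {"dob": "1945-07-31", "sex": "F", "zip": "02138", "diagnosis": "asthma"},
    {"dob": "1960-01-15", "sex": "M", "zip": "02139", "diagnosis": "fracture"},
]

# Public voter registration: names WITH the same quasi-identifiers.
voters = [
    {"name": "W. Weld", "dob": "1945-07-31", "sex": "M", "zip": "02138"},
    {"name": "J. Smith", "dob": "1945-07-31", "sex": "F", "zip": "02138"},
]

def reidentify(claims, voters):
    """Join the two datasets on (dob, sex, zip); a unique match re-identifies."""
    out = []
    for c in claims:
        matches = [v for v in voters
                   if (v["dob"], v["sex"], v["zip"]) == (c["dob"], c["sex"], c["zip"])]
        if len(matches) == 1:  # the quasi-identifiers single out exactly one voter
            out.append((matches[0]["name"], c["diagnosis"]))
    return out

print(reidentify(claims, voters))
```

No cryptography is broken here; the "anonymous" table simply retained fields that are also public elsewhere.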
Is genomic data different from other health data? It's not the biggest. Functional MRI
scans are much larger in information content and storage than even BAM files from a genome.
It's not most predictive. [unintelligible] showed that it's not very predictive
if you're asymptomatic. It's actually quite useful in disease. But a lot of other things
are very predictive. Family history, the fact that you're a smoker, the fact that you're
sitting down all day; those are good predictors of future health status. It's not most expensive:
thanks to people like George, an MRI costs more than a genome, and will continue to cost more.
It's not the most identifying for at-risk subgroups. Turns out if someone says they're
of African origin, they're of African origin. The SNPs bear that out. But ***'s self-evident,
and, in fact, if you look even at electronic medical records and at what they stated,
and then do the genetic studies, they're right. So it's really not that disclosing.
But it is most disclosing personally. The fact is, your genome is your barcode, and not only
is it your barcode, it's also the barcode of your family members. So it does have some
special characteristics. And, unfortunately, because a lot of criminal databases -- the
database is not criminal, the database is of criminals.
[laughter]
Or of people who were arrested and maybe are not criminals. Those databases have genomes, and could actually
have some disclosing capabilities. So how does that make us think about privacy? Well,
what if I'm in the room and I hear Eric mutter something about that Zak Kohane? Should I
invade his privacy when I heard that? It's debatable. It's debatable whether or not we're
-- on the other hand, if Eric is sitting in his office and I'm outside
his office, down on the green, bouncing a laser beam onto his window, as has been done many times,
and I take the public data -- this is just public data -- on his vibrating window to actually
hear what he's saying, I've used public data; have I breached his privacy? It's public data,
after all. Well, he will say it's probably pretty antisocial for me to do that.
[laughter]
But it is public data. Well, Eric could then decide that he'd go into a bunker and lock
down himself so he can have all these loads of conversations about the sequester, and
no one would hear him.
[laughter]
He could yell very quietly behind meters of concrete. By the way, this is, in fact,
a real concrete bunker in Moscow that's been turned into a very fashionable restaurant.
[laughter]
Do we need to do that? Do we have to go with our genomes into the concrete bunker and lock
down that data so that no one can ever see it, so that we don't have someone bounce the
equivalent of a laser beam off of our public data to reidentify it? Think about that. Do
we want to go down to the bunker?
Well, even if you're not in the bunker, even if you're very much in the public eye, you
might think that you have some reason to want privacy. The very talented and lovely actress
Scarlett Johansson had the unfortunate experience of having some photographs of her hacked out
of her cellphone. Some argue that, what did she expect? She was, after all, a celebrity.
She's in the public. You're in the public, therefore it's okay to look at things that
are in the public. And now she said, just because you're in the spotlight, or just because
you're an actor or make films, doesn't mean you're not entitled to your own personal privacy.
So just because the data's out there, doesn't mean that it's an invitation to breach your
privacy. That's bad form. It's bad social form.
Another individual, lovely in his own way, Richard Stallman, who some of you may recognize
as the founder of the GNU project and the Free Software Foundation, and one of few people
I know who has a beard that can compete with that of George Church --
[laughter]
-- said, "There is no substitute for privacy. Fortunately, we can maintain our privacy by
limiting by law what companies and the state can collect on a regular basis about everyone.
For instance, instead of a law requiring that ISPs, Internet Service Providers, and
phone companies keep data on everyone's contacts, laws could forbid keeping this data except
for people already placed on a surveillance list by court order. We must require new
systems to be designed for privacy rather than to collect all possible data. It's not too
late to protect privacy pretty well, but we must insist on it, which means not to heed
the people who say it's hopeless."
So just because -- I want to highlight a few words. He says "pretty well." So you want
to make sure that you're not allowing me to casually overhear you in the hallway, but
if I am behind my windows in my office, you really should not be trying to listen
to me, and laws should protect that. And so I think one of the ways you can interpret
what RMS is saying, is privacy is not dead. We just have to engineer our society and our
systems towards it, and recognize that some behaviors are just not acceptable.
But this is not an academic discussion, not at all. Seen here is an impressive road map published
in Nature by a group of people, including our own Eric Green, outlining the multiple
stages of work in exploiting the knowledge of the human genome for the public good. Note
here we have understanding the structure of the genome, the basic spelling of it; understanding
the biology of the genomes, how they interact, how they are regulated; and note here, understanding
the biology of the disease. And for that, we'll have to require this information
commons that I was referring to, and which I'll detail a little more subsequently. But
if we get the privacy discussion wrong, if we cannot actually -- if we cannot understand
that privacy exists, and that we can, in fact, put data out in the public with expectation
and enforcement of good behavior, then we're at great risk of actually disrupting that road
map, at great cost in future pain and suffering to us, the citizens of this planet.
And it's ironic, it's ironic, because as published by this other Institute of Medicine report
called "For the Record" -- by the way, for those of you who don't know, IOM reports are
all available for free on the Web at nap.edu. So the irony is that even today, when there's
limited access of health care data, of biological data, to patients, there's very broad access
to that same data by insurance companies, by the government, by researchers, employers,
direct marketers, state bureaus of vital statistics, pharmacy benefit managers, local retail pharmacies,
and attorneys. And what's interesting is, of all of those, the biggest focus, somehow,
ends up being, in most of the debates, around these people who seem -- who are actually
trying to promote the public good. There's no argument about whether any of these others should
have access to the data. You're not seeing heated polemics about whether pharmacy benefit managers should
see the data. They're saying, "We need it to pay the bills," and everybody says, okay.
And so it is ironic that we're talking about this when, in fact, others don't seem to worry
too much about it.
So back in 2005, my colleague Russ Altman from Stanford and I wrote a sounding board piece
called "Health Information Altruists: A Potentially Critical Resource." And what we articulated
was that there was going to be some cause for concern, and that, in fact, we have
to recognize there is no such thing as perfect anonymity. We were well aware of Latanya Sweeney's
work, because she was a colleague, and so we did not need to go through another Groundhog
Day or another study to realize that there was no perfect anonymity. But we also saw
that this concern was going to increase when we would have large genomic research -- genomic
research on large clinical populations. And that everyone would be worried about the risks
of sharing data; and, parenthetically, we were also aware that a lot of people would
use these concerns as excuses not to share data.
However, we also note that there were various levels of concerns. Some people really did
not want -- were very worried about disclosure, and others were so unworried that they bordered
on exhibitionism in terms of sharing data. And what we recommended, we put out there
in this article. We said, first, we should make a set of guarantees to the subjects,
the research subjects, about the risks of reidentification -- but be realistic about them, by
the way, just as we were with the 1,000 Genomes cohort -- and outline the potential damages
of a disclosure; and if they then decide to go forth, the subjects presumably will elect
to take the risk in the hope of helping to address human disease. We also wanted to make
sure we actually covered the researchers who curate genetic databases; they should
have protection as well, provided they follow these guidelines, as they have, in fact, for
the 1,000 Genomes. And most importantly, we said -- and still quite controversial -- patients
should be granted explicit control over the disclosure process. Patients should get to
decide, not anybody else. And those altruists, it turned out, were not hard to find. What
a motley crew.
[laughter]
And what was interesting about this study is, I found it helpful because, you know,
when you share data, knowledge is certainly accrued around the data mash-ups. So this
individual, Steve Pinker, a distinguished psychology researcher at Harvard, was found
to have a mutation that supposedly predisposed him towards hypertrophic cardiomyopathy. Turns
out, he does not have hypertrophic cardiomyopathy. And for me, that was actually quite rewarding
because it led into my growing obsession with this biggest ome of them all, the incidental
ome, the ome of all incidental findings. And when I actually pointed this out to one of
the researchers involved, she said, "Well, he hasn't developed the hypertrophic cardiomyopathy
yet." And so I started to wonder, although I didn't say this out loud, how old will he
have to be before the variant starts becoming protective against HCM.
But nonetheless, this was a brave and bold step forward that actually showed the way
that research could be advanced through this altruistic publication of your own clinical
data and your own genomic data. And this study, the PGP, is actually in this hairy, or fuzzy,
rather, netherland, netherworld, between clinical research and clinical practice. This is definitely
not clinical practice; it's not really necessarily clinical research, unless you use it for clinical
research. And this got me thinking quite a bit about what is the distinction between
clinical research and clinical care. Because, in fact, if you go to most IRBs, they'll take
great exception to the idea that there is not an absolute dichotomous divide between
clinical research and clinical care. And yet, let me run a few cases by you. Your pregnant
daughter consents to a research study of fetal Tay-Sachs screening. Trisomy 21 was found,
but was not reported, because it was not part of a consent. Were they right? Hands up those
who think they were right. Either you're a bunch of meek, namby-pambys --
[laughter]
-- or you all massively agree that, in this case, this clinical research data, this genomic
clinical research data, should have been shared with the patient, and become clinically actionable.
Okay, let me push you a little bit further. Your son contributes blood for a study of
ADHD. During exome sequencing, they find your son has a variant, well-documented, to cause
familial adenomatous polyposis. Now today, the ACMG just announced their guidelines on
incidental findings, and for a clinical exome, they say that you actually should report if
you have this variant that leads you to, essentially, super high risk for colon cancer. But that's
for clinical exomes. This is your son. They found it in their research exome for an ADHD
study. Should they report to you or your son, or not? Hands up who thinks they should not
report it. Let me get a little bit braver, because maybe you're all zealots like me,
but it looks like maybe 4 percent of this room actually thinks that you should not disclose
it.
So therefore, I think you implicitly agree with me that this boundary is extremely vague,
and that therefore, the genome itself is accelerating an era where it's going to
become very unclear to what degree one is participating, when one is a patient, as a research
subject, and as a research subject, as a patient. This led me to publish a piece in Science
back in 2007 where I said, "What the hell is this? Why have patients and doctors entered
into a compact of mutual ignorance where the doctor agrees not to find out who that -- what
the identity of the patient is, and the patient agrees not to find out what they may learn
about themselves from the study, and therefore they only benefit from the study as
members of a class of patients." And what I argued was that we should have patients
contributing their data to an anonymous database, as before, but now if we have a finding, it
should not only result in a high-impact publication, but it should also result in a review which
will then allow communication back to the patient of those results that matter, and
that should become a routine part of practice.
At the time, I got a few congratulatory comments, actually from clinicians and clinical leaders.
But many of the genomics community were quite annoyed with me for suggesting this, not least
of which because I was suggesting that they had a reporting burden where they did not
feel they did. But we continue to learn. So in 2008, this individual, who we should really
be celebrating on this day -- on the 10th anniversary -- made his whole genome
available through next-generation sequencing. It was very interesting because although he
was a quirky individual, he's not as quirky as his genome might suggest.
[laughter]
Because, for example, he's homozygous for a number of diseases such as Usher syndrome and
*** syndrome. And it's probably not a sequencing error; again, it suggests that our knowledge
of the genome was inadequate, and that we really have to push forward in the annotation
of the genome. Here let me make a plug for an NIH effort called ClinVar. We talk about
open-access publication; what we really need if we're going to take good care of our patients
is open access genome annotations. And although there are companies that are in this space,
I do think that the public weal is best served if initiatives like ClinVar are maximally
successful so that every patient can have the most authoritative, up-to-date interpretation
of their genomes, and this individual is one among many who are helping us reach that state.
Now, whether these results get reported back to a patient or not, is actually a very individual
decision. You don't have to introspect too long to realize that some individuals want
to get everything, they want to know everything about their genome, and others want to know
less. And as I wrote in an article with Patrick Taylor in Science Translational Medicine, it's
going to be a function of what your communication capabilities are, what your preferences are, and your
risk averseness. Some people just don't want to know, "No, no, no, don't tell me," and
others want to know everything. And I think we have to respect that. But when I say that,
people say, Zak, what incredible overhead bureaucracy are you anticipating where we
have to take care of every single wish of individuals? Well, I was going to show you,
I don't think it's that out there.
For example, there's this website, Mint.com. And when I first started using it, people
thought it was remarkably bizarre that I was doing it, because what you do with Mint.com
is give it all your usernames and passwords for all your financial institutions: your credit
cards, your IRA, your home address. And what it does is something pretty interesting,
which is, since initially there was no standardization across these various databases, it would run
a software agent that would go and log in as if it were you, and -- if the data was not
otherwise available -- take the HTML, just decode it, and put it in a central database
for you, all your financial data, so that for the first time, you'd have all your financial
data in one place. Furthermore, it will alert you that, you know, you have to pay your bills,
that you just got charged a fee, and, by the way, did you know, this is their business
model, you could get a credit card that's going to be less expensive. And I chose to
actually take my risks with my privacy because the benefits to me were very significant.
And every time we get to the tax month, I'm much happier that I've done this. But not
everybody has to do it, and nor should they have to do it.
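The aggregation pattern described above -- no common API, so an agent fetches each account page and decodes the raw HTML into one central record -- can be sketched with the standard library. The page markup and the `balance` class below are hypothetical:

```python
# A minimal sketch of screen-scraping aggregation: decode a fetched HTML page
# and extract one figure into a central store. The page is invented.
from html.parser import HTMLParser

PAGE = '<html><body><span class="balance">$1,234.56</span></body></html>'

class BalanceScraper(HTMLParser):
    """Pull the text of the (hypothetical) balance element out of raw HTML."""
    def __init__(self):
        super().__init__()
        self._in_balance = False
        self.balance = None

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "span" and ("class", "balance") in attrs:
            self._in_balance = True

    def handle_data(self, data):
        if self._in_balance:
            self.balance = float(data.replace("$", "").replace(",", ""))
            self._in_balance = False

scraper = BalanceScraper()
scraper.feed(PAGE)
print(scraper.balance)  # the extracted figure, ready for the central database
```

In a real aggregator the `PAGE` string would come from an authenticated HTTP fetch performed on the user's behalf; the parsing step is the same.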
So I think what we're heading towards is what we talked about in this Institute of Medicine
report. We said that much like Google and others, but particularly Google, has made
an industry out of taking something that used to be incredibly boring, geography, and has
layered onto it added value, such as the location of the nearest pizza parlor or how to drive
around. By putting multiple dimensions on top of geography, you achieve an extraordinary
ramp-up in value. And likewise, we're going to create an information commons by putting
everything down from the exposome, signs and symptoms, microbiome, epigenome, these-are-not-my-omes-so-forgive-me-ome,
and line up all these data types against actual individual patients. When a lot of us do our
genomic meta-analyses or mash-ups, it's not always the same patient. If we can get down
to the individual patient, we'll understand how the epigenome is actually informed by
the microbiome and so on. We do, in little ways, in projects like TCGA and so on, but
we need to do this exhaustively on large populations, and that's what we called for in this IOM
report.
And furthermore, we pointed out that if we did that, if we created this nicely stacked
multiaxial, multidimensional perspective on the genome, what we'd have is, in addition
to the usual thing that we've done quite well in academia and in commercial companies, which
is going from big scientific discoveries all the way to targets, we would also be able
to really take advantage of clinical medicine in all its messy glory, do observational studies
and do clinical discovery that's informed by both the clinical information and the molecular
characterization. And we did note in our report that this area is the part that has been underserved
until very recently.
And so when we think about bringing together all these data around patients, I think we
have to think expansively. By the way, I should note, the same -- the very same concerns raised about
genomics were not articulated about geography, but they could have been. Several
people published papers that show that public diagrams of maps, for example, of individuals
with *** in first-rate journals like JAMA, had enough geographic resolution so you could
actually figure out who the patients were. I didn't hear a big outcry that we have to,
you know, stop doing geography. And, in fact, there was a risk when Google sent its vans
through the very streets picking up your Wi-Fi passwords. They had to apologize for that
and, I think, pay a lot of money, subsequently. Somehow the outrage was not the same.
So we're going to have to have -- we're going to have to bring together, at the patient
level, several kinds of data. We talked about the research data, which comes from registries,
and cohorts, and Pharma trials. And, you know, I've alluded to electronic health records
and labs in the clinical data, but increasingly we're going to see it more and more coming
from yourself. You know, we all like to remind ourselves, in a good life, much less than
1 percent of your time is spent near a doctor. So, in fact, what you actually gather at home
and in your everyday life is a much larger amount of important data that will provide important genomic
correlates for the future. And, of course, there are also the people who pay for health care,
and they also have a lot of data, like what they paid for and how much they paid. And
that's going to be important, too.
And the thing that we have to think about very, very hard, is in creating this four-way
join, the genomics can come in from all angles. Certainly, this has already happened with
23andMe and the direct-to-consumer genetics companies; this is already happening. And
in Pharma trials and in the various GWAS that have been put on NCBI dbGaP, they've happened.
I'm predicting this will happen as well: when you pay for an expensive
drug and you don't actually do the companion genomic diagnostic, you may not actually be
reimbursed. So that's where we'll have to really think about data sharing.
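The patient-level, four-way join described above can be sketched as data layers stacked on a shared patient ID, in the spirit of the map-layers analogy. Every identifier, field name, and value here is invented:

```python
# A minimal sketch of the "information commons": line up clinical, genomic,
# self-collected, and payment data against the SAME patient. All hypothetical.

clinical  = {"p1": {"dx": "type 2 diabetes"}, "p2": {"dx": "asthma"}}
genomic   = {"p1": {"variant": "rs0000 (hypothetical)"}}
self_data = {"p1": {"steps_per_day": 3200}, "p2": {"steps_per_day": 11000}}
claims    = {"p1": {"paid": 1250.0}}

def information_commons(*layers):
    """Stack data layers on a shared patient ID, like map layers on geography."""
    commons = {}
    for layer in layers:
        for pid, fields in layer.items():
            commons.setdefault(pid, {}).update(fields)
    return commons

commons = information_commons(clinical, genomic, self_data, claims)
print(commons["p1"])
```

The point of the structure is that, unlike typical meta-analyses, every layer refers to the same individual, so correlations across layers become possible.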
So, how am I doing for time? I'm doing okay? Five minutes. So in thinking about that, more
recently, with my colleague, Ken Mandl, I wrote a couple of pieces
in the New England Journal of Medicine, first entitled, "No Small Change for the Health
Information Economy." That was an article we wrote shortly after Obama
was elected president, because a massive investment was about to happen in electronic health
records. And we said, this was an opportunity for us to actually ask: why is it that our
health record systems are so monolithic; why is it that if you don't like a laboratory
system or an order entry system, we can't rip it out and replace it with another just
as simply as we could replace an app on your iPhone? And then in our second piece, we said,
"Why is it that in our day jobs as clinicians we use these electronic health records that,
more or less, are state-of-the-art technology for the 1980s?"
[Laughter]
Whereas when we go back to our kids, they're using a variety of different apps, across
different vendors, in a very coherent manner, and actually getting the job done. And so
this is -- why do I even bring it up here? Because if we are going to have the synthesis,
this mash-up of genomic data and clinical care, we're going to have to require an electronic
health record system that can support that. Guess what? Ninety-nine percent of the electronic
record -- health record systems out there don't support genomic data, they don't even
support basic family tree data. That's going to be a gating factor, and with -- in another
talk I could tell you what we're doing to actually get around that fact and to build
the modularity in the apps that will allow us to have that genomic data.
And, in fact, by coincidence, apparently, a year after we published those papers, something
miraculous happened, and it's a really good thing. What's happened is the market forces
became aligned with the best interest of the patients. And a number of these large EHR
companies said, "You know, we've been talking about interoperability for 30 years, but we're
actually going to make it happen." Why they did it, I don't know, I'll leave it up to
your imagination. But even today we are actually doing the kinds of studies that I'm hinting
at using our electronic health record data. So under something called the NCBC, the National
Center for Biomedical Computing, of which I'm the PI of one of them, something called
i2b2, we've been able to extract data from electronic health record systems by disseminating
our open source software that actually does this, extracts data from these various electronic
health record systems. And we have put it out there, and over 84 academic health centers
across the United States have adopted this software, and they use it for genomic studies,
where they look at the genomic correlates of clinical findings; they use it for quality
improvement; they use it for pharmacovigilance.
But do they share the data? Well, as a matter of fact, they do. And we actually tested the
proposition of sharing in that most difficult place to share called the Harvard medical
system, where we have five hospitals whose first inclination is not to share data among
themselves for a variety of reasons. And yet they all had installed i2b2, and we were able
to have a reasonable discussion about having a distributed query done across them. We
called that the Shared Health Research Informatics Network, SHRINE, which allows us to, for example, query
the data on 6 million patients just at Harvard. And so, for example, I was unaware, until
my egocentric citation robot alerted me, that there was a study in Nature about peripartum
cardiomyopathy in which they used this system to find the small handful of cases that had
this disorder, and they were only able to get that number of cases because they used
SHRINE to do this.
But this -- so that's five hospitals at Harvard, but the whole UT system now uses it, the president
of the UC health system funded something called UC ReX that links all the i2b2s across the
11 million patients in California. The six South Carolina health systems, all with different
electronic health record systems, are sharing the data using this system, and there are 12
international sites and a bunch of Pharma companies that are using the same system. So even though
we have all these obstacles, we can actually do that sharing today. It's not -- it's not
impossible; it's just a matter of vision. And we actually have a live network for some
studies that we're doing of autism and type 2 diabetes, where today -- today -- queries
are being issued across this network, and it costs hundreds of thousands of dollars,
not millions of dollars, to actually run this network.
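A SHRINE-style distributed query can be sketched as the same count query run locally at every site, with only aggregate counts leaving each hospital. The site names, records, and query predicate below are invented for illustration:

```python
# A hedged sketch of a distributed count query across federated sites.
# Patient-level data never leaves a site; only counts are returned.

site_records = {
    "hospital_A": [{"dx": "peripartum cardiomyopathy"}, {"dx": "asthma"}],
    "hospital_B": [{"dx": "asthma"}],
    "hospital_C": [{"dx": "peripartum cardiomyopathy"},
                   {"dx": "peripartum cardiomyopathy"}],
}

def local_count(records, dx):
    """Each site runs the query against its own records, behind its firewall."""
    return sum(1 for r in records if r["dx"] == dx)

def distributed_query(sites, dx):
    """The network fans the query out and collects only per-site aggregates."""
    return {site: local_count(records, dx) for site, records in sites.items()}

counts = distributed_query(site_records, "peripartum cardiomyopathy")
print(counts, "total:", sum(counts.values()))
```

Returning only aggregates is what makes the five-hospitals-that-don't-share scenario tractable: each institution keeps custody of its own records.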
And so let's get back to our theme. Whose property is this personal data anyway? All
of you in this Institute are well aware of the Henrietta Lacks story. And we all feel
really bad about the fact that her data was used, and she and her descendants really
were not recognized, if at all, and did not profit in any way from this worldwide
use of her genome. And we'd like to be able to -- we all feel bad about it, and that's
why this book, in part, did so well -- but would we really do it? Would you really acknowledge,
pay, anybody who contributes their genome? Most of you will say, "Well, no, no, I wouldn't
do that." Well, I think you would be wrong. And I'm reminded of this by -- I just saw
one of my former trainees, Atul Butte, and he was telling me about this great thing,
he says, "Zak, do you have Klout?" I say, "What's Klout?" He says, it measures your
citation index for tweeting.
[laughter]
And so Klout's mission is to empower every person by unlocking their influence, and what
they do is they monitor all the social networks, and they capture these moments, they give
you perks, they pay you to go into airport lounges and you can use the fancy lounges.
They pay you for -- if you have more Klout in your tweets. And they have a privacy policy,
which they're explicit about, they say, basically, you have no privacy.
[laughter]
But they're explicit. And so the tradeoff is there, and you don't have to use it, but
I was there in San Francisco two days ago, and, boy, was everyone really proud of their
Klout. Why can't we have a system of micro-recognition, micropayments, in health care? There is no
reason.
My last slide is, "But is it worth the risk?" All of this will come to naught if we don't
fix something. Here's a bunch of studies published in 2003. Quick question for the audience.
2003, how many primary care providers were ordering a genetic test, what percentage of
primary care providers were ordering a cancer -- genetic test for cancer susceptibility?
How many? What percentage? Five, zero, lower, Price is Right rules. One. Okay. The shocking
answer, 2003, is 30 percent. And what was the greatest predictor of them ordering this
test? Hole-in-one, that's great. Very few people getting that right. It's the patient
asking for it. Other studies showed that the doctors were uncomfortable interpreting
the test -- yet they definitely ordered it. And other studies showed that even when they were comfortable,
they were not competent, actually, in interpreting it. And finally, another study
from CDC showed the usual thing: detailing, which means sending attractive men and women
into your office to tell you why you should order this test, increased the ordering of
this test -- which they're neither competent nor comfortable in interpreting --
by a factor of four. What does that say about our health care system? So that's
2003. Just now appeared in Oprah magazine.
[Laughter]
You laugh, but this is clout. Genetic testing, pass or fail. An interesting story of a patient:
her gastroenterologist said there was a new test that could determine if she had a gene; she could
have her blood drawn. And then the receptionist [spelled phonetically] told the patient there
was a positive mutation. She took her results to the doctor, who recommended she take
the test over to a genetic counselor. And then the genetic counselor said, "Oh, by the
way, it's unlikely that this is actually causative, and, in fact, let's test your father just
to be sure." The father had colon cancer, and, in fact, she didn't.
So the incidental ome strikes again, but the point is, there are some highly-paid professionals
here who did not know what to do with the test. And all this investment, this risk to
privacy -- if I were a patient, I'd want to know that when I contribute, something useful
is going to be done with it. And we can be as successful as we want to be in discovery,
but it will come to naught if we cannot get that last mile of actionability. And when I talk about actionability, I mean
competence on the part of the clinicians to actually advise patients what to do. We don't
have it yet. In fact, the least paid person in this whole system, the genetic counselor,
knows more than most doctors. And that is a real problem.
And so, in summary, genomic science does not change our fundamental need for privacy, and
not all have the same needs, not all have the same sensitivities. Privacy is not protectable
by technology. Period. But by mores, by institutional transparency, and by the exercise of individual
autonomy. And bunker legislation will only hurt us, the patients, because it will prevent
science from helping us. And democratized control of data as an individual, not an institutional,
prerogative may be in the future, although many, including, I'm sure, many in this audience,
will resist that notion. So the question I have is, will the medical establishment lead
in this, or are we going to use privacy as an excuse not to? Thank you very much.
[applause]
[end of transcript]