[ Background noise ]
>> Jeff: Lecture for today. So for those
of you who've heard me give that talk six times already,
I was the warm-up act for her lecture today.
We're very pleased to have her here though.
She's a pediatric hematologist oncologist
with subspecialty expertise in marrow failure
and stem cell transplantation.
She graduated from Harvard College, Harvard Medical School,
and trained at Children's Hospital of Boston
and the Dana-Farber Cancer Institute.
Because we want to jump right
into her lecture I'm going to let her tell you
about the open innovation work she's done.
But we have both worked with Dr. Karim Lakhani [assumed spelling]
at Harvard Business School.
And while we're pulling up her slides I'm going
to let her go ahead and talk about the approach
to running open innovation challenges
in an academic medical environment and using the crowd
as well, so Dr. Dinan [assumed spelling], take it away.
[applause]
>> Dr. Dinan: That's for the safe landing part, thank you.
That should be for the pilot actually, seeing how it went.
So what I'm going to do, as Jeff said, is talk
to you about some experiments that we've been trying
with using open innovation approaches
in academic biomedicine.
And obviously this is the work of several people.
I've worked a lot with Karim,
with Kevin Boudreaux [assumed spelling],
who I think you also know from the NTL,
and with Griffin Weber [assumed spelling],
who is the Chief Technology Officer
for Harvard Medical School and also a social network scientist
by avocation, if you will.
So why would we do this at all?
The reasons are various, but a prevailing reason is
that it is internationally very widely
accepted that the development
of new therapeutic modalities has completely stalled out.
And actually if I have this then I don't need
to stand there, right?
Awesome. And if you have the graph you could see
there are several beautiful depictions of this, and one
that was run in Nature a couple of years ago shows a bunch
of rising curves from around 1980 through around 2008.
And what you can see on that is that the number
of new molecular entities that were FDA approved
by year has fallen really dramatically,
and it's fallen every year in principle since around 2000, back
to levels of a couple of decades ago.
And the reason for that is not entirely clear.
Now, new molecular entities are just that, they are new kinds
of things, so it's not third-generation gentamicin [assumed
spelling] or a rip-off of some other antibiotic,
or some antihypertensive; it's really a new drug.
And those have fallen in ways
that are quite startling.
And that's only new molecular entities; that doesn't take
into account have we changed our approach to treating leukemia,
have we changed our approach
to treating inflammatory bowel disease, and by the way,
we haven't changed those therapeutic algorithms
at all; we treat AML the same way that I was trained
to treat AML when I was a hematology oncology fellow.
So there are a lot of things
that actually despite what appears to be a lot
of investigative success haven't changed at all.
Why is that?
And it's not so clear.
If you looked at the rest
of that curve, what you would see actually is that it's not
because we don't have personnel; I mean, the
number of academic PhDs that have been granted
over that period of time has risen very steadily,
and that's important because in the medical space most
of the new drugs actually come
out of academia in the United States.
They do not come
out of biomedical companies, biotech or pharma.
That's a little bit different than some
of the European countries.
So that's the major fodder,
and those people are definitely there.
It's not because academic productivity itself has actually
slipped; the number of journal articles has escalated
dramatically, no doubt due in part to the amount
of science that's around, but also due to the fact
that, you know, every two seconds there's a new e-journal
so that you can publish your work.
But there's a lot of work that's out there, there are a lot
of ideas and there's a lot of information, yet the translation
of that into useful therapies and into impact
on human health has really stalled.
So what other kinds of things could be at play?
Well, I mean, there's always money,
and in fact NIH funding has slipped over the last couple
of years, but the stall-out and fall in NIH funding happened way
after the decline in the production
of new molecular entities actually started.
That's very clear.
It actually comes five, six, seven years afterwards.
So that's not the only cause of what's going on.
There are some tantalizing pieces of information.
If you actually look, some years ago
in Science they did an editorial based on a committee meeting,
a really long committee review that actually happened
at the NIH, and it was data published by the NIH,
and what they showed were the percentage of grants that went
to individuals under the age of 35, between 35 and 45,
and to individuals over the age of 45.
And if you look at that from two decades ago what you see is
that about a fifth or 20% of the grants went to people under 35,
these are new competing grants each year.
And about a fifth of them went to individuals over the age
of 45, and the bulk went to those in the middle.
If you look in 2002, what you see is about four percent, not 20%,
4% went to individuals under the age of 35.
Over 50% went to individuals over -- I said that backward.
Four percent went to individuals under the age of 35,
and over 50% went to those over the age of 45,
so a complete skewing of the distribution of grants.
Now, it's not to say that older investigators don't do novel
and innovative research, but if one of the things
that you're interested in is bringing new individuals
to a question and getting new ideas out there, and you look
at what most of the established investigators are funded to do,
they're funded to do their own established work.
What they're doing is extrapolations of work
that they have already done.
In 2001, the NIH gave out, out of over six thousand grants,
251 grants to individuals under the age of 35.
Really sort of startling numbers that I don't think
most of us are aware of.
So it certainly opens the question
of whether a new community of people are being recruited
to look at old questions and whether they are being funded
and supported appropriately [inaudible].
Now, there are lots of other issues, as you know,
funding issues, regulatory issues, IP and patent issues,
and other things that sort of stand
in the way of innovation.
But we wanted to sort of take a swing at that thinking
about some of the ideas that you've heard
about from Jeff before and from Karim, if he's been here,
things that are being explored in the innovation space.
And there were really two ideas
that I was particularly interested in.
One is the whole idea of the long tail of innovation,
which I'm sure is familiar to you, and basically what
that says is that if you want to solve a problem the best place
to go is locally, right?
So you look for a local expert, you do a local search,
and that's incredibly efficient,
and often a local expert can help you resolve a problem
that you have.
You have the expertise locally.
But that's constrained by the number
of domain-specific experts that you have around
and by the geography, you know, whether you're in one building,
whether you're in a community, whether you're
in a bigger company, whether you're in a bigger organization
or not, how many local experts there really are.
If you can't do that you can actually access this huge area
under the curve of people
by extending the domain-specific expertise
and by extending the geography over which you search.
And if you sort of integrate that big space
of reaching out to other people, you come up with a pretty big area
under the curve of people that may actually have solutions
for the problem that you're asking.
So that's the sort of long tail and accessing
that long tail is one way of promoting innovation.
Another idea that was really very interesting
and basic is the whole idea of the maximal value
of extreme innovation, if you will, and it's called a bunch
of different things, but the concept just being: if I run
around and I source innovative answers for a question,
what am I going to get?
I'm going to get a lot of stuff that's nonsense.
Just like they did with the Longitude Prize,
you know, the dog barks in the night and you're
at fifty-six degrees or something or other.
I mean, obviously not true.
And then you're going to get a lot
of stuff that's really average but it's not really helpful
and it's not really insightful and it's not really innovative
and it's not going to have huge impact.
What you have is this little teeny tiny fraction at the end
of the bell curve of really startling ideas.
And to get those ideas you're going to have to engage
in processes that uncover all the rest
of that stuff that's not very useful to you.
And that concept of trying to go for the gold, to try to get
to those really extreme value ideas through a process
that creates and makes you have to plow through all
that other stuff, and being willing to do that
and accept the fact that you're going to get a lot of material
that to you is valueless, is not something
that is very comfortable for academic biomedicine at all.
And so that seemed to us to be an interesting precept
that hadn't been explored.
So we wanted to take on those ideas and try
to get some experience using open innovation
and what we did first was really an --
it was really a proof of principle exercise
that had a number of major points.
And the points were that we wanted to engage the community
in an open innovation exercise actively
so that they could start to understand it.
The second was that we wanted to explore the idea of trying
to reach out to really do a crowd source
in our academic setting.
We wanted to get experience in framing a question.
Now, you would think for scientists
that it would be easy to frame a question,
but scientists frame questions for themselves.
They don't frame questions
so that other people can understand them.
And for anyone who's ever tried to respond to an RFA, you know
that questions are formulated in very, very, very specific ways.
And they're very hackneyed sorts of ways.
They are not ways that are actually set
up to be accessible to larger and diverse communities,
so we wanted to have a little experience with that.
And we wanted to really see if we could push
and promote the idea of a reduction to practice kind
of question, as you've heard about here,
something that actually was a very concrete deliverable.
So none of that sounds exciting right?
The issue is that none of that is how academic medicine,
or biomedicine, I mean, sort of encompassing all
the biological fields, is actually practiced.
And it really isn't, because when you work for a company
or you work for an organization, the likelihood
that your division does well, the likelihood
that you get a raise, that you get a bonus,
that your company has stock options,
I mean whatever else you want, is in general dependent
on the aggregate success.
And so you contribute to that aggregate success.
In academics it's really very different.
This has been written about in sort of different kinds of ways,
different from the context in which I'm using it,
but you are in principle an independent contractor.
The likelihood that you are funded
and have lab space next year is dependent
on whether you get a grant,
not whether you help somebody else get a grant.
The likelihood that you get promoted is dependent
on you developing very esoteric and narrowly focused expertise;
in fact the criteria for promotion
at Harvard Medical School or at Harvard are that you are one
of the three best people in your field in the world.
Alright, so most of us are not talented enough to be one
of the three best people in our field in the world.
But if we define a field
in which there are three people we will be one
of the three best people in the world.
And so people are encouraged
to actually really narrow their interests and to adhere to that,
and a criterion for not being promoted at my institution is
that one is too diffuse.
And diffuse in our universe is actually a degree of haziness
that no one else would recognize as being diffuse,
but I mean there's incredible pressure to be narrow
and precise and to be uncollaborative.
And if you're being successful means
that you are entirely dependent on sort of that, that sort of --
whoa that's not the talk.
Help, help, help.
How do I go back to where you were?
Like how do I go back to the disc or
>> That's [inaudible] the MIT Lab.
>> Dr. Dinan: That's the yes.
That's the one you have from where?
No that's not it.
Oh you were pulling something up from the MIT Lab?
>> Yes.
>> Dr. Dinan: Is that it?
>> [Inaudible] the MIT lab.
>> Dr. Dinan: The MIT lab.
Alright well it's not the talk but we can always adjust.
We can go somewhere else.
[ Silence ]
>> Dr. Dinan: Alright so let me just finish what I was saying.
So, so all of this kind of really goes
against the grain of academia.
We found an experiment that we thought was actually really very
suitable and the experiment
that we found was actually an exercise in genomics.
So in, in the area of genomics the sequence
of things is really very, very important.
And I have absolutely no idea what's on here anymore.
Okay. So we did an experiment where we wanted to look
at an important problem.
Now, the NIH has actually put an enormous amount of resources
into a program that's called MegaBLAST.
It's actually been providing the sequence data on DNA sequences
for pretty much the whole world for free for the past decade.
It is the state of the art.
It does that at a certain speed.
And by sequence what I mean is, if I had a piece of DNA
and I wanted to know how it was put together, what's the
prediction algorithm for looking at that.
You know, I would do it with a pencil.
ABC, CBA, BCA.
I would do all the permutations.
And frankly that's exactly what MegaBLAST does very efficiently
and rapidly with a really big computer.
But it's actually not that fast and it's actually not
that accurate, but it is definitely the state of the art
and it takes a lot of money to maintain.
And the question was could one do any better than that?
Why is that important?
Because predicting those sequences allows you
to construct new molecules,
predict how proteins will be created, and,
and how you might modulate them.
So the experiment was to facilitate the involvement
of an external solver community in trying to improve
that algorithm for doing that sequence analysis.
And we worked with TopCoder, a group that's well known to you,
and with them we formulated this problem in terms
that could be comprehended by someone
who wasn't a biomathematician: in sequence,
calculate mathematically the Levenshtein [assumed spelling]
distance between the query string
and the original string of DNA, scored both for the accuracy
and the speed of the prediction,
and we ran a two-week competition.
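To make the scoring concrete, here is a minimal Python sketch of the kind of metric described: a Levenshtein edit distance between the predicted and true DNA strings, combined with runtime. Only the "score on accuracy and speed" framing comes from the talk; the solver interface and the weighting are hypothetical.

```python
import time

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def score_submission(predict, query: str, truth: str) -> float:
    """Hypothetical combined score: lower edit distance and lower runtime are better."""
    start = time.perf_counter()
    result = predict(query)
    elapsed = time.perf_counter() - start
    return levenshtein(result, truth) + 1000.0 * elapsed  # illustrative weighting only

# Example: a trivial "solver" that just echoes its input.
print(score_submission(lambda q: q, "GATTACA", "GATTTACA"))
```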
Now, MegaBLAST has existed and been improved upon
for a decade and has certain performance metrics.
There is also a researcher at Harvard who's been working
on this problem, funded by about three different R01 grants,
with collaborators from Stanford and from MIT.
And he has actually, as an institutional effort,
as that grant, improved on MegaBLAST somewhat.
So we also had a better-performing individual solution,
and we created a bunch of test cases, and we put this
out there through TopCoder as a problem.
For two weeks we divided people into three different groups
because we had an internal experiment.
And we incentivized each of those groups
with two thousand dollars.
As you know, an R01 is $250,000.
I don't even know how much money has been put into MegaBLAST.
And we ran it.
And I don't remember what I put on this slide -- on this talk.
Anyway, there were 733 coders who registered.
They got put into these three treatments.
As you can see, over a hundred coders made close
to 700 submissions.
The top 34 people beat both MegaBLAST
and the local improved solution by a factor of 10
to the second to 10 to the fifth.
I mean, they didn't do twice as well, you know,
I mean this was just ridiculous.
In two weeks, people who had no idea what this was about
found, and this I found much more striking,
nine different approaches.
When we went back and looked at the solution spaces
of those 34 people there were nine different approaches.
The academic community missed nine different potential
solutions uncovered in under two weeks.
Now, you imagine if we had let it run for a month
there would have been more solutions, and I
doubt nine is all that there was.
And these solutions were dramatically better
by both criteria.
Both because they had to score both for time and for accuracy.
So we thought that this was really nice proof positive
that you could actually do a reduction-to-practice experiment,
that you could take from the community a difficult problem
that it had been unable to crack, put it out there
incredibly cheaply and efficiently,
and come up with a good resolution.
So armed with that sort of proof of principle we decided to sort
of go forward and in a sense really try
to disaggregate how the research process happens in academics
and see if we could understand where the opportunities
for improvement might arise in each situation.
Oh I just realized what's not in this talk.
Okay. I'll cross that bridge when I get there.
By the way your tunnel is flooded.
Speaking of bridges and tunnels and things.
So we then went on to do another experiment
and this one was in Type 1 diabetes.
So Type 1 diabetes as you know is a huge socioeconomic
and medical problem.
It is the leading cause
of kidney failure in the United States.
It's a huge contributor to blindness,
cardiac disease and so on.
So it's economically really important, and innovations
in diabetes have been few and far between.
So we picked that as a question
that we thought, one, was important,
and two, would engage a broad number of people,
because there's a lot of buzz about diabetes,
and even though some people confound Type 2
and Type 1 diabetes, people might still have some sort
of emotional reaction to it.
And what we did was decide to really start
at the very beginning of research, which is
where does the question come from.
And the questions in diabetes have come
from this really narrow community of people who work
in Type 1 diabetes, and they really haven't come
from outside at all.
So what we did is we worked with InnoCentive [assumed spelling],
a different partner, and we formulated an ideation challenge
that we put out to the community in which we said,
in the broadest possible terms, "Tell us anything
that you think would impact any aspect of diabetes.
Diagnosis, prevention, amelioration,
therapy, prenatal diagnostics.
Whatever you want that you think could change something
with Type 1 diabetes we want to know about it
and you absolutely do not have to write down anything
about reduction to practice.
You don't have to have the resources
to reduce it to practice.
We don't want to know how you would do it and just do this
in three to five pages."
We incentivized it with $25,000 saying that the prize...
[ Silence ]
>> Dr. Dinan: ...would be for individuals anywhere
between $2,500 and $25,000 because, depending
on the responses, we would give out between one and 10 awards.
And we ran that on the InnoCentive platform
and internally at Harvard so inside and outside
for approximately six weeks.
And what we found was pretty amazing.
First of all we got endorsement from the top.
So the President of Harvard, Drew Faust [assumed spelling],
heard about this because I was trying to get an email list.
There is no email list for Harvard it turns out.
The President doesn't have an email list.
But it did actually flag this to her attention,
which was inadvertent, and she actually endorsed it and
wrote to those people for whom she does have an email list
that she thought it would be a good thing if they participated.
So it helped because we had some top-down buy-in.
We ran this and what we saw -- oh, this is really different.
Okay, let me think about this.
What we saw was that about 800 people looked at the question,
about 200 people made a submission,
and that this was equally divided.
There were about 100 ideas that came from outside of the university
and about 100 ideas that came from inside of the university.
I'm actually going to not flick this slide for a second,
because I'm not even sure what's on this slide.
And the first question, just out of those submissions,
was did we actually accomplish anything we wanted to?
Did we bring different solvers to the table?
And the answer was yes.
And in the pie graph that you're not seeing,
what we actually showed was
that about half the people actually had MDs or PhDs.
So those are the normal solvers actually, and half
of the people had anywhere from a masters degree down to
not having finished high school.
Because we blasted it out to everybody, not only
through InnoCentive, but when we went inside Harvard, I mean,
we sent it to the groundskeeping crew,
we sent it to the HR Department,
we sent it to all the administrative staff,
we sent it to all the students, graduate students,
the medical faculty, etcetera.
And we went through all
of Harvard's 17 different institutions
and affiliated hospitals, so it was really broad.
So yes, 50% of the people that we brought in were people
who would never respond to a scientific question
or participate in writing a grant.
So we did indeed reach out to a different community.
And actually that is sort of in here.
So you can see the left hand side are the professional
degree, doctoral degree kinds of people
and that whole other side are people whose voices we never
hear in this discussion.
Another way of thinking about this is did we get people
of different motivations.
Well, the motivations of academicians may
or may not be altruistic, but they are certainly self-centered
as well because, as I said, that's how you keep your job.
I mean, you are a contractor and you need to be able
to maintain your position.
We asked the people who did this, and only about half
of them responded to the survey that we did of the submitters,
whether anyone in their family had Type 1 diabetes
or whether they themselves did, and almost 50 percent
of the individuals, the right hand side there,
said that they or a family member had Type 1 diabetes.
The prevalence of Type 1 diabetes
in the United States is about .025%.
So the enhancement here, at 50%, is enormous.
Now, if we assume that half the people don't know the difference
between Type 1 diabetes and Type 2 diabetes, because I worry
about that, so if you said it was 25%,
I mean we have a thousand-fold enhancement
for having some personal connection to the disease,
which is always a great motivator for doing something.
So again, this is completely unrepresentative unless Harvard
happens to be a uniquely diabetes-prone community,
and it's uniquely lots of things,
but I don't think we have more Type 1 diabetes.
So it's a very different profile of people coming
to the question than what one normally sees.
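A quick worked check of the enrichment arithmetic as stated (the prevalence figure is the speaker's):

$$\frac{50\%}{0.025\%} = 2000, \qquad \frac{25\%}{0.025\%} = 1000$$

so even the conservative reading gives roughly a thousand-fold over-representation of personal connection to the disease.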
I'm not going to do it that way.
I'm going to send Jeff this other slide
because I really want you to see it.
And I can also take --
no I can't do it because you're on the WebEx.
So we then decided to sort
of attack the second part of the problem.
So one part was sort of getting the ideas.
And then the second issue is how do you filter those ideas, right?
So once you get really good ideas, if they are
in fact innovative, how do you then do something with them?
Well, you have to have an evaluation process
to go with the ideas, and, you know,
we thought, as others have before certainly,
that if you have a very conventional evaluation process
you're going to end up with very conventional ideas,
and it doesn't really matter what you put in the top
of the funnel: if you put it
through exactly the same filter you're going
to get the exact same thing out the other end.
So we created a very different evaluation process.
And what I did was put together eight groups of evaluators.
So I had eight groups of 30 people, so that is about the size
of the biggest NIH study section, right?
So there were 240 people who were asked to evaluate.
We went with electronic evaluations because we wanted
to avoid any sort of superstar effect.
For those of you who've ever sat in a study section, or if you sit
in your own evaluations, I mean,
as you know it is either the loudest, most alpha,
most committed, or the person who shows up at all
of the evaluation sections who actually carries the day
in terms of what gets funded, right?
And we didn't want that to happen,
so we did it electronically
so that there could be no confrontations
or influence directly between people,
and we put together packets of 15 grants at a time
and sent them out to people in these big cohorts of 30,
so the grants were seen multiple times,
multiple scores, everything was randomized.
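A minimal sketch of that randomized review design, assuming the packet size of 15 and cohorts of 30 stated above; the particular dealing scheme (shuffle, chunk, round-robin) is an assumption.

```python
import random

def make_packets(submissions, packet_size=15):
    """Shuffle submissions and chunk them into packets of fixed size."""
    pool = list(submissions)
    random.shuffle(pool)
    return [pool[i:i + packet_size] for i in range(0, len(pool), packet_size)]

def assign_packets(reviewers, packets):
    """Deal packets out round-robin so each packet is scored by several reviewers."""
    return {r: packets[k % len(packets)] for k, r in enumerate(reviewers)}

subs = [f"idea-{i:03d}" for i in range(150)]      # hypothetical submission IDs
revs = [f"reviewer-{i:02d}" for i in range(30)]   # one cohort of 30
packets = make_packets(subs)                      # 10 packets of 15
assignments = assign_packets(revs, packets)       # each packet seen by 3 reviewers
print(len(assignments["reviewer-00"]))            # 15
```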
Well, eight groups; what were the eight groups?
Two of them were outside of our institution.
So we sent the submissions to the Juvenile Diabetes Research Foundation's...
[ Silence ]
>> Dr. Dinan: ...review panel.
So this is a group of people who spend all of their time as part
of the biggest funder of Type 1 diabetes research in the world.
And they evaluated them.
We also sent them to a community of people
who are either venture capitalists or biotech or pharma people,
or people who were really high up actually
in the pharmaceutical industry,
and we had them do the evaluations.
And then we did six groups that were inside of Harvard.
So inside of Harvard we created three categories.
You're a Type 1 diabetes expert: you're an endocrinologist
and you publish in Type 1 diabetes.
You're a collaborator of a Type 1 diabetes expert:
you have published with them but you've never published
in diabetes, so you could be a rheumatologist,
or an autoimmunity person,
or an ophthalmologist, or a nephrologist.
And then people who neither published
in diabetes nor collaborated with those who published
in diabetes. And we have the tools to do this, actually, going
through these big databases:
Griffin Weber has created something called
Profiles at Harvard, where we can actually do this
in a very reasonable way.
So we created those three categories and then we parsed each
of those three into two groups.
People who were experts, so those were the full professors
or people who had published really extensively and recently
in high-impact journals, and then those people who were junior,
so those people who were instructors
or assistant professors and who hadn't published very high
profile papers at all in these fields.
And we asked them to evaluate these papers, and what you see,
if you actually sort of drew lines
for any graph, looks like spaghetti.
In fact, if I go back to this not-so-nice depiction.
I had promised Jeff I would do something really good, and I did,
and you're not seeing it, so I'm so sad.
Because this is not a good way to look at it, but if you,
for example, look at 72 up in the top there on the left,
with the black bar around it, and then you look in the VC Pharma.
That was their first choice for impact.
And if you look at JDRF, they didn't even rank it
in the top 22.
And then if you look at the Harvard diabetes experts,
you know, they ranked it in the middle.
The junior people didn't care about it at all,
and so on and so forth.
So, I mean, it was completely erratic, and if you look actually
at the Type 1 diabetes experts, who are the most
like those who parcel out money from the NIH,
and you compare how their top 10 went
into all the other categories.
It looks like a waterfall that sort
of spreads out on both sides.
And there's no correlation,
and particularly concerning, frankly, the Type 1 diabetes
experts have absolutely no correlation with the people
at JDRF, and I mean you would think those should
be synonymous.
The people most involved in the field,
with the most academic expertise,
seeing the most dollars, seeing the most data every single day.
And there was no correlation, statistically
or in any other way that you could possibly fathom.
And there's no correlation either with the people
who are actually funding this work.
So what it said is not that we've discovered that, you know,
the review process is flawed.
We all understand that, but what it really did was help
to make the point to our community that if we are relying
on reviewers to choose things that are innovative
and that are going to have impact, and we believe
that that is a meaningful process, we need to rethink
that, because I could give you eight different meaningful
outcomes of review, really good people, you know,
done simultaneously. So if we think that's an appropriate
filter, it's not.
It's a filter that we need to understand
in a very different way in order for it to be meaningful
if the final product is going to be innovation.
That may be a better filter for other things or not,
but in this area, where people were asked to assess
for different things, it really was
so incoherent as to be laughable.
When all was said and done we went with the average
of what all of those groups said
and what we saw was really interesting actually which is
that we have a very, very mixed group of people who won.
And I'm not going to take you through all of these, but sort
of working from the top on down.
At the top is a basic science PhD; then there was an MD
who was doing a public health degree,
who was here from South America.
The next person is a retired dentist.
He's living up in Maine.
A PhD who does prostate cancer statistical studies.
An MD student who'd finished his PhD.
One of the people who works in our HR Department,
who himself has Type 1 diabetes.
A couple of physician investigators.
And if you look right here, third from the bottom,
one of our Harvard undergraduates, who made one
of the most sophisticated and novel
of all the applications, actually.
So you know we got pretty different people in there
and I'm going to cry because this isn't going to work.
I wish I could do this for you and I can't
because it won't transfer unless it's on the other computer.
What these two slides had were the words of two
of the winners. This gentleman is Jason Gaglia [assumed
spelling], and Jason's an endocrinologist
who actually works at the Beth Israel,
so as he says here, he's one of the few people who came
to our awards ceremony, these are the films
from the awards ceremony, who is actually someone who thinks
about diabetes all day long, which is what he does.
And what he said in this was so striking to me.
He said "This was such an incredible",
think about how sad this is,
he said "This was an incredible opportunity.
We we're told that we can just sit down with a piece of paper
and think about the best way to approach a problem
without any limitations.
We could pretend we had all the resources we had in the world,
we could pretend we had all the time in the world
and we could think of ideas.
And he said, "You know, we never get a chance to do that."
Well, the truth is that you always get a chance
to do that, but we don't.
I mean, we don't see ourselves as being allowed to do
that; perhaps those we work for don't see us as being allowed
to do that; but it was very, very striking.
The second person whose talk I taped and listened
to is our young undergraduate,
Meagan Bleuit [assumed spelling].
So when we had this ceremony, the people came from the Office
of Science and Technology Policy,
and as you know they've been very involved
with the America COMPETES Act and competitions,
and she made their day.
I think she's been quoted more times than anyone her age,
probably in US history at this point.
But Meagan, who's very smart and also very insightful, was talking
about this process, and she explained her idea,
and then she said that being involved in this really
reminded her of a comment
that her computer science professor had made when he was talking
about barriers to entry, and he was explaining to the class
that a lot of the success, in his view, of what had happened
with computer science was that the barriers
to entry were very small.
Internet businesses and so on.
And she said she'd been thinking about that, and that the barriers
to entry for scientific work are so enormous,
and she made the comment that, you know, there was sort of a BMW
or several BMWs on each lab bench
when she went walking around.
And she said, "But what the contest did was
that it lowered the barriers to entry,
because all you really needed were your ideas
and internet access to enter."
And I think between the two of them they really made some
of the points that we were fishing for, which is
that we certainly don't,
in academic biomedicine, have circumstances that elicit
and permit the flow of innovative ideas
from people in a useful way.
So what did we do after that?
So ideas are one thing.
Lastly, what we did was really try
to reduce it back to practice.
So we actually were made a gift of $1 million
from the Helmsley [assumed spelling] Foundation, and we took
that because they wanted
to support something really innovative.
And what we did was we took those ideas
that I just showed you and we mapped them, for real,
we actually mapped them to make an RFA.
So the people who were distributing the Helmsley funds
wrote an RFA, and the RFA looked like every RFA I've ever seen
in my life, so it was completely counter to what we wanted to do,
and they thought they were doing something really innovative.
So we actually took those winning submissions, we mapped
the MeSH terms and all the keywords in those submissions,
and then we wrote an RFA that was truly matched to the ideas
that the people had brought forward, with the examples
that they had provided, and we used that as the RFA,
because the language is completely different
from what we normally see.
And we blasted that out to the Harvard community,
and we specifically targeted people by going
after the MeSH terms, so that if there were MeSH terms in there
about microfabrication or liposomes or lipids, we went
through the Harvard indices and we went to those communities
of people, and we wrote them emails that came,
ostensibly, from our Executive Dean for Research,
that said, you know, "We are going to fund six
or seven grants at $150,000 to $200,000 a year
in the area of Type 1 diabetes, and you have been identified
as having expertise in an area that matches some
of what we are looking for.
We know you haven't worked in Type 1 diabetes before,
and you will not be discriminated
against for not having a track record.
And we do want you to take this seriously, because we do think
that you have something to offer."
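A minimal sketch of that MeSH-driven targeting, assuming a simple term-overlap match; the profile data structure here is invented (the real lookup ran through the Harvard indices and Griffin Weber's Profiles system).

```python
# MeSH-style terms pulled from the winning submissions (illustrative values).
winning_terms = {"microfabrication", "liposomes", "lipids"}

# Hypothetical faculty interest profiles keyed by investigator.
faculty_profiles = {
    "investigator-A": {"liposomes", "drug delivery", "polymers"},
    "investigator-B": {"cardiology", "stents"},
    "investigator-C": {"microfabrication", "microfluidics"},
}

# Invite anyone whose profile overlaps the challenge vocabulary.
invitees = sorted(name for name, terms in faculty_profiles.items()
                  if terms & winning_terms)
print(invitees)  # ['investigator-A', 'investigator-C']
```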
And at the end of the day there were 31 submissions
of which nine were awarded.
And of those 7 of the applicants had never worked
in diabetes before.
That never happens to us, because you can't get your grant looked
at; you put it in there and they say,
"You don't have a track record in Type 1 diabetes,"
and on the questions about whether you have resources,
and whether you have a track record,
and what your past publications are, they say you haven't done it,
and the only pile that you can ever get in is an award
for junior investigators, but you can't get
into any other funding pool.
And we actually pulled those people in,
and they had marvelous new technologies
which they're now sort of using and applying to diabetes,
and we have also done some facilitation
to actually have them form a community
where they're actually being supported,
and we've brought some diabetes experts in to help sort
of provide the background information
that these guys just don't have,
because they can't even spell diabetes.
But what we now have is the best molecular biologist
at Harvard now working on diabetes.
He's just found a new way to find autoantigens.
He's already got it working.
And I mean we found him the samples,
we found him the connections, but what we've done is taken
in a really new solver, who is very,
very distal to this problem,
brought him in by asking a question in a different way
and by doing a different kind of directed outreach
and facilitation, and you know in the end we have
to see what the research does, but it is a way,
I think the entire process is a way, of trying to not focus
on one issue but to do what you've seen happen here, which is
to really look at this continuum
and say there are just huge problems at multiple places,
and unless you actually tackle them in some sort
of integrated way, at least for something as monolithic
as what we do, you're not going to move very far forward.
And so obviously the issue now would be to continue
to analyze all of this and to repeat these experiences and,
and try to build on them
but I think it does provide some evidence that you can take a,
a pretty uncooperative community and start
to move them a little bit.
Before I make it sound
as if this was all a big love fest: I said that, you know,
when we did the outreach to people asking them
to consider the diabetes idea, I used the email, you know,
like a fake email box from the Executive Dean,
because no one was really going to care if I emailed them
and asked them to do this.
And I almost lost my job, because Bill Chinslek [assumed spelling]
was on the phone going,
"I am getting these emails from people.
What is going on?"
And he got emails from full professors at Harvard
who inexplicably to me actually took the time to email him
and say how dare you and I quote
"How dare you ask me to consider this.
I am a cardiac expert.
You are ignoring my entire career.
Don't bother me."
Those are all direct quotes.
Not one time, not twice, a massive number of times.
So after I reviewed my feelings
about ever using someone else's email again in a dead mailbox,
you know, it was really very interesting,
and one, it was interesting that they felt
that way; two, it was really fascinating
that they felt it was important enough to take the time
to write those emails.
But that in itself was pretty educational about, you know,
the degree to which the community will
and won't embrace some of these concepts.
So, long road to go, but, you know,
some real progress amidst all of this chaos.
And hopefully, as an equal weight
to the number of people who are completely disgruntled, there is
a weight of people, you know, who are highly motivated
and feeling very enthusiastic.
So we will see what we can do.
Thank you.
I apologize for all the disorganization
and the late start.
[applause] Questions?
I mean I know our setting is different than yours,
in some ways similar in some ways different so I didn't know
if there were comparisons or ideas
that you wanted to explore.
>> Jenn Febree [assumed spelling]: This is Jenn Febree
from Johnson Space Center.
>> [Inaudible]
>> Dr. Dinan: Oh okay.
>> Jenn Febree: [inaudible] You know I,
I've actually heard you talk before when you came down here
to talk to MIT Innovation Lab
and it's always very interesting.
But I think what I might have forgotten
to ask the last time was:
after you showed the incongruence
between all those groups, who actually did pick the winners,
and kind of what was your rationale for moving forward
with those really novel concepts?
>> Dr. Dinan: Sure, so we didn't actually --
so for the ideation challenge we had asked the evaluators
to score on a one to 10 scale, using a Likert scale,
for impact, a one to 10 scale for feasibility,
and to do an allocation exercise.
You know, if you had $100,000 and you had
to allocate it among the 15 grants that you saw.
And we just actually went with the impact score,
because in principle the real question
that we had asked people was for something
that would impact diabetes, and we just did the math
across the 224 reviewers and then took the --
and there was enough separation
that we could actually take what ended up being the top 12,
and we made 12 awards.
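A minimal sketch of that aggregation, assuming each reviewer's 1-10 impact score is simply averaged per submission and the top-scoring submissions win; the data below are invented.

```python
from collections import defaultdict
from statistics import mean

# (submission, reviewer, impact score on a 1-10 Likert scale) -- made-up values
ratings = [
    ("idea-001", "r1", 9), ("idea-001", "r2", 8),
    ("idea-002", "r1", 4), ("idea-002", "r3", 6),
    ("idea-003", "r2", 7), ("idea-003", "r3", 9),
]

by_submission = defaultdict(list)
for sub, _, score in ratings:
    by_submission[sub].append(score)

# Rank by mean impact score and take the top 12 (here fewer exist).
ranked = sorted(by_submission, key=lambda s: mean(by_submission[s]), reverse=True)
print(ranked[:12])
```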
So that was truly completely objective, with one exception,
which is that I made the executive decision that one
of the grants that was very highly ranked was not eligible,
and that was because they proposed to use iPS cells,
which is something that the Federal Government has already
funded to the tune of, you know, massive numbers of tens
of millions of dollars,
and I didn't think it met the criteria for being innovative,
and so we eliminated that one.
I mean, very interesting also that one
of the highest rated things was the one
that was the most crushingly not innovative,
but that's a different question.
Other than that, there was no editorializing
or manipulation of that.
That was the straight map.
For the Helmsley grants, where we went out looking,
where we invited, actively invited,
very different participants to the table to look at the results
of this challenge and to reply
to a formal RFA, there was a committee of about 20 people
who were not chosen by us, I mean,
Helmsley wanted a conventional evaluation process,
so that was a group of Harvard investigators and scientists
across a lot of fields who were not instructed, per se, to look
for people
who had not been funded.
They were instructed to look for things that were innovative.
And then they just ran it like a regular sort of study section.
>> Jenn Febree: Thank you and I guess I had one follow
up question on kind of your last comment
about the email responses.
>> Dr. Dinan: Yes.
>> Jenn Febree: We, we talk about a lot of these efforts
as cultural change agents.
>> Dr. Dinan: Yes.
>> Jenn Febree: And now that maybe some
of these people are seeing your results and the impact
and different ways of thinking
about solving problems do you have any way
to gauge whether those people might have a different attitude
and even entertain the thought of looking
at problems outside their sphere of expertise or do you think
that it's probably not a very big dent that's been made?
>> Dr. Dinan: I think there are two different answers to that.
The first, simplest answer is this is a very entrenched
community, and I think real cultural change is going
to take a long time.
The second is that it is a community that's very threatened
by the fiscal climate right now for research,
and the fastest way to, you know, get a horse to water is
to have there be water, right?
So I think that we can encourage participation
through the provision of funding.
In concert with something very important that happened.
And as luck would have it, in the same way
that Drew Faust endorsed the original ideation challenge,
two of the people actually who ended
up getting awards, one is a PhD basic scientist who's just
like one of the world's best,
and the other one is a cancer researcher, also a full professor,
who's equivalently good.
And both of those people are now sort of publicly identified
as having stepped forward and come up with these really
out-of-the-box dramatic ideas for diabetes.
And that's been extraordinarily helpful, and I think, you know,
where somebody really prominent goes, the rest
of the community is a little bit less hesitant to follow.
One, because there was a certain cool factor that they did it,
and two, because they are so incredibly strong
as basic scientists that I think it's hard
for the very entrenched people to say,
"Oh come on, we're not going to take that seriously,"
because it's sort of like saying,
"Well, you know, Einstein did that," I mean come on,
you know, I mean, they're really good, so we
got some real traction from that.
That probably took about five years off the cultural
change continuum.
>> Jenn Febree: Yes thank you that was very, very helpful.
>> [Inaudible]
>> Dr. Dinan: It's that kind of a day Jeff.
>> The question I had for you is looking
at the evaluation panels that you had.
>> Dr. Dinan: Yes.
>> Is there any data to be mined from that in terms
of how people responded? I guess what I was thinking is, are there
questions you could pose to evaluation panels
that might even out their scoring now,
or is there some feedback from this challenge?
>> Dr. Dinan: Yes, I think that doing exactly
that is something that we're very interested in.
So what we're doing at the moment, and we are sort
of knee-deep into it, is looking at the concordance
of people's own research interests
with the grants that they reviewed.
So again, using MeSH terms from PubMed
to match them, we're looking to see whether people were harder
or easier on the grants depending on
whether they scored them, that's normative,
whether they scored more highly or more lowly, if it was more
like their own work or more distal from their own work.
And then obviously we're looking at it by gender
and by academic rank and doing all of those sorts of things.
And the answer is that there are huge effects of those things,
and so we're still sort of lining all of that
up, because it's hard, it's a lot of scanning
of the literature to look at what it is.
But I think in the end the answer, well,
half of an answer to your question, is that I think
one will be able to build a profile that says
if you want this to have a certain kind
of look, you know, you want these kinds of people,
and if you want it to have that kind of look...
And I think we'll be able to also see whether
or not the things that are really innovative, as measured
by how many times the terms in that proposal actually appeared
in PubMed, I mean how often had those things sort
of come up together or not.
So I think that there will be, you know, with time
and evolution and repetition and more people trying to do this,
I think that there will be some ways of trying to figure
out what might be a good kind of panel
to review things for innovation, and what might be a good kind
of panel to review things for refinement of an existing field.
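A minimal sketch of that concordance analysis, assuming proposals and reviewers are each reduced to sets of MeSH terms and similarity is measured by Jaccard overlap; only the MeSH-matching idea is from the talk, the measure and the data are stand-ins.

```python
def jaccard(a: set, b: set) -> float:
    """Fraction of shared terms between two term sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical MeSH terms from one reviewer's own PubMed record.
reviewer_terms = {"insulin", "beta cells", "autoimmunity"}

# (proposal MeSH terms, score that reviewer gave) -- invented data
reviews = [
    ({"insulin", "beta cells", "imaging"}, 8),
    ({"microfluidics", "biosensors"}, 3),
    ({"autoimmunity", "T cells"}, 7),
]

# Pair similarity with score; across many reviewers one would then test
# whether nearer-to-home proposals are scored systematically differently.
pairs = [(round(jaccard(reviewer_terms, terms), 2), score) for terms, score in reviews]
print(pairs)
```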
I think we can probably create a screening question.
Jeff also reminded me that another thing
that we've used the MeSH terms for was the question that I'd
certainly gotten from the scientific community,
which is, well, how do you know that any
of these ideas were innovative anyway?
You can take all of the MeSH terms in Type 1 diabetes,
the way that PubMed indexes them, for the past 10 years,
and you can do these sorts of repetitions
where you just pick 150, like we had 150 proposals,
and you see what the overlapping terms are inside of them,
so how much they're about the same stuff
and how many unique terms there are,
and then you can do the same exercise for our 150,
and what you show is
that there's this really big sweet spot
where there's no overlap
between the literature combination
of terms and what we see.
And that it's much greater than for any of --
for literally it's like 20,000 permutations of any sets
of 150 papers out of the literature in Type 1 diabetes.
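A minimal sketch of that permutation test, assuming each document is reduced to a set of MeSH terms; the overlap statistic here is a stand-in for whatever the actual coding used, and the 20,000 draws follow the figure in the talk.

```python
import random

def overlap_fraction(term_sets):
    """Fraction of all distinct terms that appear in more than one document."""
    counts = {}
    for terms in term_sets:
        for t in terms:
            counts[t] = counts.get(t, 0) + 1
    shared = sum(1 for c in counts.values() if c > 1)
    return shared / len(counts) if counts else 0.0

def permutation_baseline(literature_term_sets, n_docs=150, n_perms=20000):
    """Overlap distribution for random 150-paper draws from the indexed literature."""
    return [overlap_fraction(random.sample(literature_term_sets, n_docs))
            for _ in range(n_perms)]

# Usage (data omitted): compare overlap_fraction(submission_term_sets)
# against permutation_baseline(literature_term_sets); submissions whose term
# combinations fall well outside the baseline are different in a meaningful way.
```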
So we did elicit ideas that were different in meaningful ways,
and we've done that with two different people doing the
coding, formal MeSH librarians, and that's reproducible.
So I think you can get new ideas;
whether the new ideas are better or worse than the old ideas
is a different question, and only time will really tell.
>> And I guess the ultimate follow-up question is,
is there energy around doing this again
on another topic within the community?
Was there enough of a...
>> Dr. Dinan: Yes.
>> Jeff: Take hold of...
>> Dr. Dinan: So we're going to do a couple of things.
One thing which we're doing right now is going back
to the general mix idea, and that has to do
with cultural change.
You know, the reduction-to-practice kind
of TopCoder things are very attractive
because they yield a numeric answer
in a very short period of time.
And it's hard to argue, you know,
it's ten to the fifth faster; what are you going to say, it's not?
I mean, it is or it isn't.
As opposed to this diabetes thing, which may
or may not be different six years from now, and may
or may not have happened anyway.
So we are working with TopCoder right now
to formulate a sort of one-and-a-half-page statement
of what it is you would have to have
to pose an algorithmic question.
So, an exercise many of you have already gone through here.
But we're going to put that out to the Harvard community,
not as a contest, but really sort
of as an ongoing rolling open ask.
You know, do you have a data set with a problem
where you're stuck and you'd
like some help getting it unstuck?
And if we evaluate it, and that would be a group
from TopCoder largely, and think that it's tractable
and that there's a validation data set, then we will
provide the money to have TopCoder do it, actually
with some funds that I have from a supplementary grant.
And what we're hoping to do by that is
to get the community sort of thinking about the fact
that they can in fact go elsewhere,
and that they can get themselves unstuck, and to get some practice
in thinking about what the data sets are
and what the mechanism for doing that is.
So that effort's going to start actually, that's going
to roll out next month.
So that will be happening.
And then the question is what's the next thing
that we want to do here, and what we're thinking
about is maybe trying to do some sort
of technology pull experiment.
Where rather than doing an ideation challenge per se, my
idea at the moment is to perhaps say, "We have a laboratory
of innovative technologies that's part
of our Harvard Catalyst Center, and it gets a lot
of really cutting-edge, not commercially available,
technologies that companies want branded,
they want explored, they want benchmarked,
they want some co-development, whatever."
What we're thinking about doing is taking a number
of those technologies, again writing up a description,
and then running contest around those.
So you know propose an experiment, real experiment,
using this technology that would answer a question
in Lou Gehrig's disease.
You know, sort of to have a concrete core, and then the point
of that would be that it would bring the new technologies
into our population of investigators much earlier.
So it'd make the technologies visible
to a bigger group of people.
That in and of itself is an important goal for us.
And then it might get something really clever going
with those technologies.
So that's an idea that we're kicking around, I mean,
we're still trying, I mean --
I don't think just doing ideation challenges, you know,
tell me your ideas about Crohn's disease, tell me your ideas
about cardiac disease, is really going to work.
It's a little bit too amorphous.
And there aren't a lot of diseases
that have quite the emotional draw that diabetes does,
and cancer does but it's too big.
So we're trying to figure out different ways to get at it.
I think technology is one way of moving it.
We'll see.
>> Chris Fallophis [assumed spelling]: Hi,
Chris Fallophis of Naval Research.
This cultural block question is really huge, right?
>> Dr. Dinan: Yup.
>> Chris Fallophis: And we talked, when I met you at NIH,
about this idea of trying to persuade NIH or NSF
to let investigators spend a little bit
of their grant on a challenge.
>> Dr. Dinan: Yes.
>> Chris Fallophis: I wonder if that kind
of thing wouldn't bootstrap a cultural change even more
than solve problems, right?
When you let somebody try it themselves
and see what they get back.
Do you have any thoughts on this whole ordeal?
>> Dr. Dinan: Yes, first of all, I totally agree
with you, and that is in principle what we're trying
to do by having the algorithm thing just be rolling
and not even competitive.
And just say, you know, "If you can come
up with a problem, we'll pay to solve it for you."
>> Chris Fallophis: Right.
>> Dr. Dinan: I mean, so get used to this as a mechanism
that you're thinking of.
Yes, I mean, I absolutely think that it should be,
and I am currently engaged in a dialogue with the NIH
about my supplement grant, which was to sort of formalize some
of this and to try to create some SOPs for the other CTSCs,
and in that there's twenty-five thousand dollars of budget money
for incentives, for small contests, and they just,
I'm probably not allowed to say this, they just wrote back
to me and said that I need to write
yet another budget justification because that's not authorized
for NIH. But in point of fact it is authorized for NIH,
and I just went to challenges.gov
and to, you know, a whole bunch of the other offices,
and it's absolutely authorized.
So there are some impediments to overcome
so I'm rewriting my budget justification with all
of those sort of citations from all the federal sites
and the OMB language that says all the federal agencies are
authorized blah, blah, blah, blah.
So there are impediments to doing that but I agree
with you I think making it clear to people that they could try
to solve some of their own problems
in this way is very helpful.
And there are people who've sort of taken a --
done some interesting things.
So J. Bradner [assumed spelling], who's also with me
at the Dana-Farber,
who's an MD PhD synthetic chemist kind of person,
just discovered a new drug, and what he did,
and I think he just did a TEDMED talk on this or something,
is he's taken the drug and sort of put it in a public space.
And he's advertised it all over the place and says,
"Anybody who has a reasonable question, I'll send you the drug.
Try to do something with it."
So I think that there's a slow shift,
with some people who are willing to do stuff in a more open way.
And anything we can do like this, I think, would be helpful.
>> Jeff: Other questions on the line?
I know we have folks there.
>> Dr. Dinan: Okay.
>> Jeff: At Johnson's.
Anybody else, go ahead and jump in at any time.
>> Dr. Dinan: No.
>> Jeff: Okay.
Well, thanks very much again, it was great content, and sorry.
>> Dr. Dinan: We're missing visuals,
but you know that's how it goes sometimes.
>> Jeff: Yes, some visuals, and sorry about the weather
for the start, but really appreciate you coming.
What you've done with challenges like this, looking
at the evaluation panels, opens up, the way I look at it,
many other questions to be addressed
to move this open innovation tool into our, our [inaudible]
of solving problems, so I think it's really fascinating.
I hope many more parts of this will get written up as well.
And we will eventually.
>> Dr. Dinan: It's coming.
>> Jeff: post the testimonials at the end
because I think they're very powerful especially
with the folks who typically aren't
>> Dr. Dinan: Yes.
I will send you this, and I would encourage people to go.
I really would, it would only take a couple minutes.
I would encourage you to listen to what those two people had
to say, and look at the
diagrams, really, of the evaluations.
It just frustrates me.
I can't put that down and walk off, so
I will give these to Jeff,
and for anyone who's interested I'd really love you to look
at them, and what you'll see, it's so interesting,
what you'll see is a diagram that's sort
of got like eight nodes.
So like these are the VC people and the JDRF people,
and then the diabetes experts and their junior people,
the collaborators and their junior people, and the people
that don't have anything to do
with the field at all and their junior people.
And you'll see a bunch of lines, and it goes from red
at the bottom, which are the highly ranked things, to blue,
so there are like strings coming off the bars,
if you could imagine, 150 strings coming off.
Each of them is one of the evaluations, right?
And they're either red or blue, and then
they go like this, depending
on where the evaluation goes on each node.
So it's like the strings just going
across the graph like this.
And what you see is there's red in the blue, and there's blue
in the red, and I mean, it's just a complete mess.
The whole thing sort of looks purple.
If you go to the next one -- so that just gives you a sense
of just how random the whole thing is
and you can trace any particular string
and see what happens to it, it's crazy.
The other one, the second one, is really interesting.
What it shows is the top 10 picked
by the Type 1 diabetes experts.
And I indexed it to that because that's what we
in the community do all the time, right?
You have an expert panel because you want to sort of come
up with a new funding thing.
The Juvenile Diabetes Research Foundation wants
to have a new initiative so who do they call?
Those are the people who they call.
I mean those are the people that come in and sit at each
and every committee meeting, each and every site visit,
each and every foundation board.
So we indexed it to them, and it's really great,
and we just put all the other stuff in grey in the background.
And what you can see is what that group picked
and what every other group thought of them.
And when you look at it, it's so incoherent.
I mean, it's amazing some of them are
down at the bottom of the other groups.
They don't pair up.
It's very useful just to appreciate that.
>> Will: Hi, I'm Will.
I'm at NSF, and you know, NSF loves to talk
about the gold standard of peer review, but one take
on what you're saying is that basically
you could throw darts at a board and get just
as good results as NSF's best, you know,
the experts from the community.
But I wonder about, you're asking them
to do something very different than they normally do.
You're asking them to evaluate pie-in-the-sky kinds of ideas
with no constraints, no, you know, resource constraints
or other knowledge constraints as to what will or won't work,
and I'm just wondering if that's comparing apples
and oranges in some ways and...
>> Dr. Dinan: Well that.
I'm sorry.
Yes so that's why I was very careful to say that this was
about reviewing innovative ideas okay and,
and yes I mean I am making no statements
about what peer review for the literature should be
or for other things.
I mean, we ask, you're absolutely right.
We asked people to do a very specific task,
and it is interesting.
If you actually go to the literature,
the literature has a couple papers already in it,
sort of scattered across a bunch of disciplines, about the fact
that the thing that seems to be most difficult
to review is actually innovation.
In part probably because we have trouble defining it, and,
you know, any of a number of things, so that people sort
of bring a lot of stuff to the table.
And I'm not saying we have a solution, but I am saying
that there are ways of actually addressing who does what
when you give them a task like that.
And understand what it produces and what it doesn't produce.
What it does not produce is a coherent set of results
from people who are highly coherent with one another.
And that's just a take home.
>> Will: Right.
Yes, but I think you may be asking them to do something
that none of them are expert in, actually, which is
picking out what's innovative,
which might explain why it gets the random set
of results, right?
>> Dr. Dinan: Right, but the answer is
that no one is an expert at that, right?
>> Will: Yes.
>> Dr. Dinan: And yet we rely on mechanisms that were developed
to do something completely different to be the filter
for a different sort of process.
And...
>> Will: Well the, the one thing that seems hard.
>> Dr. Dinan: ...that's a problem.
>> Will: Yes, what seems hard about this, though, is
that I think if you were to go, say, to a venture capitalist
and ask them how they pick what companies they fund,
I mean what groups of people, the idea is a small piece of it.
I mean, there are millions of great ideas out there,
but if you don't have the right team of people
with the right expertise
who have shown you they can solve the problem, you're not
going to give them money, because the idea
is not worth that much.
So I wonder if this is asking the wrong question,
a little bit, about innovation.
You know, like anyone can have a great idea,
but if there's no way it's actually feasible,
where the person who had the idea can't make it happen, it
doesn't really matter, you know; it's an innovation vacuum.
>> Dr. Dinan: Well, I mean, actually, I didn't show it,
but we actually did the same metrics asking them
if it were feasible, which is something they have huge
expertise in, and there was no coherence in their deciding
if it was feasible either.
And then we asked them to do the allocation, which is basically
to award a grant, which is what they do all the time,
which is take everything
into account: would you fund this effort with this money?
And there was no coherence there either.
So it's not to say that something's --
well, I happen to think there's a lot wrong with peer review,
but that's a different story.
But I'm not taking pot shots at peer review.
I'm more taking pot shots at the fact that we set up all sorts
of ways to try to elicit ideas, including, you know, innovation
grants, the grand idea stuff through Gates,
I mean, all kinds of things.
And then we use very hackneyed ways of evaluating them.
And then we're distressed by the fact
that we don't get the product
that we wanted to get out of it.
And the answer is, you know,
as Don Berwick [assumed spelling] has said a million
times, you know, "You get out exactly the answer
that you structured the problem to produce."
And that's sort of what we've done.
And we might need to rethink that.
Because everyone's paid a lot of attention
to the innovation funnel.
You know open it up at one end get more different stuff
out at the other end but if the filter's catching everything
that you put in the top you're not going
to get anything different out of the other end.
And so I don't have a solution for that.
This is just a metric that says, I think -- not to --
I don't mean to defend this specific methodology.
I'm just saying all it is is an observation that says
that that filtering process is probably a lot more treacherous
than we think it is.
That's really the only point.
>> Jeff: Well thanks for that and again thanks
for everyone being on the line and for coming and...
>> Dr. Dinan: And staying.
>> Jeff: ...and we really appreciate you coming to give
this talk, because I think there's so much more to mine
from this, and other future questions
to spin off from this again.
Thanks for everybody on the line.
We'll go ahead and, and close down today and,
and close down the line
and again thanks very much for coming.
[applause]
[ Background noise ]