>> I'd like to begin by telling you what an honor it is
to be here this morning.
I find myself more and more in meetings with fellow bureaucrats
where we talk about the people who actually do the work
and I look out here and I see a lot of people
who are really doing work.
And it's easy to just go about your day
and think that's just sort of normal.
It's not. You represent what I think is a national treasure
of knowledge and experience.
There are so few people.
You're such a low-density, high-demand asset for our nation.
And it's just really a privilege to be here.
I want to talk to you today about some
of the things we think we can do with research, development,
testing, and evaluation.
And so we're going to go through each one of these
and talk a little bit about how we prioritize research needs,
how we might go about cataloging future plans,
what would be involved in a gap analysis and how much would all
of this cost and what kind of facilities might you need?
Identifying and prioritizing the needs, we got a lot of help
in the NAS report on how to do that.
And, in fact, if you pull that up in Adobe Acrobat and search
for the word research,
it appears about 430 times in that report.
There's a couple of recommendations here
that specifically deal with research.
Recommendation 3, establish validity
of forensic science methods.
Recommendation 5 talks about human observer bias
and sources of human error.
And then, Recommendation 12 is very specific,
it was about fingerprint interoperability,
but it gets at a larger issue to me, which is just this idea
of doing things better, faster and cheaper in forensic science.
So we'll talk about that.
And then Recommendation 10, where we overlap with the education working group a little bit:
this research has got to be done with our Centers
of Excellence and academic partners.
And so let's talk a little bit about a few
of these research needs here.
I view the research needs as something
you could put on a timeline, right?
There are past needs, present needs and future needs.
The past needs are things where the NAS report said,
you guys should have already done this.
Before you say no two fingerprints are the same,
you know, can you get me a citation in Nature or Science
that shows that no two fingerprints are the same?
Those kinds of things.
The foundation of our science.
And so there are a host of things where we do comparisons
and make conclusions where our intuition tells us we're
on pretty firm ground, but we probably need to back
that up with some foundation.
In my mind, I'd love to see some studies like this.
So, you know, you give some fingerprint examiners a
fingerprint and two possible ten-print cards
and say, does it match?
Can you tell me if it matches?
I think most qualified fingerprint examiners would have no problem with that,
but then you could say, can you tell me now?
How about now?
Can you tell me now?
And sooner or later, we'll find a limitation.
What should appeal to most of you as scientists out there is
that the amount of fingerprint that we give them,
we can actually measure, right?
We can put that in terms of an area and we can keep track
of that and we can put that on a graph with an x and a y axis,
those kinds of things.
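A study like that lends itself to simulation. The sketch below is purely illustrative: the logistic accuracy model, the areas, and the trial counts are assumptions, not measured data, but it shows how examiner accuracy could be tracked as a measurable function of fingerprint area.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_accuracy(area_mm2):
    """Hypothetical model: examiner accuracy rises with the measurable
    fingerprint area (mm^2), saturating near 1.0 for a full print.
    The logistic parameters here are illustrative, not measured."""
    return 1.0 / (1.0 + np.exp(-(area_mm2 - 40.0) / 10.0))

# Sweep the stimulus size, as in the "can you tell me now?" study.
for area in (10, 20, 40, 80, 160):
    p = match_accuracy(area)
    # Simulate 200 trials per condition to estimate the observed rate.
    observed = rng.binomial(200, p) / 200
    print(f"area={area:4d} mm^2  model={p:.2f}  observed={observed:.2f}")
```

Plotting `area` against the observed rate gives exactly the x-and-y-axis graph the speaker describes, and the area where accuracy falls off is the limitation the study would reveal.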
I think, though, if we're honest with ourselves all
of our disciplines have had a culture that's a little bit
against this.
And although I can't attribute this quotation
to any one person, I think we've all heard it, which says no, no, no,
don't do a study like that,
because I don't want some defense attorney telling me
that I don't have enough of a fingerprint.
I want to have the flexibility to make a call
on that fingerprint if the case is important enough.
And so probably from an ethical point of view we need
to take a step back and say you know what,
we really do need to do these studies.
We really do need to know when we stop being able
to predict fingerprints
when there's no other case information telling us what we
really wish we could.
Peer reviewed research on uncertainty,
accuracy, and reliability.
Those would be on my timeline of things
that we probably need to do now.
And I put that slide up there to remind me
that we don't actually have to invent the wheel on this one,
and if we do, it might look a little bit like that.
The larger scientific community
and medical community have been dealing
with this issue for a long time.
And in fact, if you grab an analytical chemistry textbook,
sophomore level textbook, these things are well described.
We teach them to undergraduates all the time.
The first thing is this notion right here,
that any time we measure something it's going
to probably give us some sort of bell shaped curve.
We don't get a perfect measurement.
And even the measurement we take,
whether it's how long something is with some measuring device,
I'm going to have to make an estimate of,
is that 25.5 or 25.6?
At some point, I'm making a guess.
And so we're going to get bell shaped curves.
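This can be demonstrated in a few lines. The numbers below (a true length of 25.55 and a reading error of 0.05) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 10,000 repeated length measurements of the same object,
# true value 25.55, with small random reading error (std dev 0.05).
true_length = 25.55
readings = true_length + rng.normal(0.0, 0.05, size=10_000)

# The readings cluster into a bell-shaped curve around the true value;
# no single measurement is "perfect".
print(f"mean = {readings.mean():.3f}, std = {readings.std():.3f}")

# Each recorded value also gets rounded to the instrument's resolution:
# is it 25.5 or 25.6?  That last digit is an estimate.
recorded = np.round(readings, 1)
values, counts = np.unique(recorded, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))
```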
The next thing would be
that often we're asked to compare things.
And so those four different samples there all have the same
average value, alright,
but they're different types of samples.
So many of you have dealt with this.
It would be something like a Student's t-test:
how successful are we
at really differentiating those things?
Those kinds of things are well described.
We don't have to invent it.
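As a sketch of that kind of comparison (the means, spreads, and sample sizes here are invented), Welch's version of the t-test from SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical measurements from two sources, offset by 2 units.
source_a = rng.normal(loc=100.0, scale=2.0, size=30)
source_b = rng.normal(loc=102.0, scale=2.0, size=30)

# Welch's t-test: given the scatter, are the two means distinguishable?
t_stat, p_value = stats.ttest_ind(source_a, source_b, equal_var=False)
print(f"tight spread: t = {t_stat:.2f}, p = {p_value:.4f}")

# The same 2-unit offset tends to drown in noise when the spread grows.
noisy_a = rng.normal(loc=100.0, scale=10.0, size=30)
noisy_b = rng.normal(loc=102.0, scale=10.0, size=30)
t2, p2 = stats.ttest_ind(noisy_a, noisy_b, equal_var=False)
print(f"broad spread: t = {t2:.2f}, p = {p2:.4f}")
```

The p-value quantifies "how successful are we at differentiating": the same offset between means can be obvious or invisible depending on the spread of the measurements.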
Now, that all deals
with uncertainty of measurement.
The final thing is we render conclusions and opinions,
very well informed, expert opinions mind you.
I'm not throwing a rock at that.
But we have to make a decision and that decision might be
that if you're above the green line then we're going to say
that you're intoxicated.
If you're below it, we're going to say that you're not.
Or if you're above the green line,
we're going to say you're pregnant.
And if you're below it, we're going to say you're not.
And where we draw that line, there or there,
means we get some more false positives
or false negatives, right.
And so that's pretty well understood.
If you get a medical diagnostics textbook,
you'll see something that looks like that.
That's called the receiver operating characteristic curve.
It was actually developed shortly after Pearl Harbor
in WWII to help us keep track of how we would look
at radar signals from airplanes
and decide whether they were friend or foe, alright?
But it really is about comparing true positive rates
to false positive rates.
If you're unfamiliar with that diagram,
I learned about it probably in the last couple of years,
so I assume everyone's got my same level of ignorance.
That line, the diagonal line, is what a coin would get you,
flipping a coin would put you on that diagonal line.
So as it diverges up and to the left,
that means that you've got a test
that helps you differentiate things.
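Here's a minimal sketch of such a curve, computed by hand with NumPy. The score distributions for mated and non-mated comparisons are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical test scores: "mated" comparisons (same source) score
# higher on average than "non-mated" ones, but the two overlap.
mated = rng.normal(loc=2.0, scale=1.0, size=1000)
non_mated = rng.normal(loc=0.0, scale=1.0, size=1000)

# Sweep the decision threshold (the "green line") and record how the
# true-positive and false-positive rates trade off against each other.
thresholds = np.linspace(-3.0, 5.0, 81)
tpr = [(mated >= t).mean() for t in thresholds]
fpr = [(non_mated >= t).mean() for t in thresholds]

# A coin flip sits on the diagonal (TPR == FPR); a useful test bows
# up and to the left of it.
for t in (0.0, 1.0, 2.0):
    i = int(np.argmin(np.abs(thresholds - t)))
    print(f"threshold={t:.1f}  TPR={tpr[i]:.2f}  FPR={fpr[i]:.2f}")
```

Moving the threshold is exactly the "where we draw that line" decision: a lower line catches more true positives at the price of more false positives, and plotting the whole sweep gives the ROC curve.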
Alright, this one's fascinating to me,
this idea of extra domain information.
This is something that plagues all branches of science,
but I think if we're honest with ourselves,
we're particularly susceptible in forensic science.
And it goes like this; the swashbuckling,
handsome detective could be Ken Melson,
comes in to the horn rimmed sort of geeky scientist,
and says I need you to compare some evidence.
I need you to do something for me.
But before you do it, I've got
to tell you a few things about this case.
This guy, we think he's killed two people
and we got this fingerprint right where it should be.
It's on the gun and we want to compare it to him.
The other thing I want you to know is we had him last year,
dead to rights, but he got some fancy attorney
and he got off on a technicality.
So I'm going to give you the evidence here
and if you can just tell me what's underneath it.
I don't want to influence you at all. Right? I mean,
we have all seen that play out in some shape or form,
whether it's in our own disciplines,
the medical discipline.
Doc, before you look at this CT scan and tell me
when this injury occurred, I want to tell you a little bit
about the case. It happens.
The problem is we're just not real good
at that as human beings.
We start seeing what we want to see.
So the drive of science is, how do we just get
that out of the equation, so really we just get handed the evidence
and someone says, start there.
That's all you need to know.
Maybe I'll give you something to compare it to.
The other one is this notion of sources
of human error just beyond bias, right.
I come from the DOD setting and one of the laboratories
that may help us with this; they got real excited when I began
to have this dialogue with them.
It was the Air Force Research Human Factors Lab, alright,
because they'll look at an F16 and say, that's an F16,
but when you put a pilot in there,
she becomes part of the F16.
It's not just a person anymore;
it's the limitations of that person.
And so, you know, you can Google these and find these.
The one that really hurts my head is
that line is just as long as that line.
There are some limitations we have.
And there's an interesting dynamic that goes on here.
If I were to hold up a digital camera and say let's talk
about the limitations of the CCD inside there.
We'd all go, oh, okay, no problem.
If on the other hand I point to your eyeball and say let's talk
about the limitations, we get personally offended.
It's my eye.
It's my eye we're talking about.
It doesn't have any limitations.
It's a great looking eye, right.
We get offended when it's about our own limitations.
So science helps to sort some of those things out.
So Recommendation 12 I talked
about was fingerprint interoperability.
In my mind that translates to a much larger research
and development need and that's better, faster, cheaper.
My first experience as a lab director, my deputy came
in who kept me from getting fired for the better part
of two years and said, boss, as you learn more about this job,
you're going to say that you want to do it better and faster
and cheaper and I'm just going to teach you this
on your first day, you can only have two of those.
Pick any two you want, but you cannot have all three, right.
I mean, we can get it better and faster, but it's going
to cost, right? We can certainly do faster and cheaper,
but you're not going to like it, right?
But research says in the future, yes we can,
we can give you better, faster and cheaper.
It says how do we take something like that and turn it
into something like that so that people who look
like that can transition into people who look like that?
And we give them that capability.
And we've all seen this, right?
If we grew up watching Star Trek, we're talking
about the tricorder, right, that Spock wore, you walk in
and there's four human life forms, this one matches
that one, this one has that much alcohol in his system.
So that's where research really would be taking us.
How do we do it better and faster and cheaper?
Alright, so one of the challenges that we've got
to face is how do we find
out what's going on out there already?
In my limited world, just within the Department of Defense,
that's what I've been sort of tasked with.
And you can do that from a top-down point of view,
send out formal requests and you'll get some good responses,
but you get things that have been sort
of formally approved programs of record.
The real challenge is that initial concept ideas come from folks
like you who are working in the field every day,
comparing things and looking at things
and realizing the limitations of things, and that seems to happen
from the bottom up, over a cup of coffee right out here, alright.
And so how do we do that?
Gap analysis is pretty straightforward, right.
We're going to take those priorities and then figure
out what's going on and then look for overlaps.
Are there things that several agencies are doing
that we can maybe change the focus
so they harmonize better or can we even transition and say look,
seven of us are doing this very similar DNA project,
but no one is doing this work on this paint comparison.
Maybe we need to shift some of our efforts around.
And then we would hopefully discover the really true gaps.
What are the things that need research
that nobody's looking at at all?
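In code form, that bookkeeping is trivial. A toy sketch, with made-up agency names and project labels:

```python
# Gap analysis as set bookkeeping: compare the prioritized research
# needs against what agencies already report working on.
priorities = {"dna mixtures", "latent print sufficiency", "paint comparison"}

portfolio = {
    "Agency A": {"dna mixtures", "latent print sufficiency"},
    "Agency B": {"dna mixtures"},
    "Agency C": {"dna mixtures"},
}

# Overlaps: needs that several agencies are duplicating.
counts = {}
for projects in portfolio.values():
    for p in projects:
        counts[p] = counts.get(p, 0) + 1
overlaps = {p for p, n in counts.items() if n > 1}

# True gaps: prioritized needs nobody is working on at all.
covered = set().union(*portfolio.values())
gaps = priorities - covered

print("duplicated effort:", sorted(overlaps))
print("true gaps:", sorted(gaps))
```

Here the duplicated "dna mixtures" effort is the candidate for shifting resources toward the uncovered "paint comparison" work, which is exactly the reallocation the speaker describes.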
Identifying the cost associated with RDT&E;
I was just having this talk
with Robin before we got started this morning.
This is tricky, right, because if we knew the answer,
it wouldn't be research.
So if someone says how much is this going to cost?
I have no idea.
Well, could you take a million dollars? Sure.
Could you take a billion dollars?
Absolutely.
You know, if we gave you 200 million dollars in this project,
what are we going to get?
I have no idea.
I hope something good.
I hope. We don't know.
That's the nature of research.
So we design good experiments and hope that they can begin
to investigate things and they'll inform what we're doing.
The biggest cost though, I think,
is this opportunity cost on the top.
So how many people are involved
in forensic examinations in the room?
Sort of a show of hands?
Yes, lots of you.
And how many of you have some sort of back log
if you had your hand up?
Okay. So if you had both hands up just now,
I could say you're probably in the group
that doesn't have free time.
But you're the people who are the smartest.
You're the people who know the most about this,
who know what those research needs are.
So finding ways to begin research
without grinding casework to a halt is going to be a challenge.
There are lots of places out there,
and so this will be part of it.
How do we partner with these places?
How do we identify them?
How do we offer incentives for them to work with us?
Alright so just summing up,
those are the few things we've talked about.
That bumper sticker
on the bottom is really the most important note
because this has got to be such a team effort.
There's more rice than we can eat,
more rice than we can each keep in our own bowl.
And so I look forward to talking with all of you
in the way ahead on the future here.
That just gives you a little indication of the type
of people we already have on the research and development team
and the breadth of experience.
And thank you very much.
[ Applause ]