Thank you, and I too would like to add my thanks to Rick and the
organizers for inviting my participation.
What I'd like to do is share some thoughts that were stimulated
by Mark's paper, the work that he describes,
and the work that he himself has conducted over a long and
productive career.
The thoughts are basically all related to the larger point of the
need for the field to conduct a parallel set of research that will
examine the issues of use of these technologies at the same time that
we continue to develop the technologies and the content
themselves.
There are three main points that I'd like to make.
First of all, the need to balance our focus on the technology and
content with an increased focus on use and issues of implementation,
adoption, practice change, quality improvement - all similar terms.
Secondly, the need for us to identify and surface many of the
implicit assumptions that we have regarding the mechanisms of
operation of these technologies so we can better understand how they
do or don't work and especially how they are not achieving the
sorts of goals that we have set out for them.
Then finally, the need to balance interventional work with
observation: to conduct more observational studies that will allow us
to better understand the varieties of decisions and decision processes
that we're trying to support, as well as to better understand
descriptive in addition to normative models of
decision making.
So, let me turn then to my first point,
and that is the need to balance attention on technology and
content versus use.
The first point is a simple one: the benefits of these
kinds of approaches consist, first of all,
of the innovative or superior decision processes that we hope to
achieve through the technologies, but also of their use.
The technologies themselves, the innovations, of course,
benefit us as researchers, trying to build careers and CVs.
They benefit us as journal editors,
trying to put out journals with interesting and innovative
content, and yet the impact on society on health and health
outcomes requires use.
So, that's a focus that has suffered to some extent in the field.
Second of all, the key question is whether we should be addressing
this use problem via implementation or design.
Are these two separate issues?
Do we first focus on design, and then sort of throw the
technologies over the wall to those of us with an interest in
implementation, or do we in fact need to think about implementation
throughout, from the very beginning?
Are there trade offs?
I would argue, and I believe those in the field of implementation
would argue that the implementation problem needs to be
considered and addressed from the beginning.
It needs to be built in to the technologies themselves,
and there may in fact be some trade offs.
We may need to perhaps cut back a bit on what we hope
to achieve in the technologies in order to increase use.
And finally, should we be thinking about issues of barriers to use?
Is the problem the user? Do we simply need better
interventions to overcome these barriers? Or is it
a more fundamental issue: perhaps these are not
barriers at all, in other words users who just don't do what they
should, but instead flaws or incorrect
assumptions in the way that we think about use, about decision
making, and about clinical behavior, such that
incorporating better assumptions and a better understanding
of use would allow us to design better technologies.
So, that brings me to my second major point, and that is the issue
of our assumptions regarding use and implementation, and the need
for us to identify, surface, and test the assumptions that drive
much of the work in this field: assumptions regarding
users and their behaviors.
The first two bullets on this slide are closely related and get
at the issue of the assumptions that we have about decision
making and practices.
Mark talked about the 10 to 15 decisions that are typically faced
by clinicians in daily practice.
I think the question that we need to examine is:
are those decisions all equal?
Are some of them essentially non-decisions?
What do we know about clinical practices and practice behaviors
and the extent to which some of them are based on what have been
referred to as pattern-matching decisions?
Are some of them non-decisions, simply automatic kinds of
behaviors that don't involve a traditional, normatively
guided decision process of seeking alternative solutions,
examining and evaluating those solutions, and picking the
solution that seems to best meet our needs?
What do we know for example about novice versus expert kinds
of decision making and how do those different kinds of decision
processes influence the sorts of technologies that we develop in
the ways in which we expect them to be used?
There's a great deal of knowledge that we already have regarding
different kinds of decisions that doesn't appear to be sufficiently
integrated into the field and into the development of these systems,
but also a great deal that we don't yet know about decision
processes and behavior that I think we need to in order to be
more effective.
What are our assumptions about point-of-decision support versus
general reminders, or about the ways in which these kinds of
technologies are useful?
Mark gave the example of posters and pocket cards and so on.
Are those posters and pocket cards actually used to support
specific identifiable decisions?
Or do they serve more as a general reminder, and are not used at
the point of decision?
These, again, are the kinds of assumptions that we may implicitly
be bringing to the task of designing these systems,
and that we need to examine and test in order to be more effective.
Are some of the conditions that we are designing these systems to
address temporary or permanent?
So, if in fact we are moving in the direction of electronic medical
records and point of care information technology,
then it may be that some of the handheld systems are of somewhat
temporary or limited use, so we need to be looking ahead to what
will be in place in addition to what is in place right now.
And again, there is the issue of barriers and determinants of current
practice patterns, and our assumptions about whether the
work that we are conducting is intended to overcome barriers to
implementation, or whether in fact some of those barriers are not to
be overcome but instead to be recognized and worked with
rather than against.
The third point that I'd like to make is the need to shift the
balance of the research that we conduct somewhat away from
a heavy focus on interventional work, on developing and
evaluating systems, and more towards observational
work. Again, we need to better understand
descriptive models of decision making and understand
how decisions are made, in addition to and in some
cases even instead of how decisions should be made.
Our ability to develop normative models requires a better
understanding of current models of decision making.
Again, we need to understand different types of decisions and
decision processes and their determinants.
This requires a fair amount of observational work.
Much of the work that we've conducted in an interventional
mode I would argue is of very limited value.
For the most part, these studies reflect insufficient follow-up.
We don't know much about the sustainability of many of the
interventions that at least in the short term have proven
to be effective. We don't fully understand whether the
effectiveness is based on a Hawthorne effect or in fact
the systems themselves are effective.
We don't know much about their spread potential.
And many of the studies that we've conducted that are interventional
suffer from the problem of limited external validity given the
emphasis on internal validity.
I think there's a wealth of insights and evidence from
existing experience.
What can we learn about age differences,
about differences between trainee use of these systems
and expert use of these systems?
What does that tell us about decision processes and the kinds
of decision support systems and IT solutions we should
be developing for the experts, not just for the novices?
So, again, a fair amount of insight that is available and
ready for us to try to find and to interpret and understand
its implications.
And finally, what can we learn about the use of the mini-mental
status exam and other kinds of simple solutions?
Does this tell us something about clinicians' ability to use, or
comfort with, these kinds of systems?
What do we know about the importance of issues that
Mark has raised such as validity, transparency,
the need to reduce work and so on by examining the
systems that are currently in use versus systems that
we develop as researchers and attempt to evaluate?
And I think this is a broader theme within the field of
implementation and quality improvement.
Certainly within the Department of Veterans Affairs we suffer
from an excessive interest in experimental evaluations of
developing new innovative strategies for improving quality,
most of which unfortunately in our studies tend not
to be very effective.
At the same time, the VA is constantly changing:
new policies are put in place, quality is improving all the time,
and as researchers we don't spend enough time examining
those processes and understanding the insights that are available
to us through these kinds of observational studies.
So, I think again, shifting the balance from interventional to
observational would take us very far in this field.
Let me conclude with a couple of key points that I know you've
heard before, that I know you'll hear again,
but that bear repeating.
One of which is the need for all of us not to neglect the
implementation issue, the implementation challenge.
Much if not most of the discussion within health reform
has to do with our ability to pay for more care.
We all know that much of our ability to pay for more care
depends on our ability to reduce the current overuse.
That is a solution that economists and our more
conservative colleagues might wish to achieve through
economic incentives, but we know that market failure
exists in healthcare and that there's not much that can
be accomplished through economic incentives. We
need behavioral approaches, and as a research field
we need to better understand the determinants of clinical
practices and the determinants of implementation, so that we can
help shift away from the current overuse that we see,
free up resources that will allow us to cover more of
our fellow citizens. The same issue applies to comparative
effectiveness research: as a research field, we've been
entrusted with 1.1 billion dollars of our fellow citizens'
tax dollars to generate comparative effectiveness research
findings and new guidance.
Without more attention to implementation issues and a better
understanding of the actual use of those findings and what it takes
to achieve higher levels of implementation, that guidance and those
findings will lead to large stacks of reports, but not necessarily
to the kinds of impacts and benefits in the healthcare system and
in actual practice.
And then my second key point, a general one, is again
the fact that implementation is not a barrier to be overcome by
carefully designed, technologically sophisticated
interventions.
As a field, implementation science should not be
pursuing more and better elegant solutions and
testing them through RCTs and other rigorous trials,
but instead developing a better understanding of implementation
as a process and of behavior change as a process. That requires
different kinds of research and generating different
kinds of insights, and those insights are not effect sizes
or, again, more and better solutions, but instead a better
understanding of how change occurs, how practice occurs,
and what kinds of strategies are needed, not only in terms of
technology, but in terms of adjusting context, adjusting
the kinds of systems that we practice in, and moving
towards a better understanding of delivery systems.
I would argue that the IOM committee on comparative
effectiveness research was correct in its emphasis on
implementation, which I believe was the fifth priority,
although I would also argue that they got it wrong in terms of the
kind of research that's needed.
Again, what is needed is not a comparison of different strategies
for achieving better technologies and better interventions, but
instead developing better insights and better guidance for those of us
who are aiming to achieve better implementation.
Thank you.