Coordinator: Welcome
and thank you for standing by.
At this time,
all participants are
in a listen only mode.
After the presentation,
we will conduct a question
and answer session.
If you would
like to ask a question,
you may press star 1.
Today's conference is
being recorded.
If you have any objections,
you may disconnect at this time.
Your host for today's call is
Margaret Farrell.
Thank you.
You may begin.
Margaret Farrell:
Thank you so very much
and on behalf
of the National Cancer
Institute, I wish
to welcome everyone
to the March advanced topics
in implementation science Webinar.
Today we feature the final
in our three-part series
following up from the sixth
annual NIH meeting
on advancing the science
of dissemination
and implementation research.
Today's presentation is a
debrief on the study
designs workshop.
We're delighted
by the large number
of registrants for the session
and the robust response we've
gotten to the series.
As you'll soon see,
we have an all-star cast
of presenters this afternoon.
So just a very brief word
about logistics
and we'll be off.
As the operator said,
questions are encouraged
and we really do look forward
to hearing from you.
And there are two ways you can
ask a question.
You can press star 1
to place your -
to ask a question
on the phone live
and you'll be placed
in the queue to do that.
And you can also type your
question in the Q&A tab
at the top of your screen.
You just type and hit Ask
to submit the question.
You could submit your question
anytime but we'll be opening the
session for general questions
when all the speakers
are finished.
And without any further ado,
it's my great pleasure
to turn the meeting over now
to Dr. Lori Ducharme
of the National Institute
on Drug Abuse - sorry -
who co-chairs the meeting.
(Lori), it's all yours.
Dr. Lori Ducharme: Okay,
thank you, Margaret.
So good afternoon everyone
and thank you
for joining us today.
Over the next half hour
or so we're going to share
with you some
of the discussions we had
in our recent working group
meeting on study designs
in implementation research.
First things first.
I'm Lori Ducharme.
I coordinate the implementation
science research portfolio
for the National Institute
on Drug Abuse.
And I also sit
on an NIH-wide coordinating
committee of program staff
from about 18 different NIH
institutes,
all of whom are interested
in promoting implementation
research at NIH.
Let me tell you briefly
about the workgroup meeting
that we will be reporting
on today.
So as most of you know,
NIH did not hold the large
national dissemination
and implementation conference
in 2013 for a variety
of mostly budgetary
and logistical reasons.
But that gave us an opportunity
to step back
and reassess the gap areas
that were apparent to us
after doing several
of these conferences
and to give them some more
concentrated effort
and discussion.
So over the last six months
or so, we've held three
different working group
meetings, each
with a different panel
of implementation researchers
from across the country
and we focused on three topics
that we felt were necessary
to address as this field reaches
a certain state of maturity.
The first meeting held last
September, was to take stock
of existing and needed resources
to train the next generation
of implementation researchers.
The second meeting we held
in October focused
on the potential
to improve measurements
and to standardize outcome
reporting in
implementation research.
Both of those meetings were the
subject of previous Webinars
in this series, and the archives
of those are available online.
And then the third meeting
that we'll talk
about today was held in January
and it was on study designs
that are appropriate
for implementation science,
thinking beyond the randomized
controlled trial.
So to be clear,
today's Webinar is a reporting
out from that workgroup meeting
and it's not intended
to be an in-depth seminar
on any particular study design
but I hope it gives you some
insight into our thinking.
I'll show you quickly the study
designs meeting roster.
You will hear from several,
but not all,
of these folks today.
Believe me, everyone wanted
to be on the call with us
but due to limits of time,
you will hear
from only a few of us.
And on behalf of NIH,
I want to acknowledge their
commitment and their
contribution
to this ongoing discussion.
So what did we cover
in this meeting?
I think first off,
we quickly realized we could not
have a discussion
about study designs
without first getting clear
on terminology,
so we spent a fair amount
of time on definitions.
We discussed the boundaries
between quality improvement
and implementation research
as the relative emphasis
on local adaptation
and developing generalizable
knowledge directly impacts your
research questions
and your design options.
And we had a lengthy
and engaging discussion
about principles from the field
of engineering
that could influence our
thinking about
implementation research.
One definitional issue
that I will bring
to your attention at this point,
there is a need, of course,
to differentiate
between the clinical
intervention
or prevention program
or evidence-based practice,
the thing you wish to implement,
and the implementation
intervention
or the strategy you will use
to accomplish the uptake
of the clinical practice,
which is the focus
of your implementation
research study.
So we will try to be careful
to use the terms clinical
or prevention intervention
to refer to the practice you
wish to implement
and the term implementation
strategy to refer to the process
of doing the implementation.
We collected
and we discussed a variety
of study designs
with particular attention
to the specific research
challenges posed
by implementation science
and the need
to match your designs
to your research questions,
objectives and constraints.
And while what you see
on the screen is not a complete
list, it does span the broad
range of potential designs.
I think the single take-home
message for today will be
that there is no one best study
design, not even the randomized
controlled trial,
that is appropriate
for all circumstances
or that will ensure your NIH
grant application gets funded.
Rather, it's incumbent on you
to assess your goals
and circumstances
and to intentionally
and carefully select the design
that best balances all
of your needs and constraints.
So the plan for the rest
of this Webinar, first of all,
Brian Mittman will provide a
general overview of some
of these research questions
and constraints
that I've just alluded to.
And then we'll have four
speakers each provide a brief
look at four different study
design options.
I think it's important to point
out that each
of the speakers has first-hand
experience using the design they
will describe
and they've also successfully
obtained NIH or VA funding
to execute that design.
But it's also important
to remember
that these are only four
of a much longer list
of potential study designs
and you should not leave this
Webinar thinking
that these are the only four
or the best four.
Today is really just a sampler
for you and a full menu is
out there for you
to choose from.
Also, I will note
that at the very end
of the slide set,
which will be available for you
after the Webinar,
is a slide that lists a number
of resources including
at least one publication
that addresses each
of the designs that we will talk
about today.
Okay, and so with that,
I will turn the mic
over to Dr. Mittman.
Dr. Brian Mittman: Great.
Thank you, Lori, and I'd like
to add my thanks to Lori
and (NIDA) for organizing
and hosting the conference
earlier this year
and to Margaret and others
at NCI for hosting the Webinar.
So as Lori indicated,
my role in our tag team
presentation is
to provide some brief background
and to highlight some
of the motivating issues for our
meeting on designs
and for the work that we hope
to continue following this
Webinar, and the basic argument
is that the implementation
phenomena that we study
are different.
They're quite different
from the kinds
of clinical phenomena
for which many study designs
employed in the health field
are used.
Many were developed, applied
and have been optimized
for clinical phenomena; the
phenomena we study, again,
implementation phenomena,
are very different.
Many of the designs, similar
to many statistical approaches
to analysis, assume a number
of features in these phenomena,
and when those assumptions are
not met, the designs are not
fully appropriate.
So what I'd
like to do is go through,
very briefly,
a set of examples or
illustrations of some
of the key differences,
and the subsequent speakers will
talk about and present
designs that are based
on efforts to address
these differences.
Perhaps the key theme
in the implementation science
field is that of heterogeneity,
the kinds of quality problems
or gaps, the implementation
problems and gaps that we study,
that we attempt to close, tend
to be very different
across clinical domains
and across practice settings,
and as a consequence,
the designs need to account
for this heterogeneity.
There's also considerable
heterogeneity among the targets
of the
implementation strategies.
Clinicians,
they come from a variety
of backgrounds and disciplines;
teams are very different in terms
of their composition;
and the organizations
or their practice settings
that we often target
with implementation strategies
also tend to be very different.
Similarly, there's considerable
heterogeneity among the settings
in which these clinicians or
teams operate - microsystems,
organizational units,
entire institutions
and systems - and, as I'll note
later,
the contextual influences,
the influence of the settings
on implementation outcomes, are
very important and, again,
the heterogeneity
across the settings adds
yet another layer of complexity
and poses additional challenges
for us in designing
implementation studies.
And finally,
the implementation interventions
or the implementation
strategies, to use the term
that Lori indicated in order
to help us keep clear what it is
we're talking about,
the strategies themselves tend
to be very different.
If we're talking
about a strategy
that is using opinion leaders,
for example,
those individuals tend
to be very different
and that's true
of many other implementation
strategies that we would
employed as well.
Now, similar to the heterogeneity
across the implementation
strategies,
the strategies are also highly
adaptable, highly variable
and often unstable.
Unlike the simplest examples
we deal with,
medications
that are fixed and come
in a consistent formulation
from the factory,
the kinds of implementation
strategies that we employ tend
to vary across time
and they tend
to be adapted despite,
in many cases, our best efforts
to maintain fidelity.
And that adaptability,
that variability,
poses challenges as well.
Similarly, the settings
that we study,
the targets of our interventions,
tend to vary considerably.
When we deal with organizations,
for example,
we see phenomena such as the
organizational learning curve,
where the organization will
often adapt
to the implementation strategy,
and that strategy itself changes
as well.
There are practice effects.
We also see high levels
of staff turnover
and other forms
of organizational change.
And even the environments
within which we work
to implement practices and in
which the settings operate,
change as well.
There are changes
in regulations,
changes in technology,
changes in fiscal environments
and again, these forms
of heterogeneity tend
to pose challenges
for our study designs.
One final set
of challenges listed
on the slide,
we deal with oftentimes
multilevel implementation
strategies which means we have
nesting, patients
within clinicians,
clinicians within teams
and departments,
departments within organizations
and organizations within systems.
Because we often attempt
to measure influences
on implementation outcomes
at all those levels,
we have to deal
with the multilevel nature
of the phenomena
and with the clustering
that we see, which then
poses challenges
for our study designs.
When we do study implementation
strategies that are targeted
at the level
of the organization,
we're faced with the problem
of small sample sizes
and limitations in power.
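To make the clustering point concrete, here is a minimal sketch in Python of the standard design-effect calculation for a cluster-randomized implementation study; the cluster size, intracluster correlation and sample size below are hypothetical illustrations, not figures from the presentation.

    def design_effect(cluster_size, icc):
        # DEFF = 1 + (m - 1) * ICC, with m the average cluster size
        return 1 + (cluster_size - 1) * icc

    n_individual = 200   # n needed if individuals could be randomized (hypothetical)
    m = 25               # average patients per clinic (hypothetical)
    icc = 0.05           # intracluster correlation coefficient (hypothetical)

    deff = design_effect(m, icc)          # 2.20
    n_clustered = n_individual * deff     # 440
    print(f"design effect: {deff:.2f}")
    print(f"total n under clustering: {n_clustered:.0f}")
    print(f"clinics needed: {n_clustered / m:.0f}")   # about 18

Even a modest intracluster correlation roughly doubles the required sample here, which is why organization-level trials so quickly run into the power limits the speaker describes.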
Moving to the next category,
causal complexity,
the kinds of implementation
strategies that we study should
not be studied
in a black box mode where we are
interested in one single outcome
and intervention
versus control differences.
We often need
to study the individual steps
in the causal chain, to focus
on proximal impacts and outcomes
as well as distal impacts.
We also see very different
levels of strengths
of the contextual factors
versus the main effect
of the intervention.
And in many cases,
if not most cases,
in studying implementation
phenomena, the main effect
of the implementation strategy
tends to be very weak
and we see a greater influence
of organizational factors,
such as leadership, culture,
staffing, budget and so on.
And these, again,
pose challenges for us
in study design.
And finally,
many of our implementation
studies attempt
to pursue multiple aims.
We are, of course,
interested in studying rates
of adoption of a best practice.
We're interested
in practice patterns
and adherence levels
and fidelity.
But we're also ultimately
interested in clinical outcomes
and we often try to measure both
of those and cover multiple aims
within a single study.
And in many cases,
we are pursuing improvement aims
at the same time
that we are pursuing the aim
of generating
generalizable knowledge.
There are trade-offs
between these two as well
as trade-offs
between internal validity
as well as external validity.
So this is just a short list
of some of the key challenges
but, again,
these are the reasons
for our focus on design
and the need to develop
and operationalize designs
for implementation studies
in a way that will not always
resemble the way that we employ
and operationalize designs
for other forms
of clinical studies.
So with that background,
let me turn over the mic
to (Hendrix)
for the first example of a set
of designs.
(Hendrix): Thank you, Brian.
My task is going to be
to briefly identify and talk
about one type of trial,
which we could call
a head-to-head
implementation strategy trial.
That's much similar -
it's similar
to a comparative
effectiveness study.
Only instead of looking
at an active intervention -
or two active interventions
and comparing them one
to each other, we're looking
at two implementation strategies
that differ from one another
and applying them
to a single evidence-based
program in the example
that I'm talking about.
So the next slide, on Page 14,
I just wanted
to reemphasize some
of the things
that Brian was saying here.
Implementation is necessarily
multilevel, systems oriented
and dynamic.
By multilevel, we're talking
about characteristics
that range from
within organizations,
such as the leadership
in an organization,
all the way
down to what people are calling
intervention agents
in different kinds of settings
and clinical trials.
There might be a clinician
or therapist.
It could also be a teacher
or it could be a person
who is working in a community.
And systems oriented
and systems have to do
with interactions.
So we're not only just talking
about characteristics of, say,
service providers as well
as community partners,
but we're talking about the kind
of community partnership
and service provider
collaborations that Ken is going
to be talking about later
on which we think is the
fundamental piece
about the success or failure
of implementation.
It's also dynamic.
That is, it's not a single time
point at which you get
implementation
but it's a process.
We've been recommending ways
of measuring the implementation
process, such as the stages
of implementation completion
which is listed down here
as a reference for you
to take a look at.
The implications
for design include the fact
that when we really take a look
at this, we're talking
about large interacting systems.
Often these are -
community or group level
assignments are needed to look
at some of the designs.
Not all the implementation
designs but certainly some
of those are that way.
And we're also looking
at how well the program is
implemented
across multiple trials.
The data is going
to be collected
across these different levels.
On the next page here,
Page 15,
we just have a map
of different stages of research.
The typical research
that many people have been
involved in for the last 20-some
years or so in terms
of interventions has been
efficacy studies
that really ask,
does the program work
under ideal conditions?
Effectiveness is,
does a program work
with a good deal of help
and support in here?
And the implementation side is
in the box in here;
that's making a program work.
It has multiple stages in here,
all the way ranging
from exploration
to sustainability.
But it also looks at comparisons
of local knowledge.
That is, what we need to do
that is unique
to this particular setting
to make an implementation
strategy work,
versus generalizable knowledge,
which is the science of how
to do that.
So that's also -
on the X axis is a traditional
translational pipeline.
We ordinarily think about going
up in stages through these
but as you'll see,
and (Jeff) will talk a little
bit about this - both (Jeff)
and (Linda) will talk
about different changes
from these traditional
translational
pipelines that are typical
in here.
And on the next page, Page 16,
this is just a sort
of cartoon illustration
of what we mean
by this head-to-head trial.
If you look at the comparison
in here for implementation,
we're going
to be focusing primarily
on the program delivery system
rather than the clinical
or preventive intervention.
So what we're really looking
at in the grant proposals to NIH
and to others is the comparisons
of these multilevel program
delivery systems -
which are in different colors
here - and in the background
we're looking at,
in this instance,
the same intervention here.
And that's one of the designs
for a head-to-head trial
which we'll talk
about on the next page.
The head-to-head trial in here
that we'll give as an example is
a randomized implementation
trial, randomizing counties as it
turns out.
The intervention
that we are testing
in here - we're looking
at different ways
of implementing it -
is multidimensional treatment
(foster) care.
This is developed and designed
by (Patty Chamberlain)
and this head-to-head trial was
part of an NIH grant
that was funded by NIMH
that (Patty Chamberlain) also
was the PI on.
We're looking
at two alternative strategies
of implementing the
same program.
One of them is the standard
setting that had been out there
for many years,
manualized as well,
compared to a community
development team.
It's a team-based approach
and I will show you how
that looks in terms
of evaluating this
over 51 counties
that got randomized
to implementation strategy
and we're evaluating
that in terms
of whether the implementation
occurred faster,
occurred more often, that is,
did it include more families
that were served and did it -
was the intervention implemented
more effectively?
The last slide
in here gives you a
pictorial illustration
of this one, starting
with counties.
There were 40 counties
in California and 11 in Ohio.
These got randomized
to two different components
in here.
The first one, of course,
corresponds to which implementation
strategy, but we also randomized
in terms of time or cohort.
That is, the year
at which the
intervention occurred.
We call this a randomized roll
out design because it's evolving
over time and people start off
in these counties,
the counties in here.
The first year,
the first cohort,
there were two
active components.
One of them -
one group of counties received
the CDT, the Community
Development Team.
Another one
got the standard setting in here.
And the third group was
wait listed.
In the second year,
the cohort two,
these 26 wait listed counties
were again redistributed
into one group that got CDT,
another one
that got the standard,
another 13 were wait listed.
And finally, in the third cohort,
all of the remaining ones got
either CDT
or the standard setting,
and we also added 11 counties
in Ohio in this third cohort,
and so we have
those in two groups as well
that got CDT
or the standard setting.
So that gives you just a quick
illustration of these.
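As a rough sketch of the kind of assignment structure just described, the Python below randomizes a set of counties both to an implementation strategy (CDT versus standard) and to a rollout cohort, the year they start; the county labels, cohort sizes and seed are hypothetical and do not reproduce the actual study's allocation.

    import random

    random.seed(0)
    counties = [f"County_{i:02d}" for i in range(1, 41)]   # 40 hypothetical counties
    random.shuffle(counties)          # shuffled order makes the split below random

    assignments = {}
    cohort_sizes = [14, 13, 13]       # counties starting in years 1, 2 and 3 (hypothetical)
    start = 0
    for year, size in enumerate(cohort_sizes, start=1):
        cohort = counties[start:start + size]
        start += size
        half = len(cohort) // 2
        # within each cohort, roughly half get CDT and half the standard setting
        for j, county in enumerate(cohort):
            strategy = "CDT" if j < half else "Standard"
            assignments[county] = {"start_year": year, "strategy": strategy}

    for county in sorted(assignments)[:5]:
        print(county, assignments[county])

The point of the sketch is only that a randomized rollout randomizes on two dimensions at once, the strategy and the time of entry, rather than strategy alone.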
For the next section,
we wanted to turn it
over to Jeff Curran who's going
to talk to you
about hybrid designs.
Jeff Curran:
Thank you, (Hendrix).
So with these hybrid trial
designs that we first started
talking about in some VA
trainings around maybe 2008
or '9, and then finally put to paper
in 2012,
our essential argument is
that the speed
of moving clinical
or prevention research findings
into routine adoption can be
improved by considering hybrid
designs to combine elements
of effectiveness
and implementation research.
One does not need to wait
for perfect effectiveness data
before moving to implementation
research, and we can backfill,
if you will,
needed effectiveness data
which might be missing
for a variety
of reasons while we test
implementation strategies.
And as Lori indicated,
at the end of the slide set you
will see the full citation
of a paper with my colleagues
from 2012 in Medical Care
which provides a lot more
details on these hybrid trial
designs and also has many
examples from the field.
In our next slide here,
we just wanted to quickly return
to (Hendrix)'s pipeline here
to visually depict
by the red orange oval
where these hybrid designs sort
of span this range.
And in this slide here,
we lay out the three hybrid
types that we have already
put forward.
In the first hybrid type,
in the type one, we are looking
to test the clinical
or prevention intervention while
gathering information
on implementation.
So here, most of our efforts are
on the clinical/prevention
effectiveness trial
with an added process evaluation
of implementation issues during
the trial.
So here we are trying to learn
of the barriers and needs
for future implementation work.
And so these types are indicated
when there are likely some
effectiveness data available.
Likely not in the context
of sort of your interest,
but less is known about barriers
to implementation.
And so there is a need
to gather these barriers
and needs data to use
to develop future implementation
strategies which you would
test later.
In a hybrid type two design,
here we are really testing both
a clinical/prevention
intervention
and implementation strategy.
It is really a dual focus study
where you often have a
randomized clinical/prevention
trial nested within an
also randomized
implementation trial,
whether that's either at the -
at a provider level,
a clinic level,
a community level,
or we have also seen
and done hybrid type two designs
where the implementation part
of the study is more
of a pilot nature
where we are not randomizing
to the implementation strategy.
We are doing more
of a pilot feasibility look at -
feasibility, acceptability
and promise.
Indications for these trials,
like the hybrid one designs,
there is likely some
clinical/prevention intervention
data that are known
and that are positive,
though perhaps not
for the context of your trial
but in a hybrid type two case,
some data on barriers and
facilitators
to implementation are available,
and these are data
around which you can develop
your implementation strategy
or strategies to be tested.
In the third hybrid design,
here we are clearly testing
an implementation strategy or
strategies while trying to gather
information
on clinical/prevention outcomes.
So here it is clearly having
most of its emphasis on being
an implementation trial
but there is some evaluation
of health, clinical
or prevention outcomes.
In these types of designs,
the health outcome data are not
normally collected
with primary data collection.
These designs are often
facilitated well
when there is secondary sort of,
you know, healthcare data
available to look at outcomes
but not actually having to be
in the field collecting
them primarily.
The indications
for these designs are
that there are likely fairly
strong robust
clinical/intervention data
available but those effects are
thought to be highly vulnerable
to implementation variations.
And also in a situation
where there was a high-level
need for clinical action despite
limited evidence,
say, in a certain
healthcare system.
In the VA, this happens
frequently that there's a policy
action, a mandate
around a certain practice
or program that, you know,
is moving forward
without good effectiveness data.
And so there are opportunities
here to do hybrid type three
research to test implementation
strategies alongside one
of the sort
of policy mandate rollouts while
trying to backfill effectiveness
data sort of as needed.
So I will stop there
and we will move next to hearing
from Dr. (Linda Collins).
Dr. (Linda Collins):
Thank you, (Jeff).
I'm going to talk briefly
about a somewhat different
perspective on all this.
I'm going to try
and get you thinking
about the possibility
of taking an engineering
perspective on study design.
So by this I mean working
systematically toward
development of an intervention
that meets specific criteria
that are determined in advance.
So there are a couple
of different ways to go
about this and they're not
mutually exclusive.
So one possibility is
to experimentally manipulate
factors that are hypothesized
to impact
implementation quality.
Now, of course,
it's not always possible
to do this but I think it's
possible more often
than we typically think it is.
And then a second possibility is
to start at the end.
So let me review what I mean
by each of these.
So I'd like to give you,
just very briefly,
an example of experimentally
manipulating factors
that are hypothesized
to impact implementation quality.
And this is the study that's
in the field that's funded
by NIDA.
The PI is my colleague here
at (Penn State),
(Linda Caldwell).
And the idea is -
so (Linda Caldwell)
and (Ed Smith) developed a
school-based ***
and drug abuse prevention
program for implementation
in South Africa.
And the program itself has
already been evaluated
and everyone is satisfied
with it.
But the idea is
that now it's going
to go to scale.
And so we have an opportunity
to do - to conduct an experiment
in a school district that has
roughly 56 schools in it,
to determine what influences
fidelity of implementation
of this drug abuse
prevention program.
So we're examining three
different factors
that we thought might
influence fidelity.
One is the level
of teacher training.
And in the experiment,
that could be either a standard
level of teacher training
or an enhanced type
of teacher training.
The second is whether structured
support and supervision,
for example,
in terms of a number
that teachers can call
if they have questions
about the program,
is provided or not.
So that could be on or off.
And then the third factor is
whether measures are taken
to enhance the school climate
to make it friendlier
to this program
which is called Health-Wise.
So that, again,
can be on or off.
So this is a 2 by 2
by 2 factorial experiment
in school is the unit
of assignment
in this experiment.
And, as I said,
that's in the field now.
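As a minimal sketch of what a 2 by 2 by 2 factorial assignment could look like in code, the Python below enumerates the eight conditions formed by the three factors just described and spreads a hypothetical set of 56 schools evenly across them; the school identifiers, seed and balanced allocation scheme are illustrative assumptions only.

    import itertools
    import random

    random.seed(1)
    factors = {
        "teacher_training":   ["standard", "enhanced"],
        "structured_support": ["off", "on"],
        "school_climate":     ["off", "on"],
    }
    cells = list(itertools.product(*factors.values()))    # the 8 cells of the 2x2x2 design

    schools = [f"School_{i:02d}" for i in range(1, 57)]   # 56 hypothetical schools
    random.shuffle(schools)

    # cycle shuffled schools through the cells so each condition gets 7 schools
    assignment = {school: dict(zip(factors, cells[i % len(cells)]))
                  for i, school in enumerate(schools)}

    for school in schools[:4]:
        print(school, assignment[school])

Because the schools are shuffled before being cycled through the cells, each of the eight factor combinations ends up with a randomly composed, equally sized group.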
Now, the second approach I
talked about was starting
at the end,
and let me explain what I mean
by that.
Ordinarily,
when we develop some kind
of a behavioral intervention,
the - we have kind
of a standard way
of doing things out in the field
which is we start
at the beginning.
We develop a complex program.
We might pilot test certain
parts of it.
Then we evaluate it in an RCT,
and then once it's been found
to be efficacious,
we then start looking
at effectiveness
and what factors might influence
effectiveness
and how well it can be
implemented with fidelity.
So - and we only do the latter
part if the program has a
sufficiently large effect.
So what if, instead,
we start at the end.
And by that I mean we ask,
what are the characteristics
of a program
that can realistically be
implemented with fidelity?
So this is when you're first
setting out to develop
a program.
Look to the end and say,
I know that this program is
going to be -
at least this is my objective -
it's to be implemented in, say,
a community setting.
So what's the upper limit
on what it can cost?
How many hours can
it realistically take?
What are the demands?
What's the upper limit
of the demands on staff?
What's the upper limit
of the participant burden,
and so on?
Start with that idea and then
when you're developing the program,
engineer the most effective
program you can
that does not exceed
these constraints.
So you can think of these things
like an upper limit on cost,
an upper limit on the number
of hours, as a constraint
in an engineering sense
and you can work
to engineer an intervention
that's the most effective you
can get that won't exceed
those constraints.
And I've been working
for a number of years
on an approach called the
multi-phase optimization
strategy and this is one thing
that you could do
with that approach.
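To illustrate the "start at the end" idea in a toy form, here is a small Python sketch that selects the most effective combination of intervention components whose total cost and contact hours stay within preset upper limits; the component names, effect estimates, costs, limits and the additivity assumption are all hypothetical, not taken from the Health-Wise study or the multiphase optimization strategy literature.

    from itertools import combinations

    # name: (estimated effect, cost per participant, contact hours) - all hypothetical
    components = {
        "core_curriculum":  (0.30, 400, 6),
        "booster_sessions": (0.10, 250, 3),
        "parent_outreach":  (0.08, 300, 4),
        "text_reminders":   (0.05,  50, 1),
    }
    MAX_COST = 700    # hypothetical upper limit on cost
    MAX_HOURS = 10    # hypothetical upper limit on contact hours

    best = None
    for r in range(1, len(components) + 1):
        for combo in combinations(components, r):
            effect = sum(components[c][0] for c in combo)   # assumes additive effects
            cost   = sum(components[c][1] for c in combo)
            hours  = sum(components[c][2] for c in combo)
            if cost <= MAX_COST and hours <= MAX_HOURS:
                if best is None or effect > best[0]:
                    best = (effect, combo, cost, hours)

    print("most effective package within constraints:", best)

The constraints are fixed first, and the program is then engineered up to, but not past, those limits, which is the reverse of building the richest possible program and only later asking whether anyone can afford to deliver it.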
Okay, let me hand things
over to Ken now.
Ken Wells: Yes, hi.
Ken Wells from UCLA
and I'm going to be talking
about community engagement
as a component of design
and other aspects
of implementing implementation
science studies.
The key things
in the community engagement
approach to involving
stakeholders,
which is a fundamental activity
for all of us doing
implementation
and dissemination sciences,
are the underlying principles.
And I've given an example
of how my community partner
and I, (Loretta Jones),
from Healthy African-American
Families frame these principles.
Transparency,
or making sure we're very clear
and straightforward
about the rules of the game.
Respect, showing respect
for differences,
similarities and perspectives.
Power-sharing,
having meaningful sharing
of responsibilities and roles
and budgets
and transparency over that.
Co-leadership,
all of our committees
and activities have academic
and community
or other stakeholder leaders,
patients, providers
and policymakers.
And then the key process is
knowledge exchange,
so it's not just
that I'm telling you
about the science design
for the study
but you're explaining
to me what (makes it) the
community for the clinic
that you're running.
What are the issues?
And then there's the
two-way exchange.
The applications
in implementation science
include intervention design
through this process,
so it may be an implementation
intervention that's
being designed.
How do we roll this out
in the community
in a meaningful way?
The research design and methods,
what form of randomization,
what are going
to be the key outcomes
that are relevant and useful?
Study implementation,
actually recruiting practices
and clients and implementing an
intervention together.
And then dissemination in terms
of products, analyses,
presentations and next steps
for studies.
The simple structure
that we follow is
to have a council
of diverse stakeholders
for a particular issue
and to use that council
to figure out,
have we really been inclusive
enough of the stakeholders
on a given problem.
And the ultimate client
for the initiative is the
community, and one doesn't go
too far with planning and design,
for example,
without having forums
or workshops to say,
have we got it right?
Is this the right idea?
And then, where we would
normally have committees
for any study, perhaps
on design, measures,
implementation, publication
and so forth,
those would be partnered
workgroups under this model
with stakeholder partners
co-leading methods groups
as well as more applied aspects
of the study.
And I've given a few citations
to how we've done this in some
of our work.
There are different stages
and different demands
on the partnership as well
as the design issues
in different stages.
So this slide shows a diagram
for a study
that we recently completed
that's called Community Partners
in Care.
And the design phase,
what we think of as design planning,
we refer to as the vision stage
which is a way of framing things
that community stakeholders can
relate to and it's -
the key aspect
of that is combining academic
and community capacity
into the design.
The second stage is the valley
or actual design
or study implementation.
In this case,
what is shown are two
randomized conditions
at the level of programs
within the same communities.
A community engaged
implementation,
in this case collaborative
care for depression
versus more standard technical
assistance as a randomized
comparative effectiveness design
and actually implementing
that is what we mean
by the valley.
Outcomes are achieved
through the research tracking
of outcomes
and the products of that,
the findings, the presentations,
the papers, which are also
partnered, are fed back
in a capacity building way
to both build academic capacity,
people get promoted,
they get additional grants
and so forth,
and community capacity,
whether actually, you know,
better services, trained
individuals,
and data can also be used
for community agencies
to develop their grants
for services.
There are certain opportunities
and challenges I want
to briefly mention.
One is this approach obviously
requires extensive community
input, so mechanisms are needed
for that.
We use everything
from book clubs, where poetry
and science articles would be
read by different stakeholder
members, to larger forums
in movie theaters and so forth.
And a lot of design flexibility
is needed.
One can propose a certain design
but then one has to be prepared
to really utilize the input
and rethink things.
The two-way capacity building is
key to this approach
and that requires training
academics in community
engagement with strong
participation by partners
and then training the community
in research methods and design
so there can be meaningful
dialogue and participation.
There's also the issue
of partners requiring resources
to do this work
and then having access
to the resources
that are developed like data
and programs.
And I've given our formula here
of a third academic,
a third community
and a third shared
data and programs.
There are substantial benefits
and that includes community
co-ownership
which is very important
for moving forward
with communities
and policymakers using findings.
This approach can lead
to novel solutions
because there's
so much ingenuity brought
in through the community
knowledge that academics may not
have thought through.
And then because of the
stakeholder participation,
including policymakers,
this approach can generate
national as well
as local policy impact
and that's one of the goals
of using this
stakeholder approach.
And then finally,
I think the sense
of improved community capacity
and the reality of that,
and the social inclusion
of vulnerable populations meeting
social justice goals, can be
inspiring and moving and,
you know, tangibly very
important to communities as well
as to academics.
I've given a very brief example
from the Community Partners
in Care Study where we showed
that this approach
was both feasible
but also led to better health
and social outcomes relative
to more standard
technical assistance.
So that's a quick overview
and now I think we want
to wrap up.
Woman: We do.
Thank you very much, Ken,
and thanks to (Linda) and (Jeff)
and (Hendrix).
That was a very quick sampler
of several study designs
that are being used
in current implementation
research projects
and this Webinar is just the
first of what we hope are
several products
that will provide more resources
for implementation researchers.
We also have planned a review
paper that will expand
on this Webinar
and more fully survey the
landscape of implementation
research issues
and design options.
One of our committee members,
(Rachel Payback)
and her colleagues
at (Wash U) are in the process
of developing what should be a
very useful taxonomy
of study designs
and their key features.
There are cross-cutting issues
that we are likely to work
on with the measures
and the training groups
that (unintelligible) from us.
And we're thinking
about what other sorts of
(decisions and) support
resources might be feasible
and useful.
So here's the slide I promised
you back in the beginning
of the presentation,
so you might want
to grab a screen capture here
or take a picture,
however (unintelligible).
But we have
at least one article here
that maps onto each
of the four designs
that you just heard about and,
of course, we recommend
that everyone begin
with the first article here
which is a very good overview
chapter by (John Lansford)
and colleagues
in the (Ross Brown
edited) collection.
So with that,
I think we're ready to open it
up for questions.
I can see one here already
and we'll start there.
And that is a question
about how these different
strategies help
to address the problem
of limited power due
to small sample size.
The sample size is one
of the things,
one of the constraints
that we were alluding
to in the beginning
of this discussion.
Very often you either only have
access to so many sites
or subjects
or you can only afford,
given the size of a grant,
to access so many sites
or subjects.
And so that comes
into your selection
of study design.
(Linda), I'm going to throw this
to you because I know we've had
this discussion a couple
of times, so if you have a sort
of brief response
to that question,
that would be great.
Dr. (Linda Collins): Sure.
Can you repeat the
question please?
Woman: I sure can.
So the question is
about how these various
strategies help
to address the problem
of limited power
and small sample size.
Dr. (Linda Collins):
Limited power
and small sample sizes.
Well, that's always a really
difficult issue I think
in implementation science
and a lot depends
on whether you're talking,
whether your outcome is
at sort of the institutional
level or the individual level.
And the study
that I very briefly described,
the outcomes are at the teacher
level, so there are -
in each school,
there's maybe two
or three teachers
who are implementing a
Health-Wise program.
And we had sufficient power
with - in 56 schools,
we believe,
to conduct that study.
One thing that is interesting
about factorial experiments
is that they require much
smaller sample sizes
than people often think.
I - we don't have time
on this call to go into why that is
but I will say
that the logical underpinnings
of factorial experiments are
quite different
from the logical underpinnings
of the RCT and other kinds
of designs that directly compare
individual treatment conditions.
And so with the factorial
experiment, it is very often
to - often possible
to add factors to the experiment
without having
to increase the sample size
which is, you know,
very different
from what you would have to do -
within an RCT - if you add an arm,
you have to increase the sample
size by a lot.
And with the factorial
experiment,
there are many circumstances
under which you would not have
to increase the sample size
at all or not increase it
very much.
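A toy calculation may help make that contrast concrete; the numbers below are hypothetical and only show why adding an arm to an RCT grows the total sample while adding a factor to a 2^k factorial, when main effects are the target, need not.

    n_per_group = 100   # hypothetical per-group n for one adequately powered comparison

    # Multi-arm RCT: every added arm adds another full group of participants.
    for arms in (2, 3, 4):
        print(f"RCT with {arms} arms: total n = {arms * n_per_group}")

    # 2^k factorial: every participant contributes to estimating every factor's
    # main effect (half the sample sits at each level of each factor), so the
    # total can stay near 2 * n_per_group as factors are added, provided main
    # effects are the target and effect sizes are not diluted.
    for k in (1, 2, 3):
        print(f"2^{k} factorial with {k} factor(s): total n = {2 * n_per_group}")

The efficiency comes from reusing the whole sample for each factor's main-effect contrast rather than carving out a separate group per condition.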
So I guess I will say
that I think factorial
experiments are underused
in this area.
They're a lot more efficient
than many people think
and I encourage people
in implementation science
to consider using them.
Woman: Thanks (Linda).
So I want to remind folks
that you can either press star 1
to get in the queue
with questions
or you can submit them
by writing them in the Q&A box.
I see one coming in now.
They're coming in
while I'm talking.
A little different.
So this one is
for Ken regarding
community partnerships.
The question is really
about community input
at the level you described
and the investment of lots
of time, and so any words
of wisdom on ways
to involve community partners
in what is described here
as a reasonable pace of program
of research,
which I suspect means
within an NIH grant
funding period?
Ken Wells: Sure.
The - I would start input early.
The, you know,
time is always critically
important to be able
to develop the trust.
I think having the partnership
not have to develop
at the last minute
and perhaps even developing a
partnership for program
or a center or, you know,
some kind of entity
so that there are partners
that are used to working
with each other
and knowing the issues can be
very helpful
for that jumpstarting
particular programs.
I also think having partners
that have strong relationships
in the community,
you know, whether it's clinics
or community-based organizations,
that can then problem solve with you,
and then also being realistic
about the constraints.
I mean, we do need
to be flexible but also saying,
"Look, I have, you know,
X weeks to get the script
together and we need
to figure out, you know,
how we can get some inputs
quickly and see if we're
on track and then develop more
mechanisms over time."
That gets back
to the whole transparency issue.
I think acknowledging
that this might be difficult
for the community,
that's the respect issue
so it's really, you know,
being respectful of what some
of those challenges are.
I think to not do things this
way often costs time
and money later.
So, for example,
let's say one is not really
trying to do this kind
of engaged partnership work
because it feels too time -
it's taking too much time.
Then suddenly, something happens
in the partnerships you need
later in doing applied research
anyway, or you lose that partner
and need to recruit another
and it becomes very expensive
later on.
So I think,
at least a moderate degree
of investment upfront
in the partnership
which may slow the design period
down a bit,
initially can help later.
One very brief example
of that is we took almost an
entire extra year
in the Community Partners
in Care Study because it was
such a large effort
to engage partners
and get the design right.
But then we ended up meeting all
of our recruitment benchmarks
on time for clients
and agencies and providers
because the communities were so,
you know, bought in
and co-owning and facilitating
that we actually ended
up not being behind despite
that additional planning time.
Woman: Great, Ken.
Thank you very much.
So we've got one more question
in the queue here
that I will ask (Hendrix)
if he will field for us, please.
And that is someone wanting us
to explain the difference
between a randomized rollout
design and a (step wedge) design.
(Hendrix): Sure.
Those are quite similar
and I think we still have a
little bit to do
with defining these terms very
carefully in here.
The step wedge design
and these randomized rollout
designs are sort
of in the same class.
I - the way I read that is
that the step wedge design is
one where individual units,
groups are randomized to one
of two conditions
which are the
interventions themselves.
So it's essentially starting
with no active intervention
and they get randomized to one
of the conditions,
the intervention conditions,
to start off with.
In the kind
of randomized implementation
trial that we're doing,
that we've done in here,
the rollout really is
at the level of implementation
so that these interventions take
place in the context of one
of two different
implementation strategies.
So that's primarily how I see
the difference between them.
Woman: Great.
And thanks (Hendrix)
and thanks very
much (unintelligible).
I just wanted
to just take a moment
to thank all of our presenters
and certainly
to thank (Lori)
for gathering
and convening us here.
It is very exciting
to be able to capture the key
topline outcomes
from the meeting.
And we'd like to invite you
to continue this discussion
online at NCI's community of
practice, Research to Reality,
at cancer.gov and invite you
to join us for our next advanced
topics in implementation
science Webinar.
You'll be receiving a link
in just a few minutes
to our evaluation survey
and we hope you'll take a few
minutes to let us know how we
can improve the seminar series
and what next steps you would
like to see from this series.
So with that,
I'd really like to thank,
once again, our presenters
and look forward
to seeing you online at Research
to Reality and in
future Webinars.
Thank you again.
Coordinator: Thank you.
This concludes
today's conference.
Participants, you may disconnect
at this time.