January 31,
2013 1:00 pm Eastern Time
Coordinator: Good afternoon
and thank you for standing by.
All lines will be in listen-only
mode until the question
and answer portion of the call.
At that time, to ask a question,
depress star,
then 1 and please be sure
to record your first
and last name.
Today's call is being recorded.
If you have any objections you
may disconnect at this time.
Miss Purcell, you may now begin.
Peyton Purcell:
Thank you so much
and welcome everyone.
We're really excited
to have this second webinar
on the D&I funding announcements
that are put out by NIH.
Just a couple
of quick logistics before I turn
it over to David and Russ.
Some of you are on as speakers
for the Web portion.
So, unfortunately,
you do have some access
to move things around,
but we ask that you please don't
touch any of the advancing
slides or exiting the meeting
or anything like that.
And hopefully, that way,
we'll avoid closing
out the meeting too early.
In terms of some other quick
logistics, for questions,
as the operator mentioned,
you can press star 1
to ask your question live
on the phone during the Q&A
session or you can also type
your question in the Q&A tab
at the top of your screen.
You'll just type it in
and hit Ask to submit.
Lastly, you can request a copy
of today's slides following the
presentation by emailing us
at ncidccpsisteam@mail.nih.gov,
and we'll be sharing
that link again.
And with that, I will turn it
over to David to start us off
and really walk through all
of this information
for the D&I (CF).
David Chambers: Okay,
thanks, Peyton.
So I'm David Chambers
at the National Institute
of Mental Health.
Just to give you an overview
of what we're trying to do here,
it's not that we're trying
to go line by line
through the
program announcements.
We would expect that those
of you who are interested have
already taken a look at them.
We're very happy to spend
as much of the time
as possible answering the key
questions that you have.
And what we wanted to start
with was just provide a little
bit of context and a few
of what we would see
as highlights
within the program announcements
for you, at least,
to think about.
And then as,
quickly as possible,
we'll move on to hearing both
from myself and Russ here
at NIH, hearing from the Chair
of the Study Section
that oversees the review
of D&I research applications
at NIH and then, again,
open it up for questions.
So just
to give you a little bit
of history at NIH: Dissemination
and Implementation Research,
for a number of years,
really was each institute going
it alone.
Everyone across different
disease or clinical
or community health areas was
thinking that there are
challenges around how do we get
effective interventions
to make a broader impact
by implementing them
within systems of care
within communities, et cetera.
But there wasn't really much
of a coordinated effort.
It was in 2005 that a number
of our institutes
and centers came together
with a lot of coordination
support from the Office
of Behavioral
and Social Science Research.
And in 2005 we had issued the
first round
of program announcements
where we were really trying
to establish a common agenda
across multiple institutes
and centers, everyone realizing
that we were having some
of the same challenges
in seeing effective
interventions have as much
of an impact as they can,
as well as trying
to build the knowledge base
to figure out how we can inform
the field on how to do
that better.
It was in 2009 that we were able
to add a number of institutes
that hadn't
previously participated.
Simultaneous with this,
we had been able
to establish a set
of D&I conferences.
Hopefully, a number
of you listening
on the calls were able
to attend any
of those conferences.
We keep the materials,
as much as possible, available.
So if you weren't able to go,
we still have as much
of the content
that we presented there.
Similarly, the last couple
of years, in thinking about how
we build the research
capacity of the field and think
about training new
investigators,
we established the NIH,
along with VA,
Training Institute
on Dissemination
and Implementation
Research in Health,
which we call TIDIRH.
So we had that in the last
couple of years.
This year it's being hosted
by Washington University
in St. Louis,
along with us at NIH.
So there is more information
available about that
if you're interested.
So moving on, one of the things
that we really wanted to do,
starting out,
was not just assume
that everyone was using the same
terms, with the same definitions
behind them.
And so when we talked
about Dissemination
and Implementation Research,
we decided that it would be
important within the program
announcements to lay
out our working definitions.
You'll see these on the screen.
Again, we weren't trying to say
that everybody
in this field should just define
things the same way
but rather to encourage
that by example:
we would lay out what we saw
as these key definitions,
and we would hope that -
if you were using these
definitions, fine;
if you were choosing other
definitions -
that you would please define them
and not assume
that everybody sees
dissemination
and implementation the same way.
For dissemination,
we were really thinking
about the targeted distribution,
the spread, of information
and intervention materials
to specific audiences
where you're trying
to spread the information
and try and sustain
that knowledge
and the related evidence-based
interventions for a wide variety
of stakeholders who can benefit.
We contrasted
that with implementation,
which really was
about a more intensive effort
to adopt and integrate effective
interventions
and really change practice
within different settings.
So we laid both of those
out because we felt
that sometimes people were
thinking, at a much more surface
level, about how do you
transmit information.
And we wanted to recognize
that that was one piece
of the puzzle.
But there was also a lot
of the context
of how one actually changes
practice and improves the
delivery of
effective interventions.
So moving on,
what you'll see here
over the next couple
of slides were extracted
from prior versions
of the program announcement just
to give you a sense of some
of the kinds of research
that we were soliciting and some
of the kinds of research
that we funded.
So in terms
of dissemination research,
we're thinking a lot
about every aspect
of the translation
or the transfer
of research knowledge -
the creation, the packaging,
the transmission and reception,
all of which we saw
as having an impact
on the ultimate benefit
of that information
to move health outcomes.
We also saw the value
of experimental studies
looking at the effectiveness
of different dissemination
strategies,
that we were particularly
interested in looking
at what kinds
of strategies might be helpful
for service delivery systems
serving, typically,
underserved communities.
And we were even interested
in some of the basic dissemination
research topics: understanding
how different target
audiences are defined,
how evidence might be packaged
for specific audiences rather
than assuming a
one-size-fits-all mentality.
As we moved on to the thinking
about implementation,
it really was efforts
to implement a whole variety
of kinds of interventions -
prevention, early detection,
diagnostic, genomic,
services interventions,
that have evidence behind them
and how they fit into a range
of existing care systems.
We wanted to spotlight the
importance and, really,
the challenge
of maximizing fidelity while
allowing for some
of the flexibility needed
to fit between the context
where things are set
and the evidence
as it's been provided,
and then taking
the longer-term view
around sustainability.
We saw certain topics
as really cutting
across both dissemination
and implementation research,
things dealing with the capacity
of different care delivery
settings to incorporate
different D&I efforts,
thinking about the development
and testing
of theoretical models that cut
across dissemination
and implementation
and thinking importantly
about the development
of outcome measures
and methodologies
where they don't already exist
in valid, robust, reliable
and relevant ways.
So we did want
to highlight
that there are a few things
within the new PAR
that are pretty consistent
with what we've had before:
that there remains a focus
on implementation
and dissemination,
that we really do emphasize the
generalizability
of the knowledge to be developed
within studies.
So thinking
about how is this not just a
very, very specific example
that answers a very specific
question, but what does this say
about other efforts to try
and disseminate information
and implement
effective practices.
We remain interested
in having transparent reporting
on feasibility issues -
so what is the likelihood
that an application going
in has thought
about issues related
to recruitment,
related to implementation
and related
to sustainability once the study
is over. We still want,
where applicable,
the use of theory and models
around the interventions
as well as, and specifically,
the implementation strategies
as well as the
evaluation framework.
Again, wanting
to specify the importance
of thinking
about health disparities
in low resource settings
and incorporate,
where applicable
and where particularly
important, issues of cost
and analyses in terms
of the economic perspectives,
not only from an agency
perspective
but also a societal one.
The next slide just tries
to highlight a couple
of the things that we saw as new
and different.
So we were a little bit more
explicit in thinking
about the development
of needed measures for D&I.
It's not the case
that Dissemination
and Implementation Research has
no development of measures,
but there are now very key areas
where there are gaps.
And we're hoping
that people will systematically
identify those gaps and think
about studies that help
to develop measures
to close them.
We wanted to make more
of an explicit request
for proposals on sustainability,
thinking much more
about the really challenging
and complex area
of what sustainability means,
as well as really wanting
to highlight the opportunities
for international applications.
We've added an emphasis,
really not just thinking
about sort of one intervention
for one particular problem
at a time, but thinking -
I use the word scaffolding -
thinking about how different
interventions might come
together to form more
of an evidence-based system
of care as well as thinking
about the interventions
that might be implemented
to deal with the problems
of people who don't just have a
single condition
that they're dealing with.
Whether you call them
multi-comorbid patients or you're talking
about complex patients,
it's really just saying how do
we not try to identify a problem
that may be very hard to recruit
for because it might mean
excluding all
of these other problems
that people are dealing with.
We also want to encourage the use
of innovative designs,
certainly with a premium on those
that are going to
result in more relevant,
more rapid and more
generalizable findings.
And we saw a potential
opportunity, particularly
in this climate,
thinking about applications
related to health policy
and those that use simulation
modeling, which are two areas
that have been under-represented
in the portfolio.
So the final thing that I want
to say, before turning it
over to Russ,
is that we've been fortunate
to see that, over time,
more and more
of our sibling institutes
and centers
at NIH have an interest in this.
And so you'll see
that we now have 14 institutes
and centers,
as well as the Office
of Behavioral
and Social Science Research,
who are involved in the R01.
We have a few who are involved
in the R21 and the R03.
It's very important,
as you're thinking
about the particular topic,
that you consider the relevance
of your topic,
your study's topic,
to the different institutes
that are listed on this slide.
And it's a great thing to have
as many - as much input
and as much contact
with the program staff
who are listed
in the announcement
as possible beforehand.
Because you want to know,
how does this complement what's
already in the portfolio
and does this meet a particular
need as seen
by individual institutes.
The PAR is written to cut
across all of these.
But what you'll find is,
among the different institutes
participating,
you'll not only get really good
content-specific expertise
but you'll also get a window
into where the individual
portfolio is at a
particular institute.
So with that,
I'm going to turn things
over to Russ.
Russ Glasgow: Good afternoon or,
I guess, maybe morning for some
of you on the West Coast.
Again, I'm Russ Glasgow
from the National
Cancer Institute.
And thank you for hanging
in there with us.
I believe I just have about four
or five slides,
and then we're going to hear
from Ross Brownson,
who's the Chair
of the Study Section
that reviews for this PAR.
Peyton Purcell:
And just a reminder to everyone
that just got on,
please don't touch -
some of you are accidentally
logged in as speakers.
So please do not touch anything
in regards to the slides.
Thank you.
Russ Glasgow: Thank you, Peyton.
Let me first answer one question
that we get a lot,
that isn't on here.
It isn't directly related
to the PAR but it certainly is
related to dissemination
and implementation science
at NIH and our support for it,
and that is
that there is no
large Spring D&I Research
meeting this year.
It's our intent
to have another large meeting
again, starting in 2014
and we hope
to have a much smaller,
invitation only -
very small meeting -
in the fall.
But more on that later,
and some of you have maybe heard
about the meeting complications
that we've had.
But it is our intent to recreate
or revisit those,
starting in 2014.
Okay so here,
in terms of opportunities,
I'm just going to kind
of restate or paraphrase,
coming at it
from a slightly different frame
than what David said.
In the new PAR we're looking
for content that focuses on -
David did a good job of talking
about some of the expanded
healthcare topics.
I particularly want
to underline context.
I think the importance
of context in D&I is emerging,
and so we're interested
in that research.
And then, as David mentioned,
also on sustainability.
We have very few applications
on sustainability.
And evolution -
what that means here,
on this slide anyway
or for the purpose
of this - is how things adapt
or evolve over time
when implementing a program
or a policy.
And the other thing I
wanted to emphasize,
which David mentioned:
we get very few
policy applications.
But we are concerned
about translation
into both practice and policy.
In terms of methods we're open
for a variety of things
and we want to encourage people
to think broadly.
In particular, I think,
we're interested
in mixed methods designs as well
as methods development.
And then two things
that are somewhat newer this
time are that we want to encourage use
of simulation modeling
for various purposes
and also comparative
effectiveness research
that meets the other
requirements of D&I.
So with that let's go
to the next slide,
if we could, Peyton.
Just a couple of things here
that those of you
that are experienced grant
writers already know,
but it's never too early
to start.
So with the page limitations,
hone things down
to make sure you're covering the
essential points.
An important point,
and each institute differs a
little bit on this:
if you plan to exceed
the $500,000 direct cost
cap per year for the R01,
you need to get prior approval,
and that's generally
at least six weeks ahead.
So you have
to start really early for that.
And again,
that's institute-specific.
So I'm going to say it twice,
on this slide even -
the bottom line,
and the one thing, in caps,
in our entire presentation, is:
talk to your program officers
at the institutes
that you're thinking about.
In general,
and Ross Brownson may want
to elaborate on this:
don't skimp on your methods.
We're aware
that you do have some tight page
limitations but methods are not
where you want to skip or cut
at the last minute.
And finally a key emphasis,
I think, not only for us
but as the field evolves
and PCORI evolves and that sort
of thing, is demonstrating your
partnership - the level
of engagement that you do have
with the settings
that you're working in -
and that you really understand
that context.
The next slide is something
that I want
to emphasize because,
unlike many NIH applications,
this is one in the new PAR
where international
investigators can be the lead
principal investigators.
They could certainly be
subcontractors
or co-investigators too,
but on this one,
unlike many others,
they can actually be the lead.
But there are a few things
to keep in mind for those of you
that are
international applicants.
And again, these slides will be
made available.
But some of the things
that you want to focus
on are the particular opportunities
from an
international perspective.
Is there some unusual talent,
resource, or magnitude
to what you might be doing?
One thing, often,
is a broader context, or looking
at things in a different, say,
health policy context.
Secondly, while we strongly want
to encourage this,
there is a requirement
that you do need to address
in the grant how this will also
benefit the health
of the American population
as well as that of the international
or other nations
that you're studying.
So that is a caveat to address
but the bottom line here is
that we do want
to encourage more applications
from foreign investigators.
And probably
in March we are committed
to offering one more webinar
to focus on specific issues
of international grantees.
And we will have information
forthcoming on that.
Finally, the next thing -
I'm not going to go through all
of these, but we've tried
to make a number
of resources available to you
on different NIH Web sites
to answer some
of the most common questions
including things like,
well, what's been funded
in the past and on what topics.
And, again, you might want
to think about, you know,
how what you're proposing might
build upon or be different
than that.
There are also some key references
and publications - particularly
for those of you that are new
to the field,
either new investigators
or new to this area.
Again, certainly not a
definitive, exhaustive list
but some things
that we thought might be helpful
that includes Web sites, books,
key publications.
And then, finally,
there's a note there to send
to us if you have any
other information.
So, Peyton, if you would keep
that one available
that is my last slide.
But what I'd
like to do is introduce Ross
Brownson now who is just going
to share very briefly a couple
of his perceptions as the Chair
of the Study Section
that reviews these.
And then, as promised,
we will open it
up for your questions in Q&A.
Ross Brownson: Great,
thank you Russ.
Hi, everyone.
I'm Ross Brownson.
I work at Washington University
in St. Louis and, as Russ
and David mentioned,
I now chair what we call
the DIRH - Dissemination
and Implementation Research
in Health - study section,
which is a relatively new study
section that, of course,
is tied with this
program announcement.
Keep in mind,
just like any study section,
what we do in the meetings is
to review the scientific merits
of the application.
We don't make funding decisions.
We always have,
at the beginning, the rule not
to use the F word during
the meeting.
That's not our job.
Our job is to review the
scientific merit and score them.
And then you end
up with your overall impact
or priority score.
I thought I would just share a
little bit, we have -
I believe we have 10 core
members on the group now
and they represent a variety
of backgrounds and disciplines -
you know, different areas
in public health, in medicine,
in economics and communication.
And so we have a pretty nicely
varied group.
And also, people work
in different program areas,
from infectious disease
or international health
to various chronic disease
issues and risk factors.
And so the idea there is you
have a core group made up
and then, each time,
we usually have other members
added on an ad hoc basis
who fill in certain
content areas.
I will also give a plug.
We have a great scientific
review officer
for our study section.
Her name is Dr.
Jacinta Bronte-Tinkew.
And Jacinta is just terrific
and so, just like the contacts
with the program people
around the content
of your application,
if you have questions
about the review process Jacinta
is really terrific
and does an excellent job.
I thought, just building a
little bit on what Russ said,
I would just mention a few
of my, sort
of what I call my ruminations
from the study section,
first serving on it
and then taking
over as Chair last summer.
And probably the most important
one is to build
on what Russ said
about don't skimp
on the methods.
When we look at it sort
of anecdotally,
and I think there's also some
data for this,
it's really
that approach section
of your grant that tends
to drive the overall score
and whether you end
up with a really strong priority
impact score.
And so keep in mind,
for a D&I topic,
the kinds of things
that we often see
as limitations include
not paying attention
to external validity -
not thinking about how
that will have an impact
on your grant
and the generalizability
in different settings
and populations.
Surprisingly, we see a number
of grants that either don't have
a conceptual framework
or don't carry a conceptual
framework through all parts
of the application.
And so I think that's another
important thing to keep in mind
in that area of the methods
and how you set up your study.
I'd also say
that I just searched the new
program announcement
and evidence
or evidence-based is mentioned
22 times in the
program announcement.
So I think another important
thing is how you define what
you're calling your
evidence-based intervention
or treatment or program area
and making sure you've got
that sort of nailed down.
And then I think maybe the last
thing to mention is just sort
of trying to make sure
that all pieces
of your application tie together
well and there's a consistent
flow between the background,
the significance
and your prior work,
and then especially beefing
up what's called the approach
section which, as Russ
was mentioning, is essentially
the methods
of the application.
Just as you would guess -
and a lot of you, I know,
have served
on study sections and observed
this - you know,
you've got a bunch
of researchers reviewing this
and so they're obviously going
to focus in on the methods
as the primary part.
And so I think that's enough
from me because we really wanted
to allow plenty of time
for questions and answers.
So, David and Russ,
I'll hand it back over to you,
and I guess we can open
up the Q&A part.
Peyton Purcell: Yes, great.
Thank you so much, Ross.
And just a reminder to everyone
to please press star 1
to ask your question, live,
on the phone
or you can type your question
in the Q&A tab at the top
of your screen and hit Ask.
Again, a reminder, please -
some of you were accidentally
logged on as speakers
so please do not move
around the slides as we go
through these questions
and answers.
Russ Glasgow:
And as we're waiting
for your questions to roll in -
this is Russ -
just one other request.
Please try to be concise
and brief in your questions
so we get a chance to hear
from everybody.
We will do the same
in responding to try
and be brief and concise,
and if it's a longer answer
either set up something else
or direct you
where you can look.
Peyton Purcell: And again,
a reminder,
please do not advance the
slides, whoever is doing that.
Thank you.
You are advancing it
for everyone.
Okay, so we have a few questions
that came in as we were waiting.
From (Kathleen),
the question is: the addition
of new institutes
to D&I is exciting,
and some institutes don't really
have D&I in their priorities.
How will this affect the review
criteria and likelihood
of being funded?
Must the D&I application focus
on cancer?
Ross, do you want to start
with that and then we can,
if Russ or David have anything
to add?
Ross Brownson: Yes,
I'll just say,
from the review group's
perspective, we'll be looking
at the scientific merit.
So we would not think,
you know, oh, I guess this is
coming from Institute X
and it's not
in their priorities.
One little tip: I tend
to get hold
of the strategic plan -
overall NIH's
but then, specifically,
the particular institute's -
and its position papers,
things like that,
and really look for this.
And, generally,
while they might not call it
D&I, almost every institute has
something about, you know,
translating research to practice
or improving health or something
about scaling up that does fit
into the umbrella of D&I.
But I'll let Russ
or David comment specifically
about the different institutes.
Russ Glasgow: Let me just jump
in quickly and we'll try
to just give you one answer
for each one unless,
like happens sometimes,
we have three
or five different answers
to your question.
So I want to be definitive
about this.
This is a trans-NIH initiative.
This webinar is being hosted
by the NCI,
the Cancer Institute,
but content
that is either relevant to
or cuts across any of the,
as David said,
14 institutes listed here is
quite relevant.
This is not specific to cancer.
David Chambers:
Right and the best thing to do,
certainly, if you're interested
and believe
that you're targeting a
particular institute is get
in touch with the contact
because they'll have a much
better sense.
Because sometimes what you'll
find on the Web, just searching
or in a particular report,
may not reflect the most
current priorities.
Peyton Purcell:
All right, thank you.
A second question from (Susan).
How do the definitions relate
to the term that's being
used now,
implementation science?
Ross, do you want
to start with that?
Ross Brownson: Peyton,
say that one again.
I'm not sure I understood
the question.
Peyton Purcell:
I think it's a question
of how do the definitions
that we, that David mentioned
earlier, for dissemination
and implementation sort of connect
to this broader term
of implementation science that's
being used.
Ross Brownson: Yes,
I mean I think the terminology
is used many different ways
and I think, in particular,
when you get
into international settings
you'll see terms
like knowledge exchange
or knowledge translation coming
from Canada or Australia
and then different terminology
in parts of Europe and Africa.
So what I would say is you're
writing to NIH
so use the terminology that fits
with what you're doing that's
in the program announcement.
And if you're thinking
of it broader -
using implementation science
as a broad term -
then that's okay
for what you're thinking about.
But when you write it, you know,
really think,
from the definitions
in the program announcement
that David just covered,
is this more
of a dissemination research
study or an implementation
research study.
And frankly, you know,
having done both
and having looked
at many applications,
there's oftentimes an argument
that it could be one
or the other.
And so don't spend too much time
sort of picking one definition.
Just think of the one
that makes sense
and that you can make a rational
choice for.
The reviewers don't tend
to spend a lot of time
on is this really an
implementation study
or a dissemination study.
They tend to look at the science
of the research
and really evaluate it on that,
if that helps.
Russ Glasgow: Thanks.
Often the key question
that you're asking is more
important than how you group it
into a broader term.
Ross Brownson: Yes,
I would agree.
Russ Glasgow: Okay,
so another question
that we have is: what do you mean
by sustainability?
And that's
actually a very good
question because I think,
in part, what we're hoping
for is to build
the knowledge base.
Studying basically how
interventions, once implemented,
continue to be delivered
within those contexts
over time is what we mean
by sustainability.
We're actually looking to those
of you on the phone
who are interested in applying
to conduct research
on sustainability
to help build that
knowledge base.
So does it mean
that it's sustaining core
components of an intervention
while allowing for flexibility
over time or tailoring
to a particular context?
I think our sense is that,
to this point,
it's been understudied
and it's been under-defined,
with sustainability thought of
as the absence of change -
that basically sustainability
means that an intervention looks
exactly the same way
from Day One
through infinity.
And what we've looked
for is studies that help
to challenge that, either reifying
it or coming up with more sort
of context-specific thinking
about what is
meaningful sustainability.
So it really is, as we see it,
a wonderful area for people
to pursue within the field
and I don't think that it's sort
of over-determined
at this point.
Peyton Purcell: Great.
A question for you, Ross -
can you speak a little bit more
about how a policy research
question might be reviewed
by the Study Section?
Ross Brownson: Sure.
That's a great question.
The way - here's how I look
at it and, honestly, we don't -
we typically haven't seen a lot
of policy D&I applications
so I think it really is an area
where we're still looking
for strong applications.
I would frame it this way:
at least the way I look at it,
policy studies are typically
dissemination
research studies
because they're sort of messier
and more diffuse.
And I would think about -
and we've written about this,
so if you want
to email me I'm glad
to send you back a few articles.
But we've written
about what makes
up an evidence-based policy
and we talked about the content,
the process and the outcome
of the policy.
So if you're thinking
about a dissemination research
study, a policy,
it could be within any
of those three domains.
So it might be, say,
a review of the content of a set
of policies:
is what's getting passed
evidence-based?
If you're looking
at the outcomes it might be how
can we influence the scale-up
or the sustained use
of the policy
and what are the outcomes
of being able to do that.
I think the one part
of the policy research
that is typically a little
different than some other D&I
studies is the design
and the implementation
of the actual research.
Policy, as you all know,
is a lot harder to control.
And so we're often looking
at natural experiments or sort
of the independent exposure
that we're not able
to control too well.
So you want to match that design
to the research question
and maybe it's not a randomized
trial for the design,
it's some other design
that allows us to look at those,
including both qualitative
and quantitative approaches.
Peyton Purcell: Great,
thank you, Ross.
And I - we have several more
questions that are
in on the Web.
And I promise that we'll get
to those but I do want to check
in with the operator to see
if we have any questions
on the phone.
Coordinator: Yes, we do.
(Linda Squires),
you may ask your question.
(Linda Squires): Oh, hi.
Mine's probably next
in the queue anyhow,
if you want to read it there.
Peyton Purcell: You're on.
Go ahead and ask it.
(Linda Squires): Oh, okay.
Sorry about that.
Mine is about if you're looking
at, like U.S. Preventive
Services Task Force
recommendations
and either the dissemination
of those to the public
or the implementation of them
within a healthcare system.
Is that, you know -
is that enough evidence
to be considered evidence-based
or is that like a policy -
like how would that be viewed
by a study section?
Russ Glasgow: The general -
the short and the simple answer
is yes.
This is Russ.
That definitely would be -
in some ways
that's one of the safest things,
because both the clinical
practice guidelines
from the U.S. Preventive Services
Task Force as well as guidelines
from the Community Guide...
(Linda Squires): Yes.
Russ Glasgow: ...by CDC,
or other evidence-based
reviews or recommendations,
are a great place to start.
But definitely those two
and I thank you for asking
that question.
That's something
that I think would be a
particularly nice thing.
Not only is
that an important issue,
as you and others know,
about getting those things
and studying how they actually
are rolled out and implemented
in the real world.
But in your case, since part
of the process is reviewing the
evidence and there's an
authoritative body that has kind
of said yes,
these are evidence-based,
that gets you over one
of the hurdles that often comes
up in study sections.
(Linda Squires): Okay, great.
Thank you.
Peyton Purcell:
Any other questions
on the phone?
Coordinator: Yes,
Mr. (Shridanti),
your line is open.
Mr. (Shridanti),
please check your mute button.
Your line is open.
Dr. (Ridanti):
I believe this is me,
Dr. (Ridanti).
Peyton Purcell: Yes, go ahead.
Dr. (Ridanti):
Actually a very simple question,
Dr. Brownson's content
at the beginning,
when he gave his speech
or as sort of part
of the section,
there were no slides associated
with his text.
And I was just wondering
if that would be made available
to us in addition to the slides
that were initially already
prepared for the other
two speakers.
Peyton Purcell: Okay...
Dr. (Ridanti):
And then the second is...
Peyton Purcell: Go ahead.
Dr. (Ridanti): Sorry.
Second is a little bit more -
just, you know, so it does look
like the (unintelligible)
participation
for R01 are greater than those
that participate in R03 or R21.
Does that mean that,
for someone like me
who may be applying, let's say,
to a - for something
that would be more appropriate
for the (NHLDI),
we're essentially limited
to the R01 mechanism?
Ross Brownson: So, yes,
taking your second question.
It is the case
that they are most on the R01
and fewer are
on the R03 and R21.
And the primary reason is
that different institutes use
different mechanisms - or don't -
and they make their
own call on it.
Everybody uses the R01.
But some institutes have decided
that either they're not going
to accept any R03 or R21s
or they're only going
to reserve them
for more specific
institute-only things.
So the best thing to do,
really - and this goes
for anyone or whatever institute
- if you find
that there's an institute that's
signed on to one
of the mechanisms
but not the other,
it would be a great thing
to contact them
and say I think I want
to do something
that might be smaller
or might be more developmental -
what would you suggest.
Because they might say, well,
you know, we have this other
announcement
that we might encourage you
to do or they may be able
to problem solve with you
on how might that fit
within an R01,
because an R01 is really used
for many, many different things.
That's what I'd do.
Peyton Purcell: Thank you.
And to answer your first
question, Ross's comments were
meant to be just sort
of commentary
and so he does not have slides
associated with that.
So that - you know,
what we will send
out will be the slides
from both David
and Russ's presentations.
Ross Brownson: Peyton, you know,
one thing we might send
out is the first part about sort
of the makeup
of the review group
that you could send as a link
to the CSR listing or the roster
of the review group.
That might be helpful.
Peyton Purcell:
That's a great suggestion.
We'll go ahead and do that.
Russ Glasgow: We'll do that -
and this is Russ,
and I apologize.
Ross and I are
on too many calls together
so people often get us confused.
But the other thing, to try
and address maybe
what was behind the caller's
question or request: one
of Ross's colleagues has just
published -
and some of you might have been
on it; we just did a webinar
featuring it - an article
by Enola Proctor, who's a mental
health services researcher
at Washington University,
on 10 tips for writing
successful D&I grants.
And we'll see,
if not during this call,
if one of us can find the exact
citation we'll give it to you.
But it is available
on our Web site
and that might address much
of the same content
of what Ross talked about.
Peyton Purcell: Okay, great.
Let's go to a couple more
questions from the Web.
(Sheila) asks how do you define
transdisciplinary
in a healthcare
delivery context?
Russ Glasgow: Good question,
one of many.
And this one might be -
we're trying
to just give you one
or at most two responses,
but we might have different ones
on this.
I'll start out -
and this is just
from NCI perspective.
We focus a lot
on going beyond just having
different disciplines.
That's kind
of like multiple disciplines
but, in our view,
that isn't transdisciplinary.
Transdisciplinary means
there's a real integration
and interaction
among them, so that something
different comes out rather
than just parallel play.
And in particular,
there's been a lot of work
in the development
of what's been called
team science.
And I might just refer you
either to Google
or our Web site or others
on team science - which,
by the way, is also applicable,
I think, to partnership research
or community based participatory
research and other things.
But that might be one Ross
or David might want to chime in
or add their take on.
Ross Brownson:
I would just add one thing
to it, Russ.
Again, this is Ross,
with the O. One part I might
add: the team science articles,
I think, are very good -
there was, for example,
a special issue
of the American Journal
of Preventive Medicine.
But the one part I would think
about, when you think
about other disciplines,
most of us on this call probably
are from some
health-related discipline.
What I would think
about is what is it you can gain
from another discipline,
let's say business
or communication or marketing
or some other area, economics,
that might not be directly
health and what can you gain
from that and what new might it
add to a D&I research study.
And I think
that can not only build this
sort of transdisciplinary part
of a study but also
increase your innovation.
And say, you know,
we're doing something new
that applies this model
or framework
from this discipline
into the health field
and it's innovative because -
and I think
that can be a nice addition
as well.
Peyton Purcell:
Great, thank you.
Another question
from (Heather) is: are D&I
system level interventions
included in this PAR as opposed
to D&I individual level
interventions,
or would the distinction be
driven by the individual
institute priorities?
Wondering whether some
of the institutes might see
systems level D&I
as the domain of (ARC).
David Chambers:
So this is actually a good case
where the answers are yes
and yes.
D&I for systems level
interventions is included
within the PAR;
we are quite comprehensive
in thinking about the kinds
of interventions
that are being delivered
and are not just focusing
on the individual level.
In fact, as Ross had said,
you know, certainly thinking
even at the policy level,
policy interventions,
is perfectly fine.
But it's also a good thing
to make sure, again,
when you're thinking
about the broad clinical
or health topic
that you consider asking the
specific questions
to the relevant contact
at each institute.
Because it is the case
that something can be quite
responsive to the broad
announcement
but you'll still want it
to fill a niche
or be high priority
to the individual institute.
So, yes, it is perfectly fine,
and you'll want to talk
to each individual institute
about it.
Peyton Purcell: Great.
A question about comparative
effectiveness research and
whether that is encouraged in
this - sorry, I'm having trouble
reading this question -
comparative effectiveness is
encouraged, and is it comparative
effectiveness
of two interventions
on health outcomes
or comparative effectiveness
of various implementation
and dissemination strategies?
Russ Glasgow:
So this is Russ from NCI.
Let me paraphrase that first
to make sure I understand it
and then I'm going
to give you the same answer
David did of yes and yes,
and explain it.
I think the question was
about often we differentiate
between what is the
evidence-based intervention.
Say that could be a guideline
or a screening procedure
or a certain intervention per se
or it could be a policy,
and contrast
that with the implementation
strategy, the way that was used
to get that into practice
or the dissemination strategy
to get that out there
to go to scale.
In general, I think,
what we're looking for
and we're really focused
on is the implementation
or the science
of how it's being implemented.
But if you did the maybe
narrower, more traditional
concept of CER, just looking
at two different approaches
to screening, say,
or two different treatments
for depression,
as long as it met the other
requirements for D&I research
that we went over here,
then that could also
be included.
But just by itself,
doing that in an efficacy
setting, would not count.
But if you were meeting the
other requirements for D&I
that we talked about that would,
then, be appropriate.
Ross Brownson:
But a nice resource also -
and I'm not sure if it's
on NCI's Web site -
is that some of the folks
in the VA have done some really
nice work on hybrid trials.
That work really does try
to incorporate both
effectiveness
and implementation aims.
So you can certainly take a look
at that as one way
of bridging both comparative
effectiveness in terms
of specific interventions
and understanding the
implementation
of those interventions.
Peyton Purcell: Great.
Let's take a quick break
from some of the Web questions
and see, again,
if we have any questions
on the phone.
Coordinator:
Mr. (Massey) you may ask
your question.
(Massey): Yes, thank you.
It sounds like, in listening
to you guys,
that some of the emphasis may be
on strategies or applications.
What do you think
about consideration of research
on more basic models
and practices?
Is it all wrapped
up there together?
Ross Brownson:
You're saying basic processes
related to dissemination
implementation as opposed
to simply a strategy to try
and make something happen?
(Massey): Yes, exactly.
I mean, if you think of,
you know, (Dennis Dixon)'s work
on barriers or Greg Aarons's work
on acceptance of evidence-based
programs, I mean,
to me that kind of gets
to the notion of the process
by which it happens.
Ross Brownson: Right.
Yes, we absolutely do - in fact,
one of the things
that we don't see as much
of are the more basic sort
of fundamental questions
that should underlie the field.
Most of what we see
is applied work.
So we would love to see
if what you're saying is here's
a particular question related
to basic processes.
It just hasn't been well
answered and here's an
opportunity to do so.
I think that that remains
of interest.
Russ Glasgow:
Let me just clarify -
this is Russ.
I concur with that
but let me give you an example
or just a clarification.
For example,
let's take decision making.
There are many models
and basic theories for developing
the science of how we,
particularly at a systems level,
get those decision tools
employed.
So research on theory and models
and processes
around that would be
very appropriate.
Now, in contrast,
if you're talking
about basic research
on how humans make
decisions, that would not be
appropriate
for this study section.
It would be very appropriate
for other study sections at NIH,
but not for this Applied
D&I section.
Ross Brownson: Right.
(Massey): Okay.
Peyton Purcell:
Thank you for the question.
Any other questions
from the phone?
Coordinator:
There are no further questions
at this time.
Peyton Purcell: Great.
We'll go ahead and ask some
of the ones that have come
in on the Web.
This is a more logistical
question, but it may be of
interest to some people.
How are funding decisions made
after scores are assigned
and who makes them?
David Chambers: Right.
So what we've talked
about in terms of grant review
to this point -
with Ross as the Chair -
has been the first level
of review.
So at NIH there are multiple
levels of peer review
that culminate in decisions
about funding.
And as Ross had said,
we're not talking
about using the F word in terms
of funding because that first
level is really to look
at the scientific merit.
After that,
it goes to the
individual institute.
And the individual institute -
each institute is required
to convene an advisory panel
that helps to make the second
level of decisions
about funding.
And it's at that level that,
then, the reviews are looked at,
the applications are looked at.
Program is often involved
in helping to understand the fit
of the program in terms
of the program priorities.
And then, ultimately,
it would go first
to that advisory council
and then to the leadership
of the individual institute.
So while, certainly,
positives arise
from having a well scored
application within the review,
it is important
to understand the broader
process, which does involve
trying to contextualize the
individual science with respect
to the institute's mission,
the institute's priorities
and the other things going
on within the portfolio.
And again, that's the other
reason why it's
so very important to be
in contact with an
institute-specific person
throughout this process
because they're the ones
who can help navigate the
individual process
at each institute.
And there are some differences
but, broadly,
that's the process.
Peyton Purcell: Okay, great.
Thank you for that.
And from (Katie)
(unintelligible) how do you
define evidence based
and do you have
to have a publication on an RCT?
Ross, why don't we turn it
over to you to start?
Ross Brownson: Sure.
The answer to the last part
about whether you have
to have an RCT,
the answer to that is no.
You know, there's different ways
that people define
evidence based.
And probably the strongest way
is - so, for example,
if you look
in the U.S. community guide
that Russ mentioned,
there are many interventions
in there that are
from quasi experimental designs
or hybrid designs
or other designs
that do not have an RCT.
So the easiest way is
to say one or more systematic
reviews has deemed this
evidence based.
There are other ways
of framing it
where perhaps an expert group
has looked at the literature.
The Institute
of Medicine might have looked
at the review of certain areas.
But I think, you know,
what you want to look
at is: is there some consistency
in the literature - a review,
a systematic review
or even a narrative review
sometimes - that says something
is evidence based.
And then you can think of sort
of the intervention itself
and you can also think about,
if you've got an evidence based
intervention,
the process of how that
intervention gets implemented -
and that includes a lot
of implementation factors.
And so, you know, it's,
to some degree, it's in the eye
of the beholder.
But I would rely
on the scientific evidence
and think about evidence based,
based on the area of inquiry
and the types
of research designs -
strong design, strong execution -
that lead to something being
deemed effective
for a particular health issue.
Russ Glasgow: Let me -
this is Russ -
let me just concur with that
but also build on it just
to make sure
that we don't implicitly
discourage you
from doing things.
So I completely agree
with everything Ross said.
Also, though, I want to say: if,
say, some of you,
as individual investigators,
have done a series of studies
and you have strong evidence -
and that can include an RCT;
we don't want
to give the idea here
that we're opposed to RCTs,
particularly those
that might be more pragmatic,
more contextually sensitive -
and you have a line of research,
that would be appropriate.
It doesn't have to be -
the guidelines, as Ross said,
or systematic reviews,
those are the easiest ways
to get over that bar
for being evidence based.
But if you have a line
of research and now you want
to take it to scale -
let's say you've studied it
in a few settings or something
and you have good evidence,
and now you want to go to scale -
even though it hasn't been
subject to a systematic review,
that is also appropriate
for this mechanism.
Peyton Purcell: Great,
thank you for both
of those answers.
A question -
and I think we addressed this a
little bit in the presentation,
so maybe we can reiterate
whether or not we're looking
for research
that combines both dissemination
implementation
or if it can be one
or the other.
Ross Brownson: Right,
so the quick answer is,
as we tried to establish,
there are very important
questions that are
specifically
within dissemination research,
and those are great;
there are a number
of important ones related just
to implementation research;
and there are a number
that really cut across.
All of them are fine.
So you don't have to combine.
If you do and it's appropriate
to the question - the key thing,
really, is identifying a
research question that,
you would be able to argue,
is both answerable
and significant.
And it does not have to cut
across an entire field,
but we're looking
to advance the field
across the collection
of studies.
Peyton Purcell:
Great, thank you.
A question - another sort
of more logistical question
about the D&I section,
study section, and whether
or not this is active now
or only for this
new announcement.
Russ Glasgow: This is Russ.
So it is active now.
It has been active
for some time, but David,
as a better historian than I am,
can remind us
of when it was made a permanent
study section.
There actually had been a review
mechanism for this type
of research at NIH for some time
but it's been...
The key thing that's different
now is it's a permanent
study section.
It is in place, and for those
of you who aren't familiar
with the ins and outs,
the important implication is it
has regular members
who are permanent members
of the study section so that
when it comes back,
you're much more likely
to have your revision,
if you don't get funded the
first time,
reviewed by the same folks
and the same study section.
So you're not kind of subjected
to that double jeopardy.
Now, having said that,
you can come
up with your own application
that's something different
from this.
We're not saying
that this particular PAR defines
the entire field
or that it's the only thing you
can do - including, some people
ask, well, what
if you have an institute
that isn't on here?
Again, what the important caveat
that David said, always, always,
always - and that's going
to be our last line -
talk with the program officer
in your institute.
But you can do other things
that aren't in here.
You don't have to be responsive
to the announcement,
and those applications can still
be - and in some instances have
been - reviewed
by this study section too.
So the study section is
for the field, okay.
As it's turned out, historically,
the vast majority
of things that have come
in have been in response
to the predecessor of this PAR,
but you can do things
on your own also.
Ross Brownson: Right.
Peyton Purcell:
Great, thank you.
Are - let's just check
in to see whether
or not there are any other
questions on the phone.
And a reminder,
you can press star 1 to get
in the queue.
Coordinator:
There are no questions
at this time.
Russ Glasgow: So Ross,
I'm just wondering here -
I'm looking at the time.
We're coming
up on about five minutes left
and we may have time to get
to one or two more questions.
But I'm just wondering, Ross,
if you have any other
reflections or other things
that have been cued off
of thinking
about study section debates,
or anything
for the greater good you'd
like to share?
Ross Brownson:
Maybe just a couple other things
I was reflecting about,
sort of things
that might add a little
innovation and newness.
I think one thing we
like to see - and this gets
into some of the work,
if you're familiar with PCORI
and patient-centered
outcomes - is the participation
of stakeholders in the process.
And if you're working
at the community level you might
be using the CBPR,
Community Based
Participatory Research.
But I think
that stakeholder engagement,
in some way, is an advantage
for a lot of your work.
We're very keen
in our research group here -
and I think, in general,
it gains you points - on the idea
of designing for dissemination.
So there's often
the dissemination plan
for a grant kind of stuck
at the end, and it's usually,
you know, maybe a paragraph.
Well if you're doing a D&I
application,
maybe it's worth putting a
little more thinking into that,
what I call designing
for dissemination,
and sort of thinking about okay,
what are the results,
what are the implications,
what are we going to do
with the findings
of the study you're proposing?
So I think that can be
another one.
And then using some
of the hot new techniques
that I guess aren't
so new now but, you know,
systems modeling
and other approaches
that can add
to a D&I application can also
be helpful.
Peyton Purcell: Great, thanks,
Russ - Ross.
We did have -
I just did what Russ
mentioned before.
I'm getting their names
mixed up.
We have one more question
that came in and this is
in thinking
about developing an
implementation intervention,
what would be some important
considerations in submitting
to the D&I R21 versus
an R34, beyond the funding
amount and the length
of the grant?
David Chambers:
Right, so for those
who are familiar
with these different mechanisms,
sorry, I'm going
to take 30 seconds.
The R21 is, of course,
one of the three announcements
specifically focused
on Dissemination Implementation
Research in Health.
The R34 is actually a mechanism
that's quite similar to the R21
but only used
by a few different institutes.
The ones that I know very well
are the National Institute
of Mental Health
and the National Institute
on Drug Abuse.
And it's actually been the case
that the R34
that NIDA has supported has had
grants that have been reviewed
within the DIRH study section.
Within the NIMH one,
they more typically would go
to the NIH standing review
committees around Service
and Interventions Research.
So I'm happy to talk
to the specific questioner
about this more specifically.
But basically the R21
and the R34 do reflect similar
scopes of work.
The R21 is smaller
and typically only
up to a 2-year grant, whereas the
R34 has the ability
to go beyond that, to three
years.
It's also the case that the R34
has a broader mandate
around Services
and Intervention Research,
whereas the D&I R21 is very
specific to the topic here.
Considerations are also whether
you would guarantee
that you would be reviewed
within this particular study
section versus you might go
to another place.
So I think,
to answer your question,
there are a few differences
beyond just the
specific amounts.
And the best thing to do,
as always, is to get
into contact
with the program officer
and really say what do you think
about whether this fits better
with one or the other.
Russ Glasgow: We apologize
for those of you
who were anticipating -
oh, we didn't.
Sorry, we had a power outage
here and we thought we might
have lost you on the video,
but I guess not.
I think we probably have time
for one more question
and then Peyton may have just a
summary reminder or a comment.
So, since I can't see right now
any questions, is there anything
on the phone?
Coordinator: Yes, Mr. Sandler,
your line is now open.
Irwin Sandler: Yes,
this is Irwin Sandler
and I had a question
about interventions
that were moving
onto different platforms
such as Web based programs
and that kind of thing.
Under what conditions might
those be considered a
dissemination
and implementation question
versus a basic
efficacy question?
Ross Brownson: Yes, so I think,
Irwin, the major thing to think
about - and this relates
to prior comments
that Russ had made
around comparative effectiveness
research - is what is the
specific scientific question
that you're trying to answer.
If the question is: does this
particular version
of an intervention compare
positively, negatively
or equivalently
to another condition -
so if the major question is
really still
around whether this is
efficacious - then
it's really not
as much a D&I study.
If the issue is looking more
at issues of reach,
issues of fidelity over time,
issues of broader sort
of population impact,
then it can fit very nicely.
But a lot of what we've seen
in this area -
not to say that this is the
question that you're
specifically answering -
has been more
of the non-inferiority trial
that says: if you put this
particular intervention
into this format versus that,
do you get similar outcomes?
So that would be less
of a D&I focus versus some
of the things
that are really predicated
on broader reach,
on broader impact,
on broader ability to take
up within different systems.
Irwin Sandler: All right.
Peyton Purcell: Thank...
Irwin Sandler:
So then if you are taking a
program that is strongly
evidence based in one format,
let's say an interpersonal
format, and you're moving it
to a Web based format -
the original question is
about whether, before one did
the studies of implementation
in different settings,
of the uptake of that Web based
program, you would first have
to demonstrate the efficacy
of it, and that would not be
considered dissemination
and implementation research?
Ross Brownson: Right,
if the study that -
if the primary question
of the study
that you're thinking
about really relates
to is there a benefit
from this particular
intervention,
then it doesn't fit as well
with the announcement.
Whereas if you're saying: here
is the evidence base,
and now the challenge,
the question that we're asking,
is how do we
get this taken up broadly.
Again, it's not a perfect yes/no
because there are ways
in which you can think
about health IT as a way
to increase uptake.
But, again, a lot of it is
in the framing
of what is the particular -
you know, what are my specific
aims and what is the trial
that I'm designing,
and how much is
that answering the question
of can this be implemented,
or does this help an individual
who receives the intervention.
Peyton Purcell: Great,
thank you for that question.
And we're at 2 o'clock
so we're going
to go ahead and stop.
But just a reminder to everyone,
you can find some additional
information about most
of the stuff
that was mentioned today as well
as some of the resources
in (unintelligible) Enola
Proctor paper
on the NCI Implementation
Science Web site.
And, additionally,
you can also -
we will be sending
around the slides for those
that were on the Web portion.
If you had trouble getting
onto the Web you can email us
at NCIDCCPS IS TEAM --
that's all one word --
@mail.NIH.gov
to request the slides.
Thank you for your time
and have a good afternoon.
Coordinator:
This concludes your
conference call.
You may now disconnect.
END