[Amy Margolis] Well thank you, thank you for inviting us here to talk with you guys today. I am just going to give a brief presentation on what our experience has been so far at the Office of Adolescent Health in trying to replicate teen pregnancy prevention programs at a large scale. I've got a little bit of background information, but I am going to go pretty quickly
because Martha covered a lot of this this morning. Our office is relatively new, we've
only been around since 2010, and we are responsible for coordinating adolescent health programs
across the Department of Health and Human Services, implementing this new Teen Pregnancy
Prevention Initiative and implementing the Pregnancy Assistance Fund, which is our program
to provide support services for pregnant and parenting teens. This is just a schematic
of our three major grant programs. We've got seventy five million to replicate evidence-based
teen pregnancy prevention programs, another twenty five million for research and demonstration programs
around teen pregnancy prevention to test new and innovative approaches, and then our program
for pregnant and parenting teens. I am going to just talk about our replication program
today. So the purpose of the program is pretty obvious, to replicate evidence-based program
models that are medically accurate, age appropriate, and have been proven through rigorous evaluation to prevent teen pregnancy or other associated sexual risk behaviors. And our target population
is individuals nineteen years of age or younger at program entry. The foundation for the whole program is a systematic evidence review that HHS conducted under contract with Mathematica Policy Research and Child Trends, which identified the evidence-based programs that were eligible for replication under this grant initiative. The initial review identified 28 very diverse programs,
and just a note that we do plan to update that evidence-based list every year, so hopefully
more programs will get added, but right now we're working with 28 programs. We have seventy
five grantees that have been funded to replicate one or more of these programs, and the grants are large: each grantee gets between four hundred thousand and four million dollars per year. It's a five-year grant
period. They're serving youth in thirty nine states and the District of Columbia, and you
can just see it's a very diverse set of grantees, and they're working in all different program
settings. So now I want to go a little bit into our program expectations, and then I'm
going to spend more time talking about the lessons we've learned so far. We're a little more than six months into working with the grantees on these programs. So our program expectations: one, we're requiring the grantees to implement one or more of the
evidence-based program models. They must maintain fidelity to the program model, and we'll talk
more about that. They must address the target population, ensure that all their program
materials are medically accurate and age appropriate. We've required grantees to engage in a phased-in implementation period. They have to collect and report performance measure data and adhere to our evaluation expectations, and we won't get into that, but that's just an overview of our expectations. So for fidelity we're requiring
grantees to maintain fidelity to the program model, and to us that's maintaining fidelity
to the core components of the program model. And we've learned a lot from the work that our colleagues at CDC have done previously, so we're defining core components as those characteristics determined to be the key ingredients for achieving the outcomes associated with the program. And so we have core components around the
content of the program, the pedagogy of the program, and how the program was implemented.
So even though we're requiring the grantees to maintain fidelity to the original program
model, we are allowing and actually expecting them to make some minor adaptations to make the program more relevant for the population that they're serving. Like I said, it's a diverse set of grantees all across the country serving very different populations, so we're allowing minimal adaptations. We're using guidance for adaptations: in some cases the
developer has guidance that we're using to determine what adaptations are allowable, in other cases we're using guidance that has already been developed by CDC with ETR Associates, and then in cases where there is no guidance, we're developing it on our own with the developer, along with ETR. And these are just some examples of how we've defined minimal
adaptations, I mean we're really looking at minimal adaptations that make the program
more relevant to the population. So some things like changing details in role play, updating
outdated statistics, adjusting the reading level, making activities more interactive,
so pretty minor stuff. We're also allowing the grantees to add activities on to the original
program model as long as the activity is well integrated, works in concert with the underlying
program model, and does not alter the core components. So really for us everything goes
back to those core components; they're very important. And we did figure out very early on that if we were going to allow adaptations, we needed to have a process for how we were approving them and who was determining which adaptations were allowable. So we have this process in place, and we're just getting started, so we're not sure how well it's going to work, but at least we're trying. We're requiring all of the grantees, if they
are proposing any adaptations or add-on activities, to document them, along with their rationale for why the adaptation is necessary and how they are going to implement it. Then our staff review the proposed adaptations: they review the rationale to make sure it makes sense, and then review the adaptation against the core components of the program and any adaptation guidance that we have, to make sure that the adaptations we're approving really are minimal, don't change the core components, and do make sense. Program developers are
really busy, so in most cases we're trying the best we can to use the information that we already have from the developers around core components and adaptation guidance and to make recommendations on our own at the staff level. But if we have any questions, or there are things we're just not sure about and don't have any guidance on, whether or not something is allowable, we are going back and consulting with the program developer to see if this really is an adaptation that they think makes sense and is approvable. And then we're providing the grantees with written approval or disapproval on each of the proposed adaptations and add-on activities. The other thing that I just want to highlight, this has been extremely
critical for us, is that we've required this phased-in implementation period, so all of our grantees have had to engage in this planning, piloting, and readiness period for the first 6-12 months of the grant. This slide lists out all of the activities the grantees are engaging in during this period, but this has been really, really important to make sure
that the needs assessment was done, that the program they selected in their grant application
really is a good fit for their community and, if it's not, that they can pick a different
program, that they can gather all the materials and get the training and technical assistance
that they need, that they can develop a really thorough implementation plan and really think
through with their partners what they are actually going to be doing, and that they
have time to pilot test the program and learn from the pilot test before they start serving
large numbers of kids. So it's been really, really important for us. And with that I'll
just jump into some of our lessons learned. First big lesson learned for us, and I think
this was alluded to earlier, is how the evidence-based programs were identified. The evidence-based program models were identified because the evaluation of the program was found to meet these rigorous standards, but implementation readiness was not one of the standards considered in the evidence review. So we actually had a number of programs make the evidence-based list, and therefore become eligible for funding for replication, that really weren't ready to be replicated at a large scale. And that
leads into this, where we've learned, and I'm sure there are things missing from this slide, that there are certain elements that need to be in place for every program in order for it to be replicated by someone other than the developer at a large scale.
And so some of the things that we're going back and working with some of these program
developers to put in place are things like identifying your core components, making sure
you have a logic model for the program, making sure that the facilitator guide and the curriculum
materials are available and that they're well documented, that they're not just in the draft form that the developer used when they implemented the program, but that somebody
else can actually follow them, that any supplemental materials that you need to do the program
are also available, that there is training available on the program, and that you do
have guidance on allowable adaptations and a tool for monitoring fidelity. So like I said, we found that a few of the programs that made our evidence-based list, not many, were missing some of these elements. We had some programs that made the list where the developer had never identified the core components, and so we actually had to go back and work with them; they knew sort of what the components were, but they had never actually put them down on paper in a form they could give to somebody else. We had some where the materials weren't available: they hadn't put the facilitator guide together, and the curriculum materials were all sort of scattered here and there, so we've had to work with them on that. And then we had some who really didn't
plan to do a formal training on the program and didn't really have the time or have that
built in, so we've worked with them on that. And then we had a number actually that were
missing adaptation guidance and fidelity monitoring tools. The other thing we've learned is that
developers really differ in their ability to actually go back and create these missing
elements. So some of the developers have partnered with another organization that is there to
package their programs, disseminate them, and do training on them, so it's not really
a big deal for them to add these pieces in; in other cases it's one person working in a university by themselves who did this program ten years ago and has no staff and no resources, not only to package the program but even to answer questions from
people who were interested in replicating it, so that was a challenge. And because of
that our office has stepped in and actually taken a number of steps to make sure that
the programs that our grantees are replicating actually can be replicated at a large scale. And the planning period, I should say again, has been really, really important for
this because not only has it given the grantees time to really plan and make sure they're
doing what they need to be doing and they've picked the right program, but it has given
us time to go back to the developers and get all these pieces in place. So without that
we would be in a very different position, so it's been really, really important. And
one of the first things that we've tried to do is really establish a good working relationship
with the developers, which was a little bit difficult for us at first because we weren't
able to give the developers any advance notice that their program was going to be added to
this evidence-based list and therefore eligible for funding. So they found out their program
was on a list and eligible for funding at the same time the funding announcement was
released, and they got bombarded with calls. So that was not ideal, and if there is any
way to not do that again in the future, that would be a recommendation, because they really are important in this process, I mean they answered tons and tons and tons of questions, so it's important to bring them along. That was out of our control, but
since then we've tried to have a number of conference calls and email exchanges with the developers to make sure they know what our expectations are, get any updates from our office, and are kept in the loop along the way. We've also assigned someone
in our office to be the lead for each program model, and their role as lead really is to
become really familiar with the program, to attend the program developer's training, and
to be the point person in the office to answer questions around implementation and adaptation.
And we've been collaborating with the developers like I said to identify the core components,
develop adaptation guidance, develop fidelity monitoring tools, just making sure that all
those pieces are in place, and then again we're consulting with the developers to approve
any adaptations. We're trying to do as much of that as we can without needing to go to
them for every single thing, but if we do have questions we are making sure we bring
them into the discussion. Then there are two other things that we've had to do. We contracted with Sociometrics to package the program materials for the program models that weren't available, so now all of the programs that are being replicated can be purchased. You can get all the materials that you need, so that's been helpful. And we also contracted
with ETR to develop adaptation guidance for program models where no guidance was available. Thank you.