>>DR. MIHAIELA RISTEI GUGIU: Ok. Hi everyone, my name is
Mihaiela Ristei Gugiu, I'm a Senior Research Associate at the Crane
Center for Early Childhood Policy and Research.
I'm working with Laura Justice who is here today. She's my boss.
Actually, my background is in political science, and I have
extensive expertise in (indistinguishable) and corruption.
As part of my dissertation, I did a lot of anti-corruption policy evaluations.
In 2008 I started attending the American Evaluation Association
conference and presenting these policy evaluations.
About three years ago I started doing program evaluation, and as
part of that, I started to use logic models.
The approach that I've been using is called the "Semi-structured
Interview Protocol Approach," and it was developed by my husband,
Cristian Gugiu, who is faculty here, in collaboration with
Liliana Rodriguez-Campos, who is faculty at the University of South Florida.
This particular approach has been used in the United States
by various agencies, including the USDA and NIH.
It has also been used in Australia, Europe, and South Korea; recently
I've been looking to see who has been using this particular approach.
It also has a lot of flexibility, in the sense that it can be used in
different fields. I have found it being used in business economics,
healthcare, education, and criminal justice. So it is quite versatile.
I'm going to present the steps that I go through in applying this
particular approach. As an example, I'm going to use a draft of a
logic model that we recently created for a program
that we are evaluating at the moment.
The first step in this particular approach is to identify
the key informants, the key stakeholders in the program.
You want to be able to talk to the people who have information
about the program, who can tell you how the program works,
and who have a vision of the program.
And you want to be able to talk to people at different stages,
so to speak, on the scale of the program, from the higher-ups to the
ones who implement the program in the field, because they will give
you different types of information.
They will have different visions, and you want to gather all of them to encompass the program.
The second step is to use the interview protocol to
get background and contextual information about the program.
What was the reason this program was created in the first place?
Are there any social, cultural, or political factors that impact how the program functions?
Are there factors that may impact the way the evaluation will go?
Generating the logic model, I think, is the easier
part because, as everyone has talked about, you have the inputs, activities,
outputs, and outcomes, and you structure the outcomes, depending
on the type and the scope of the program, into short-term,
intermediate, or long-term outcomes.
Modeling program outcomes refers to identifying whether these
outcomes are at the individual level or the organizational level;
if the program has the goal of changing community outcomes,
you will have community-level outcomes, or maybe even larger than that.
Once you identify these outcomes, of course, you can map the activities
and the outputs that go with the outcomes. And here again you
model them the same way: individual
activities; group activities that involve more than one individual,
like families; or activities that take place at the organizational level
or the community level. Again, this is based on the qualitative interviews.
Modeling program inputs refers not only to identifying the
resources that are going into the program, but also to whether there
are resource gaps, because this will give you information about how well
the program can function with the resources it has. It will also
give you some idea of what may be going wrong: they have
these outcomes, they have these activities, but maybe they don't have
enough resources, and that's why they are not effective in what they're
doing and are not reaching the outcomes they want to reach.
Building the program rationale is another aspect of this process.
We're always asking interviewees, "Are these outcomes realistic?
Are they meaningful? Are they specific enough?" We want to make sure
they are measurable: if you give me this outcome, is it measurable?
How would you measure this particular outcome? We also want to make
sure that they are actually realistic. The example
I'm going to give you is a (indistinguishable) program, and if their
outcome is for all the children to go to college, well, that's not a
realistic outcome. So we want to make sure they have an understanding of that.
Developing the program theory is the next step, where basically I
sit with them at the end, after they have given us all the information about
the program: "We are going to do these activities and expect these
outcomes." And then we say, "Okay, we will now connect these
resources with the activities and the activities with the outcomes to
see how this works." What is the logical process here?
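The chains described here, resources feeding activities and activities feeding outcomes, can be sketched as a small data structure. This is only an illustration: the resource, activity, and outcome names below are invented, not the program's real data.

```python
# Minimal sketch of a logic model as linked records:
# resources feed activities, activities feed outcomes.
# All names here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Outcome:
    name: str
    term: str            # "short", "intermediate", or "long"
    importance: str      # "critical", "moderate", or "low"

@dataclass
class Activity:
    name: str
    resources: list = field(default_factory=list)   # inputs the activity draws on
    outcomes: list = field(default_factory=list)    # Outcome objects it targets

read_more = Outcome("Parents read more with children", "short", "critical")
story_time = Activity(
    "Weekly story time",
    resources=["Books", "Volunteer readers"],
    outcomes=[read_more],
)

def supports(outcome, activities):
    """Trace which activities (and their resources) feed a given outcome."""
    return [(a.name, a.resources) for a in activities if outcome in a.outcomes]

print(supports(read_more, [story_time]))
# -> [('Weekly story time', ['Books', 'Volunteer readers'])]
```

Walking such links in either direction is the same check the interviews do by hand: does every critical outcome have activities and resources behind it, and does every activity lead somewhere?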
The important aspect of this logic model, particularly for
program evaluation, is to identify the most important
outcomes for this particular program. One of the questions you
always ask is, "Please identify the critically important outcomes of
the program." What that means is to identify those outcomes such that,
if the program does not do well on them, that
would be reflected in the evaluative conclusion.
Basically, if you fail on these outcomes, you fail on the overall program.
So you have to pause and really think carefully about what you want to really achieve.
This gives them an opportunity to reassess, "Do we have enough
activities going towards this particular outcome we really want to achieve?
Do we have enough resources and should we focus more or less on this outcome?"
And finally, we build the graphical logic model. This is a logic model
we created; this is a draft. You can see the basic skeleton is the same:
we have the activities, short-term outcomes, intermediate outcomes,
and long-term outcomes. But they are structured very differently.
You have here all these next steps and so on, and you have the activities.
Instead of using arrows, we decided - and this is a personal
preference of course, because it tends to get messy when you are
pointing arrows from every single individual activity to the outcomes -
we decided to use these small markers instead
to make it easier to detect which outcome connects with
which activity and which activities connect with which outcomes.
The outcomes that are in black - you see there is a legend placed
beside them - are the critically important outcomes.
The ones in blue are identified as moderately important, and the
ones with the dashed outline are the low-importance outcomes.
So, even if they don't reach those particular goals the program is still
successful as long as they reach their main outcomes.
Just to show one example, I chose the "Read more" outcome.
Here you have basically five different resources that are
related to critical activities and feed into one major outcome.
Pointing to all of that with arrows would be very messy.
So, this is just a personal preference to make it easier to read.
And the other thing, if you notice: when talking with them, they kept
identifying, "We would want parents to be able to do these things;
this would be our outcome for the parent. We want the child to
develop these specific skills."
So, what we did was identify the individual outcomes
separately for the parent and the child.
What I also did was look at how many of the interviewees
actually endorsed each outcome.
And you can see that the critically important outcomes - for the parent,
reading with the child on a regular basis, and for the child,
developing skills such as recognizing letter sounds, rhyming,
pen control, and counting - were endorsed by every single person we interviewed.
Others, though, you can see had a lower endorsement.
This is particularly interesting to see when we talk about
intermediate outcomes and long-term outcomes.
Sometimes you see only one person actually endorsed that particular outcome.
When I looked at who actually endorsed that outcome, this was the
manager or the director of the program.
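The endorsement tally described here is simple to do systematically once the interview notes are coded. A minimal sketch, with invented interviewee and outcome names standing in for the real nine interviews:

```python
# Hypothetical sketch: count how many interviewees endorsed each outcome.
# Interviewee and outcome names are invented for illustration.

from collections import Counter

endorsements = {
    "Staff A": ["Read more", "Letter sounds"],
    "Staff B": ["Read more", "Letter sounds", "Parent empowerment"],
    "Director": ["Read more", "Higher test scores"],
}

counts = Counter(o for named in endorsements.values() for o in named)
total = len(endorsements)

for outcome, n in counts.most_common():
    print(f"{outcome}: {n}/{total} interviewees")
```

A table like this makes the pattern in the talk visible at a glance: outcomes endorsed by everyone versus outcomes endorsed by a single person, and keeping the raw dictionary lets you check who that single endorser was.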
I think what is really good about this particular approach is that it
gives you an opportunity to talk to people on different levels.
So, yesterday afternoon I had a meeting with the CEO
and the CSO of an organization.
And, we went over this draft and said, "Did we capture the vision of
the program as you want it? As you think that the program should be?"
They looked at this logic model and they said, "No. This is not what
we envisioned for the program. This is not our vision."
And I said, "Well, this is the vision that came out of the interviews
with nine of your staff members, including the manager and the director of the program."
And they basically said, "I don't care if the parent is involved with the child.
All I care about is whether the children score higher on the assessment... (Indistinguishable)"
Because it's a very business-like approach, they say the funders do not
care whether the parents have an increased awareness of their role as educators,
or whether the parents are empowered as teachers; the funders
would just ask, "Did the scores improve?"
And that indicates something: somewhere in the
communication between the higher-ups in the organization and
those who are actually working on implementing the program, there was a disconnect.
Somewhere along the way, the message that the CEO and the CSO
wanted to send was distorted.
And they said, "This is very good. We want to stop here.
Then we'll go in with our people, and we'll decide what the
vision of the program is - one that everyone agrees upon -
and then move forward with the evaluation."
And why I think this is very important: think about the alternative.
If we had been hired to do this program evaluation knowing only
what the staff told us - the ones who are actually implementing and
working on redesigning the program right now - then
a year from now, we would have come up with a report and
sent it to the CEO, and the CEO would have said,
"Okay, parents read more with their children.
Parents feel more educated about their role as teachers.
Children know more words, they can recognize rhyming or can do rhyming,
they score higher overall."
If that was not the vision behind the program, we would have failed at
our job, because they hired us to evaluate a certain vision.
Now, so early on in the process, they have the opportunity to go back,
to reassess what the vision of the program is, and to have time to get
everyone on the same page.
Then, we move forward with the evaluation.
So, I think that's one of the really good things about this particular approach.
Besides that - and this is the citation - the article includes a whole
battery of questions that can guide you, and you can use or adapt
them to your own program, to your own logic models, and I think
that is really useful especially if it's your first time doing a logic model.
Thank you.
Okay. Yes.
>>DR. KIM LIGHTLE: Any questions?
And, I will make sure that we send out this citation too.
>>ATTENDEE: What software did you use for that?
>>DR. MIHAIELA RISTEI GUGIU: That is Visio. Yes, that is the software.
It's called Visio. I think it's a very neat piece of software.
It takes a little while to get used to, but once you do, it's a very useful tool.
So, it's Visio, from Microsoft Office.
>>ATTENDEE: If people contact me, I can give you a copy of the
article, and I can give you the Visio template, which will save you a day of work.
>>DR. KIM LIGHTLE: Thank you.
[applause]