Susan: And welcome. Today's workshop is number 140 in the AEA Coffee Break webinar
series. Our partners today are from BetterEvaluation and the topic is Describe Activities,
Results, and Context. Our speaker is Irene Guijt. She's the director of Learning by Design.
And I'm going to go ahead and hand off control of our webinar today to Irene.
Irene: Okay. Here we go. Good morning. I'm calling from Canberra in Australia, and it's 6:00
am, so a bright and early start to this day. This is part four of our webinar series. For those of
you who have missed the other ones, they're all online and there are another four coming after
me, next week and the week after. What I'll be working through with you is the
Describe cluster, as Susan said. What happened is what we want to know in most evaluations, in
fact, all evaluations. So this cluster really guides you through the process of thinking about
planning to describe what happened, describing changes, describing the context, and how
you're going to manage the data once you have it. Just as an artist has many, many tools, so do
we in this particular cluster of tasks. In fact, it's the busiest or the most populated of the
clusters in the Rainbow Framework. And as those of you who were at the first webinar might
remember, it was launched fairly recently, at the end of last year, so it's very much still a work
in progress. There are some gaps, but even with the gaps, there are dozens and dozens
of different evaluation options related to this particular cluster.
So this is what the page looks like when you get to the Describe page. And I just wanted to
point out what I'm not going to be talking about, which some of you might expect. And that is
approaches. Approaches are integrated sets of options, which include familiar ones such as
contribution analysis and outcome mapping. And they're not here because we felt they needed
their own page, so please go to that part of the bar on our website, if you want more
information on that.
So what will I be looking at? I'm going to be looking at the seven evaluation tasks that are
under this particular umbrella. And there's a lot here, so I'm really hoping I can stick to time.
As you can see, they start at sample and they go right down to visualization. And there's no
arrows between them because it's not a linear process. We have to think about all of these
different seven tasks at some point or other in our evaluation process. And so,
onwards, without further ado.
Sampling. Sampling is the process by which you select the units: the people or the subgroups
of people, organizations, maybe a time period or geographic zone. For example,
I'm going to do some impact evaluation work in Vietnam now. And we're looking at both
women's groups and cooperatives, but also communities and households to see the impact
across these different types of units. And by looking at your sample, you will be able to say
something about those units.
So what are the options that we have? We've got three types of clusters of options. The first
one is the one that is very much based around being able to draw some statistical inference, this
is a probability cluster, and on the website, and please do go there, you will find four of them.
The second one is much more targeted. It's a set of purposive sampling options, and we have
about ten different ways in which you can look at that, and I'll give one short example to
demonstrate how the website works, that falls under this particular category. And then the third
one is the convenience cluster, which is basically sitting along the side of the road and waiting
for somebody to come by, but it's the one where, really, you are challenged by access and that's
what you will be doing. In the Vietnam evaluation I was mentioning, we're actually going to be
combining probability and purposive, so we're taking a case study approach, which comes from
the purposive side, but we're randomizing it, and that comes from the
probability side of it. So you can mix and match.
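The mixing just described, purposively choosing which types of units to study and then drawing a random (probability) sample within each type, can be sketched in a few lines of Python. The group names and sample sizes here are invented for illustration, not taken from the Vietnam evaluation:

```python
import random

random.seed(42)  # fix the seed so the draw is reproducible

# Purposive step: deliberately choose the unit types of interest
# (a hypothetical sampling frame, purely for illustration).
frame = {
    "women's groups": [f"WG-{i}" for i in range(1, 41)],
    "cooperatives":   [f"CO-{i}" for i in range(1, 26)],
}

# Probability step: a simple random sample within each chosen type.
sample = {unit_type: random.sample(units, k=5)
          for unit_type, units in frame.items()}

for unit_type, drawn in sample.items():
    print(unit_type, drawn)
```

The purposive judgment lives in which keys appear in `frame`; the statistical inference comes from `random.sample` within each group.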
So how does it work, once you're in a task like this, like the sample task? This is what the
framework is like, and in Describe, you'll have the seven tasks I mentioned. Here you will be
led to the sample page, where you'll get a description of what I've just been saying, but a lot
more, obviously. And then you can zoom in on any of the highlighted options. For example,
outliers. I've just picked outliers because I find it very interesting. It's an option you would
choose when you want to understand the extremes in your sample. What makes for the really
excellent ones? What are the factors that really make for the excellent women's cooperatives? What
are the factors that influence those that don't do quite as well?
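Outlier sampling of this kind can be illustrated with a short sketch: rank the units by some performance score and keep the best and worst performers for in-depth study. The cooperative names and scores below are invented:

```python
# A sketch of outlier (extreme-case) sampling: rank units by a
# performance score and pick the extremes at both ends to study
# what drives them. Scores are invented for illustration.
coop_scores = {"A": 82, "B": 45, "C": 91, "D": 12, "E": 67, "F": 30}

ranked = sorted(coop_scores, key=coop_scores.get)  # worst to best
k = 2  # how many extreme cases to study at each end
extremes = {"lowest": ranked[:k], "highest": ranked[-k:]}
print(extremes)  # {'lowest': ['D', 'F'], 'highest': ['A', 'C']}
```

The middle of the distribution is deliberately ignored; that is what makes this purposive rather than representative.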
So the second task is the task about metrics. We all need to decide what kind of metrics, what kind of
indicators we're going to use. So what are the three options? You can resort to existing sets of
indicators. And on the website, the BetterEvaluation website, you'll find ideas related to
different categories, and here's just the list of indicator sets you could go to under the heading
of Governance. But you can also tailor-make your own for the task at hand. And then there is
what I've just labeled emergent indicators, that's when you know that you need to understand
success, but you don't know yet what it looks like. And through more open-ended data
gathering means you will come to the indicators and come to the metrics. The third task is the
big one, the one that we're all extremely familiar with. It's actually gathering the data and this
is the one where you're trying to find out what happened, what are the changes, what is the
context, and what are the contextual factors?
So what are the options one has here? You have basically five options. Here are two of them. You
can gather data from groups and from individuals. And many of the methods, in fact, can be
used for both types of information sources. But there's also
existing records, and I know that that's something that I don't always have enough time to dive
into because of the constraints of the evaluation. Or because there's such an overwhelming
amount, or because there's nothing, but existing records is a good place to start. In fact, in the
Vietnam work, we'll be starting with some of the existing impacts and building on that.
Physical measurements sometimes are very important to actually understand, are the outputs
there that were meant to be there? Or what's the condition of health and what's the condition of
the education facilities? And then, we of course have our faithful old friend observation,
looking at relationships and the combination of looking and measuring, physical measuring to
understand whether the outputs are there, and what's going on in the context.
So here's one particular example, again, of what it would look like. This is the example of
stories, which I believe falls under individual, but there are also group story
techniques; the Most Significant Change one falls under that. Stories are of great interest these
days. Everyone's trying to understand how narratives can be used rigorously. So
please go there to find more ideas.
Moving on to task number four, managing data. I like to see data as on a journey, when you
have your first conversations or you do your first measurement, that's the first moment, but it
doesn't stay there, it travels somewhere. It goes to different people, it goes to different places
in order to be analyzed. So what do you have to do on this journey of data to make sure that it's
secure and safe, and is as good quality at the end as at the beginning? Well, the first thing
to think about is recording. In the Vietnam case I was
mentioning, it's going to be very focused on participatory work with communities and
analyzing with them, and a lot of that will be about visualized discussions. And so we have to
have a very solid protocol for the research team, for the evaluation members, to be very clear
about systematically and consistently recording these diagrams and these group discussions in
ways that can be used by others later on. We then also have to think about how we're going to
store that data, in terms of who has access, where is it safe, in terms of anonymity, all very
important. And we'll need to clean the data. We all have data sets where some of the data might
be invalid because of incompleteness or because, you know, manipulation was going on, or,
well, for a range of different reasons. So cleaning the data might be necessary prior to getting
to the final task, which is modifying. Now I don't mean manipulating the data, the data is the
data, but it's about modifying it so that it can be analyzed, so that it can be
combined. That could mean, for example, coding a long narrative into categories.
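The cleaning and modifying steps just described can be sketched in a few lines of Python. The records, the validity rule, and the keyword coding rule are all invented for illustration:

```python
# A minimal sketch of cleaning and then modifying (coding) data,
# using invented survey records.
records = [
    {"id": 1, "age": 34,   "story": "The cooperative raised our income."},
    {"id": 2, "age": None, "story": "No change that I noticed."},      # incomplete
    {"id": 3, "age": 51,   "story": "Things stayed much the same."},
    {"id": 4, "age": 999,  "story": "Income went up after training."}, # invalid
]

# Cleaning: drop records that are incomplete or clearly invalid.
clean = [r for r in records if r["age"] is not None and 0 < r["age"] < 120]

# Modifying: code each narrative into an analyzable category
# (a crude keyword rule, purely for illustration).
for r in clean:
    r["code"] = "income_change" if "income" in r["story"].lower() else "other"

print([(r["id"], r["code"]) for r in clean])  # [(1, 'income_change'), (3, 'other')]
```

Note that the stories themselves are untouched; the coding adds a new field alongside them, so nothing is lost in the modification.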
So the fifth task is the one of combining qualitative and quantitative, and there's a lot of
intense debate still about which is better. Now, in my opinion, I think we absolutely need both
in pretty much all evaluations. But degrees of emphasis of both will vary depending on the
evaluation questions that are being asked. You can do a good evaluation just with qualitative
and just with quantitative, but the really good ones are where you have the benefit of both types
of data. Combining qualitative and quantitative data is a bit of a juggling act, and a lot of it is
helped by being really clear about purpose. Why are you doing it? Are you doing it to enrich?
For example, qualitative work can be used to identify issues and variables that you can't
really get from quantitative surveys. But you can also examine hypotheses that emerge
from more qualitative work by really drilling down with quantitative work. So what's the
purpose, and therefore what's the sequence of this? How are you going to do it? At what point
are you going to gather data that's qualitative and quantitative and how are you going to
combine it? So I won't dwell on this, but on the website you'll find some options and
references. This is not as populated as the cluster on collection.
For example, this is what you'd see on the left-hand side on combining. And on the right-hand
side it's triangulation, why you might want to triangulate and how you could go about that.
Task number six is where you bring it together for the first time. It's about analyzing it. Now
there's another level of analysis, and that's in the fifth webinar, which is about causal analysis,
that's really about the sense-making. Here I'm talking about looking at different ways in which
you can bring the data together in order to come to some initial patterns. So what are the
options that we have for this? We have the graphical options, scatter plots, all kinds of
diagrams to bring the different data bits together and see if there's any trends around. We have
numeric options in the framework, those are the ones related to tabulations of different kinds.
We have the text analysis options. How do words come
together? How often are they mentioned? Where are they, and in what context are they
mentioned? And then we have mapping. Geo-mapping, for example, where the incidences are
of certain reported changes can be an extremely useful way to get a first idea of what you're
looking at. And again going back to the website, GIS Mapping falls under this particular task.
And on the right-hand side, you can see how you can drill down to the resources. You can see
here a resource of Dr. Robert Chambers talking about
participatory GIS, and here you have the page with that particular resource.
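The text-analysis option mentioned above, looking at how often words are mentioned and in what company, can be illustrated with a small word-frequency sketch. The open-ended responses here are invented:

```python
from collections import Counter
import re

# Hypothetical open-ended responses gathered during an evaluation.
responses = [
    "Training improved our income and our confidence.",
    "The training was useful but the market access did not improve.",
    "Income is higher since the cooperative training started.",
]

# Tokenize, drop very common filler words, and tabulate frequencies.
stopwords = {"the", "and", "our", "was", "but", "did", "not", "is", "a"}
words = [w for text in responses
         for w in re.findall(r"[a-z]+", text.lower())
         if w not in stopwords]

print(Counter(words).most_common(2))  # [('training', 3), ('income', 2)]
```

Even this crude count surfaces a first pattern, that "training" and "income" dominate the narratives, which you would then probe with more careful analysis.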
The final task in this particular cluster is visualization. It
overlaps a little bit with the previous one, but there's so much here at the moment, a
great interest in how to visualize data, because I think people are getting a little bit more aware of
its importance for communication. And it could be to communicate to a larger
group of people for the analysis stage, but it could also be to communicate the actual findings.
And Stephanie Evergreen, who's moderating the other webinars, has kindly allowed me to use
one of her examples, this is her specialization. This is what you would be getting if you look at
that particular option, those options in the framework. You can see that this is what one table
looks like before, quite hard to understand what you're looking at, and then drilling down and
cleaning it up. Really the main finding was that this particular company, business, received
more clients in three out of the four types that it was serving, every one except the first one.
And you can see that that conveys the message much more clearly and it allows people to act
on it. So that is the very, very bulky Describe cluster with many different options for those of
you out there. For many of you, there will be a lot that's familiar. Whatever's familiar, skip
that part. If you know sampling, but are feeling much more in need of guidance on managing or
combining, dip in to meet your needs. I hope there's time for questions.
Susan: There's a couple of minutes. So if you have questions, please type them in, and I see we
have a couple in here. Does the framework or the resource base include both explanations and
tools? And the example is when to use mapping as well as possible tools to use to map. And
extrapolating that to, when to use visualization and then tools for that.
Irene: Yes, absolutely. I think I believe that that's one of the strengths of this website, of the
platform, is that it gives you, it starts from, what's the need? When would you want something,
and then what are your options? So you're already looking at a tool from the place of
understanding its utility. And then a step-by-step description of
what you need to do, yes.
Susan: And does it recommend particular tools, or for instance, what software package to use
for qualitative data analysis? Or does it compare tools?
Irene: Well, no, and I think that that's part of the magic, as well, of the BetterEvaluation. The
Rainbow is a symbol of inclusiveness and we have chosen quite explicitly to not elevate one
option above the other one. What we are hoping and building as part of the structure of
BetterEvaluation, is the ability for people to assess tools and so we have places for comments.
And that's going to become more populated by people as they use the tool to say, oh, this
didn't work for me because of this, or it did work for me because of that. So we'll start to be supported
by use. But no, well, we don't have everything out there,
but we're not saying this option is better than that one.
Susan: And does this section explicitly have sort of in-depth information on sampling in both
quantitative and qualitative sampling? Or is that somewhere else in the framework?
Irene: Well, I'm not even sure what you mean by quantitative and qualitative sampling. We
work with the terms probability and purposive, and they can be used to then gather quantitative
and qualitative data. So everything, the range of different sampling options that can then enable
you to get to that qualitative and quantitative data step are definitely all in the framework, yes.
Susan: Fantastic, I think that's the extent of the questions we can get to today. I want to thank
Irene and thank the hundreds of you who joined us on the call. As noted earlier, it'll be
recorded. It'll be available on the AEA e-library, and then over on AEA's YouTube channel. I
believe BetterEvaluation is embedding them as well. We're working to have these translated,
so in the near future, you'll also see translations into other languages, and I want to remind you,
please fill out the short evaluation that's going to pop up in your browser. Again, thanks all.
Captioned by GetTranscribed.com