I would like to answer Fabrizio’s question
if no one else on the panel wants to answer.
If I understood correctly, you mentioned the commissioning theory
and the public administrations' internal organization with regard to evaluation.
Often I have spoken about this with other colleagues as well.
I think the situation is as follows.
The evaluation policy, if you will allow me the term,
in our public administration is still at an initial stage as far as RDPs are concerned.
That evaluation first came along with European Community programs,
so we did not have a separate administrative organizational structure.
If it were not for evaluation at the European level,
today we would probably have neither evaluators within administrations
nor evaluations.
It seems like in Europe the evaluation process
was born together with the community policies
and the public administrations adjusted accordingly.
However, we have some regions that still do not have their steering group
or still have not identified people that take on the role of evaluators.
This says a lot about the fact
that evaluation does not have its own dignity among clients
and public administration organizations.
Maybe we should think about this aspect.
Nicoletta wanted to add something to what he said.
Your comment was an answer to what I said.
Simona is perfectly right. Our country lacks an evaluation tradition.
The theory behind who commissions an evaluation
corresponds to the organization theory.
Who is our client? Our client is the person in charge of the program,
be it the decision-maker or the person who implements it.
Regardless of the activity he or she carries out,
our client should feel a constant need to evaluate progress.
The client should see evaluation as a natural function.
Mr Pellizzari talked about an evaluation culture.
If there is an evaluation culture, an organization that wants to improve
must continuously make a self-assessment.
Against such background, the client becomes responsible,
he or she is able to ask the right questions and to interact.
This does not exist in Italy; here there is no commissioning theory.
We take for granted that the person who commissions the evaluation
is the same that manages the program and wants to find out if it worked.
This is not useful, therefore I appreciate your request.
We have to look at how the evaluation is carried out,
but we also need to take into account how our interlocutor reacts
because he or she is part of this process.
I speak about an evaluation community which includes the client as well.
I would like to answer the last question that Fucilli asked
since I am personally involved.
He spoke about the literature on the different uses of evaluation.
The traditional literature deals with different uses.
There are positive and negative uses.
The positive use can be the instrumental one.
The evaluator carries out the evaluation,
the client is happy with it and implements the recommendations.
This is the result of a cognitive process that rarely takes place.
It is not necessarily true that if one learns or knows something,
one is also willing to change his or her attitude and act accordingly.
The cognitive process implies learning, but also screening information.
I receive a lot of information, but before my attitude changes
I need to put in place some psychological or policy mechanisms.
The instrumental use is the most traditional,
but also the least followed one.
Another widely debated use is the cognitive one, as you mentioned.
The cognitive use is different.
It means that I receive information
that helps me redefine the framework of my activity.
The example given by Mr Pellizzari on Ghana shows what the cognitive use is.
In that case the evaluation tells me
that I need to rethink the framework in which I insert my policies.
This cognitive use can be put in place
when a recommendation or the findings of an analysis
are not immediately received because it is not possible to implement them.
However, over time, evaluations that come to the same conclusions pile up
and things start to change, because they always evolve.
Carol Weiss often spoke of this.
She worked on the long-term effects of the cognitive use.
Then there are the negative uses,
the political ones, the ones where you only take what you need.
Moreover there is a symbolic use.
The evaluation is done just because it needs to be done.
In Italy people often brag about the fact that they do an evaluation.
Then, in fact, nobody knows what was written in the evaluation.
We need to reject this.
Recently I came across some interesting papers
that say that these standpoints only take into consideration the evaluation.
Evaluation is carried out in a certain way
and some claim that it is misused because it is not done well.
This is what I was saying earlier.
This means considering only the evaluation.
The use of it, though, is something that happens
within a context that can be more or less favourable to this use.
For instance, in many cases conflicts arise
and one can accept or not, according to his or her amount of power,
but some other times one does not know what to do.
An evaluation might be used because it offers a way to find a middle ground.
In other cases the conflict is not acceptable
and the evaluation is used to convince all the different actors.
In those cases the evaluation must be very convincing, done very well,
and include some novelties.
I am talking about an article published in the American Journal of Evaluation
by a certain Lederman.
He compiles a list of cases
and studies the evaluation according to the situation in which it is used.
He identifies two important variables,
namely the need for change and the presence or not of conflicts.
I hope in Italy we are at last feeling the need for change.
For a very long time we have been going down, down, down, neglecting change.
I see a reaction today.
It could be a situation in which we are ready to follow evaluation findings.
However, one needs to check how many conflicts arise in reality.
The decision-maker, who receives the findings, is not the only actor.
He moves on a complex political stage.
We need to extend our reasoning about the use of evaluation.
We need to be aware of new things.
We say that the evaluation takes place in a political environment,
which turns out to be very important.
The political environment does not include only the parliament,
but also the public opinion, and all the actors within a program.
If we think like this,
we will be able to better understand the need to do evaluations
in order to keep improving. I firmly believe in this.
We also need to be able to communicate outward
in order to interact with the existing political situation
where people may or may not be ready to welcome the evaluation findings.
I agree with our colleague who mentioned the client.
The client is definitely an important element to discuss.
Simona also mentioned the dignity of the evaluation within an organization.
I spoke of an evaluation culture. We could also discuss other points
like the evaluation governance, for instance.
It seems to me that the key word that is missing, also at an institutional level,
is the evaluation policy.
30 years ago, maybe less, 25 years ago
the OECD DAC already pointed out that such a policy is crucial
within any organization that wants to promote evaluation within itself.
What is an evaluation policy?
It is not a free-form text listing all the elements that we like or do not like.
It is rather a process
where the role of the evaluation within an organization is defined.
Of course, if the organization is dictatorial in nature,
the evaluation will be neither interactive nor democratic.
If the organization does not care for human rights,
ethical aspects will not be taken into consideration.
So the evaluation policy
must be approved by the bodies that govern the organization.
It needs to include some crucial aspects,
like who commissions the evaluation.
In IFAD’s case this body is the Board of Executives,
but there is also an independent evaluation unit.
So, through the policy, we have got the right
to formulate and to prepare both our work program and our budget,
independently of the organization's management.
Obviously we do not operate in an anarchic context,
which would be interesting.
Not pleasant but interesting.
We submit our work program and our budget to the Board of Executives,
which approves it or not and, if necessary, makes changes.
Every time we have to do an evaluation, be it the evaluation of Ghana
or a corporate evaluation, like the one we are doing right now
on the institutional efficiency of the organization.
The Board does the commissioning.
But in fact it was my team that thought it would be good for the organization.
The management would have never thought of it. They are not stupid,
but they have no interest in calling their efficiency into question.
I think that the evaluation policy concept is important.
Within the United Nations framework
we have the UNEG, United Nations Evaluation Group.
Among the biggest evaluation offices in the Group
there are the FAO one, the IFAD one and the UNDP one.
These offices are helping the evaluation offices
of organizations that do not even dream of having an evaluation policy
to actually adopt one, because such a policy protects us.
If you do critical evaluations,
and you do not have an institutional anchor behind you,
and you have not written it into your statute, your life can become very hard.
To answer your question about our projects,
I will tell you that we have different kinds of projects
and they are usually multi-dimensional; they deal with different sectors.
We used to call them Integrated Rural Development.
We do not call them that anymore, for obvious reasons.
We might have projects that only deal with one or two components.
Then we have sectoral projects at the national level,
like microfinance and rural financing in given countries.
What we try to insist on is that project evaluation is important,
but it must also be done by governments and by IFAD's operational personnel.
We cannot be the only ones to carry out evaluations.
We push to do what we call higher-level evaluations.
They assess not only projects, which we deal with anyway,
but also programs in a country.
We evaluate all the projects in our portfolio
but also the processes we put in place to foster the political dialogue,
partnerships or knowledge management.
This is crucial for an organization like ours,
which wants to promote innovation.
We are not as big as other organizations like the World Bank.
We do not have the numbers
or the capacity to invest billions as they do.
We have a financial critical mass that we can invest,
but our added value comes from our being small
and from our willingness to foster innovations.
I think I forgot to speak about the most interesting aspect that you highlighted,
that is to say, what we do to encourage the use of all these things.
You can talk and you can write books but I think it is useless.
Better said, it can help create an initial platform.
Then you need to build those tools
that lead towards the use of evaluation.
At IFAD it is written in the evaluation policy, since we mentioned it.
At the end of every evaluation process there must be debates,
criticism, the cycle I talked about earlier.
Then the partners who will have to implement the recommendations,
must subscribe to a final agreement.
This final agreement says that there was a Core Learning Partnership,
that the parties met, let's say, 300 times,
and we also ask something of the two main partners.
IFAD gives credits to the sovereign governments.
So there are two main partners:
governments, in the person of the Finance or the Agriculture Minister,
and our management.
We ask them to take a look at the recommendations we gave.
They might say that they accept 90% of them
because they consider them feasible
and applicable to the economic, financial or budgetary context.
They accept them formally and there is written commitment.
The Minister signs, the vice-president signs, but not only this.
Here we come to the value of recommendations.
We give few recommendations and they are generic in nature,
like the ones I mentioned for Ghana: go northward around the route corridors.
Then the governments and managements will have to tell us
how they mean to follow these recommendations.
They have to say who will implement them, when and how.
The Agreement at Completion Point
also includes the recommendations that our partners do not consider feasible.
They might have a thousand different reasons,
among which the possibility that we were wrong and proposed unreasonable things.
In that case there is a second list with the recommendations that were rejected.
Our policy states
that as an independent evaluation office we are entitled to write our comment.
We might say that a government rejects a recommendation
because it does not want to make efforts,
or they do not want to reshuffle their resources,
or they do not want to fire corrupt people that are good for nothing.
The signed Agreement at Completion Point is submitted to the Board.
It is made up of 36 executive directors who represent 140 countries,
so both the North of the world and the South are represented.
The story does not end there.
We know perfectly well that the agreement can be evaded.
So we added another tool to our policy, it is called Prisma.
It is an annual report to be submitted by IFAD’s president
to the Board of Executives.
In this report the president lists all the recommendations from past years
and from different evaluations of programs and projects.
He also must say what he and the governments did with each recommendation.
If they did not do anything, it is not a disgrace;
one needs to accept that there can be emergencies in some cases.
The beauty of it is that responsibilities cannot be avoided.
There can be an emergency today, maybe tomorrow as well.
If after five years one keeps saying that he or she did not have time,
then he or she is reproached by the Board of Executives.
I wanted to further discuss the topic raised by Fabrizio Tenna.
In particular, I am afraid there is a problem in terms of the system to be built.
When an organization decides to carry out an evaluation
it has to have human resources, economic resources
and time to devote to this activity.
It also has to be ready to react.
This means that when a policy needs to be implemented
or a case needs to be studied quickly,
the organization must be active and react rapidly.
However, as Simona said, it is clear
that the obligation imposed by the European Commission
was crucial for Italian RDPs.
Maybe the Managing Authority wanted to carry out evaluations,
maybe it already did evaluations.
Anyway, this obligation gave rise to the problem I mentioned earlier,
which is finding resources and building an organization that revolves around evaluation.
I think this is very important.
I have always wondered what happens to a policy-maker
who wants to carry out an evaluation but cannot provide resources
and cannot build the necessary organization.
I would like to hear comments from the audience about this.
What if the policy is in place, but the system that can assess it doesn't exist?
How can you solve this problem other than building the system?
One last more general remark
regarding our experience of evaluation findings dissemination.
I personally believe that in our region it was always difficult
to involve people in charge of measure implementation
and people who are not interested in all the aspects the program is made of.
The direct involvement was very useful.
The stakeholder, who already wants to understand how things are going,
favourably welcomes and effectively implements recommendations
if he or she is involved in building the evaluation itself.