
Independent Magazine - Issue 7, 2023
INTEVAL: A LEGACY, 38 YEARS IN THE MAKING
Thirty-eight years, and counting. It was 1986 when the International Research Group for Policy and Program Evaluation, known as INTEVAL, met for the first time in Brussels. Ever since then, the group has not skipped a single beat. More than a professional network, beyond a gathering of colleagues, INTEVAL has become a group of friends who respect and support each other, bound together by one common goal: elevating the scholarly thinking of the evaluation discipline.
Under the visionary and dedicated leadership of Ray Rist, former International Program for Development Evaluation Training (IPDET) Director, INTEVAL has flourished over the past decades. Prolific to say the least, the group has published 31 books to date, which are cited and referenced in just about every piece of literature on evaluation. INTEVAL continues to go from strength to strength, as new members join and ideas for salient topics are born. The 38th annual gathering followed in this tradition.
The IOE-hosted event bore witness to highly stimulating conversations on some of the core issues that will shape the evaluation world in the years to come. In the margins of this intellectually thriving gathering, Independent Magazine seized the opportunity to sit down with three esteemed INTEVAL members. Ray Rist, INTEVAL Chair and former IPDET Director; Ida Lindkvist, Senior Advisor in the Department for Evaluation of the Norwegian Agency for Development Cooperation; and Pearl Eliadis, Associate Professor at McGill University, took time out of their busy schedules to have a cordial chat with us. The insights they provided were priceless, as expected.
Good afternoon, esteemed colleagues.
Good afternoon, Alexander.
What do you believe constitutes ‘success’ within the context of evaluation?
Pearl
I would identify three factors: first, when the commissioner is fully prepared to listen, second, when there is the will to act on the results, and third, when evaluations take a people-first approach.
Ray
A constrained understanding of ‘success’ might be that you are able to actually complete what you propose to do, if it is coherent and relevant.
Ida
If I see high quality, credible and useful information, where evaluation findings and conclusions are taken up and used, and where we actually see changes on the ground, then I think “wow, this is a success”.
What are the enabling and inhibiting factors that affect the potential for evaluation to trigger transformative change, especially within a developmental institution?
Pearl
Embeddedness is an enabling factor. Evaluations must be fully integrated into the policy culture and political will of the place where they are happening. What we want to avoid is someone coming in, doing an evaluation, handing in their report and leaving with little or no follow-up.
The questions need to have a high degree of relevance. If the questions are irrelevant, then the evaluation is just ignored. Embeddedness is one way to frame that. If you want utilization, you have to begin with coherent questions that are appropriate to that environment.
Ida
You need to have questions that can be answered, and you need expectation management so that everyone understands what can come out of the evaluation. Evaluations need to be appropriately resourced and conducted by qualified evaluators, and the process needs to engage stakeholders. In addition, I think that you really do need some luck. For example, sometimes something happens that makes the evaluation no longer relevant. You can plan and manage the process as well as you can, but you also need that component of good luck.
What have we learned from the way in which the evaluation community responded and adjusted to the COVID-19 pandemic, and how will this influence the future of the evaluation discipline?
Pearl
As our book, Evaluation in the Era of Covid-19, shows, the evaluation community was hyper-focused on operational aspects, especially during the first eight to fifteen months of the pandemic. ‘How many missions have to be cancelled?’, ‘How long before field work can be carried out again?’, ‘How can we use remote investigation techniques to gain information?’ These were the kinds of questions being focused upon. There was very little in the way of a critical juncture analysis of fundamental change or of what it meant for evaluation.
Secondly, despite the impacts on marginalized and vulnerable groups, the research shows that evaluations failed to pivot around human rights principles and ethics. Evaluation conversations still focus almost solely on effectiveness, efficiency, relevance, fit for purpose, and the like. The pandemic forced us to ask tough questions: what do those terms mean? Fit for whose purpose? Relevance to whom? Efficiency for what and at what cost?
The third thing to point out is that in several countries, evaluators were not present during fundamental national conversations about COVID-19. In Australia, for example, the national committees saw the involvement of academics, think tanks and other experts, but not evaluators. This lack of involvement resulted in a data vacuum, which other knowledge providers moved to fill. Evaluators are now struggling to catch up.
Ray
We haven’t learned a lot in the evaluation community. In part, this is due to the fact that the community did not take a proactive stance to the COVID-19 issues. There are still many fundamental questions about the pandemic that the evaluation community has not addressed. It’s unfortunate, because we are left guessing. The community has not stepped up.
Evaluators have not been seated at the right tables. In the US and in the UK, for instance, nobody talked to evaluators. A lot of conversations took place with epidemiologists and medical people, but not with evaluators. As a result, certain questions never got asked.
What space does the notion of ethics have in evaluation and, more so, what role does evaluation have in helping to foster the ethical conduct of large bureaucracies?
Pearl
As a lawyer, I find it shocking that in many evaluation ethics standards, legal norms are treated in the same way as soft standards like ‘efficiency’. There is rarely any differentiation made between what we have to do, and what we should or could do. Treating compliance with legal standards as if they were just one of many non-prioritized ethical issues is very problematic. Compliance with international human rights standards should be non-negotiable, especially for UN agencies. I actually think that evaluation units within the UN should be audited by the Office of the High Commissioner for Human Rights on a systematic basis, and that there should be independent reports issued. This does not mean that other ethical standards are irrelevant, but there is a massive difference between a legal standard that is binding in law and other things like relevance and efficiency: all of these are important, but they are not legally binding.
Ray
This has a lot to do with standards, which are malleable, permeable and largely ill-defined in the evaluation community. Different associations try to set standards, but these tend to be very vague and generalized in order to appease people who don’t like to be constrained. Consequently, the ethics issues are generally not well conceived or specified. In large organizations, there are gaps in the understanding of how the organization itself needs to behave. There is some specificity about how the evaluator should behave, but not a lot about how the organization should behave. As a result, the organization can just say “we are trying to be ethical”, but there are few or no criteria or standards to which it is held accountable.
Ida
I think there should be a strong notion of ethics in evaluation. For a start, the evaluation should be conducted in an ethical manner. This means more than just anonymizing participants. You need to identify ethical risks and prepare ethical safeguards, because the evaluation should do no harm. It also implies understanding whether someone is trying to capture the evaluation, and what effects and consequences the evaluation can have for different stakeholders.
Looking ahead, what do you see as the biggest opportunities for evaluation to be a force for positive change vis-à-vis the developmental challenges that affect the world?
Pearl
The first is that the pandemic has shown us that everything we do going forward must be based on fundamental values grounded in human rights and be capable of responding to the demands of climate change.
The second is about real independence: I was recently asked to review a paper for an evaluation journal. The paper presented data showing that evaluators tend to experience pressure from commissioners, especially when the evaluators are internal to an organization. The pressure was aimed at changing evaluation outcomes, suppressing certain parts of the evaluation, and highlighting certain parts over others. Independence is key to making a difference, and what you are doing here at IFAD offers an excellent model for ensuring that independence is front and centre across the board.
Ray
One thing we could probably do is to focus more on really important issues. COVID-19 was one that we did not focus well on. Climate change is another that we have not adequately addressed. There are people trying to do things in these areas, but the questions of “so what?” and “what next?” are not necessarily being handled well. I think a lot of this has to do with funders, who are asking really narrow and repetitive questions. For a lot of evaluations, you need resources that are not there. Even if people were willing to address the important issues, they would need to travel and get access to data. There is not a lot of willingness to ask the right questions, and even less willingness to pay for those questions to be answered. This is because bureaucracies are inherently conservative and not inclined to air their own dirty laundry. In addition, many evaluators tend to censor themselves rather than risk being provocative, in order to avoid losing funding.
Ida
We should never stop trying to deliver high-quality, credible and useful evaluations. Sometimes we will deliver evaluations that really have an impact, other times not, but we must never stop trying.
Thank you, colleagues.
Thank you, Alexander.