
Independent Magazine - Issue n.5, 2023
DEVELOPING EVALUATIVE CAPACITY
Global professionalization and contextual adaptation
“If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime.” So goes the old adage. The UN took this lesson to heart long ago, and for decades has endeavoured to develop local capacities. The international evaluation community has followed suit, with efforts focusing on building local evaluative capacities at national and sectoral levels. Emblematic is the work of the National Evaluation Capacities (NEC) series.
The growth of the NEC series responded to the unmet need of national governments for an evaluation-related training platform to exchange ideas and plans with development counterparts. The NEC set out to provide tailored training opportunities for governments, customized to the key development issues being faced by countries on the ground. Over the years, the series has afforded a safe space for candid discussions on the challenges of conducting oversight in political contexts – where transparency and democracy were not always supported – gaining legitimacy through strategic partnerships with host governments, networks and associations.
The involvement of the funding partners of the Global Evaluation Initiative (GEI) in NEC 7 has further bolstered the series by helping to explore important topics around building resilient national evaluation systems. IOE is one of these partners.
The 2022 IFAD Evaluation Manual – the first jointly prepared by IOE and IFAD management – is a clear example of the importance that IOE affords to capacity development. Conceived as a living document, the 2022 Manual draws on contemporary evaluation literature and advances made since the launch of the 2030 Agenda for Sustainable Development, such as the notion of transformative change and addressing sustainability and climate resilience.
The recent launch of the Manual’s online training course (presented a few pages ago), and the growing attention towards using information and communication technology for evaluation, offer further examples of IOE’s efforts to produce capacity development tools.
Against this backdrop, Independent Magazine sat down with Fabrizio Felloni, IOE Deputy Director, to discuss perspectives, thoughts and insights related to the world of evaluation capacity development.
Good afternoon, Fabrizio.
Good afternoon, Alexander.
The traditional approach to Evaluation Capacity Development focuses on training, attending courses and getting certificates. Based on your experience, what do you think are the strengths and weaknesses of this approach?
There is no doubt that participating in training programmes is useful, and this is true for more or less any subject. Knowledge and ‘good practices’ evolve, and at some stage we all need to brush up on our skills or upgrade them. What we learnt, and what was state of the art twenty years ago, may not be entirely so now.
Having said this, we also know that practising and applying what we have learnt from regular studies or training courses is fundamental. We can come back to this later.
I would suggest that capacity development in evaluation is not limited to building capacity of individuals, as important as it may be. We also need to strengthen the capacity of institutions. From an IFAD perspective, for example, we would like to see capacity for evaluation strengthened in the Ministry of Agriculture or any agency that has a prominent mandate for rural development. What are the areas of capacity? I can think of things such as issuing bylaws or perhaps a policy for evaluation, which defines those who will be responsible for conducting an evaluation, and how independent they will be from those who manage the programmes that are evaluated. Equally important is understanding who will be responsible for following up on the evaluation recommendations, and who will have the oversight and scrutiny of the whole process. This is fundamental not only to bolster the legitimacy and the credibility of the evaluation process, but also to make sure that evaluation supports changes and improvements in development interventions and policies, and in the organizations that are responsible for the same.
Institutions also need reference to codified good practices and standards. They need principles and methodologies for evaluations to be rigorous, transparent and consistent in their conduct.
We at IFAD have a mature and quite sophisticated evaluation function. We have an Evaluation Policy, a multi-year strategy for evaluation and a manual. As of 2022, the Evaluation Manual covers independent evaluations carried out by IOE as well as self-evaluations undertaken by management. This does not mean that national agencies in developing countries should try to mirror exactly our approach. After all, they are not a multilateral organization. Yet, they will need to define who is responsible for evaluation and how the evaluation process works. This means putting in place something like a policy or statute for the evaluation function. They may need some prioritization of what they want to evaluate and how, as well as some codified collection of good practices (i.e., a methodology). This is where IFAD’s experience can provide insights to national organizations, although what works in an international agency does not necessarily work in a national agency, and thus needs to be adapted.
Lastly, some development practitioners, when discussing evaluation capacity, use the expression of enhancing the capacity of an ‘evaluation ecosystem’. This came out clearly from the National Evaluation Capacity Conference of Turin, in October 2022. The ecological metaphor relates to the diversity of actors involved in the evaluation system, such as: (i) public sector agencies in charge of public programmes; (ii) policy research outfits; (iii) private entities; (iv) civil society organizations; (v) citizens; and (vi) mass media. These entities have different roles and interests in evaluation and may need to be given capacity and space. As an example, public sector agencies may need to enhance capacity for commissioning and oversight of evaluation. Civil society organizations need capacity (and space) to articulate demand for evaluation of public programmes and capacity to draw from findings to prepare their campaigns. Policy research outfits and private entities (including private foundations) may need support on how to conduct an evaluation. The media need to be aware of reliable evaluation sources to engage the audience in a debate that is meaningful and correctly informed.
How do you see on-the-job training as a means to build evaluative capacity?
As I alluded to earlier, it is essential. Unless we apply what we have learnt, we are bound to forget most of it very quickly. It is for this reason that many professional training programmes, even when delivered as part of an undergraduate or graduate programme, include some period of practice or internship. While this applies generally, it is even more important in the realm of evaluation, for two reasons.
First, evaluations happen in an institutional context. They are typically conducted or commissioned by a government, an international organization, or a non-governmental entity. This requires an understanding of the organizational setup, goals, roles, incentives and hierarchy. Second, an evaluation happens in a ‘political’ environment. By this, I mean that we evaluate programmes that involve many and diverse stakeholders, wielding different levels of power. Moreover, programme implementation is inevitably shaped by the interactions between partners that are unequal.
Please do not get me wrong. I am not arguing that evaluators should be involved in the politics of organizations. Evaluators are and should remain technical people. However, they should be perceptive enough to understand organizational power structures and their dynamics.
What are your views on the notion of the professionalization of evaluation? Does it help to enhance quality by creating internationally recognized standards, or does it run the risk of creating an ‘evaluation guild’, which could be exclusionary and elitist, and thus raise equity issues in contexts characterized by weaker evaluation capacity?
Professionalization and adaptation to specific contexts are not at odds. In principle, the more we professionalize evaluations, the more we should learn how to adapt. Besides, I believe in the importance of standards. While agreeing that we need to adapt to the local context, I think it is misleading to dismiss evaluation standards and good practices such as the evaluation criteria.
Some argue that standards and criteria are ethnocentric and tilted ‘to the North’. I am not entirely convinced. There are questions, such as those on the relevance, effectiveness and efficiency of a development intervention, which are universal. If we cannot answer these questions, it is difficult to provide any meaningful information through an evaluation. The way in which we answer them can change, of course, and evaluators need to understand, appreciate and respect the perspectives, the values and the paradigms of the people they interact with, and the context in which they operate. They need to be open to adapting the panoply of tools that they use.
In recent decades, we have seen the rise of approaches such as empowerment evaluation, indigenous evaluation and feminist evaluation, all of which seek to challenge established approaches. Some may disagree with me, but I believe that some standards, such as the evaluation criteria, or the principles of independence and impartiality can be adapted to fit these approaches as well.
Stemming from the previous question, is there a need to balance international evaluation standards for capacity development, on the one hand, with tailored capacity development efforts focused on local contexts and local needs? If so, how might we achieve this?
I say that we need to adopt and adapt international standards to the local context, to the local stakeholders and to the ultimate end-clients of development programmes. At the end of the day, standards should not be seen as conceptual straitjackets, but as practices that have proven successful a number of times in a given setting. We should not dismiss the standards; we need to understand what they are meant for. They may help us strengthen the quality of evidence; they may help us strengthen the quality of reports. Importantly, they may help us protect the integrity of an evaluation and build an evaluation process that is impartial, on the one hand, whilst also ensuring inclusiveness and the ability to integrate the perspectives and experiences of under-represented groups, on the other. At the same time, we do not want to be rigid and build in over-formality when it is unnecessary and hampers dialogue.
I would also add that standards need to be adapted to the institutional and decision-making framework in which we operate. It is one thing to evaluate a programme that is managed by voluntary groups, grassroots organizations and civil society representatives that interact with local government. It is a very different matter to evaluate a programme run by a ministry and by public agencies under the tight supervision of the central government. Evaluation in multilateral organizations is different still.
At the international level, evaluation networks and initiatives have sought and continue to seek to advance evaluation capacity development through different approaches, leveraging different entry points. These include the National Evaluation Capacities (NEC) conference series, the Global Evaluation Initiative (GEI), the Evaluation Cooperation Group (ECG) and the UN Evaluation Group (UNEG). Given the ongoing global challenges and transformations, what areas of intervention do you think could be prioritized in order to fast-track evaluation capacity development?
These initiatives sometimes have different constituencies but need to work in synergy. The UNEG and ECG are communities of practice for representatives of evaluation offices in multilateral organizations. They exchange knowledge on practices, methodology, major evaluation findings, trends in evaluation products, topics for evaluation, and interactions with evaluation stakeholders.
NEC is a forum for discussion and exchange of experiences for evaluation practitioners, particularly from developing countries. In the recent editions of NEC, we have noticed more and more presentations on M&E systems in the public sector, focused on the type of data they collect and the use of big data. Interestingly, we have also started to see presentations by civil society organizations, either about their experience in evaluating their own programme, or about their role as participants in steering groups for the evaluation of public programmes.
GEI is a network of international organizations, aiming at supporting capacity at the individual and institutional levels. The network also includes institutions that specialize in post-graduate and professional education and run training programmes. Some of the organizations that are members of GEI are also members of UNEG and ECG, and contribute to the NEC (in fact, IOE is engaged in all four of these networks!). Synergy is necessary. As to what to prioritize, here I put on my IFAD cap and would prioritize institutional technical backstopping to agencies in developing countries that request capacity support – for example, a ministry of agriculture that wants to set up an M&E system for rural development interventions.
By the way, if we want to engage in individual capacity development, bilateral and multilateral agencies can help hone the skills of young evaluators, for example via internships and temporary assignments or consultancies. I understand GEI plans to come up with a database of young evaluators who have gone through some form of accreditation via renowned international training centres. Evaluation offices are always looking for interns or young consultants, so there is a good match in principle.
At country level, certain sectors – notably health, education and agriculture – have advanced further in building evaluation systems than cross-sector systems. Why do you think this is the case, and what can be learned from these sectoral approaches?
This is in part historical heritage. Attention to evaluation was born in the USA out of the studies that tried to assess the performance of the so-called Great Society initiatives, launched by President Lyndon B. Johnson in the 1960s. Many of these were programmes in the education and health sectors. Perhaps this is also a reason why some evaluation practitioners resent the Western orientation of evaluation standards. The reality is that, after six decades, international evaluation practices are no longer following a single country approach. Nonetheless, those early studies were seminal contributions.
The second reason is that educational and health programmes tend to be standardized, and are formulated with explicit goals and results to be achieved that can be expressed in numbers, thus lending themselves to evaluability. On the positive side, this shows the importance of designing a programme with evaluability in mind, and of making programme goals explicit and measurable in other sectors too. Having said this, I would caution against trying to have all goals expressed in numbers, in all programmes. I would also caution against the rhetoric of taking randomized control trials and quasi-experimental methods as the ‘gold standard’, or as the only legitimate way to conduct evaluations. In any case, consensus on the gold standard has waned in the past decade.
Looking in-house, how does IOE advance the capacity of its professionals? Are changes in said capacity measured and, if so, how?
Our workforce is well qualified and committed. That said, we need to keep pace with the evolution of the profession and with the approaches and tools that emerge. We need to keep learning. When we prepared the 2022 Evaluation Manual, we had to take stock of emerging concepts and perspectives linked to the 2030 Agenda. As an example, the notion of ‘leaving no one behind’ brings to the forefront the issue of social justice and equity, which is at the core of IFAD’s mandate, and thus requires attention when we design evaluation instruments. The emphasis on transformative change calls upon evaluators to have the capacity to identify transformation in the way in which systems perform. That calls for attention when we prepare the evaluation approach, and for some brushing up of our knowledge of systems analysis.
I would also like to highlight the importance of using information and communication technology for evaluation. Geo-based tools are particularly important for evaluations at IFAD. We need to stay abreast of new tools, the type of data they can generate, the type of questions they can help us answer, and how they can help us save resources. IFAD is moving forward on this, fortunately. We in IOE have launched an internal initiative on geo-based tools.
We care about the progress and professional growth of our staff. We calibrate the work programme of staff members considering expertise, skills, track record and past performance. Ultimately, we aspire to ‘graduating’ our staff to more and more complex tasks. However, in many cases, the perfect entry point is a project-level evaluation or a project completion report validation. The latter is based on desk review. While new staff may be impatient to go to the field, it is very important to familiarize themselves with the structure of a typical IFAD project and with the application of IOE’s methodology and criteria, and to hone their writing skills (yes, writing cogently and concisely is a challenge for anyone!). I would say that patience and humility are key skills for evaluators, and are those that will allow them to move on to evaluations that are more complex.
We do help colleagues by giving them exposure to training, in any form, but we want them to put their learning into practice and share their new skills with other colleagues in our office.
Any final thoughts or insights?
To conclude this interview, if I may, I would like to come back to a specific dimension of institutional capacity for evaluation, which I mentioned before. When we issued our IOE Multi-year Evaluation Strategy 2022-2027, we committed to engaging more in evaluation capacity development. We want to be realistic and prudent in the use of our resources, and do not want to create a new unit for capacity development in IOE. Instead, we use our evaluations as an opportunity to identify countries and national agencies that have special needs to develop M&E systems. In consultation with IFAD Management, we help them interact with specialized networks, such as GEI. We see ourselves in a facilitating role to start up initiatives. Governments need to be in the driver’s seat, and IFAD can provide further support.
Thank you, Fabrizio.
You’re welcome, Alexander.