
ENSURING GEOGRAPHIC COVERAGE

Since 2021, IOE has been gradually increasing its country coverage. This growth is mainly attributable to case studies and project evaluations, as well as country strategy and programme evaluations (CSPEs). The latter are particularly important, insofar as they provide a full accountability assessment and key recommendations to inform IFAD’s new country strategic opportunities programmes (COSOPs).

With an increased country presence, IFAD needs well-informed COSOPs. This raises the question of whether the quantity of CSPEs produced by IOE is sufficient. More to the point, the question arises as to whether IOE would need to increase its CSPE coverage in order to meet the growing demand and ensure timely evaluations in relation to the design of COSOPs.

Enhancing the coverage of CSPEs would serve several key purposes at the heart of IOE’s mission, including enhancing IFAD’s institutional accountability and effectiveness through operational learning from evaluation work. However, doing so would also require IOE to adjust its strategy and capacities to meet future challenges without compromising the sustained high quality of its deliverables.

In a world where funds are scarcer and demand for accountability keeps growing, IFAD cannot afford not to be informed on the actual quality of its interventions, and that information comes through IOE products.

To weigh the opportunities, benefits, costs and challenges associated with increasing CSPE country coverage, Independent Magazine had the pleasure of sitting down with the members of IOE’s Evaluation Advisory Panel, on the margins of their annual meeting. Dr Juha I. Uitto, Visiting Scholar at the Environmental Law Institute and former Director of the Independent Evaluation Office of the Global Environment Facility; Dr Doha M. Abdelhamid, Senior Consultant and Senior International Evaluation Expert at the Islamic Development Bank Group; and Dr Mita Marra, Associate Professor of Economic Policy at the Department of Social Sciences of the University of Naples, provided valuable insights, thoughts and suggestions related to what might lie ahead for IOE.

Good afternoon, esteemed colleagues.

Good afternoon, Alexander.

What do you believe are the main benefits and costs for an office of evaluation to ensure vast geographic coverage of development programmes?

Doha

Ensuring vast geographic coverage of evaluations brings several benefits, as well as costs, to IOE. Starting with the benefits, I would list three. The first is comprehensive insight. A broad geographic reach allows evaluators to gather diverse data and perspectives, leading to a better understanding of the effectiveness of programmes across different contexts, which was an issue that was emphasized over the course of the EAP meeting, during the past three days. This helps in identifying successful strategies that can be replicated, and in scaling up effective interventions. The second is inclusivity. Evaluating programmes in various regions promotes inclusivity and ensures that the needs of marginalized and underserved populations are addressed. It helps showcase the impact of programmes in different socioeconomic settings and informs equitable resource allocation. The third is enhanced learning. Evaluations enable cross-country learning and the sharing of best practices, fostering a culture of evidence-based decision-making within IFAD and among its partners. They also foster enhanced transformations for the rural people and the poor, which is the main mandate of IFAD.

There are, of course, also costs. These include resource allocation, logistical complexities and data reliability. With regard to resources, broad coverage often requires significant financial and human resources. This can strain budgets and necessitate the prioritization of certain evaluations over others. Regarding logistics, evaluating in different geographic locations entails logistical challenges, including travel, local partnerships, and varying data collection methods. These complexities can lead to delays and increased administrative burdens. Speaking of data, diverse contexts may produce inconsistent data quality, complicating the analysis and interpretation of findings, and limiting comparability across evaluations.

With this said, from my point of view, the benefits outweigh the costs.

Mita

In terms of benefits, when you increase coverage, you increase the scope of evaluations. That means that you are going to explore different themes and issues, and that you will be able to better understand the variety of developmental results across different contexts. One of the key issues to understand is the relationship between interventions and contexts, especially fragile contexts, such as those where IFAD has a mandate to operate. However, this broader scope also means increasing costs. These could be looked at as investments. The main challenge I see is that you need to develop capacity in terms of data infrastructure and data gathering, and should foster the development of a data environment that will allow you to create synergies within and outside of the organization. You need to learn how to operate at a different level. This will transform IOE over the long term.

Juha

There is a lot of demand for CSPEs and increasing geographic coverage, especially from IFAD’s Executive Board. Obviously, we have to understand what is feasible and then be strategic. You would need to develop criteria to determine which are the countries where you want to carry out evaluations. We have talked a lot about criteria during the past three days, such as fragility and conflict-affected situations, so it may be useful to stratify the sample accordingly, given the variety of contexts in which IFAD operates. You may also have to keep in mind the need for approximating a zero-sum game with limited human and financial resources, meaning that if you increase one type of evaluation product, such as CSPEs that are now in increasing demand, you might have to decrease the offerings of other types of products, which may be of lesser interest in terms of evaluative evidence from the field at this particular point in time.

From a methodological standpoint, what are the main challenges in evaluating complex development programmes in different countries, in different contexts, over a prolonged period of time?

Doha

Evaluating complex development programmes presents unique methodological challenges, particularly when operating across varied geographic and socio-political landscapes. A first challenge is contextual variability. Different countries have distinct political, economic, and cultural contexts, which affect programme implementation and outcomes. This variability necessitates tailored evaluation approaches, making standardization difficult. Over the past couple of days, the EAP team has talked about the importance of theory of change, of policy engagement and the need to try to measure said engagement through designing metrics for contribution and influence. This discourse is quite innovative and is tied to the request to expand evaluation coverage across different contexts.

A second challenge is long-term monitoring. Over prolonged periods, developing a consistent framework for evaluation becomes challenging due to changes in programme objectives, stakeholder perceptions, and external factors such as policy shifts or economic crises. A third challenge is attribution of impact. In complex programmes, isolating the effects of specific interventions from external factors is challenging. For instance, we have discussed the issue and complexity of attribution. Can policy changes at country level be entirely attributed to IFAD’s interventions, or have other development partners contributed? Does IFAD have the mandate to trigger the influence and, thus, the impact?

Multivariate influences can obscure causal relationships, making it difficult to determine the real impact of IFAD initiatives. A fourth challenge is data collection and quality. Variability in data quality and availability across countries poses challenges. In some regions, data may be scarce or unreliable, necessitating innovative techniques to gather credible evidence. I believe that IOE is ready for this, using AI and advanced technology for evidence gathering and synthesis.

Mita

I would go back to the theory of change, because we need to go deeper into the analysis of the causal links. We may have inadequate models of causality. We need to have exchanges of ideas with people on the ground, different stakeholders, that are directly engaged in operations to refine these models. Also, there should be some passion for exploring these relationships that we see are crucial. The issue becomes more complex when you get into the field, and you are confronted with the emergence of local realities that characterize complexity. These are things that you cannot anticipate. Even if you have identified crucial relationships, you may go to the field and find something that is going on and is either an unintended consequence or an effect that you have to look at again and again. We are faced with the need to complexify theories of change, while dealing with different and conflicting attention spans, which call for ensuring that our theories of change are manageable. This is a challenge.

Juha

I like to use the term ‘open theory of change’, in the sense that it goes beyond the linear logic of an individual project. We have to look at who are the other actors involved that might amplify or hamper IFAD’s impacts. The relationship between ‘attribution’ and ‘contribution’ is very important and difficult to determine in this regard. We also must always remember that these are national projects. As a result, IOE is not only evaluating IFAD’s performance, but is also looking at the broader determinants of impacts, including government performance. Aspects such as policy coherence become very important. For example, how does the agriculture policy link with the environmental and industrial policies? It is important to put things into this context.

An important aspect to keep in mind is that impacts are often long term. I am thinking, in particular, about natural resources management and climate adaptation. Given that impacts may come into being long after the completion of a project or programme, it becomes very difficult to measure them. That’s a challenge. The use of geospatial approaches has increased tremendously in evaluation, as technology has become easier to use and remote sensing data ever cheaper. This has helped evaluators to assess actual change on the ground, creating time-series data both historically and beyond the duration of the project.

Stakeholder involvement is another critical issue. IFAD is meant to be very participatory as an organization. Thus, IOE needs to reach out to a broad variety of actors, including governments, NGOs, academic institutions, and the actual intended beneficiaries. This is even more important in conflict situations, where you have to make sure that your project is not perpetuating the root causes of conflict.

What are some best practices that we could follow to help ensure that country level evaluations feed into future country programmes both at the management level and in terms of influencing government policies?

Doha

To enhance the impact of country-level evaluations on future programmes and policies, IFAD can adopt several best practices and possibly criteria for influential evaluations, as developed by the EAP. Starting with best practices, actively involving stakeholders—such as government officials, local communities, and non-governmental organizations—throughout the evaluation process fosters ownership and ensures that findings are relevant and actionable. It is paramount to engage people at the grassroots level as much as possible, because they hold the seeds of knowledge. This involvement can also facilitate the integration of evaluation results into policy discussions. In the case of IFAD, we have noted an excellent understanding between IOE and Management, so there is an excellent chance of improving results on the implementation side. This means that the value of stakeholder engagement translates into something tangible to the people.

Second, distilling complex evaluation reports into accessible formats, such as policy briefs and infographics, enhances communication and increases the likelihood that key findings will ‘influence’ management decisions and policy frameworks. A practical solution could be creating communities of practice, involving IOE, programme managers and community members. This would help foster a common analysis and understanding of findings among partners. Third, ensuring that evaluation findings inform strategic planning and policy engagement processes within IFAD helps bridge the gap between evaluation and operational programming. This might involve establishing formal mechanisms for incorporating insights from evaluations into future programme design and implementation.

Fourth, establishing follow-up mechanisms to track how evaluation findings are used in decision-making processes can promote accountability and continuous learning. This might include post-evaluation workshops to discuss findings with management and stakeholders to translate insights into concrete actions. Fifth, investing in capacity building for local partners and staff involved in monitoring and evaluation can strengthen data collection efforts and enhance the overall evaluation infrastructure, further aligning local practices with IFAD’s evaluation standards. It also improves an understanding of how programmes are designed, implemented, monitored, and evaluated from both ends (governments, IFAD and other stakeholders). This builds a genuine ‘culture of evaluation’ that cascades to and precipitates in the grassroot levels.

These examples, grounded in the mission and practices of IOE, showcase a commitment to rigorous, influential evaluations that inform policy and improve development outcomes across varied contexts.

Equally important would be the application of criteria through which IOE and IFAD Management can assess the potential influence of their evaluations and ensure that they contribute meaningfully to their respective fields. Criteria would include relevance, rigorous methodology, stakeholder engagement, clarity of findings, actionability, timeliness, impact on policy and practice, dissemination, sustainability, and ethical considerations.

Applying these criteria to selected evaluations will provide a better understanding of the effectiveness and impacts of various programmes, helping to improve future programming and policymaking in IFAD and at the country level.

Mita

I believe that there are no such things as ‘best practices’. As soon as you crystallize some examples, they have already transformed into something else. I would rather talk about promising practices that generate virtuous circles, which need to be fed on a continuous basis.

I would synthesise the whole discussion on influential evaluation criteria into one specific criterion, which is trying to be very collaborative. This means engaging with managers, programme staff and country representatives from the outset of the study, identifying direct and indirect users. The channel of communication should be kept open, from the very beginning of the country selection process, through the whole process of data gathering, right to the finalization of the report. This will help ensure that evaluation findings do not come as a surprise to anyone, ever. When you go to the field, you need people on the ground to give you feedback that allows you to adjust your process of data collection and focus on salient issues. The same holds true when the time comes to share the final report and discuss the findings and conclusions. This must take place in a fully participatory manner, if you are to be part of a process of policy engagement.

Juha

The evaluators are not the ones who are going to implement the recommendations; it’s the programme people. Therefore, it’s a good practice to develop recommendations in consultation with the people who are supposed to implement them. It’s hard to admit, but evaluators, on their own, may not always be cognizant of all the factors that have a bearing on policy and practice. We are fully independent in the sense that we choose what we evaluate, what angle we take, the methodology that we follow, and how we formulate our findings and conclusions. If these are based on solid evidence, then they are not negotiable. However, when it comes to recommendations on how you actually address the issues that have been identified, we may not always be the best people to know how to go about things, because we may not always fully appreciate all the exigencies within a given country. Consulting in this way also brings buy-in from Management.

A reform that I carried out in the GEF was in the context of its Council. Prior to my arrival, the official language in relation to evaluation reports submitted to the Council was that, “having reviewed the evaluation, the Council endorses the recommendations of the evaluation”. I changed that to “the Council endorses the management response”. In doing so, the onus for post-evaluation action was moved from the evaluators to Management. This made a huge difference. The Council then started to demand much more specificity from the management response which, up until that time, had often been quite vague in language. This, of course, does not mean that Management has to accept all recommendations, but it does require managers to elaborate a more detailed and action-oriented response. The Council then also has a stronger role in deciding what should be done.

Thank you, colleagues.

Thank you, Alexander.
