
FEATURE STORY

LEARNING, GROWING, CHANGING LIVES:

Not just another [profitable] day in the office

Learning: a word synonymous with growth. The private sector has embodied this mantra over the years. Revenues, market shares and shareholder earnings have skyrocketed for businesses that have ‘learned to learn’, while financial institutions have reaped the benefits of the learning mantra through asset growth and capital accumulation.

IFAD is an international financial institution (one with an AA+ credit rating from Standard & Poor’s). At IFAD, however, learning is not about growing profits. At IFAD, learning means many things to many people. It means strengthening livelihoods, improving nutrition, increasing market access, and enhancing resilience to environmental shocks, to name but a few. Simply put, at IFAD learning means investing in rural people to enable them to overcome poverty – that’s the growth the organization seeks.

If it is to continue sparking this kind of growth, IFAD must continue to learn. One-size-fits-all approaches won’t cut it. Adaptive, tailored and innovative solutions, and flexible project designs, are increasingly being called for, not least in the wake of the COVID-19 pandemic. To realize these, IFAD looks to become more agile, responsive and effective through virtuous learning processes.

This is where IOE comes into play. With its evidence-based focus on learning, transparency and accountability, IOE plays a key role in the successful uptake of lessons learned and best practices. For this very reason, a thorough and rigorous evaluation and a well-written report are not, on their own, enough for an evaluation to be useful.

Evaluation is called upon to go one step further. Evaluation must inform decision-making and broaden the knowledge base both within and outside IFAD, bringing added value to the global rural development discourse in a timely fashion. Robust collaboration between IOE and IFAD senior management is essential for this to happen.

To deepen this discussion, Independent Magazine caught up with IFAD senior management. We sincerely thank Donal Brown (Associate Vice-President, Programme Management Department); Jyotsna Puri (Associate Vice-President, Strategy and Knowledge Department); Dina Saleh (Regional Director, Near East, North Africa and Europe Division); Nigel Brett (Director, Operational Policy and Results Division, and Director a.i., Asia and the Pacific Division); and Sara Mbago-Bhunu (Regional Director, East and Southern Africa Division) for generously taking the time to sit down with us, and for kindly sharing their experiences, insights and perspectives.

Good morning, esteemed colleagues.

Good morning, Alexander.

Could you briefly describe your organizational role as it relates to oversight and evaluation?

Donal

I am responsible for the quality of the results of all IFAD’s projects and programmes. I place a strong emphasis on delivering development results, and therefore on ensuring that we have an accountable, learning-oriented system of self-evaluation and independent evaluation, because to improve results we need to improve learning.

Jyotsna

Evaluation, and especially independent evaluation, plays a very important role because it gives us an unbiased, independent perspective on the direction the organization has taken so far, and it gives us a baseline from which to build strategies that can make IFAD future-fit. In this context, I think that Gilbert (NB: IFAD President), in his wisdom, brought in someone like me, who understands the evaluation perspective, to sit on the management side so that we could create a space within IFAD that appreciates the value of engagement between evaluation and management.

Nigel

The Operational Policy and Results Division is responsible for aggregating results from self-evaluation products. We are responsible for the corporate results management framework, and for reporting to the member states annually through the Report on IFAD’s Development Effectiveness (RIDE). We support an enabling environment between IOE and PMD for self and independent evaluation.

Dina

As regional director, my role is predominantly one of oversight in terms of deliverables, in terms of making sure that compliance standards are met. I look at certain performance measures and indicators, and I hold conversations around that to make sure that we evaluate those results properly.

Sara

Regional and country teams have very specific roles regarding oversight. We have significant financial resources. We have to monitor the quality of the portfolio, and understand the impacts in terms of transformation. The interface between evaluation and our oversight is structured. There is a whole framework of accountability. We have agreements at completion point and, as regional director, I have to respond to IOE’s recommendations. These are documented and I am accountable to the audit and evaluation committee.

Within evaluation processes, do you believe that spaces for reflection and conversation generate the insights needed to ‘course-adjust’?

Donal

I think that spaces for reflection and conversation help generate the incentives to course-adjust. However, to course-correct, the main issue is to ensure that we have the right evaluation products, and that we have them in a timely fashion. First, we need to be able to learn the lessons and channel them into new project design and new country strategies. In the past, many of the evaluation products have come too late. There has been too much emphasis on the rigour of the product. Second, we need to get the right kind of products. In the past, we have focused too much on accountability rather than on learning. We need the right balance between accountability and learning to allow us to course-correct.

Jyotsna

Interaction during an evaluation is extremely important. In my own experience, interaction has never jeopardized the role of independent evaluation. I actually think it strengthens it, because it makes it far more relevant. For me, this journey is far more important than the destination. It’s the interaction, the richness of the engagement during the evaluation that becomes important to inform staff both on the evaluation and on the management side. I don’t think that a document or an interaction for a couple of hours at the end of a process does full justice to the potential richness of evaluation itself.

Nigel

This is one of the challenges that we have been working to address. Since 2016, we have put in place building blocks for better evidence-based decision making. We use M&E data for mid-course corrections whenever possible, and carry out stocktaking workshops and learning events. We also have complementary tools, including the new restructuring policy, which has enabled a much more methodical approach to project restructuring based on results and learning during implementation. A challenge is that sometimes programme management units (PMUs) see project design documents as blueprints. PMU staff need to be empowered and supported to understand that project designs can be adapted to respond to lessons generated during implementation, and that these changes are sometimes essential to ensure achievement of project objectives.

Dina

There are spaces for reflection and conversation, of course, but not only through workshops and meetings. It’s very important that there is an ongoing conversation throughout the evaluation process. The thinking and reflection start at the very beginning. Sometimes it could be that you need to course-adjust at the very beginning of the process, not at the end. It’s an iterative process. Sometimes we deepen our conversations and understanding with stakeholders during evaluations – it allows us to have a much sharper conversation and reflection on specific points.

Sara

I believe they do. There are a number of workshops and missions during the project life-cycle that allow for conversations and discussion. When we see that our performance indicators are falling short of the standards we have set ourselves, that triggers an implementation support mission to deepen our understanding of the problems, and then course-adjust. The agility that has been built into the IFAD system allows us to proactively restructure where the context might have changed, or where a particular component or sub-component is no longer relevant.

To course-correct, the main issue is to ensure that we have the right evaluation products, and that we have them in a timely fashion.

- Donal Brown

Sometimes we deepen our conversations and understanding with stakeholders during evaluations.

- Dina Saleh

Broadly speaking, would you describe IFAD as a ‘learning-based organization’, or do you see a contradiction between accountability and learning?

Donal

I see no contradiction between accountability and learning in the context of evaluation.

IFAD needs to be accountable, both in terms of our internal evaluation mechanisms and of IOE. Donors provide resources for us to deliver impact and results. We have to be accountable for that. At the same time, to do that, we have to be able to learn. IFAD’s new development effectiveness framework puts learning as a key cross-cutting priority. The new leadership of IOE has been very good at trying to get that balance right. The 2021 ARRI report was an example of a much better product in terms of balancing accountability with learning. The revised evaluation manual, which comes out next year, will also stress the importance of learning from evaluation products. There is still a long way to go, but I am quite confident and reassured that the much stronger relationship that I have seen with IOE in the past year, and the enhanced focus internally on learning, have set us in the right direction.

Jyotsna

I have never thought that there is a trade-off between accountability and learning. So long as there is an institutional commitment to credibility, to learning, to what the data is showing us, accountability and learning can form a happy marriage. Especially with the advantage of big data, machine learning and real-time data, this marriage is further strengthened, because not only can you know at the end if something worked, but you can also adapt as you go along. It is unethical to spend money, today, on something that I might be told did not work, in five years’ time. We are affecting lives, every dollar is becoming so scarce, and thus I need to know what the biggest impact is that I can make now. It is myopic and almost dystopic for us to think that people at opposite ends of an argument should not be speaking to each other in real time. I think that this leads to dysfunctional organizations. I am hoping very much that with Indran’s [NB: IOE Director] entrance into IFAD there is a greater appreciation of this dialogue process and this engagement, and a greater appreciation of what the potential of a healthy environment can be. The time for learning and accountability is now.

Nigel

I think the adaptive management approach that IFAD is moving towards really represents the intersection between learning and evidence-based decision making. Lesson learning for its own sake is not useful. What is important is that lessons need to be applied. When they are applied you have this proactive approach where you are constantly improving based on evidence. Currently, you could characterize IFAD more as a results and evidence-based organization, and I think we need to transform into a more learning-based organization. More effort is needed to ensure learning is prioritized. The problem is that everybody is overstretched and working flat out. There is enormous pressure just to deliver. The challenge for IFAD is to enable staff to carve out time to reflect, to learn and to apply learning – this should be one of the priorities for the organization moving forward.

Dina

They are mutually reinforcing. IFAD by its genesis is a learning-based organization. Through its growth, IFAD has been very focused on piloting and testing new solutions and innovations. If you look at the structure, where we have a country director that deals with several parts of programme management, you can see that IFAD offers a lot of opportunities to learn. We derive many lessons when we pilot innovations at small scale.

Sara

IFAD has increasingly become a learning-based organization. It’s a journey. The fact that we have an independent office of evaluation helps us in this journey. We have a big quality assurance process that ensures that we take on board the recommendations from IOE. This accountability mechanism is critical in supporting and fostering our continued growth as a learning-based organization, including insofar as it forces us to have critical conversations and dialogue around what we have to change.

It is unethical to spend money, today, on something that I might be told did not work, in five years’ time. The time for learning and accountability is now.

- Jyotsna Puri

To what extent do you think that internal and external stakeholder buy-in weighs on the extent to which evidence-based lessons and recommendations are internalized at the programmatic level?

Donal

Stakeholder buy-in, whether internal or external, is key. If you don’t have stakeholder buy-in you are not going to learn the lessons. Lessons learned is a phrase too easily used and too little understood. Probably 90% of what we do is actually identify lessons rather than learn lessons. To learn a lesson, you have to identify the lesson and then act on it and something has to change as a result. That’s where the stakeholder buy-in comes in. If you don’t have it you will identify lessons but nothing will change. One of the key elements of the development effectiveness framework is that results need to be communicated in a timely and transparent way, particularly to our key external stakeholders, government and development partners. One area where learning will become even more evident will be in the design of country strategies. We really need to build better evidence in there. A COSOP is not an IFAD country strategy, it is a shared strategy between governments and IFAD, so stakeholder buy-in has to be key.

Jyotsna

It’s really important that we have a very alive, responsive and aware external set of stakeholders that are holding us to a high standard of credibility and honesty. Frankly, as an international organization, I don’t think we have a choice. We do need to hold ourselves to a very high level of credibility, transparency and honesty. There is no going around this. If I were an external stakeholder, that is the standard to which I would hold IFAD accountable – and they do.

Nigel

Stakeholder buy-in is absolutely essential, both internal and external. For this reason, every year we conduct corporate and regional stocktakes to assess portfolio performance and identify areas where we need to make improvements. The resulting reports are discussed with the membership. In parallel, we also conduct impact assessments on at least 15% of the portfolio. This data is also shared. Under the new development effectiveness framework, we are putting more emphasis on stakeholder consultation and engagement, and ensuring that governments are empowered with more data for evidence-based decision and policy making. To be taken seriously, it is extremely important to be able to engage with stakeholders with credible, high-quality data.

Dina

It does weigh. We work in countries that in some cases are politically charged. Very diverse countries. You need to know how to navigate those countries. This does not mean that you shy away from producing results and speaking in those countries, but one has to understand the ‘how’. It’s not what we have as evidence but how we have conversations with political stakeholders that gets the buy-in. We have had some very difficult circumstances.

Sara

We have a big task ahead of us. We need to improve the data quality at the M&E level. This has to be government driven. We need to look at how we can leverage digital technology to more effectively improve this. Recently, we have tried to introduce triangulation at local level as a way to get direct feedback from beneficiaries, who often do not have digital tools. We have the COSOP, which is a very interesting tool, and which offers all country actors an opportunity to engage and to give us their feedback on how we are doing on our priorities and strategies. These mechanisms allow for critical dialogue.

It’s not what we have as evidence but how we have conversations with political stakeholders that gets the buy-in.

- Dina Saleh

To be taken seriously, it is extremely important to be able to engage with stakeholders with credible high-quality data.

- Nigel Brett

Do systems exist to ensure that evidence-based lessons and recommendations are mainstreamed into IFAD strategies, at the corporate level, and into projects at the field level? If so, to what extent do you think these systems are successful?

Donal

Systems exist, in principle. What’s really important, more than the systems, and what’s made a difference so far, is the right choice of the evaluation type, scope, focus area and approach. If we have the right products, then the systems exist. For example, the 2018 evaluation of the financial architecture was instrumental in helping IFAD make substantial reforms to its financial policies, and one outcome of that is that we’ve got the AA+ rating from Standard & Poor’s. The same goes for the decentralization CLE: again, the right product immediately triggered learning. In 2022, we are putting in place an online tracking system to ensure there is better use of evidence for follow-up and to facilitate continuous feedback.

Jyotsna

Systems consist of people, technologies, processes and methodologies. They contain all of that. Do we have the people who appreciate that real-time learning and accountability should occur? The answer is “yes”. Do we have the processes, the technologies and the methodologies? The answer is a bit less than a full “yes”. For example, I want our IT systems to show me in real time and at a glance every aspect of a project, and I want that to be available to everyone across the system. Can I integrate in real time all of the GIS data that helps me to understand what the targeting is, and to appreciate whether the project design has taken on board all of the vulnerability assessments? We are still building those systems. The same goes for methodologies. Impact assessment methodologies were built in a world where data was always very sparse, and so you waited for data to be available. Now, we have high-frequency and highly dense data, and there is nothing in our impact assessment methodology that takes that on board. We have got to update our methodologies.

Nigel

We certainly do have systems in place that facilitate the mainstreaming of evidence. At the field level, and based on personal experience, I can say significant learning happens during supervision missions or midterm reviews. This learning feeds directly into ongoing projects, and into the design of new projects. We also have portfolio stocktake exercises, on a yearly basis, where themes are looked at in detail and corporate-level lessons are generated, followed by action plans to ensure lessons are applied in practice. At the corporate level, a lot of work has been done to put in place the infrastructure for lesson learning. We have a state-of-the-art Operational Results Management System (ORMS), where all the data sets and the narratives of project implementation are made available online. This is public information. Another tool, the President’s Report on the Implementation Status of Evaluation Recommendations and Management Actions (PRISMA), will also be brought online soon and made public. This will increase accountability and the visibility of follow-up on IOE recommendations. Through continuous upgrading and integration of these systems, there are going to be a range of new opportunities to ensure that recommendations and lessons coming from supervisions and evaluations feed back into the design of new country strategies and projects.

Dina

Yes, it has been evolving. It’s five years since we introduced the country strategy (COSOP) completion review processes. These have allowed us to look at strategies, take stock of lessons learned, inform stakeholders, and course-adjust. However, there is a lot of reliance on manual systems. We could be more successful if we had automated systems. In this regard, NEN is piloting an artificial intelligence project through which we can automatically derive lessons learned. We are at the stage where the algorithms are almost ready, and we hope to test the system by the end of this year, or the beginning of next year. It was one of those projects that came out of IFAD’s corporate innovation challenge. NEN took it on board to develop and test it. If successful, we could scale it up to the rest of the house.

Sara

Yes, IFAD has made huge progress in putting systems in place. For instance, IFAD maintains a database of agreed IOE recommendations, with tracking and reporting of progress and implementation, and this is done through the PRISMA. This system is led from the highest level in IFAD, and that forces all levels of management to take IOE’s recommendations very seriously. At the project and local levels, we have the Operational Results Management System in place, and that’s another database that looks at lessons learned, monitoring, tracking and scoring of results. This ensures that, at the Associate Vice-President level, there is an opportunity to see how the results that are coming out are tracked in relation to IOE’s lessons and recommendations. At the regional level, there are different platforms, such as the portfolio advisory meetings. The systems are in place. The cycle is there. Everything is well documented. The evidence is there. Can it be improved? I assume it can.

The systems are in place. The cycle is there. Everything is well-documented. The evidence is there. Can it be improved? I assume it can.

- Sara Mbago-Bhunu

What’s really important is the right choice of the evaluation type, scope, focus area and approach. If we have the right products, then the systems exist.

- Donal Brown

Looking ahead, what do you see as being the main opportunities for evaluation to further permeate an evidence-based learning culture across IFAD, and for this culture to trigger life-changing impacts for our stakeholders in the field?

Donal

First, the world, IFAD and the context in which we operate are changing very fast. One of the key opportunities for evaluation is to be agile, quick, nimble, and to get the right balance of ‘quick and dirty’ versus ‘rigorous and time consuming’. It’s important that we shift the focus of evaluation towards the higher-value products and services that can take into consideration and respond quite quickly to changes in IFAD, in the development context, and in the countries where we work. We must make sure that we feed lessons back quickly into the system. Second, I can’t emphasize enough the much stronger collaboration we now have between management and IOE. There is collaboration in defining the timing and structure of the product mix, sharing data sources, and ensuring the relevance of the evaluation products. This does not compromise the independence of IOE. On the contrary, if we are going to look at the learning function much more, then we need to be able to discuss, debate, and work more closely together. I’ve seen really, really strong progress in this area over the past year.

Jyotsna

We need to be aware that evaluation is really important for an individual institution, but it is also a global good, very much like climate and health. The more high-quality evidence we can produce, the more we can help others to become better. We should underscore that there is a service that we are doing to humanity when we produce high-quality, credible evaluations. In this context, I would like to see responses to the question “how much did something work?”. For decision making, what is really important is to understand trade-offs. A policy maker is not necessarily thinking “should I introduce cash transfers?”, but rather “should I introduce cash transfers at the cost of setting up a completely separate insurance mechanism?”. So, the overall impact that a cash transfer is making versus an insurance mechanism is the trade-off. To understand that, I need to know how much difference a cash transfer made, and how much difference I should expect an insurance programme to make in the overall resilience of a targeted population, before I can start to make those decisions. If evaluation wants to stay relevant in this space, it needs to answer the “how much?” question far better, because that’s what will help us to understand cost-effectiveness and trade-offs.

Nigel

IFAD will need to adapt its tools to its new decentralized structure. With more staff decentralized, we need to make sure that we keep a bridge between the field and headquarters, to ensure that the decision making happening at HQ is informed by the learning happening in the field. Systems will play a key role in facilitating the collection, analysis and utilization of evidence for decision making. Much of our core business has already been put online, and we will continue this trend moving forward. We are working towards bringing our country strategies online. This will enable better tracking, monitoring, reporting and learning from the country programmes, including from non-lending activities such as policy engagement. Perhaps in closing, I would like to take this opportunity to again underline the importance of enabling staff to have time to think, and to prioritize learning. We need to create space for this to happen.

One of the key opportunities for evaluation is to get the right balance of ‘quick and dirty’ versus ‘rigorous and time consuming’.

- Donal Brown

Dina

IOE has brought in a new approach whereby it seeks not to be a disciplinarian, but to feed useful lessons into our work. In the past there was a preconceived bias and prejudice against the evaluation process, as something that was going to come and uncover what we got wrong. Now there is a much more open culture towards evaluation. I know that country directors look forward to an evaluation prior to preparing a new strategy, because there are things that they may miss out on, and also to further the difficult conversations with local stakeholders. Going ahead, the main opportunities I see are to make sure that evaluations are timed in such a way that they feed into new programming and strategies. It is also very important to hold learning events jointly with the countries. One has to make people feel that they own the solutions. For evaluation to further this learning culture, there has to be value addition, without additional workload. The learning needs to be complementary to the work we do, not additional to it. We need to provide out-of-the-box, state-of-the-art, avant-garde solutions, less ‘scientific’ and more context-specific.

Sara

I see several. One is, of course, the decentralization process and the emergence of the regional offices, as well as the move of country directors to the country level, with the increased country footprint. We can leverage this stronger presence to initiate a new type of dialogue with stakeholders on the ground. The second aspect is to learn more about the different instruments that IFAD is utilizing through its new business model, and to understand which is more effective. We have invested in expanding our menu of services. Maybe we can hear more from IOE about the impact of these services, since we want to move towards more transformational programming under IFAD 12. I see lots of opportunities for IOE to provide credible evidence not only of our successes, but also to guide us in terms of where we should go.

I see opportunities for IOE to provide credible evidence not only of our successes, but also to guide us in terms of where we should go.

- Sara Mbago-Bhunu

We should underscore that there is a service that we are doing to humanity when we produce high-quality, credible evaluations.

- Jyotsna Puri

Thank you very much Donal, Jyotsna, Nigel, Sara and Dina.

You are welcome, Alexander.
