
Independent Magazine - issue 12

ROADMAP AGREED UPON FOR CROSS-AGENCY COLLABORATION ON AI FOR EVALUATION EVIDENCE SYNTHESIS

Evaluation experts from bilateral and multilateral agencies, including from the Global South, have identified and agreed on a roadmap for cross-agency collaboration on artificial intelligence (AI) for evaluation evidence synthesis, underpinned by effective knowledge management (KM). Consideration was given to identifying and developing the guidance, frameworks, policies, expertise and tools that will support staff to leverage existing technology safely, responsibly and transparently. Dr Indran A. Naidoo, Director of the Independent Office of Evaluation of IFAD (IOE), was among those who addressed these issues during a three-day learning event held under the auspices of the Wilton Park Dialogue series, in the UK, from 24 to 26 March 2024.

The event was convened by the UK Foreign, Commonwealth and Development Office, Global Affairs Canada and IOD Parc, who brought together actors at the forefront of international efforts. The objective of the ‘by-invitation-only’ event was to exchange best practice for supporting the production and uptake of evaluation evidence synthesis through AI, supported by the appropriate organisational culture. Wilton Park’s secluded setting in the South Downs National Park allowed for focused discussions that led to practical commitments.

Over the course of the three days, evaluation experts shared ideas and experiences on how to apply innovative and cutting-edge methods to make synthesised global evaluation evidence accessible to decision-makers, policymakers and a wider audience through the creation of appropriate systems, tools, processes and learning. In this regard, discussions centred on understanding current and potential opportunities for advancing the use of AI for global evaluation evidence synthesis and identifying good practice to support utilisation and the broader enabling culture.

In this context, the event was grounded in three key strands. The first was to understand the current use of AI in evaluation evidence synthesis across organisations, drilling down to what works and what doesn’t, for whom, why and with what implications for achieving development outcomes. Issues addressed under this theme included what we can learn from cutting-edge practice on global evidence synthesis efforts, where the technology currently stands and what its current limitations are. The second strand was to understand good practice in KM to enable the production and utilisation of AI-generated evaluation synthesis to ensure the greatest development impact. Issues addressed under this theme included what we can learn from partners about how to improve the availability, accessibility and use of evaluation evidence, including the appropriate processes, people, systems and tools, especially for non-analysts and decision-makers. The third strand was to identify and agree on a roadmap for cross-agency cooperation on AI-enabled evaluation synthesis to deliver impactful and sustainable development.
