SEAH epitomises a different approach to MO use

Since 2015, the 3.0 and 3.1 methodologies have introduced a set of cross-cutting issues that are high on the multilateral agenda (climate change, gender, human rights, good governance, Agenda 2030, SEAH). These mirror the changes in the RBM systems of MOs in the 2010s and make the methodology more acceptable to MOs, especially in the UN system.83 However, the new indicators are not well adapted to all organisations (e.g. Multilateral Development Banks (MDBs)). MOPAN therefore began to adapt its methodology significantly (especially how achievements are assessed) to better fit the specificities of some MOs, particularly as its members broadened the remit to new organisations. Although MOPAN has emphasised over the years that its assessments are not intended for comparative purposes (given evolving data quality and methods and differing MO contexts), some members do use them this way, both over time and across MOs. An adapted framework greatly diminishes the possibility of comparison.84 Finally, some MOPAN indicators are formulated in a way that leaves assessors room for appreciation. Making good use of this latitude depends very much on the assessors' expertise; the MOPAN secretariat considers that its new framework contract increases that expertise. Although an evidence-based rationale is always provided when expert judgements are made, assessed MOs worry that this leeway creates a potential for inconsistency. These initiatives have contributed to MOPAN's improved credibility, which is certainly an important aspect of the possible influence of MOPAN assessments, but they only partially mitigate the issue.

The process of establishing a framework to monitor system-wide efforts to prevent SEAH was an exception to the general approach. MOPAN has previously consulted stakeholders to inform methodological changes, but never on such an inclusive scale. The collective quantification process included MOs in defining the aspects for which they should be accountable to donors and other stakeholders. This pathway is genuinely different: the "right practice" is not imposed from outside but emerges from the iterative construction process.85 It is worth pointing out that MOs embraced the MOPAN SEAH framework, although some MOPAN members did not see how this convening process differed from earlier processes or how it contributed to improving standards (see p.77).

83 Hulme, D. (2010). Lessons from the making of the MDGs: human development meets results-based management in an unfair world. IDS Bulletin, 41(1), 15-25.

84 It is of note that the assessments were never fully comparable. Before 2015 and the 3.0 methodology, this was because of reliability concerns (see the 2013 Evaluation of MOPAN).

85 This reflects the opposition between the positivist and the constructivist view of indicators. In the positivist view, indicators are supposed to measure the world "from the outside", but the way they are used affects their validity, to the point where they corrupt the very phenomena they were supposed to monitor ("Campbell's Law"); in the constructivist view, indicators are "performative": both the quantification process and the way indicators are used are meant to transform social processes. See Desrosières, A. (2010). La politique des grands nombres : Histoire de la raison statistique. Paris: La Découverte; and Chiapello, È. & Desrosières, A. (2006). La quantification de l'économie et la recherche en sciences sociales : paradoxes, contradictions et omissions. Le cas exemplaire de la positive accounting theory. In François Eymard-Duvernay (ed.), L'économie des conventions, méthodes et résultats. Tome 1: Débats (pp. 297-310). Paris: La Découverte.
