Performance has increased for all indicators. However, the goal on all these dimensions is for 100 percent of PCRs to be rated "to a great extent," and significant shortfalls remain. About 80 percent of projects did not meet this standard for the logical framework or for baseline data, 90 percent fell short on standard indicators, and 60 percent did not cite tracking indicators in approval documents.

Budgeting for preimplementation activities,15 including the gathering of baseline data, contributes to M&E quality at entry. Even so, 40 percent of projects had no baseline data, and some baselines were problematic: they were not sufficiently representative, or the baseline survey did not ask the right questions or asked leading questions. The new policy gives staff up to a year before the operational start of a project to collect baseline and other data, but the quality of baseline data gathering still needs to improve.

More projects are using standard indicators. In 2008, nearly 60 percent of projects had no relevant standard indicators; by 2010, nearly 80 percent had at least some. However, only 10 percent of projects had relevant standard indicators, and some standard indicators were not relevant, having been adopted only because they were required. Nonstandard indicators are used as a supplement: 65 percent of Advisory Services staff say that standard indicators are not sufficient to track project results, and 53 percent of the projects they supervise carry other, nonmandatory indicators.

The PSR is the principal monitoring instrument. The first PSR is an opportunity to refine indicators, collect baseline data, and clarify targets; at this stage, data are entered in the ASOP. The PSR is updated semiannually with the project's status and information about risks to the intended objectives. During the PSR cycle, project ratings are assigned on several dimensions: development results (outputs, outcomes, and impacts); financial performance (funding, client cash fees, and additional client contributions); implementation (timeline and staffing); and an overall rating. The ratings are based on specific guidelines developed with CDI, require management signoff, and are discussed in portfolio review meetings. These performance ratings help identify areas for corrective action where needed.

DATA GATHERING

Advisory Services' principal sources of data are IFC itself, client firms and government agencies, and studies and surveys by consultants, national statistics offices, the World Bank, other international agencies, and civil society organizations. According to the staff survey, Advisory Services staff also gather data from nonclient sources such as public domain information. About 45 percent of staff sought data from clients; nearly two-thirds sought public domain data and gathered data directly. Staff also drew on internal IFC data (60 percent) or third-party data (54 percent). During supervision, staff identified gaps by reviewing documents (76 percent), holding discussions with clients (70 percent) and relevant stakeholders (61 percent), and making field visits (47 percent). PSR updating improved between 2008 and 2010: only 5 percent of projects had little or no information in supervision reports, 10 percent lacked tracking indicators, and 17 percent had no audit trail for results (see figure 2.5).
