Results based planning, monitoring and reporting (PMR)
If you do not measure results, how will you tell success from failure? If you cannot show success, how can you reward it? If you cannot reward success, are you rewarding failure? If you cannot see success, how can you learn from it? If you cannot see failure, how can you correct it? If you can show results, you can win support! (adapted from Osborne and Gaebler 1992)
Contents
About Norwegian People’s Aid
Chapter 1 Observing success and failure
Chapter 2 SHARP LANGUAGE
Chapter 3 UNWRAPPED RESULTS
Chapter 4 MONITORING - a frame of mind
Checklist RESULTS
Checklist BASELINES
Checklist INDICATORS
Checklist LANGUAGE
Checklist MONITORING
Checklist EVALUATIONS
Checklist STORIES
References
Your personal notes
About Norwegian People’s Aid and this book
Norwegian People’s Aid (NPA) is a Norwegian NGO working internationally with development and Mine Action programs. The NPA International Program Department works with more than 100 partner organizations in approximately 15 countries. NPA’s cooperating partners are responsible for implementing programs and projects. NPA’s role is to support these partners in their program work, and through capacity building and organizational development. The Mine Action programs have other types of strategies, partners and approaches than the international development department, and standards for planning, monitoring and reporting (PMR) differ accordingly. To a large extent, Mine Action can rely on quantitatively measurable categories in monitoring: the scale of areas covered; the numbers of mines detonated; the number of mine personnel, dogs and rats at work. The Development Department likewise relies on numbers and figures when monitoring political and social development work. But quantifiable information is often only the necessary starting point for PMR and for identifying results.
This book primarily addresses PMR in the development programs and focuses mainly on the qualitative aspects of change.
NPA will not introduce a new monitoring system, but aims to improve the quality of the information within the existing monitoring systems. Instead of a new system, NPA will encourage a change of attitude in PMR: simplifying systems and language in order to be able to monitor and document results. This book presents an approach to making planning, monitoring and reporting (PMR) more practical. It advocates the use of adapted tools and simple language rather than global tools, language and methods. The book is intended for NPA program staff at all levels, but can also be used with and by partners.

The book is divided into two parts. Part one consists of four chapters. Chapter 1 is an introduction to the topic. Chapter 2 is about development language and how to communicate using fewer buzzwords. Chapter 3 suggests some steps to take in order to highlight results in program work and PMR. Chapter 4 provides an overview of some basic monitoring approaches. Part two consists of a series of independent checklists.

All the examples used in this handbook are from NPA or NPA partners’ recent plans or reports. They have not been selected because they stand out as extreme, nor to point a finger at any program. They have been chosen because they illustrate common challenges.
This book is the result of the following experiences:
• Over the last decade NPA has rolled out different methods for monitoring and evaluation (M&E), the Logical Framework Analysis (LFA), Program Evaluation Support (PES) and Most Significant Change (MSC) being the most important ones. We have learnt that a one-size-fits-all PMR system imposed from a head office does not match the needs in the program context. In addition, the NPA partnership strategy emphasizes local ownership of programs, agendas, tools and methods. Centralized systems and standards are not well suited to accommodating local ownership, capacities and needs.
• Annual plans and reports for 2008/2009 have been studied particularly with a view to identifying topics to be addressed in a PMR handbook.
• The NPA project ‘Unwrapping results in planning, monitoring and reporting’ (2008-2010) has involved more than 130 key NPA and partner staff in workshops in the Balkans, Ethiopia, Lebanon and Iraq, South Africa/Mozambique/Zimbabwe, Rwanda, Angola, Tanzania, South Sudan, Burma, and Cambodia. In the process we learnt that many key program staff consider monitoring a difficult, technical and demanding task. Abstract theory has made reports difficult to write, read and understand. Many interesting findings and results are therefore not documented and shared. We have also learnt that there seems to be no lack of formal M&E systems or M&E expertise, but perhaps a lack of ownership of planning, monitoring and reporting processes.
• In 2006, the Norwegian Association of Disabled (NAD) published a manual on results-based planning and reporting. The process was facilitated by K. Berre. The current approach leans on ‘pilot’ experiences made in the NAD network. NPA is grateful to NAD for supporting this next phase of results work.
• Thanks to all workshop participants who have contributed constructive and critical comments to make this handbook a working tool in progress. Thanks also to Helle, Sveinung, Martin, Eva, David, Kristine and Anna at NPA head office for patient and impatient commenting on the chapters and checklists throughout the writing process.

We hope that the book will be a tool for making planning, monitoring and reporting easier, more useful and meaningful.
Oslo, October 2010
Observing success and failure
This chapter outlines the approach and NPA’s choices regarding planning, monitoring and reporting (PMR).
Organizations that want to see results in advocacy, empowerment or mobilization have for decades been discussing how to measure, monitor and describe change in these areas. So far we have not been very successful. We keep track of activities, for example by counting the number of workshops held and the number of participants. This is important information in all PMR, as it justifies budget spent and activities completed. But leaping from the fact that activities took place to the assumption that they were a success, and that, for example, awareness has been raised, is a shortcut.
NPA understands a result to be ‘the changed situation for the target group/organization/partner after the activities have taken place.’ When civil society organizations improve infrastructure or health services they report ‘tangible results’ that are easy to measure and describe. For example, after an earthquake, bridges and buildings are reconstructed; schools stand complete with blackboards and are filled with children and teachers. Broadly speaking, monitoring such a project is an easy job: checking that the buildings match the drawings in the plan, that the classes are full, that there are pupils at different times of the school year, that there are teachers and that they are paid, that there is a curriculum, etc. Count, tick off, make a note, take a picture and compile the report. Program staff, colleagues and donors can easily be made to understand the nature of the result, the challenges and the progress. Today’s results in NPA development programs, however, are mostly ‘intangible’: more unpredictable than planning to build a school, and far more difficult to measure. How can a result such as ‘women in 11 communities have been empowered’ be made more convincing? Even though we cannot take pictures of empowerment, like we can of the school, there are some options for better documenting achievements and failures.
NPA’s choices for planning, monitoring and reporting

While many manuals treat monitoring and evaluation (M&E) as part of the same package, this approach separates planning, monitoring and reporting (PMR) on the one hand from evaluation (E) on the other. Monitoring and evaluation are of course analytically connected, but there is at least one good reason for the separation. Evaluation approaches often have high methodological standards, a separate budget and often scientific requirements for measurement and analysis. Monitoring, on the other hand, is a routine task: regularly ‘checking up on how things are’ and recording this information in a way that is useful to staff, partners, international colleagues and donors alike. When evaluation standards are also applied to day-to-day PMR work, monitoring becomes a task that is too complicated and time-consuming for program staff. Program staff know perfectly well the status, successes and failures of the program. But when this knowledge is not systematized or passed on to others, monitoring becomes informal and/or random. Partner organizations vary in size, program focus, management style, capacity and history. They operate within very different political, social and cultural contexts. Each partner organization may have several different donors or cooperating partners, each with different requirements for monitoring and evaluation procedures, and different formats for financial and narrative reports.
NPA does not want to roll out yet another PMR system to programs and partners that may have good systems in place. At this stage, a new system will not solve the core challenges of monitoring: actually doing relevant monitoring and using the findings to improve programs and projects. This book is meant as a supplement to, not a replacement for, other relevant results-based approaches and methods. It does not go into detail about areas that are extensively covered by other manuals and guidelines (see references). This book will also deal with a challenge shared by many systems, such as the logical framework analysis (LFA): they tend to grow and become so complex that a specialist is required. NPA’s concern is to keep systems and tools useful and manageable for program staff. The information we put into a system is more important than the format in which the information appears. Many different methods can be relevant to planning, monitoring and reporting (PMR). PMR should make it possible to document actual achievements in social and political programs in civil society, so that they are understandable, credible and monitorable.
PMR is about observing and describing reality

PMR should be about the ‘evidence on the ground’ that justifies the program’s/project’s overarching goals, objectives and principles. Monitoring tools should therefore be ‘bottom up’, reflecting the context, local capacities and the specific choices made by people in the target groups. PMR systems are dynamic, not static. They are best developed step by step, growing with the capacity of the program teams, who need ownership of their methods in order to use them properly. This is far more important than having a perfect system in place. Observing and describing results does not depend on the method or system used. It relies on how the chosen method is used, that it is used, and what goes into the system. Other important factors are the commitment of the management to documenting ‘things as they really are’, personal interest, curiosity, interview techniques, people skills, etc. Monitoring and communicating results must be an ongoing task for management and all involved staff.
’The importance of mobilization maintains to be a vital focus and gives a sustainable momentum for organizational development enhancing and facilitating a democratic participatory and sustainable process, and is of great importance in order to mobilize in democratic processes…’
‘What she means to say is that democracy would be a good thing.’
‘Yes that is fine, we all agree. But hey how is your program doing?’
This chapter is about how we communicate in development programs, how we often fail to communicate, and what we can do to improve the language.

Keep it simple

Global professional languages, such as those of law, medicine and the social sciences, have specialized terminology. This can sometimes ease communication within these professional groups. But unlike these professions, development language is not rooted in just one professional field. People working in development have different professional backgrounds, from politics and agriculture to carpentry and philosophy. They work in different countries and different social and political settings. Development workers have different traditions for communication, and for using, understanding and writing English. They bring different terminology and expectations with them into their work, but often speak and write using universal standard terms. Planning, monitoring and reporting (PMR) requires skills, but not necessarily in designing or using complicated technical formats and systems. The most important skills in PMR are non-technical: the ability and will to describe results as they actually happen.
Buzzwords in development language

Ask any citizen or politician about ‘good governance’ and you will get different answers. In order to find out what ‘good governance’ means in development programs, it needs to be defined. Definitions keep on expanding: the World Bank now has 340 different criteria for ‘good governance’. (Øyvind Eggen, NUPI)
Development language is rooted in laudable global conventions, political theory and internationally agreed standards. It consists of a mix of technically neutral terms (for example: project cycle, indicators, networks, workshop, implementation) and words signalling ambitions, values and political positions (democratic institutions, marginalized groups, participatory, capacity building, empowerment, rights-based, strong civil society, gender equality, universal human rights, good governance, transparency).
These phrases are universally used by politicians, diplomats, UN leaders and grassroots movements all over the world, as well as by dictators! NGOs, American and African presidents, and activists all use them in speeches and documents, but without necessarily showing what they mean, or that they mean it. Good intentions and political positions are important, but can easily turn into empty buzzwords if the terms remain general. Development buzzwords are a major obstacle in PMR because they hide people and what the change means to them. For example, many programs report that ‘mobilization’ has taken place, but fail to show how; that networks ‘are in operation’, but not which, or how, or for what; or that ‘strengthening of an organization’, ‘democratic structures’, ‘partnership’, ‘participation’ or ‘redistribution of power’ has happened.
When a project in Guatemala on paper seems to have the same challenges, solutions and results as a program in the Balkans, it is likely that buzzwords have taken the place of description, and that interesting information is hidden.
Two examples from an NPA annual report, with comments:

Quote from the report: ‘The “Women can-do-it” program in (country) contributed to the process of building institutional gender mechanisms.’

‘To contribute to the process of building institutional gender mechanisms’ might be fine at the level of strategy or policy. As a result, however, the statement requires precise, selected, specific documentation about how the program changed what situation, and for whom.

Quote from the report: ‘Land planning systems and structures have been established and are successfully institutionalized.’

This result statement does not answer: What type of systems and structures? Established where, by whom and for what? What institutions? Where are the people in this result?

(Revised version) ‘Elders from the pastoralist group VB and regional authorities in V have met every 3 months over the last year to discuss cases of land dispute. Of 7 cases, 3 were solved: 2 cases of illegal fencing of water wells, and 1 case of privatizing common land.’

Here the result explains what the standard terms mean for the people affected.
‘Values’ or ‘results’?
PMR results should describe something that can be noticed, seen, and monitored. ‘Democracy enhanced’ might be the overall conclusion after an evaluation of an obviously very successful program after 10 years. ‘Democracy enhanced’ in a PMR document however merely signals a general value or intention.
In order to identify, monitor and document good results, a first step is to distinguish between how you talk about values/visions and how you talk about concrete achievements. This model is borrowed from yin and yang in Chinese philosophy, which symbolizes two contradictory but complementary principles, such as night and day, hot and cold, male and female, etc. Applied to PMR, the circle represents the entire program, encompassing the goals as well as the smallest activity. ‘Value’ words provide information about the organization’s ideological position (yin). ‘Results’ words on the other side (yang) provide information about how this ideological position is put into practice in that particular program. As a rule, policies, goals and strategies need a different set of words than those used for planning, monitoring and reporting (PMR).
Show, don’t tell

Finding alternatives to buzzwords can be difficult at first; we are used to them, and they pop up almost automatically. Detecting whether buzzwords are used is therefore the first step towards documenting results, and must be taken by program staff at all levels: management, program countries, head office, and partners.
More examples where good results are ‘hidden’ inside buzzwords:
‘Youth have enhanced their knowledge of their different rights, developing their capacities and skills of participation in community.’
What youth? Where? What is enhanced? What type of knowledge, and what different rights? Why, how, and for what do they participate in their communities?
‘New networks have been established between communities ’
What kind of networks? What do they achieve? What kind of communities? And what does ‘established’ mean: legal, informal, as part of local government?
‘Marginalized communities in H and V opened up through a process of debate.’
Who are the marginalized communities here? What is meant by ‘opened up’? What is a process of debate?
‘Decreased incidence of land rights related conflict in the project area.’
Show, don’t tell. Unwrap. Select the most important information. Depending on the requirements and reporting format, use numbers, names, facts, illustrations, personal stories and descriptions to show what you mean. Then select the most relevant information and avoid generalizations.
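As a rough aid, the habit of scanning a draft report for buzzwords can even be sketched as a small script. This is a toy illustration only, not a substitute for editorial judgment; the word list and the `flag_buzzwords` function are this sketch’s own inventions, not an official NPA tool or list.

```python
# Toy buzzword scanner: flags common development buzzwords in a draft
# report so the writer can decide whether each one needs 'unwrapping'.
# The word list is illustrative and incomplete, not an official NPA list.

BUZZWORDS = [
    "empower", "capacity building", "awareness raising", "mobilization",
    "participatory", "good governance", "sustainable", "strengthened",
    "institutionalized", "stakeholders", "mainstreaming",
]

def flag_buzzwords(text: str) -> list[tuple[int, str]]:
    """Return (line_number, buzzword) pairs found in the draft text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        lowered = line.lower()
        for word in BUZZWORDS:
            if word in lowered:
                hits.append((lineno, word))
    return hits

draft = """The capacity of the organization has been strengthened.
32 families in South Y regained access to grazing ground."""
for lineno, word in flag_buzzwords(draft):
    print(f"line {lineno}: consider unwrapping '{word}'")
```

Note that the scanner flags the wrapped sentence but not the unwrapped one: the concrete statement about the 32 families needs no further unwrapping.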
Changing old communication habits

Development language and buzzwords standardize relationships and mould fundamentally different approaches into one blueprint version of reality. When you work in partnerships you may share overarching objectives, but your practices will vary according to local solutions and situations in the context. A respect for diversity in approaches and language should guide all PMR work. Changing old habits is more uncomfortable and difficult than changing a method, system or format.

Clear language is good for democracy

In cases where documents and reports reveal politically sensitive issues, clearly spelling out achievements and challenges may create difficulties in countries ruled by authoritarian regimes. Transparent information about what the organization does may be politically sensitive and therefore dangerous. Not all organizations and target groups will want such public exposure, and some would therefore prefer not to publish details about their outcomes and performance. If this is the case, alternatives to communicating transparent results-based PMR will have to be discussed in each individual case. Writing is a profession, and writing skills are a personal creative gift not learnt overnight. Writing concisely and precisely, especially in a foreign language, is challenging. But clear language is a prerequisite for the transparency that helps improve the quality of dialogue and learning, and ultimately strengthens democratic structures.
Personal effort and good management are needed to start using simple language and to create an atmosphere where reporting ‘failures’ is encouraged. Buzzwords are not only a habit, but also comfortably fuzzy: they hint at something good and can at the same time cover everything or nothing. They provide room for flexibility and interpretation. Buzzwords ‘warm our hearts’, as one workshop participant said; ‘they are like old friends who we do not want to say goodbye to’. It takes tough decisions from managers, as well as effort from staff, to change attitudes and habits: to select what to include in a document, what to keep in the program files, what to delete, and which buzzwords to unwrap.
This chapter suggests some steps towards communicating program results. Many programs fail to show what results they have achieved, not because of ill will or secrecy, but because good and bad results alike are hidden in layers of visionary terms and buzzwords. Good achievements are sometimes accidentally stumbled upon by outsiders to the program, who ask program staff informally, who then tell the stories. Sometimes evaluations reveal results and successes that were not previously known even to colleagues or other departments in NPA. Unwrapping results is crucial to good planning, monitoring and reporting: to learn from experience, to improve programs, and to communicate results to others.
This result is from an annual report. The actual result is a great success, but it is hidden in layers of buzzwords: ‘The level of awareness of the citizens on the developments that took place throughout the year was markedly improved as evidenced by the maturity in the level of participation and engagement with traditional leadership in the public meetings.’
To some extent, the following steps overlap with previously mentioned points, but they are still presented separately.
A reader of the report does not know the program and thinks: ‘It sounds nice, and I think I trust the organization. But what actually happened, and for whom? The “improved level of awareness” and the “maturity in the level of participation”: what do they mean? What is the specific change here? Are there any people involved?’
Use the results chain to organize PMR information
In the first phase of planning, monitoring and reporting (PMR) sessions, place only the essential information into 5 main categories:

Input: Includes everything invested in the project in terms of money, manpower or infrastructure.

Activity: Includes everything ‘done’ by the partner (for the projects) or by NPA (with organizational development of the partner) in order to obtain a result: it could be paying the salary of an accountant, holding workshops/training sessions, preparing radio programs, writing for or printing papers, etc. The activity should not say anything about the quality or aim of the activity. Information here often overlaps with output.

Output: Refers to the direct and immediate consequence of the activities: expected (in the plan) or completed (in the report). An output is a step on the way towards an achievement, but is not yet a result. In most programs it may take several years before there is more than the output/activity level to report. (Example: if the activity was ‘partner organization to hold 3 training sessions on Gender Equality (GE)’, an output after 1 year could be ‘26 female teachers completed training on GE’.) Information here overlaps with activity.

Outcome: Refers to the change that the organization and/or the target group will notice as a short, medium or long term consequence of the activities. In the report, an outcome is described as either positive or negative, as planned, or different from planned. Indicators (see chapter 4 and the appendix) are only required by NPA at this level.

Impact: Is the result of many factors, some of which lie outside the control of the program: effects of programs in civil society as well as other processes. Impact is measured and described at society level. Impact is usually not measured or monitored in the ordinary PMR process, but in evaluations at national level, or in larger reviews initiated by a donor, NPA or a partner (see chapter 4).
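The five links can be thought of as one record per project per reporting period, with outcome as the field that deserves the most care. A minimal sketch follows; the field names and the `ResultsChain` structure are illustrative assumptions, not an official NPA reporting format (the sample values are drawn from the GBV training case used later in this chapter).

```python
# Minimal sketch of the results chain as a record.
# Field names are illustrative, not an official NPA reporting format.
from dataclasses import dataclass, field

@dataclass
class ResultsChain:
    inputs: list[str] = field(default_factory=list)      # money, manpower, infrastructure
    activities: list[str] = field(default_factory=list)  # what was 'done'
    outputs: list[str] = field(default_factory=list)     # direct consequence of activities
    outcomes: list[str] = field(default_factory=list)    # change noticed by the target group
    impact: str = ""   # society-level change; usually left to evaluations, not PMR

chain = ResultsChain(
    inputs=["30,000 NOK from NPA to partner organization"],
    activities=["2-week GBV training of 25 high school teachers, semi-annually"],
    outputs=["19 teachers completed the training (year 1)"],
    outcomes=["2 teachers have included GBV in their teaching routine"],
    # impact deliberately left empty: too early to say in year 1
)
print(chain.outcomes)  # the outcome link is the centre of PMR discussions
```

Keeping impact empty in the early years, while outputs and outcomes fill up, mirrors the point made below: most programs can honestly report only at output level at first.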
Focus on outcome in PMR
All links in the results chain are important in program work. But in PMR sessions for planning, programming, indicators and reporting, special attention should be paid to making the information in the outcome link specific, realistic and concrete. When planned results at outcome level have been formulated clearly, the information to fill the other links will come more easily. Stay realistic: adjust the ambitions for the desired outcome according to the available resources.
Sometimes all a program can show in terms of achievements during its first years are outputs (for example the number of participants who attended a workshop). An output can tell us about a group of people’s physical presence at the workshop. This is important information to monitor and record in order to assess the performance of the organization. But the recorded output does not tell us anything about the quality of the participation, or whether the workshop made any difference to the choices made by the participants.
A workshop might in the long run lead to ‘heightened awareness’ and changes in the way that community leaders do their job in the future.
Only qualitative monitoring can tell us anything about what kind of change we are looking at, and about the process.
Case: A results chain, with emphasis on information at outcome level:
The objective: ‘To reduce incidences of gender-based violence (GBV) in district X by the year 2012.’

Input: 30,000 NOK (from NPA to the partner organization)

Activity: Partner organization XX conducts 2 weeks of training for 25 high school teachers on gender-based violence (GBV), semi-annually.

Output (in report): 19 teachers completed the training on GBV (year 1).

Outcome year 1: 2 teachers have included GBV in their teaching routine. 57 students have knowledge about GBV. (Indicators at this level are important, for example: Do the curricula used by the teachers include GBV topics? Do students report that teaching takes place in this subject?)

Outcome year 3: 1) 3 schools in the district have established a Fight Violence Student Board. In total, 17 cases of GBV have been addressed by the principal in monthly school meetings since the program started. 2) 7 teachers have included GBV in their lessons; 12 teachers have never applied the training in their teaching. 3) The public discussions about gender equality have led religious leaders in 3 school districts to reinforce the rule that girls must wear modest dress and head cover while at school.

Impact: (Not possible at this stage to say whether the number of incidences has been reduced. An evaluation of the whole program will take place in January 2012.)
Quantifiable participation in a training course or workshop is often all we have to show in the first year(s) of a program. NPA’s main donor, Norad, includes this type of OUTPUT as a result. NPA agrees that activities and outputs are achievements in their own right, that they are necessary stages towards the result, and that they need to be monitored and reported. However, we draw a line between output (completed activities etc.) and outcome (the change resulting from the activities), and only count the latter as full results.
When working to identify results, outputs must be documented, but measurements must go beyond activities and outputs. Look for the short- and long-term consequences of the activities. Outcome statements in the plan and report should be clear and concrete, specifying timeframes and target groups (beneficiaries/members/organizations and others). The results matrix should contain key information, be kept as simple as possible, and have ‘outcome’ at the centre of discussions.
Results are about all kinds of change
In some cases (for example for civil society organizations in Zimbabwe and Palestine) few people expect a program to produce a positive outcome. Just keeping an organization afloat under difficult conditions may require a tremendous effort, and this might be a good result given the circumstances. Reports must not be embellished to make results look ‘better’, but strive to capture all kinds of processes and changes. In order to be credible and trustworthy, setbacks must also be recorded. Reports should avoid generalizing and standardizing statements (like ‘the project experienced serious setbacks’) and instead specify or give examples of these setbacks and the main results, be they negative, lacking or positive. Negative results must be recorded, reported and later analyzed. These kinds of experience are our best source for learning and improving next time around.
Development programs or projects that achieve all their plans are rare. The results reported must reflect this reality. Results can be lacking (for example, ‘no change within the target group has been noticed in spite of 5 years’ training’) or even negative (‘as a result of the training, expectations among the participants about gaining access to the land by the lake became too high. Disappointed project participants, who after years of lobbying still had no access, decided to close down the project and looted the partner organization’s office.’)
Mobilization efforts do not always end up just as planned!
Distinguish between words describing ‘results’ and words describing overarching ‘values’
Fortunately most development workers are guided by high ambitions and political visions of a better society. These values are the foundation of organizational identity, policy documents, strategies and visions. During planning, monitoring and reporting (PMR), however, the overarching values need to be put aside. Not because values are unimportant, but because values-based language makes it difficult or impossible to monitor and communicate actual findings. Planning, monitoring and reporting is about ordinary, and sometimes even disappointing, reality. Evaluations and assessments, on the other hand, will make the necessary link between empirical findings from PMR and overall values, strategies and policies. (Also see page 14.) Select only the words that serve the PMR purpose.
Expected result in a plan and achieved result in a report
A plan is based on professional estimates and qualified guesswork, since it looks into an unpredictable future. A plan may therefore be somewhat general. However, the report at the end of a planning period must reflect what took place, where changes occurred, and sometimes even how these changes were dealt with. The report must demonstrate that systematic monitoring has taken place throughout the reporting period. If the report merely repeats the same phrases as the plan, this may signal that the project/program was not monitored, that the report is a desk-based cut-and-paste job, and/or that the reporting does not pick up on changes.
Example in which the report does not add information to what was presented in the plan:

Plan (for year X): Capacity of the organization will be strengthened…
Report (after year X): The capacity of the organization has been strengthened.

Putting together a report does not mean that all available and relevant information must be presented. The trick is to select from among the small and/or big changes, or lack of changes, that have been registered throughout the period. Examples or cases should also be chosen as illustration.

Example in which the report adds specific information:

Plan: Management capacity of the organization will be strengthened
Report (result): The organization has written and uses 2 out of 5 planned steering documents (strategic plan and personnel administration guideline)
Plans should ideally be clear and measurable. Many plans, however, are general, intangible and point towards a rather vague positive change: ‘raise awareness’, ‘increase capacity’, ‘empower’, etc. When monitoring and reporting, these buzzwords must be given specific content; they must be ‘unwrapped’ to reveal the hidden meaning: What kind of capacity? What is the product so far? Who was affected?
Example: how hidden results can be unwrapped when reported:

Results are hidden → Results are unwrapped
- ‘…capacity of the organization has been strengthened…’ → Organization X has elected a management team and has held its first board meeting.
- ‘…more people have gained access to natural resources.’ → 32 families in South Y are again allowed access to grazing ground that had been unlawfully fenced off by private farmers.
- ‘…number of women’s rights violations has been reduced.’ → The number of cases of gender violence reported to the NN police fell from 63 in 2007 to 15 in 2009.
- ‘…the interest in taking part in dialogue with authorities on land rights has increased among community members.’ → In one of the 3 target communities (ZX), 2 male and 1 female representatives from the community participated in an elders’ meeting on conflict resolution. (A case would be great here!)
- ‘…increased mobilization…’ → In June 2009, 153 persons participated in a rally against X legislation in the country capital.
Specify at which level results are found
Broadly speaking, results in the NPA network are found at three levels:

1. NPA supports partner programs/projects in communities (NPA → partner Y → program), and results are to be measured as change within networks, target groups or constituencies. NPA in cooperation with the partner is responsible for monitoring and reporting these results.
2. NPA supports partner organizations with capacity building or organizational development (OD) (NPA → partner Y). Results must primarily be monitored following the OD process with the partner. In addition, NPA is obliged to periodically assess the work of the partners with their constituencies.
3. NPA implements its own programs (NPA → program), and monitors progress and results as part of ordinary program work.

Examples of results at level 1 (partner program):
- Y held 2 workshops on land rights for peasants in districts A and C.
- 46 peasants (3 female) have attended the training.
- In (date), a peasant group from district C signed a petition to local government, demanding XZ.
- Plantation owner P, who had unlawfully grabbed common land, has evacuated the site. 71 smallholder farmers have moved back.

Examples of results at level 2 (capacity building of partner):
- NPA covers the training fee for the chief accountant working for partner Y.
- Admin staff use accounting system X for the yearly accounts.
- Accounts of partner Y follow international standards and have been accepted by an international accounting firm. Y attracts more donors.

Examples of results at level 3 (NPA’s own program):
- 5,000 eucalyptus seedlings planted.
- Approx. 2,300 seedlings survived the 1st year.
- 3 students at the agricultural school are responsible for the plantation and seedlings.
MONITORING - a frame of mind
This chapter is based on the understanding that monitoring is not primarily a matter of system or methods, but a matter of approach and attitude. This chapter encourages monitoring systems to be designed locally, and to be flexible and simple. We monitor in all walks of life. We monitor our children, the food, our health, the weather, the fuel consumption of our car. In these daily routines, we use a baseline: what is ‘normal’ in the circumstances, and indicators: what are the signs that tell us that things are not ‘normal’. Everyday indicators are based
on the behaviour of the healthy baby, the colour and smell of good food, the colour of the clouds in the sky, or the average fuel consumption per km. These signs help us decide what step to take next: seeing a doctor, throwing away smelly food, taking an umbrella for the day, selling or repairing the car.
Monitoring development activities, projects and programs is based on a similar common sense logic. In order to know whether things are on the right track, we need to look for signs of change, record them, and compare the situation today with how things were yesterday or last year. Without a baseline, and effective and practical monitoring, program work can become random. Monitoring can help good programs become better, and can help programs that are on the wrong track get back on the right track. Without monitoring, it is difficult to justify keeping on doing what we are doing, or why anybody should continue supporting it.
The rise and fall of monitoring systems
All program countries and partners monitor their work either formally or informally. Most larger partner organizations and country programs have a system in place for planning, monitoring, reporting and evaluation, often based on a version of LFA (logical framework analysis). Many partners and programs find these monitoring systems useful; others find them complicated to use and may therefore not monitor systematically. People involved in programs have detailed knowledge about progress and results. However, this information is often lost on the way from individual memories and experiences, through the monitoring formats, to the report. As the previous chapters have shown, plans as well as reports end up not doing justice to interesting programs because they follow standardized language and complicated formats. In development, monitoring and evaluation (M&E) has become a profession for specialists who are able to utilise complicated LFA systems and M&E vocabulary. While civil society organizations proclaim that programs are partner oriented and bottom up, many monitoring systems are designed at donor level, are managed by academics with special skills, and have a clear top-down effect. NPA wishes to reverse this trend by simplifying the monitoring approach so that PMR can be managed by program practitioners. Program staff/stakeholders should consider monitoring part of ordinary program work. The monitoring method should therefore be kept simple and under local control.
Design and use a monitoring system that suits your needs and produces the documentation required.
Over the last 10 years, NPA has made attempts to streamline monitoring and evaluation functions. LFA has been recommended as the norm, but other systems have also been rolled out, such as PES (program evaluation system), and MSC (most significant change).
Looking back, we have learned that introducing new methods does not in itself lead to systematic monitoring taking place in the programs. This is also a finding in the ‘2007 Organizational review of NPA’ (Kruse et al.). Other approaches are required.
NPA’s approach to monitoring
Monitoring is about more than a system; it is about a monitoring culture. The chosen monitoring methods should be designed to fit the specific needs, capacities and size of each program/organization. Monitoring should be practical and participatory, ensure quality in the program, and give others insight into the program (transparency). A monitoring system may look impressive, but in NPA’s view it is only valid if: 1) it is actually used regularly, and 2) it manages to produce relevant information for internal monitoring, and for plans and reports. Data obtained from monitoring should be compiled and selected to provide content for reports and plans. At the same time, precise plans and reports are the basis for good monitoring. Monitoring should not be scientific and impressive, but practical and ‘good enough’.
Measuring change
To monitor change in ‘intangible’ areas like attitudinal change, mobilization, awareness, empowerment and organizational development, we need both measurable (quantitative) and descriptive (qualitative) methods. Monitoring progress and change in these programs has often been a matter of counting activities (the output): for example, the number of workshops held, the number of participants, the types and numbers of leaflets printed. This is important information for monitoring, but not enough to convince anybody of any real progress or change in the long run. To measure ‘intangible’ results, the most important thing is to describe who the change is for and how the interventions have affected specific people or organizations. Have workshops triggered a change in attitudes, choices, or actions? Or are the topics discussed in the workshop forgotten? How have organizations and people acted or reacted? Have new strategies been made? Systematic monitoring of social change often involves asking open-ended questions, taking note of different and often contradictory answers, and finding a way to document and report these findings. Social projects and programs, advocacy, awareness, and empowerment cannot be measured by documenting activities and assuming outcomes. An ‘intangible’ result reported as ‘Capacity building has enhanced organizational structures’ does not come across as a credible result, but rather as a rephrased overall objective or goal. The statement contains two general ideas, but hides the result: what happened, the people/the organization, and how the result appears to those concerned. An example is needed to illustrate what the result may mean. Ask: who, what and how? Make the sentence active, for example: ‘3 members of the regional association of lawyers have revised the statutes of the organization X’.
Qualitative descriptions are less precise than quantitative measurements. This does not mean that qualitative descriptions/measurements are unreliable. Monitoring is not done to produce empirical evidence that is 100% precise, but to come up with information that is credible, relevant and good enough.
Make sure everyone involved in program monitoring is able to access, understand and use the method. Management must actively support the chosen approach to monitoring in order for it to work effectively. Below are some of the main steps that must be taken.
Change involves people and the choices they make as individuals and as groups. Political change, empowerment, awareness and capacity building are therefore bound to take place in many different forms. When identifying results, we need to measure HOW awareness, empowerment, etc. are perceived, not only THAT training or other activities took place.
Setting up a monitoring system
First of all, any monitoring system must be useful rather than impressive. Check available literature and formats on the topic if necessary, but remember that these systems are often designed for large organisations. Start by picking only the essential parts. Make sure that systems and matrixes are few in number, simple and results-based. Start identifying and ‘unwrapping’ your buzzwords as early as possible; if necessary, rewrite your plan.
List the different requirements of different stakeholders (donors, government, partner organisations) with regard to what monitoring should cover. If possible, find a pragmatic compromise between these requirements and streamline the various information demands. Keep in mind that many donors are flexible and willing to accommodate your needs. Set up your monitoring system so that the required information can be easily accessed and compiled into the various reports.
1. Start monitoring on a small scale and pilot (try it out) whenever possible. Do not overload the system with references to objectives and too many indicators.
2. Revise existing plans: if necessary, reformulate objectives and expected results (outcomes) so they are more realistic and monitoring becomes easier (see previous chapters).
3. Select which outcomes/results to monitor: disaggregate and unwrap each outcome; make it concrete and clear. Select only the most important outcomes to monitor. Example of an outcome statement that needs unwrapping: ‘To increase organisational capacity (OC) of Organisation X’. Unwrap OC to specify key areas: What kind of OC? Where? Who? By when? Making the outcomes clear already during the planning stage is crucial in order to set the indicators, baselines and targets by which to monitor and evaluate.
4. Make sure the monitored outcome reflects the relationship/level where the funding takes place. Examples of different relationships/levels: NPA supports a partner’s various programs, NPA supports a partner in building its organizational capacity, or NPA implements its own program (see step 6 in chapter 3).
5. Prepare and select the basic information needed in planning and reporting. Four basic levels of information are needed when starting a results-based PMR process (see also chapter 3):
• The activities: How are you going to reach the objective?
• The outcome: What change will the project achieve (plan)/has it achieved (report)?
• The ‘how do you know’ information: the signs (indicators) showing how the process towards change will be/has been monitored
• The overall reason why the project was started (objective)
Example illustrating the different types of information:

Outcome: Awareness raised about gender equality among 100 women in Y province
Activities: 14 training seminars, 2,000 leaflets, etc.
Baseline:
- No. of girls enrolled in higher education in Y: 13
- Women in Y have no or very little information about the law and their rights
- Women are not allowed to own mobile phones
How do you know (indicators):
- Year 1 (output): 70 girls have enrolled in higher education
- Year 3: Approximately 25% of women in Y have their own mobile phones
- Year 3 (unplanned): Divorce rate increased by 50% in Y
NPA recommends that a planning/monitoring/reporting process should start with a group process where the aim is to formulate and select key information using a simple format (e.g. as above). In PMR group work, use an old-fashioned paper flipchart and colour markers so that the group can follow the process: how overlapping information is deleted and concepts are unwrapped. A challenge is to keep phrases belonging to overall policies and strategies away from statements about results, indicators and activities (see also chapters 2 and 3). Distilling concepts and separating the different levels and terminology is essential in order to effectively monitor and document outcomes.
Some rules of thumb for this process:
• The statements must be as precise as possible. Be selective, avoid generalizations.
• Figures, timeframes, places and target groups must be presented at least at one of the levels.
• Keep in mind throughout the PMR process that the choices about program strategy, root causes and overall principles have already been made. A PMR workshop has a different and complementary agenda. Monitoring results must stay close to facts and the practical sides of a development program.

6. Select basic indicators
Indicators are the main tools for sound planning and monitoring, and essential in order to monitor outcomes. Indicators are the ‘footprints’ that show where the project is moving, the signs that point towards progress or change in a program or project. Indicators show that the project is going in the expected direction, that nothing is happening, or that the project is having negative effects. They answer the key question in monitoring: ‘How do you know the result will be/has been achieved?’

NPA requires indicators for the outcome level only
Some donors require indicators at objective, outcome and output levels. NPA recommends that indicators be worked out for the outcome level only. The reason for this is that indicators at objective/impact level tend to become broad and immeasurable, and often overlap in content with the outcome statement. Indicators at output level are in effect often merely the quantitative element of the output statement. For example, the finding ‘215 high school teachers have been trained in gender and human rights’ is fine as it is, and does not need to be separated into two blocks of information (as output (training conducted) and as output indicator (215 teachers trained)).
Indicators often contain good information for communicating with others
Indicators are essential for making the program/project meaningful, also to others. They help ensure transparency, because they provide concrete facts about where we are on the path to reaching our objective at all stages. They allow us to gather systematic information about project progress without having to wait for the evaluation. Basic indicators are, or should be, part of the baseline information. They allow monitoring to be based on what the situation was before the program started. For example, if the program aims to increase the participation of women in local government, the baseline would provide information about where women participated, how many, how often, and how. If a program/project is ongoing without a relevant baseline, this information has to be established as soon as possible.
Select few, manageable and good enough indicators
Identify key indicators as early as possible in the process. All involved project and program staff should brainstorm, and then select a few, but relevant, indicators according to the CREAM criteria: clear, relevant, economic, adequate, monitorable (see checklist). Take out indicators that are too ambitious, costly or difficult, even though they might be impressive.
Quantitative indicators
Count whatever can be counted: the number of workshops held, the number of people who participated, the number of days, etc. Quantifiable indicators can also signal important results: e.g. an increase in the number of female voters could be a good indicator in programs where women’s participation in politics is the aim. What is important here is that baseline information about the number of female voters before the program started is available.
Change takes time. Sometimes the only trace of a result at an early stage of a program/project is quantitative information. A completed activity or a group of participants after training are necessary facts, but they do not in themselves indicate a change for a group of people or an organization, i.e. a result. They can tell us that the organization implemented the activities it planned.
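In practice, comparing a quantitative indicator against its baseline is simple arithmetic, and a spreadsheet or a few lines of script can keep the bookkeeping consistent. The sketch below is illustrative only; the indicator name and figures are hypothetical, not taken from an actual NPA program:

```python
# Hypothetical figures: one quantitative indicator measured at baseline
# and measured again after year 1 of the program.
baseline = {"girls_enrolled_in_higher_education": 13}
year_1 = {"girls_enrolled_in_higher_education": 70}

def change_from_baseline(indicator, baseline, measurement):
    """Absolute change in one indicator since the baseline measurement."""
    return measurement[indicator] - baseline[indicator]

change = change_from_baseline("girls_enrolled_in_higher_education", baseline, year_1)
print(change)  # 70 - 13 = 57 more girls enrolled than at the baseline
```

Without the baseline figure (13), the year-1 figure (70) says little on its own; the comparison is what turns a count into a monitored result.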
Qualitative indicators
Descriptions, subjective views, opinions, observations and examples are all qualitative indicators. They are needed to measure whether the target group experiences or initiates any changes after an activity has taken place. While quantitative indicators are essential for short-term monitoring, qualitative indicators are important for medium- and long-term monitoring. Qualitative indicators can for example tell us whether certain training has made any difference to the participants: changes in behaviour, attitudes, or actions. These indicators should be established early in the project life, preferably as part of the baseline information.
All indicators must be specific, and disaggregated according to gender and/or other relevant groupings (young/old, ethnic group, power holders, religion, etc.). This may require information to be collected separately for men and women, for different ethnic groups, for different age groups (e.g. children, youths, adults, elderly), and for different economic (e.g. rich, poor) and social groupings (for example agriculturists, pastoralists, businesses).
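Disaggregation is easiest when the relevant attributes are recorded alongside each count at the point of data collection, rather than reconstructed later. A minimal sketch of this idea (the participant records below are invented for illustration):

```python
from collections import Counter

# Invented training-attendance records; each record carries the
# attributes needed for disaggregation (here gender and age group).
participants = [
    {"gender": "female", "age_group": "youth"},
    {"gender": "male",   "age_group": "adult"},
    {"gender": "male",   "age_group": "youth"},
]

# Tallies per grouping, ready for a report line such as
# "3 participants attended the training (1 female)".
by_gender = Counter(p["gender"] for p in participants)
by_age = Counter(p["age_group"] for p in participants)
print(by_gender, by_age)
```

The same pattern extends to any other grouping the program needs (ethnic group, economic status, etc.) by adding one more field per record.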
Standard indicators
NPA recommends that indicators primarily be selected based on the context rather than from universal lists. Pre-established general indicators often fail to reflect changes in a particular social context. They can be an invitation to do ‘desk monitoring’: ticking off indicators from a list without basis in the facts on the ground. Indicators should represent the bottom-up/actor’s perspective on monitoring.
Suggestion for where to look for indicators for the outcome ‘Advocacy/capacity building of …’
Quantitative measurement : • Number/type of leaders who received information or participated in activities • Number/kind of material produced and distributed • Number of presentations or meetings held with opinion leaders • Number and kind of media coverage that the presentation or meeting got • Number/kind of people who got the information • Number/kind of people who engage in network/ organizations that work with the topic
Qualitative measurement: • What do leaders/opinion makers know about the topic? Ask this as a baseline, then at regular intervals. • How many leaders (opinion makers) support the issue in public? Any changes? • Have leaders changed their policy or practice as a result of the activities? Be specific! • Have the messages/issues from the program been incorporated into the documents of decision makers? Which? • Is it possible to measure/describe increased public support for these political themes? How? • Have people become curious about the topic? Do you observe that attitudes have changed? • Do certain groups of members/beneficiaries have new knowledge of and interest in the topic/issue? How is this noticed?
Suggestion for where to look for indicators for the outcome involving ‘Increasing women’s participation in politics’
Quantitative measurement: • Count the number of registered female and male voters • Number of women’s networks • Number of women in these networks, number of men in these networks • Number of men supporting women’s rights/gender issues These examples are adapted from Ann Kristin Johnsen’s presentation (NORAD)
Qualitative indicators could be found here: • Women’s opinions about their own political participation, changes • Women’s chances of being heard • Women’s opinions in current political issues • What is the role of the women’s networks in the community?
These questions could be used for deciding indicators to measure ‘Reducing violence against women’.

Look for quantitative information for example here: • Baseline: where are VAW cases registered? Monitor any changes/frequency in reported cases. • The frequency of victims’ use of health services (clinics, private practitioners, other health workers)
When selecting indicators, keep in mind that you need them for doing actual monitoring. Stay realistic!
Look for qualitative indicators for example here: • Women’s diverse views on the severity of the problem. Be specific. • Women’s attitudes and solutions. Be specific. • Power holders’ knowledge about VAW. Note down their attitudes. Any changes over time? • What are the reactions in the community? What is written in the media? What are the opinions of religious leaders?
7. Stories and cases as part of monitoring systems
Stories can be a good supplement to ordinary monitoring systems, as they provide more in-depth qualitative data on results. They can help communicate and analyze changes in a program/project where the result is particularly relevant or interesting, where the exact effect is impossible to predict in a plan, or where the preset indicators cannot capture important changes. Stories, and the collection of them, also enable program staff to monitor in a participatory way.

Stories are becoming increasingly important in documenting progress in social and political development work: cases, examples and quotes can provide good insight into a topic. The advantage of stories is that they place people as actors in the programs and projects, which is necessary if we wish to document how change affects people, and in order to get other people interested.
A classic situation is a result statement in which there seem to be no people involved: ‘active participation of women in local governance has increased’. If this is indeed a good result, the statement deserves to show how the women participated. This could be done, for example, by combining quantitative information with a quote from an interview, included in the report in a text box. Result: After 4 years, out of the 32 women who participated in the WCDI training, 21 continue to meet every month. ‘I was elected to the district council as the vice secretary. I was the first woman to hold such a position. Even my husband was proud of me,’ said Hortencia, 43, a shopkeeper in Esperanza.
A story can be based on examples, quotes, even a poem, a photograph with text, a life story, an interview with a staff member or a village chief.
Stories can illustrate the same project or result by showing several and/or contrasting views and perspectives. Stories make it possible to present voices that are not otherwise heard, including those that may be against the project. They can help make others understand a topic that is otherwise difficult to illustrate.
Stories may also be part of a baseline. A qualitative baseline may for example include a story describing the situation as it is now, and predicting a picture of what the situation might be in the future. These two stories will be complemented by similar stories over time, illustrating what has changed, and they may be important data when analyzing why change has taken place.
Some challenges
Story writing is a creative task, and requires patience with the doers and users, as well as a willingness to put up with a period of trial and error. Story writing is a new and unstructured approach in PMR.
Program type: ‘Capacity building of an organisation’
Baseline: Describes the organization as it was at the beginning of the program. It does not aim to answer questions about why the program was conceived, the justifications for the program, or the general political or economic context at national level. Start by narrowing down the scope of possible information:
1. Whose capacity is to be strengthened? (Individuals, parts of an organisation, the whole organisation, a network of organisations/institutions?)
(See checklist on baseline page 40)
Many program workers have been used to highly formalized, structured methods with matrixes and systems, and may be apprehensive of this method. Follow-up and encouragement from management is therefore essential. Program management is also responsible for making sure that time is set aside for trying out story writing as a method. Feedback loops (from the writing of the story, to comments from users, to the actual use, and feedback to the producer) are time consuming but important.
As with other parts of reports and plans, there is also a chance that stories that have been collected and written will not be used in reports or publications. This is a natural part of selection in all types of information, but may demotivate staff as creative effort has been invested. With practice and better skills, the quality of the stories will improve, and the areas where stories can be used will increase. The potential of story writing as a tool for monitoring and communication is vast. (See checklist on stories)
2. What is the situation today for this organisation? Be specific. Develop qualitative and quantitative indicators here. In particular, consider the availability of resources (financial, human, administrative, etc.), achievements, and leadership forms.
3. What are the direct challenges the program wishes to address?
4. What capacities are to be strengthened? (Strategies, management systems, production processes, financial resources, attitudes and values, leadership?)
8. Establish a baseline as soon as possible
A baseline is a concrete description of the situation at the start of the intervention, and of the context of the problem which the project/program aims to address. A useful baseline description contains relevant qualitative and quantitative indicators that can later tell us, during monitoring and evaluation, how far we have come in reaching the result. It does not aim to answer questions about why the program was conceived, the justifications for the program, or the general political or economic context at national level.
Baseline information can be collected in different ways: from recent and relevant evaluations, surveys or research, or as a study undertaken as part of the program assessment. Information from the baseline is used throughout the program period to check whether progress is being made.
Baselines are the first measurements of the indicators. Collect only the information that the program staff is going to use, and that relates directly to the indicators that have been identified. (See checklist)
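Since the baseline is simply the first measurement of each indicator, monitoring can be kept as one running series per indicator, with every later measurement compared against the earliest one. A minimal sketch of this bookkeeping (the indicator and all figures are invented for illustration):

```python
# Invented time series for one indicator; the earliest entry is the baseline.
registered_female_voters = {2007: 120, 2008: 180, 2009: 260}

def progress_since_baseline(series):
    """Change between the earliest (baseline) and latest measurement."""
    years = sorted(series)
    return series[years[-1]] - series[years[0]]

print(progress_since_baseline(registered_female_voters))  # 260 - 120 = 140
```

Keeping baseline and follow-up figures in one place makes it harder for the baseline to be lost or silently redefined during the program period.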
Summary
NPA’s partnership strategy underlines partners’ autonomy over their programs and tools. As different partners and programs have different preconditions and capacities, planning, monitoring and reporting (PMR) methods need to reflect these differences. NPA will not introduce a top-down system to be applied by all programs and partners. However, NPA requires that the content of the PMR information sent to NPA for processing satisfies a minimum standard. PMR documentation must reflect the situation ‘on the ground’, the variety among partners and target groups, and the different political and social experiences. The quantitative and qualitative information documenting results must be precise, and must complement, not repeat, the objectives or strategy chosen. Monitoring means keeping track of negative and positive changes in a program, in a simple and systematic way. Monitoring is done to enhance programs and learning, and should not be a job for specialists. Monitoring and documenting are not only technical formalities; they can release creativity, new information and perspectives. Practical and transparent monitoring is a fundamental part of implementing democratic principles in programs and organisations.
RESULTS
A result is: ‘The changed situation that arises for the target group/organization/partner after the activities have taken place.’
• Results of development work can be short-term or long-term. Results are always concrete. Results can be subjective. Results can be negative.
• Results are monitored in the changed situation for a group of people.
• Results are monitored and documented at three levels: 1) NPA-supported capacity building/organizational development of partners, 2) NPA-supported partner programs, and 3) NPA’s own programs.
• Planning, monitoring and reporting (PMR) systems need to be ‘good enough’, not perfect. If the system itself is too advanced, it is likely that it will not be used, or will only be used by an ‘expert’.
• Results-based PMR is the responsibility of the management, and the task of everyone involved in programs.
• Allow time for results. Avoid rushing, or wishing great results into existence. It may take several years before substantial change can be documented, especially in social programs. Reports based on activities and short-term effects (output) are therefore acceptable and expected at least for the first years of a program.
• Good indicators are essential in order to ensure the realistic monitoring of progress and results. Social environments and communities differ, as do the capacity of staff, program size, type and complexity, people’s priorities, and results indicators. Standard indicators in social programs are therefore often not relevant, practical or recommended. Results should be monitored using a few ‘CREAM’ indicators (see checklist indicators) drawn from the context.
The measuring and monitoring of results must be done using both quantitative and qualitative means. Use the monitoring system you are used to, but make it simpler by keeping only the necessary parts. It takes time to get a system working, as the users need to feel a certain ownership of it. A new system can work well, but can also create new problems and remove the sense of ownership. Distinguish between monitoring and evaluation (M&E). Monitoring and evaluation functions are often seen together. NPA, however, separates the two methods in order to underline fundamental differences: monitoring is done pragmatically and systematically by all stakeholders, while evaluations apply more rigorous standards and can be done by ‘experts’.
Narrow the focus The baseline information is used throughout the program phase to bounce monitoring data off. Therefore, concentrate on studying only the areas where you want to see results. A baseline does not need to analyze matters at the level of the larger political picture. Note the difference between a program baseline, a political/national baseline study, and academic research. National surveys or academic research can constitute a good baseline for a development program. For most CS programs/projects we recommend mapping out the situation for limited communities/networks/organizations. Identify the different sources of data. Identify the most suitable method for data collection. Discuss who will collect the data, who will analyze it, and at what intervals. Balance the cost of establishing the baseline (money, time, human resources, etc.) against available program resources. Discuss how the data should be presented. Discuss who will use the data.
The baseline Describe the situation for the program as result areas. Select a few good results indicators to be followed up during the entire project (see Checklist INDICATORS). Describe the actual situation for people (in communities or organizations), rather than using concepts from theory or ideology. Use participatory methods where possible. Use and present different perspectives: women, men, leaders, members of the organization, journalists, husbands, children, and the ‘man in the street’ in the local communities, before analyzing their situation. Identify risks, analyze possible responses. Use simple language! Possible pitfalls with baselines Extensive and resource-consuming studies are produced, which may never be completed. The ambition of producing mind-blowing insight gets in the way of usefulness.
INDICATORS Use the baseline for selecting results indicators. Accept that measuring/describing change according to preset indicators will make negative results visible. Decide whether to work with indicators at outcome level only (our recommendation). Make sure the outcome statement in the plan is concrete enough for measurements to take place. Specify if necessary before choosing indicators. Identify the main areas where measurements are essential and possible. Avoid selecting only standard/global indicators. Develop indicators that are concrete enough to be monitored. Make sure you can observe, count, smell, hear, etc. the indicator at all stages of the project cycle.
Criteria for selecting result indicators: ‘CREAM’
Clear: Credible, specific and crystal clear about what you mean. No general terms. No buzzwords.
Relevant: Indicators are for measuring or describing how far the expected result has been met, nothing else.
Economic: Monitoring indicators must not be too expensive or too demanding of the available human resources.
Adequate: The indicators, seen together, must be good enough to measure progress. Choose the right number of indicators in relation to how reliable and how essential they are.
Monitorable: It must be possible to check indicators (time, logistics, ease), and they must be simple enough to interpret in a later analysis.
Choose indicators with different perspectives and different degrees of precision (triangulation).
Quantitative or qualitative indicators? Quantitative Crucial for monitoring/documenting change, especially during the first years. Often part of output information. Used to check program activities against budget. But ‘the number of activities completed’, ‘the number of people who participated’, etc. are most often not enough to show where the program is heading.
Qualitative Qualitative indicators measure and describe the effect of the project/program on the particular beneficiaries/target group(s) (outcomes). Qualitative indicators describe how the target group experiences certain components of the activity. They can be subjective views, or observations about changes in behaviour.
Indicators should point out who benefits from or is affected by the project. Disaggregate according to gender, age or social group. Words like empowerment, awareness, democratic, solidarity, enhance, etc. become ‘buzzwords’ if no description is offered. Buzzwords can never be indicators. Unwrap! Indicators can be part of the baseline information (ideal), can be added at a later stage (practical), or both. Indicators add information about the particular program/project; they do not repeat or rephrase it. Indicators should represent the exact situation in the context and the perspective of that particular organization, target group or actors.
How and where to check on indicators: Check daily/weekly/monthly reports. File press reports/photographs. Report from and record meetings. Consider doing surveys/analyses/questionnaires. Conduct group discussions. Select key informants. Make storytelling a regular routine (see separate checklist). Develop short standardized forms for recording essential quantifiable data. File and summarize meeting reports (participants, conclusions). Track changes. Develop short matrices for project visit reports, linked to selected indicators, for staff at all levels. Keep staff field diaries, noting particular events, comments and quotes that are not recorded in regular monitoring systems (unexpected results). Start by making a selection from among these, and add other sources.
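The short standardized forms mentioned above can be as simple as a fixed set of fields per observation, whether kept on paper or in a spreadsheet. As a purely illustrative sketch (the field names and the example indicator are invented here, not taken from this handbook), such a record and a simple summary might look like:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for one monitoring observation.
# Field names and the example indicator are invented for illustration.
@dataclass
class MonitoringRecord:
    record_date: date
    location: str
    indicator: str   # which selected CREAM indicator this observation tracks
    value: float     # the quantifiable observation
    recorded_by: str
    note: str = ""   # room for unexpected results, quotes, context

def summarize(records, indicator):
    """Count and total for one indicator - enough for a short report table."""
    values = [r.value for r in records if r.indicator == indicator]
    return {"observations": len(values), "total": sum(values)}

records = [
    MonitoringRecord(date(2010, 3, 1), "Community X",
                     "women_attending_meetings", 12, "Field officer A"),
    MonitoringRecord(date(2010, 4, 1), "Community X",
                     "women_attending_meetings", 19, "Field officer A",
                     note="Two new members spoke for the first time."),
]
print(summarize(records, "women_attending_meetings"))
```

The point of the fixed fields is that every observation carries the same minimum information (date, place, indicator, who recorded it), while the free-text note preserves the unexpected results the checklist asks field diaries to capture.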
Pitfalls and challenges, with possible solutions:
Pitfall: Too many indicators are produced; monitoring becomes practically impossible.
Possible solution: Keep basic records short and concise; restrict the volume of information.
Pitfall: The information is too difficult to collect.
Possible solution: Limit the number of indicators.
Pitfall: Possibility of bias (for example, only asking those who are positive about the program in the first place).
Possible solution: Use representative examples rather than generalizing findings.
Pitfall: Events other than those anticipated at the outset of the program cause more change in the indicator.
1. Before you start writing: Who are you writing for? Does your text match the reader? Try putting yourself in the place of the reader/user: Would she/he understand your text? Is the language you are writing in foreign to the reader? To you? If the answer is yes, show respect and keep it simple. Be careful not to take insiders’ terminology and knowledge of context and program for granted. What does the reader need to know? 2. Use simple language Always ‘Keep It Simple and Smart’ (KISS). Simple language prevents ambiguities and misunderstandings. Latin, academic or specialized terminology, bureaucratic language and development buzzwords exclude many of those we would like to include. Put people and action into the text. Avoid general, passive and impersonal phrases. Ask ‘who did what?’ An example in which people and actions are clear: ‘Program staff conducted 5 training sessions with 13 council lawyers on gender based violence (GBV)’, whereas the passive, impersonal version is: ‘Awareness raising on GBV took place.’
3. Short is good If you have many things to say, say one thing at a time. Separate the different messages. Short sentences help make the message clear. A readable sentence has no more than 22-25 words! Therefore, use a full stop after each message, or whenever possible. 4. Prove your statements (‘Show - don’t tell!’) Be specific and clear; use concrete examples to show what you mean. Showing means creating ‘pictures’ in the mind of the reader.
Example: Meaning is told: ‘Environmental awareness has been raised’ Meaning is shown: ‘People in community X have stopped chopping down whole trees for firewood’
5. When the text is finished: Read it to yourself. ‘Hear’ if you can understand your own written message. If you think it is unclear, others will too. If you write PMR texts in your own language, make sure the translation to English remains close to the original in meaning.
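The 22-25-word guideline from point 3 can even be checked mechanically before a report is sent off. A minimal sketch, assuming a rough regex-based sentence split (good enough for plain PMR prose, not a real linguistic tool; the sample report text is invented):

```python
import re

def long_sentences(text, max_words=25):
    """Return sentences exceeding max_words - candidates for splitting up."""
    # Crude split on sentence-ending punctuation; an assumption, not a full parser.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s for s in sentences if len(s.split()) > max_words]

report = ("Program staff conducted 5 training sessions with 13 council lawyers. "
          "Awareness raising on gender based violence took place in the three "
          "northern districts during the second quarter of the year with support "
          "from partner organizations and local authorities as well as several "
          "community groups that had requested training earlier.")

for s in long_sentences(report):
    print(len(s.split()), "words:", s[:60] + "...")
```

Here the first sentence passes, while the second, passive one is flagged as too long - the same sentence a careful read-aloud (point 5) would catch.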
What is monitoring? Monitoring is part of everyday life and part of the project planning process. Monitoring provides regular information about the achievement of results at a particular point in time. Monitoring signals challenges and risks to be dealt with throughout the project period. Monitoring means following up/checking whether things are going according to plan. Monitoring is an ongoing activity that takes place at all stages of the project cycle. Monitoring links all parts of the program/project cycle and enables adjustments to be made in a methodological way. Monitoring processes are initiated and ‘owned’ by program managers and implementers. Monitoring is done by program and project staff, as well as target groups. Monitoring may not explain why problems occur or why programs do not reach planned outcomes. This analysis is usually dealt with through separate discussions, reviews and/or evaluations. Monitoring systems should be flexible, and adapted to the size/capacity of the organization. Results are monitored at these levels: activities, output, outcome. Why monitor? To check up on the program/project in a systematic and productive way. To be accountable to beneficiaries/target groups, partner organizations, members and donors.
To motivate stakeholders for further action.
To improve performance. To improve results (outcome and output). To improve learning. To secure ownership of results and process (participatory methods, stakeholder involvement in the process). To improve communication. Who does what in monitoring? Programs and partners decide on their own planning, monitoring and reporting (PMR) systems/methods/formats. NPA sets a minimum standard for the information presented in these formats, and provides a set of simple tools for monitoring. All stakeholders (target groups, organizations, partners, program staff and others) in the program/project should monitor progress and change according to set criteria and time intervals. Field staff (NPA and partner) are key monitors and producers of information. While monitoring, keep in mind: Monitoring means following up on progress and status regarding RESULTS. Results encompass all sorts of development in the program or project: positive, negative, according to plan, something entirely different, or no change at all. Monitoring should be done according to selected indicators. Monitoring is a job mainly done in the ‘field’; monitoring is not a desk exercise.
Bottom up/actor’s perspective: the information is gathered at context/field level, and analyzed at program level. In NPA’s PMR approach, monitoring shows how the project/program works in the particular context. Monitoring information should not become ‘top down’, reflecting the overall policy/vision. NPA encourages partners and its own staff to use simple planning/reporting frameworks. Logical formats and plans should be used in all phases of the project. They should be designed to be used, and to fit the size, type and complexity of the program/project. Using simple language is crucial for monitoring, as for all other communication. Monitoring is the responsibility of program management. NPA recommends that monitoring should primarily be done by the practitioners at program and project level. Separate M&E units are not recommended, as this removes ‘ownership’ of monitoring and of the results. External consultants should be hired only to undertake evaluations. Indicators are the lifeblood of sound planning and monitoring. Indicators should be selected from the program context. Indicators should be developed at the planning stage (baseline), and should be regularly monitored at all stages (including evaluation) of the process. (See separate checklist.) Choose monitoring methods Daily/weekly/monthly reports? Filing press reports/photographs? Reporting from and recording meetings? Surveys/analyses/questionnaires? Group discussions? Key informants?
Short standardized forms for recording quantitative data? Meeting reports (participants, conclusions)? Project visit and field trip reports according to chosen indicators, for staff at all levels? Staff field diaries, noting particular events, comments and quotes that are not recorded in regular monitoring systems (unexpected results)? While selecting your methods, consider: The indicators must be limited in number, CREAM, and always provide the point of departure for good monitoring. Resource availability, access, needs, time constraints. What degree of precision is needed? Strictly necessary? Balance COST + EFFORT against TIME. Combine different data collection strategies (triangulation).
Pitfalls and challenges, with possible solutions:
Pitfall: Too much data is produced.
Possible solution: Keep basic records short and concise; restrict the volume of information.
Pitfall: Findings are aggregated, buzzwords are used.
Possible solution: Limit the number of indicators. Use examples rather than generalizing your findings.
Pitfall: Systematic monitoring is not done; it is considered a desk job for ‘experts’.
Possible solution: Knowledge about how the project/program works is found in the context, mainly with field staff. Monitoring happens here. It is the responsibility of the program office to decide the tools, analyze incoming data and ensure feedback of information. Monitoring of program progress and results must be done by field/project staff (as well as program staff at all levels, including HO).
What is an evaluation? Part of the program cycle, and should be planned for from the start. A periodic, often retrospective, assessment of an ongoing or completed program/project/organisation/strategy. Can cover the whole result chain from input to impact.
Can look at the relevance, performance, efficiency and impact of a piece of work with respect to its objectives, policies and strategies. Is usually carried out at a significant stage in the program’s or project’s development, e.g. at the end of a planning period, as the project moves to a new phase, or in response to a particular critical issue.
Difference between monitoring and evaluation (same as in Checklist MONITORING):
Monitoring:
Looks mainly at activities/output and at outcome, never at impact.
Follows up activities and results in relation to preset indicators.
Continuous; data are routinely collected. Tracks progress against a small number of predefined indicators.
Done by program or project staff. The responsibility of program management.
Evaluation:
Looks at outcome and impact. Can also look at input, output and strategy/policy.
Based on more formal surveys, interviews, field work, information from the baseline, indicators, data from previous monitoring, as well as findings from other relevant evaluations. Multiple sources of data.
Episodic, ad hoc. Questions the validity and relevance of predefined indicators.
Deals with a wide range of issues decided by management. The responsibility of program management, and/or HO and/or the donor. Done by external evaluators, possibly with an internal participant in the team.
Checklist for initiating an evaluation Define the purpose of the evaluation. Involve key stakeholders. Define the questions you want the evaluation to answer. Check already available information on the topic. Estimate cost, set a budget limit. Formulate the TOR. Recruit consultant(s). Specify report structure and size. Decide roles/tasks/necessary practical assistance during the research phase. Who will be involved in the evaluation process, and how will they contribute? Decide on dissemination of the report. Questions to ask in the process: Who will use the findings of the evaluation? What kind of information is required? What are the policy issues that should be addressed? What other relevant similar evaluations can be consulted? Are their recommendations and findings relevant for your evaluation purpose?
Is the necessary baseline and monitoring data available? Are the key informants (planners, project staff, target group representatives, etc.) available? Are the required skills and qualifications of the evaluators formulated in the TOR? Do the evaluators satisfy requirements for independence and objectivity? Are key stakeholders invited to respond to the draft report? How will comments be incorporated into the final report? Disseminate the results of the evaluation to all interested parties. Role of evaluators The evaluator’s role is to help clarify the purpose and realism of the desired results. Evaluators can offer conceptual and methodological options. Evaluators interpret the situation of the program or project and can suggest ways forward. They should give constructive criticism rather than be a judge of success or failure. Evaluators provide feedback, generate learning, suggest direction, and help develop new measures and monitoring mechanisms. The evaluator is a facilitator who brings critical thinking to the organisation. The evaluator should encourage an urge for more learning, not a fear of failure. (Adapted from SIDA 2007)
A story can be one of several indicators and can help to illustrate or document social programs where results might otherwise seem general or ‘intangible’. Stories as a method in planning, monitoring and reporting (PMR): Stories easily communicate results and challenges across cultures, languages, professions, countries and programs. The collection of stories is a participatory monitoring method, and a good tool for semi-structured dialogue with project/program beneficiaries. Stories can identify and illustrate outcomes (positive, negative and/or unexpected). Stories can build staff capacity in analyzing data and understanding outcome/impact. Collecting stories helps monitoring where no indicators have been pre-defined. Stories can be used in reports and evaluations to contextualize findings. Stories help deliver a rich picture of what is happening, rather than a standard description using numbers, general terms and standard indicators. Stories as a tool for PMR require curiosity and interest, but no special technical or professional skills or techniques. The story can be a masterpiece of literature, a quote to illustrate a point, or a photograph. A story does not need to be complete or have a happy ending. A story can serve to start discussions about a certain approach and/or to illustrate the complexity of social change.
Choose. Be creative!
Selecting a topic for a story Anything that contributes to a better understanding of an approach, point of view or dilemma connected to the project/program is relevant. The story could illustrate what the organization/partner/participants/members do. The story could be part of monitoring: when different people are interviewed about the same thing, different views about the program/partner initiative are exposed. This is essential in monitoring, and can be an interesting topic for a story. What does the result, or the lack of a result, of an activity or approach look like when seen from different perspectives? Observations and/or written material produced by the program or outside it (reports, clippings from newspapers, etc.) can be used when selecting a topic for a story. Take notes during a project visit/interview/meeting that can be used in a story, describing the place, atmosphere, number of men/women attending, body language, expressions, etc. Checklist for the story ‘Show, don’t tell’: describe the context, avoid buzzwords. Quote the people telling their stories as directly as possible, using their phrases. Be sure to give the names of both the person doing the interviewing and the person being interviewed, as well as the date, the place and the name of the program/project. Take photos and record, with reference to the story, the name of the person(s), place, date, etc. Ask permission to use the story and photos. If you wish to use story writing as a tool, these questions must be answered: who did or said what, when and why, and why is the story important? Have fun!
INTRAC Praxis Series No 1, 2003/2008, Bakewell: ‘Sharpening the Development Process. A Practical Guide to Monitoring and Evaluation.’
Rick Davies and Jess Dart: ‘The Most Significant Change Technique’: http://www.mande.co.uk/docs/MSCGuide.pdf
SIDA 2007: ‘Looking Back, Moving Forward’: http://www.sida.se/PageFiles/3736/SIDA3753en_Looking_back.pdf
NORAD 2008: ‘Results Management in Norwegian Development Cooperation. A Practical Guide’: http://www.norad.no/Results_Management_in_Norwegian_Development_Cooperation.pdf
NAD (Norwegian Association of Disabled) 2005/6: ‘Handbook on Results Based Planning and Reporting’: http://www.norad.no/en/Tools+and+publications/Publications/Publication+Page?key=109837
© Norwegian People’s Aid 2010 Writer/editor: Kjersti Berre Editorial assistant: Helle Berggrav Hanssen Illustrations: Per Ragnar Møkleby Design and layout: Magnolia design as Print: Fladby as
POB 8844 Youngstorget, N-0028 Oslo, Norway
Phone: +47 22 03 77 00
Fax: +47 22 20 08 70
E-mail: firstname.lastname@example.org
Homepage: www.npaid.org