THE MAGAZINE OF PSI
7 QUESTIONS WITH RACHEL GLENNERSTER, ABDUL LATIF JAMEEL POVERTY ACTION LAB
MAKING EVERY GLOBAL HEALTH DOLLAR COUNT CENTER FOR GLOBAL DEVELOPMENT
NONPROFIT, CORPORATE MEASUREMENT IN ACTION
Sign up today at psi.org to receive Impact.
EDITOR-IN-CHIEF Marshall Stowell
MANAGING EDITOR Mandy McAnally
ASSOCIATE EDITORS Rebecca Regan-Sachs; Michael Chommie, Director, PSI/Europe
EDITORIAL CONTRIBUTORS Nika Ankou External Relations and Communications
Rebecca Firestone Research and Metrics
Regina Moore External Relations and Communications
Cate O’Kane Corporate Partnerships
Amy Ratcliffe Research and Metrics
Graham Smith Research and Metrics
PSI is a global nonprofit organization dedicated to improving the health of people in the developing world by focusing on serious challenges like a lack of family planning, HIV and AIDS, barriers to maternal health and the greatest threats to children under five, including malaria, diarrhea, pneumonia and malnutrition. psi.org
CONNECT WITH PSI
Population Services International
FROM THE EDITOR
In this issue of Impact we look under the hood to see how NGOs, donors, charity watchdogs and corporations measure impact and what role measurement plays in decision-making. Answering 7 Questions, Abdul Latif Jameel Poverty Action Lab’s Rachel Glennerster discusses the benefit of randomized evaluation and offers NGOs some sage advice: “Be willing to change things, mix things up a bit, not be too scared to try something different.”

Tom Murphy explores the debate on overhead vs. impact. Art Taylor, President and CEO of BBB Wise Giving Alliance, Jacob Harold, President and CEO of GuideStar, and Ken Berger, President and CEO of Charity Navigator write, “The percent of charity expenses that go to administrative and fundraising costs – commonly referred to as ‘overhead’ – is a poor measure of a charity’s performance.” A move in that direction would make heeding Glennerster’s advice a lot more likely for many – giving NGOs greater latitude to spend funds on research and innovation.

The Center for Global Development’s Amanda Glassman discusses practical recommendations to help global health funders maximize their impact on health – or, in her words, “get more health for their money.” Jodi Nelson, at the Bill & Melinda Gates Foundation, makes a compelling case for putting data front and center in the post-2015 agenda. She also shares the foundation’s new evaluation policy.

We also explore the range of ways some of the world’s leading NGOs measure impact. In her piece, PSI’s Kim Longfield sets context for NGO measurement as “stakeholders want to invest increasingly scarce resources where they will have the most impact.”

Four leading corporations have more in common than one might imagine when it comes to measuring corporate social responsibility efforts and their philanthropic investments. As Jon Lloyd of the London Benchmarking Group states, “community investment is becoming more strategic and more focused.” For many, it’s now being integrated into broader business goals.
The Skoll Foundation’s Ehren Reed shares three innovative examples outside of global health – from a project measuring the success of a nation to a tool for companies to measure and disclose performance to the Ecological Footprint, which measures humanity’s demand on nature in comparison to available biocapacity. Ehren’s piece reminds us to look beyond our industry for innovative models.

What’s coming next for Impact? Our year-end issue will be released online and will once again look at the top moments in global health of 2013. We’re compiling our list and want to hear what you think the year’s most important moments were. You can email me directly at email@example.com or visit the Impact blog and share your thoughts.
MARSHALL STOWELL Editor-in-Chief, Impact
NO. 14 | 2013
3 ENTERPRISES USING MEASUREMENT FOR SOCIAL CHANGE By Ehren Reed, Skoll Foundation
7 QUESTIONS WITH RACHEL GLENNERSTER Abdul Latif Jameel Poverty Action Lab
FOCUS ON IMPACT By Kim Longfield, PSI
CHANGING CORPORATE COMMUNITY INVESTMENT By Jon Lloyd, London Benchmarking Group
BACK TO MEASUREMENT BASICS By Farron Levy, True Impact
HOW PRICING AFFECTS MALARIA TREATMENT By Jessica Cohen, Harvard School of Public Health
NONPROFIT SECTOR MEASUREMENT IN ACTION
DONORS: FORGET OVERHEAD, LET'S TALK IMPACT By Tom Murphy View From The Cave blog
VALUE FOR MONEY: NARRATIVE VS. NUMBER
By Allison Beattie
POST-2015 AGENDA SHOULD PUT DATA FRONT AND CENTER
By Jodi Nelson Bill & Melinda Gates Foundation
Final Word
MOVING IN THE RIGHT DIRECTION By Karl Hofmann President & CEO, PSI
5 LESSONS FOR SUCCESSFUL RESULTS MEASUREMENT By Howard White, 3ie
MAKING EVERY DOLLAR COUNT IN THE FIGHT AGAINST AIDS, TB AND MALARIA
HOW DATA DRIVES DECISIONS AT USAID Q&A WITH ELLEN STARBIRD
PRIVATE SECTOR MEASUREMENT IN ACTION
By Amanda Glassman Center for Global Development
THE MOST MEANINGFUL METRIC IS LIVES SAVED By Deb Derrick Friends of the Global Fight against AIDS, Tuberculosis and Malaria
psi.org | impact
RACHEL GLENNERSTER is Executive Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL). Her research includes randomized evaluations of community-driven development, the adoption of new agricultural technologies, and improving the accountability of politicians in Sierra Leone; empowerment of adolescent girls in Bangladesh; and health, governance, education and microfinance programs in India. She serves as Scientific Director for J-PAL Africa and Co-Chair of J-PAL's Agriculture Program, and is a board member of the Agricultural Technology Adoption Initiative. She is lead academic for Sierra Leone for the International Growth Center. Between 2007 and 2010, she served on the U.K. Department for International Development's Independent Advisory Committee on Development Impact. Glennerster helped establish Deworm the World, which has helped deworm more than 35 million children worldwide.
REBECCA FIRESTONE: J-PAL has changed the conversation in global development by conducting randomized evaluations of development programs. What is a “randomized evaluation” and why is it important?

➤ RACHEL GLENNERSTER: Randomized evaluation in development draws on the concept of a randomized clinical trial, but adapted to a development context. What we often do is work with people who are implementing programs, and encourage them to select more areas than they were planning to work with and randomize who receives the program. If they’re rolling it out over time, we randomize when communities get the program. That allows you to compare those who have received the program with those who haven’t, or haven’t yet, received the program, to isolate the impact of the program from all the other things that are going on. And there are lots of things going on in developing countries – countries are growing, there are droughts, etc. This makes it hard to distinguish what is the effect of a program and what is the effect of all the other things happening at the same time. By creating a comparison group that is likely to be the same statistically as the group who gets the program, you can know that any difference that you find is due to the program.

RF: What is the value of randomized evaluations for development policy and program design?

➤ RG: I come from the position of having been a policymaker before I joined J-PAL, so I’m very conscious of the fact that policymakers constantly have to make choices. There are a lot of things that we want to do, but we only have the money and capacity to do a relatively small number of them. So it’s important that we put resources where they are most useful and where they can have the biggest impact. Certainly, as a policymaker myself, I always found that I was having to make decisions without enough evidence. Providing policymakers and NGOs with information on what’s the most cost-effective approach, what actually changes in people’s lives if you spend money this way versus that way is very important if we’re going to reduce poverty.
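The comparison-group logic Glennerster describes can be sketched in a few lines of code. This is a toy illustration only – the village names, outcome scores and five-point program effect below are invented for the sketch, not drawn from any J-PAL study:

```python
import random
import statistics

def randomize(units, seed=7):
    """Randomly split units into treatment and comparison groups."""
    rng = random.Random(seed)
    units = list(units)
    rng.shuffle(units)
    half = len(units) // 2
    return units[:half], units[half:]

# 200 hypothetical villages; a program that raises an outcome
# score by ~5 points on average (both numbers are invented).
communities = [f"village_{i}" for i in range(200)]
treatment, comparison = randomize(communities)
treated = set(treatment)

rng = random.Random(0)
score = {v: rng.gauss(60, 10) + (5 if v in treated else 0)
         for v in communities}

# Because assignment was random, the comparison group stands in
# for what would have happened without the program, so the
# difference in mean outcomes estimates the program's impact.
effect = (statistics.mean(score[v] for v in treatment)
          - statistics.mean(score[v] for v in comparison))
```

With random assignment, the two groups are statistically similar before the program starts, so the difference in means isolates the program's effect from everything else going on – the droughts, growth and other changes Glennerster mentions.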
RF: Randomized evaluations and other types of endline evaluations often produce results only after a project is completed, which some may say is too late to inform decisions. How can we generate more real-time data for decision-making?
➤ RG: There are two parts to this answer. One is that there are often things that you can do to provide real-time feedback to the organization that you’re working with, whether it’s midline results or simply tracking the progress of the intervention. So for instance, we monitor how many students or teachers are showing up to the school, or whether or not people are using the service. That’s the kind of information that can be fed back very quickly to the organization and can help them improve. The second answer is, even if there’s very important information that we don’t find until the end of the study and maybe that doesn’t help with that particular program, evaluation is a global public good. In other words, it helps lots of other programs all over the world, including future programs of the implementing partner. Yes, it would be nice to have all the answers tomorrow. But it’s better to have some answers in a few years than to continue not having answers. Even if it comes too late for this program, it’s very important for learning for future programs. I think that’s the real value of the randomized evaluation.

impact | No. 14

© BLU NORDGREN
EXECUTIVE DIRECTOR OF THE ABDUL LATIF JAMEEL POVERTY ACTION LAB
RF: How would you respond to criticism that the conditions required to maintain random assignments to treatment/control in evaluations don’t represent the real world? How do we make the results of these studies useful more broadly?
➤ RG: We think a lot about what questions to ask and where to do impact evaluations so that they are going to be most useful. One of the things we look for is that the program is likely to replicate – it isn’t so expensive that nobody could ever copy it, or doesn’t use very scarce human capital like very motivated people. Another criterion we often use is a representative location, somewhere that is not so special that people will say, “This is not relevant to my context.” We also try to ask questions about human behavior rather than just evaluate a very specific project. If you evaluate a project that has 20 different components, you’re only really answering, “Does that specific project work?” An alternative is to ask a question like, “Are people willing to pay for preventative health care, and how much?” That’s a more fundamental question about how humans behave, and you can test it in different contexts. There’s also often a trade-off in research between answering a specific question well and answering a very general question maybe not so well. So sometimes we think it’s useful to ask a specific question well, because if you ask a general question but don’t really find an answer to it, well, that hasn’t moved us forward.
“VIRTUALLY NONE OF THE WORK WE DO COULD BE DONE WITHOUT A BIG TIME COMMITMENT FROM IMPLEMENTING PARTNERS.”

RF: Can you describe a global health policy change that came on the heels of a randomized evaluation? Why was the randomized evaluation needed to motivate this change?

➤ RG: One of the areas where randomized evaluations had a big impact on policy is school-based deworming. The WHO had a target of treating all children at risk for worms with deworming pills, but very few of them – less than 10 percent of kids – were treated. Part of the issue was that one of the most cost-effective ways to do it is through schools, but there was no evidence of an educational impact of deworming. So ministers of education weren’t very excited about doing what they considered a Ministry of Health job. But the evaluation provided evidence that there was an educational as well as health impact of the program. That was very important for motivating ministers of education to work with ministers of health to implement the programs. So now, in the last year, about 40 million kids were treated through school-based deworming. I don’t want to say it was because of the impact evaluation because I don’t have a counterfactual, but it was conducted by states that had read and learned about the evaluation. So we have a reasonable sense that this evaluation was influential.

RF: PSI often makes decisions about whether to provide free versus sold products in low- and middle-income countries when the goal is ultimately to reach the poor. We know that you have been studying the evidence base for this. From your perspective, what do we need to better understand regarding pricing?

➤ RG: What we know with reasonable certainty is that people are very price-sensitive for preventative health products. So the more you charge, the lower demand will be, and even very small price changes can have a big effect on demand. We also know that these preventative health products are extremely cost-effective. So we ought to be putting a lot of money into getting them out there. We still have a lot of questions about distribution. How do you motivate people to get the products to people? We know that people doing health delivery respond to incentives, but we don’t yet have a good model to motivate the public sector. Things like penalizing nurses who have high absenteeism rates have been tried many, many times, and we’ve never found a public sector that was able to make that work. So there are some advantages of going through the private sector if it’s better at providing incentives for delivery. But you have to weigh that against the cost of lower access because of charging customers for the health products. Ideally, you’d have incentives for providers and free products at the point of delivery, but that’s a hard thing to make work. So in each situation, you have to figure out what is more important. Do you have a public sector delivery system that’s fine, in which case you can give free products through that? Or is it not working, and what are the gaps in public delivery that should be complemented by the private sector?
RF: Based on your experience evaluating programs with different implementing partners, what lessons do you have for implementing organizations like PSI? How do implementers work best with evaluators?
➤ RG: The people implementing programs that are evaluated are absolutely critical for developing an evidence base. Virtually none of the work we do could be done without a big time commitment from implementing partners. The role they play is to think strategically about which questions should be evaluated. [I would advise implementers to] work in partnership with researchers, so that you listen to what the researchers have to say about how they think the programs can be improved. Be willing to change things, mix things up a bit, not be too scared to try something different or introduce an element of randomization. Go into it understanding that an evaluation is a big time commitment, but it can provide a lot of benefits. Go in with a lot of commitment to knowing the true answer.
HOW PRICING AFFECTS MALARIA TREATMENT BY JESSICA COHEN, PH.D., HARVARD SCHOOL OF PUBLIC HEALTH
In the past decade, the development and deployment of highly effective tools like insecticide-treated nets (ITNs) to prevent malaria and artemisinin-combination therapies (ACTs) to treat it have fueled significant declines in malaria rates. In the Africa region, malaria deaths have dropped by 33 percent. However, while the science behind product development proceeds rapidly, the equally complex science behind product delivery has proceeded less surely and evenly. Further progress toward malaria elimination requires continued advances in deciphering the science of delivery. As Rachel Glennerster discusses in her Impact interview, pricing plays a central role in the successful delivery of public health products. Malaria products must typically be subsidized to be accessible to those who need them. Ultimately, the final (subsidized) price charged for an ITN or ACT likely plays a major role in coverage levels, targeting and appropriate use. All of these factor heavily into the usefulness and cost-effectiveness of distribution programs.

In 2010 I published a study with co-author Pascaline Dupas exploring how subsidy levels can influence the distribution of ITNs to pregnant women in Western Kenya. Twenty health centers providing prenatal care were randomly assigned to distribute nets at a price ranging from free to $0.60 (a 90-percent subsidy), lower than the prevailing price of $0.75. Results revealed surprising price sensitivity – coverage dropped from 100 to 40 percent when the price increased from free to $0.60. One possible explanation was that most women didn’t plan to use the ITNs. On the contrary, we found that women who took free nets were no less likely to use them than those who paid the highest price. In this context, rather than improving ITN targeting, even low ITN prices shut out women who could not afford them but who would value and use them.

A 2013 study (with Pascaline Dupas and Simone Schaner) explored the impact of ACT subsidies and found strikingly different results. We worked with drug shops to pilot an ACT subsidy program in Kenya. Households were randomly assigned to a control group getting ACTs at the regular price ($6.25 for an adult dose) or to a subsidized price ranging from $0.50 – $1.25 (80-92 percent subsidy). We found that subsidies of 80 percent or more dramatically increased coverage relative to no subsidy. However, within the range of subsidized prices explored, there was almost no change in coverage, especially for children, who are the most vulnerable. While coverage did not change, targeting did – the proportion of people taking ACTs who actually had malaria increased at higher prices.

More research is needed to discover how price and other factors in health product delivery influence effective public health programs. These studies show that the role of price can vary substantially across products and contexts. This research should apply the same rigor, transparency, experimentation and replication to the science of delivery that is applied to the basic science of product development.

JESSICA COHEN, Ph.D., is Assistant Professor of Global Health at the Harvard School of Public Health, non-resident Fellow at the Brookings Institution and Burke Fellow at the Harvard Global Health Institute. She is also the co-founder of TAMTAM Africa, Inc. (Together Against Malaria), an NGO working on malaria prevention among pregnant women in Africa.

© MARCIE COOK/PSI SOUTH SUDAN
© OLLIVIER GERARD/BENIN
BY KIM LONGFIELD, DIRECTOR, RESEARCH AND METRICS, PSI
There is no question that measurement matters. It changes programs, shapes policy, affects organizations and drives funding decisions. The quest for meaningful measurement is common, and the development world is looking for reliable information to improve performance. Government and foundation donors need to monitor and evaluate the performance of grantees and implementing partners and analyze future investments. Implementing organizations use measurement to track performance and determine how to reach more people, work smarter and reduce costs. Funders want to invest increasingly scarce resources where they will have the greatest impact.
CHECK OUT LATEST RESEARCH FROM PSI AND PARTNERS

Private Sector Healthcare in Myanmar, examining PSI’s Sun Social Franchise. Produced with UCSF, the Gates Foundation and Johns Hopkins School of Public Health. Go to psiimpact.com/healthinmyanmar to read the case study.

Check out PSI's 2012 Health Impact Report at www.psi.org/psi-2012-impact.

BMC Public Health Journal, Volume 13, Suppl. 2, published June 17, 2013. Sponsored by PSI, Pathfinder, MSI, and UCSF. Visit www.psiimpact/BMCsupplement to see the complete series of articles.

There are several perspectives on what types of measurement provide the greatest insight for making decisions. For some, it is rigor, and may include a counterfactual. For others, routine data on sales, activities or client flow is sufficient for everyday decision-making. At PSI, measurement is practical, and evidence guides program decisions to make them as effective as possible. By providing data from a variety of sources and presenting it in a way that is useful, we give decision-makers the information they need to make smart funding and programmatic choices. Finally, by aligning our measures with other organizations, we contribute to a community of practice that makes our work meaningful, comparable and replicable.

PSI’s history reflects an evolution in evidence-based decision-making. During the first two decades of operation, our bottom line was the number of products sold. While basic, it was a meaningful metric when we were growing markets to satisfy a latent demand for health products that neither the public nor commercial sector could meet – for instance, the number of condoms sold. Then our portfolio grew, markets matured, and the needs of those we serve changed. We started using metrics like the Disability-Adjusted Life Year (DALY) averted, which incorporates burden of disease and helps us estimate the impact of our programs. We also found more meaning in field-based research, which helps us design, monitor and evaluate the effectiveness of our programs. Annually, PSI conducts hundreds of studies in the countries where we work to better inform our decisions. In some cases, we execute experimental designs or answer questions of a strategic nature.
The most recent studies test hypotheses about pricing for products and models of implementation, like franchised health operations. Over the last three years, in collaboration with the University of California, San Francisco (UCSF) and Johns Hopkins University, we answered questions about the impact, quality, cost-effectiveness and ability of franchising to reach the poor. Those findings are highlighted in the report, “Private Sector Healthcare in Myanmar: Evidence from the Sun Social Franchise.” This year, with the support of the United Nations Population Fund, we are conducting a set of studies to evaluate how PSI can work with the public and commercial sectors to strengthen markets for health products. No matter the research question or design, the objective is always to make measurement a useful tool for decision-making.

Transparency and sharing good practices are also essential. PSI co-leads a working group with UCSF, which includes members from nine organizations and four donor agencies. Together, we pilot, adopt and advocate for standard metrics and evaluation protocols for social franchising programs (which use networks of private providers to deliver a range of quality-monitored health services). This allows us to draw comparisons across our programs, countries of operation and health service areas. Working together helps us understand whether this model for organizing networks of private providers to deliver health services actually improves health, offers quality services and reaches the poor.

PSI finds itself once again looking for ways to make our measurement more meaningful. We are working to link our metrics and analysis more closely to the business decisions that PSI executives, regional and country directors, and program managers need to make. These kinds of business analytics will likely be based on routine data. Research and evaluation will continue to test hypotheses, measure impact directly and examine how we can improve what we do. Coupling routine data with strategic research and evaluation is intended to strengthen the relevance, scale and value of our programs by providing the best evidence to inform key decisions.
KIM LONGFIELD has worked for PSI since 2001. Kim’s expertise is in social marketing, qualitative research and studies among populations at high risk for HIV/AIDS. Kim earned a Ph.D. in Sociology and International Health and an MPH in International Health/Health Communication and Education, both from Tulane University. She is the author of more than two dozen published journal articles, reports, book chapters and working papers.
Data and measurement play a diverse role in programmatic decision- making in the nonprofit sector. PSI and four global health NGO partners talk metrics, program design, donor relations and how to put measurement into action.
Alasa brings her baby to a Save the Children Outpatient Treatment Program for malnourished children in Wargaduud, Wajir, Kenya, close to the Somali border.
© REGINA MOORE/PSI MYANMAR
PSI is a global health nonprofit organization based in Washington, D.C., with offices in 69 countries. Founded in 1970 with an initial focus on family planning, PSI now works in a variety of health areas worldwide. GENERAL MEASUREMENT PRACTICES: PSI relies on a suite of metrics that frame success around use of family planning interventions and reductions in disease burden. Our fundamental measures of health impact are the disability-adjusted life year (DALY) averted and couple-years of protection (CYPs) provided. When PSI averts one DALY, it means that we have prevented the loss of one year of productive, healthy life. When PSI provides one CYP, it means that we have provided one year of protection against unintended pregnancy. PSI uses these metrics, alongside others, to inform decisions, track progress and demonstrate value. MEASUREMENT IN ACTION: For PSI’s first two decades, we measured our bottom line by the number of products sold. With the switch in 2006 to DALYs averted as our key performance metric, PSI could estimate the health impact of its products, services and behavior change interventions across all of its health areas. These results have been used consistently to inform strategic decision-making. In fact, PSI decided to double its global health impact goal (measured in DALYs averted) in five years, from 2007 to 2011. This goal helped align programs and motivate staff across the globe, with the DALYs averted metric factored into individual performance goals, annual appraisals and incentive compensation. By the end of 2011, PSI had achieved its goal of doubling its health impact and averted 22.8 million DALYs in 2011 alone. Visit www.psi.org to learn more.
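A hedged sketch of how such roll-up metrics work in principle. The conversion factors below are invented placeholders, not PSI's actual models (which are far more detailed and country-specific); only the arithmetic pattern – units delivered multiplied by an impact coefficient – reflects the metrics described above:

```python
# Illustrative conversion factors (NOT PSI's actual figures):
# roughly how many DALYs one distributed unit averts, and how
# many units provide one couple-year of protection (CYP).
DALYS_AVERTED_PER_UNIT = {
    "itn": 0.5,           # insecticide-treated net (invented value)
    "condom": 0.0005,     # invented value
    "ors_sachet": 0.001,  # oral rehydration salts (invented value)
}
CYP_PER_UNIT = {
    "condom": 1 / 120,    # assumes ~120 condoms = 1 couple-year
    "iud": 4.6,           # long-acting method covers several years
}

def health_impact(distribution):
    """Roll counts of distributed products up into the two
    headline metrics: DALYs averted and CYPs provided."""
    dalys = sum(DALYS_AVERTED_PER_UNIT.get(p, 0) * n
                for p, n in distribution.items())
    cyps = sum(CYP_PER_UNIT.get(p, 0) * n
               for p, n in distribution.items())
    return dalys, cyps

# Hypothetical one-year distribution totals for one program.
year = {"itn": 100_000, "condom": 2_400_000, "iud": 10_000}
dalys, cyps = health_impact(year)
```

The appeal of this pattern is that very different products – nets, condoms, contraceptives – collapse into one comparable bottom line, which is what allows a single organization-wide goal like "double DALYs averted."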
SAVE THE CHILDREN Founded in 1919, Save the Children is a global movement of 30 member organizations working in 120 countries. With general consultative status with the United Nations Economic and Social Council, Save the Children works to assist vulnerable children through programs in health and nutrition, education, livelihoods, child protection and child rights governance in both emergency and development contexts. GENERAL MEASUREMENT PRACTICES: Save the Children conducts a variety of operations and implementation research, largely funded by grants, but also supported by the agency’s innovation funds to generate evidence on innovative approaches. The organization has adopted a five-year strategy (2010-2015) to bring greater coherence around program activities, creating a harmonized monitoring and evaluation (M&E) system to measure progress against its strategic goals. The global M&E system has four major M&E components (in addition to program-level research and M&E), which focus on documenting the number of program beneficiaries, measuring progress in core program areas, measuring advocacy, and setting internal evaluation standards for country programs. MEASUREMENT IN ACTION: While the bulk of the data is fed back to the project and to project donors, it is also used to share results with the public and other peer organizations, and as evidence to support its advocacy work for policy change. For instance, numerous studies concluded that the first 28 days of a baby’s life carry the highest risk of death for both mothers and newborns. Save the Children therefore sharpened its focus on the key causes of preventable death and used the data to influence policy in a number of African and Asian countries as well as the global newborn health agenda. Visit www.savethechildren.org to learn more.
Marie Stopes Nigeria outreach midwife Eunice Inusa talks to a group of women in Nasarawa State in central Nigeria about all of their family planning options.
MARIE STOPES INTERNATIONAL Marie Stopes International (MSI) is one of the largest international family planning organizations in the world, with operations in 38 countries. A social enterprise with headquarters in London, U.K., MSI has provided reproductive health care through a system of clinics, outreach teams and social franchising partnerships since 1976. GENERAL MEASUREMENT PRACTICES: MSI’s country programs monitor and evaluate their work against a shared, standardized global M&E framework that measures the effectiveness of family planning interventions from both a client and provider perspective. As part of this framework, MSI tracks its contribution to three key goals: reduced maternal mortality, fewer unsafe abortions, and increased contraceptive prevalence. Assessing their impact at this level helps MSI understand how they improve maternal health and address wider initiatives such as the Millennium Development Goals. MSI is also currently working to evolve the couple-years of protection (CYP), a common measurement for family planning programs, into a new High-Impact CYP measure that takes account of important issues such as equity, quality and access. MEASUREMENT IN ACTION: Last year, MSI conducted 13,000 exit interviews to find out more about its clients and how they used MSI’s services. After learning that 60 percent of clients use mobile phones, and that most clients are willing to be contacted this way, MSI launched a number of mobile health initiatives to improve client follow-up. In addition, research is used to influence policy. MSI has recently amassed a portfolio of research around the safety and acceptability of task-sharing tubal ligations to lower-level providers, information that has led to positive changes in policy in countries where MSI works, most recently in Uganda. Visit www.mariestopes.org to learn more.
FHI 360

FHI 360 is a nonprofit development firm based in Durham, N.C., with offices in 60 countries. Initially focused on contraceptive research, the organization broadened its scope to HIV and other infectious diseases, and now works on a range of development issues both abroad and in the U.S. GENERAL MEASUREMENT PRACTICES: FHI 360 conducts a range of research and metrics activities, including clinical trials, epidemiologic surveys, social and behavioral studies, operations research, program and implementation science, research utilization and facilitation of clinical research networks. It also conducts costing and cost-effectiveness studies, as well as integrated behavioral and biological surveillance for HIV and family planning services. When measuring program impact, FHI 360 relies on metrics based on actual outcomes, such as HIV incidence or unintended pregnancy rates after preventative interventions. To compare impact across different interventions, FHI 360 has used cost-benefit analyses and DALYs. MEASUREMENT IN ACTION: The audience for this R&M work can include other researchers, community stakeholders and policymakers. For instance, FHI 360 conducted extensive research on the validity of “pregnancy checklists,” job aids that listed criteria with which healthcare providers could rule out pregnancy in order to provide clients with contraception. The studies found that the checklist correctly identified women who were not pregnant 99 percent of the time, and that introducing the checklist significantly reduced the proportion of women who were denied contraception. The tool has since been requested by health professionals in approximately 20 countries and endorsed by at least seven ministries of health. Visit www.fhi360.org to learn more.

PATH

PATH is a nonprofit organization that specializes in health-related innovation and technology in the developing world. PATH has offices in 21 countries, with headquarters in Seattle, Wash. GENERAL MEASUREMENT PRACTICES: PATH annually measures the work of its programs and overall progress as an organization using a comprehensive monitoring and evaluation framework and set of strategic indicators. Among the indicators are geographic reach and influence, the number of people reached, the number of health workers trained, the pace at which new health products and technologies are developed and introduced, and the organization’s success rate in winning competitive grants. The results of this work (as well as other internal sources of business intelligence) are factored into strategic planning and program decisions. MEASUREMENT IN ACTION: Following a school-based cervical cancer vaccination campaign in northwest Peru, PATH aimed to measure the level of vaccination coverage achieved. Household surveys were conducted to determine how many eligible girls received the three-dose vaccine series to prevent infection with HPV, the primary cause of cervical cancer. Data collected from the survey showed that more than 82 percent of eligible girls in the region were vaccinated, demonstrating that high vaccine coverage is feasible with a school-based strategy. The results helped build the evidence base for HPV vaccination and influenced Peru’s decision to launch a national HPV vaccination campaign in 2011. Visit www.path.org to learn more.

AMIT PASRICHA, 2009
Community-based health workers play an essential role in educating communities and gathering data that strengthen programs.
A health worker prepares a vaccine for PATH's health program in Peru.
psi.org | impact
DONORS: FORGET OVERHEAD, LET’S TALK IMPACT
BY TOM MURPHY
VIEW FROM THE CAVE BLOG
Overhead rate. The number that calculates how much a nonprofit spends on administrative and fundraising costs is a fixture on newsletters, websites, reports and even tweets. Nonprofits signal to their donors that they are effective because more money is spent on programs. Some even go as far as to say that 100 percent of donations go to programs.
TOM MURPHY is the founder and writer of the blog 'A View From the Cave' and the co-author of the morning "Healthy Dose" roundup for PSI. He is based in Boston, Mass.
A quiet debate emerged over the past year. Some people within the charity sector began to question the focus on overheads. The barn doors of the conversation blew open this year when the leading charity rating organizations published an open letter on the 'Overhead Myth' and a TED talk by charity fundraiser Dan Pallotta succinctly argued against the overhead fetish. “The percent of charity expenses that go to administrative and fundraising costs – commonly referred to as ‘overhead’ – is a poor measure of a charity’s performance,” wrote Art
Taylor, President and CEO of BBB Wise Giving Alliance; Jacob Harold, President and CEO of GuideStar; and Ken Berger, President and CEO of Charity Navigator. They argue that too much focus on overheads can hamper the work of organizations. Impact, what programs actually accomplish, is emerging as the area where charity raters are focusing their energy. In addition, they believe that potential donors should consider transparency, governance and leadership when determining where to give money.

“We think governance is critically important,” says Berger. “Not enough attention is paid to principles of good governance and ethics.” Figuring out whether or not a nonprofit has good governance may sound hard, but Berger says there are some basic things that can speak volumes. Board structure, for example, is one of those areas. The board of a nonprofit should be made up of at least five people independent of the founder. Poor board structure contributed to the problems of Greg Mortenson's Central Asia Institute and singer Madonna's organization Raising Malawi. “Some people say it is all about results. They are wrong,” says Berger. “If you have good results today, but poor ethical practice and governance, you will not have good results tomorrow.”

The open letter decrying the overhead myth was spearheaded by GuideStar's Jacob Harold. He brought together Taylor and Berger to reach out to individual donors and advise them against placing too much emphasis on overheads. “It is true that the nonprofit sector has reinforced this myth by prominently displaying overhead ratios,” says Harold. “There is a rational reason to do that, and there are donors who demand it. That is something I hope we can push back on.” The letter is the first step by charity rating organizations in pushing back against the myth.

Berger and Harold agree that impact is a vital measure, though both caution against focusing on it exclusively. “The nonprofit sector is diverse,” says Harold. “We can’t expect to have one single metric that will capture performance across all that diversity. So we have to look for proxies.” Assessing universities and global health organizations on the same impact indicators is insufficient, he argues. That is why a diversity of measurements, and of organizations doing the measuring, is a good thing. Harold's vision for GuideStar is for it to act as a sort of warehouse for the nonprofit information supply chain.
Its site can take in the varying reports and feedback collected elsewhere in order to create a more complete picture of any given nonprofit. Advertising is one area where Harold shows reluctance. He says that the money spent raising money should be seen separately from other administrative costs like staff fees, travel and supplies. Spending money on administration is what allows programs to work, but it should be balanced against wasteful spending. “To me, administrative costs are money going into what you want to do. They are essential to achieve the results you want to achieve,” says Harold.

“THE PERCENT OF CHARITY EXPENSES THAT GO TO ADMINISTRATIVE AND FUNDRAISING COSTS – COMMONLY REFERRED TO AS ‘OVERHEAD’ – IS A POOR MEASURE OF A CHARITY’S PERFORMANCE.” – THE OVERHEAD MYTH

Pallotta's headline-grabbing “TED” talk made the case for spending on advertising and marketing in order to raise more money. He said that overhead rates should not detract from the amount of money brought in. Berger said that Pallotta makes some important points, but he worries about what would happen if his message prevails. “I remain in strong disagreement with Dan with some issues,” Berger says. “I think it is a road to ruin to have him as the darling for the charity sector.” Pallotta’s office did not respond to Impact’s request for comment.

Charity Navigator has let nonprofits know that impact is going to be a more important part of its rating system. Surprisingly, the majority of nonprofits do not publish anything on their websites about program impact. A two-and-a-half-year analysis of nonprofits by Charity Navigator revealed that the vast majority do not publicly report their results in a meaningful way. “We believe that the vast majority do not have anything to report,” says Berger. Nonprofits have until 2016 to start publishing impacts or they will see their scores decrease. Furthermore, organizations that continue to make claims of zero overhead will see notices added to their profiles. Berger hopes that this will help dissuade the championing of low overheads.

The majority of organizations already fall under a 25 percent overhead rate and nearly all are under 35 percent. Exceeding those numbers is an easy red flag. The raters that are meant to hold nonprofits accountable are now holding themselves accountable. Actively dispelling the overhead myth is a way to ensure that they are keeping up their end of the bargain. The changes in the sector will have the greatest benefit for the people who matter most: the beneficiaries and people affected by nonprofits. n
“WHO CARES WHAT THE OVERHEAD IS IF THESE PROBLEMS ARE ACTUALLY GETTING SOLVED?” Go to www.ted.com to watch Dan Pallotta's provocative March 2013 "TED" talk on re-thinking how to measure charities' success.
MAKING EVERY DOLLAR COUNT IN THE FIGHT AGAINST AIDS, TB AND MALARIA
BY AMANDA GLASSMAN
CENTER FOR GLOBAL DEVELOPMENT
In the poorest countries of the world, millions of people still suffer and die from easily preventable diseases like malaria, tuberculosis and polio. The world has responded to this scandal by pouring billions of dollars of aid into health through organizations like the Global Fund to Fight AIDS, Tuberculosis and Malaria and the GAVI Alliance. Developing country governments themselves are also spending much more on health. But how do we know if all of this spending is actually improving people’s health? How do we know that when a child in rural Tanzania has a fever, she gets diagnosed correctly and treatment is available? Or how do we know that a baby born in Kenya to a mother with HIV will be able to receive life-saving antiretrovirals? Often, we don’t. And that too is a scandal.

Too often, governments and donors track receipts better than they track results, and don’t create incentives that work to improve people’s health and healthcare. Too often, we don’t recognize that well-meaning but wasteful spending results in missed opportunities to expand coverage to the millions waiting for life-saving medication; only 54 percent of the people eligible for HIV treatment in low- and middle-income countries are receiving care. Too often, spending goes to interventions that aren’t directed to the most at-risk or suffering populations. This is particularly true for HIV treatment and prevention, where key populations have sometimes been left out – including sex workers, injecting drug users and men who have sex with men (MSM). While many funders have progressive policies to address these key populations, it’s unclear whether those policies influence the allocation of funding in practice. For instance, in a sample of Global Fund grant agreements in the Philippines – where the epidemic is highly concentrated in a few high-risk populations – the majority of funding from 2002 to 2012 did not indicate a specific target group. Likewise, a recent amfAR report shows that only 0.07 percent of Global Fund support to six Southern African countries went to programs specifically targeting gay men, men who have sex with men and transgender individuals. Only two of these six countries had programming for MSM in their PEPFAR (President’s Emergency Plan for AIDS Relief) annual budgets. In all of these instances, we fail to avert preventable disease.

Visit MoreHealthfortheMoney.org for a quick and interactive way to read and share the report’s findings.

At the Center for Global Development, we convened a working group to examine these issues and make practical recommendations to help global health funders maximize their impact on health – or, get more health for their money. Our recommendations focus specifically on the Global Fund, which is particularly well positioned to lead in this space. We ask the Global Fund Board and leadership to set a future agenda that:
➤ Allocates funds between diseases, interventions and populations by better using economic tools and epidemiology, rather than historical trends, inputs and horse-trading.
➤ Designs contracts that reward impact and accountability, rather than spending according to line items.
➤ Uses cost and spending data to understand the drivers of efficiency.
➤ Tracks and evaluates results in a representative and rigorous way.
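One way to read the first recommendation is as a ranking problem: fund the cheapest health gains first. The Python sketch below illustrates that idea only; every intervention name, cost figure and budget is invented for the example, not drawn from Global Fund data.

```python
# Hypothetical illustration of allocating a fixed budget across
# interventions ranked by cost-effectiveness (cost per DALY averted).
# All names and numbers below are invented.

def allocate(budget, interventions):
    """Greedy allocation: fund the cheapest DALYs averted first."""
    plan = []
    # Sort by cost per DALY averted, cheapest first.
    for name, cost_per_daly, max_spend in sorted(
            interventions, key=lambda x: x[1]):
        spend = min(budget, max_spend)
        if spend <= 0:
            break
        plan.append((name, spend, spend / cost_per_daly))
        budget -= spend
    return plan

interventions = [
    # (name, cost per DALY averted in $, maximum useful spending in $)
    ("Bed nets",      30, 4_000_000),
    ("ART scale-up", 250, 6_000_000),
    ("MSM outreach",  90, 1_000_000),
]

for name, spend, dalys in allocate(8_000_000, interventions):
    print(f"{name}: ${spend:,.0f} -> {dalys:,.0f} DALYs averted")
```

A real allocation model would also account for epidemiology (where the burden actually is) and diminishing returns to spending; the greedy ranking above is only the simplest version of the "economic tools" idea.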
Modest changes in each of these areas could free up millions of dollars that can then be reinvested to save millions more lives. Progress is already being made. The Global Fund Board has identified value for money as a priority, and has taken preliminary steps to improve the health impact of their funding. They have undertaken mapping exercises to better understand highrisk populations and identify geographical “hot spots”. They have started to place a greater emphasis on results-oriented indicators through the performance-based financing mechanism. And they have already taken aggressive steps to
verify fiscal performance through strengthened fiduciary controls and financial oversight. But moving further and faster is possible. The Global Fund and its partners can and should more systematically implement and evaluate these value for money principles and practices throughout their grant cycle.
Total spent: $16.5 million. 92% not specified; 7% to people living with HIV/AIDS.
BEYOND THE GLOBAL FUND
Of course, these recommendations aren’t only applicable to the Global Fund. “Value for money” principles apply to all external funders of health policies and interventions – such as a bilateral donor or global health partnership – each of which works with a very limited toolbox to leverage or create value for money incentives among recipient country governments and implementing agencies. For instance, an international NGO or a bilateral donor agency like PEPFAR can directly hire and supervise doctors or managers. An organization like the Global Fund takes a more indirect approach, and can give or deny approval of procurement proposals, or withhold payments, bonuses, rewards and other incentives based on measured performance. But a funder does not set national policies, produce health commodities, or provide health services, all of which must be optimized in order to achieve health goals. Thus, achieving more health for the money means leveraging the comparative advantages of all global health funders, and the governments, implementers and civil society agents they work with. Everyone bears some responsibility for improving value for money – and everyone will benefit from ensuing gains in efficiency, quality and health.

It will take us all to transition toward a culture that values results more than receipts. As a first step, I would ask that we all think about how our organizations can move toward more health for the money, assuring that we reward and track the results that we care about most: that the child in rural Tanzania is only sick for a few days; that the baby in Kenya and her mom thrive. This is what we must accomplish and measure to stop the scandal, to put an end to needless deaths and suffering from preventable disease. n

Amanda Glassman is the Director of Global Health Policy and a senior fellow at the Center for Global Development, leading work on priority-setting, resource allocation and value for money in global health, with a particular interest in vaccination. She has 20 years of experience working on health and social protection policy and programs in Latin America and elsewhere in the developing world.
In the Philippines, the majority of Global Fund aid from 2002 to 2012 did not indicate a specific target group; less than 1 percent went to pregnant mothers.
There are four common areas within a grant cycle where global health funders can better align incentives.
CGD HEALTH FOR MONEY LAUNCH
Panelists discuss how to get more value for money out of global health funding at an event on the sidelines of the UN General Assembly on September 25, 2013.
CGD hosted an event on the sidelines of the United Nations General Assembly in New York City to discuss the challenges and opportunities for improved "value for money" within the Global Fund's New Funding Model. Panelists included Christoph Benn and Shu-Shu Tekle-Haimanot from the Global Fund, Stefano Bertozzi from UC Berkeley, Julia Martin from PEPFAR, and Mphu Ramatlapeng from the Clinton Health Access Initiative. The discussion highlighted recent successes in this space, including reductions in the costs of bed nets and other key commodities, and noted opportunities that exist around improving performance-based financing and verifying key health results. All agreed that global health funders can do more to improve their return on investment in health around the world. As Julia Martin from PEPFAR put it, "We aren't finished stretching our dollars – not even close." For more information about the event, go to www.cgdev.org/event/more-health-money-progress-and-potential-global-fund.
THE MOST MEANINGFUL METRIC IS LIVES SAVED
BY DEBORAH DERRICK, FRIENDS OF THE GLOBAL FIGHT AGAINST AIDS, TUBERCULOSIS AND MALARIA
Clear metrics can be hard to come by, but there is no shortage of indicators pointing to the critical moment in which we find ourselves with respect to AIDS, tuberculosis and malaria. Worldwide efforts have reduced HIV incidence by 33 percent, tuberculosis deaths by more than 40 percent, and malaria deaths in Africa by 33 percent in the past five to 10 years. A
number of recent and dramatic scientific advances and improved epidemiological data, combined with implementation experience, could enable us to finally turn the tide on these plagues.
But there is strong scientific evidence to suggest that taking our foot off of the gas pedal now could allow these epidemics to come roaring back; one need only look at the history of malaria eradication efforts to see this. As the world’s most powerful tool in this fight, the Global Fund to Fight AIDS, Tuberculosis and Malaria is working to make full use of these recent advances and the knowledge we have gained to maximize the return on our collective global health investments.
DEBORAH DERRICK is the President of Friends of the Global Fight Against AIDS, Tuberculosis and Malaria and a global health thought leader, with nearly two decades of policy and international development experience. She previously served as a Senior Program Officer at the Bill & Melinda Gates Foundation. Deborah also was Executive Director of the Better World Campaign, and earlier in her career worked as a senior advisor at the United Nations, at the State Department and on Capitol Hill.
New epidemiology allows us to better apply resources directly where they have the most potential for impact – both geographically, and among the most at-risk populations. Tests have become much more precise, enabling better tracking of incidence and providing the ability to pinpoint where the diseases are occurring. With this information, we can focus on driving transmission to low levels, making the spread of the diseases very inefficient. By concentrating investments on these “hot spots,” we can be much more strategic and effective with resources, achieving the same – if not greater – impact for less money.
SCIENTIFIC ADVANCES Innovations in science and technology have produced tools to more effectively prevent, diagnose and treat all three diseases. Studies
show that we can use existing interventions to decrease new HIV infection rates, giving us the ability to move from a pandemic to a low-level endemic. The first new tuberculosis drug in 50 years recently became available, four more are in the pipeline, and GeneXpert – a rapid diagnostic test – allows for diagnosis in a matter of hours as opposed to weeks. In the case of malaria, better insecticide-treated nets, artemisinin-based combination therapies (ACTs), and a new line of drugs are helping rein in the disease. These advancements could make the containment of AIDS, tuberculosis and malaria a reality within our lifetimes.
NOW IS THE TIME As an organization that continually learns and adapts, the Global Fund is poised to respond to this unique moment in history. By combining recent advances with its decade of implementation experience, the Fund and its partners are well positioned to attack these three plagues. In this challenging economic climate, the continued support demonstrated by donor countries is a testament to their confidence in the ability of the Global Fund to help meet this goal. This is an important point to bear in mind this year when the Global Fund will hold its Fourth Replenishment effort, asking for multiyear commitments from donors. Pledges made at the Replenishment conference this December will impact the Global Fund’s budget – and its ability to continue to deliver results – from 2014 to 2016. The world cannot afford to put off these promising investments for another three years. Doing so means we’d be walking away from an historic opportunity and potentially jeopardizing our investments so far. Continuing our efforts, however, will increasingly lead to one of the clearest returns on investment there is: more lives saved. n
VALUE FOR MONEY:
Narrative vs. Number
BY ALLISON BEATTIE, PH.D.
We in the development community have likely all wrestled with the concept of “value for money.” While it sounds straightforward, the concept can be difficult to pin down. This is because value for money is not only what is cheapest or most evidence-based or even what responds most immediately to an identified bottleneck. The final decision hinges on a narrative rather than a number, and human judgment rather than money and formulas. Demonstrating good value for money requires making the case that in a particular context, using available resources in the proposed way will deliver the best outcome and impact. At DFID, we have circled round the issue a number of times looking for ways to improve and strengthen our focus on value for money without losing vision or cutting-edge thinking. In the end, a case for investment, centered on a compelling value for money narrative, has several key ingredients:
➤ ACCURATE STATEMENT OF THE PROBLEM: Defining with precision the problem that a program aims to address is the essential starting point and can often be overlooked in favor of a broader analysis.
➤ EVIDENCE REVIEW: Evidence may include more than just proof that an intervention delivers a specific outcome and could also include cost-effectiveness data and efficiency data. If evidence is in short supply or inconclusive, the value for money narrative should explain how an approach will build or strengthen evidence.
➤ OPTIONS APPRAISAL: How could evidence be applied to address the identified problem? What is the cost? Which option is the best and why? Short-term and long-term costs and impacts should be considered. Deciphering what are and aren’t good options can build confidence in those selected.
➤ EXPLICIT THEORY OF CHANGE: This step links the program to the problem identified above: How will the chosen option work? What explicitly is meant to happen to ensure results that address the problem are identified? How will we know if it is successful?
➤ TRANSPARENT MONITORING AND EVALUATION: A strategy to demonstrate results and contribute to building the evidence base whether the program is successful or not (failure is equally important to record).

© GURMEET SAPAL

ALLISON BEATTIE, Ph.D., is a health and development practitioner. At DFID, she has served as Global Health Services Team Leader, acting Director of the Human Development Department and Head of Programmes in Zimbabwe. Allison lived and worked in Africa for more than 20 years as a policy adviser, researcher and program manager in Mozambique, Zimbabwe and South Africa. She has a Ph.D. from the University of London. Allison is currently on leave from DFID.

There are several excellent articles in this edition of Impact that talk about measurement and results. Taken together, this line of sight from well-articulated problem through evidence for action, theory of change and scrutiny of results leads to a more transparent and accountable use of funds, promoting better understanding about
underlying assumptions and making explicit anticipated transformations at individual, community and national levels. Over time, it may also broaden and strengthen public understanding about development decisions within and between donor and host countries, making the link between problems and programs more transparent and comprehensible. U.K. Secretary of State for International Development Justine Greening has repeatedly said she considers the value for money agenda to be an essential part of building transparency and accountability both in the U.K. and in development practice.

There’s a snag, though. With such an explicit concern for evidence and transparent monitoring and accountability, some fear (not without justification) that only programs based on existing strong evidence will be funded. Where does new learning come from? How can we remain ready to take risks, and what about experimentation and testing new ideas opportunistically? While this is a risk, its mitigation lies exactly in the strength of the narrative itself. If there is no evidence, we need to be upfront about that and demonstrate how the proposed intervention will contribute evidence to address a crucial problem; if it’s not clear whether the theory of change is sound, build that uncertainty into the program and design a program that can test alternative approaches. Good program design, incorporating flexibility and innovation while still based on solid value for money narratives, will walk the line between reassuring cautious decision-makers that funds are being soundly used and strengthening evidence, taking risks and pushing the boundaries of development. The value for money narrative can thus be a handy tool rather than a straitjacket. n
HOWARD WHITE is the Executive Director of 3ie, Co-Chair of the Campbell International Development Coordinating Group, and Adjunct Professor, Alfred Deakin Research Institute, Deakin University.
LESSONS FOR SUCCESSFUL RESULTS MEASUREMENT
BY HOWARD WHITE, 3IE
As both a knowledge broker and funding agency, 3ie funds impact evaluations and systematic reviews that generate evidence on what works in development programs and why. Over the past four years, it has made grants for roughly 140 studies in more than 40 countries. Below are the main lessons that 3ie, and others, have learned through the experience of conducting and managing impact evaluations throughout the past decade.
1/ TO UNDERSTAND RESULTS, LOOK TO IMPACT EVALUATION, NOT JUST OUTCOME MONITORING.
The rise of the results agenda has, rightly, focused attention on outcomes. This is good. But outcome monitoring is often used to “measure” results. This is wrong. Monitoring tells us what happened. It does not tell us why it happened. The “result” for an agency is that its programs make a difference to the outcomes. And for that we need a comparison of what happened with what would have happened in the absence of the intervention, i.e., estimation of a counterfactual. We have many cases in which outcomes have improved, or worsened, but these changes have nothing to do with development programs. This counterfactual is usually measured by the establishment of a comparison group, that is, a group of people who are, on average, similar to those in the program in all respects except that they don’t receive the program. Where it is an option, randomization is the best way to ensure this equivalence, though other impact evaluation methodologies may also be used.
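The monitoring-versus-evaluation distinction can be shown with a toy simulation, in which every quantity is invented: outcomes improve everywhere because of a background trend, so a before/after comparison among participants overstates the program's effect, while a randomized comparison group recovers it.

```python
import random

random.seed(0)

# Toy simulation (all numbers invented). Outcomes improve everywhere
# because of a secular trend, so before/after monitoring overstates
# the program's effect; a randomized comparison group recovers it.
TRUE_EFFECT = 5      # the program's real contribution (arbitrary units)
SECULAR_TREND = 10   # improvement that happens with or without the program
BASELINE_MEAN = 50   # known mean outcome before the program

people = range(10_000)
treated = set(random.sample(people, 5_000))  # randomized assignment

def outcome_after(person):
    base = random.gauss(BASELINE_MEAN, 10)
    effect = TRUE_EFFECT if person in treated else 0
    return base + SECULAR_TREND + effect

after = {p: outcome_after(p) for p in people}

# Outcome monitoring: change among participants vs. the baseline mean.
monitored_change = (
    sum(after[p] for p in treated) / len(treated) - BASELINE_MEAN
)
# Impact evaluation: treated vs. randomized comparison group.
control = [p for p in people if p not in treated]
estimated_impact = (
    sum(after[p] for p in treated) / len(treated)
    - sum(after[p] for p in control) / len(control)
)

print(f"monitoring says: +{monitored_change:.1f}")         # ~15: trend + effect
print(f"impact evaluation says: +{estimated_impact:.1f}")  # ~5: effect only
```

Monitoring attributes the secular trend to the program; the randomized comparison group nets it out, which is exactly the counterfactual logic described above.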
2/ BUT USE FORMATIVE RESEARCH AND EVALUATION FIRST.
Impact evaluations should be applied to all pilot programs to decide whether to go to scale or not. But new ideas should first be tested on a smaller scale, with formative research to inform program design and formative evaluation to see if the program can be successfully implemented with sufficient take up to make it worthwhile. Only after such studies should a larger pilot be conducted with an impact evaluation design.
3/ BE WARY OF RELYING ON PROXY OUTCOMES.
For outcomes that are infrequent or take a long time to materialize, it is common to rely on a proxy outcome. For example, since very large sample sizes are needed to measure maternal mortality, we may rely on proxies further down the causal chain. A program to reduce unwanted pregnancies, and therefore unsafe abortions, may take “couple years of protection” (CYPs) as its main outcome indicator, using model estimates to obtain the impact on maternal mortality. Unfortunately, several studies show that achieving proxy outcomes is often a poor predictor of success in affecting the final outcomes of interest. Conditional cash transfers, for example, work to increase utilization of health facilities, but most studies find no impact on health outcomes. So it really is better to collect data on final outcomes, even if it is more costly and you have to wait longer to do so.
4/ GLOBAL POLICY SHOULD NOT BE BASED ON SINGLE STUDIES.
An individual randomized controlled trial tells you if the program under evaluation worked for the population it reached. It should work for a similar population elsewhere in similar settings. But decisions to scale up beyond the specific context of a single study are best made on the basis of systematic reviews of all available evidence. The Cochrane Collaboration manages a library of thousands of reviews of health interventions, though only a small share of these reviews address health issues in low- and middle-income countries.
5/ BEYOND IMPACT, WE NEED TO KNOW COST-EFFECTIVENESS.
The decision to expand an intervention does not rest on just showing that “it works.” At what cost does it work? Cost-effectiveness analysis allows us to compare which type of intervention is most cost-effective in achieving a given outcome. Performing cost-effectiveness analysis requires a common outcome indicator across interventions. The health sector has led the way here with the widespread adoption of Disability Adjusted Life Years (DALYs), which were indeed used to design cost-effective basic health packages in a number of countries. The Disease Control Priorities Project summarizes what is known about the cost-effectiveness of a wide range of interventions. Where there are multiple outcomes, cost-benefit analysis should be used. Well-designed impact evaluations can help better inform these analyses. n
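The arithmetic behind such comparisons is simple: a cost-effectiveness ratio against a common outcome (here DALYs averted), and an incremental ratio when choosing between two options. The sketch below uses entirely invented figures, not numbers from the Disease Control Priorities Project.

```python
# Hypothetical cost-effectiveness comparison (all figures invented).
# DALYs averted serve as the common outcome indicator that makes
# different interventions comparable.

def cost_per_daly(cost, dalys_averted):
    """Average cost-effectiveness: dollars per DALY averted."""
    return cost / dalys_averted

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of option A over option B:
    extra dollars spent per extra DALY averted."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Two invented program options targeting the same outcome.
standard = {"cost": 1_000_000, "dalys": 8_000}
enhanced = {"cost": 1_600_000, "dalys": 11_000}

print(cost_per_daly(standard["cost"], standard["dalys"]))  # 125.0
print(cost_per_daly(enhanced["cost"], enhanced["dalys"]))  # ~145.5
print(icer(enhanced["cost"], enhanced["dalys"],
           standard["cost"], standard["dalys"]))           # 200.0
```

The incremental ratio is the decision-relevant number: the enhanced option buys each additional DALY at $200, and a funder would compare that against what the same money could avert elsewhere.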
HOW DATA DRIVES DECISIONS AT USAID
Impact Talks with Ellen Starbird
DIRECTOR OF THE OFFICE OF POPULATION AND REPRODUCTIVE HEALTH

ELLEN STARBIRD is the Director of the Office of Population and Reproductive Health. She has worked for USAID for 23 years and was the Deputy Director of PRH for the past seven years. Previously, she was the Chief of the Policy, Evaluation, and Communication Division in the same office. Before joining USAID, Ellen worked for three years at the Rand Corporation, principally on the Malaysian Family Life Survey.

IMPACT: How does USAID assess the effectiveness of its health investments?

ELLEN STARBIRD: USAID assesses the effectiveness of its health interventions by looking at trend data in health indicators that are related to the programmatic interventions that we support. For our family planning and reproductive health programs, contraceptive prevalence, improvements in birth spacing and increasing age at marriage are all measured by surveys, including the Demographic and Health Survey. Changes in these indicators can be related to our investments. USAID uses evaluation findings to inform decisions, improve program effectiveness, be accountable to stakeholders, and support organizational learning. Research tests the effectiveness of possible interventions and is used to identify high-impact practices for our family planning and reproductive health programs. Pilot studies and introduction studies test the effectiveness of interventions in specific contexts or countries. Those interventions that best "fit" a particular context (i.e., level of program development, epidemiological context, resources available, etc.) are selected.

IMPACT: USAID has a long history of using a “logical framework of results” to monitor health programs. Could you describe this framework and how it is used to facilitate decision-making?

ES: The logical framework is an important part of project design, as it identifies and briefly describes the problem the project intends to address and the expected outcomes of the project. The framework includes inputs, outputs, outcomes and impact. USAID uses Project Monitoring Plans to monitor at each step in this process. These plans examine answers to questions such as: Are inputs being delivered as planned? Are inputs leading to the anticipated outputs? Are outputs leading to the desired outcomes? If not, is the problem failure to deliver the input, or is the problem that inputs are delivered but for some unanticipated reason are not leading to the expected outcome?

IMPACT: USAID recently conducted a thorough review of its evaluation practices and developed a new policy on evaluation to guide the organization. What does USAID want to learn through implementation of this policy, and what does this mean specifically for health programs?

ES: USAID conducted this review to ensure that effective evaluations were taking place and guiding programmatic decisions. There was a concern that over the last several years fewer evaluations were being done, and the agency wanted evaluations to play a more prominent role in program decision-making. By implementing the new policy, USAID hopes to get a better understanding of the success with which its programs are implemented (process evaluations) and the impact of those programs (impact evaluation). This means that our health programs will put more focus on the implementation and impact of their projects, and that this information will guide future programming decisions. Ultimately, this creates a quality-improvement process, capturing experience to develop increasingly effective programs.
IMPACT: Can you share a recent example of receiving surprising results from work your office has been supporting? How did these results shape the decisions you and your colleagues had to make?
ES: In recent years, results from the DHS, especially those from Africa, showed an unexpected level of interest in and demand for long-acting contraceptive methods. These findings led us to expand our efforts to make these methods more widely available in acceptable, accessible and affordable ways. Another example is that survey and qualitative research have identified a substantial demand for contraceptive information and services among youth in developing countries. Mobile health (mHealth) programs are providing youth with access to information on methods and sources of supply via electronic communication. Information collected on these programs indicated that youth are interested in a wide variety of methods, including natural methods, injectables and longer-acting methods.
IMPACT: What are some challenges you anticipate in generating meaningful data for decision-making post-2015?
ES: As we continue to make progress, what and how we measure will also have to change. In the area of family planning and reproductive health, for example, we’ll need better measurement around costs, as well as a better understanding of how to measure choice and rights. The current data collection mechanisms in place will need to be adapted for such advances, or new ones will need to be developed. n
psi.org | impact
Should Put Data Front and Center
BY JODI NELSON, PH.D., BILL & MELINDA GATES FOUNDATION
The United Nations High-Level Panel report on the post-2015 agenda recently called for a “data revolution” to improve the accuracy and availability of development data. While this is a welcome change, it should not be mistaken as a panacea for improving measurement overall. Having the data to define relevant baselines and targets for the new set of goals, and to track progress toward achieving them, is a necessary but insufficient solution. There is more to do if we want to ensure that governments, NGOs and other partners have the tools they need to achieve development results that matter for people. A 2007 editorial in Nature titled “Millennium Development Holes” articulated what many saw as an emperor-with-no-clothes problem: the UN's “pseudoscientific” progress reports masked the "fact that the quality of most of the underlying data sets [was] far from adequate" and that "to pretend that progress towards the 2015 goals could be accurately and continually measured [was] false." The call to put data front and center is a move in the right direction. But it reinforces the idea that measurement’s key purpose is tracking progress, rather than informing decisions about how best to achieve the goals in the first place. Evaluation is the tool of choice for this purpose. Here, too, we’ve learned lessons since the Millennium Development Goals (MDGs) were defined. In 2006, a Center for Global Development report argued that evaluations tended to focus too much on monitoring and operational assessments, without adequately determining “which interventions work under given conditions, what difference they make, and at what cost.”
JODI NELSON is the Director of Strategy, Measurement and Evaluation at the Bill & Melinda Gates Foundation. Prior to joining the foundation, Jodi was the director of Research and Evaluation at the International Rescue Committee. She has a doctorate in Political Science from Columbia University with a concentration in political economy, and a B.A. from Northwestern University.
impact | No. 14
Today, governments, NGOs, and multilateral, bilateral and private donors alike are interested in this type of evidence and the evaluations that can produce it. According to the Center for Global Development, the number of impact evaluations, for example, has increased significantly, from 15 in 2000 to 120 today. But increasing the number of impact evaluations is also insufficient. Evaluations need to be useful to, and used by, those in a position to make decisions about how to improve people’s lives. As is the case with improving development data, this requires collective action to build new norms and shift priorities. Two key things stand out. First, we need to move away from investing in evaluating the impact of programs and toward evaluation that produces evidence of what works best to achieve impact. The former approach to evaluation is not likely to produce actionable evidence that decision-makers can use, unless the decision is whether to stop or continue a program. Evaluation that can inform policy or practice decisions should focus on whether some models, mechanisms and execution tactics are more effective than others at producing results, as well as how and why this is the case. This evidence can inform decisions about which approaches provide the most leverage for scarce resources, how to adjust and improve them, and what part of a bundled program is most powerful and should be replicated or scaled. Second, we need to concentrate scarce evaluation resources on investigating paths to achieve the most urgent goals. Impact evaluation dollars to date, for example, appear to concentrate on programmatic models that
were perhaps scaled and replicated without sufficient evidence (microfinance, conditional cash transfers, community-driven development). The alternative, more actionable approach would be to focus on those goals that need to be achieved most urgently and concentrate limited resources on evaluating innovative approaches conceived to accomplish them. Indeed, the call for a revolution in development data is a welcome sign of progress since the MDGs were first defined. But we have learned even more this decade about how to build a measurement toolkit that goes beyond tracking progress over time, to help decision-makers spread innovation and learn what works best to achieve results. n
FOUNDATION LAUNCHES NEW EVALUATION POLICY
As measurement changes get underway at the United Nations, the Bill & Melinda Gates Foundation has also been reviewing its evaluation practices. For the first time, the foundation has developed an evaluation policy for internal staff, grantees and partners. This policy articulates when, how and why the foundation uses evaluation as a tool for making decisions and learning what works best to achieve results. It is rooted in the organization’s business model, which involves working with partners to achieve the greatest impact.
To access the evaluation policy, visit the foundation’s website at www.gatesfoundation.org. The policy is available online beginning in November.
Enterprises Using Measurement for Social Change
BY EHREN REED, SKOLL FOUNDATION
The mathematician and physicist William Thomson, Lord Kelvin, once said, “If you cannot measure it, you cannot improve it.” The Skoll Foundation, which has funded innovative social entrepreneurs since 1999, applauds the pioneering ways in which its grantees put this idea into action. Like many social enterprises, the three organizations profiled here incorporate a focus on measurement common to corporate practice, but in this case to maximize their impact and to look at problems in a new way. Using cutting-edge measurement approaches, these organizations are addressing some of the world's toughest challenges.
As Research and Evaluation Officer for the Skoll Foundation, Ehren Reed is responsible for assessing the impact and effectiveness of the Foundation’s efforts in order to support ongoing learning and evidence-based decision making. He was previously a Director of Innovation Network, a Washington, D.C.-based evaluation consulting firm.
THE SOCIAL PROGRESS INDEX Designed by Harvard Business School Professor Michael Porter and the team at the Social Progress Imperative, the Social Progress Index was launched at the Skoll World Forum in April. It offers the most complete set of metrics currently available for measuring a nation’s success, ranking 50 countries by their social and environmental performance and pointing out where nations should focus their efforts to improve the well-being of their people. More countries will be added over time until at least 120 nations are included. Channeling Lord Kelvin, the Social Progress Index’s website notes, “To truly advance social progress, we must learn to measure it comprehensively and rigorously.”
CERES
Ceres mobilizes investors and business leaders to adopt sustainable business practices. Fifteen years ago, Ceres introduced the Global Reporting Initiative (GRI), which allows companies to measure and disclose their performance on four levels: economic, environmental, social and governance. To date, more than 5,600 organizations have produced sustainability reports using the GRI. To learn more, visit www.ceres.org.

THE GLOBAL FOOTPRINT NETWORK
While completing his Ph.D. at the University of British Columbia, Mathis Wackernagel developed the Ecological Footprint, which measures humanity’s demand on nature in comparison to available biocapacity. Or, in other words, how fast we consume resources and generate waste compared to how fast nature can generate new resources and absorb our waste. According to the Ecological Footprint, if everyone on Earth lived the lifestyle of the average American, we would need five planets. To learn more, visit www.footprintnetwork.org.
To learn more, visit www.socialprogressimperative.org.
Visit www.skollfoundation.org to learn more about our work or join in the conversation at www.skollworldforum.org.
Impact highlights four corporations managing large-scale philanthropy or corporate social responsibility initiatives around the world. In this section, we ask how these organizations use research and measurement tools to assess the impact and value of their work, what kind of measurement guides their business models, and how financial vs. social or environmental results are weighed against each other.

ELI LILLY AND COMPANY
AT A PROGRAMMATIC LEVEL, we designed our programs in ways that would address the challenges faced by the country in the disease areas that we address, and ensured that each program has a rigorous monitoring and evaluation framework that also includes measurement of patient outcomes. For instance, in our current work in India, the most important need is to ensure that the program addresses the implementation of diabetes care across an entire community, while in Mexico, our work addresses the implementation of diabetes care at the primary care level. That is a starting point, but to make this measurement more meaningful and relevant to business, we need to add new parameters that do not yet have a core place in all aspects of business, such as metrics for advocacy, and values metrics that show our business shift from a purveyor of medicines to also being a partner in healthcare management. Finally, we intend to add in the business bottom-line metrics. Bringing these disparate and often previously diametrically opposed values into one tool is something that takes time and evolves as projects mature. The Lilly Non-Communicable Disease Partnership highlights this commitment to measurement. Launched in September 2011, it is a major corporate responsibility initiative to address the global health crisis of non-communicable diseases (NCDs) in four emerging economies: Brazil, India, Mexico and South Africa. It combines our company’s unique resources with the expertise of leading global health organizations to identify new models of patient care that increase treatment access and improve outcomes for people in need. The campaign also employs a novel approach that immediately benefits healthcare providers and patients and, in parallel, assesses program outcomes. The campaign’s operational framework includes three components: ➤ Research: Pilot new, comprehensive models of healthcare based on sophisticated research and detailed data collection; ➤ Report: Work with well-respected partners to share data and lessons learned; and ➤ Advocate: Inform key stakeholders about program findings and encourage the adoption of proven, cost-effective solutions. The Lilly NCD Partnership is an extension of Lilly’s corporate responsibility efforts, but is closely integrated with the company’s core business. The program represents a modern approach to corporate responsibility known as “shared value,” created when a business applies its unique assets and expertise to a pressing societal need in which the company has a vested interest. —Dr. Evan Lee, Sr. Director, Global Health Programs & Strategy, Eli Lilly
EXXONMOBIL
OUR BUSINESS SPANS THE GLOBE, and no two operating environments are the same. Our approach to conducting our business and protecting the environment, for example, begins with a thorough understanding of the local environmental and socio-economic surroundings. Everywhere we do business, we determine how we can best serve the needs of communities and countries. To accomplish this, our citizenship group vets partners for superior track records in producing results, impeccable financial management and a strong commitment to collaborating with governments, NGOs and the private sector. We subscribe to the belief that if you can’t measure it, you can’t manage it. So, when data are not readily available to assess how best to explore a new business asset, we conduct our own research. For example, several years ago, we sought to expand our citizenship work by looking for a community investment program that could be applied across countries and address fundamental development needs. The results of this analysis pointed us to increasing economic opportunities for women. However, the data also showed that more granular knowledge was needed about which specific interventions worked best and which program design features were critical for success. Accordingly, we commissioned a two-year research project to answer these questions. This empirical research showed how simple investments like savings accounts can increase women's earnings; how mobile phones can deliver financial services and market information to women farmers and entrepreneurs; and, importantly, how very poor women need a more intensive package of services to see sustainable income generation. So now we have a roadmap of fact-based research, peer-reviewed data and randomized controlled empirical testing. We see these research findings as a new evaluation tool for impact investing and as another piece of the ongoing conversation to help us get better at assisting women and to bolster economic development around the world. —Jim Jones, Manager, Community Investments, ExxonMobil

JOHNSON & JOHNSON
JOHNSON & JOHNSON’S STRATEGIC PHILANTHROPY is focused on saving and improving the lives of women and children, preventing disease among the most vulnerable and strengthening the healthcare workforce. To maximize the impact and reach of our giving, we work closely with our partners to set clear program goals and incorporate solid practices to measure and assess the impact of their efforts. These collaborations have yielded a number of protocols and instruments that enable rigorous monitoring and evaluation practices. We develop unique measurement tools for each of our program areas. Our grant review process includes annual reporting on outcomes, and we develop common indicators that enable us to compare results across regions. In addition, directors of our major programs develop logic models to articulate a theory of change for their programs. We also have an internal evaluation team that works across all of our programs to ensure that we are aware of best practices and emerging trends. One example of our measurement approach is our Bridge to Employment (BTE) program, which seeks to strengthen health systems by supporting and encouraging young people from underserved communities to stay in high school and pursue careers in health care. The program has 13 sites in Africa, Asia, Europe, Latin America and the United States. Each site has its own implementation structure and local evaluator, and data on common indicators are rolled up across the sites. With comparison groups in place, we measure academic achievement, the number of students who attend college after completing the program, the quality of their analytical, communication and teamwork competencies, and whether BTE partner organizations continue, post-funding, to work together sustainably toward common goals. The program also emphasizes Johnson & Johnson employee mentoring. We collect data on mentors’ engagement and how the program has helped them develop their mentoring and management skills. —Michael Bzdak, Executive Director, Corporate Contributions, Johnson & Johnson

Two members of the Go Getters’ club, a program targeting female university students to address cross-generational sex in Uganda, funded by J&J.

Pfizer Global Health Fellow Jerid Lydic trains social marketing staff at PSI in Tanzania on medical detailing.
PFIZER MEASURES THE VALUE AND IMPACT of our corporate social responsibility efforts, with particular attention given to our impact on patients. We focus our corporate social responsibility activities on issues aligned with our areas of expertise and business interests – high among these are issues that relate to access to healthcare and medicines. We then measure and report on financial, environmental and social governance outcomes in an integrated annual report. This report includes Key Performance Indicators like the number of global programs and commercial transactions that increase access to medicines in emerging markets, and the number of the top 20 global burdens of disease addressed by our current products and those in our pipeline. One example of our work is Pfizer’s Global Health Fellows (GHF) Program, designed to improve access, quality and efficiency of health services for underserved populations through individual and team volunteer assignments that range from two weeks to six months. The following tools are used to measure the impact and value of the program: ➤ Strategic reviews are conducted every two years to ensure the program remains innovative, relevant and valuable for our partner organizations and Pfizer’s business. ➤ Both fellows and partner organizations take assessment surveys immediately after the fellowship and one year later. These surveys provide insight on the value of the partnership, related knowledge and impact. Fellows also participate in individual growth measurement surveys that are reviewed by their Pfizer managers to determine if they are meeting their business objectives. ➤ Pfizer also works with the Boston University Center for Global Health to gather data and develop case studies to understand and share how the program contributes to building capacity in ways that promote better health services. 
Since 2003, 377 Pfizer employees have completed an estimated 334,000 hours of skills-based volunteering, which is valued at approximately $49 million of pro bono service to partner organizations. Through GHF, Pfizer has partnered with more than 40 international development organizations in more than 40 countries. This year marks the 10th anniversary of the program. —Caroline Roan, Vice President, Corporate Responsibility, Pfizer Inc., and President, The Pfizer Foundation
Focus on Impact: Changing Corporate Community Investment
BY JON LLOYD, LONDON BENCHMARKING GROUP
Leading companies are taking their responsibilities to their communities more and more seriously. With this commitment comes an increasing drive to demonstrate what results from the contributions that are made, in terms of benefits both for the communities receiving support and for the business. It’s no longer sufficient to simply report what a company contributes – it's also necessary to explain what this contribution achieves, what the impact is.
HOW IS THIS CHANGING THE WAY COMPANIES WORK? This increased focus on impact is changing the way companies manage their community investment, not just how they measure it. It inevitably makes companies consider their objectives before they embark on a community initiative: why they should support it, what they hope to achieve by doing so, and how it will enable them to achieve their wider sustainability goals. As a result, community investment programs are becoming more strategic and focused. This is evident in companies like Unilever, whose foundation, launched in 2012, is built around five core partnerships all aimed at improving quality of life through the provision of hygiene, sanitation, access to clean drinking water, basic
nutrition and enhanced self-esteem. Another good example is KPMG, whose U.K. community program has been reconfigured to focus on improving employability, increasing access to the accountancy profession and strengthening the governance and capacity of the charitable organizations that it supports.
THE NEED FOR BETTER MEASUREMENT The focus on impact has also increased the need for effective measurement. If a company and its community partner have agreed-upon goals and targets, they need a way of assessing progress toward them. Fortunately, companies do not need to tackle this challenge in isolation. More than 300 major companies around the world (including GSK, Telefonica, Intel and 3M, as well as Unilever and KPMG) have worked
together to develop a solution to this issue, using the London Benchmarking Group (LBG) framework (www.lbg-online.net) to measure, manage and report the value and achievements of the contributions they make. The LBG framework enables companies to look at each activity they support in a consistent way; to establish what’s contributed (the resources committed to an activity), what happens (the activities that take place and the numbers involved) and what’s achieved (how the community and the company are better off as a result). Last year, LBG members were not only able to assess that they had contributed more than $2.5 billion to communities, but also to establish how many people and organizations had been reached (11,000,000 and 38,000, respectively), and how those people and organizations are better off as a result. As the ambition that drives corporate community investment increases, and companies work more closely with their community partners to deliver impact, they are also working together to address the associated need to assess and report their achievements in a credible, consistent and comparable fashion. n
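For readers who track community investment in a spreadsheet or database, the LBG framework's three questions (what's contributed, what happens, what's achieved) can be sketched as a simple record structure. This is an illustrative sketch only: the field names, activities and figures below are our own assumptions, not the official LBG schema or members' data.

```python
# A minimal sketch of LBG-style "input -> output -> impact" accounting.
# Activities and figures are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class CommunityActivity:
    name: str
    contribution_usd: int    # what's contributed: resources committed
    people_reached: int      # what happens: numbers involved
    orgs_supported: int
    impact_summary: str      # what's achieved: how the community is better off

portfolio = [
    CommunityActivity("School health program", 250_000, 12_000, 15,
                      "Improved attendance at partner schools"),
    CommunityActivity("Employee volunteering", 80_000, 3_500, 9,
                      "Stronger charity governance and capacity"),
]

total = sum(a.contribution_usd for a in portfolio)
reached = sum(a.people_reached for a in portfolio)
print(f"Contributed ${total:,}, reaching {reached:,} people")
```

Rolling activities up this way is what lets LBG members report portfolio-wide totals, such as the $2.5 billion contributed and 11,000,000 people reached cited above, in a consistent and comparable fashion.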
JON LLOYD is Head of London Benchmarking Group and an Associate Director at Corporate Citizenship, a global sustainability consulting firm. Jon has spent more than ten years helping companies apply the LBG framework to better measure, and therefore better manage, their involvement with, and impact on, the communities in which they operate. Jon managed the development of LBG’s current approach to impact measurement and devised the assessment framework that is currently used by LBG members.
BACK TO MEASUREMENT BASICS
BY FARRON LEVY, TRUE IMPACT
For years, companies seeking to justify and guide their investments in corporate social responsibility have looked to each new research study seeking linkage between corporate social responsibility (CSR) and profitability, stock price, growth and the like. This focus is misguided. Enterprise-level metrics are too blunt an instrument to help, for example, measure the value of individual community programs or cross-sector partnerships. And in any case, sophisticated regressions and longitudinal studies are too costly in terms of time, resources and technical expertise to be practical tools for already overburdened managers. While the quest continues for that elegant, singular equation that captures CSR’s total return on investment, we suggest an interim solution: refocus on bottom-line outcomes. Though pedestrian, this back-to-basics perspective can help you leverage everyday metrics on sales, recruiting, productivity and so on to piece together an ROI picture – both financial and social – that you can use to continuously improve your programs, build support among stakeholders and guide investment decision-making.
BUSINESS VALUE
A key principle of a bottom-line-outcome perspective is remembering that, regardless of your industry, there are only two ways to create business value: increase revenues or reduce costs. The business value of a CSR program is, then, the degree to which it contributes to either of these two outcomes. On the revenue side, programs can either help attract or retain customers, or enable a company to charge a premium for its goods or services. The monetary value of these impacts is the profit from the resulting transactions: generally speaking, revenues multiplied by profit margin. On the cost side, programs can increase efficiency by, for example, improving recruiting,
productivity or retention; or by reducing risk, energy use or waste. The monetary value of these impacts is the amount of time, materials, overhead and other costs avoided or reduced as a result.

WHILE THE QUEST CONTINUES FOR THAT ELEGANT, SINGULAR EQUATION THAT CAPTURES CSR’S TOTAL RETURN ON INVESTMENT, WE SUGGEST AN INTERIM SOLUTION: REFOCUS ON BOTTOM-LINE OUTCOMES.

For example, impacts on brand, employee satisfaction, professional development and reputation can all be valuable, but only to the degree they “drive” reduced costs and increased revenues. That is, increasing brand awareness is valuable if it helps attract or retain customers or employees, or otherwise lowers costs or increases revenues. If the increased awareness does not accomplish this, it holds no value. These basic dynamics often make monetizing direct business impacts a matter of simple arithmetic. In cases where a CSR program’s impacts on revenues or costs are indirect, the data collection demands may increase, but the underlying dynamics – and the arithmetic – remain largely the same.

SOCIAL VALUE
A focus on bottom-line outcomes can be just as useful in quantifying social value. Instead of dollars spent, volunteer hours invested or people served – all measures of investment (inputs) or goods or services delivered (outputs) – assessing how your program ultimately improves education, the community or other social causes is what measures the return on that investment (outcomes). In general, there are three ways to measure social value. Often the most valuable is a simple description of the resulting change in social condition. Typically expressed in a “quantity x quality” format, an environmental clean-up initiative can be measured in terms of how many acres of wetlands (quantity) improved from an unsatisfactory to a satisfactory level (quality); or an education initiative in terms of how many kids (quantity) improved from grade 1 to grade 2 reading levels (quality). The final two ways involve monetary measures. One is calculating a program’s socioeconomic value, or how a program results in increased revenues or reduced costs for society. For example, reduced demand on the social welfare system, crime-related services and state-supported healthcare lowers societal costs, and job-training programs that result in new taxpayers increase societal revenues. The second is calculating the market value of goods or services provided by a program. This common metric should, however, be calculated in terms of social outcomes (rarely done) to be accurate. For example, a company that donates $100,000 worth of computer equipment to nonprofits, only half of which actually end up using it, has provided no more than $50,000 in market value to the social cause.
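The arithmetic behind these measures is simple enough to capture in a few lines. As a rough illustration only — the helper functions and figures below are hypothetical, not True Impact's actual tools or methodology — the revenue-side and market-value calculations look like this:

```python
# Illustrative sketch: hypothetical helpers and figures, not True Impact's tooling.

def revenue_side_value(customers_attributable, revenue_per_customer, profit_margin):
    # Business value on the revenue side: profit from transactions
    # attributable to the CSR program (revenues multiplied by profit margin).
    return round(customers_attributable * revenue_per_customer * profit_margin, 2)

def market_value_delivered(donation_market_value, share_actually_used):
    # Market value counted in terms of social outcomes actually realized,
    # not just goods handed over.
    return round(donation_market_value * share_actually_used, 2)

# A program credited with attracting 200 customers at $500 each, at a 10% margin:
print(revenue_side_value(200, 500.0, 0.10))     # 10000.0

# The donated-computers example: $100,000 in equipment, half actually used:
print(market_value_delivered(100_000, 0.5))     # 50000.0
```

The point of the sketch is that once you know which bottom-line outcome a program touches, everyday figures are all the inputs the calculation needs.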
“BOTTOM LINE” Training yourself to focus on bottom-line outcomes, and to use models and estimates when data gaps appear, can help you to use everyday measures to evaluate even complex CSR programs and manage them to maximize returns. Indeed, you may find that you don’t need that elusive and magical CSR equation after all. That is, if it ever is discovered. n
FARRON LEVY is President of True Impact (www.trueimpact.com), which provides web-based tools and support services to help organizations measure their social, financial and environmental impacts.
Karl Hofmann PSI President and CEO
MOVING IN THE RIGHT DIRECTION
Something interesting is happening in global health. Organizations like PSI – that have worked for decades in the developing world, strengthening health systems, meeting the health needs of those hardest to reach, and building local capacity – are being joined by the corporate sector, which sees future business growth potential. Instead of working independently, these sectors – nonprofit and private – are working together. Profit and purpose are increasingly aligned. PSI is partnering with Unilever on handwashing. USAID announced a global initiative with Walmart. Other international NGOs have developed deep relationships with corporate partners. It’s not perfect, and there’s room for better coordination, but it’s exciting and increasingly effective.
We see this paradigm shift in various places. Thousands of corporations have signed on to the United Nations’ Global Compact, an initiative to advance sustainable business models and markets. And the UN's post-2015 sustainable development agenda has, from the start, embraced a multi-sectoral approach that includes a shift away from old models of corporate social engagement toward a more concrete "total value" partnership in which business, civil society and governments all advance a common agenda. In PSI’s area of focus, global health, NGOs benefit from private sector approaches, innovation, measurement practices, consumer insight, and a constant pursuit of “the next best” health technology or tool that can help turn the tide against major threats to global health. Private sector partners are able to expand into new markets with the help of established
NGOs that have a deep understanding of the local market, need and distribution channels. Governments benefit from access to private sector investment in programs and people that frees up national budgets to focus on other critical areas. Our beneficiaries/customers/taxpayers have improved access to products and services that, when collaboration works best, can help keep them healthy and moving forward. PSI has a 40-year history of applying business approaches to our nonprofit work, and we aim to help develop the public-private partnerships of tomorrow. When we invest in improving the health of communities in emerging markets, we invest in the growth of the global economy. This issue of Impact shows how a range of nonprofit, government and corporate development actors are tackling challenges through differing but increasingly aligned prisms of institutional priorities and culture. This is a good thing. We should have the full range of academic research, NGO program experience, corporate measurement instruments, and other tools at our disposal when determining how to achieve our goals, monitor and evaluate our work, and collectively set and meet an ambitious post-2015 agenda. From my vantage point, we’re headed in the right direction. n
▼ Karl Hofmann talks with Charlotte Kabirigi during a visit to Burundi. Charlotte, 33, has had seven pregnancies, but lost three of her children to malaria.
1120 19th Street, NW, Suite 600 Washington, D.C. 20036 p (202) 785-0072 | f (202) 785-0120 www.psi.org
Data and Decision Making