Evaluating Better Together 2018: An Evaluation Conference


Student-Led Conference

Morgridge College of Education Research Methods and Statistics Program
September 27-28, 2018
Katherine A. Ruffatto Hall
Keynote Speaker: Dr. Beverly Parsons, President and Executive Director, InSites



Dear Colleagues,

Thank you for joining us for our first annual Evaluating Better Together conference. In this effort, we seek to build authentic relationships between evaluators across Colorado through a series of sessions fostering dialogue, networking, and mentoring. The conference will provide a shared, safe space to examine challenges and lessons learned from past and ongoing evaluations. Presenters will share original work, gather feedback from a broad audience, and consider how best to translate evaluation theory into everyday practice. In 2018, we will also consider the state of evaluation in the region, reflecting on anticipated evaluation workforce needs and the preparation of aspiring evaluators to meet those needs.

Organized by faculty and advanced graduate students in the Research Methods and Statistics Program (RMS) at the University of Denver (DU), Evaluating Better Together is sponsored by the Organization for Program Evaluation in Colorado (OPEC), the Morgridge College of Education (MCE), and the Research, Evaluation, and Assessment Collaboratory Hub (REACH). Through this conference, we hope to continue OPEC’s meaningful and long-standing tradition of regional community-building through annual meetings at Peaceful Valley Lodge. Our steering committee first met in September 2017 to brainstorm how RMS graduate students might carry on this strong tradition while also helping young and aspiring evaluators become familiar with our professional community. This conference is a result of those conversations.

Evaluating Better Together joins a growing assembly of university-led, regional evaluation conferences, including the Edward F. Kelly Evaluation Conference (held in Canada and upstate New York from 1990-2014), the Emergent Voices in Evaluation Conference (held in Greensboro, North Carolina, from 2017 to the present), and the DC Consortium Student Conference on Evaluation and Policy (SCEP, held in the District of Columbia from 2017 to the present).
This event would not be possible without the generous efforts of our community, specifically our planning committee chairs: Debbie Gowensmith, Megan Kauffmann, and Jessica Morganfield. We are grateful to our steering committee members for their commitment to developing this conference: Carsten Baumann (OPEC), Antonio Olmos (RMS), and Yvonne Kellar-Guenther (CO School of Public Health). We are equally thankful for the organizations that contributed financial support to this inaugural effort. Finally, we would like to acknowledge the Morgridge College of Education, including Dean Karen Riley and Associate Dean Mark Engberg, for providing a meeting space for the conference in Katherine A. Ruffatto Hall.

We are happy that you can join us for this inaugural event and hope you enjoy your time on campus! In this conference program, you will find additional information about the structure of the conference and descriptions of each session. We hope this information will be useful for navigating your conference experience. If you have any questions or concerns, please don’t hesitate to reach out to one of our planning committee or steering committee members.

Sincerely,

Robyn Thomas Pitts, Ph.D.
Assistant Professor, Evaluation & Mixed Methods Research
Morgridge College of Education
University of Denver

TABLE OF CONTENTS

Planning & Steering Committee ............................................ 5
Acknowledgements ......................................................... 6
Keynote Speaker: Dr. Beverly Parsons ..................................... 7
Getting to DU ............................................................ 8
At-a-Glance ............................................................. 10
Full Schedule ........................................................... 12
Evaluation at Morgridge ................................................. 27
Presenter Bios .......................................................... 28


Planning Committee

The co-chairs of the planning committee are advanced doctoral students from the Research Methods & Statistics Program at the University of Denver:

Debbie Gowensmith, Vice President of Groundswell Services, Inc.

Jessica Morganfield, Data and Targeting Manager with the Colorado Civic Engagement Roundtable

Megan Kauffmann, Performance Analyst within the Colorado Department of Human Services

Steering Committee

Robyn Thomas Pitts, Ph.D., Assistant Professor of Evaluation and Mixed Methods Research at the University of Denver

Carsten Baumann, Manager of the Early Childhood Evaluation Unit and External Evaluations at the Colorado Department of Public Health & Environment

Antonio Olmos, Ph.D., Executive Director of the Aurora Research Institute and Adjunct Professor at the University of Denver

Yvonne Kellar-Guenther, Ph.D., Clinical Associate Professor at the Colorado School of Public Health and Senior Research Scientist, Center for Public Health Innovation at CI International


ACKNOWLEDGEMENTS

This conference was made possible through the contributions and volunteer efforts of our local evaluation community. The Planning and Steering Committees extend our gratitude to the following individuals and organizations who graciously provided support:

Sponsors
• Morgridge College of Education Higher Education Department
• Morgridge College of Education Research Methods and Statistics Program
• Morgridge College of Education Teaching & Learning Sciences Department
• Research, Evaluation and Assessment Collaboratory Hub (REACH)
• The Colorado Evaluation Network
• The Organization for Program Evaluation in Colorado

Conference Support
• Dr. Karen Riley, Dean
• Dr. Mark Engberg, Associate Dean
• Cristin Colvin, Digital Marketing Specialist

Student Volunteer Committee (Research Methods and Statistics Program, Morgridge College of Education)
• Kyle Brees, M.A. Student
• Menglong Cong, Ph.D. Student
• Katie George, Ph.D. Student
• Beth Gregory, Ph.D. Student
• Emma Lawless, M.A. Student
• Jenn Light, Ph.D. Student
• Lin Ma, Ph.D. Student
• Suzy Matevosian, M.A. Student
• Maggie Patel, Ph.D. Student

We extend a special thank you to our presenters and to our keynote speaker, Dr. Beverly Parsons, for her contributions to this inaugural effort.


KEYNOTE SPEAKER: DR. BEVERLY PARSONS

Beverly Parsons is President and Executive Director of InSites, a Colorado-based non-profit research, evaluation, and planning organization. InSites promotes sustainable, equitable, systems-based social and ecological change in the U.S. and internationally. Beverly’s work at InSites focuses on evaluation designs for multi-year, multi-site initiatives in education, health, social services, and environmental fields.

Beverly’s academic background has equipped her to apply systems thinking and systems theory to her work in evaluation design. She holds a Ph.D. and M.A. from the University of Colorado in Educational Research and Evaluation, a B.S. from the University of Wisconsin in Medical Technology, and certificates in Sustainable Business, Human Systems Dynamics, and Appreciative Inquiry Facilitation.

As president of the American Evaluation Association in 2014, Beverly focused on visionary evaluation for a sustainable, equitable future. She has worked with organizations such as the W.K. Kellogg Foundation, the Robert Wood Johnson Foundation, the Danforth Foundation, the University of British Columbia’s Faculty of Medicine, networks of school-university partnerships, the National Science Foundation, and the Center for the Study of Social Policy. In addition to evaluations throughout the U.S., she has consulted and conducted evaluations in China, Japan, Europe, Brazil, Nepal, and South Africa.

Earlier in her career, Beverly worked at the Education Commission of the States (ECS), a Denver-based commission that collaborates with the nation’s governors, legislators, and state education leaders on education policy and leadership. She led ECS’ systems redesign initiative (with the Coalition of Essential Schools) to update the underlying paradigm of teaching and learning undergirding the nation’s education system. In her early years at ECS she served as Director and Associate Director of the National Assessment of Educational Progress.
Previously she served as Director of the Assessment and Measurement Program at Education Northwest (Portland, OR). She also worked and lived on the Pine Ridge Reservation in South Dakota and held university adjunct teaching positions. Beverly lives with her husband in Hansville, WA and enjoys hiking and organic gardening.



GETTING TO DU: LIGHT RAIL

From the Light Rail (RTD) University of Denver station:
• WALK SOUTH on High St. to Evans Ave. Katherine Ruffatto Hall is on your left. If you cross Evans Ave., you have gone too far.


GETTING TO DU: I-25

From Highway I-25, take the University Blvd exit (exit 205):
• GO SOUTH on University Blvd.
• TURN RIGHT onto Evans Ave.
• TURN LEFT onto High St. Parking Lot E is directly on the LEFT.







FULL SCHEDULE: THURSDAY, SEPT 27

Registration 9:00-9:30, First Floor Foyer
Opening Remarks 9:30-9:45, KRH Commons
Breakout Sessions I (Sessions A-C) 10:00-10:45

Session A (KRH Commons): Presentations

A1: Trailblazing in Program Evaluation: How We Conducted a Program Evaluation Without a Road Map
Neil Gowensmith, Associate Professor, Graduate School of Professional Psychology, University of Denver

In 2018, a small research team at the University of Denver (DU) conducted an internal program evaluation of an innovative psychology service program housed on campus. This presentation will discuss the approach and lessons learned from the program evaluation, whose time frame spanned the first two years of the program’s existence. An emphasis will be placed on challenges and solutions in evaluating a unique program for which no analogous programs exist: informally speaking, we were trailblazers.

In 2015, the Graduate School of Professional Psychology (GSPP) launched an outpatient competency restoration program (OCRP), coordinated by the University of Denver’s Forensic Institute for Research, Service, and Training (Denver FIRST). Briefly, the program provides mental health and educational services for individuals who are charged with minor criminal offenses and are unable to participate meaningfully in their court hearings due to symptoms of mental illness, cognitive impairments, and/or developmental delays. The program is staffed by graduate student therapists and supervised by licensed psychology faculty members. It replicates other OCRPs around the country, but no such program had ever been attempted on a university campus with trainees as the primary service deliverers.

We collected demographic and outcome data on the first 50 participants in the OCRP, spanning the program’s first two years. Results showed a mix of positive and negative outcomes.
However, the results were extremely helpful in adjusting the structural components of our program. The program evaluation illuminated several areas for improvement, and changes have either been implemented or are currently in process. The program evaluation itself was challenging because no such evaluation had been published on an OCRP. Further, as mentioned above, our OCRP is qualitatively different from other OCRPs. We struggled as an evaluation team with how to conduct a program evaluation from scratch, and how to make it valid, reliable, and meaningful. We utilized approaches from established program evaluation tenets and implemented some approaches used in other forensic psychology programs. We also customized variables and procedures according to our own needs. We will discuss the process of developing our program evaluation from scratch and how we addressed the challenges faced therein. We will also discuss how the results have been used to improve our program, and make recommendations for future evaluation protocols.


A2: Mixed Methods in Program Evaluation: Integrating Quantitative and Qualitative Data to Address Evaluation Goals and Objectives
Laura Meyer, Clinical Associate Professor, Graduate School of Professional Psychology, University of Denver

The mixed methods approach to research is well-suited to the demands and complexities of many evaluation projects. In mixed methods designs, both quantitative and qualitative data are collected, analyzed, and synthesized. The salient features of this methodology facilitate a holistic, flexible, and comprehensive approach to evaluation that provides empirical evidence of progress toward program goals while capturing the real-life impact of program activities on diverse stakeholder groups. Flexibility is a key factor in appropriately designing and conducting program evaluations, and it suggests the need for an applied approach that supports timely, sensitive decision-making; is responsive to stakeholder needs; and can easily be modified according to situational demands and unexpected challenges. Mixed methods designs represent a strengths-based approach to evaluation that capitalizes on the objectivity of quantitative research while honoring stakeholders’ voices. For example, in summative evaluations, qualitative data may be used to illustrate and add depth to statistical findings, highlighting their relevance and implications for stakeholder groups. In formative evaluations, quantitative and qualitative data may be integrated to suggest innovative research strategies, reveal concerns and challenges, and facilitate deeper understandings of critical issues and stakeholder perspectives.
This presentation will focus on data collected for three major projects completed by the presenter: a statewide environmental scan on adolescent mental health; a needs assessment of truancy services in three Colorado school districts; and a large, multiyear Health Resources and Services Administration grant conducted with justice-involved individuals and those who serve them. The presenter will provide examples that illustrate the ways in which the words of various stakeholder groups “humanized” and enriched the closed-ended numerical data; the use of statistical analyses to suggest the scope and generalizability of the issues gleaned from interview and focus group data; and the ways in which evaluation conclusions and decisions were reached through the integration and interpretation of the two types of data.

Session B (KRH 206): Teaching Session

Preventing and Addressing Common Evaluation Challenges
Annette Shtivelband, Founder/Principal Consultant, Research Evaluation Consulting

Evaluation can be tricky for many social sector agencies. There are so many decisions to make, and all the while these organizations may wonder whether they are asking the right questions, collecting valuable information, and finding ways to measure and demonstrate their impact. Social sector agencies may also not fully understand the value of evaluation, and internal efforts, while rooted in good intentions, can waste valuable time and resources without appropriate guidance and support. In this interactive teaching session, attendees will learn about common evaluation challenges in applied settings and how to prevent and address these issues as they happen. The session will provide ample opportunities for discussion and brainstorming, offering both relevant and timely content and chances to learn from one another.
Attendees will learn how to avoid common evaluation challenges and walk away with improved evaluation practices.

Learning Objectives:
1. Learn about common evaluation challenges
2. Identify solutions for evaluation challenges
3. Improve evaluation practices



THURSDAY, SEPT 27 continued

Session C (KRH 105): Problem Solving Roundtable

Challenges in Museum Evaluation: New Strategies for Persistent Problems
Valerie L. Williams, Senior Program Evaluator, University Corporation for Atmospheric Research
Laureen Trainer, Principal Evaluation Coach, Trainer Evaluation

The National Center for Atmospheric Research (NCAR), located in Boulder, CO, is one of the country’s leading resources for scientific research related to Earth system science. In 2016, NCAR installed a new climate exhibit designed to increase the public’s understanding of a changing climate and encourage visitors to reflect upon their current responses and potential future actions. A summative evaluation of this climate exhibit is being conducted to determine how well the exhibit is achieving its intended goals. The evaluation draws on several data sources, such as visitor observation (timing and tracking) studies, an online visitor survey, focus groups with the exhibit’s targeted audiences, and brief clipboard questionnaires. Despite this plethora of data sources, each of these methods has presented several challenges.

In this problem-solving roundtable, Valerie Williams, Senior Program Evaluator at UCAR, and Laureen Trainer of Trainer Evaluation will provide a brief overview of the goals of the exhibit, describe the methodologies they are using to evaluate it and why those were chosen, and present some of the challenges they have encountered in their summative evaluation of NCAR’s climate exhibit. For example: Do you collect data when a major interactive is down, and if so, what parameters should be used during the data analysis phase? Where is the best location for a QR code for an online survey? How do you reduce positive bias in data collected from tour surveys about an exhibit? How do you encourage visitors to complete a visitor book? How do you reconcile the behaviors of different groups of visitors (families vs. science-driven visitors vs. hikers/bathroom users)?

The roundtable will begin with a brief presentation of the evaluation tasks completed thus far and the major questions that have emerged before opening for general discussion. Although these questions were raised in the course of conducting this evaluation, many of them are likely applicable to other museum evaluations. Our goals for this session are to:
1. Solicit feedback from attendees on some of the questions we have wrestled with,
2. Encourage a robust discussion around problems that are common to collecting data in museums, and
3. Inspire participants to build off of each other and think of new ways to look at traditional data collection issues.

Coffee Break 10:45-11:00, First Floor Fountain Lobby
Breakout Sessions II (Sessions D-F) 11:00-11:45


Session D (KRH Commons): Presentations

D1: Using Objective Evaluation to Guard Against Subjective Bias
Neil Gowensmith, Associate Professor, Graduate School of Professional Psychology, University of Denver

Forensic psychologists regularly conduct forensic evaluations pursuant to judicial proceedings, including evaluations of child custody, sexual harassment claims, personal injury claims, competency to stand trial, the insanity defense, and many others. Courts rely on these evaluations to help decide cases. The adversarial court process is fundamentally based on the notion that these evaluations are neutral and objective; in other words, it shouldn’t matter who does the evaluation, because the facts should speak for themselves. However, recent research indicates that evaluations and evaluators are not as objective as presupposed. For example, evaluators can be biased by the side that retained them, by fees, and by internal metrics of justice, prejudice, or skepticism. The field of psychology is replete with theories to identify potential areas of bias and tactics to safeguard against them. What doesn’t work is ignoring the potential for us to fall victim to biases. Most of us like to believe that we are not biased even as we show great concern for others who are not so fortunate; of course, this is not true. The poorest judge of bias is our mirror’s reflection, and forensic psychologists are no exception. When faced with emerging research that bias is alive and well in evaluations and evaluators, evaluators have used the same reasoning: other evaluators, maybe, but not me.

To that end, I worked with students to create a bias-detection database with 25 independent variables and four dependent variables. Independent variables were chosen that could impact my opinions: defendants’ ethnicities, severity of the crimes, the fees I was paid, and so on.
Analyses revealed what effects these independent variables had on my outcomes (my final opinions, whether the opinion was favorable to the side that retained me, etc.). This presentation will discuss how the database was created, what challenges were faced, how we blended a priori variables based on clinical experience with those found in the existing literature, how the data were analyzed, and how results were interpreted, especially when those results were less than favorable.

D2: Health eMoms in Colorado: Adapting Online Panel Methods for Public Health Surveillance and Evaluation
Sarah Blackwell, Panel Survey Methodologist and Operations Manager, Center for Health and Environmental Data, Colorado Department of Public Health and Environment

The effectiveness of traditional modes of public health surveillance is in decline: response rates are falling, administrative costs are rising, and federal funding is dwindling. Recognizing a need for innovation to collect high-quality maternal and child health (MCH) data for program planning and evaluation at a sustainable cost, the Colorado Department of Public Health and Environment (CDPHE) has developed Health eMoms, a web-based longitudinal surveillance system for the collection of timely, flexible data on the perinatal and early childhood periods. Health eMoms collects information on mothers’ attitudes, behaviors, and health and social conditions from shortly after birth until their child’s third birthday. In April 2018, CDPHE began drawing monthly simple random samples of 200 mothers from live birth certificate records and recruiting these mothers by mail at 2-4 months postpartum to enroll in the Health eMoms online platform. Enrolled mothers receive online surveys every 6-8 months by email and Short Message Service during the three-year study period.
The six 10-minute surveys are being developed in collaboration with CDPHE programs and partners to fill data gaps and enhance existing MCH surveillance and evaluation. Survey topics include vaccine hesitancy, breastfeeding, mental health, parental leave, illicit and prescription drug use, child social and emotional development, social determinants of health, and more. Cross-sectional and longitudinal data will be weighted to represent the annual Colorado live birth population. In the future, Health eMoms may incorporate more online panel design elements, allowing for rapid and targeted data collection on priority or emergent issues among mothers in Colorado. During this presentation, CDPHE will describe the reasons behind the development of Health eMoms, detail the development process, share early successes since implementation in April 2018, and discuss the current and future opportunities the Health eMoms surveys and methodology provide for evaluation in Colorado.
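The weighting step described in this abstract can be illustrated with a toy post-stratification sketch. Everything below (the strata, the counts, and the function itself) is an invented teaching example, not CDPHE’s actual weighting methodology:

```python
# Toy post-stratification sketch: weight each respondent so that the
# weighted sample reproduces known population totals within each stratum.
# Strata and counts are hypothetical, not CDPHE's actual weighting classes.
from collections import Counter

def poststratification_weights(sample_strata, population_counts):
    """Weight = (population count in stratum) / (sample count in stratum)."""
    sample_counts = Counter(sample_strata)
    return [population_counts[s] / sample_counts[s] for s in sample_strata]

# Hypothetical maternal age-group strata for six sampled respondents.
sample = ["<25", "<25", "25-34", "25-34", "25-34", "35+"]
population = {"<25": 10_000, "25-34": 30_000, "35+": 8_000}
weights = poststratification_weights(sample, population)
# The weighted sample now sums to the (invented) annual live birth total.
```

Summing the resulting weights recovers the population total (48,000 here), which is the sense in which weighted estimates "represent" the annual live birth population.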



Session E (KRH 206): Teaching Session

Applying an Equity Lens to Our Work
Laura Sundstrom, Evaluator, Vantage Evaluation

This past summer, the Colorado Evaluator’s Network hosted an Equitable Evaluation design lab with 15 organizations. As part of this process, Vantage Evaluation and Spark Policy Institute tested new ways of incorporating an equity lens into their work. Vantage and Spark work in different contexts and tested different approaches, yet their efforts to apply an equity lens overlapped in notable ways. During this session, we will each discuss why we thought it was important to do this work now, what we tested and why, and what we learned through the process, and we will share tips for applying an equity lens to your own evaluation work.

Vantage Evaluation took on two projects over the summer: one internal and one external facing. Internally, we facilitated a learning club with staff with the goals of taking one step forward in our personal understandings of equity and of building a shared understanding at Vantage of equity, equitable evaluation, and how we marry our evaluation approach and equity. Externally, we tested language with clients about multicultural validity, both to raise awareness about forms of validity beyond methodological validity and to make it ok for our clients to value different forms of validity.

Spark Policy Institute also took on two projects over the summer. First, we continued work to update and refine an equity rubric, gathering input through an internal review and an external working group, as well as a series of pilot tests both internal and external to Spark. Second, we committed to updating and refining our open-source Equity Toolkit to promote equity in how we think about, design, and implement evaluation practice, using our internal Research and Evaluation Learning Lab to review and discuss current equity documents, publications, and online resources.
Session F (KRH 105): Problem Solving Roundtable

Troubleshooting Clinician-Rated Mental Health Recovery Scales: Skewed Response Distributions
Mark Leveling, Doctoral Student, Research Methods and Statistics Program, University of Denver

Behavioral health providers are increasingly asked to demonstrate the effectiveness of their services through outcome measures, as regulators and funders of the behavioral health field want to see providers demonstrably improve the overall condition of their clients. In practice, mental health professionals often refer to this process of improvement as recovery. Recovery is a broad term, often speaking to the concepts of hope, positive self-image, symptom management, and personal independence. Overall, the underlying ethos of the recovery movement is that managing mental illness is both possible and a lifelong process. The social connectedness and communication skills of persons with mental illnesses have been explored extensively in the recovery literature, which supports the theory that persons with mental illnesses experience better outcomes when they have meaningful communications and interactions with others. The roundtable will examine two scales designed to measure recovery and communication skills in persons with mental illnesses.


The topic of discussion for the roundtable is skewed response distributions in scale development. The problem will be discussed in the context of measuring mental health recovery and communication skills for persons with mental illnesses from the clinician’s perspective. The presentation will explore the processes and methods used to develop and test the two recovery-oriented scales, with attention given to the issue of adaptive reserve in clinical practice. Further, the presentation will examine the challenges of measuring recovery in persons with mental illnesses. Overall, the goal of the session is to develop a strategy to improve the heterogeneity of scale response distributions. The data presented at this roundtable were collected over the course of four years by the principal investigator through his work at a community mental health organization. The presentation will provide follow-up on a previously unpublished paper by the principal investigator, Lilian Chimuma, and Dr. Kathy Green assessing the initial attempt to measure the recovery process from the clinical perspective.

Lunch 11:45-1:00, Outdoor Classroom or KRH Commons

State of the Workforce Panel 1:00-2:30, KRH Commons

What’s Going On With Evaluation in Colorado?
Elena Harman, Owner and Lead Evaluator, Vantage Evaluation
Laura Sundstrom, Evaluator, Vantage Evaluation

What do students and new evaluators need to know entering the evaluation field in Colorado? Vantage Evaluation will host a panel of evaluation practitioners highlighting the current state of evaluation in Colorado, including the skills needed in different evaluation roles, how to align evaluation training with those skills, and potential workforce needs. The panel will be composed of a consultant, an internal evaluator, a foundation evaluator, and an academic evaluator.
We will discuss what makes each position unique in the evaluation field in Colorado, the challenges associated with each position, the skills new evaluators need to enter the field and the skills they can anticipate developing along the way, and opportunities for new evaluators. The discussion will also be open for audience questions to engage new evaluators in the conversation and begin building relationships. The goal of this panel is not only to educate attendees about the evaluation field in Colorado and potential career paths, but also to start building relationships across sectors and experience levels.

Coffee Break 2:30-3:00, First Floor Fountain Lobby
Breakout Sessions III (Sessions G-J) 3:00-3:45

Session G (KRH Commons): Teaching Session

Using Facilitation Skills in Evaluation
Maggie Miller, Principal, Maggie Miller Consulting

Sure, you can facilitate a focus group or interview, but how do you use facilitation in the entire soup-to-nuts evaluation process? In this interactive workshop, we will first map out the steps of the evaluation process with desired levels of stakeholder engagement, just to make sure we’re on the same page. We will also generate a broad list of facilitation skills; then we’ll get down to the nitty-gritty of specific skills. Maggie will address the logistics of facilitating and the preparation needed to do so. She’ll also talk about the tricks of using three different parts of your brain at the same time (the agendacizing part, the attending part, and the analyzing part). We will touch on a few key coaching tricks, and we will remember to acknowledge our own cultural humility related to models of facilitation. (Maggie will present this workshop with her “outside evaluator” hat on, but will work with you to adapt the content if you are in-house.)


Session H (KRH 206): Teaching Session

Engaging Stakeholders in Project Evaluation Through Process Mapping and a Brainwriting Premortem
Heather Gilmartin, Investigator/Nurse Scientist, Rocky Mountain Regional VAMC; Seattle-Denver Center of Innovation for Veteran Centered and Value-Driven Care
Chelsea Leonard, Methodologist, Rocky Mountain Regional VAMC; Seattle-Denver Center of Innovation for Veteran Centered and Value-Driven Care
Marina McCreight, Health Science Specialist, Rocky Mountain Regional VAMC; Seattle-Denver Center of Innovation for Veteran Centered and Value-Driven Care

Implementation and evaluation of evidence-based practices in community settings is a complex process. Engaging stakeholders in program evaluation can provide insights into the processes, practices, structures, and context that support or hinder program success. Recent research demonstrates that process mapping and a brainwriting premortem are valuable and feasible methods to collect stakeholder perceptions, identify barriers to the spread and scale-up of programs, and inform program adaptations that support sustainment.

Process mapping is a Lean Six Sigma tool that visually presents process data to help identify the current state of a process in question, inefficiencies in the process, and areas for improvement. The method identifies each step in a process, which staff perform those steps, how long and what is needed to complete each step, and what is necessary to move the process forward. A brainwriting premortem is a group activity in which stakeholders silently share written ideas about potential failure points within a program. The method facilitates the rapid generation of a large number of ideas in an environment where everyone feels safe to share.
The two methods have been combined into a single, 60-minute work group session and are taught using a standardized toolkit developed as part of a national Veterans Health Administration quality improvement initiative. This teaching session will introduce the process mapping and brainwriting premortem toolkit. Participants will practice each skill with peers during the workshop using an action learning, case-study approach.

Objectives:
1. Describe the background, methods, and application of process mapping and brainwriting premortem in program evaluation
2. Practice the core components of process mapping and brainwriting premortem, as outlined in the toolkit
3. Discuss how the methods can inform program planning, evaluation, and spread and scale-up


Session I KRH 403
Teaching Session
The Program Was a Success! (But Did it Work?)
Christine Garver-Apgar, Director of Research and Evaluation, Behavioral Health and Wellness Program; Assistant Professor of Psychiatry, University of Colorado Denver School of Medicine

This session is designed to explore methods and challenges of evaluating whether a program had the intended effects among individuals in a target population (e.g., patients, clients, employees). There are almost limitless ways of assessing the success of a program but very few ways of assessing whether observed changes among individuals in a target population can be directly attributed to implementation of the program itself (i.e., “Did the program work?”). In this session, we will first briefly discuss methods and study designs that allow evaluators to draw firm conclusions about the impact of a program at the level of individual outcomes. We’ll discuss the important role evaluators can play in educating both funders and key stakeholders about the knowledge that will (and, importantly, will not) be gained from various evaluation models. Then, we will discuss practical challenges and lessons learned from using more methodologically rigorous approaches in real-world settings, with a focus on programs implemented in community-based organizations.

Session J KRH 105
Problem Solving Roundtable
How Nonprofit Evaluators Can Address the Conflicting Evaluation Priorities of Nonprofits and Their Funders
Anne Marie Runnels, Evaluation Advisor with Research Evaluation Consulting

Nonprofits and donors often have different perspectives about the best way to conduct program evaluations (York, 2005; Zinn, 2017). Specifically, they often disagree about the purpose of evaluation, which program elements or outcomes are measured, and which methods are selected.
Evaluators must be cognizant of tensions in these three areas and position themselves to resolve these conflicting priorities while conducting practical evaluations that meet standards of excellence.

Motivations: Nonprofits generally conduct evaluations for internal learning purposes, whereas funders seek evaluations as accountability measures (Morariu et al., 2016; York, 2005). Evaluators are inclined to prioritize both objectives but must learn to balance the two perspectives within the confines of nonprofits’ budget and staff capacity.

Measures: Nonprofits tend to focus more on short-term outputs (i.e., what they do) rather than measuring the long-term outcomes and impact (i.e., what happens because of what they do) that donors request (Morariu et al., 2016; York, 2005). Evaluators bring value and support because they recognize the need to move beyond simply measuring outputs, while also recognizing that some nonprofits need more preparation before pursuing long-term outcome and impact evaluations.

Methods: Nonprofits tend to prefer less-rigorous evaluations conducted in-house, but the donor may expect a rigorous study conducted by an independent evaluator (York, 2005; Zinn, 2017). Evaluators are in an interesting position in that they value the more-rigorous approaches requested by funders but also have a realistic viewpoint on which methods are feasible, given the limited capacity of some nonprofits.

The objectives of this Roundtable Discussion are as follows:
1. Learn and discuss some of the disparities between the goals of nonprofits and their financial sponsors with regard to program evaluation
2. Discuss practical ways in which evaluation advisors can work with nonprofits to build their evaluation capacity
3. Brainstorm ideas for how to bridge the gap between funders and nonprofits and discuss the evaluator’s role in this process
4. Develop a list of best practices and discuss resources to guide future evaluations of nonprofits



FULL SCHEDULE: FRIDAY, SEPT 28

Registration 9:00-9:30 in the First Floor Foyer

Opening Remarks 9:30-9:45 KRH Commons

Keynote Speaker 9:45-10:45 KRH Commons
Keynote Address: Thinking in Systems to Enrich Evaluation
Beverly Parsons, PhD, President and Executive Director of InSites

Locally and globally, our world is interconnected and complex. Understanding and addressing the nature of these entanglements is critical to useful evaluation. Thinking in systems is one way to do this. No matter where you are in the evaluation process, thinking in systems can increase the insights gained from your evaluation.

Coffee Break First Floor Fountain Lobby 10:45-11:00

Breakout Sessions IV (Sessions K-M) 11:00-11:45

Session K KRH 409
Presentations
K1: In Their Own Words: Using Photovoice to Evaluate Programs Involving Marginalized Groups
Debbie Gowensmith, Vice President, Groundswell Services and Doctoral Student in the Research Methods and Statistics Program, University of Denver

The value of community reentry programs for persons released from correctional facilities is frequently judged against empirically validated criteria such as the “Central Eight” criminogenic factors. Little attention, however, has been given to the perspectives of the returning citizens themselves. Firsthand perspectives from these individuals could provide insight into potentially critical yet understudied issues in the evaluation of reentry programs. Photovoice is a qualitative, participatory research method developed to help marginalized people assert their own voices. This session will share insights from combining outcome evaluation with Photovoice in reentry programs in two contexts (Denver, Colorado, and Cape Town, South Africa). Attendees will consider how they might use Photovoice as part of evaluations to ensure that marginalized people’s experiences are given primacy.


K2: Evaluating a Collaborative Initiative in the Hawaiian Context
Debbie Gowensmith, Vice President, Groundswell Services and Doctoral Student in the Research Methods and Statistics Program, University of Denver

How is evaluation of a collaborative effort--multiple actors spanning boundaries to effect change--different from evaluation of a single-actor initiative? Collaboration goes above and beyond simple coordination. The partners need a common agenda, and the activities in which they engage should add value to the collective effort (Kania and Kramer, 2011). Research indicates that successful collaboration requires high-quality processes: partners must perceive that processes are fair and effective; otherwise, they become apathetic, stop contributing, and eventually drop out altogether (Hicks, Larson, Nelson, Olds & Johnston, 2008). Effective collective impact evaluation should touch upon four elements (Plastrik, Taylor & Cleveland, 2014):
• How people feel about the network. Is it adding value to the work they are trying to accomplish, and is it doing so equitably, fairly, and authentically?
• The amount and degree of connectivity between partners. Which partners are connecting, and to what degree is the collective connecting outside formal network events?
• Accomplishments as a collaborative. In what stages are collaborative activities from the agreed-upon strategies and activities being implemented? What is the collective accomplishing together, both as a unit and through the site-to-site collaborations the partners generate?
• Accomplishments at partner sites. In what stages are site-specific activities from the agreed-upon strategies and activities being implemented? How does the collective influence what happens at the site-based level? Are the changes that the partners desire happening? Are conditions in communities improving?
In addition, evaluation must be responsive to the cultural context in which it occurs.
I will share my work assisting Kua’aina Ulu ‘Auamo, an organization that facilitates multiple networks of grassroots groups practicing cultural and community-based natural resources management in Hawaii, to develop a theory of change, an evaluation plan, and measures to assess its work and apply lessons learned. I will also share the culturally aware, community-based process I used to help one of the networks, E Alu Pu, develop shared measures. We used a participatory approach to empower those involved and build their capacity for evaluation, and to encourage groups to participate in sharing their results.



FRIDAY, SEPT 28 continued

Session L KRH 302
Teaching Session
Using Excel and Access to Systematically Restructure Data: A Collaborative Teaching Session on Preparing Data for Longitudinal Analysis
Margaret Schultz Patel, Principal, Schultz Patel Evaluation and Doctoral Student in the Research Methods and Statistics Program, University of Denver

This teaching session will contain practical tips for evaluators to use when handling longitudinal data. The toughest part about using longitudinal data is not the analysis, it’s the cleaning and restructuring! This 45-minute teaching session will focus on getting your data ready for analysis. Sharing hard-won knowledge earned in nearly ten years on the job, the presenter will offer tips for restructuring longitudinal data that go beyond SPSS’ restructuring wizard. Attendees should expect to leave the session with a few tips on how to use Microsoft Access and Excel to systematically restructure data. The session will be structured as follows: a brief discussion, followed by a demonstration, time to practice and ask questions, and finally a sharing of resources.

Session M KRH 105
Problem Solving Roundtable
Evaluation of Policy, System and Environment Changes Implemented In Early Childhood Education Settings
Jamie Powers, Senior Health Promotion Coordinator, Culture of Wellness in Preschools, University of Colorado and Doctoral Student in the Research Methods and Statistics Program, University of Denver

The Culture of Wellness in Preschools (COWP) program aims to increase the physical activity and fruit and vegetable consumption of preschoolers through five components implemented in early childcare education (ECE) centers. One component focuses on implementation of evidence-based policy, system, and environment (PSE) changes in ECE centers. University researchers help centers form a wellness team consisting of interdisciplinary ECE staff.
The COWP PSE process is derived from intervention mapping and community-based participatory research methods, and includes assessing strengths and needs, rating the importance and feasibility of PSE changes, and action planning. The COWP PSE process is evidence-based and has resulted in the implementation of an average of five PSE changes per ECE center. To assess the sustainability of PSE changes, wellness team members complete an outcome evaluation survey 1 month, 6 months, and 1 year after completion of the PSE process. The team records the status of implementation for each PSE change, along with additional notes on their plans to ensure full implementation. The PSE process has been implemented in rural and urban ECE centers throughout the state of Colorado, and outcome data are compared across counties.


COWP researchers are holding this problem-solving roundtable to seek feedback on how to improve objective evaluation of the implementation and sustainability of PSE changes in ECE centers. COWP researchers have had challenges collecting valid PSE data due to school closures, subjective discrepancies in the reported status of PSE changes, and slow response times. One attempt to capture more objective PSE sustainability data was to use observational data collected by COWP staff on ECE playgrounds and compare PSE changes observed during outdoor play with the physical activity-related PSE changes that the wellness team reported as fully implemented. Discrepancies between the observational data and the subjective reports were apparent.

Lunch 11:45-1:00 Location: Outdoor Classroom or KRH Commons

Workforce Panel 1:00-2:30 in KRH Commons
Part I: Evaluation Workforce Needs: Perspectives from Different Sectors
Cindy Eby, Chief Executive Officer, ResultsLab
Kenzie Strong, Senior Consultant, ResultsLab

In this session we will hear from different sectors about common uses of evaluation, envisioned needs for evaluation, and challenges experienced in utilizing evaluation. The panel will be composed of representatives from higher education, government, business, philanthropy, and nonprofits. Evaluation knowledge and skills are needed broadly across sectors to understand our work, ask the right questions, capture data, and make meaning of it. Various sectors need new and experienced professionals to engage in evaluative processes. Join us to hear perspectives from higher education, business, government, philanthropy, and nonprofits about what evaluation skills they need to support their work. Our panel members will provide insights into their needs and ideas about how to build a ready workforce to meet the demand.

Panel Objectives:
1. Understand how evaluation is used in different sectors
2.
Understand the evaluation skill and knowledge needs in different sectors
3. Understand challenges to using evaluation and evaluative processes in different sectors

Part II: Developing Future Evaluators: A Human Centered Design Session
Cindy Eby, Chief Executive Officer, ResultsLab
Kenzie Strong, Senior Consultant, ResultsLab

Following the panel session, and utilizing themes and ideas presented by the panel, we will facilitate a second session to develop workforce development solutions. This Design Session will encompass a rapid design process to develop prototypes for evaluation workforce development. Using human-centered design techniques, participants will gather into teams, engage in a rapid design cycle to solve evaluation workforce development challenges, and present those designs in a rapid pitch back to panel members. Panel members will identify the most viable solution to evaluation workforce development challenges at the close of the session.

Design Session Objectives:
1. Provide an opportunity for innovative thinking about evaluation workforce development
2. Begin development of viable solutions to workforce challenges
3. Create energy and buzz to solve the workforce challenge beyond the session



Coffee Break First Floor Fountain Lobby 2:30-3:00

Breakout Sessions V 3:00-3:45

Session N KRH 409
Presentations
N1: Experimental Analysis on a Short Timeline: Evaluating the Effect of the Child Support Call-a-Thon
Michael Martinez-Schiferl, Research & Evaluation Supervisor, Colorado Department of Human Services
Michelle Rove, Performance and Strategic Analyst, Colorado Department of Human Services

Those who attend this presentation will learn about the Colorado Department of Human Services’ (CDHS) efforts to evaluate, through experimental analysis and on a short timeline, the effects of an intervention to reduce the number of persons not paying any child support. The presenters will also briefly explain some typical pressures and constraints of evaluating the effects of interventions like this within a State government agency.

Description: In State Fiscal Year 2017-18, CDHS set a goal to “Improve the economic security of families by reducing the Percent of Child Support Cases with no Payment from 29.5% to 28.0%, which represents a 5% reduction from baseline levels, by June 30, 2018.” CDHS staff identified several interventions intended to improve the State’s performance on this goal. One such intervention was termed the Call-a-Thon. Through the Call-a-Thon, State child support staff provided direct assistance to county staff in Colorado’s Large 10 counties, helping them make direct calls to zero payers from the month prior. CDHS research and evaluation staff were tasked with designing an evaluation that would measure the effect of this Call-a-Thon intervention. CDHS leadership needed the results of the evaluation within a short time frame, approximately 6 months from design to results. Michelle Rove and Michael Martinez-Schiferl will present on CDHS’s work to evaluate the effect of the Call-a-Thon through experimental analysis.
In brief, the anticipated effect size of the intervention was large, so the design sought to measure the short-term effects of the Call-a-Thon by randomizing the Large 10 counties into treatment and control groups and using a difference-in-differences approach to estimate the effect. The results indicated that the short-term effect size of the Call-a-Thon was relatively small and not statistically significant. The bigger success was the fact that State staff sought to evaluate the effect of this intervention in the first place. The team also learned some useful lessons about applying difference-in-differences to such a small sample.
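For readers unfamiliar with the design described above, the core difference-in-differences calculation can be sketched in a few lines. This is an illustrative example only; the county labels and payment rates below are invented for the sketch, not CDHS data.

```python
# Hypothetical illustration of the difference-in-differences logic:
# compare the pre-to-post change in treated counties against the
# pre-to-post change in control counties.

def did_estimate(pre, post, treated):
    """Difference-in-differences on county-level outcome rates.

    pre, post: dicts mapping county -> % of cases with no payment.
    treated:   set of counties randomized to the intervention.
    """
    def mean(values):
        values = list(values)
        return sum(values) / len(values)

    treated_change = mean(post[c] - pre[c] for c in pre if c in treated)
    control_change = mean(post[c] - pre[c] for c in pre if c not in treated)
    return treated_change - control_change

# Invented rates for four counties, before and after the intervention.
pre  = {"A": 30.0, "B": 29.0, "C": 31.0, "D": 30.5}
post = {"A": 28.5, "B": 28.0, "C": 30.8, "D": 30.1}
treated = {"A", "B"}  # counties randomized to receive the intervention

effect = did_estimate(pre, post, treated)
```

Here the control-group change stands in for what would have happened to the treated counties without the intervention, so `effect` is the change attributable to treatment; with only a handful of counties per arm, though, the estimate is noisy, which is the small-sample lesson the presenters mention.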


N2: Predictive Factors for Fall Risk in the Veterans Community Living Centers
William Schumann, Performance Analyst, Colorado Department of Human Services
Michael Martinez-Schiferl, Research & Evaluation Supervisor, Colorado Department of Human Services

Those who attend this presentation will learn about the Colorado Department of Human Services’ (CDHS) efforts to apply predictive analytics to improving outcomes for our clients. They will also learn about our research internship program. Across the nation, half of all nursing home residents fall each year, and a quarter of all residents fall multiple times. Falls are a serious risk for our elderly population, as they lead to increased medical costs, loss of independence, emotional stress, injury, more falls, and potentially death. Preventing falls is crucial to ensuring residents maintain a high quality of life. Fall prevention and intervention is a complex and nuanced problem with many individualized factors to consider, which makes a multifactorial approach to analysis ideal. Willy Schumann and Michael Martinez-Schiferl will present on CDHS’s work to identify risk factors that lead to falls for residents of the state-run Veterans Community Living Centers (VCLC) and on the current context of VCLC practice for identifying fall risk and preventing falls. They will discuss the process by which the fall risk model was developed using secondary data and logistic regression and share the model results. They will also discuss plans for how the model will be piloted to evaluate its effectiveness, and briefly describe the State Human Services Applied Research Practicum (SHARP) Fellowship, a research internship program at CDHS through which the predictive model was developed.
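To illustrate how a fitted logistic regression model of the kind described above turns individual risk factors into a fall-risk probability, here is a minimal sketch. The predictors and coefficients are hypothetical, invented purely for illustration; they are not the CDHS/VCLC model or its results.

```python
import math

# Hypothetical fitted coefficients for a fall-risk logistic regression.
# Predictor names and values are invented for illustration.
COEFFS = {
    "intercept":    -2.0,
    "prior_falls":   0.9,  # number of falls in the previous year
    "mobility_aid":  0.6,  # 1 if the resident uses a walker or wheelchair
    "sedative_use":  0.5,  # 1 if prescribed sedative medication
}

def fall_risk(prior_falls, mobility_aid, sedative_use):
    """Return the model's predicted probability of a fall."""
    z = (COEFFS["intercept"]
         + COEFFS["prior_falls"] * prior_falls
         + COEFFS["mobility_aid"] * mobility_aid
         + COEFFS["sedative_use"] * sedative_use)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

low  = fall_risk(prior_falls=0, mobility_aid=0, sedative_use=0)
high = fall_risk(prior_falls=2, mobility_aid=1, sedative_use=1)
```

A model like this supports exactly the piloting step the presenters describe: residents whose predicted probability exceeds a chosen threshold can be flagged for preventive intervention, and the threshold tuned against observed outcomes.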
Session O KRH 302
Teaching Session
Small, But Mighty: Helping Smaller Nonprofits Evaluate Their Impact
Anne Marie Runnels, Evaluation Advisor with Research Evaluation Consulting

Small nonprofit organizations often struggle with evaluating their programs and assessing impact (Morariu et al., 2016). This can be due to multiple factors, including a limited budget, limited staff capacity, lack of knowledge regarding the benefits of evaluation, and time constraints (Morariu et al., 2016). As a result, many small nonprofits choose not to pursue evaluation or do so in a limited capacity. These organizations desperately need outputs and outcomes to demonstrate their impact, but often lack evaluation capacity. This creates a vicious cycle: nonprofits lack funding because they lack evidence, but they cannot obtain evidence without first building capacity. Often, these difficulties stem from a lack of clarity regarding which type of evaluation is most appropriate for the organization at a given time (Centers for Disease Control, 2017). Furthermore, nonprofits are attracted to so-called impact evaluation but may not have laid the groundwork for this type of endeavor (Harding, 2014). The purpose of this interactive teaching session is to review some of the hurdles faced by small nonprofits as they pursue evaluation, discuss how to build evaluation momentum and capacity, learn how to determine whether projects are feasible, help nonprofits take initial steps forward, and apply these strategies to real-world scenarios involving existing small nonprofit organizations. Attendees will gain practical ideas for conducting affordable, yet effective evaluations with small nonprofit organizations.

Attendees will learn:
1. Some of the current realities of the small nonprofit sector which affect program evaluation
2.
How to communicate the difference between various types of evaluations (e.g., formative, outcome, impact) and be able to discern which ones are appropriate for given situations
3. Best practices for guiding smaller nonprofits in developing evaluation capacity
4. How to develop evaluation strategies that fit real-world scenarios



Session P KRH 105
Problem Solving Roundtable
Size Matters: What Approaches Are Useful With Small Sample Sizes?
Debbie Gowensmith, Vice President, Groundswell Services and Doctoral Student in the Research Methods and Statistics Program, University of Denver

Most of the organizations I assist with evaluation want evaluations that result in quantitative data with statistical analyses, mostly to please and appease their funders. However, most of the organizations I assist are also small, community-based organizations that yield small sample sizes that cannot meet the assumptions--normality, no multicollinearity, linearity, etc.--required by most statistical approaches. For this problem-solving roundtable, I will present a couple of examples of small organizations with which I work. I will then facilitate a discussion with the participants around the table about possible approaches--quantitative and otherwise--that evaluators can consider when working with small sample sizes. The goal is for everyone participating to walk away with new ideas from other attendees that meet the need for rigor in evaluation when working with small sample sizes.

Conference Closing KRH Commons 3:45-4:00
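One family of approaches likely to come up in a discussion like Session P's is exact or permutation-based inference, which sidesteps the distributional assumptions listed above by enumerating every possible reassignment of the observed data. As a hedged illustration (the group sizes and scores below are invented), an exact permutation test on a difference in group means:

```python
import itertools

# Illustrative exact permutation test for tiny samples: enumerate every
# way of splitting the pooled data into two groups of the observed sizes,
# and ask how often the mean difference is at least as extreme as observed.

def exact_permutation_test(group_a, group_b):
    """Two-sided exact permutation test for a difference in means."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:  # count ties as extreme
            count += 1
        total += 1
    return count / total  # exact two-sided p-value

# Invented scores for two very small groups (n = 3 each).
p = exact_permutation_test([12, 15, 14], [9, 10, 11])
```

Because the p-value comes from the full enumeration rather than a normal approximation, it remains valid at sample sizes far below what t-tests comfortably handle, though enumeration grows quickly and larger samples call for random permutations instead.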


EVALUATION AT MORGRIDGE

Our M.A. and Ph.D. programs offer dedicated courses focused on evaluation practice, applied experiences with local and national evaluation practitioners, and advanced study in evaluation theory and contemporary issues facing the field.
• Learn how to apply sophisticated quantitative, qualitative, and mixed methods approaches.
• Develop both technical and soft skills while working on projects that serve community partners.
• Conduct, present, and publish research under dedicated faculty mentorship.

Learn More at: http://portfolio.du.edu/evaluation



PRESENTER BIOS

Sarah Blackwell
Sarah Blackwell is a Panel Survey Methodologist with the Center for Health and Environmental Data at the Colorado Department of Public Health and Environment. Since assuming this role in January 2017, Sarah has been tasked with integrating innovative web methods into ongoing state public health surveillance and evaluation. As part of this effort, Sarah has collaborated with partners to design and launch Health eMoms, an online, longitudinal, maternal and child health data collection system that surveys recent mothers every 6-8 months from shortly after birth up until their child’s third birthday. Sarah was formerly a maternal and child health epidemiologist, Pregnancy Risk Assessment Monitoring System project director, and CDC/CSTE Applied Epidemiology Fellow with the Wisconsin Department of Health Services from 2012-2017. Sarah received an MPH in Epidemiology from the Tulane School of Public Health and Tropical Medicine in 2012 and received her BA in Anthropology from the Tulane University School of Liberal Arts in 2009.

Yen Chau
Dr. Yen Chau is a Senior Learning & Evaluation Officer at the Colorado Health Foundation. Since joining the Foundation in 2014, Yen has made it her personal mission to advance the Foundation’s learning. She spends her days asking questions about how we can iterate and improve our work to help us truly understand the impact we make on the health of Colorado communities. One of the highlights of Yen’s work at the Foundation is getting community partners excited about data and helping them understand that data is just a small part of the story to be shared. Yen earned a bachelor’s degree in psychology from the University of Houston and holds a doctoral degree in Human Development and Family Studies/Demography from Pennsylvania State University.

Cindy Eby
Cindy Eby is the founder of ResultsLab, a Denver-based firm that focuses on improving program quality, evidence-based practice, and organizational effectiveness for Colorado nonprofits. She has over 20 years of experience at the international, national, and local levels in research, evaluation, evidence-based practice, program management, performance management, and strategic planning. She has held leadership roles in national and local nonprofits focused on implementation of evidence-based programming and increasing the quality of program practice and outcomes for children and youth. Cindy launched ResultsLab in 2015, based on a deep desire to support service-providing organizations in reaching impact. Having led multiple evidence-building initiatives and the implementation of a gold-star evidence-based program, she has deep experience in how data can support program effectiveness and transform organizational outcomes. ResultsLab works with organizations to build internal systems and capacity to maximize program impact for the highest return on investment.

Charlotte Farewell
Charlotte Farewell, M.P.H., Ph.D. candidate, is a Senior Professional Research Assistant with the Rocky Mountain Research Center at the University of Colorado Anschutz. Charlotte serves as a program manager and supervisor for various grant-funded, community-based participatory research projects, and oversees the policy, system, and environment component of the Culture of Wellness in Preschools program. She also supports various research and evaluation activities. She received her MPH in Social and Behavioral Sciences from the Tulane School of Public Health and Tropical Medicine in 2012, and is currently in her fifth and final year of a doctoral program in Health and Behavioral Sciences at the University of Colorado Denver. Her research interests include maternal stress, risk and resiliency, and early childhood obesity.
In her free time, Charlotte enjoys hiking and trail running, yoga, and traveling!


Christine Garver-Apgar
Dr. Christine Garver-Apgar is an Assistant Professor in the Department of Psychiatry at the University of Colorado Denver School of Medicine, where she also serves as the Research and Evaluation Director of the Behavioral Health and Wellness Program (BHWP). She has over 15 years of experience in behavioral science research, including social cognition, relationship dynamics, behavioral endocrinology, biometrical and psychiatric genetics related to healthy development, substance use research including tobacco cessation, chronic disease prevention, and health disparities research. Dr. Garver-Apgar brings her background in basic science to BHWP, where for the past 5 years she has developed a range of experience in more applied settings. She is currently a co-investigator and/or program evaluator on multiple grants, all of which seek to facilitate healthy behavior change among patients, providers, and organizations through curriculum development, training, program implementation, and systems change. Dr. Garver-Apgar’s primary goal in her evaluation work is to strike a balance between acknowledging the capacity limitations of the organizations and systems she works within and encouraging funders and stakeholders to adopt appropriately rigorous methodology for evaluating programs and initiatives.

Lauren Gase
Lauren Gase is a Senior Researcher at Spark Policy Institute. She has expertise in quantitative, qualitative, and mixed-methods research and evaluation, having led a number of studies at the national and local levels to improve health, educational, and social outcomes. She enjoys using approaches, methods, and data analysis techniques that meaningfully facilitate stakeholder engagement, influence policy and program decision making, and foster organizational improvement. She holds a Master’s in Public Health from Emory University and a Ph.D. in Health Policy from the University of California-Los Angeles.

Debbie Gowensmith
Debbie Gowensmith is a third-year doctoral student in Research Methods and Statistics at the University of Denver, with broad interest in the role research and evaluation can play in social equity movements. Debbie also is vice president of Groundswell Services, Inc., providing evaluation and research consultation to nonprofit organizations. She specializes in culturally responsive, participatory evaluation methods as well as evaluation of collaborative networks.

Neil Gowensmith
Neil Gowensmith, Ph.D., is an associate professor in the Graduate School of Professional Psychology at the University of Denver. His specialty is forensic psychology, in which he helps the criminal justice and mental health systems make sound decisions for individuals with criminal histories. His research emphasizes real-world, practical applicability to improve forensic mental health evaluations and systems of care. He is also passionate about social justice and finding ways to reduce discrimination and stigma for people with concurrent mental health issues and legal encumbrances.

Elena Harman
Dr. Elena Harman is the CEO of Vantage Evaluation. Elena specializes in helping purpose-driven organizations implement meaningful evaluation strategies that generate actionable information. She has dedicated her life to Colorado and to evaluation as a means to improve the lives of state residents. She brings deep expertise in Colorado’s systems, nonprofits, and foundations, and in how to engage diverse audiences in a productive conversation about evaluation. Elena holds a bachelor’s degree in brain and cognitive sciences, with a minor in political science, from M.I.T. and both a master’s and Ph.D. in evaluation and applied research methods from Claremont Graduate University. She currently serves as the president-elect of the Colorado Evaluators Network.

Heather Gilmartin
Heather Gilmartin, Ph.D., N.P., is an investigator and nurse scientist with the Seattle-Denver Center of Innovation for Veteran-Centered and Value-Driven Care, VA Eastern Colorado Healthcare System. She is an assistant professor at the University of Colorado School of Public Health and adjunct faculty at the University of Colorado School of Nursing. Her research focuses on understanding and optimizing the culture of healthcare to enhance patient safety and facilitate organizational learning.


Bonnie Hernandez
Bonnie Hernandez brings 15 years of human-centered, evidence-based program design and evaluation to ResultsLab. Prior to joining the firm, Bonnie developed partnerships and projects for undergraduate engineering programs at the Colorado School of Mines, cultivating unique opportunities for students to support communities and to leave a positive mark on the world. Before the Colorado School of Mines, Bonnie spent over a decade in nonprofit and government organizations, supporting and leading evaluation efforts to assess the effectiveness and efficiency of international and community development programs using a variety of quantitative and qualitative methods. Her work spanned three continents and more than a dozen U.S. states, and contributed to reform efforts in international conflict resolution, global food security, public health emergency response planning, clean water initiatives, climate change adaptation, and refugee resettlement, among others. Early in her career, Bonnie proudly served as a U.S. Peace Corps Volunteer in Paraguay, South America. She holds an undergraduate degree in political science from the University of Arkansas and a graduate degree in global policy from the LBJ School of Public Affairs at the University of Texas at Austin.

Michael Martinez-Schiferl
Michael Martinez-Schiferl is a research and evaluation supervisor at the Colorado Department of Human Services, where he supports and coordinates the Department’s research and evaluation activities and manages a small team that provides a centralized research capacity. Prior to returning home to Colorado, Michael worked as a research associate at the Urban Institute in Washington, DC, where he contributed to the Transfer Income Model 3 (TRIM3) microsimulation project, enhancing TRIM3’s modules on food and nutrition programs, medical assistance, unemployment compensation, and poverty. Michael earned a master’s in public policy from Georgetown University.

Marina McCreight
Marina McCreight, M.P.H., is a health science specialist at the Seattle-Denver Center of Innovation for Veteran-Centered and Value-Driven Care, VA Eastern Colorado, where she serves as a qualitative analyst and implementation coordinator. Her research interests include process improvement, implementation, and evaluation of evidence-based health services interventions.

Laura Meyer
Dr. Laura Meyer is a graduate of the Morgridge College of Education (MCE), with a focus on quantitative research methods. She teaches introductory and advanced statistics at the Graduate School of Professional Psychology and has taught courses in mixed methods at MCE.

Maggie Miller
Maggie Miller is an evaluator, facilitator, and educator with over 25 years of experience in nonprofit agencies and educational institutions. Her clients have included the Colorado Department of Education Dropout Prevention and Student Re-Engagement Unit, the Colorado Refugee Services Program, Emily Griffith Technical College, Impact on Education, and multiple museums in Denver. The mission of Maggie Miller Consulting is to help staff in nonprofit organizations clarify their ideas, bring out the best in their people, and align their actions with their goals so that they may best fulfill their agency’s mission. When not working with clients on program evaluation, Maggie likes to hang out with her husband and friends, talk to her grown son on the phone, do yoga, read books, see movies, get out in nature, and go to protests.

Chelsea Leonard
Dr. Chelsea Leonard is an anthropologist and qualitative methodologist at the Seattle-Denver Center of Innovation for Veteran-Centered and Value-Driven Care, VA Eastern Colorado Healthcare System. Her research interests include testing implementation methods for transitional care interventions, determining innovative and effective ways to measure program success, and understanding the motivations behind various health-related decisions.


Mark Leveling
Mark Leveling is a first-year Research Methods and Statistics doctoral student at the Morgridge College of Education. Recently relocated from Rock Island, Illinois, Mark previously worked in community mental health as a social worker and administrator. His research interests include program evaluation, latent variables, scale development, and all things mental health. Mark spends a fair amount of time in the kitchen and thoroughly enjoys playing the drums and bass guitar.

Jamie Powers
Jamie Powers, M.P.H., doctoral student, is a senior health promotion coordinator with the Culture of Wellness in Preschools (COWP) program in the Rocky Mountain Prevention Research Center at the University of Colorado Anschutz. Jamie leads the workplace wellness component and uses a strategic planning process to implement policy, system, and environment best practices in early childhood programs. She also supports evaluation activities for COWP. Jamie received her master’s in public health from the University of Missouri – Columbia with an emphasis in health promotion and policy. She is currently a first-year student in the Morgridge College of Education’s Research Methods and Statistics Ph.D. program.

Michelle Rove
Michelle Rove is the performance and strategic analyst for the State of Colorado Child Support Services Division. Michelle currently serves as a member of the IV-D Administrators Group, the Enforcement Task Group, and several other state work groups. Prior to her career in child support, Michelle spent six years in healthcare as a business analyst and clinical data analyst, working on process improvement teams to streamline healthcare performance and clinical outcomes. Michelle holds a Bachelor of Science in mathematics and a master’s degree in public health – applied biostatistics from the University of Colorado. In her spare time, Michelle enjoys running, swimming, and spending time with family and friends.

Anne Marie Runnels
Anne Marie Runnels is an evaluation advisor with Research Evaluation Consulting (REC) and also works as a communications coach for nonprofits. She has more than 10 years of nonprofit experience, working as a communications director, research director, and research analyst. She has extensive experience building evaluation capacity from the ground up, particularly in the context of nonprofit evaluation. Anne Marie helps organizations, schools, and nonprofits measure their effectiveness and communicate these outcomes to their partners and the larger community.

Jean Scandlyn
Dr. Jean Scandlyn is Associate Clinical Professor in the Departments of Health and Behavioral Sciences and Anthropology at the University of Colorado Denver. Dr. Scandlyn is a medical anthropologist and ethnographer who has worked in a variety of communities in the U.S. In addition to research on many aspects of the transition from adolescence to adulthood, her practice has included a variety of evaluation projects that complement her research and teaching. She has managed evaluation for HIV prevention programs under contract to the Colorado Department of Public Health and Environment; conducted community participatory evaluation in preparation for a project to improve physical activity in several communities in Denver and Aurora; led program evaluation at The Spot, an arts-based program serving former and current gang-affiliated youth in Denver; and served as a local ethnographer under contract to a private research firm evaluating a Department of Labor program in Denver.

Maggie Schultz Patel
Maggie Schultz Patel is a professional program evaluator with over eight years of experience evaluating a variety of human service programs with social justice aims. In this work, Maggie has evaluated efforts ranging from workforce development, systems change, and adult education to homelessness, criminal justice, and restorative justice. Past clients have included state and local governments, nonprofits, and foundations. As a trained methodologist, Maggie has expertise in program evaluation methods, research methods, and both quantitative and qualitative analyses. Maggie is the principal of Schultz Patel Evaluation, LLC, a boutique evaluation firm based in Denver.



William Schumann
Willy Schumann works at the Colorado Department of Human Services (CDHS) as a performance analyst for the Office of Community Access and Independence, where he collaborates with the Office to collect, analyze, and interpret data and to suggest strategies for improving the Office’s services. Willy started as a CDHS SHARP Fellowship intern, developing a predictive model that identified the factors contributing to falls within the state-run Veteran Community Living Centers. He is passionate about using data to make more informed policy decisions and to help those who are most vulnerable in our society. Willy received an M.S. in biology from the University of Colorado Denver and decided to use his research and analytical skills to better serve his community.

Annette Shtivelband
Dr. Annette Shtivelband is founder and principal consultant of Research Evaluation Consulting. Dr. Shtivelband has worked with more than 50 different organizations as a researcher, evaluator, and consultant. She works with her clients to systematically, strategically, and thoroughly measure the impact of their organizations. She excels in program evaluation, scale development and validation, training, employee engagement and retention, and organizational change. She received her Ph.D. in applied social psychology with a specialization in occupational health psychology from Colorado State University.

Kate France Smiles
Kate France Smiles, M.P.P., currently serves as Director of Research and Evaluation at Reading Partners. In that role, she manages Reading Partners’ internal and external evaluation portfolio, leads external reporting, and serves as the Student Data Privacy Officer for the organization. She is responsible for ensuring that evaluation and applied research projects conducted by or on behalf of Reading Partners are focused on informing program quality and continuous learning and are aligned with organizational goals. Prior to joining Reading Partners, Kate was a senior researcher at the OMNI Institute and an analyst at the U.S. Government Accountability Office (GAO). While her evaluation career has focused primarily on examining interventions in K-12 education, she has conducted evaluation work in content areas as varied as public diplomacy, contracting, grant accountability, and tax policy. Kate holds a Master of Public Policy (MPP) degree from The George Washington University and a Bachelor of Arts in social psychology from Cornell University.

Kenzie Strong
Kenzie Strong, Senior Consulting Partner, has spent the past 15 years developing, implementing, and evaluating programs at the local, national, and global levels. She has designed and led a wide array of capacity-building initiatives, strengthening the evaluative practice of small- and large-scale nonprofits across 20 countries. In her most recent role as Senior Director, Evaluation and Learning at Mile High United Way, Kenzie developed an intensive capacity-building initiative called Impact United. Through this effort, she has designed and continues to implement individual and cohort-based training and coaching for a wide array of nonprofit partners. Working with over 70 nonprofit organizations in the Denver metro area, she has had the opportunity to work in depth with youth-serving and mentoring organizations, including Florence Crittenton Services, Denver Urban Scholars, Girls Inc. of Metro Denver, Goodwill Industries, and Big Brothers Big Sisters. She currently serves on the board of Denver Urban Scholars, leading the Program Committee. She holds a master’s degree in public policy from Duke University and a bachelor’s degree in finance from the University of Colorado Boulder. Kenzie was recognized as a Denver Top 40 Under 40 awardee in 2017.


Laura Sundstrom
Laura Sundstrom, M.S.W., is an evaluator with Vantage Evaluation. Laura specializes in building evaluation capacity among Colorado’s nonprofits. She excels at helping individuals new to evaluation understand the “why” behind evaluation tasks and learn the skills they need to implement evaluations well. Laura holds a bachelor’s degree in women’s studies from Beloit College and a Master of Social Work from the University of Michigan. She currently serves as the program committee chair of the Colorado Evaluators Network.

Laureen Trainer
Laureen Trainer, M.S., is a Denver-based consultant with 17 years of experience in the museum and informal learning field. Ms. Trainer started Trainer Evaluation in 2012 to build evaluation capacity in museums, libraries, and nonprofits, and worked as an internal evaluator at the Denver Museum of Nature and Science for six years. Prior to moving to Colorado, Ms. Trainer was the director of education at the Autry National Center in Los Angeles and the acting curator of education at the Jane Voorhees Zimmerli Art Museum at Rutgers University. She received her master’s in museum studies from the University of Colorado Boulder in 2007 and her master’s in art history from the University of Arizona in 1997.

Valerie L. Williams
Dr. Valerie L. Williams is Senior Program Evaluator at the University Corporation for Atmospheric Research (UCAR) in Boulder, Colorado, where she works as an internal evaluator with a number of UCAR Community Programs. In this role, she provides evaluation expertise to postdoctoral research training programs, K-12 environmental education programs, and informal science learning projects. Prior to joining UCAR, Dr. Williams worked at the RAND Corporation in Arlington, Virginia, where she served as an external evaluator on government strategic planning and evaluation projects, K-12 science education projects, and public health projects. She received her doctorate in chemistry from New York University and her bachelor’s degree in chemistry from Emory University.

Kristin Yeager
Kristin Yeager, M.A., is a graduate of the master’s in forensic psychology program at DU. She works in crisis and emergency mental health at local hospital emergency rooms and has day-to-day experience working with individuals in severe mental health crises. She is keenly interested in issues regarding race, mental illness, and the legal system. Kristin has served as a critical member of this research project through the formulation of variables, collection of data, and interpretation of results.



Morgridge College of Education
Katherine A. Ruffatto Hall
1999 E. Evans Ave, Denver, CO 80208
303.871.2509 • mce@du.edu • morgridge.du.edu

