Cultivating Evaluation Capacity
A Guide for Programs Addressing Sexual and Domestic Violence
Nancy Smith and Charity Hope
Edited by Alice Chasan
Vera Institute of Justice

Table of Contents
From the Center Director
Introduction
Section I: Assessing Your Organization's Readiness to Evaluate
  What's Your Evaluation Capacity?
Section II: Enhancing Your Evaluation Capacity
  A. Creating a Culture of Evaluation in Your Organization
  B. Describing Your Program's Plan
  C. Budgeting for Evaluation
  D. Building Your Evaluation Team
  E. Partnering with External Evaluators
  F. Measuring Success in Domestic Violence and Sexual Assault Programs: Challenges and Considerations, by Cris M. Sullivan, PhD
Section III: Building Your Evaluation Knowledge and Skills
Resources
Bibliography

From the Center Director

In recent years, public and private funders have developed a keen sense of the importance of measuring the value of the programs they support. At the same time, many social service providers have started to recognize the benefit of evaluating their programs. Among grantors and grant recipients alike, economic belt-tightening and the resulting need to optimize spending allocations have fostered an appreciation for data-driven, rather than impressionistic or anecdotal, evidence that social service programs deliver what they promise.

As pressures to evaluate have mounted, many social service agencies have jumped into assessments of their programs without first developing their capacity to do so. As many of them discovered, without the underlying infrastructure necessary to support evaluation, their efforts were unsuccessful or, at a minimum, hard to sustain, and emerging research bears out their experience. Common infrastructure shortfalls include insufficient staff training and inadequate budgeting to support evaluation activities.

For programs serving survivors of sexual and/or domestic violence, the challenges to meaningful evaluation are particularly complex. While the majority of social service programs focus on changing clients' behaviors, programs for survivors of sexual and domestic violence serve people affected by others' behavior. Given the nature of the problems these agencies seek to address, there is less clarity than in other fields about how to determine whether programs make a significant difference in the lives of survivors, or even how to safely engage survivors in evaluation efforts.

This guide is designed to help programs serving survivors of sexual and/or domestic violence assess their evaluation capacity and identify areas of strength, as well as areas for improvement. Whether an organization is just starting to grapple with how to determine success for its programs or seeking to reassess its evaluation efforts, it can turn to the principles outlined in these pages to support the work of cultivating a culture of evaluation. We hope this guide will be a valuable resource.

Nancy Smith
Director, Center on Victimization and Safety, Vera Institute of Justice

Introduction

Social service organizations now view evaluation as an underpinning of their success, and for good reason.
By examining its work, an organization can better tailor its outreach, services, and financial supports to the people or geographic regions most in need; determine which program elements work well and which ones do not; and identify gaps in staff training, as well as areas of exemplary staff performance. For programs addressing domestic and sexual violence, evaluation is an essential tool to ensure that all survivors receive vital services and support, and ultimately to end these forms of violence. In addition, the information gathered in preparing for and performing an evaluation can help an organization be better positioned to meet funders' benchmarks and to demonstrate to funders that it is meeting its program goals: essential measures in the current economic climate.

This guide is designed to help organizations addressing domestic and/or sexual violence prepare for meaningful evaluations. You probably already collect some information about your program and use it as the basis for improvements. More than likely, staff members discuss what they think is working well and what challenges they see, and they share ideas on how programs and services could be improved. Evaluation builds from your current information-gathering procedures to create a more systematic and intentional ongoing data-collection and analysis process. At its core, evaluation is the process of answering a set of questions about your programs and services, examining their functioning and effectiveness in comparison to their design. The evaluation, in turn, allows you to report your findings to key stakeholders and to use the information to enhance your programs.

Undeniably, there are costs to evaluating your program, including staff time and financial resources. Some people see these expenditures of time and money as diverting critical assets away from essential services for survivors. However, evaluation is an investment in your program. Ultimately, this investment helps you to:

• Give survivors, staff, and other key stakeholders the opportunity to provide input into programs.
• Remain accountable to survivors, staff, communities, and funders.
• Give staff a deeper understanding of programs through revisiting goals, intended program impact, and underlying theories of change.
• Better understand whether your program goals and objectives are being achieved.
• Discover what works in your program, what doesn't, why, and new ways to improve your program.
• Allocate resources in a more informed way.
• Make more strategic planning decisions, including ways to customize programs and services for emerging populations or those with unmet needs.
• See how people are affected by your efforts and improve your organization's impact.
• Document what you are doing well and report this information to your funders and community supporters.
• Promote your program and seek new funding opportunities.

Because of evaluation's many benefits, policymakers, funders, and practitioners are calling for domestic and sexual violence programs to evaluate their work. Nevertheless, organizations often cannot find the necessary support, whether in the form of funding, training, or technical assistance, to help prepare them for this task. In response to this support gap, a variety of social scientists and practitioners have published self-help resources to guide an organization through the steps of creating, designing, and implementing an evaluation.
But research has shown that an organization needs to have a number of elements in place before it can begin a meaningful evaluation. This guide is designed to help your organization assess its readiness and capacity to take on evaluation activities, with the ultimate goal of integrating sustainable evaluation efforts into your operations.

Section I describes the key factors within your organization that affect its readiness to conduct evaluation activities and provides a tool and process designed to help you better understand and assess your evaluation capacity. Section II provides practical information, tips, and resources on topics essential to enhancing your organization's evaluation capacity, including creating a culture of evaluation; how to ensure that your evaluation is aligned with your program goals; budgeting for evaluation; how to staff your evaluation team; what's involved in working with an external evaluator; and considerations for defining and measuring success. Section III lists resources for additional information and training on evaluation, as well as the guide's bibliography.

You know that the work that you do is important. Programs that address sexual and/or domestic violence can be life-saving, and you see the impact of your programs and services every day. Evaluation can help you demonstrate the importance of this work to the public.

If individuals and organizations are not ready to engage in evaluation, progress is slow and success is unlikely.

Section I: Assessing Your Organization's Readiness to Evaluate

Organizations addressing domestic and/or sexual violence are facing increased pressure to measure their effectiveness, which often compels leaders to jump-start an evaluation when a funder requires it. Many organizations that have made this leap without sufficient preparation find themselves struggling at various points. Conducting an evaluation requires everything from finding funding to articulating goals in measurable ways to ensuring staff participation in the evaluation to using the information culled from the data to inform a program's daily work. Without the necessary groundwork for these challenges, evaluation efforts may fail or become unsustainable, which can sour practitioners toward evaluation. But when an organization has a structure conducive to evaluation, it can avoid these pitfalls and sustain enthusiasm for the process. Enhancing the evaluation capacity of your organization as part of your planning can help to prevent negative outcomes.

Several factors comprise an organization's evaluation capacity:

• organizational culture and practice around evaluation, including the extent to which an organization values evaluation, is willing to be evaluated, and promotes learning and improvement;
• commitment and support, starting at the leadership level, as well as the availability of financial resources; data-collection tools and practices; and the time and opportunity to participate in evaluation;
• staff's prior experience, including their knowledge of basic concepts of evaluation, experience collecting and interpreting data, and the ability to make changes based on findings; and
• articulation of your program's foundational plan, including essential methods for serving your clients, how change is expected to occur, goals, and the resources and activities that contribute to that change.

Many large organizations run a number of programs; others provide services in a single program.
But for any direct-service organization interested in undertaking evaluation activities, an essential first step is to assess its capacity in each of these areas. Does your organization promote learning and reflection as part of its day-to-day practice? Has your organization articulated how your program and services create social and individual change? Can your staff easily articulate important outcomes?

There are several evaluation-capacity assessment tools that organizations can use to assess their current capacity and identify areas for improvement. For example, Informing Change's Evaluation Capacity Diagnostic Tool is designed to help an organization assess its readiness to take on many types of evaluation activities. The results can help your organization develop a plan to enhance its evaluation capacity in the areas where you most need to do so.

What's Your Evaluation Capacity?

Evaluation Capacity Diagnostic Tool

This Evaluation Capacity Diagnostic Tool, created by Informing Change, is designed to help organizations assess their readiness to take on many types of evaluation activities. It captures information on organizational context and the evaluation experience of staff and can be used in various ways. For example, the tool can pinpoint particularly strong areas of capacity as well as areas for improvement, and can also calibrate changes over time in an organization's evaluation capacity. In addition, this diagnostic can encourage staff to brainstorm about how their organization can enhance evaluation capacity by building on existing evaluation experience and skills. Finally, the tool can serve as a precursor to evaluation activities with an external evaluation consultant.

This tool is intended to be completed by the person within your organization who is most familiar with your evaluation efforts. Within small organizations, it is possible that the director or CEO might be the most appropriate person. This tool can be self-administered, but it could also be completed with the assistance of an external evaluation consultant. Ideally, your organization should plan to self-administer the diagnostic and then have a follow-up conversation with an external consultant to determine the areas on which your organization might focus its evaluation capacity-building efforts. The tool can be administered at a single point in time or at multiple points in time to determine changes in evaluation capacity.

Note: Quantifying the dimensions of capacity is very difficult. In addition, self-assessments often indicate a higher level of capacity than actually exists; respondents are not always aware of how much room there is for improvement. For example, an organization might think that it has effective knowledge, systems, and practices in place, but once it learns about other tools or practices, it might realize that its current capacity is not as strong as it originally thought. The results of this exercise should also be interpreted in the context of the organization's scope and stage of development. The tool is designed to be used by organizations to better understand their evaluation capacity and to spur dialogue, reflection, and growth in that area. It is not designed to be used for evaluative purposes.

Instructions: Rate your level of agreement with each statement using the following scale: 4 = Strongly Agree, 3 = Agree, 2 = Disagree, 1 = Strongly Disagree. After each section, add up your total score.

Organizational Context

Organizational Culture & Practice Around Evaluation

1. Our organization sees evaluation as a tool that is integral to our work.
2. Our organization models a willingness to be evaluated by ensuring that evaluations, both their process and findings, are routinely conducted and visible to others within and outside of our organization.

3. Our organization has an effective communication and reporting capability to explain evaluation processes and disseminate findings, both positive and negative, within and outside of our organization.

4. Our organization promotes and facilitates internal staff members' learning and reflection in meaningful ways in evaluation planning, implementation, and discussion of findings ("learning by doing").

5. Our organization values learning, as demonstrated by staff actively asking questions, gathering information, and thinking critically about how to improve their work.

Add up your scores for this section. Total Score: ____

Organizational Context (continued)

Organizational Commitment & Support for Evaluation

6. Key leaders in our organization support evaluation.

7. Our organization has established clear expectations for the evaluation roles of different staff.

8. Our organization ensures that staff have the information and skills that they need for successful participation in evaluation efforts (e.g., access to evaluation resources through websites and professional organizations, relevant training).

9. Our organization allows adequate time and opportunities to collaborate on evaluation activities, including, when possible, being physically together in an environment free from interruptions.

10. Our organization provides financial support (beyond what is allocated for evaluation through specific grants) to integrate evaluation into program activities.

11. Our organization has a budget line item to ensure ongoing evaluation activities.

12. Our organization has existing evaluation data collection tools and practices that we can apply or adapt to subsequent evaluations.

13. Our organization has integrated evaluation processes purposefully into ongoing organizational practices.

Add up your scores for this section. Total Score: ____

Organizational Context (continued)

Using Data to Inform Ongoing Work

14. Our organization modifies its course of action based on evaluation findings (e.g., changes to specific programs or organization-wide changes).

15. Evaluation findings are integrated into decision making when deciding what policy options and strategies to pursue.

16. Managers look to evaluation as one important input to help them improve staff performance and manage for results.

Add up your scores for this section. Total Score: ____

Evaluation Experience of Staff

Existing Evaluation Knowledge & Experience
17. Our organization has staff that have a basic understanding of evaluation (e.g., key evaluation terms, concepts, theories, assumptions).

18. Our organization has staff that are experienced in designing evaluations that take into account available resources, feasibility issues (e.g., access to and quality of data, timing of data collection), and information needs of different evaluation stakeholders.

19. Our organization can identify which data collection methods are most appropriate for different outcome areas (e.g., changes in norms require determining what people think about particular issues, so surveys, focus groups, and interviews are appropriate).

20. Our organization has staff with experience developing data collection tools and collecting data utilizing a variety of strategies, such as focus group sessions, interviews, surveys, observations, and document reviews.

21. Our organization has staff that know how to analyze data and interpret what the data mean.

22. Our organization has staff that are knowledgeable about and/or experienced at developing recommendations based on evaluation findings.

Add up your scores for this section. Total Score: ____

Evaluation Experience of Staff (continued)

Developing a Conceptual Model for the Policy Process / Designing Evaluation

23. Our organization has articulated how we expect change to occur and how we expect specific activities to contribute to this change.

24. Our organization has clarity about what we want to accomplish in the short term (e.g., one to three years) and what success will look like.

25. Our organization has articulated how our policy change goals connect to broader social change.

26. Our organization's evaluation design has the flexibility to adapt to changes in the policy environment and our related work as needed (e.g., benchmarks and indicators can be modified as the project evolves).

27. Our organization has tools and methods for evaluating the unique and dynamic nature of advocacy work.

Add up your scores for this section. Total Score: ____

Evaluation Experience of Staff (continued)

Defining Benchmarks & Indicators

28. Our organization measures outcomes, not just outputs. Outputs are quantifiable activities, services, or events, while outcomes are measurable results or changes a program/organization would like to see take place over time and that stem directly from the intended result of specific strategies (e.g., an output might be the number of legislators attending a briefing event, while an outcome would be the change in the legislators' behavior as a result of attending the event).

29. Our organization can identify outcome indicators that are important and relevant for our work.

30. Our organization has identified what indicators are appropriate for measuring the impact of our work (e.g., Did our work change attitudes or policy?
Did it raise money or increase volunteer hours? Did it result in more children in schools?).

31. Our organization can identify what indicators are appropriate for measuring how we do our work (e.g., Has our organization strengthened its relationships with elected officials?).

32. Since policy goals can take years to achieve, our organization identifies and tracks interim outcomes that can be precursors of policy change, such as new and strengthened partnerships, new donors, greater public support, and more media coverage, that tell us if we are making progress and are on the right track.

Add up your scores for this section. Total Score: ____

Scoring Instructions & Interpretation

Calculating your score: Write your total score for each section in the appropriate row and divide by the number of questions in that section to come up with your sectional score. Then, add up your section totals and divide by 32 to get your overall score. Round your scores to the nearest hundredth (i.e., two decimal places).

Organizational Context
Organizational Culture & Practice Around Evaluation: Score ____ ÷ 5 = Sectional Score ____
Organizational Commitment & Support for Evaluation: Score ____ ÷ 8 = Sectional Score ____
Using Data to Inform Ongoing Work: Score ____ ÷ 3 = Sectional Score ____

Evaluation Experience of Staff
Existing Evaluation Knowledge & Experience: Score ____ ÷ 6 = Sectional Score ____
Developing a Conceptual Model for the Policy Process / Designing Evaluation: Score ____ ÷ 5 = Sectional Score ____
Defining Benchmarks & Indicators: Score ____ ÷ 5 = Sectional Score ____

Overall: Total of all section scores ____ ÷ 32 = Overall Score ____

Interpreting Your Score
1.00–1.51: Need for increased capacity
1.52–2.49: Emerging level of capacity in place
2.50–3.48: Moderate level of capacity in place
3.49–4.00: Significant level of capacity in place

Need for increased capacity: There is low or uneven strength in your organization's evaluation expertise. There may be very limited measurement and tracking of performance, and most of your evaluation is based on anecdotal evidence. While your organization collects some data on program activities and outputs (e.g., number of children served), there are few measurements of social impact (e.g., drop-out rate lowered).

Emerging level of capacity in place: You have the essential elements of evaluation in place, but there is room for improvement. Your performance is partially measured and your progress is partially tracked. While your organization collects solid data on program activities and outputs (e.g., number of children served), it lacks data-driven, externally validated social impact measurement.

Moderate level of capacity in place: Your organization has a very respectable evaluation capacity. You regularly measure your performance and track your progress in multiple ways to consider the social, financial, and organizational impacts of programs and activities. You also use a multiplicity of performance indicators, and while you measure your social impact, an external, third-party evaluation perspective is often missing.

Significant level of capacity in place: Your organization has an exemplary level of organizational evaluation capacity. You have a well-developed, comprehensive, integrated system for measuring your organization's performance and progress on a continual basis, including the social, financial, and organizational impacts of programs and activities. You also focus on a small number of clear, measurable, and meaningful key performance indicators. You strategically use external, third-party experts to measure your social impact.
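If your organization records diagnostic responses electronically, the arithmetic above is easy to automate. The following is a minimal sketch in Python (a hypothetical helper, not part of the Informing Change tool); it assumes the 32 ratings have been entered in questionnaire order and uses the question counts listed above (5, 8, 3, 6, 5, 5).

# Minimal sketch: score the Evaluation Capacity Diagnostic Tool.
# (Hypothetical helper, not part of the Informing Change tool.)

SECTIONS = [
    ("Organizational Culture & Practice Around Evaluation", 5),
    ("Organizational Commitment & Support for Evaluation", 8),
    ("Using Data to Inform Ongoing Work", 3),
    ("Existing Evaluation Knowledge & Experience", 6),
    ("Developing a Conceptual Model / Designing Evaluation", 5),
    ("Defining Benchmarks & Indicators", 5),
]

def score_diagnostic(ratings):
    """ratings: 32 integers (1-4), in questionnaire order."""
    assert len(ratings) == 32 and all(1 <= r <= 4 for r in ratings)
    sectional, start = {}, 0
    for name, count in SECTIONS:
        items = ratings[start:start + count]
        sectional[name] = round(sum(items) / count, 2)   # sectional score
        start += count
    overall = round(sum(ratings) / 32, 2)                # overall score
    if overall <= 1.51:
        band = "Need for increased capacity"
    elif overall <= 2.49:
        band = "Emerging level of capacity in place"
    elif overall <= 3.48:
        band = "Moderate level of capacity in place"
    else:
        band = "Significant level of capacity in place"
    return sectional, overall, band

The function returns the six sectional scores, the overall score, and the corresponding capacity level, which you can read against the interpretation guide above.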
For More Information

A Checklist for Building Organizational Evaluation Capacity
http://www.wmich.edu/evalctr/archive_checklists/ecb.pdf

Building Capacity in Evaluating Outcomes, A Teaching and Facility Resource for Community-Based Programs and Organizations
http://www.uwex.edu/ces/pdande/evaluation/bceo/pdf/bceoresource.pdf

Evaluation Capacity Assessment Tool, Centers for Disease Control and Prevention
http://library.capacity4health.org/category/topics/monitoring-and-evaluation-me/evaluation-basics/evaluation-capacity-assessment-tool

Assessing Evaluative Capacity, The Bruner Foundation
http://evaluativethinking.org/sub_page.php?page=assesset

State of Evaluation: Evaluation Practice and Capacity in the Nonprofit Sector, Innovation Network, Inc.
http://stateofevaluation.org/

The Readiness for Organizational Learning and Evaluation (ROLE) Instrument, Evaluative Inquiry for Learning Organizations
http://www.fsg.org/Portals/0/Uploads/Documents/ImpactAreas/ROLE_Survey.pdf

The Evaluation Capacity Diagnostic Tool by Informing Change is shared within this publication with their permission and is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license. Informing Change is a woman-owned strategic consulting firm that partners with foundations and nonprofit organizations to improve their effectiveness and inform organizational learning. Our information-based services include evaluation, applied research, and program and organizational strategy development. Our work is guided by our core values of integrity, intelligence, and compassion, and our experience extends across diverse contexts, populations, and content areas, including education, health, youth engagement, leadership, and philanthropy. For more information visit: www.informingchange.com.

Section II: Enhancing Your Evaluation Capacity

Once you have assessed your evaluation capacity, you will likely identify some areas for improvement. For instance, you may learn that there is some apprehension among staff when it comes to evaluation, or that there is no shared understanding of desired objectives. As you work on strengthening your capacity, what are your hopes and expectations about the effect of evaluating your organization? Do you want staff to simply engage in evaluation activities, or would you like them to fully embrace the value of evaluation? Do you want to improve problem solving and decision making in one program, or throughout the organization? Do you want specific staff members to increase their capacity and interest in learning, or do you want this growth to be organization-wide?

Whatever your expectations, here are a few strategies to help you get started in realizing your evaluation vision:

• Identify evaluation champions (those who care about evaluation), and nurture and grow your pool of champions.
• Establish and use a common language around evaluation; make sure it is practitioner-friendly and works within your field and paradigm.
• Communicate consistently and continually about your evaluation efforts, sharing your hopes and expectations.
• Make evaluation an integral part of your programming and services.
• Make your evaluation commitment and expectations explicit: promote your commitment to evaluation in your agency description, annual reports, and grant proposals, and outline your expectations around evaluation in job and program descriptions.

To assist you, this section provides practical information, tips, and resources for enhancing your organization's capacity, as well as basic information on evaluation. It addresses creating a culture of evaluation; describing your program's plan; budgeting for evaluation; building your evaluation team; partnering with an external evaluator; and defining and measuring success.

Evaluation is not just important for accountability: It is essential for innovation.

A. Creating a Culture of Evaluation in Your Organization

Your organization's readiness for evaluation is determined, in large part, by its culture: its norms, values, assumptions, and behaviors. Does your organization view evaluation as expensive and time-consuming? A tool to increase efficiency and save money? Both, or something in between? As you can imagine, your organization's experience with evaluation will vary significantly depending on how you answer that question. Organizations that value evaluation and learning, and express those values throughout their operations, are more likely to evaluate successfully than those that do not.

An organizational culture that is conducive to evaluation is characterized by:

• strong leadership that nurtures evaluation as a priority and is committed to creating opportunities to build evaluation capacity;
• evaluation champions: staff who approach evaluation with interest, enthusiasm, determination, and caring;
• a commitment to continuous improvement;
• a spirit of inquiry that encourages staff to ask questions about what's happening in the organization and to seek answers;
• an openness and willingness to discuss what's not working and why;
• a positive, non-judgmental environment where mistakes are reframed as learning opportunities; and
• visible indicators of the ways in which the organization, its staff, and the people it serves benefit from evaluation.

Creating an organizational culture that supports evaluation does not happen overnight. It is a process that requires commitment, intentionality, and patience. Inevitably, your organization's culture will affect its experience with evaluation. Taking a look at the perception of and practice around evaluation is an essential first step in assessing your evaluation capacity (see the Evaluation Capacity Diagnostic Tool in Section I). Once you've taken that first step, a few suggested activities can spur your organization's culture-change process:

• Host a kick-off meeting with staff to help set a positive tone and to ensure that everyone is apprised of your evaluation project.
• Promote learning and reflection among staff by encouraging them to share their successes and struggles without penalty; to identify new strategies to prevent mistakes from recurring; and to carry forward the lessons learned in their work.
• Engage staff in discussions on how they have benefited from collecting and analyzing data in their work, where they see opportunities to use data in the future to strengthen their work, and the type of data collection and analysis they would need to make improvements.
• Identify staff who express an interest in evaluation and using data to inform their work.
• Actively involve staff in the design, implementation, and use of your evaluation.
• Model the use of data in day-to-day decision making. For example, when sharing important decisions, also share how you used data to inform those decisions.
• Communicate about your evaluation activities in various ways. For example, you can hold celebratory events at key points during the evaluation, provide evaluation updates during staff meetings, and include information on your evaluation activities in your newsletters.

For More Information

Building an Evaluative Culture for Effective Evaluation and Results Management, Institutional Learning and Change Initiative
http://www.cgiar-ilac.org/files/publications/working_papers/ILAC_WorkingPaper_No8_EvaluativeCulture_Mayne.pdf

Integrating Evaluative Capacity into Organizational Practice, The Bruner Foundation
http://www.evaluativethinking.org/docs/Integ_Eval_Capacity_Final.pdf

An Evaluation Culture, Research Methods Knowledge Base
http://www.socialresearchmethods.net/kb/evalcult.php

Section II: Enhancing Your Evaluation Capacity

B. Describing Your Program's Plan

Domestic and sexual violence services were developed, and continue to evolve, to respond to the pressing need to help survivors with emergent safety issues and their experiences of trauma. These programs are responsible not only for providing services that respond to immediate health and safety crises, but also for supporting survivors through a difficult time in their lives and, ultimately, for addressing the larger social issues of domestic and sexual violence. Most of these organizations, born out of necessity and built with a "do whatever it takes" attitude, have grown into a complex array of programs and services incorporating everyday lessons learned and emerging best practices.

While you and your colleagues may be operating on the assumption that everyone is working from the same playbook and with similar goals, it's crucial when assessing your readiness to evaluate your program to determine if that's true. Unless your program's plan, including its target population, goals, objectives, methods, and desired outcomes, is captured in writing and shared throughout the organization, there can be fuzziness about what's happening and why, and confusion about where you're headed. Clarifying your goals will help you identify elements of your program that will be useful in measuring your program's progress and success.

A written program plan is your roadmap to change. It describes the problem your program is addressing, the resources you are investing, what you are doing to address the problem, and the short- and long-term changes or outcomes you expect from your efforts. Most program planners and evaluators use what's known as a logic model to develop and graphically depict a program's plan and development. Having a logic model or some other means of describing your program's plan is essential for your evaluation efforts to be successful.

What is a Logic Model?

A logic model is a visual representation of the relationship between a given set of activities and the outcomes (or change) expected as a result of those activities. There are several different commonly used templates for logic models. But typically, as is the case with the W.K. Kellogg Foundation logic model in Figure 1, they have five components: (1) resources (also called inputs), (2) program activities, (3) outputs, (4) short- and long-term outcomes, and (5) impacts.
1. Resources: Resources include the human, financial, organizational, and community resources your program has available to direct toward the work of the program. Examples include trained staff, facilities, or equipment.

2. Program activities: Program activities are what the program does with the resources. Activities are the processes, tools, events, technology, and actions that are an intentional part of your program implementation. These also may be referred to as the interventions used to bring about the intended program changes or results. Examples include counseling, a crisis line, or shelter services.

3. Outputs: Outputs are the direct products of your program activities and may include deliverables, products, and the services to be delivered by your program. Examples include the number of shelter nights or hours of counseling provided.

4. Short- and long-term outcomes: Outcomes are the specific changes in your program participants' behavior, knowledge, skills, status, and level of functioning. You can also think of outcomes as the goals of your program. What does your program hope to achieve? Short-term outcomes focus on a one-to-three-year period, while long-term outcomes focus on a four-to-six-year period. Examples include survivors experiencing decreased isolation or identifying strategies for enhancing their safety.

5. Impact: Impact is the fundamental intended or unintended change occurring in organizations, communities, or systems as a result of program activities within seven to 10 years. The best measure of impact may occur after the program's completion.

Why Should You Develop a Logic Model?

• A logic model clarifies what your program hopes to achieve and documents its intended purpose.
• A logic model illustrates the rationale of your program.
• A logic model can help you design a meaningful program evaluation.
• A logic model can be used to monitor program activities.

Logic models are also very useful tools to demonstrate to potential funders that your program is based on sound reasoning, because they clarify the links between various aspects of your program. For example, one of the goals of a residential domestic violence program may be to ensure the safety of survivors fleeing from abuse. A logic model would demonstrate how, exactly, the program accomplishes that goal or outcome by articulating the program's activities (emergency hotline, 24/7 shelter, crisis intervention, support services, etc.) and the resources the program invests to support those activities (staff time, facilities, equipment, policies, etc.).
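To make the five components concrete, the sketch below captures the residential program example above as a small Python data structure; a simple table or worksheet works just as well. Every entry is hypothetical and illustrative, not prescribed by this guide.

# Hypothetical logic model for the residential domestic violence program
# described above; every entry is illustrative only.
logic_model = {
    "resources":  ["trained advocates", "24/7 shelter facility", "hotline equipment",
                   "confidentiality policies", "state and foundation funding"],
    "activities": ["operate emergency hotline", "provide 24/7 shelter",
                   "crisis intervention", "safety planning and support services"],
    "outputs":    ["number of hotline calls answered", "number of shelter nights provided",
                   "number of safety plans completed"],
    "outcomes": {
        "short_term (1-3 years)": ["survivors report decreased isolation",
                                   "survivors identify strategies for enhancing their safety"],
        "long_term (4-6 years)":  ["survivors maintain stable, safe housing"],
    },
    "impact (7-10 years)": ["reduced rates of domestic violence in the community"],
}

# Reading the model backwards (impact -> outcomes -> outputs -> activities -> resources)
# is one way to double-check that each component logically supports the next.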
Developing Your Logic Model

Determine whom you should involve. Involve a diverse range of stakeholders in your process. This will give you unique perspectives from different constituents, including survivors, front-line staff, board members, funders, and volunteers. People affected by the problem you are addressing and by the program you are providing should be involved in your evaluation process, so why not involve them from the beginning?

Determine the scope and the period of time you're assessing with your logic model. Think about how your program has evolved. Over time, programs, goals, and expectations change. As you create a logic model, reflect upon how your program has changed since its initial conception and implementation.

Decide what template to use. There are many templates and tools, as well as detailed instructions, for creating logic models. Logic model templates provide you with a structure for organizing your program's information. Most templates contain the same core elements but use different language to describe them. Choose a template that makes the most sense for your organization.

Work backwards! Start your logic model by discussing the ultimate goal of your program, or the need it addresses, and work backwards, describing what the program does to reach its goal.

Stuck? If you are having difficulty using the logic model format, stop and just tell the story of your program. Why does the program exist? What is its purpose? What does it accomplish? Ask someone else to write down the story and start to sketch out the relationships between activities and outcomes as you talk about them.

Double-check your logic model. Describe your logic model out loud; talk it through, describing it in narrative form. Does it make sense? Are the links between the components clear? Are you able to clearly articulate the logic model and the relationships between its components? Are the activities action-oriented? Are the outcomes specific and measurable?

A Note About Outcomes

As part of your logic model development, you will need to identify specific and measurable outcomes. To help you do so, visualize the before-and-after picture of a person benefiting from your program.

• What can that person achieve now that she could not achieve before?
• How is her life now different?
• How might her future be different?
• If this person had not participated in your program (or a similar organization), what might have happened?

These questions can help you identify specific changes a person may experience as a result of your program.

Figure 1. Logic Model Example, W.K. Kellogg Foundation

Resources: In order to accomplish our set of activities, we will need the following.
Activities: In order to address our problem or asset, we will accomplish the following activities.
Outputs: We expect that, once accomplished, these activities will produce the following evidence or service delivery.
Short- and Long-Term Outcomes: We expect that, if accomplished, these activities will lead to the following changes in 1–3 and then 4–6 years.
Impact: We expect that, if accomplished, these activities will lead to the following changes in 7–10 years.

Many funding agencies require applicants to submit a logic model as part of a funding request or to develop one in the course of their work, so, before you take steps to create a new one, check to see if your program already has one in place. If you do not have a logic model, there are several great resources to help your program develop one.

For More Information

Example Logic Model for a Domestic Violence Program, National Center on Domestic and Sexual Violence
http://www.ncdsv.org/images/NRCDV_FVPSA%20Outcomes%20APP%20A-Logic.pdf

Enhancing Program Performance with Logic Models, University of Wisconsin-Extension
http://www.uwex.edu/ces/lmcourse/

Logic Model Development Guide, W.K. Kellogg Foundation
http://www.wkkf.org/knowledge-center/resources/2006/02/wk-kellogg-foundation-logic-model-development-guide.aspx

Logic Model Workbook, Innovation Network
http://www.innonet.org/client_docs/File/logic_model_workbook.pdf

The Advocacy Progress Planner, The Aspen Institute
http://planning.continuousprogress.org/

Section II: Enhancing Your Evaluation Capacity

C. Budgeting for Evaluation

One of the most common barriers to evaluation is a lack of financial resources. The costs, and the resources required, vary widely and are difficult to estimate.
They depend on the goals of your evaluation, the complexity of your design, the size of your organization, and your internal capacity, including the extent to which you will be working with an external evaluator, among other things. General cost estimates for evaluations range anywhere from 5 to 20 percent of your program's total budget.

Where your evaluation budget falls within this range depends upon your evaluation goals. For example, if you want to obtain basic information such as the number of program participants and units of service, as well as demographic information, you will fall within the lower budget range. Even on a low budget, you will be able to gather basic information about participant satisfaction. If you increase your evaluation budget slightly, a low-to-moderate-cost evaluation will allow you to begin to collect in-depth information about your program's implementation and to determine whether or not there has been a change in your participants' knowledge, attitudes, or behaviors. For example, you might be able to conduct a pre- and post-test survey to see if knowledge or attitudes have changed. Moderate-to-high-cost evaluations will allow you to use a comparison or control group (a group of people who did not participate in your program but share most of the participants' characteristics, such as race, age, risks, or needs), which in turn allows you to attribute any changes in your participants to your programming. The highest-cost evaluations will allow you to obtain longer-term outcome information on your program participants. As you increase your budget for evaluation, you increase the amount of information that you can collect and the sophistication of the data analysis, which allows your organization to gain a more nuanced assessment of its work.

Depending upon the economic climate, you may not be able to increase your evaluation budget on a regular basis. One way to lower evaluation costs over time is to integrate evaluation activities into staff responsibilities, which may also sustain your evaluation efforts in the long term. For example, it may be effective for you to use your limited resources to hire an external expert to help you design the evaluation (including identifying your goals or measures of success and the kind of data you need to systematically collect to achieve those measurements) and train your staff to implement it. Your staff would then collect the pertinent data, enter it into a database, and analyze it on a regular basis. In this scenario, the bulk of your expenses would be one-time start-up costs, including the external evaluator, training, and technology for storing and analyzing the data. Of course, your organization would have to free up staff time to implement the evaluation activities on an ongoing basis. If you are interested in this option, be sure to double-check your existing and potential funding requirements to ensure that no restrictions exist on using grant funds for your evaluation and to learn what expenses are allowable.

Common Costs Associated With Evaluation Efforts

• Personnel costs: staff time and/or external evaluator fees.
• Training: for staff on evaluation protocols and/or technology.
• Materials and supplies: clerical supplies, paper, postage, etc.
• Printing and duplication: surveys, reports, etc.
• Equipment: computers, computer software, phones, etc.
• Compensation: for evaluation interviewees and focus group participants.
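As a rough check on your plan, you can compare an itemized estimate built from the cost categories above against the 5-to-20-percent guideline. The sketch below is a minimal, hypothetical example in Python; every dollar figure is invented for illustration and should be replaced with your own estimates.

# Hypothetical evaluation budget worksheet; all figures are placeholders.
program_budget = 400_000  # total annual program budget, in dollars

evaluation_costs = {
    "personnel (staff time + external evaluator)": 18_000,
    "training on evaluation protocols/technology": 2_500,
    "materials and supplies": 500,
    "printing and duplication": 750,
    "equipment and software": 3_000,
    "compensation for interview/focus group participants": 1_500,
}

total = sum(evaluation_costs.values())
share = total / program_budget

print(f"Estimated evaluation cost: ${total:,}")
print(f"Share of program budget: {share:.1%}")  # guide suggests roughly 5-20%
if not 0.05 <= share <= 0.20:
    print("Outside the typical 5-20% range; revisit the estimate or the evaluation scope.")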
Avoiding Common Budgeting Pitfalls

• Not sure what evaluation activities are supported by your current funding sources, or whether they are supported at all? When in doubt, seek guidance from your funder to improve your understanding and to allocate resources appropriately.
• Not sure how much you should budget for your evaluation or whether external evaluator fees are reasonable? Contact your state domestic or sexual violence coalition to find out which direct-service programs have conducted an evaluation, and reach out to those organizations for cost comparisons.

A Note About Funding

Be sure to check with your funders to see if monies can be used for your evaluation efforts, and whether any restrictions or requirements apply to your evaluation.

For More Information

Checklist for Evaluation Budget, Western Michigan University Evaluation Center
http://www.wmich.edu/evalctr/archive_checklists/evaluationbudgets.pdf

Developing an Effective Evaluation Plan: Setting the Course for Effective Program Evaluation, Centers for Disease Control and Prevention
http://www.cdc.gov/obesity/downloads/CDC-Evaluation-Workbook-508.pdf

Section II: Enhancing Your Evaluation Capacity

D. Building Your Evaluation Team

Having a team of interested and knowledgeable people behind your evaluation activities is an essential ingredient in successfully integrating and sustaining evaluation in your day-to-day operations. Making deliberate choices about the design and composition of your team is, thus, a necessary step before launching any evaluation. Will your team be composed exclusively of staff, a mix of staff and external evaluators (for example, consultants), or exclusively external evaluators? Each of these options has pros and cons that you will have to weigh to determine what's best given the nature of your organization, your evaluation goals, and your resources, as well as important considerations for ensuring an effective and successful process.

Figure 2. Pros and Cons of Using Internal Staff and External Evaluators

Internal Staff
Pros:
• Deep understanding of the organization and its theory of change
• Credibility (based upon in-depth program knowledge)
• Potential to enhance internal capacity
Cons:
• May lack evaluation expertise
• May not have time
• May fear being critical about what's not working
• Potential for lack of objectivity
• May raise participants' discomfort level and sense of vulnerability

External Evaluators
Pros:
• Knowledge and skills
• Independence and objectivity
• Credibility (based upon an unbiased, outside perspective)
• Fresh perspective on your organization's work
Cons:
• Additional cost
• Potential lack of subject-matter expertise
• Lack of knowledge of the organization
• Require staff time to orient the evaluator to the organization
• May reduce participants' willingness to participate in the evaluation

A Hybrid Approach

Because both external evaluators and staff bring unique expertise to bear on your evaluation activities, your organization may want to consider a hybrid approach. You could bring in an external evaluator to design your evaluation and build an infrastructure your staff can implement, to assist with foundational activities such as creating a logic model and measures to determine success, or to train your staff so they can conduct evaluation activities on an ongoing basis. External evaluators can also serve as a resource on an as-needed basis as your organization continues evaluation efforts. They can help address new challenges that may arise and adjust evaluation techniques as necessary.
For instance, external evaluators can work with you to determine the methodology of your evaluation, or they can work with you to determine how to safely include the voices of survivors.

Engaging and Supporting Staff

Even if you decide to partner with an external evaluator, your staff have an integral role to play in all stages of the process of determining what works in your organization, from planning to implementation and interpretation of results to sustaining the culture of evaluation. Each staff member has a unique understanding of your operations and programming. Also, all staff will play a role in collecting, interpreting, and using your evaluation information. For these reasons, you will want staff members to serve on your evaluation team or working group and, equally important, you will need to engage staff at all levels of your organization, from front-line staff to board members, in the planning and implementation processes.

Section II: Enhancing Your Evaluation Capacity

E. Partnering with External Evaluators

Few advocates working to end domestic and/or sexual violence have had the opportunity to learn how to conduct assessments and evaluations. For this reason, many of you will likely use an evaluation team or working group that includes both staff who understand the organization and the issues involved in its work and consultants with expertise in evaluation.

How an External Evaluator Can Help

A good evaluator can help your organization:

• develop a logic model to document your organization's plan, including vision, methods, and objectives;
• design an evaluation to meet your needs;
• develop ways to measure whether your organization is meeting its goals;
• design user-friendly forms to collect data, and efficient processes and technologies to enter and store data;
• analyze data and identify findings;
• interpret findings;
• develop policy and practice recommendations based on findings;
• write reports;
• identify mechanisms to review and use data on a regular basis to inform decision making;
• identify organizational needs around evaluation, including staff training; and
• assist in filling those needs so your organization can conduct its own evaluation activities on an ongoing basis.

Finding External Evaluators

You can find evaluators at local colleges and universities, as well as research and policy institutes. In addition, your community may have evaluation experts who work as independent consultants. One of the best ways to identify an external evaluator is by reaching out to your network of contacts, other social service organizations in your area, your city and county government, and your funders for recommendations.

Choosing the Right Evaluator

Just as when you hire a staff member, you will want to check the formal education, experience, approach, and style of potential evaluators. Key questions to consider when vetting candidates include:

• What is the person's philosophical approach? Does s/he see evaluation as a collaborative endeavor with your organization, or a solo expedition of an outsider looking in?
• Has the person worked with other social service or advocacy organizations? What about other organizations that address domestic violence or sexual assault?
• How familiar is the person with domestic and sexual violence and stalking, generally, and your organization, specifically?
• Does the person work with an institutional review board (IRB)? (See the discussion of IRBs later in this section.) If so, what will this process entail for your organization? Will it require you to build additional time into your evaluation work plan?
• What are the candidate's initial ideas for your organization, and how do they fit within your philosophy, values, goals, and style?
• Does the candidate have strong communication skills?
• Is the candidate personable? Would you, other staff members, and the people you serve be comfortable working with the candidate?

In addition to exploring these questions, be sure to review potential evaluators' work and writing samples and get references from similar organizations that have hired them in the past.

Working with Your Evaluator to Develop a Shared Vision

Clear lines of communication, a collaborative relationship, and a detailed scope of work are essential. You will need to ensure that your goals and expectations and the evaluator's goals and interests are aligned. Critical questions to discuss with your evaluator include:

• What is the purpose of the evaluation efforts? What design makes the most sense given your goals and organizational composition and resources?
• What are the roles and responsibilities of the evaluator versus those of staff?
• What materials and resources will your organization have to supply? What materials and resources will the evaluator provide?
• What are the key deliverables (for example, trainings, the evaluation design, survey instruments, databases, reports), and who has editorial authority over these products?
• If reports are included in the scope of work, what will they contain? When are they due? Who has final authority to use and release reports?
• What is the confidentiality agreement between the organization and the evaluator?
• If data will be collected, how will informed consent be obtained, and how will the confidentiality of participants and other sensitive information be ensured? Who will have access to the data, and where will it be stored?
• What is the budget for the evaluation? What are the payment schedule and conditions for payment?

Tips for a Successful Working Relationship

Any time people from different fields come together to collaborate on a project, there is tremendous benefit to intentionally building and nurturing the relationship. While everyone may agree on the overarching goals of the project, each field typically has its own culture, values, language, and standards or assumptions about how work should be done, among other things. Practitioners and researchers are no exception. Thus, developing a shared vision with an external evaluator will go a long way toward forging a strong collaboration. Additional tips for a strong and successful working relationship include:

• clearly defining roles and responsibilities;
• proactively determining decision-making authority;
• involving program staff and external evaluators in foundational planning sessions;
• communicating openly and frequently; and
• defining key terms and agreeing upon a shared language.

Institutional Review Boards

An institutional review board (IRB) is a group of people that monitors research designed to obtain information from or about human subjects. Members of an IRB come from multiple research disciplines and from the communities in which the research is conducted. When direct-service programs conduct research with participants in programs funded by federal or state governments, they may be required to submit materials to federal or state IRBs. Many institutions that conduct research regularly, such as large universities and hospitals, have established their own IRBs.
Working with an IRB helps ensure that your evaluation process protects your participants. You will need to work with an IRB if your organization is governed by one, if your outside evaluator is affiliated with one, or if your funding requires you to do so.

Research and the Protection of Human Subjects

Department of Justice (DOJ) regulations (28 CFR Part 46) protect the human subjects of federally funded research. In brief, 28 CFR Part 46 requires that most research involving human subjects that is conducted or supported by a federal department or agency be reviewed and approved by an IRB, in accordance with the regulations, before federal funds are expended for that research. As a rule, persons who participate in federally funded research must provide their informed consent and must be permitted to terminate their participation at any time. For DOJ grantees, research does not include program evaluations and assessments used only for quality improvements to a program or service, or for quality assurance purposes. 28 C.F.R. § 46.102(d).

For more information on determining whether or not an activity constitutes research involving human subjects, visit: http://ojp.gov/funding/Apply/Resources/ResearchDecisionTree.pdf

For general information regarding data confidentiality and protection of human research subjects (and model privacy certificates and other forms), visit: http://ojp.gov/ovc/grants/pdftxt/Privacy_certificate_model1.pdf

For More Information

Decision Tree for Determining Whether an Activity Constitutes Research, Office of Justice Programs
http://ojp.gov/funding/Apply/Resources/ResearchDecisionTree.pdf

E-Consortium of University Centers and Researchers for Partnership with Justice Practitioners, George Mason University
http://gmuconsortium.org/

Human Subjects and Privacy Protections, National Institute of Justice, Office of Justice Programs
http://www.nij.gov/nij/funding/human.subjects/welcome.htm

Protecting Human Subject Research Participants Training, National Institutes of Health, Office of Extramural Research
http://phrp.nihtraining.com/users/login.php

Research Involving Human Subjects, National Institutes of Health
http://grants.nih.gov/grants/policy/hs/

Section II: Enhancing Your Evaluation Capacity

F. Measuring Success in Domestic Violence and Sexual Assault Programs: Challenges and Considerations
By Cris M. Sullivan, PhD

Programs working with survivors of domestic violence or sexual assault have been under increasing scrutiny from policymakers and funders to demonstrate that they are making a significant difference in the lives of those with whom they work. Unfortunately, this growing demand for programs to establish their impact has been difficult to meet for many service providers and advocates. No new funds have accompanied the demands for program evaluation, and organization staff typically lack the time and expertise needed to evaluate their work. Furthermore, some of the practices that funders are requesting from programs could endanger the very survivors they are trying to help (for instance, when funders expect program staff to follow clients over time to gather outcome data).

Measures of success should focus on programs' effectiveness in helping survivors create changes that they have determined are important to them, and that lead to their increased well-being.

Most pressing, however, is the issue of defining program success. There is no consensus on what constitutes improvement in the lives of survivors and their children.
For example, some funders think that appropriate outcomes of domestic violence programs should be that clients will never suffer abuse again, or that they leave the relationship in which the abuse occurred. Some funders of sexual assault programs are looking for outcomes related to women avoiding situations or behaviors in order to reduce their risk of re-assault. Such projected outcomes run the risk of disregarding the complexity of survivors' lives, as well as overlooking the responsibility of perpetrators and our communities in preventing violence. It is critical that programs' projected outcomes avoid contributing to victim-blaming myths and focus on the reality that survivors come to programs with different experiences and needs.

Domestic violence and sexual assault programs should not dictate to survivors what decisions they should make. Their role is to provide safe conditions within which survivors can restore their sense of self, to help them receive justice and support from their communities, and to assist them in achieving their own goals toward greater well-being. Program outcomes can be derived from the larger objectives of well-being and justice, but they must be easily measurable, as well as tied to program activities. So, for example, while programs promote legal justice for survivors by educating them about the legal system, accompanying them through the legal process, helping them obtain legal remedies (when desired), and advocating on their behalf within the legal systems, they are not in control of whether the system will do what is needed to adequately protect the survivor. Program staff, then, might be responsible for helping a survivor get a restraining order if she both wants and is eligible for one, but they are not responsible for whether the police enforce the order.

Another important consideration for domestic violence and sexual assault programs when thinking about success is honoring the fact that each survivor receiving their help has her own particular life experiences, needs, and concerns. While some nonprofits have a singular goal (such as improving literacy, increasing graduation rates, or preventing drug abuse), domestic violence and sexual assault programs attempt to provide services that affect many aspects of a survivor's life. Some survivors might want or need legal assistance, for example, while others do not. Some are looking for counseling, while others are not. While this flexibility in service provision is a strength of these programs, it makes creating standardized outcomes very challenging.

Choosing outcomes on which to judge the work of domestic violence and sexual assault programs is also problematic because traditional outcome evaluation trainings and manuals focus on programs designed to change clients' behaviors. Literacy programs are designed to increase reading and writing skills, addiction programs are designed to help people stay clean and sober, and parenting programs help parents develop more effective skills to raise their children. By contrast, domestic violence and sexual assault programs are working with victims of someone else's behavior. The people they serve are not responsible for the abuse they experienced, and therefore the programs do not focus on changing their participants' behavior. These programs, then, need to take a broader view of what constitutes an outcome. It is helpful for domestic violence and sexual assault programs to remember that an outcome can be more than a change in behavior.
More broadly, an outcome is a change in knowledge, attitude, skill, behavior, expectation, emotional status, or life circumstance as a result of the services the program provides. Once programs accept this more comprehensive definition, it becomes far easier to choose outcomes they would expect to see as a result of their services. For example, there are numerous examples of domestic violence and sexual assault programs increasing survivors' knowledge (about typical trauma responses, say, or how various systems work). They also often work to change survivors' attitudes if they enter programs blaming themselves for their victimization. Staff also teach numerous new skills to survivors, such as coping skills related to their traumas or how to behave during court proceedings. Some clients do want to change their behaviors (for instance, if they enter programs with addiction issues), and staff can help here as well. Domestic violence and sexual assault programs may work to change people's expectations about the kinds of help available from their communities, and certainly these programs focus on improving the emotional status of their clients. Finally, some programs may focus on improving survivors' life circumstances by assisting them in obtaining safe and affordable housing, becoming employed, going back to school, or gaining citizenship.

It is not realistic to ask domestic violence and sexual assault programs to examine the long-term impact of their efforts; that is what research is for. If programs can demonstrate the positive short-term outcomes that have been shown to lead to longer-term impacts on the safety and well-being of survivors, this should help satisfy funders that the services they provide are worthwhile, and it should provide programs with helpful information about what is and is not working within their services.

Section III: Building Your Evaluation Knowledge and Skills

An important part of an organization's evaluation capacity is the level of knowledge staff have about program evaluation. While it might not be realistic to think that all staff will become experts in evaluation, providing staff with a basic foundation for understanding evaluation can be valuable. This knowledge can enable staff to help select an external evaluator and weigh in on important decisions about an evaluation's design. This guide is not meant to be a comprehensive resource for designing or conducting an evaluation; rather, it focuses on increasing your evaluation capacity. Many additional resources are available to guide you on conducting an evaluation, or to help you explore other evaluation concepts not covered in this guide.
Evaluation Resources Specific to Domestic and/or Sexual Violence

Advocacy Evaluation Mini-Toolkit, Learning for Action, http://www.lfagroup.com/wp/wp-content/uploads/2013/04/Advocacy-Evaluation-Mini-Toolkit.pdf
Domestic Violence Evidence Project, National Resource Center on Domestic Violence, http://dvevidenceproject.org/
Evaluation Toolkit: Evaluating the Work of Sexual Assault Nurse Examiner (SANE) Programs in the Criminal Justice System: A Toolkit for Practitioners, https://www.ncjrs.gov/pdffiles1/nij/grants/240917.pdf
Outcome Evaluation Strategies for Domestic Violence Service Programs Receiving FVPSA Funding: A Practical Guide, National Resource Center on Domestic Violence, http://www.ocjs.ohio.gov/FVPSA_Outcomes.pdf
Outcome Evaluation Strategies for Sexual Assault Service Programs: A Practical Guide, Michigan Coalition Against Sexual Assault, http://www.wcasa.org/file_open.php?id=883

General Evaluation Resources and Guides

American Evaluation Association, http://www.eval.org/
Basic Guide to Program Evaluation, Free Management Library, http://managementhelp.org/evaluation/program-evaluation-guide.htm
Better Evaluation, Rockefeller Foundation, http://betterevaluation.org/
Building Our Understanding: Key Concepts of Evaluation, Centers for Disease Control and Prevention, http://www.cdc.gov/nccdphp/dch/programs/healthycommunitiesprogram/tools/pdf/eval_planning.pdf
Center for Program Evaluation and Performance Measurement, Bureau of Justice Assistance, Office of Justice Programs, https://www.bja.gov/evaluation/
Evaluation, CYFERnet Children, Youth and Families Education and Research Network, http://www.cyfernet.org/
The Evaluation Center, Western Michigan University, http://www.wmich.edu/evalctr/
The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluation, Harvard Family Research Project, Harvard Graduate School of Education, http://www.hfrp.org/evaluation/the-evaluation-exchange
Guide to Performance Measurement and Program Evaluation, Office for Victims of Crime Training and Technical Assistance, https://www.ovcttac.gov/taResources/OVCTAGuides/PerformanceMeasurement/welcome.html
Measuring Success: A Guide to Becoming an Evidence-Based Practice, Vera Institute of Justice, http://www.vera.org/sites/default/files/resources/downloads/measuring-success.pdf
Multicultural Health Evaluation, The California Endowment, http://www.calendow.org/uploadedFiles/Publications/Evaluation/Multicultural_Health_Evaluation/TCE0510-2004_Commissioning_.pdf
Participatory Evaluation Essentials, The Bruner Foundation, http://www.evaluationservices.co/uploads/Evaluation.Essentials.2010.pdf
Program Development and Evaluation, University of Wisconsin-Extension, http://www.uwex.edu/ces/pdande/evaluation/
Program Evaluation, U.S. Centers for Disease Control and Prevention, http://www.cdc.gov/eval/
Project Evaluation Guide for Nonprofit Organizations, Imagine Canada, http://www.imaginecanada.ca/files/www/en/library/misc/projectguide_final.pdf
Program Evaluation: Principles and Practices, A Northwest Health Foundation Handbook, http://www.northwesthealth.org/resource/2005/9/22/program-evaluation-handbook-a-free-resource-for-nonprofits?rq=program%20evaluation
Tools and Resources for Assessing Social Impact, Foundation Center, http://trasi.foundationcenter.org/

Bibliography

American Evaluation Association. http://www.eval.org/ (accessed January 24, 2013).
Baker, Anita M. and Beth Bruner. Participatory Evaluation Essentials. New York: The Bruner Foundation, 2010. http://www.evaluationservices.co/uploads/Evaluation.Essentials.2010.pdf (accessed January 24, 2013).
Bureau of Justice Assistance, Office of Justice Programs. Center for Program Evaluation and Performance Measurement. https://www.bja.gov/evaluation (accessed January 24, 2013).
Bronte-Tinkew, Jacinta, Tiffany Allen, and Krystle Joyner. "Institutional Review Boards (IRBs): What are they and why are they important?" Research-to-Results 2008, no. 09 (2008). Washington, DC: Child Trends. http://www.childtrends.org/wp-content/uploads/2008/02/Child_Trends-2008_02_19_Evaluation7IRBs.pdf (accessed January 24, 2013).
Campbell, Rebecca, Megan Greeson, Nidal Karim, Jessica Shaw, and Stephanie Townsend. Evaluating the Work of Sexual Assault Nurse Examiner (SANE) Programs in the Criminal Justice System: A Toolkit for Practitioners, 2013. https://www.ncjrs.gov/pdffiles1/nij/grants/240917.pdf (accessed January 24, 2013).
Center for Evidence-Based Crime Policy at George Mason University. E-Consortium of University Centers and Researchers for Partnership with Justice Practitioners. http://gmuconsortium.org/ (accessed January 24, 2013).
Centers for Disease Control and Prevention. Capacity Building Evaluation Guide, 2010.
Centers for Disease Control and Prevention. CDC's Evaluation Efforts. http://www.cdc.gov/eval/ (accessed January 24, 2013).
Coalition for Evidence-Based Policy. "Rigorous Program Evaluations on a Budget: How Low-Cost Randomized Trials Are Responsible in Many Areas of Social Policy," 2012. http://coalition4evidence.org/wp-content/uploads/Rigorous-Program-Evaluations-on-a-Budget-March-2012.pdf (accessed January 24, 2013).
Coffman, Julia, ed. The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluating Child and Family Services, Harvard Family Research Project XI, no. 2 (2005). Cambridge, MA: Harvard Graduate School of Education. http://www.hfrp.org/var/hfrp/storage/original/application/d6517d4c8da2c9f1fb3dffe3e8b68ce4.pdf (accessed January 24, 2013).
Coryn, Chris L. S., Lindsay A. Noakes, Carl D. Westine, and Daniela C. Schröter. "A Systematic Review of Theory-Driven Evaluation Practice From 1990 to 2009," American Journal of Evaluation 32, no. 2 (2011): 199–226.
CYFERnet Children, Youth and Families Education and Research Network. Evaluation. http://www.cyfernet.org/models/ebp.html (accessed January 24, 2013).
Edleson, Jeffrey L. Evaluating Domestic Violence Programs. Edited by Carol Frick. Minneapolis: Domestic Abuse Project, 1997.
Evaluation Capacity Development Group. "The New ECDG Guide to Evaluation Capacity Development," 2012. http://www.ecdg.net/ecd-knowledgbase/the-new-ecdg-guide-to-evaluation-capacity-development-based-on-the-iwa-on-ecd/ (accessed January 24, 2013).
Farell, K., M. Kratzmann, S. McWilliam, N. Robinson, S. Saunders, J. Ticknor, and K. White. Evaluation Made Very Easy, Accessible, and Logical. Halifax, Nova Scotia: Atlantic Centre of Excellence for Women's Health, 2002. http://www.rosecharities.info/forms/Evaluation/Evaluation%20Made%20Very%20Easy%202002.pdf (accessed January 24, 2013).
Fratello, Jennifer, Tarika Daftary Kapor, and Alice Chasan. Measuring Success: A Guide to Becoming an Evidence-Based Practice. New York: Vera Institute of Justice's Center on Youth Justice for the MacArthur Foundation's Models for Change Initiative, 2013.
Gelmon, Sherril B., Anna Foucek, and Amy Waterbury. Program Evaluation: Principles and Practices. 2nd ed. Portland: Northwest Health Foundation, 2005. http://www.northwesthealth.org/resource/2005/9/22/program-evaluation-handbook-a-free-resource-for-nonprofits (accessed January 24, 2013).
Harvard Family Research Project, Harvard Graduate School of Education. The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluation. http://www.hfrp.org/evaluation/the-evaluation-exchange (accessed January 24, 2013).
Hopson, Rodney. Multicultural Health Evaluation: Overview of Multicultural and Culturally Competent Program Evaluation Issues, Challenges and Opportunities. Los Angeles: The California Endowment's Diversity in Health Evaluation Project, 2003. http://www.calendow.org/uploadedFiles/Publications/Evaluation/overview_multicultural_competent_program.pdf (accessed January 24, 2013).
Horn, Jerry. A Checklist for Developing and Evaluating Evaluation Budgets, 2001. http://www.wmich.edu/evalctr/archive_checklists/evaluationbudgets.pdf (accessed January 24, 2013).
Innovation Network, Inc. Logic Model Workbook. Washington, DC: Innovation Network, Inc., 2010. http://www.innonet.org/client_docs/File/logic_model_workbook.pdf (accessed January 24, 2013).
Innovation Network, Inc. "State of Evaluation: Evaluation Practice and Capacity in the Nonprofit Sector." http://stateofevaluation.org/ (accessed January 24, 2013).
International Network on Strategic Philanthropy. Theory of Change Manual, 2005. http://www.dochas.ie/Shared/Files/4/Theory_of_Change_Tool_Manual.pdf (accessed January 24, 2013).
Lennie, June. "An Evaluation Capacity-Building Process for Sustainable Community IT Initiatives," Evaluation 11, no. 4 (2005): 390–414.
Lyon, Eleanor and Cris M. Sullivan. "Developing a Logic Model," Outcome Evaluation Strategies for Domestic Violence Service Programs Receiving FVPSA Funding: A Practical Guide. Harrisburg, PA: National Resource Center on Domestic Violence, 2007. http://www.ncdsv.org/images/NRCDV_FVPSA%20Outcomes%20APP%20A-Logic.pdf (accessed January 24, 2013).
Lyon, Eleanor and Cris M. Sullivan. Outcome Evaluation Strategies for Domestic Violence Service Programs Receiving FVPSA Funding: A Practical Guide. Harrisburg, PA: National Resource Center on Domestic Violence, 2007. http://www.ocjs.ohio.gov/FVPSA_Outcomes.pdf (accessed January 24, 2013).
Mackinnon, Anne, Natasha Amott, and Craig McGarvey. "Mapping Change: Using Theory of Change to Guide Planning and Evaluation," GrantCraft, 2006. http://portals.wi.wur.nl/files/docs/ppme/Grantcraftguidemappingchanges_1.pdf (accessed January 24, 2013).
Macy, R. J., M. Giattina, T. H. Sangster, C. Crosby, and N. J. Montijo. "Domestic violence and sexual assault services: Inside the black box." Aggression and Violent Behavior 14 (2009): 359–373.
McCawley, Paul F. The Logic Model for Program Planning and Evaluation. University of Idaho Extension, The College of Agricultural and Life Sciences, 2001. http://www.cals.uidaho.edu/edcomm/pdf/CIS/CIS1097.pdf (accessed January 24, 2013).
McGarvey, Craig. "Making Measures Work for You: Outcomes and Evaluation," GrantCraft, 2006. http://www.grantcraft.org/assets/content/resources/guide_outcome.pdf (accessed January 24, 2013).
McNamara, Carter. Basic Guide to Program Evaluation. Free Management Library. http://managementhelp.org/evaluation/program-evaluation-guide.htm (accessed January 24, 2013).
Miller, Thomas I., Michelle M. Kobayashi, and Paula M. Noble. "Insourcing, Not Capacity Building, a Better Model for Sustained Program Evaluation," American Journal of Evaluation 27, no. 1 (2006): 83–94.
Morariu, Johanna. "Evaluation Capacity Building: Examples and Lessons from the Field," Innovation Network, 2012. http://www.innonet.org/client_docs/tear_sheet_ecb-innovation_network.pdf (accessed January 24, 2013).
Mouradian, Vera E., Mindy B. Mechanic, and Linda M. Williams. Recommendations for Establishing and Maintaining Successful Researcher-Practitioner Collaborations. Wellesley, MA: National Violence Against Women Prevention Research Center, Wellesley College, 2001. http://www.musc.edu/vawprevention/general/recomreport.pdf (accessed January 24, 2013).
National Center for Chronic Disease Prevention and Health Promotion. Developing an Effective Evaluation Plan. Atlanta: Centers for Disease Control and Prevention's Office on Smoking and Health; Division of Nutrition, Physical Activity, and Obesity, 2011. http://www.cdc.gov/obesity/downloads/CDC-Evaluation-Workbook-508.pdf (accessed January 24, 2013).
National Institutes of Health, Office of Extramural Research. Protecting Human Subject Research Participants Training. http://phrp.nihtraining.com/users/login.php (accessed January 24, 2013).
National Institutes of Health, Office of Extramural Research. Research Involving Human Subjects, 2012. http://grants.nih.gov/grants/policy/hs/ (accessed January 24, 2013).
National Institute of Justice, Office of Justice Programs. Human Subjects and Privacy Protections, 2010. http://www.nij.gov/nij/funding/humansubjects/welcome.htm (accessed January 24, 2013).
Office of Justice Programs. Decision Tree for Determining Whether an Activity Constitutes Research. http://ojp.gov/funding/Apply/Resources/ResearchDecisionTree.pdf (accessed January 24, 2013).
Office for Victims of Crime Training and Technical Assistance. Guide to Performance Measurement and Program Evaluation, 2010. https://www.ovcttac.gov/taResources/OVCTAGuides/PerformanceMeasurement/welcome.html (accessed January 24, 2013).
Patton, Michael Q. "Utilization-Focused Evaluation," in Evaluation Models, edited by Daniel L. Stufflebeam, George F. Madaus, and Thomas Kellaghan. Boston: Kluwer Academic Publishers, 2000. http://www.wmich.edu/evalctr/archive_checklists/ufe.pdf (accessed January 24, 2013).
Pejsa, Laura J. "Improving Evaluation in the Nonprofit Sector: The Promise of Evaluation Capacity Building for Nonprofit Social Service Organizations in an Age of Accountability." PhD diss., University of Minnesota, 2011.
Pew Fund for Health and Human Services. "Working Smarter, Not Just Harder: Using Outcome Data to Improve Performance," Programs Adjusting to a Changing Environment (PACE) information series seminar sponsored by the Pew Charitable Trusts, November 17, 2006. http://www.pewtrusts.org/~/media/legacy/uploadedfiles/wwwpewtrustsorg/reports/pew_fund_for_hhs_in_phila/Working20Smarter20summary20finalpdf.pdf (accessed January 24, 2013).
Preskill, Hallie, and Shanelle Boyle. "A Multidisciplinary Model of Evaluation Capacity Building," American Journal of Evaluation 29, no. 4 (2008): 443–459.
Preskill, Hallie and Rosalie T. Torres. "The Readiness for Organizational Learning and Evaluation (ROLE) Instrument," Evaluative Inquiry for Learning in Organizations. Thousand Oaks, CA: Sage, 1999. http://www.fsg.org/Portals/0/Uploads/Documents/ImpactAreas/ROLE_Survey.pdf (accessed January 24, 2013).
Program Evaluation Tool Kit: A Blueprint for Public Health Management. http://pacificaidsnetwork.org/wp-content/uploads/2012/01/Program-Evaluation-Toolkit1.pdf (accessed January 24, 2013).
Puddy, Richard W. and Natalie Wilkins. "Understanding Evidence, Part 1: Best Available Research Evidence. A Guide to the Continuum of Evidence of Effectiveness." Atlanta: Centers for Disease Control and Prevention, 2011.
Riger, Stephanie, et al. Evaluating Services for Survivors of Domestic Violence and Sexual Assault. Thousand Oaks, CA: Sage, 2002.
Rizo, C. F., R. J. Macy, D. M. Ermentrout, and N. B. Johns. "A review of family interventions for intimate partner violence with a child focus or child component." Aggression and Violent Behavior 16 (2011): 144–166.
Russon, Karen and Craig Russon. Evaluation Capacity Development Toolkit. Evaluation Capacity Development Group, 2011. http://www.ecdg.net/wp-content/uploads/2011/12/ECDG-Toolkit.pdf (accessed January 24, 2013).
Segone, Marco, ed. From Policies to Results: Developing Capacities for Country Monitoring and Evaluation Systems. New York: UNICEF, 2010.
Stevenson, John F., Paul Florin, Dana S. Mills, and Marco Andrade. "Building Evaluation Capacity in Human Service Organizations: A Case Study," Evaluation and Program Planning 25, no. 3 (2002): 233–243.
Sullivan, Cris M. "Evaluating Domestic Violence Support Service Programs: Waste of Time, Necessary Evil, or Opportunity for Growth?" Aggression and Violent Behavior 16, no. 4 (2011): 354–360.
Sullivan, Cris M. Examining the Work of Domestic Violence Programs Within a "Social and Emotional Well-Being Promotion" Conceptual Framework, 2012. www.dvevidenceproject.org
Sullivan, Cris M. and Suzanne Coats. Outcome Evaluation Strategies for Sexual Assault Service Programs: A Practical Guide. Okemos, MI: Michigan Coalition Against Domestic and Sexual Violence, 2000. http://www.wcasa.org/file_open.php?id=883 (accessed January 24, 2013).
Suvedi, Murari and Shawn Morford. "Conducting Program and Project Evaluations: A Primer for Natural Resource Program Managers in British Columbia," Forrex Series 6, 2003. Kamloops, B.C.: Forrex, Forest Research Extension Partnership. https://www.msu.edu/~suvedi/Resources/Documents/4_1_FS6.pdf (accessed January 24, 2013).
Taylor-Powell, Ellen and Heather H. Boyd. "Evaluation Capacity Building in Complex Organizations," New Directions for Evaluation 2008, no. 120 (2008): 55–69.
Taylor-Powell, Ellen, Sara Steele, and Mohammad Douglah. "Planning a Program Evaluation," Program Development and Evaluation. Madison, WI: Cooperative Extension, 1996. http://learningstore.uwex.edu/assets/pdfs/g3658-1.pdf (accessed January 24, 2013).
University of Wisconsin-Extension, Cooperative Extension. Building Capacity in Evaluating Outcomes: A Teaching and Facilitating Resource for Community-Based Programs and Organizations. Madison, WI: UW-Extension, Program Development and Evaluation, 2008. http://www.uwex.edu/ces/pdande/evaluation/bceo/pdf/bceoresource.pdf (accessed January 24, 2013).
University of Wisconsin-Extension, Cooperative Extension. "Evaluation," Program Development and Evaluation. http://www.uwex.edu/ces/pdande/evaluation/ (accessed January 24, 2013).
Volkov, Boris B. and Jean A. King. A Checklist for Building Organizational Evaluation Capacity, 2007. http://www.wmich.edu/evalctr/archive_checklists/ecb.pdf (accessed January 24, 2013).
Weigel, Dan, Randy Brown, and Sally Martin. "What Cooperative Extension Professionals Need to Know About Institutional Review Boards: Obtaining Consent," Journal of Extension 42, no. 5 (2004). http://www.joe.org/joe/2004october/tt1.php (accessed January 24, 2013).
Welsh, Myia and Johanna Morariu. "Evaluation Capacity Building: Funder Initiatives to Strengthen Grantee Evaluation Capacity and Practice," Innovation Network, 2011. http://www.innonet.org/client_docs/funder_ecb_final.pdf (accessed January 24, 2013).
Westat, Joy F. The 2010 User-Friendly Handbook for Project Evaluation. Directorate for Education and Human Resources. Arlington, VA: National Science Foundation, 2010. http://informalscience.org/documents/TheUserFriendlyGuide.pdf (accessed January 24, 2013).
Western Michigan University. The Evaluation Center. http://www.wmich.edu/evalctr/ (accessed January 24, 2013).
W.K. Kellogg Foundation. Logic Model Development Guide. Battle Creek, MI: W.K. Kellogg Foundation, 2004. http://www.epa.gov/evaluate/pdf/eval-guides/logic-model-development-guide.pdf (accessed January 24, 2013).
Zarinpoush, Fataneh. Project Evaluation Guide for Nonprofit Organizations. Toronto, Ontario: Imagine Canada, 2006. http://www.imaginecanada.ca/files/www/en/library/misc/projectguide_final.pdf (accessed January 24, 2013).

© Vera Institute of Justice 2014. All rights reserved.

The Vera Institute of Justice is an independent nonprofit organization that combines expertise in research, demonstration projects, and technical assistance to help leaders in government and civil society improve the systems people rely on for justice and safety. The Center on Victimization and Safety works with communities around the country to fashion services that reach, appeal to, and benefit all victims. Our work focuses on communities of people who are at elevated risk of harm but often marginalized from victim services and the criminal justice system. We combine research, technical assistance, and training to equip policymakers and practitioners with the information, skills, and resources needed to effectively serve all victims. For more information on the Center on Victimization and Safety, please contact cvs@vera.org or 212.376.3096.

This document was supported by Grant No. 2011-TA-AX-K125 awarded by the Office on Violence Against Women, U.S. Department of Justice. The opinions, findings, conclusions, and recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Department of Justice, Office on Violence Against Women.