How to Write an Effective Evaluation Plan

Developing an Effective Evaluation Plan
Setting the course for effective program evaluation

Part I: Developing Your Evaluation Plan

WHO IS THE AUDIENCE FOR THIS WORKBOOK?

The purpose of this workbook is to help public health program managers, administrators, and evaluators develop a joint understanding of what constitutes an evaluation plan, why it is important, and how to develop an effective evaluation plan in the context of the planning process. This workbook is intended to assist in developing an evaluation plan but is not intended to serve as a complete resource on how to implement program evaluation. Rather, it is intended to be used along with other evaluation resources, such as those listed in the Resource Section of this workbook. The workbook was written by the staff of the Office on Smoking and Health (OSH) and the Division of Nutrition, Physical Activity, and Obesity (DNPAO) at the Centers for Disease Control and Prevention (CDC). However, the content and steps for writing an evaluation plan can be applied to any public health program or initiative.

Part I of this workbook defines and describes how to write an effective evaluation plan. Part II of this workbook includes exercises, worksheets, tools, and a Resource Section to help program staff and the evaluation stakeholder workgroup (ESW) think through the concepts presented in Part I.

WHAT IS AN EVALUATION PLAN?

An evaluation plan is a written document that describes how you will monitor and evaluate your program, as well as how you intend to use evaluation results for program improvement and decision making. The evaluation plan clarifies how you will describe the "What," the "How," and the "Why It Matters" for your program.

• The "What" reflects the description of your program and how its activities are linked with the intended effects. It serves to clarify the program's purpose and anticipated outcomes.
• The "How" addresses the process for implementing the program and provides information about whether the program is operating with fidelity to the program's design. Additionally, the "How" (or process evaluation), along with output and/or short-term outcome information, helps clarify whether changes should be made during implementation.
• The "Why It Matters" provides the rationale for your program and the impact it has on public health. This is also sometimes referred to as the "so what" question. Being able to demonstrate that your program has made a difference is critical to program sustainability.

An evaluation plan is similar to a roadmap. It clarifies the steps needed to assess the processes and outcomes of a program. An effective evaluation plan is more than a column of indicators added to your program's work plan. It is a dynamic tool (i.e., a "living document") that should be updated on an ongoing basis to reflect program changes and priorities over time. An evaluation plan serves as a bridge between evaluation and program planning by highlighting program goals, clarifying measurable program objectives, and linking program activities with intended outcomes.

WHY DO YOU WANT AN EVALUATION PLAN?

Just as using a roadmap facilitates progress on a long journey, an evaluation plan can clarify what direction your evaluation should take based on priorities, resources, time, and the skills needed to accomplish the evaluation. The process of developing an evaluation plan in cooperation with an evaluation workgroup of stakeholders will foster collaboration and a sense of shared purpose.
Having a written evaluation plan will foster transparency and ensure that stakeholders are on the same page with regard to the purpose, use, and users of the evaluation results. Moreover, use of evaluation results is not something that can be hoped or wished for but must be planned, directed, and intentional (Patton, 2008). A written plan is one of the most effective tools in your evaluation toolbox. A written evaluation plan can—

• create a shared understanding of the purpose(s), use, and users of the evaluation results,
• foster program transparency to stakeholders and decision makers,
• increase buy-in and acceptance of methods,
• connect multiple evaluation activities (this is especially useful when a program employs different contractors or contracts),
• serve as an advocacy tool for evaluation resources based on negotiated priorities and established stakeholder and decision maker information needs,
• help to identify whether there are sufficient program resources and time to accomplish desired evaluation activities and answer prioritized evaluation questions,
• assist in facilitating a smoother transition when there is staff turnover,
• facilitate evaluation capacity building among partners and stakeholders,
• provide a multi-year comprehensive document that makes explicit everything from stakeholders to dissemination to use of results, and
• facilitate good evaluation practice.

There are several critical elements needed to ensure that your evaluation plan lives up to its potential. These elements include ensuring (1) that your plan is collaboratively developed with a stakeholder workgroup, (2) that it is responsive to program changes and priorities, (3) that it covers multiple years if your project is ongoing, and (4) that it addresses your entire program rather than focusing on just one funding source or objective/activity.
You will, by necessity, focus the evaluation based on feasibility, stage of development, ability to consume information, and other priorities that will be discussed in Steps 3 and 4 of this workbook. During the planning phase, however, your entire program should be considered by the evaluation group.

HOW DO YOU WRITE AN EVALUATION PLAN?

This workbook is organized around the elements of the evaluation plan, described within the context of CDC's Framework for Program Evaluation in Public Health (http://www.cdc.gov/eval/) and the planning process. The elements of an evaluation plan that will be discussed in this workbook include:

• Title page: Contains an easily identifiable program name, dates covered, and the basic focus of the evaluation.
• Intended use and users: Fosters transparency about the purpose(s) of the evaluation and identifies who will have access to evaluation results. It is important to build a market for evaluation results from the beginning. Clarifying the primary intended users, the members of the stakeholder evaluation workgroup, and the purpose(s) of the evaluation will help to build this market.
• Program description: Provides the opportunity for building a shared understanding of the theory of change driving the program. This section often includes a logic model and a description of the stage of development of the program in addition to a narrative description.
• Evaluation focus: Provides the opportunity to document how the evaluation focus will be narrowed and the rationale for the prioritization process. Given that there are never enough resources or time to answer every evaluation question, it is critical to work collaboratively to prioritize the evaluation based on a shared understanding of the theory of change identified in the logic model, the stage of development of the program, the intended uses of the evaluation, and feasibility issues.
This section should delineate the criteria for evaluation prioritization and include a discussion of feasibility and efficiency.

• Methods: Identifies evaluation indicators and performance measures, data sources and methods, as well as roles and responsibilities. This section provides a clear description of how the evaluation will be implemented to ensure the credibility of evaluation information.
• Analysis and interpretation plan: Clarifies how information will be analyzed and describes the process for interpretation of results. This section describes who will get to see interim results, whether there will be one or more stakeholder interpretation meetings, and the methods that will be used to analyze the data.
• Use, dissemination, and sharing plan: Describes plans for use of evaluation results and dissemination of evaluation findings. Clear, specific plans for evaluation use should be discussed from the beginning. This section should include a broad overview of how findings are to be used as well as more detailed information about the intended modes and methods for sharing results with stakeholders. This is a critical but often neglected section of the evaluation plan.

WHAT ARE THE KEY STEPS IN DEVELOPING AN EVALUATION PLAN USING CDC'S FRAMEWORK FOR PROGRAM EVALUATION?

CDC's Framework for Program Evaluation in Public Health (1999) is a guide to effectively evaluating public health programs and using the findings for program improvement and decision making. While the framework is described in terms of steps, the actions are not always linear and are often completed in a back-and-forth effort that is cyclical in nature. Like the framework, the development of an evaluation plan is an ongoing process: you may need to revisit a step during the process and complete other discrete steps concurrently. Within each step of the framework, there are important components that are useful to consider in the creation of an evaluation plan.
Figure 1: CDC Framework for Program Evaluation in Public Health

Steps:
1. Engage stakeholders.
2. Describe the program.
3. Focus the evaluation design.
4. Gather credible evidence.
5. Justify conclusions.
6. Ensure use and share lessons learned.

In addition to CDC's Framework for Program Evaluation in Public Health, there are evaluation standards that enhance the quality of evaluations by guarding against potential mistakes or errors in practice. The evaluation standards are grouped around four important attributes, as indicated by the inner circle in Figure 1: utility, feasibility, propriety, and accuracy. It is critical to remember that these standards apply to all steps and phases of the evaluation plan.

• Utility: Serve the information needs of intended users.
• Feasibility: Be realistic, prudent, diplomatic, and frugal.
• Propriety: Behave legally, ethically, and with due regard for the welfare of those involved and those affected.
• Accuracy: Ensure the evaluation is comprehensive and grounded in the data. (The Joint Committee on Standards for Educational Evaluation, 1994)

THE PROCESS OF PARTICIPATORY EVALUATION PLANNING

Step 1: Engage Stakeholders

Defining the Purpose in the Plan

Identifying the purpose of the evaluation is equally as important as identifying the end users or stakeholders who will be part of a consultative group. These two aspects of the evaluation serve as a foundation for evaluation planning, focus, design, and the interpretation and use of results. The purpose of an evaluation influences the identification of stakeholders for the evaluation, the selection of specific evaluation questions, and the timing of evaluation activities. It is critical that the program is transparent about the intended purposes of the evaluation. If evaluation results will be used to determine whether a program should be continued or eliminated, stakeholders should know this up front.
The stated purpose of the evaluation drives the expectations and sets the boundaries for what the evaluation can and cannot deliver. In any single evaluation, and especially in a multi-year plan, more than one purpose may be identified; however, the primary purpose can influence resource allocation, use, the stakeholders included, and more. Prioritizing purposes in the plan can help establish the link between purposes and the intended use of evaluation information. While there are many ways of stating the identified purpose(s) of the evaluation, they generally fall into three primary categories:

1. Rendering judgments—accountability
2. Facilitating improvements—program development
3. Knowledge generation—transferability (Patton, 2008)

An Evaluation Purpose identification tool/worksheet is provided in Part II, Section 1.2 to assist you with determining the intended purposes for your evaluation.

The ESW: Why should you engage stakeholders in developing the evaluation plan?

A primary feature of an evaluation plan is the identification of an ESW, which includes members who have a stake or vested interest in the evaluation findings, those who are the intended users who can most directly benefit from the evaluation (Patton, 2008; Knowlton, Philips, 2009), as well as others who have a direct or indirect interest in program implementation. These members represent the primary users of the evaluation results and generally act as a consultative group throughout the entire planning process as well as the implementation of the evaluation. Members sometimes also facilitate the implementation and/or dissemination of results: examples include promoting responses to surveys, providing in-kind support for interviews, and participating in interpretation meetings. Members can even identify resources to support evaluation efforts. The exact nature and roles of group members are up to you, but roles should be explicitly delineated and agreed to in the evaluation plan.

Engaging stakeholders in the ESW enhances intended users' understanding and acceptance of the utility of evaluation information. Stakeholders are much more likely to buy into and support the evaluation if they are involved in the evaluation process from the beginning. Moreover, to ensure that the information collected, analyzed, and reported successfully meets the needs of the program and stakeholders, it is best to work with the people who will be using this information throughout the entire process.

A Stakeholder Information Needs identification exercise is provided in Part II, Section 1.4 to assist you with determining stakeholder information needs.

Engaging stakeholders in an evaluation can have many benefits. In general, stakeholders include people who will use the evaluation results, support or maintain the program, or are affected by the program activities or evaluation results. Stakeholders can help—

• determine and prioritize key evaluation questions,
• pretest data collection instruments,
• facilitate data collection,
• implement evaluation activities,
• increase the credibility of analysis and interpretation of evaluation information, and
• ensure evaluation results are used.

Several questions pertaining to stakeholders may arise among program staff, including:

• Who are the program's stakeholders?
• How can we work with all of our stakeholders?
• How are stakeholders' roles described in the plan?

This section will help programs address these and other questions about stakeholders and their roles in the evaluation to guide them in writing an effective evaluation plan.

Who are the program's stakeholders?
The first step when the program begins to write its evaluation plan is to decide which stakeholders to include. Stakeholders are consumers of the evaluation results; as consumers, they have a vested interest in the results of the evaluation. In general, stakeholders are (1) those who are interested in the program and would use evaluation results, such as clients, community groups, and elected officials; (2) those who are involved in running the program, such as program staff, partners, management, the funding source, and coalition members; and (3) those who are served by the program, their families, or the general public. Others may also be included, as these categories are not exclusive.

How do you use an ESW to develop an evaluation plan?

It is often said of public health programs that "everyone is your stakeholder." Stakeholders will often have diverse and, at times, competing interests. Given that a single evaluation cannot answer all possible evaluation questions raised by diverse groups, it is critical that the prioritization process is outlined in the evaluation plan and that the stakeholder groups represented are identified. It is suggested that the program enlist the aid of an ESW of 8 to 10 members that represents the stakeholders who have the greatest stake or vested interest in the evaluation (Centers for Disease Control, 2008). These stakeholders, or primary intended users, will serve in a consultative role during all phases of the evaluation. As members of the ESW, they will be an integral part of the entire evaluation process, from the initial design phase to interpretation, dissemination, and ensuring use. Stakeholders will play a major role in the program's evaluation, including consultation and possibly even data collection, interpretation, and decision making based on the evaluation results. Sometimes stakeholders have competing interests that may come to light in the evaluation planning process.
It is important to explore agendas in the beginning and come to a shared understanding of roles and responsibilities, as well as the purposes of the evaluation. It is important that both the program and the ESW understand and agree to the importance and role of the workgroup in this process. To meaningfully engage your stakeholders, you will need to allow time for resolving conflicts and coming to a shared understanding of the program and evaluation. However, the time is worth the effort and leads toward a truly participatory, empowerment approach to evaluation.

How are stakeholders' roles described in the plan?

It is important to document information within your written evaluation plan based on the context of your program. For the ESW to be truly integrated into the process, ideally its members will be identified in the evaluation plan. The form this takes may vary based on program needs. If it is politically important, a program might want to specifically name each member of the workgroup, their affiliation, and their specific role(s) on the workgroup. If a workgroup is designed with rotating membership by group, then the program might just list the groups represented. For example, a program might have a workgroup comprised of members who represent funded programs (three members), non-funded programs (one member), and national partners (four members), or a workgroup comprised of members who represent state programs (two members), community programs (five members), and external evaluation expertise (two members). Being transparent about the role and purpose of the ESW can facilitate buy-in for evaluation results from those who did not participate in the evaluation, especially in situations where the evaluation is implemented by internal staff members.
Another by-product of workgroup membership is that stakeholders and partners increase their capacity for evaluation activities and their ability to be savvy consumers of evaluation information. This can have downstream impacts on stakeholders' and partners' programs, such as program improvement and timely, informed decision making. A stakeholder inclusion chart or table can be a useful tool to include in your evaluation plan.

A Stakeholder Mapping exercise and engagement tool/worksheet is provided in Part II, Sections 1.1 and 1.1b to assist you with planning for your evaluation workgroup.

The process for stakeholder engagement should also be described in the other steps related to the development of the evaluation plan:

Step 2: Describe the program. A shared understanding of the program and what the evaluation can and cannot deliver is essential to the successful implementation of evaluation activities and the use of evaluation results. The program and stakeholders must agree upon the logic model, the stage of development description, and the purpose(s) of the evaluation.

Step 3: Focus the evaluation. Understanding the purpose of the evaluation and the rationale for the prioritization of evaluation questions is critical for transparency and acceptance of evaluation findings. It is essential that the evaluation address the questions of greatest need to the program and the priority users of the evaluation.

Step 4: Planning for gathering credible evidence. Stakeholders have to accept that the methods selected are appropriate to the questions asked and that the data collected are credible, or the evaluation results will not be accepted or used. The market for and acceptance of evaluation results begins in the planning phase. Stakeholders can inform the selection of appropriate methods.

Step 5: Planning for conclusions.
Stakeholders should inform the analysis and interpretation of findings and facilitate the development of conclusions and recommendations. This in turn will facilitate the acceptance and use of the evaluation results by other stakeholder groups. Stakeholders can help determine if and when stakeholder interpretation meetings should be conducted.

Step 6: Planning for dissemination and sharing of lessons learned. Stakeholders should inform the translation of evaluation results into practical applications and actively participate in the meaningful dissemination of lessons learned. This will help ensure use of the evaluation. Stakeholders can facilitate the development of an intentional, strategic communication and dissemination plan within the evaluation plan.

EVALUATION PLAN TIPS FOR STEP 1

• Identify intended users who can directly benefit from and use the evaluation results.
• Identify an evaluation stakeholder workgroup of 8 to 10 members.
• Engage stakeholders throughout the plan development process as well as the implementation of the evaluation.
• Identify the intended purposes of the evaluation.
• Allow adequate time to meaningfully engage the evaluation stakeholder workgroup.

EVALUATION TOOLS AND RESOURCES FOR STEP 1:

• 1.1 Stakeholder Mapping Exercise
• 1.1b Stakeholder Mapping Exercise Example
• 1.2 Evaluation Purpose Exercise
• 1.3 Stakeholder Inclusion and Communication Plan Exercise
• 1.4 Stakeholder Information Needs

AT THIS POINT IN YOUR PLAN, YOU HAVE—

• identified the primary users of the evaluation,
• created the evaluation stakeholder workgroup, and
• defined the purposes of the evaluation.

Step 2: Describe the Program

Shared Understanding of the Program

The next step in the CDC Framework and the evaluation plan is to describe the program.
A program description clarifies the program's purpose, stage of development, activities, capacity to improve health, and implementation context. A shared understanding of the program and what the evaluation can and cannot deliver is essential to the successful implementation of evaluation activities and the use of evaluation results. The program and stakeholders must agree upon the logic model, the stage of development description, and the purpose(s) of the evaluation. This work will set the stage for identifying the program evaluation questions, focusing the evaluation design, and connecting program planning and evaluation.

Narrative Description

A narrative description helps ensure a full and complete shared understanding of the program. A logic model may be used to succinctly synthesize the main elements of a program. While a logic model is not always necessary, a program narrative is. The program description is essential for focusing the evaluation design and selecting the appropriate methods. Too often, groups jump to evaluation methods before they have a grasp of what the program is designed to achieve or what the evaluation should deliver. Even though much of this will have been included in your funding application, it is good practice to revisit the description with your ESW to ensure a shared understanding and to confirm that the program is still being implemented as intended.
The description will be based on your program's objectives and context, but most descriptions include, at a minimum:

• A statement of need to identify the health issue addressed
• Inputs, or program resources available to implement program activities
• Program activities linked to program outcomes through theory or best-practice program logic
• Stage of development of the program to reflect program maturity
• Environmental context within which the program is implemented

Logic Model

The description section often includes a logic model to visually show the link between activities and intended outcomes. It is helpful to review the model with the ESW to ensure a shared understanding of the model and to confirm that the logic model is still an accurate and complete reflection of your program. The logic model should identify available resources (inputs), what the program is doing (activities), and what you hope to achieve (outcomes). You might also want to articulate any challenges you face (the program's context or environment). Figure 2 illustrates the basic components of a program logic model. As you view the logic model from left to right, the further an outcome is from the intervention, the more time is needed to observe it. A major challenge in evaluating chronic disease prevention and health promotion programs is one of attribution versus contribution, and the fact that distal outcomes may not occur in close proximity to the program interventions or policy change. In addition, given the complexities of dynamic implementation environments, realized impacts may differ from intended impacts. However, the rewards of understanding the proximal and distal impacts of the program intervention often outweigh the challenges.
Logic model elements include:

• Inputs: Resources necessary for program implementation
• Activities: The actual interventions that the program implements in order to achieve health outcomes
• Outputs: Direct products obtained as a result of program activities
• Outcomes (short-term, intermediate, long-term, distal): The changes, impacts, or results of program implementation (activities and outputs)

Figure 2: Sample Logic Model

Stage of Development

Another activity needed to fully describe your program and prepare you to focus your evaluation is an accurate assessment of the stage of development of the program. The developmental stages that programs typically move through are planning, implementation, and maintenance. In the example of a policy or environmental initiative, the stages might look somewhat like this:

1. Assess environment and assets.
2. The policy or environmental change is in development.
3. The policy or environmental change has not yet been approved.
4. The policy or environmental change has been approved but not implemented.
5. The policy or environmental change has been in effect for less than 1 year.
6. The policy or environmental change has been in effect for 1 year or longer.

Steps 1 through 3 would typically fall under the planning stage, Steps 4 and 5 under implementation, and Step 6 under maintenance. It is important to consider a developmental model because programs are dynamic and evolve over time. Programs are seldom fixed in stone, and progress is affected by many aspects of the political and economic context. When it comes to evaluation, the stages are not always a "once-and-done" sequence of events. When a program has progressed past the initial planning stage, it may experience occasions where environment and asset assessment is still needed.
Additionally, in a multi-year plan, the evaluation should consider future evaluation plans to prepare datasets and baseline information for evaluation projects considering more distal impacts and outcomes. This preparation is an advantage of completing a multi-year evaluation plan with your ESW.

The stage of development conceptual model is complementary to the logic model. Figure 3.1 shows how general program evaluation questions are distinguished by both logic model categories and the developmental stage of the program. This places evaluation within the appropriate stage of program development (planning, implementation, and maintenance). The model offers suggested starting points for asking evaluation questions within the logic model while respecting the developmental stage of the program. This will prepare the program and the workgroup to focus the evaluation appropriately based on program maturity and priorities.

Figure 3.1: Stage of Development by Logic Model Category

Program Developmental Stage | Logic Model Category
Planning                    | Inputs and Activities
Implementation              | Outputs and Short-term Outcomes
Maintenance                 | Intermediate and Long-term Outcomes

Figure 3.2: Stage of Development by Logic Model Category Example

Planning stage
  Example developmental stages when passing a policy: Assess environment and assets; develop policy; the policy has not yet been passed.
  Example questions based on developmental stage: Is there public support for the policy? What resources will be needed for implementation of the policy?

Implementation stage
  Example developmental stages when passing a policy: The policy has been passed but not implemented; the policy has been in effect for less than 1 year.
  Example questions based on developmental stage: Is there compliance with the policy? Is there continued or increased public support for the policy? Are there major exemptions or loopholes to the policy?

Maintenance stage
  Example developmental stages when passing a policy: The policy has been in effect for 1 year or longer.
  Example questions based on developmental stage: What is the health impact of the policy?

Key evaluation questions and needs for information will differ based on the stage of development of the program. Additionally, the ability to answer key evaluation questions will differ by stage of development, and stakeholders need to be aware of what the evaluation can and cannot answer. For the policy program example above, planning-stage questions might include:

• Is there public support for the policy?
• What resources will be needed for implementation of the policy?

Implementation-stage questions might include:

• Is there compliance with the policy?
• Is there continued or increased public support for the policy?
• Are there major exemptions or loopholes to the policy?

Maintenance-stage questions might include:

• What is the economic impact of the policy?
• What is the health impact of the policy?

For more on stage of development and smoke-free policies, please see the Evaluation Toolkit for Smoke-Free Policies at http://www.cdc.gov/tobacco/basic_information/secondhand_smoke/evaluation_toolkit/index.htm.

A Program Stage of Development exercise is included in Part II, Section 2.1.

EVALUATION PLAN TIPS FOR STEP 2

• A program description will facilitate a shared understanding of the program between the program staff and the evaluation workgroup.
• The description section often includes a logic model to visually show the link between activities and intended outcomes.
• The logic model should identify available resources (inputs), what the program is doing (activities), and what you hope to achieve (outcomes).
• A quality program evaluation is most effective when it is part of a larger conceptual model of a program and its development.
EVALUATION TOOLS AND RESOURCES FOR STEP 2:
• 2.1 Program Stage of Development Exercise
• Evaluation Toolkit for Smoke-Free Policies at http://www.cdc.gov/tobacco/basic_information/secondhand_smoke/evaluation_toolkit/index.htm

AT THIS POINT IN YOUR PLAN, YOU HAVE—
• identified the primary users of the evaluation,
• created the evaluation stakeholder workgroup,
• defined the purposes of the evaluation,
• described the program, including context,
• created a shared understanding of the program, and
• identified the stage of development of the program.

Step 3: Focus the Evaluation

The amount of information you can gather concerning your program is potentially limitless. Evaluations, however, are always restricted by the number of questions that can realistically be asked and answered with quality, the methods that can be employed, the feasibility of data collection, and the available resources. These issues are at the heart of Step 3 in the CDC framework: focusing the evaluation.

The scope and depth of any program evaluation depend on program and stakeholder priorities; available resources, including financial resources; staff and contractor availability; and the amount of time committed to the evaluation. The program staff should work together with the ESW to determine the priority and feasibility of the evaluation questions and identify the uses of results before designing the evaluation plan.

In this part of the plan, you will apply the purposes of the evaluation, its uses, and the program description to narrow the evaluation questions and focus the evaluation for program improvement and decision making. In this step, you may begin to notice the iterative nature of developing the evaluation plan as you revisit aspects of Step 1 and Step 2 to inform the decisions to be made in Step 3.
Useful evaluations are not driven by special research interests or by what is easiest to implement, but by what information will be used by the program, stakeholders (including funders), and decision makers to improve the program and make decisions. Establishing the focus of the evaluation began with the identification of the primary purposes and the primary intended users of the evaluation. This process was further solidified through the selection of the ESW.

Developing the purposeful intention to use evaluation information, rather than simply producing another evaluation report, starts at the very beginning with program planning and your evaluation plan. You need to engage stakeholder interests and prepare stakeholders for evaluation use. This step facilitates conceptualizing what the evaluation can and cannot deliver.

It is important to collaboratively focus the evaluation design with your ESW based on the identified purposes, program context, logic model, and stage of development. Additionally, issues of priority, feasibility, and efficiency need to be discussed with the ESW and those responsible for implementing the evaluation. Transparency is particularly important in this step: stakeholders and users of the evaluation will need to understand why some questions were identified as high priorities while others were rejected or delayed.

A Focus the Evaluation exercise is located in Part II, Section 3.1 of this workbook.

Developing Evaluation Questions

In this step, it is important to solicit evaluation questions from your various stakeholder groups based on the stated purposes of the evaluation. The questions should then be considered through the lens of the logic model/program description and the stage of development of the program.
Evaluation questions should be checked against the logic model, and changes may be made to either the questions or the logic model, reinforcing the iterative nature of the evaluation planning process. The stage of development discussed in the previous chapter will help narrow the evaluation questions even further. It is important to remember that a program may experience characteristics of several stages simultaneously once past the initial planning stage.

You may want to ask yourself this question: How long has your program been in existence? If your program is in the planning stage, it is unlikely that measuring distal outcomes will be useful for informing program decision making. However, in a multi-year evaluation plan, you may begin to plan for and develop the appropriate surveillance and evaluation systems and baseline information needed to measure these distal outcomes (to be conducted in the final initiative year) as early as year 1.

In another scenario, you may have a coalition that has been established for 10 years and is in the maintenance stage. However, contextual changes may require you to rethink the programmatic approach being taken. In this situation, you may want to do an evaluation that looks at both planning stage questions ("Are the right folks at the table?" and "Are they really engaged?") and maintenance stage questions ("Are we having the intended programmatic impact?").

Questions can be further prioritized based on the information needs of the ESW and the program, as well as on feasibility and efficiency. Often, if a funder requires an evaluation plan, you might notice text like this: "Submit with application a comprehensive written evaluation plan that includes activities for both process and outcome measures." Distinguishing between process and outcome evaluation can be similar to considering the stage of development of your program against your program logic model.
In general, process evaluation focuses on the first three boxes of the logic model: inputs, activities, and outputs (CDC, 2008). This discussion with your ESW can further facilitate the focus of your evaluation.