Key points
- Evaluations are influenced by the context in which they are situated.
- An evaluability assessment examines whether a program is ready to be evaluated.
- Interest holder mapping identifies the people and organizations invested in the evaluation.
- Documenting place-based context, evaluation capacity, and evaluator readiness completes the picture of the evaluation's setting.
Overview and Importance
Understanding a program's context sets the stage for meaningful, actionable, and culturally responsive evaluation [1]. Context can include the various features of an evaluation's setting, such as location and environment, people and their cultural values, historical circumstances, the ways that power and privilege play out, and other pertinent characteristics.
The following sections show how to assess the four components of context:
- Readiness for Evaluation
- People (interest holders)
- Place
- Evaluation Capacity
The main products of this step include:
- Evaluability assessment to examine whether a program is ready to be evaluated
- Relevant interest holder mapping of the four main types of interest holders
- Documentation of place-based context and evaluation capacity assessment
- Evaluator reflection to assess your readiness to conduct evaluation
Refer to the full-length CDC Program Evaluation Framework Action Guide for additional information, examples, and worksheets to apply the concepts discussed in this step.
Readiness for Evaluation
Evaluability assessments are a type of pre-evaluation method used to determine which aspects of a program are ready for evaluation.
Evaluability Assessment Components
Program Intent and Logic Model establishes the program's objectives and expectations and depicts the relationships among inputs, activities, and expected outcomes [2-4].
Key questions to consider:
- What is the program?
- Is the program logic clear, rational, and understandable?
- Are there any apparent gaps?
- Do interest holders understand their roles?
Program Plausibility determines whether programmatic goals and outcomes are clearly defined and whether progress toward those goals can feasibly be measured [2-4].
Key questions to consider:
- What would success look like?
- Are program expectations realistic?
Data Accessibility determines whether the data needed for the evaluation can feasibly be acquired within resource and time constraints [2-4].
Key questions to consider:
- Are you collecting data on what you want to achieve?
- What data are available?
- What new data are feasible to collect for the evaluation?
Program Readiness determines whether the program needs adjustments or additional resources before evaluation. If no adjustments are needed, the program is ready for evaluation [2-4].
Key questions to consider:
- What adjustments need to be made to prepare the program for evaluation?
People
Interest holders are people or organizations who are invested in and may be affected by the evaluation. There are four types of interest holders:
People who are served or affected by the program
Examples include:
- Past, current, and future program participants
- Employers or associates of program participants
- Local recipients of your funds
- Populations affected by the problem
Individuals or groups who directly or indirectly receive program services may be most interested in aspects of the evaluation that are related to improvements or modifications in program services [5].
People who plan or implement the program
Examples include:
- Local and national professional organizations
- State or local health departments and health commissioners
Individuals or groups who have a professional role in the program may be most interested in how to improve the process for implementing the program's services and the outcomes that result from the program [5].
People who might use the evaluation findings
Examples include:
- Program designers, implementers, and evaluators
- Local government, state legislators, and governors
- Universities and educational groups
Individuals or groups who have authority to make decisions about the program, as well as those with a general interest in the results because they design, implement, evaluate, or advocate on behalf of the program being evaluated or similar programs, may be most interested in using the findings to inform decisions [5].
People who are skeptical about the program
Examples include:
- Past, current, and future program participants
- Employers or associates of program participants
- Developers of similar, complementary, or competing programs
Individuals or groups who are skeptical of or opposed to the program may be most interested in knowing whether the outcomes can be attributed to the program and whether the program's benefits justify its costs.
Each interest holder group may have different interests, needs, concerns, power, priorities, and perspectives that need to be understood to ensure the relevance and use of evaluation findings [5]. For each step, identify interest holders' needs, their stakes in the evaluation, the interests they seek to serve, and their desired level of involvement in the evaluation.
Place
It is important to consider the place-based context in which the program and evaluation are conducted, including the history of the program and community and the power dynamics at play. Existing systems in a community affect how evaluators engage with interest holders, design the evaluation, and communicate findings, especially in marginalized communities.
There are two place-based contexts to consider in Step 1: Program Features and Program Environment. As you think about place, use the questions below as a starting point for understanding each.
Program Features
- What is the program, and why was it developed?
- Who funds the program?
- Who is involved in the program development and implementation?
- What is the program user's commitment to the program?
- What are the demographics of the user group, including income, education, gender, race or ethnicity, and other identities?
- Who has authority and decision-making power?
- What do decision-making processes tend to look like?
Program Environment
- What is the history of the community, program, and organization, and of evaluation within them?
- What are the historical, economic, health, and social dimensions of the communities?
- What are the strengths of this context?
- Are there any conditions or circumstances that are problematic?
- What are the ongoing political, social, or economic conditions that might influence the program?
- How is power distributed among persons who interact with or influence the program or who might be engaged in the evaluation (funders, planners, implementers)?
- How does the organization typically learn about what it is doing well and where it can improve, and how receptive is it to learning from mistakes?
- What are the spoken or unspoken rules about evaluation at the organization?
- What are the spoken or unspoken rules about identifying and using data for action at the organization?
- How does the organizational mission support or oppose evaluation?
Evaluation Capacity
Evaluation capacity is the program's existing capacity to "do" and "use" evaluation. There are two types of evaluation capacity to consider: individual and organizational.
Individual Evaluation Capacity
- How knowledgeable is the individual or evaluation team about different evaluation approaches and methods [7]?
- How much experience does the individual or evaluation team have with developing evaluation tools and templates for use by the organization [7]?
- How strong are the individual's or evaluation team's analytical and facilitation skills [7]?
- What is the individual's attitude toward evaluation [7]?
- Does the individual view the evaluation as important and valuable [7]?
Organizational Evaluation Capacity
- What financial resources are available for the evaluation [7]?
- What human resources (time and internal/external evaluation staff) are available for the evaluation [7]?
- What is the organizational culture with respect to evaluation and using evaluation findings [7]?
- What mechanisms already exist to share products from the evaluation with others in the organization who could benefit [7]?
- Are there opportunities within the organization to reflect on insights that arise throughout the course of the evaluation [7]?
Evaluator Readiness
The lens that evaluators use in an evaluation is a direct reflection of their personal experiences and context. These experiences may reflect an individual's privilege and/or unintentional bias and may lead the evaluator to make decisions that perpetuate health inequities [6]. To help prevent this, evaluators may reflect on how their own personal context influences their evaluation practices.
Applying the Evaluation Standards and Cross-Cutting Actions
There are several key questions that an evaluator can use to make sure that they have successfully integrated the cross-cutting actions and applied the evaluation standards within Step 1 of the Framework. Refer to Table 2 in the CDC Program Evaluation Framework, 2024, for how to consider applying these actions and standards to assessing context.
1. Kidder DP, Fierro L, Luna E, et al. CDC Program Evaluation Framework, 2024. MMWR Recomm Rep. 2024;73(No. RR-6):1-37. doi:10.15585/mmwr.rr7306a1
2. Armstead T, Lee R. Evaluability Assessment Guidance for DELTA FOCUS Recipients. Internal CDC report; 2013. Unpublished.
3. D'Ostie-Racine L, Dagenais C, Ridde V. An evaluability assessment of a West Africa based non-governmental organization's (NGO) progressive evaluation strategy. Evaluation and Program Planning. 2013;36(1):71-79.
4. Farmer H. Conversations to have when designing a program: Fostering evaluative thinking. Better Evaluation Blog; 2018.
5. Bryson JM, Patton MQ, Bowman RA. Working with evaluation stakeholders: A rationale, step-wise approach and toolkit. Evaluation and Program Planning. 2011;34(1):1-12. doi:10.1016/j.evalprogplan.2010.07.001
6. Miranda-Hartsuff P, Hillard T, Walters H, Snipes A, Reddock E, Santos E, Elam P. Chapter 2: Positionality, Reflexivity, and Strengthening the Process of Conducting Culturally Responsive Racially Equitable Evaluations (CREE) as Practitioners of Color. In: Adedoyin A, Amutah-Onukagha N, Jones C, eds. Culturally Responsive & Equitable Evaluation: Visions and Voices of Emerging Scholars. Cognella, Inc.; 2024. ISBN 978-1-7935-5864-0.
7. Buetti D, Bourgeois I, Jafary M. Examining the competencies required by evaluation capacity builders in community-based organizations. Evaluation and Program Planning. 2023;97:102242.