Evaluation is the systematic process of assessing the value, worth, or quality of something. While evaluation can apply to events, projects, and services, this article focuses on its role in community-based programs.
Rather than rigid rules, the evaluation concepts presented here serve to visually organize and make sense of the complexities involved in program assessment. In practice, different types of evaluations can be conducted separately or combined within a single evaluation effort. The terminology used in this article aligns with common evaluation language, making it easier for readers to explore additional resources, for example through web searches, using the same terms.
The goal of presenting this information is to help the reader determine which types of evaluation are most useful at different stages of program development and improvement. As shown in the diagram, these stages include needs and resource assessment, design and planning, implementation, and performance monitoring.
To reinforce understanding of the evaluation types mentioned, the example of a community-based sexually transmitted and blood-borne infections (STBBI) testing program will be used. Examples of suitable evaluation methods are listed after the example.

- FORMATIVE EVALUATION
Conducted early to identify whether a program is warranted, what gaps or issues the program aims to address, who the program will benefit, existing resources, perceived barriers & facilitators, and what capacities might need to be built or bridged. Identifying objectives and anticipated outcomes and results can also be part of this stage. What is discovered in a formative evaluation will inform a program's design.
Example of an STBBI testing program – Assessing whether a community lacks accessible STBBI testing services and identifying barriers (e.g., stigma, geographic distance).
Suitable evaluation methods – Needs assessment, program user interviews, environmental scans including literature review, focus groups, facilitators & barriers analysis.
- DEVELOPMENTAL EVALUATION
Designing a strong program involves determining which core activities and services will achieve its objectives and anticipated outcomes and results. Developmental evaluations can analyze outcomes and results data from other programs to guide program design. Testing out core activities and refining them based on real-time feedback and observations supports program innovation. It is about adapting and modifying to create a solid program plan before full implementation.
Example of an STBBI testing program – A pilot mobile testing initiative adapts in real time by shifting locations based on community feedback about accessibility.
Suitable evaluation methods – Real-time feedback, rapid ethnographic assessment, adaptive case studies, community advisory group discussions, iterative data tracking.
- PROCESS EVALUATION
Once a program is up and running (implemented), a process evaluation examines whether the program is functioning efficiently and according to plan, what is working well operationally & what is not, and what barriers are being encountered.
Example of an STBBI testing program – Reviewing how effectively outreach teams are engaging the community and whether testing sites are set up in locations that maximize access.
Suitable evaluation methods – Workflow analysis, observational studies, fidelity monitoring, service utilization tracking, staff and program user interviews.
- OUTCOMES/RESULTS EVALUATION
After a program has been running for a while, an outcomes/results evaluation can be conducted to look at whether the program is achieving what it set out to do. Measuring the actual outcomes and results will indicate the degree of program effectiveness. A good evaluation of this type outlines the program-specific outcomes and results that will be measured, such as changes in health, attitudes, knowledge, and/or behaviour in the people the program is intended to benefit.
Example of an STBBI testing program – Analyzing whether testing rates have increased and if more people are linking to care after a positive diagnosis.
Suitable evaluation methods – Pre/post surveys, behaviour tracking, routine program data analysis, program participant experiences case studies, linkage-to-care audits.
- IMPACT EVALUATION
Assesses long-term, systemic changes resulting from an intervention, often considering broader social, economic, or health effects. The term “systemic” can include micro-systems such as organizations or larger systems such as public healthcare systems.
Impacts are different from outcomes and results. Outcomes tend to be immediate or intermediate program-level changes, while impacts are longer-term, longer-lasting effects. Impacts at the individual program participant level can be deeply personal and transformative regarding well-being and quality of life.
Example of an STBBI testing program – Over several years, the community sees reduced STBBI transmission rates and decreased stigma around testing.
Suitable evaluation methods – Longitudinal cohort studies, epidemiological trend analysis, policy and system change assessment, social network analysis.
- SUMMATIVE EVALUATION
A summative evaluation reviews the overall effectiveness of a program at its end or after a major phase such as a pilot phase. It can guide decision making whether to expand, significantly modify, or discontinue a program. A summative evaluation can incorporate elements of other types of evaluation (formative, developmental, process, outcomes/results, and impact). In other words, it is comprehensive and synthesizes findings into a cohesive, high-level assessment about a program rather than just presenting & sharing separate evaluation results.
Example of an STBBI testing program – A final evaluation report determines whether the mobile testing program successfully increased early detection and should receive ongoing funding.
Suitable evaluation methods – Meta-analysis of program outcomes, cost-benefit analysis, stakeholder debriefs, comparative effectiveness review, final evaluation report.
Summary
Evaluation is a valuable tool for understanding, improving, and sustaining community-based programs. By considering evaluation as part of a program’s natural cycle – from assessing needs to measuring long-term impact – organizations can make informed decisions about what type of evaluation to conduct at each stage.
The visual approach presented in this article offers a way to see how formative, developmental, process, outcomes, impact, and summative evaluations fit together, rather than viewing them as rigid, separate steps. While these evaluation types can be used individually, they are often most effective when combined to provide a comprehensive picture of a program’s effectiveness.
Understanding when and how to use evaluation in relation to the program cycle can strengthen community initiatives and ensure they are meeting their intended goals.
We recognize that conducting program evaluations can be an involved undertaking. Did you know that PAN's Research and Evaluation Department provides fee-for-service evaluation options?
Please contact Joanna Mendell, Director of Research and Evaluation, if you would like to inquire about PAN's Impact Solutions Research and Evaluation Consulting.
___________________________________________________________________________________________
Questions? Feedback? Get in touch!
This post was prepared for PAN’s Research and Evaluation Treehouse by:

Monte Strong, Research Coordinator, [email protected]