One of the most common questions an evaluator encounters is: “Who, and how many people, should I survey to evaluate our program effectively?” The answer, while seemingly straightforward, requires a clear understanding of the program’s evaluation goals and of the methodology used to achieve those goals. This blog post unpacks some of the complexities and explores how to determine the right sample size so that findings are both valid and meaningful.
The art of sampling: why your sample matters!
In program evaluation, the term ‘population’ refers to the entire group of individuals participating in a program. It is often impossible to collect data from every single participant, and that is where a sample becomes important: a sample is a smaller, manageable subset of the population, strategically selected to represent the larger group. The right sample allows us to draw conclusions about the entire population without needing to collect data from everyone.
Creating a sampling plan for your survey involves a few key steps:
Understand your population: Who do you need information from, and what are their characteristics? Consider factors like ethnicity, gender, sexuality, age, and socio-economic status. This helps you decide exactly who makes up your target population, which is crucial for generalizing your evaluation results. When selecting your sample, prioritize the characteristics that matter most for your evaluation goals; this might mean focusing on certain geographical locations, age groups, or other variables relevant to your evaluation questions.
Sampling techniques: Use established sampling methods to select your participants. This might involve simple random sampling, where everyone has an equal chance of being selected; stratified sampling, which can be effective if your population is diverse and segmented; or convenience sampling, where participants are selected based on availability (learn more about sampling here; the first sketch after this list illustrates the first two approaches).
Determine sample size: If the group you’re studying is small (say, 100 people or fewer), it might be more effective to survey everyone rather than just a sample. This approach, also known as a census, eliminates sampling error and provides a complete view of your population. However, assess the resources (time, budget, staff) you have, as these will influence the feasibility of your sample.
The more diverse or evenly split your population is on key outcomes (e.g., 50% satisfied vs. 50% unsatisfied), the larger your sample needs to be to capture this diversity accurately.
Not everyone you reach out to will respond to your survey. Estimate how many people are likely to respond, based on past surveys or the nature of the issue being surveyed, and adjust your sample size accordingly (the second sketch after this list shows one common way to make this adjustment). Read more about six rules to determine sample size.
Ensure representativeness: Just as each spoonful of ice cream should reflect the flavor of the entire tub, the sample in your evaluation survey should mirror the important characteristics of your population (the final sketch after this list shows a simple check). This representativeness is essential to the validity of your evaluation results. Read more here.
Apply consistent methodology: An inconsistent method could skew your understanding of the population. A consistent, systematic approach helps ensure the sample provides a true snapshot of the entire group.
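To make the sampling techniques step more concrete, here is a minimal Python sketch of simple random and stratified selection. The participant roster, regions, sample sizes, and random seed are hypothetical placeholders, not data from any real program.

```python
import pandas as pd

# Hypothetical participant roster: 120 urban and 80 rural participants.
participants = pd.DataFrame({
    "participant_id": range(1, 201),
    "region": ["urban"] * 120 + ["rural"] * 80,
})

# Simple random sampling: every participant has an equal chance of selection.
simple_random = participants.sample(n=50, random_state=42)

# Stratified sampling: draw the same fraction from each region so the sample
# mirrors the population's 60/40 urban-rural split.
stratified = (
    participants.groupby("region", group_keys=False)
    .apply(lambda grp: grp.sample(frac=0.25, random_state=42))
)

print(simple_random["region"].value_counts())
print(stratified["region"].value_counts())
```

The stratified draw is one simple way to build in representativeness: each region contributes to the sample in the same proportion it holds in the population.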
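For the sample size step, the sketch below uses Cochran’s formula with a finite population correction, then inflates the result for expected non-response. The 600-person program, 5% margin of error, 95% confidence level, and 40% response rate are illustrative assumptions only.

```python
import math

def required_sample_size(population_size, p=0.5, margin_of_error=0.05,
                         z=1.96, expected_response_rate=1.0):
    """Estimate how many people to invite to a survey.

    Uses Cochran's formula with a finite population correction, then inflates
    the result for expected non-response.

    p ...................... expected proportion on the key outcome (0.5 is the
                             most conservative choice and gives the largest n)
    margin_of_error ........ tolerated half-width of the confidence interval
    z ...................... z-score for the confidence level (1.96 ~ 95%)
    expected_response_rate . share of invited people expected to complete the survey
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)    # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population_size)                # finite population correction
    return math.ceil(n / expected_response_rate)             # inflate for non-response

# Illustrative only: a 600-person program with a 5% margin of error.
print(required_sample_size(600))                              # completed surveys needed (~235)
print(required_sample_size(600, expected_response_rate=0.4))  # invitations to send (~587)
```

Note how the conservative choice of p = 0.5 reflects the point above: the more evenly split the population is on a key outcome, the larger the sample the formula returns.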
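Finally, a quick way to check representativeness is to compare the distribution of a key characteristic in your sample against the population. The gender breakdown below is made up purely for illustration.

```python
import pandas as pd

# Hypothetical proportions; replace with your own program and survey data.
population_share = pd.Series({"women": 0.55, "men": 0.40, "nonbinary": 0.05})
sample_share = pd.Series({"women": 0.48, "men": 0.46, "nonbinary": 0.06})

# Gaps of more than a few percentage points suggest a group is under- or
# over-represented and that results may not generalize well.
print((population_share - sample_share).abs().round(2))
```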
Conclusion
Determining the right sampling technique and sample size is more art than science, involving a balance of statistical principles, practical considerations, and the specific characteristics of your target population. Getting your sampling right considerably improves the validity and reliability of your survey results, providing you with a solid foundation for making informed decisions.
Please note – this post provides a high-level overview of the quantitative aspects of evaluation; qualitative methods are equally important.
Reference:
Lance, P., & Hattori, A. (2016). Sampling and evaluation: A guide to sampling for program impact evaluation. Chapel Hill, North Carolina: MEASURE Evaluation, University of North Carolina. Retrieved from https://www.measureevaluation.org/resources/publications/ms-16-112/at_download/document
Questions? Feedback? Get in touch!
This post was prepared for PAN’s Research and Evaluation Treehouse by Janak Bajgai.