Developing an Evaluation Plan
Planning for success starts with the development of an evaluation plan. The plan serves as the roadmap for project evaluation and makes the evaluation process transparent to everyone involved. The evaluation plan includes information about the purpose and context of the evaluation, explains what is being evaluated, describes the goals and outcomes of the program being evaluated (which might include a logic model or theory of change), and identifies who should be involved in the evaluation. A key element of the plan is the set of evaluation questions, which align with the project’s goals and outcomes and frame the entire evaluation by informing the evaluation approach, the sample, and how data will be collected and analyzed.
Evaluation Planning Resources: Handbooks, Guides & Checklists
These handbooks, guides, and checklists provide guidance for planning and carrying out an evaluation. They are useful for a wide range of individuals, from users of evaluation to professional evaluators.
- The User-Friendly Handbook for Project Evaluation (2010): This handbook is aimed at individuals who need to learn more about the value of evaluation and how to design and carry out an evaluation study. The handbook discusses various types of evaluation, data collection methods, and culturally responsive strategies.
- User-Friendly Handbook for Mixed Methods Evaluations (1997): This handbook is aimed at users looking for practical advice about evaluation methodology, specifically around the use of qualitative techniques to complement quantitative measures in evaluation.
- W.K. Kellogg Foundation Evaluation Handbook (2004): This handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have responsibility for the ongoing evaluation of W.K. Kellogg Foundation–funded projects, but provides useful information for the informal science education field.
- Evaluation Checklists: This site from the Evaluation Center at Western Michigan University provides high-quality checklists targeted to specific aspects of planning, conducting, and managing an evaluation. There are also checklists dedicated to evaluation capacity building, metaevaluation, and evaluation of specific types of educational programs and products.
- Team-Based Inquiry Guide: A Practical Guide for Using Evaluation to Improve Informal Education Experiences (2014): This guide describes a collaborative evaluation process developed by the Nanoscale Informal Science Education (NISE) Network called Team-Based Inquiry. Team-Based Inquiry is an approach to building evaluation capacity within organizations by empowering informal education professionals to get the data they need, when they need it, in order to improve their products and practices.
- Framework for Evaluating Impacts of Informal Science Education Projects (2008): This publication is intended to help those developing and evaluating informal science education projects better articulate and measure projects’ intended impacts on public or professional audiences.
- Common Guidelines for Education Research and Development (2013): This document from the U.S. Department of Education and the National Science Foundation provides guidelines for improving the quality, coherence, and pace of knowledge development in STEM education. The Guidelines include recommendations for all types of research and development studies that call for external feedback or evaluation.
- Building capacity in evaluating outcomes: A teaching and facilitating resource for community-based programs and organizations (2008): This resource from the University of Wisconsin Extension provides 93 activities and materials for evaluators to use in building the capacity of individuals, groups, and organizations in evaluating outcomes. It provides, in one place, a complete set of practical resources that can be readily used or modified for evaluation capacity building efforts.
- My Environmental Education Evaluation Resource Assistant (MEERA): MEERA is an online “evaluation consultant” designed by the University of Michigan. It points to a wide range of resources that are helpful when evaluating environmental education programs.
- User's Guide for Evaluating Learning Outcomes in Citizen Science: This NSF-funded guide is designed for project directors and evaluators who want to measure individual learning outcomes from participation in citizen science projects. It covers planning, implementing, and disseminating evaluation and includes numerous tables, worksheets, and templates.
Working with an Institutional Review Board
All evaluators and researchers need to consider the protection of the people from whom they gather data. This might mean working with an Institutional Review Board (IRB), a committee that reviews and approves research and evaluation protocols that involve gathering data from human subjects. Below are links to a CAISE-curated blog series that provides resources and an ongoing forum to help those doing research and evaluation in informal STEM learning environments navigate the complexities of defining appropriate procedures for human subjects protection.
- Navigating the complexities of research on human subjects in informal settings
- Going through the Institutional Review Board (IRB) Process for Informal Education Organizations
- Facilitating the IRB Process: Limiting Risk to Research Participants and Obtaining Implied Consent
- Resources for Dealing with the IRB Process: Sample Applications, Consent Forms, & Organizations
Resources for Culturally Responsive Evaluation
Evaluators interact with a broad range of people from many political, religious, ethnic, language, and racial groups. Therefore, they must be responsive to cultural issues in their work. Frierson, Hood, Hughes, and Thomas state in The User-Friendly Handbook for Project Evaluation (NSF 2010a, p. 75): "Culturally responsive evaluators honor the cultural context in which an evaluation takes place by bringing needed, shared life experiences and understandings to the evaluation tasks at hand and hearing diverse voices and perspectives. The approach requires that evaluators critically examine culturally relevant but often neglected variables in project design and evaluation. In order to accomplish this task, the evaluator must have a keen awareness of the context in which the project is taking place and an understanding of how this context might influence the behavior of individuals in the project."
The American Evaluation Association (AEA) affirms the significance of cultural competence in evaluation, stating: "To ensure recognition, accurate interpretation, and respect for diversity, evaluators should ensure that the members of the evaluation team collectively demonstrate cultural competence. Cultural competence is a stance taken toward culture, not a discrete status or simple mastery of particular knowledge and skills. A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation throughout the process." More information from AEA on cultural competence, including essential practices, can be found in the American Evaluation Association Statement on Cultural Competence in Evaluation.
AEA also operates a number of Topical Interest Groups (TIGs), several of which focus on aspects of cultural competence. TIGs with online information include:
- Disabilities and Other Vulnerable Populations TIG
- International and Cross-Cultural Evaluation TIG
- LGBT Issues TIG
- Multiethnic Issues in Evaluation TIG
Additional resources for understanding and performing culturally responsive evaluation include:
- The Center for Culturally Responsive Evaluation and Assessment (CREA) at the University of Illinois at Urbana-Champaign: This interdisciplinary endeavor brings together researchers from across the university, as well as domestic and international research partners, to address the growing need for policy-relevant studies that consider cultural norms, practices, and expectations in the design, implementation, and evaluation of social and educational interventions.
- Practical strategies for culturally competent evaluation: This guide from the Centers for Disease Control and Prevention (CDC) provides strategies for approaching evaluation with a critical cultural lens to ensure that evaluation efforts have cultural relevance and generate meaningful findings that stakeholders will use. Although the guide focuses on the evaluation of public health programs, its guidance is more broadly applicable.
- Beyond Rigor: Improving Evaluations With Diverse Populations. This website was developed by the Science Museum of Minnesota and Campbell-Kibler Associates to provide tips on designing, implementing, and assessing the quality of evaluations of programs and projects aimed at improving the quality, quantity, and diversity of the STEM workforce.