Setting the Stage for Success: Evaluation and Evaluators
This blog was co-written with Karen Peterman and Cecilia Garibay. See additional author information at the end of the blog.
It’s finally fall—temperatures are starting to cool, the leaves will soon begin to change, and the proposal deadline for the National Science Foundation’s (NSF’s) Advancing Informal STEM Learning (AISL) program is just around the corner. This is the time of year when program developers and evaluators put their heads together to plan for their next great collaboration. Yes, now! In September (if not earlier).
Maybe you’ve been lucky enough to have funding from AISL for a while now, or maybe this will be your first submission. At minimum, all AISL projects are required to have plans and the means to support iterative improvement and to promote accountability (per section V. 3. D in the solicitation). However, some projects may want to do more.
We’re hoping that this blog post helps get you up to speed with thinking about informal STEM evaluation and the wide range of resources available to guide your planning process. We’ve written it as a “just in time” cheat sheet of resources and updates to help guide your work and inspire new ideas.
How should you evaluate?
Sometimes it’s helpful to have a roadmap of all the “big picture” things to think about when you are planning an evaluation. Both CAISE (the Center for Advancement of Informal Science Education) and STELAR (the STEM Learning and Research Center, the NSF-funded resource center for the Innovative Technology Experiences for Students and Teachers, or ITEST, program) have created comprehensive guides for evaluation that include suggestions for working with evaluators, developing your evaluation plan, and ensuring that you’re meeting program expectations. EvaluATE, the NSF-funded resource center for the Advanced Technological Education program, also has valuable resources that can guide your process.
- CAISE’s Principal Investigator's Guide: Managing Evaluation in Informal STEM Education Projects includes success stories to inspire you (Chapter 1); ideas for those who are new to evaluation (Chapter 2); and a guide to the evaluation planning process that includes logic models, theories of change, and guidance on identifying and measuring indicators of success (Chapter 5).
- STELAR’s A Program Director’s Guide to Evaluating STEM Education Programs: Lessons Learned from Local, State, and National Initiatives includes a section devoted to evaluation planning. It covers everything from creating logic models, to identifying outcomes, to selecting data collection procedures and measures.
- EvaluATE created six checklists, one for each of the six types of research outlined in Common Guidelines for Education Research and Development. The U.S. Department of Education’s Institute of Education Sciences and NSF co-developed the Common Guidelines to clarify the different types of education and learning research and to provide guidance about the purpose, justification, design features, and expected outcomes of each type. The checklists can help you quickly reference a type of research and determine whether you are following the guidelines’ expectations. Also, be sure to review the 2018 Companion Guidelines on Replication and Reproducibility in Education Research, which focuses on the importance of replication and reproducibility of research and provides guidance on steps researchers can take to promote corroboration and build the evidence base.
- In 2016, Leslie wrote a blog post for EvaluATE that includes three tips for crafting a strong NSF proposal. The tips include (1) reading the solicitation carefully, (2) tailoring the evaluation plan to the project activities and outcomes, and (3) developing intentional evaluation plans that have a logical flow from evaluation questions through reporting and dissemination.
What should you evaluate?
These two seminal texts provide definitions of outcomes and can help you frame your evaluation plans. They are especially important for those developing a proposal to the AISL program.
- A report from a 2008 NSF-funded workshop, the Framework for Evaluating Impacts of Informal Science Education Projects, defines learning in relation to five types of outcomes that are of interest to many in the informal STEM learning field. The report also provides examples of how to evaluate a range of informal programs (exhibits, mass media, community and youth programs, technology programs, collaborations, and programs that are a combination of types).
- Learning Science in Informal Environments, a 2009 National Research Council publication, defines learning in relation to six strands. The “Recommendations for Practice and Research” and the “Areas for Future Research” in the final chapter are still relevant today.
The spring 2019 issue of New Directions for Evaluation, a journal of the American Evaluation Association, is dedicated to evaluation in informal STEM education settings. The following two articles are especially useful for planning what to evaluate.
- Sue Allen and Karen provide an updated perspective on current successes and challenges in evaluating informal STEM programs. The “Looking Ahead” section might prove useful for those who want to build on recent trends in evaluation.
- Editors Alice Fu, Archana Kannan, and Richard Shavelson summarize the volume by noting four themes that cut across all the chapters: validity, context, technology, and evaluation capacity building. Strong evaluation plans typically attend to at least some of these topics.
Who should do the evaluation?
Because an evaluation is a requirement of the AISL program (see section V. 3. D in the solicitation), NSF is one of the major audiences for your evaluation. However, it’s important to think about other stakeholders for the evaluation as well. Who might use the findings of your evaluation? Who might learn something from the evaluation process? Who might want to learn about your successes and challenges, and who might be interested in your findings? Both CAISE and EvaluATE have some great resources for thinking about how to work with an evaluator and identifying evaluation stakeholders.
- A 2015 CAISE interview with Kirk Knestis provides an insider’s perspective on partnerships among project teams, researchers, and evaluators for AISL projects. He also shares his ideas about different evaluation approaches for different types of projects.
- An EvaluATE blog post, How can you make sure your evaluation meets the needs of multiple stakeholders?, lists different stakeholders who might be interested in an evaluation, the kinds of information they might need or want from an evaluation, and tips for meeting those needs.
- Another EvaluATE resource, Identifying and Involving Stakeholders in an Evaluation, provides a worksheet to help guide evaluators and program personnel in decision making surrounding evaluation issues.
- Chapter 3 of CAISE’s PI guide, Choosing an Evaluator: Matching Project Needs with Evaluator Skills and Competencies, can help you locate an evaluator who is well matched to your project needs. Chapter 4, Working as a Team: Collaboration Through All Phases of Project Development, outlines what the evaluator needs from you to achieve a successful evaluation and what you should expect from your evaluator.
What about special review criteria for broadening participation?
Every NSF proposal, including those submitted to the AISL program, needs to articulate how the work will contribute to broader impacts for society. If your project has broadening participation of underrepresented or underserved groups as the primary goal, additional review criteria apply. In this case, your proposal must identify the characteristics and needs of the targeted underrepresented groups (public or professional) to be served and include explicit plans or strategies for addressing or accommodating their specific interests, community or cultural perspectives, and educational needs.
Aligning your evaluation with the values and goals of your project is essential to producing findings that are both rigorous and useful. Check out the following resources on culturally responsive evaluation approaches that can help you design an evaluation that fits the goals of your proposal.
- This blog post by CAISE alum Patricia Montano, Culturally Responsive Evaluation in Informal STEM Environments and Settings, serves as a quick primer on culturally responsive evaluation and contrasts it with traditional evaluation approaches. The post also includes links to the American Evaluation Association’s Public Statement on Cultural Competence in Evaluation, which lists five essential practices for cultural competence.
- In the spring 2019 edition of New Directions for Evaluation, Cecilia and Rebecca Teasdale describe the role that evaluation can play in helping address inequities in participation in informal learning, providing current perspectives on the ways that project teams and evaluators should think about broadening participation.
- The NSF INCLUDES National Network website has a library of resources on broadening participation in STEM—the folder “Diversity, Equity and Inclusion” might help project leaders or Principal Investigators think through program plans, and the folder “Evaluation and Assessment” includes a few resources related to culturally responsive evaluation practices. Join the INCLUDES Network to access these resources, and while you’re on the website, see what other Network members are doing to broaden participation in informal STEM learning environments.
Leslie is the Principal Evaluation Director at EDC and leads network engagement and capacity building for the NSF INCLUDES Coordination Hub and the NSF-funded STEM Evaluation Community Project. Karen is president of Karen Peterman Consulting and has recently focused on evaluation for science festivals and on common measures for public engagement with science projects. Cecilia is principal and founder of Garibay Group and a contributing author of the Framework for Evaluating Impacts of Informal Science Education Projects. She is also a CAISE co-Principal Investigator and co-led the development of CAISE’s Broadening Participation Task Force toolkit, Broadening Perspectives on Broadening Participation in STEM.