With the line between research and evaluation blurring, more ISE projects are employing evaluation and must now meet formal requirements for protecting human subjects. What ethical and legal considerations do you need to take into account in order to evaluate the effectiveness of your work in informal science education? What steps should you take to deal with the federal government’s requirements? In this workshop, we will discuss a range of topics related to these requirements and how PIs can effectively address them. By reviewing different evaluation scenarios, participants will consider the
Questionnaires are used by faculty developers, administrators, faculty, and students in higher education to assess need, conduct research, and evaluate teaching or learning. While used often, questionnaires may be the most misused method of collecting information, due to the potential for sampling error and nonsampling error, which includes questionnaire design, sample selection, nonresponse, wording, social desirability, recall, format, order, and context effects. This article offers methods and strategies to minimize these errors during questionnaire development, discusses the importance of
Presented at the 2008 ISE PI Summit, this workshop presentation from the Grant Management Office at NSF introduced participants to best practices and strategies for managing their NSF grants.
Presented at the 2008 ISE PI Summit, this workshop presentation introduced participants to considerations and strategies for evaluating ISE project websites.
Presented at the 2008 ISE PI Summit, this workshop presentation introduced participants to the four basic phases of evaluation, an overview of exhibit and program evaluation and research, and other resources for working with a professional evaluator.
Presented at the 2008 ISE PI Summit, this presentation introduces viewers to evaluation in the NSF ISE (now AISL) program and the Online Project Monitoring System (OPMS).
Presented at the 2008 ISE PI Summit, this presentation from Alan Friedman introduces the Framework for Evaluating the Impacts of Informal Science Education Projects.
Based on the National Research Council study, Learning Science in Informal Environments: People, Places, and Pursuits, this book is a tool that provides case studies, illustrative examples, and probing questions for practitioners. In short, this book makes valuable research accessible to those working in informal science: educators, museum professionals, university faculty, youth leaders, media specialists, publishers, broadcast journalists, and many others. Practitioners in informal science settings--museums, after-school programs, science and technology centers, media enterprises, libraries
This Handbook is geared to the experienced researcher who is a novice evaluator. It orients the researcher to evaluation practice, with an emphasis on the use of qualitative techniques to augment quantitative measures.
The March 12-13, 2007 workshop at NSF on informal science education evaluation brought together a distinguished group of experts to discuss how impact categories might best be applied to various types of informal learning projects. This publication is an outcome of that meeting. The authors have striven to make the sections as helpful as possible given the workshop's primary focus on project impacts. It should be viewed as part of an ongoing process to improve the ways in which evaluation can most benefit ISE projects, NSF, and the field. The publication is intended to help those