You know almost exactly what you want to do to improve the public understanding of science and technology. But you don’t have much of an idea about how to begin evaluating your project, improving its effectiveness, and then proving its success. Evaluation 101 to the rescue. This workshop will begin with “Why do an evaluation?” and “What is an evaluation?” and quickly follow with “How would this work with a planetarium show, website, or television show?” We will help participants identify the products or processes in their ISE initiatives. The rationale will include interactive
Questionnaires are used by faculty developers, administrators, faculty, and students in higher education to assess needs, conduct research, and evaluate teaching or learning. While used often, questionnaires may be the most misused method of collecting information, due to the potential for sampling error and nonsampling error, which includes questionnaire design, sample selection, nonresponse, wording, social desirability, recall, format, order, and context effects. This article offers methods and strategies to minimize these errors during questionnaire development, discusses the importance of
Presented at the 2008 ISE PI Summit, this workshop presentation from the Grant Management Office at NSF introduced participants to best practices and strategies for managing their NSF grants.
Presented at the 2008 ISE PI Summit, this workshop presentation introduced participants to considerations and strategies for evaluating ISE project websites.
Presented at the 2008 ISE PI Summit, this workshop presentation introduced participants to the four basic phases of evaluation, an overview of exhibit and program evaluation and research, and other resources for working with a professional evaluator.
Presented at the 2008 ISE PI Summit, this presentation introduces viewers to evaluation in the NSF ISE (now AISL) program and the Online Project Monitoring System (OPMS).
Presented at the 2008 ISE PI Summit, this presentation from Alan Friedman introduces the Framework for Evaluating the Impacts of Informal Science Education Projects.
Based on the National Research Council study, Learning Science in Informal Environments: People, Places, and Pursuits, this book is a tool that provides case studies, illustrative examples, and probing questions for practitioners. In short, this book makes valuable research accessible to those working in informal science: educators, museum professionals, university faculty, youth leaders, media specialists, publishers, broadcast journalists, and many others. Practitioners in informal science settings--museums, after-school programs, science and technology centers, media enterprises, libraries
This Handbook is geared to the experienced researcher who is a novice evaluator. It orients the researcher to evaluation practice, with an emphasis on the use of qualitative techniques to augment quantitative measures.