There are a number of places where evaluators can share their reports with each other, such as the American Evaluation Association's eLibrary, the website informalscience.org, and organizations' own websites. Even though opportunities to share reports online are increasing, the evaluation field lacks guidance on what to include in evaluation reports meant for an evaluator audience. If the evaluation field wants to learn from evaluation reports posted to online repositories, how can evaluators help ensure that the reports they share are useful to this audience? This paper explores this question through an analysis of 520 evaluation reports uploaded to informalscience.org. The researchers created an extensive coding framework aligned with the features of evaluation reports and the needs of evaluators, and used it to identify how often particular elements were included in or missing from the reports. The analysis resulted in a set of guiding questions for evaluators preparing reports to share with other evaluators.
Funder: NSF (Award Number: 1010924)