Using Scientific Abstracts to Measure Learning Outcomes in the Biological Sciences

Bibliographic Details
Main Authors: Rebecca Giorno, William Wolf, Patrick L. Hindmarsh, Jeffrey V. Yule, Jeff Shultz
Format: article
Language: EN
Published: American Society for Microbiology, 2013
Subjects:
Online Access: https://doaj.org/article/e9ae9bd12a3e493d94ce01aae24c8c1e
Description
Summary: Educators must often measure the effectiveness of their instruction. We designed, developed, and preliminarily evaluated a multiple-choice assessment tool that requires students to apply what they have learned to evaluate scientific abstracts. This examination methodology offers the flexibility both to challenge students in specific subject areas and to develop the critical thinking skills that upper-level classes and research require. Although students do not create an end product (performance), they must demonstrate proficiency in a specific skill that scientists use on a regular basis: critically evaluating scientific literature via abstract analysis, a direct measure of scientific literacy. Scientific abstracts from peer-reviewed research articles lend themselves to in-class testing, since they are typically 250 words or fewer and their analysis requires skills beyond rote memorization. To assess the effectiveness of particular courses, we performed pre- and postcourse assessments in five different upper-level courses (Ecology, Genetics, Virology, Pathology, and Microbiology) to determine whether students were developing subject area competence and whether abstract-based testing was a viable instructional strategy. Assessment should cover all levels of Bloom's hierarchy, which can be accomplished via multiple-choice questions (2). We hypothesized that by comparing the mean scores of pre- and posttest exams designed to address specific tiers of Bloom's taxonomy, we could evaluate the effectiveness of a course in preparing students to demonstrate subject area competence. We also sought to develop general guidelines for preparing such tests and methods for identifying test- and course-specific problems.
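
The summary frames course evaluation as a comparison of mean pre- and posttest scores. The article does not specify a statistical procedure, so the following sketch is a hypothetical illustration only: a paired t-test, one common choice for matched pre/post designs, applied to invented student scores.

    # Hypothetical illustration, not from the article: a paired t-test is one
    # common way to compare mean pre- and posttest scores for the same students.
    from statistics import mean
    from scipy.stats import ttest_rel

    # Invented percent-correct scores for eight students, before and after a course.
    pre = [52, 61, 48, 70, 55, 63, 58, 66]
    post = [68, 72, 60, 81, 64, 75, 70, 79]

    t_stat, p_value = ttest_rel(post, pre)  # paired: same students tested twice
    print(f"mean pre = {mean(pre):.1f}, mean post = {mean(post):.1f}")
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

A paired test is used here rather than an independent-samples test because each student contributes both a pre- and a postcourse score; any real analysis would also need to check the test's assumptions against the actual score distributions.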