Evaluating the validity and applicability of automated essay scoring in two massive open online courses
The use of massive open online courses (MOOCs) to expand students’ access to higher education has raised questions regarding the extent to which this course model can provide and assess authentic, higher level student learning. In response to this need, MOOC platforms have begun utilizing automated essay scoring (AES) systems that allow students to engage in critical writing and free-response activities. However, there is a lack of research investigating the validity of such systems in MOOCs. This research examined the effectiveness of an AES tool to score writing assignments in two MOOCs. Results indicated that some significant differences existed between Instructor grading, AES-Holistic scores, and AES-Rubric Total scores within two MOOC courses. However, use of the AES system may still be useful given instructors’ assessment needs and intent. Findings from this research have implications for instructional technology administrators, educational designers, and instructors implementing AES learning activities in MOOC courses.
Saved in:
Main Authors: Erin Dawna Reilly, Rose Eleanore Stafford, Kyle Marie Williams, Stephanie Brooks Corliss
Format: article
Language: EN
Published: Athabasca University Press, 2014
Subjects: massive open online courses; assessment; automated essay scoring systems; Special aspects of education (LC8-6691)
Online Access: https://doaj.org/article/0f6eb45b48a24187871450b031ba6f11
id: oai:doaj.org-article:0f6eb45b48a24187871450b031ba6f11
DOI: 10.19173/irrodl.v15i5.1857
ISSN: 1492-3831
Publication date: 2014-10-01
Article URL: http://www.irrodl.org/index.php/irrodl/article/view/1857
Journal: International Review of Research in Open and Distributed Learning, Vol 15, Iss 5 (2014)
institution: DOAJ
collection: DOAJ
topic: massive open online courses; assessment; automated essay scoring systems; Special aspects of education (LC8-6691)
author: Erin Dawna Reilly, Rose Eleanore Stafford, Kyle Marie Williams, Stephanie Brooks Corliss