Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research.

Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiment differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, using participants recruited via AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers.

Bibliographic Details
Main Authors: Matthew J C Crump, John V McDonnell, Todd M Gureckis
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2013
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/f44331d5da82474e9a13011ef638cf5c
Published in: PLoS ONE, Vol 8, Iss 3, p e57410 (2013)
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0057410
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/23516406/?tool=EBI
Record ID: oai:doaj.org-article:f44331d5da82474e9a13011ef638cf5c