An experimental characterization of workers' behavior and accuracy in crowdsourced tasks.

Crowdsourcing systems are evolving into a powerful tool of choice for dealing with repetitive or lengthy human-based tasks. Prominent among these is Amazon Mechanical Turk, in which Human Intelligence Tasks (HITs) are posted by requesters and then selected and executed by subscribed (human) workers on the platform. These HITs often serve research purposes. In this context, a very important question is how reliable the results obtained through these platforms are, given the limited control a requester has over the workers' actions. Various control techniques have been proposed, but they are not free of shortcomings, and their use must be accompanied by a deeper understanding of worker behavior. In this work, we attempt to interpret workers' behavior and reliability in the absence of control techniques. To do so, we perform a series of experiments with 600 distinct MTurk workers, specifically designed to elicit a worker's level of dedication to a task according to the task's nature and difficulty. We show that the time a worker requires to carry out a task correlates with its difficulty, and also with the quality of the outcome. We find that there are different types of workers: while some are willing to invest a significant amount of time to arrive at the correct answer, a significant fraction reply with a wrong answer. For the latter, the difficulty of the task and the very short time they took to reply suggest that they intentionally did not even attempt to solve the task.

Saved in:
Bibliographic Details
Main Authors: Evgenia Christoforou, Antonio Fernández Anta, Angel Sánchez
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/974753e867284d2383e4f4cc67f2f518
DOI: 10.1371/journal.pone.0252604
ISSN: 1932-6203
Published in: PLoS ONE, Vol 16, Iss 6, p e0252604 (2021)