Aggregating Reliable Submissions in Crowdsourcing Systems
Main Authors:
Format: article
Language: EN
Published: IEEE, 2021
Subjects:
Online Access: https://doaj.org/article/51b6538ea0dd427c878fa4ba81d86588
Summary: Crowdsourcing is a cost-effective way to gather crowd wisdom for problems that are hard for machines. In crowdsourcing systems, requesters post tasks in order to obtain reliable solutions. However, because workers differ in expertise and background knowledge, they may deliver low-quality or ambiguous submissions. Crowdsourcing systems typically employ a task aggregation scheme to deal with this problem, but existing methods focus mainly on structured submissions and do not account for the cost incurred in completing a task. We exploit features of the submissions to improve task aggregation and propose a method that applies to both structured and unstructured tasks. Moreover, existing probabilistic methods for answer aggregation are sensitive to data sparsity; our approach uses a generative probabilistic model that incorporates answer similarity together with worker and task features. We then present a method for minimizing task cost, which in turn improves answer quality. Experiments on empirical data demonstrate the effectiveness of our method compared with state-of-the-art approaches.
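As a rough illustration of the kind of answer aggregation the summary describes, the sketch below scores each worker's free-text submission by its average similarity to the other submissions and returns the most representative one. This is a generic, simplified stand-in, not the paper's generative probabilistic model; the worker names, the `SequenceMatcher`-based similarity, and the selection rule are all assumptions made for the example.

```python
# Minimal sketch (not the authors' model): similarity-weighted aggregation
# of unstructured (free-text) submissions. Each submission is scored by its
# average similarity to the other submissions, and the highest-scoring
# submission is returned as the aggregate answer.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; a stand-in for a task-specific metric."""
    return SequenceMatcher(None, a, b).ratio()


def aggregate(submissions: dict[str, str]) -> str:
    """Pick the submission most similar, on average, to all the others."""
    workers = list(submissions)
    scores = {}
    for w in workers:
        others = [submissions[o] for o in workers if o != w]
        scores[w] = sum(similarity(submissions[w], o) for o in others) / max(len(others), 1)
    best_worker = max(scores, key=scores.get)
    return submissions[best_worker]


if __name__ == "__main__":
    # Hypothetical submissions for a single open-ended task.
    answers = {
        "worker1": "The bridge opened in 1937.",
        "worker2": "It opened in 1937.",
        "worker3": "No idea, maybe 1950?",
    }
    print(aggregate(answers))  # prints one of the mutually consistent answers
```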