Methods for Aggregating Crowdsourced Ontology-based Item Annotations

Crowdsourcing plays an important role in the modern IT landscape, enabling the use of human information-processing abilities to solve problems that are still hard for machines. One specific (and highly demanded) application of crowdsourcing is collecting item annotations, i.e., describing the contents of complex items with the help of labels (tags). Input received from crowdsourcing participants is typically unreliable; therefore, to increase annotation quality, each item is usually processed by several participants, and the obtained annotations have to be aggregated. The paper considers a special case of annotation in which the set of possible labels, as well as the set of relationships between the labeled items and the labels, is defined by an OWL 2 ontology (the OWL 2 QL profile). Such semantic item annotations turn out to be very useful for organizing large collections of items and enabling semantic search over them. To increase annotation quality, the paper proposes two aggregation methods, OntoVoting and OntoSB, which differ in that the first is agnostic with respect to participant reliability, while the second accounts for variations in reliability. Simulation experiments with ontology-based annotations of varying quality show that the proposed aggregation methods increase the quality of collected ontology-based item annotations.
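
The paper's own algorithms are not reproduced in this record, but the two aggregation regimes the abstract contrasts are easy to illustrate on plain (non-ontological) labels. Below is a minimal sketch of reliability-agnostic aggregation, i.e., simple majority voting; the data layout ((participant, label) pairs per item) and the label names are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate one item's crowd labels by plain majority voting.

    `annotations` is a list of (participant_id, label) pairs; every
    vote counts equally, so the scheme is agnostic to participant
    reliability (the property the abstract attributes to OntoVoting).
    This is generic majority voting, not OntoVoting itself.
    """
    counts = Counter(label for _, label in annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Three participants annotate the same item; the majority label wins.
votes = [("p1", "Painting"), ("p2", "Painting"), ("p3", "Sculpture")]
print(majority_vote(votes))  # -> Painting
```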

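A reliability-aware variant, in the spirit the abstract attributes to OntoSB, can weight each vote by an estimated per-participant accuracy and re-estimate those accuracies from agreement with the current consensus. The sketch below is a generic EM-style heuristic over the same hypothetical data layout, not the paper's OntoSB algorithm:

```python
from collections import defaultdict

def reliability_weighted_vote(item_annotations, n_iter=10):
    """Aggregate labels across items with per-participant weights.

    `item_annotations` maps item_id -> list of (participant_id, label).
    Weights start equal (so the first pass is plain majority voting)
    and are then re-estimated as each participant's smoothed rate of
    agreement with the current consensus. Generic heuristic, not OntoSB.
    """
    reliability = defaultdict(lambda: 1.0)
    consensus = {}
    for _ in range(n_iter):
        # E-step: consensus label = highest reliability-weighted mass.
        for item, votes in item_annotations.items():
            scores = defaultdict(float)
            for participant, label in votes:
                scores[label] += reliability[participant]
            consensus[item] = max(scores, key=scores.get)
        # M-step: reliability = fraction of a participant's votes that
        # match the consensus (Laplace smoothing avoids zero weights).
        hits, total = defaultdict(float), defaultdict(float)
        for item, votes in item_annotations.items():
            for participant, label in votes:
                total[participant] += 1.0
                hits[participant] += float(label == consensus[item])
        for participant in total:
            reliability[participant] = (hits[participant] + 1.0) / (total[participant] + 2.0)
    return consensus, dict(reliability)

# p1 and p2 usually agree; p3 often dissents, so p3's votes end up
# carrying less weight in the aggregated labels.
data = {
    "item1": [("p1", "Painting"), ("p2", "Painting"), ("p3", "Sculpture")],
    "item2": [("p1", "Drawing"), ("p2", "Drawing"), ("p3", "Sculpture")],
}
labels, weights = reliability_weighted_vote(data)
print(labels)   # -> {'item1': 'Painting', 'item2': 'Drawing'}
print(weights)  # p3's estimated reliability is lower than p1's and p2's
```

In the paper's setting the labels are additionally related through an OWL 2 QL ontology, so agreement between annotations can be partial (e.g., one participant uses a more general class than another); that ontological refinement is precisely what the proposed methods add and is not modeled in this sketch.
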
Bibliographic Details
Main Author: Andrew Ponomarev
Format: article
Language: EN
Published: FRUCT, 2021
Published in: Proceedings of the XXth Conference of Open Innovations Association FRUCT, Vol 30, Iss 1, pp. 177-183 (2021)
DOI: 10.23919/FRUCT53335.2021.9599979
ISSN: 2305-7254, 2343-0737
Subjects: ontology; owl; annotation; label aggregation; crowdsourcing; uncertainty; reasoning; Telecommunication (TK5101-6720)
Online Access: https://doaj.org/article/1bd44bfe22b0406c955ece0536ea814d
Full text: https://www.fruct.org/publications/fruct30/files/Pon.pdf