Methods for Aggregating Crowdsourced Ontology-based Item Annotations
Main author:
Format: article
Language: EN
Published: FRUCT, 2021
Subjects:
Online access: https://doaj.org/article/1bd44bfe22b0406c955ece0536ea814d
Summary: Crowdsourcing plays an important role in the modern IT landscape, enabling the use of human information-processing abilities to solve problems that are still hard for machines. One of its most demanded applications is collecting item annotations, i.e., describing the contents of complex items with the help of labels (tags). Since input received from crowdsourcing participants is typically unreliable, each item is usually processed by several participants, and the obtained annotations have to be aggregated to improve quality. The paper considers a special case of annotation in which the set of possible labels, as well as the set of relationships between the labeled items and the labels, is defined by an OWL 2 ontology (the OWL 2 QL profile). Such semantic item annotations prove very useful for organizing large collections of items and enabling semantic search over them. To improve annotation quality, the paper proposes two aggregation methods, OntoVoting and OntoSB: the former is agnostic with respect to participant reliability, while the latter accounts for variations in reliability. Simulation experiments with ontology-based annotations of varying quality show that the proposed aggregation methods improve the quality of the collected ontology-based item annotations.
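The abstract contrasts a reliability-agnostic aggregation method with a reliability-aware one. Below is a minimal Python sketch of that general contrast: plain majority voting versus reliability-weighted voting over label sets. The data layout, function names, threshold, and reliability scores are illustrative assumptions only; this is not the paper's OntoVoting or OntoSB, which additionally exploit the structure of the OWL 2 ontology.

```python
from collections import Counter, defaultdict

# Hypothetical toy input: per item, each participant submits a set of labels.
# Participant names and label vocabulary are illustrative assumptions.
annotations = {
    "item1": {
        "alice": {"Cat"},
        "bob":   {"Dog"},
        "carol": {"Dog"},
    },
}

def majority_vote(item_annotations, threshold=0.5):
    """Reliability-agnostic aggregation: keep every label chosen by more
    than `threshold` of the participants, weighting all votes equally."""
    n = len(item_annotations)
    counts = Counter(
        label for labels in item_annotations.values() for label in labels
    )
    return {label for label, c in counts.items() if c / n > threshold}

def weighted_vote(item_annotations, reliability, threshold=0.5):
    """Reliability-aware aggregation: weight each participant's vote by an
    (assumed known) reliability score in [0, 1], then keep labels whose
    normalized weighted support exceeds `threshold`."""
    total = sum(reliability[p] for p in item_annotations)
    support = defaultdict(float)
    for participant, labels in item_annotations.items():
        for label in labels:
            support[label] += reliability[participant]
    return {label for label, s in support.items() if s / total > threshold}

# Assumed reliability estimates; in practice these would have to be inferred.
reliability = {"alice": 0.95, "bob": 0.2, "carol": 0.2}

print(majority_vote(annotations["item1"]))               # {'Dog'}
print(weighted_vote(annotations["item1"], reliability))  # {'Cat'}
```

In this toy case, two unreliable participants agreeing on a wrong label mislead plain majority voting, while the weighted variant follows the single reliable participant; the paper's methods pursue the same idea over ontology-defined label sets.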