Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data for real objects during an authentic students’ lab course. In a comparison of three different Convolutional Neural Networks (CNN), a Faster Region-Based-CNN, you only look once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
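The core analysis step the abstract describes, assigning each gaze sample to a detected object, amounts to a point-in-box test over the detector's output for the matching scene-camera frame. The following is a minimal Python sketch of that assignment, assuming detections are already available as (label, confidence, box) tuples; the names, the example objects, and the confidence-based tie-breaking are illustrative assumptions, not the authors' actual pipeline.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Detection:
        label: str         # object class, e.g. "power supply" (hypothetical)
        confidence: float  # detector score in [0, 1]
        box: tuple         # (x_min, y_min, x_max, y_max) in scene-camera pixels

    def assign_gaze(gaze_x: float, gaze_y: float,
                    detections: List[Detection]) -> Optional[str]:
        """Return the label of the highest-confidence box containing the gaze point."""
        hits = [d for d in detections
                if d.box[0] <= gaze_x <= d.box[2] and d.box[1] <= gaze_y <= d.box[3]]
        if not hits:
            return None  # gaze fell on the background or an undetected object
        return max(hits, key=lambda d: d.confidence).label

    # One video frame with two hypothetical detected lab objects and one gaze sample.
    frame_detections = [
        Detection("power supply", 0.91, (100, 50, 300, 200)),
        Detection("multimeter", 0.87, (280, 120, 420, 260)),
    ]
    print(assign_gaze(310.0, 150.0, frame_detections))  # -> "multimeter"

Overlapping boxes are resolved here by detector confidence; depending on the study design, one could instead prefer the smallest containing box or aggregate gaze samples into fixations before assignment.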

Saved in:
Bibliographic Details
Main Authors: Niharika Kumari, Verena Ruf, Sergey Mukhametov, Albrecht Schmidt, Jochen Kuhn, Stefan Küchemann
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects: eye movements, eye tracking, object detection, YOLO, Faster R-CNN, physics experiments, Chemical technology
Online Access: https://doaj.org/article/5dfd33239b7d44419913d5c70d053186
id oai:doaj.org-article:5dfd33239b7d44419913d5c70d053186
record_format dspace
spelling oai:doaj.org-article:5dfd33239b7d44419913d5c70d053186 2021-11-25T18:58:20Z
doi 10.3390/s21227668
issn 1424-8220
publishDate 2021-11-01T00:00:00Z
url https://www.mdpi.com/1424-8220/21/22/7668
url https://doaj.org/toc/1424-8220
source Sensors, Vol 21, Iss 22, p 7668 (2021)
institution DOAJ
collection DOAJ
language EN
topic eye movements
eye tracking
object detection
YOLO
Faster R-CNN
physics experiments
Chemical technology
TP1-1185
description Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data for real objects during an authentic students’ lab course. In a comparison of three different Convolutional Neural Networks (CNN), a Faster Region-Based-CNN, you only look once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
format article
author Niharika Kumari
Verena Ruf
Sergey Mukhametov
Albrecht Schmidt
Jochen Kuhn
Stefan Küchemann
author_sort Niharika Kumari
title Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/5dfd33239b7d44419913d5c70d053186
_version_ 1718410465075265536
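The description above reports that YOLO v4 was fastest when combined with an optical flow estimation, which suggests that detections are not recomputed on every scene-camera frame but carried forward between detector runs. Below is a minimal sketch of one common way to do this, propagating a bounding box with OpenCV's sparse Lucas-Kanade tracker; the synthetic frames, parameters, and median-shift rule are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    # Two synthetic grayscale frames: a bright square moving 5 px right and 3 px
    # down, standing in for consecutive scene-camera frames between detector runs.
    prev_frame = np.zeros((240, 320), dtype=np.uint8)
    next_frame = np.zeros((240, 320), dtype=np.uint8)
    prev_frame[60:120, 80:140] = 255
    next_frame[63:123, 85:145] = 255

    # Box from the last detector run, as (x_min, y_min, x_max, y_max).
    box = np.array([80, 60, 140, 120], dtype=np.float32)

    # Corner features on the whole frame, then keep only those near or inside the box.
    pts = cv2.goodFeaturesToTrack(prev_frame, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5)
    flat = pts.reshape(-1, 2)
    m = 2.0  # small margin so corners right on the box boundary are kept
    in_box = ((flat[:, 0] >= box[0] - m) & (flat[:, 0] <= box[2] + m) &
              (flat[:, 1] >= box[1] - m) & (flat[:, 1] <= box[3] + m))
    pts = pts[in_box]

    # Sparse Lucas-Kanade optical flow from the previous frame to the next one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, pts, None)

    # Propagate the box by the median displacement of successfully tracked points.
    good = status.ravel() == 1
    shift = np.median((new_pts[good] - pts[good]).reshape(-1, 2), axis=0)
    print("propagated box:", box + np.tile(shift, 2))  # ~ (85, 63, 145, 123)

In a real pipeline the detector would still be re-run every few frames, since frame-to-frame propagation of this kind accumulates drift.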