Human Activity Recognition: A Comparative Study to Assess the Contribution Level of Accelerometer, ECG, and PPG Signals

Bibliographic Details
Main Authors: Mahsa Sadat Afzali Arani, Diego Elias Costa, Emad Shihab
Format: Article
Language: English
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/52985a7a19c74919903522baac9f6dfa
Description
Abstract: Inertial sensors are widely used in the field of human activity recognition (HAR), since this source of information is the most informative time series among non-visual datasets. HAR researchers are actively exploring other approaches and different sources of signals to improve the performance of HAR systems. In this study, we investigate the impact of combining bio-signals with a dataset acquired from inertial sensors on recognizing human daily activities. To this end, we used the PPG-DaLiA dataset, which consists of 3D-accelerometer (3D-ACC), electrocardiogram (ECG), and photoplethysmogram (PPG) signals acquired from 15 individuals while performing daily activities. We extracted hand-crafted time- and frequency-domain features and then applied a correlation-based feature selection approach to reduce the dimensionality of the feature set. After introducing early fusion scenarios, we trained and tested random forest models with subject-dependent and subject-independent setups. Our results indicate that combining features extracted from the 3D-ACC signal with the ECG signal improves the classifier's F1-score by 2.72% and 3.00% (from 94.07% to 96.80%, and from 83.16% to 86.17%) for the subject-dependent and subject-independent setups, respectively.
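
To make the pipeline described in the abstract concrete, below is a minimal sketch of its four steps: windowed hand-crafted time- and frequency-domain features, a simple stand-in for correlation-based feature selection, early fusion of 3D-ACC and ECG features, and a subject-independent (leave-one-subject-out) random forest evaluation. It uses NumPy and scikit-learn on synthetic placeholder data; the window length, feature choices, correlation threshold, and forest size are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of the abstract's pipeline. All window sizes, feature
# names, sampling rates, and thresholds are assumptions, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

def window_features(signal, fs, win_s=5.0):
    """Hand-crafted time- and frequency-domain features per sliding window."""
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        spectrum = np.abs(np.fft.rfft(w - w.mean()))
        feats.append([
            w.mean(), w.std(), w.min(), w.max(),   # time-domain statistics
            np.abs(np.diff(w)).mean(),             # mean absolute derivative
            spectrum.argmax() * fs / win,          # dominant frequency (Hz)
            (spectrum ** 2).sum(),                 # spectral energy
        ])
    return np.asarray(feats)

def drop_correlated(X, threshold=0.9):
    """Naive stand-in for correlation-based feature selection: drop one of
    every feature pair whose absolute correlation exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

# Synthetic placeholder data: 15 subjects, 3 activities, 1 minute each.
rng = np.random.default_rng(0)
fs_acc, fs_ecg = 32, 700                       # assumed sampling rates
X_rows, y_rows, g_rows = [], [], []
for subject in range(15):
    for activity in range(3):
        acc = rng.normal(activity, 1.0, size=60 * fs_acc)
        ecg = rng.normal(0.0, 1.0 + activity, size=60 * fs_ecg)
        f_acc = window_features(acc, fs_acc)
        f_ecg = window_features(ecg, fs_ecg)
        n = min(len(f_acc), len(f_ecg))
        fused = np.hstack([f_acc[:n], f_ecg[:n]])   # early fusion: concatenate
        X_rows.append(fused)
        y_rows.append(np.full(n, activity))
        g_rows.append(np.full(n, subject))
X = np.vstack(X_rows)
y = np.concatenate(y_rows)
groups = np.concatenate(g_rows)

X = X[:, drop_correlated(X)]                   # reduce feature-set dimension

# Subject-independent evaluation: each fold holds out one whole subject.
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train], y[train])
    scores.append(f1_score(y[test], clf.predict(X[test]), average="macro"))
print(f"Mean macro F1 across held-out subjects: {np.mean(scores):.3f}")
```

Leave-one-group-out cross-validation with the subject ID as the group is the standard way to realize a subject-independent setup: the model is always evaluated on a subject it never saw during training, whereas a subject-dependent setup mixes windows from the same subject across the train and test splits.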