Industry 4.0-Oriented Deep Learning Models for Human Activity Recognition


Bibliographic Details
Main Authors: Saeed Mohsen, Ahmed Elkaseer, Steffen G. Scholz
Format: Article
Language: English
Published: IEEE, 2021
Online Access: https://doaj.org/article/bb6dc818c9354e2c892c56a200131b9b
Description

Summary: According to the Industry 4.0 vision, humans in a smart factory should be equipped with formidable and seamless communication capabilities and integrated into a cyber-physical system (CPS) that can be used to monitor and recognize human activity via artificial intelligence (e.g., deep learning). Recent advances in the accuracy of deep learning have contributed significantly to solving human activity recognition problems, but high-performance deep learning models that provide greater accuracy are still needed. In this paper, three models are proposed for the classification of human activities: long short-term memory (LSTM), convolutional neural network (CNN), and a combined CNN-LSTM. These models are applied to a dataset collected from 36 persons engaged in 6 classes of activities: downstairs, jogging, sitting, standing, upstairs, and walking. The proposed models are trained using the TensorFlow framework with hyper-parameter tuning to achieve high accuracy. Experimentally, confusion matrices and receiver operating characteristic (ROC) curves are used to assess the performance of the proposed models. The results show that the hybrid CNN-LSTM model performs better than either LSTM or CNN alone in the classification of human activities. The CNN-LSTM model provides the best performance, with a testing accuracy of 97.76%, followed by the LSTM with a testing accuracy of 96.61%, while the CNN shows the lowest testing accuracy of 94.51%. The testing loss rates for the LSTM, CNN, and CNN-LSTM are 0.236, 0.232, and 0.167, respectively, while the precision, recall, F1-measure, and area under the ROC curve (AUC) for the CNN-LSTM are 97.75%, 97.77%, 97.76%, and 100%, respectively.
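The hybrid architecture described in the abstract — a CNN front end feeding an LSTM — can be sketched in TensorFlow/Keras roughly as follows. This is a minimal illustrative sketch, not the authors' exact configuration: the window length, layer sizes, and tri-axial accelerometer input are assumptions; only the 6-class output and the CNN-LSTM ordering come from the abstract.

```python
# Hypothetical sketch of a CNN-LSTM classifier for human activity
# recognition, in the spirit of the model described in the abstract.
# All hyper-parameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_TIMESTEPS = 80  # assumed sliding-window length (sensor samples)
N_FEATURES = 3    # assumed tri-axial accelerometer channels (x, y, z)
N_CLASSES = 6     # downstairs, jogging, sitting, standing, upstairs, walking

def build_cnn_lstm() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
        # CNN stage: extract local motion features from the raw signal
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # LSTM stage: model temporal dependencies across the window
        layers.LSTM(64),
        layers.Dropout(0.5),
        # Softmax over the 6 activity classes
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
```

Training would then use windowed sensor data shaped `(n_windows, 80, 3)` with integer class labels, e.g. `model.fit(x_train, y_train, epochs=..., validation_split=0.2)`.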