Evaluating Convolutional Neural Networks as a Method of EEG–EMG Fusion

Wearable robotic exoskeletons have emerged as an exciting new treatment tool for disorders affecting mobility; however, the human–machine interface, used by the patient for device control, requires further improvement before robotic assistance and rehabilitation can be widely adopted. One method, made possible through advancements in machine learning technology, is the use of bioelectrical signals, such as electroencephalography (EEG) and electromyography (EMG), to classify the user's actions and intentions. While classification using these signals has been demonstrated for many relevant control tasks, such as motion intention detection and gesture recognition, challenges in decoding the bioelectrical signals have caused researchers to seek methods for improving the accuracy of these models. One such method is the use of EEG–EMG fusion, creating a classification model that decodes information from both EEG and EMG signals simultaneously to increase the amount of available information. So far, EEG–EMG fusion has been implemented using traditional machine learning methods that rely on manual feature extraction; however, new machine learning methods have emerged that can automatically extract relevant information from a dataset, which may prove beneficial during EEG–EMG fusion. In this study, Convolutional Neural Network (CNN) models were developed using combined EEG–EMG inputs to determine if they have potential as a method of EEG–EMG fusion that automatically extracts relevant information from both signals simultaneously. EEG and EMG signals were recorded during elbow flexion–extension and used to develop CNN models based on time–frequency (spectrogram) and time (filtered signal) domain image inputs. The results show a mean accuracy of 80.51 ± 8.07% for a three-class output (33.33% chance level), with an F-score of 80.74%, using time–frequency domain-based models. This work demonstrates the viability of CNNs as a new method of EEG–EMG fusion and evaluates different signal representations to determine the best implementation of a combined EEG–EMG CNN. It leverages modern machine learning methods to advance EEG–EMG fusion, which will ultimately lead to improvements in the usability of wearable robotic exoskeletons.
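The time–frequency (spectrogram) image inputs described above can be illustrated with a minimal sketch. This is not the authors' exact pipeline; the sampling rate, window lengths, and synthetic signals below are assumed values chosen purely for demonstration of how per-modality spectrograms might be stacked into one combined EEG–EMG input.

```python
# Illustrative sketch (assumed parameters, synthetic signals): build a
# combined EEG-EMG time-frequency "image" by stacking one log-power
# spectrogram per modality along a channel axis, the way color channels
# are stacked in an image-based CNN input.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                        # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)    # 2 s of signal

# Synthetic stand-ins for the recorded signals (the study used elbow
# flexion-extension recordings).
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
emg = rng.standard_normal(t.size) * np.hanning(t.size)

def tf_image(signal, fs, nperseg=128, noverlap=64):
    """Return a log-power spectrogram as a 2D array (freq x time)."""
    _, _, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log10(Sxx + 1e-12)  # small offset avoids log(0)

# Stack the two modalities so a CNN filter sees both at once.
combined = np.stack([tf_image(eeg, fs), tf_image(emg, fs)], axis=0)
print(combined.shape)  # (2 modalities, n_freq_bins, n_time_frames)
```

With the assumed values, each spectrogram has `nperseg // 2 + 1 = 65` frequency bins, and the combined array has a leading axis of length 2 for the two modalities.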

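To make the fusion idea concrete, here is a toy forward pass of a CNN-style classifier over such a two-channel image. Because the convolutional filters span both input channels, each learned filter can combine EEG and EMG information jointly before classification. The architecture, shapes, and random weights below are arbitrary illustrative choices, not the models evaluated in the study.

```python
# Toy CNN-style forward pass over a 2-channel (EEG + EMG) image:
# convolution across both channels, ReLU, global average pooling, and a
# softmax over three output classes. Weights are random; this only
# illustrates the data flow of a combined EEG-EMG CNN.
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((2, 65, 30))          # (channels, freq, time)

n_filters = 4
W = rng.standard_normal((n_filters, 2, 3, 3)) * 0.1  # filters span both channels

def conv2d(x, w):
    """Valid cross-correlation summed over all input channels."""
    _, h, wd = x.shape
    n, _, kh, kw = w.shape
    out = np.zeros((n, h - kh + 1, wd - kw + 1))
    for f in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[f])
    return out

feat = np.maximum(conv2d(image, W), 0)            # ReLU activation
pooled = feat.mean(axis=(1, 2))                   # global average pooling
Wfc = rng.standard_normal((3, n_filters)) * 0.1   # fully connected, 3 classes
logits = Wfc @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax: class probabilities
```

Against the 33.33% chance level quoted in the abstract, an untrained model like this one would perform at chance; the study's trained time–frequency models reached 80.51 ± 8.07% on the three-class task.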

Bibliographic Details
Main Authors: Jacob Tryon, Ana Luisa Trejos
Format: article
Language: EN
Published: Frontiers Media S.A., 2021
Subjects: convolutional neural networks; EEG signals; EMG signals; human–machine interfaces; sensor fusion; Neurosciences. Biological psychiatry. Neuropsychiatry (RC321-571)
Online Access: https://doaj.org/article/9f82f98056e54f37a83739890c551b55
Journal: Frontiers in Neurorobotics, Vol 15 (2021)
ISSN: 1662-5218
DOI: 10.3389/fnbot.2021.692183
Full text: https://www.frontiersin.org/articles/10.3389/fnbot.2021.692183/full