Automatic diagnosis of COVID-19 disease using deep convolutional neural network with multi-feature channel from respiratory sound data: Cough, voice, and breath

Bibliographic Details
Main Authors: Kranthi Kumar Lella, Alphonse Pja
Format: Article
Language: EN
Published: Elsevier, 2022
Online Access: https://doaj.org/article/9c9968a2b5af41ddb9e6ef3a2fe26217
Description
Summary: The problem of respiratory sound classification has received considerable attention from clinical scientists and the medical research community over the past year for the diagnosis of COVID-19. Artificial Intelligence (AI) based models have been deployed in the real world to identify COVID-19 from human-generated sounds such as voice/speech, dry cough, and breath. The Convolutional Neural Network (CNN) is used to solve many real-world problems with AI-based machines. We have proposed and implemented a multi-channeled Deep Convolutional Neural Network (DCNN) for the automatic diagnosis of COVID-19 from human respiratory sounds such as voice, dry cough, and breath, and it gives better accuracy and performance than previous models. We apply multiple feature channels, namely the data De-noising Auto Encoder (DAE) technique, GFCC (Gamma-tone Frequency Cepstral Coefficients), and IMFCC (Improved Multi-frequency Cepstral Coefficients), on augmented data to extract the deep features used as input to the CNN. The proposed approach improves system performance for the diagnosis of COVID-19 and provides better results on the COVID-19 respiratory sound dataset.
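
The record above describes the approach only at a high level. Purely as an illustration, the sketch below shows what a three-branch, multi-feature-channel CNN over DAE/GFCC/IMFCC-style inputs could look like; the framework (PyTorch), the layer sizes, and the class name MultiChannelDCNN are assumptions for this sketch and are not taken from the paper.

# Minimal sketch of a multi-feature-channel CNN for respiratory-sound
# classification. All layer sizes, names, and the choice of PyTorch are
# illustrative assumptions; the paper's actual architecture is not
# specified in this record.
import torch
import torch.nn as nn


class MultiChannelDCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # One convolutional branch per feature channel
        # (e.g. DAE-denoised spectrogram, GFCC, IMFCC).
        self.branches = nn.ModuleList([self._make_branch() for _ in range(3)])
        self.classifier = nn.Sequential(
            nn.Linear(3 * 32, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    @staticmethod
    def _make_branch() -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
            nn.Flatten(),             # -> (batch, 32)
        )

    def forward(self, dae_feat, gfcc_feat, imfcc_feat):
        # Each input: (batch, 1, freq_bins, time_frames)
        feats = [branch(x) for branch, x in
                 zip(self.branches, (dae_feat, gfcc_feat, imfcc_feat))]
        # Concatenate the three feature streams and classify.
        return self.classifier(torch.cat(feats, dim=1))


# Example usage: three hypothetical 64x128 feature maps per recording,
# one per feature channel, for a batch of 4 sound samples.
model = MultiChannelDCNN()
inputs = [torch.randn(4, 1, 64, 128) for _ in range(3)]
logits = model(*inputs)  # shape: (4, 2)

How the feature extraction, data augmentation, and training are actually performed is detailed in the article itself; this sketch only illustrates the idea of feeding several independently extracted acoustic feature channels into one network.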