Performance and Efficiency Evaluation of ASR Inference on the Edge

Automatic speech recognition (ASR), the process of converting speech signals to text, has improved a great deal in the past decade thanks to deep learning based systems. With the latest transformer based models, recognition accuracy, measured as word error rate (WER), is now below the human annotator error rate (4%). However, most of these advanced models run on large servers with substantial memory and CPU/GPU resources, and they have a huge carbon footprint. This server based ASR architecture is not viable in the long run given the inherent lack of privacy for user data and the reliability and latency issues of the network connection. On-device ASR (that is, speech-to-text conversion on the edge device itself) fixes these deep-rooted privacy issues while at the same time being more reliable and performant, since no network connection to a back-end server is needed. On-device ASR can also lead to a more sustainable solution by weighing the energy vs. accuracy trade-off and choosing the right model for the specific use cases and applications of the product. Hence, in this paper we evaluate the energy-accuracy trade-off of a typical transformer based speech recognition model on an edge device. We ran evaluations on a Raspberry Pi, using an off-the-shelf USB power meter to measure energy consumption. We conclude that, for CPU based ASR inference, energy consumption grows exponentially while the word error rate improves only linearly. Additionally, our experiments show that, with PyTorch mobile optimization and quantization, a typical transformer based ASR model on the edge performs reasonably well in terms of accuracy and latency, coming close to the accuracy of server based inference.
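
The abstract attributes much of the on-device viability to PyTorch mobile optimization and quantization. As a rough sketch of what such an export path can look like, the Python snippet below applies dynamic int8 quantization and the PyTorch mobile optimizer to a pretrained transformer ASR model; the torchaudio wav2vec 2.0 bundle, the file names, and the sample clip are assumptions made for illustration and may differ from the exact models and settings used in the paper.

# Minimal sketch: dynamic int8 quantization + PyTorch mobile optimization for a
# transformer ASR model. The wav2vec 2.0 bundle, file names, and sample clip are
# illustrative assumptions, not necessarily the setup evaluated in the paper.
import torch
import torchaudio
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load a pretrained transformer ASR model (assumed here: wav2vec 2.0 base, LibriSpeech 960 h).
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model().eval()

# Dynamic quantization converts the weights of the linear layers (the bulk of a
# transformer's parameters) to int8; activations stay in floating point.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# TorchScript the quantized model, apply mobile-specific graph optimizations,
# and save it for the PyTorch Mobile / lite interpreter runtime on the edge device.
scripted = torch.jit.script(quantized)
mobile_ready = optimize_for_mobile(scripted)
mobile_ready._save_for_lite_interpreter("asr_quantized.ptl")  # hypothetical file name

# Quick sanity check on the host: run one short clip through the quantized model.
waveform, sample_rate = torchaudio.load("sample.wav")  # hypothetical mono clip
waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
with torch.inference_mode():
    emissions, _ = quantized(waveform)  # frame-level logits to be decoded into text

On CPU-only devices such as the Raspberry Pi, this kind of weight-only int8 quantization typically shrinks the model to roughly a quarter of its size and reduces inference latency at a modest cost in WER, which is the energy/latency vs. accuracy trade-off the paper quantifies.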

Bibliographic Details
Main Authors: Santosh Gondi, Vineel Pratap
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects: automatic speech recognition; ASR; edge inference; Raspberry Pi; transformers; PyTorch; Environmental effects of industries and plants (TD194-195); Renewable energy sources (TJ807-830); Environmental sciences (GE1-350)
Online access: https://doaj.org/article/588485d4c2fa48e7833512d9c2f772e6
DOI: 10.3390/su132212392
ISSN: 2071-1050
Full text: https://www.mdpi.com/2071-1050/13/22/12392
Journal table of contents: https://doaj.org/toc/2071-1050
Published in: Sustainability, Vol. 13, Iss. 22, Article 12392 (2021)
Publication date: 2021-11-01
Source: DOAJ