EERA-KWS: A 163 TOPS/W Always-on Keyword Spotting Accelerator in 28nm CMOS Using Binary Weight Network and Precision Self-Adaptive Approximate Computing

Bibliographic Details
Main Authors: Bo Liu, Zhen Wang, Hu Fan, Jing Yang, Wentao Zhu, Lepeng Huang, Yu Gong, Wei Ge, Longxing Shi
Format: article
Language: EN
Published: IEEE, 2019
Online Access: https://doaj.org/article/08b9754fd8cb4ab6905f47947b201669
Description
Summary: This paper presents an energy-efficient reconfigurable accelerator for keyword spotting (EERA-KWS) based on a binary weight network (BWN) and fabricated in 28-nm CMOS technology. The keyword spotting system consists of two parts: feature extraction based on mel-scale frequency cepstral coefficients (MFCC) and keyword classification based on a BWN model, which is trained on Google's Speech Commands dataset and deployed on our custom accelerator. To reduce power consumption while maintaining the system's recognition accuracy, we first optimize the MFCC implementation with approximate computing techniques, including pre-emphasis coefficient transformation, rectangular Mel filtering, framing, and FFT optimization. Then, we propose a precision self-adaptive reconfigurable accelerator with digital-analog mixed approximate computing units to process the BWN efficiently. Based on an SNR prediction of the background noise and a post-detection of the network output confidence, the BWN accelerator datapath can be dynamically and adaptively reconfigured to 4, 8, or 16 bits. For the BWN accelerator, we propose a time-delay-based addition unit that performs bit-wise approximate computing for the convolution and fully connected layers, and a LUT-based unit for the activation layers. Implemented in TSMC 28 nm HPC+ process technology, the estimated power is $77.8~\mu\text{W}$ to $115.9~\mu\text{W}$, and the energy efficiency reaches 163 TOPS/W, over $1.8\times$ better than the state-of-the-art architecture.
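To make the MFCC-stage approximations concrete, here is a minimal Python sketch of two of the techniques the abstract names: a pre-emphasis coefficient transformed into a power-of-two-friendly value (so the multiply becomes a shift and subtract) and rectangular Mel filtering in place of the usual triangular filters. The coefficient 0.96875 and the filter-bank layout are illustrative assumptions, not values taken from the paper.

import numpy as np

def pre_emphasis(signal, coeff=0.96875):
    # 0.96875 = 1 - 2**-5, a power-of-two-friendly stand-in for the
    # conventional 0.97: hardware can realize it as shift + subtract.
    return np.append(signal[0], signal[1:] - coeff * signal[:-1])

def rectangular_mel_filterbank(n_filters, n_fft, sample_rate):
    # Rectangular (0/1) filters instead of triangular ones: every FFT
    # bin in a band is accumulated as-is, removing per-bin multiplies.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = inv_mel(np.linspace(0.0, mel(sample_rate / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        fbank[i, bins[i]:bins[i + 2] + 1] = 1.0  # flat passband
    return fbank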
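The precision self-adaptation can be read as a small control policy: an SNR predictor for the background noise and the confidence of the most recent network output jointly select the datapath width. The sketch below is a hypothetical rendering of that policy; the threshold values are invented for illustration and do not come from the paper.

def select_bitwidth(predicted_snr_db, last_confidence):
    # Clean input and a confident network allow the cheapest 4-bit mode;
    # noisy or uncertain conditions fall back to the full 16-bit path.
    if predicted_snr_db > 20.0 and last_confidence > 0.9:
        return 4
    if predicted_snr_db > 10.0 and last_confidence > 0.7:
        return 8
    return 16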
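Because a BWN constrains weights to {-1, +1}, each multiply-accumulate in the convolution and fully connected layers degenerates to a signed addition, which is why an addition unit (realized in the paper with time-delay circuits) suffices. Below is a functional Python equivalent of one fully connected BWN layer, ignoring the analog implementation details.

import numpy as np

def bwn_dense(x, w_sign, bias):
    # w_sign contains only -1/+1 entries, so x * w_sign is a sign flip
    # and the whole layer reduces to additions and subtractions.
    return (w_sign * x[None, :]).sum(axis=1) + bias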