Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations
In a CNN (convolutional neural network) accelerator, there is a need to exploit the sparsity of activation values in order to reduce memory traffic and power consumption. Therefore, some research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Different from previous...
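The abstract is truncated here, but the core idea it names, compressing sparse activations in fixed-size blocks so that multiplications by zero can be skipped, can be illustrated with a minimal software sketch. The block size, bitmap encoding, and function names below are illustrative assumptions, not the compression format or hardware circuit proposed in the paper:

```python
import numpy as np

def compress_blocks(activations, block_size=8):
    """Split a 1-D activation vector into fixed-size blocks and store each
    block as (packed nonzero bitmap, packed nonzero values). An all-zero
    block costs only its one-byte bitmap. Illustrative format only."""
    blocks = []
    for start in range(0, len(activations), block_size):
        block = activations[start:start + block_size]
        mask = block != 0
        blocks.append((np.packbits(mask), block[mask]))
    return blocks

def sparse_dot(blocks, weights, block_size=8):
    """Multiply-accumulate that skips ineffectual computations: only the
    packed nonzero activations ever reach the multiplier."""
    acc = 0.0
    for i, (bitmap, values) in enumerate(blocks):
        mask = np.unpackbits(bitmap)[:block_size].astype(bool)
        idx = np.flatnonzero(mask) + i * block_size  # original positions
        acc += np.dot(values, weights[idx])
    return acc

# The zero-skipping result matches the dense dot product.
x = np.array([0, 0, 3.5, 0, 0, 0, 1.2, 0, 0, 0, 0, 0, 0, 0, 0, 0.7])
w = np.arange(16, dtype=float)
assert np.isclose(sparse_dot(compress_blocks(x), w), np.dot(x, w))
```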
| Main Authors: | Yui-Kai Weng, Shih-Hsu Huang, Hsu-Yu Kao |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | MDPI AG, 2021 |
| Online Access: | https://doaj.org/article/5cf53a5820af40cca4e6ee6645d2bf4d |
Similar Items
- Learned Image Compression With Separate Hyperprior Decoders
  by: Zhao Zan, et al.
  Published: (2021)
- Design of universal convolutional layer IP core based on FPGA
  by: Guochen AN, et al.
  Published: (2021)
- Evaluation of Deep Neural Network Compression Methods for Edge Devices Using Weighted Score-Based Ranking Scheme
  by: Olutosin Ajibola Ademola, et al.
  Published: (2021)
- Sparse and dense matrix multiplication hardware for heterogeneous multi-precision neural networks
  by: Jose Nunez-Yanez, et al.
  Published: (2021)
- Power Efficient Tiny Yolo CNN Using Reduced Hardware Resources Based on Booth Multiplier and WALLACE Tree Adders
  by: Fasih Ud Din Farrukh, et al.
  Published: (2020)