Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations
In a CNN (convolutional neural network) accelerator, exploiting the sparsity of activation values reduces memory traffic and power consumption. Accordingly, several research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Different from previous...
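The record's abstract is truncated, but the zero-skipping idea it names can be illustrated with a generic block-based bitmap encoding. The sketch below is an assumption, not the paper's actual circuit or storage format: the block size `BLOCK` and the function names `compress_blocks` / `decompress_blocks` are chosen purely for illustration.

```python
import numpy as np

BLOCK = 8  # hypothetical block size; the paper's actual parameter is not shown in this record

def compress_blocks(act):
    """Pack a 1-D activation vector into per-block (bitmap, nonzero values) pairs.

    Each block keeps an 8-bit mask marking nonzero positions and stores only
    the nonzero values, so an all-zero block costs a single mask byte.
    """
    assert act.size % BLOCK == 0, "pad the activation vector to a multiple of BLOCK"
    blocks = []
    for i in range(0, act.size, BLOCK):
        chunk = act[i:i + BLOCK]
        mask, vals = 0, []
        for j, v in enumerate(chunk):
            if v != 0:
                mask |= 1 << j  # flag position j as nonzero
                vals.append(v)
        blocks.append((mask, vals))
    return blocks

def decompress_blocks(blocks, dtype=np.float32):
    """Expand (bitmap, values) pairs back into a dense activation vector."""
    out = np.zeros(len(blocks) * BLOCK, dtype=dtype)
    for b, (mask, vals) in enumerate(blocks):
        k = 0
        for j in range(BLOCK):
            if mask & (1 << j):
                out[b * BLOCK + j] = vals[k]
                k += 1
    return out

# ReLU outputs are typically sparse, so many blocks shrink to a mask alone.
act = np.maximum(np.array([-1, -2, 1.5, -3, -4, -5, 2.0, -6,
                           -1, -1, -1, -1, -1, -1, -1, -1], dtype=np.float32), 0)
packed = compress_blocks(act)
assert np.array_equal(decompress_blocks(packed), act)
```

In a hardware datapath the same per-block mask could be read to gate multipliers, which is how an accelerator skips the multiplications by zero mentioned above; the actual circuits proposed by the paper are not detailed in this record.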
Main Authors: Yui-Kai Weng, Shih-Hsu Huang, Hsu-Yu Kao
Format: Article
Language: English
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/5cf53a5820af40cca4e6ee6645d2bf4d
Similar Items
- Learned Image Compression With Separate Hyperprior Decoders
  by: Zhao Zan, et al.
  Published: (2021)
- Design of universal convolutional layer IP core based on FPGA
  by: Guochen AN, et al.
  Published: (2021)
- Evaluation of Deep Neural Network Compression Methods for Edge Devices Using Weighted Score-Based Ranking Scheme
  by: Olutosin Ajibola Ademola, et al.
  Published: (2021)
- Sparse and dense matrix multiplication hardware for heterogeneous multi-precision neural networks
  by: Jose Nunez-Yanez, et al.
  Published: (2021)
- Power Efficient Tiny Yolo CNN Using Reduced Hardware Resources Based on Booth Multiplier and WALLACE Tree Adders
  by: Fasih Ud Din Farrukh, et al.
  Published: (2020)