Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations

In a CNN (convolutional neural network) accelerator, exploiting the sparsity of activation values is essential for reducing memory traffic and power consumption, and several research efforts have therefore been devoted to skipping ineffectual computations (i.e., multiplications by zero). Unlike previous works, this paper also points out the similarity of activation values: (1) within the same layer of a CNN model, most feature maps are either highly dense or highly sparse; (2) within the same layer, feature maps in different channels are often similar. Based on these two observations, we propose a block-based compression approach that exploits both the sparsity and the similarity of activation values to further reduce the data volume. We also design an encoder, a decoder, and an indexing module to support the proposed approach: the encoder translates output activations into the proposed block-based compression format, while the decoder and the indexing module align nonzero values for effectual computations. Compared with previous works, benchmark data consistently show that the proposed approach greatly reduces both memory traffic and power consumption.
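The abstract does not spell out the exact encoding, but the general idea of a block-based format, storing each block of activations as a nonzero bitmap plus the packed nonzero values, can be illustrated with a short sketch. The block length, the bitmap layout, and the NumPy helpers below are illustrative assumptions only, not the authors' circuit-level format, and the cross-channel similarity exploitation described in the paper is not modeled here.

import numpy as np

BLOCK = 8  # assumed block length; the paper's actual block size is not given in the abstract

def encode_block(block):
    # One presence bit per element, packed into bytes, plus the nonzero values.
    mask = block != 0
    bitmap = np.packbits(mask)
    values = block[mask]
    return bitmap, values

def decode_block(bitmap, values, length=BLOCK):
    # Rebuild the dense block: unpack the bitmap and scatter the nonzero values
    # back to their positions (the role played in hardware by the decoder and
    # the indexing module, which align nonzero values for effectual computations).
    mask = np.unpackbits(bitmap)[:length].astype(bool)
    block = np.zeros(length, dtype=values.dtype)
    block[mask] = values
    return block

# Example: a mostly zero block is stored as a 1-byte bitmap plus two values
# instead of eight raw activations.
act = np.array([0, 0, 3, 0, 0, 0, 7, 0], dtype=np.int8)
bitmap, vals = encode_block(act)
assert np.array_equal(decode_block(bitmap, vals), act)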

Bibliographic Details
Main Authors: Yui-Kai Weng, Shih-Hsu Huang, Hsu-Yu Kao
Format: Article
Language: English
Published: MDPI AG, 2021
Published in: Sensors, Vol 21, Iss 22, p 7468 (2021)
DOI: 10.3390/s21227468
ISSN: 1424-8220
Subjects: compression formats; convolutional neural networks; data volume; digital circuits; edge computing; logic design; Chemical technology (TP1-1185)
Online Access: https://doaj.org/article/5cf53a5820af40cca4e6ee6645d2bf4d
Article URL: https://www.mdpi.com/1424-8220/21/22/7468