Mixed-precision weights network for field-programmable gate array.

In this study, we introduced a mixed-precision weights network (MPWN), a quantized neural network that jointly utilizes three weight spaces: binary {-1,1}, ternary {-1,0,1}, and 32-bit floating-point. We developed the MPWN from both the software and hardware perspectives. From the software perspective, we evaluated the MPWN on the Fashion-MNIST and CIFAR10 datasets. We formulated the accuracy-sparsity-bit score, a linear combination of accuracy, sparsity, and the number of bits; this score allows Bayesian optimization to search efficiently for MPWN weight-space combinations. From the hardware perspective, we proposed XOR signed-bits to handle the floating-point and binary weight spaces in the MPWN. XOR signed-bits is an efficient bitwise implementation equivalent to multiplying a floating-point value by a binary weight. Using the same concept, we also provide a ternary bitwise operation that is an efficient implementation equivalent to multiplying a floating-point value by a ternary weight. To demonstrate the compatibility of the MPWN with hardware implementation, we synthesized and implemented the MPWN on a field-programmable gate array using high-level synthesis. Our proposed MPWN implementation used 1.68 to 4.89 times fewer hardware resources, depending on the resource type, than a conventional 32-bit floating-point model. In addition, it reduced latency by up to 31.55 times compared with the 32-bit floating-point model without optimizations.
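
The abstract describes XOR signed-bits as a bitwise replacement for multiplying a 32-bit floating-point value by a binary weight in {-1,1}, and a related ternary operation for weights in {-1,0,1}. The sketch below illustrates the idea in Python; the function names, the 0/1 encoding of the weight bits, and the two-bit ternary encoding are assumptions for illustration, not the authors' HLS implementation.

```python
import struct

def xor_signed_bit(x: float, w_bit: int) -> float:
    """Multiply float32 x by a binary weight in {-1, +1} without a multiplier.

    Assumed encoding: w_bit = 0 means +1, w_bit = 1 means -1.
    The product is formed by XOR-ing w_bit into the IEEE-754 sign bit of x.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # reinterpret float32 as uint32
    bits ^= (w_bit & 1) << 31                            # flip the sign bit iff the weight is -1
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def ternary_bitwise_mul(x: float, is_zero: int, sign_bit: int) -> float:
    """Multiply float32 x by a ternary weight in {-1, 0, +1} with bitwise operations only.

    Assumed two-bit encoding: is_zero = 1 selects the weight 0; otherwise
    sign_bit selects +1 (0) or -1 (1), exactly as in xor_signed_bit.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    mask = 0x00000000 if is_zero else 0xFFFFFFFF     # AND-mask the word to +0.0 when the weight is 0
    bits = (bits ^ ((sign_bit & 1) << 31)) & mask    # apply the sign as in the binary case, then the mask
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# x * (+1), x * (-1), and x * 0 without any floating-point multiplication
assert xor_signed_bit(2.5, 0) == 2.5
assert xor_signed_bit(2.5, 1) == -2.5
assert ternary_bitwise_mul(2.5, 1, 0) == 0.0
```

The abstract also defines the accuracy-sparsity-bit score as a linear combination of accuracy, sparsity, and the number of bits, used as the objective that Bayesian optimization maximizes when searching over weight-space combinations. A minimal sketch follows; the coefficients alpha, beta, and gamma are purely illustrative, as the abstract does not give the paper's actual weighting.

```python
def accuracy_sparsity_bit_score(accuracy, sparsity, num_bits,
                                alpha=1.0, beta=0.1, gamma=0.01):
    """Linear combination of accuracy, sparsity, and bit count.

    alpha, beta, gamma are hypothetical coefficients: higher accuracy and
    sparsity raise the score, while more bits lower it, so a Bayesian
    optimizer maximizing this score favors compact, sparse, accurate
    weight-space assignments.
    """
    return alpha * accuracy + beta * sparsity - gamma * num_bits
```
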

Bibliographic Details
Main Authors: Ninnart Fuengfusin, Hakaru Tamukoh
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2021
Subjects:
Medicine (R)
Science (Q)
Online Access: https://doaj.org/article/8d7be8b03dd34b0d98af4d0dbd817acd
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0251329 (https://doi.org/10.1371/journal.pone.0251329)
Published in: PLoS ONE, Vol 16, Iss 5, p e0251329 (2021)