Neural Architecture Search and Hardware Accelerator Co-Search: A Survey

Deep neural networks (DNNs) now dominate the most challenging applications of machine learning. Because DNNs can have complex architectures with millions of trainable parameters (the so-called weights), their design and training are difficult even for highly qualified experts. To reduce this human effort, neural architecture search (NAS) methods have been developed to automate the entire design process. NAS methods typically combine a search over the space of candidate architectures with optimization (learning) of the weights using a gradient method. In this paper, we survey the key elements of NAS methods that, to various extents, consider the hardware implementation of the resulting DNNs. We classify these methods into three major classes: single-objective NAS (no hardware is considered), hardware-aware NAS (the DNN is optimized for a particular hardware platform), and NAS with hardware co-optimization (the hardware is directly co-optimized with the DNN as part of NAS). Compared to previous surveys, we emphasize the multi-objective design approach that must be adopted in NAS and focus on co-design algorithms developed for the concurrent optimization of DNN architectures and hardware platforms. As most research in this area deals with NAS for image classification using convolutional neural networks, we follow this trajectory in our paper. After reading the paper, the reader should understand why and how NAS and hardware co-optimization are currently used to build cutting-edge implementations of DNNs.
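To make the search procedure described in the abstract concrete, the following minimal Python sketch illustrates multi-objective, hardware-aware NAS. Everything in it is a hypothetical placeholder, not the survey's method: the toy search space, the proxy accuracy function (standing in for gradient-based weight training plus validation), and the analytical latency model (standing in for a hardware cost model). It performs a random search over candidate architectures and keeps the Pareto front over the two objectives.

# Illustrative sketch only: toy search space and placeholder objective
# functions, not an algorithm from the surveyed literature.
import random

# Hypothetical search space: depth, width, and kernel size of a small CNN.
SEARCH_SPACE = {
    "depth": [4, 8, 12, 16],
    "width": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    """Draw one candidate architecture uniformly from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_accuracy(arch):
    """Stand-in for training the candidate's weights with a gradient
    method and measuring validation accuracy; a saturating toy curve."""
    capacity = arch["depth"] * arch["width"]
    return 1.0 - 1.0 / (1.0 + 0.002 * capacity)

def estimated_latency_ms(arch):
    """Stand-in for a hardware cost model (lookup table, analytical
    model, or on-device measurement) for a target platform."""
    return 0.05 * arch["depth"] * arch["width"] * arch["kernel"] ** 2 / 9.0

def dominates(a, b):
    """Pareto dominance: a is no worse on both objectives and strictly
    better on at least one (maximize accuracy, minimize latency)."""
    return (a["acc"] >= b["acc"] and a["lat"] <= b["lat"]
            and (a["acc"] > b["acc"] or a["lat"] < b["lat"]))

def nas_random_search(num_samples=200, seed=0):
    """Multi-objective random search that keeps the Pareto front."""
    random.seed(seed)
    front = []
    for _ in range(num_samples):
        arch = sample_architecture()
        cand = {"arch": arch,
                "acc": proxy_accuracy(arch),
                "lat": estimated_latency_ms(arch)}
        if any(dominates(p, cand) for p in front):
            continue  # an existing candidate is better on both objectives
        front = [p for p in front if not dominates(cand, p)] + [cand]
    return sorted(front, key=lambda p: p["lat"])

if __name__ == "__main__":
    for p in nas_random_search():
        print(f"acc={p['acc']:.3f}  lat={p['lat']:6.2f} ms  {p['arch']}")

In a real system, the two proxy functions are the expensive parts: candidate weights are learned with a gradient method, and latency comes from a cost model or on-device measurement, which is where the target hardware platform enters the loop in hardware-aware NAS.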

Bibliographic Details
Main Author: Lukas Sekanina
Format: Article
Language: English
Published: IEEE, 2021
Subjects: Automated design; classification; co-design; deep neural network; hardware accelerator; neural architecture search; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access: https://doaj.org/article/c44f1794327f4d478272a0619e4c1a9d
Published in: IEEE Access, Vol 9, Pp 151337-151362 (2021)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3126685
Full Text: https://ieeexplore.ieee.org/document/9606893/