Neural Architecture Search and Hardware Accelerator Co-Search: A Survey
Deep neural networks (DNNs) now dominate the most challenging applications of machine learning. Because DNNs can have complex architectures with millions of trainable parameters (the so-called weights), their design and training are difficult even for highly qualified experts. In order to reduce...
| Main Author: | Lukas Sekanina |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | IEEE, 2021 |
| Online Access: | https://doaj.org/article/c44f1794327f4d478272a0619e4c1a9d |
Similar Items

- Multi-Branch Neural Architecture Search for Lightweight Image Super-Resolution, by Joon Young Ahn, et al. (2021)
- Developing Language-Specific Models Using a Neural Architecture Search, by YongSuk Yoo, et al. (2021)
- Power Efficient Design of High-Performance Convolutional Neural Networks Hardware Accelerator on FPGA: A Case Study With GoogLeNet, by Ahmed J. Abd El-Maksoud, et al. (2021)
- Utilization of Unsigned Inputs for NAND Flash-Based Parallel and High-Density Synaptic Architecture in Binary Neural Networks, by Sung-Tae Lee, et al. (2021)
- Sparse and dense matrix multiplication hardware for heterogeneous multi-precision neural networks, by Jose Nunez-Yanez, et al. (2021)