Neural Architecture Search and Hardware Accelerator Co-Search: A Survey
Deep neural networks (DNNs) now dominate the most challenging applications of machine learning. Because DNNs can have complex architectures with millions of trainable parameters (so-called weights), their design and training are difficult even for highly qualified experts. In order to reduce...
Saved in:
| Main Author | Lukas Sekanina |
|---|---|
| Format | article |
| Language | EN |
| Published | IEEE, 2021 |
| Online Access | https://doaj.org/article/c44f1794327f4d478272a0619e4c1a9d |
Similar Items
- Multi-Branch Neural Architecture Search for Lightweight Image Super-Resolution
  by: Joon Young Ahn, et al.
  Published: (2021)
- Developing Language-Specific Models Using a Neural Architecture Search
  by: YongSuk Yoo, et al.
  Published: (2021)
- Power Efficient Design of High-Performance Convolutional Neural Networks Hardware Accelerator on FPGA: A Case Study With GoogLeNet
  by: Ahmed J. Abd El-Maksoud, et al.
  Published: (2021)
- Utilization of Unsigned Inputs for NAND Flash-Based Parallel and High-Density Synaptic Architecture in Binary Neural Networks
  by: Sung-Tae Lee, et al.
  Published: (2021)
- Sparse and dense matrix multiplication hardware for heterogeneous multi-precision neural networks
  by: Jose Nunez-Yanez, et al.
  Published: (2021)