Model compression and simplification pipelines for fast deep neural network inference in FPGAs in HEP

Abstract: Resource utilization plays a crucial role in the successful implementation of fast real-time inference for deep neural networks (DNNs) and convolutional neural networks (CNNs) on the latest generation of hardware accelerators (FPGAs, SoCs, ACAPs, GPUs). To fulfil the needs of the triggers that are...
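The abstract names model compression as the route to resource-efficient FPGA inference but is truncated before any pipeline details. As a rough illustration of one common compression step in such pipelines, magnitude-based weight pruning, here is a minimal NumPy sketch; the function name, sparsity target, and layer shape are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: magnitude-based weight pruning, a common
# model-compression step. Names and the sparsity target are assumptions,
# not details from the paper itself.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a random 64x32 dense layer to roughly 80% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32)).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.8)
print(f"achieved sparsity: {np.mean(w_pruned == 0):.2f}")
```

In practice, pruning like this is typically followed by fine-tuning and fixed-point quantization before the network is synthesized for an FPGA.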


Bibliographic Details
Main Authors: Simone Francescato, Stefano Giagu, Federica Riti, Graziella Russo, Luigi Sabetta, Federico Tortonesi
Format: article
Language: EN
Published: SpringerOpen 2021
Online Access: https://doaj.org/article/39c990620026419e9435862b94fc5b24
