Model compression and simplification pipelines for fast deep neural network inference in FPGAs in HEP
Abstract: Resource utilization plays a crucial role in the successful implementation of fast real-time inference for deep neural networks (DNNs) and convolutional neural networks (CNNs) on the latest generation of hardware accelerators (FPGAs, SoCs, ACAPs, GPUs). To fulfil the needs of the triggers that are...
Main Authors: Simone Francescato, Stefano Giagu, Federica Riti, Graziella Russo, Luigi Sabetta, Federico Tortonesi
Format: article
Language: English
Published: SpringerOpen, 2021
Online Access: https://doaj.org/article/39c990620026419e9435862b94fc5b24
Similar Items
- Erratum to: Model compression and simplification pipelines for fast deep neural network inference in FPGAs in HEP
  by: Simone Francescato, et al.
  Published: (2021)
- Search for R-parity-violating supersymmetry in a final state containing leptons and many jets with the ATLAS experiment using $\sqrt{s} = 13\text{ TeV}$ proton–proton collision data
  by: G. Aad, et al.
  Published: (2021)
- Generalized compact star models with conformal symmetry
  by: J. W. Jape, et al.
  Published: (2021)
- Phase transitions in the logarithmic Maxwell O(3)-sigma model
  by: F. C. E. Lima, et al.
  Published: (2021)
- Resonant leptogenesis and TM$_1$ mixing in minimal type-I seesaw model with S$_4$ symmetry
  by: Bikash Thapa, et al.
  Published: (2021)