Modeling the Influence of Data Structure on Learning in Neural Networks: The Hidden Manifold Model
Understanding the reasons for the success of deep neural networks trained using stochastic gradient-based methods is a key open problem for the nascent theory of deep learning. The types of data where these networks are most successful, such as images or sequences of speech, are characterized by int...
Saved in:

Main authors: Sebastian Goldt, Marc Mézard, Florent Krzakala, Lenka Zdeborová
Format: article
Language: EN
Published: American Physical Society (2020)
Online access: https://doaj.org/article/1a23fabc856f4ecb8c6ed721b923a393
Similar Items
- Separability and geometry of object manifolds in deep neural networks
  by: Uri Cohen, et al.
  Published: (2020)
- Performance optimization of criminal network hidden link prediction model with deep reinforcement learning
  by: Marcus Lim, et al.
  Published: (2021)
- Hidden neural networks for transmembrane protein topology prediction
  by: Ioannis A. Tamposis, et al.
  Published: (2021)
- Fracton Models on General Three-Dimensional Manifolds
  by: Wilbur Shirley, et al.
  Published: (2018)
- Evidence of a new hidden neural network into deep fasciae
  by: Caterina Fede, et al.
  Published: (2021)