Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and...
Saved in:
Main Authors: Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/530fe3130c4548da95e254bcc23a234c
Similar Items
- Hypergraph reconstruction from network data
  by: Jean-Gabriel Young, et al.
  Published: (2021)
- Phase transitions and stability of dynamical processes on hypergraphs
  by: Guilherme Ferraz de Arruda, et al.
  Published: (2021)
- Node and edge nonlinear eigenvector centrality for hypergraphs
  by: Francesco Tudisco, et al.
  Published: (2021)
- Detecting informative higher-order interactions in statistically validated hypergraphs
  by: Federico Musciotto, et al.
  Published: (2021)
- Publisher Correction: Node and edge nonlinear eigenvector centrality for hypergraphs
  by: Francesco Tudisco, et al.
  Published: (2021)