Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and...
Main authors: Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick
Format: Article
Language: English
Published: MDPI AG, 2021
Online access: https://doaj.org/article/530fe3130c4548da95e254bcc23a234c
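The abstract describes Hopfield networks as discrete recurrent networks with symmetric weights. For orientation only, the following is a minimal Python sketch of the classical Hopfield dynamics (outer-product Hebbian storage and asynchronous sign updates); the function names and the tiny example are illustrative assumptions and do not reproduce the learning procedure studied in the article.

```python
import numpy as np

def store_patterns(patterns):
    """Build a symmetric weight matrix from +/-1 patterns via the
    classical outer-product (Hebbian) rule; zero the diagonal."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, state, max_sweeps=100, rng=None):
    """Asynchronously update units with the sign rule until a fixed
    point (a local minimum of the Hopfield energy) is reached."""
    rng = np.random.default_rng(rng)
    state = np.array(state, dtype=float)
    n = state.size
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):   # random update order per sweep
            new = 1.0 if W[i] @ state >= 0 else -1.0
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:                # converged to a stable state
            break
    return state

# Example: store one pattern and recover it from a corrupted copy.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = store_patterns([p])
noisy = p.copy()
noisy[:2] *= -1                        # flip two bits
print(recall(W, noisy))                # typically recovers p (or -p)
```

Because the weights are symmetric and the diagonal is zero, each asynchronous update can only lower (or keep) the network energy, which is why the dynamics settle into stable stored patterns.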
Similar documents
- Hypergraph reconstruction from network data
  by: Jean-Gabriel Young, et al.
  Published: (2021)
- Phase transitions and stability of dynamical processes on hypergraphs
  by: Guilherme Ferraz de Arruda, et al.
  Published: (2021)
- Node and edge nonlinear eigenvector centrality for hypergraphs
  by: Francesco Tudisco, et al.
  Published: (2021)
- Detecting informative higher-order interactions in statistically validated hypergraphs
  by: Federico Musciotto, et al.
  Published: (2021)
- Publisher Correction: Node and edge nonlinear eigenvector centrality for hypergraphs
  by: Francesco Tudisco, et al.
  Published: (2021)