Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks

Bibliographic details
Main authors: Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: Q
Online access: https://doaj.org/article/530fe3130c4548da95e254bcc23a234c
Description
Abstract: In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover $n$-node networks with robust storage of $2^{\Omega(n^{1-\epsilon})}$ memories for any $\epsilon > 0$. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
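
The symmetric weights and fixed-point attractor dynamics mentioned in the abstract correspond to the standard Hopfield energy and asynchronous update rule; the sketch below illustrates them alongside an MEF-style convex objective of the general kind the paper describes. It is a minimal illustration under stated assumptions, not the paper's implementation: the 0/1 state encoding, the energy convention, and the exponential energy-flow term over single-bit-flip neighbors are choices made here for concreteness.

```python
import numpy as np

def energy(W, theta, x):
    """Hopfield energy E(x) = -1/2 x^T W x + theta^T x for binary x in {0,1}^n
    (assumes W is symmetric with zero diagonal)."""
    return -0.5 * x @ W @ x + theta @ x

def converge(W, theta, x, max_sweeps=100):
    """Asynchronous dynamics: unit i turns on iff its net input exceeds its threshold.
    With symmetric W and zero diagonal, each update weakly lowers the energy,
    so the network settles at a fixed-point attractor."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            new_xi = 1 if W[i] @ x > theta[i] else 0
            if new_xi != x[i]:
                x[i], changed = new_xi, True
        if not changed:
            break
    return x

def mef_style_objective(W, theta, patterns):
    """Hypothetical MEF-style objective (an assumption, not the paper's exact form):
    sum exp((E(x) - E(x')) / 2) over training patterns x and their single-bit-flip
    neighbors x'. Since E is linear in (W, theta), each term, and hence the sum,
    is convex in the parameters."""
    total = 0.0
    for x in patterns:
        ex = energy(W, theta, x)
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = 1 - x_flip[i]
            total += np.exp((ex - energy(W, theta, x_flip)) / 2.0)
    return total
```

Minimizing an objective of this shape drives each training pattern toward being a strict local energy minimum, i.e. a fixed point of the dynamics, which is one reading of the "robust storage of memories" claimed in the abstract; the exact MEF formulation and its scaling properties are given in the paper itself.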