Artificial neurovascular network (ANVN) to study the accuracy vs. efficiency trade-off in an energy dependent neural network


Bibliographic Details
Main Authors: Bhadra S. Kumar, Nagavarshini Mayakkannan, N. Sowmya Manojna, V. Srinivasa Chakravarthy
Format: Article
Language: English
Published: Nature Portfolio, 2021
Subjects: R, Q
Online Access: https://doaj.org/article/134878b503b44fba9d6e4e37b5e40e55
Description
Summary: Artificial feedforward neural networks perform a wide variety of classification and function approximation tasks with high accuracy. Unlike their artificial counterparts, biological neural networks require a supply of adequate energy delivered to single neurons by a network of cerebral microvessels. Since energy is a limited resource, a natural question is whether the cerebrovascular network is capable of ensuring maximum performance of the neural network while consuming minimum energy. Should the cerebrovascular network also be trained, along with the neural network, to achieve such an optimum? In order to answer these questions in a simplified modeling setting, we constructed an Artificial Neurovascular Network (ANVN) comprising a multilayered perceptron (MLP) connected to a vascular tree structure. The root node of the vascular tree is connected to an energy source, and the terminal nodes of the tree supply energy to the hidden neurons of the MLP. The energy delivered by the terminal vascular nodes to the hidden neurons determines the biases of those neurons. The “weights” on the branches of the vascular tree represent the distribution of energy from a parent node to its child nodes. The vascular weights are updated by a kind of “backpropagation” of the energy demand error generated by the hidden neurons. We observed that higher performance was achieved at lower energy levels when the vascular network was trained along with the neural network, indicating that the vascular network must be trained to ensure efficient neural performance. We also observed that below a certain network size, the energetic dynamics of the network in the space of per capita energy consumption vs. classification accuracy approaches a fixed-point attractor for various initial conditions. Once the number of hidden neurons increases beyond a threshold, the fixed point appears to vanish, giving way to a line of attractors. The model also showed that when resources are limited, the energy consumption of individual neurons is strongly correlated with their contribution to the network’s performance.
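
To make the architecture described in the abstract concrete, the following is a minimal Python sketch of one way the ANVN's energy routing and vascular update could look. The binary tree, the softmax-normalised branch weights, the tanh map from received energy to neuron bias, the learning rate, and all names (VascularNode, distribute, backprop_demand) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

class VascularNode:
    """A node of the vascular tree; leaves deliver energy to hidden neurons."""

    def __init__(self, depth, branching=2):
        self.children = []
        if depth > 0:
            self.children = [VascularNode(depth - 1, branching)
                             for _ in range(branching)]
            # Branch "weights": how this node splits its energy among children.
            self.weights = rng.random(branching)

    def distribute(self, energy):
        """Split incoming energy among children; leaves return it as a list."""
        if not self.children:
            return [energy]
        fractions = np.exp(self.weights) / np.exp(self.weights).sum()  # softmax split
        out = []
        for child, frac in zip(self.children, fractions):
            out.extend(child.distribute(energy * frac))
        return out

    def backprop_demand(self, demand_errors, lr=0.1):
        """Propagate per-leaf energy-demand errors upward; nudge branch weights.

        demand_errors: list of (demanded - received) energy for the leaves
        under this node, in leaf order. Returns the aggregate error.
        """
        if not self.children:
            return demand_errors[0]
        per_child = len(demand_errors) // len(self.children)
        child_errors = []
        for i, child in enumerate(self.children):
            err = child.backprop_demand(
                demand_errors[i * per_child:(i + 1) * per_child], lr)
            child_errors.append(err)
        # Shift energy toward sub-trees whose neurons demand more than they receive.
        self.weights += lr * np.array(child_errors)
        return float(np.sum(child_errors))

# Usage: a depth-3 binary tree has 8 leaves, one per hidden neuron of the MLP.
tree = VascularNode(depth=3)
total_energy = 1.0
received = np.array(tree.distribute(total_energy))

# The energy a leaf delivers sets the bias of its hidden neuron (here through
# an assumed monotone tanh map); the neurons' "demand" would in practice come
# from the MLP's training signal, and is faked with random numbers here.
biases = np.tanh(received)
demanded = rng.random(received.size) * 2 * total_energy / received.size
tree.backprop_demand(list(demanded - received))

In this sketch the softmax split keeps the total delivered energy equal to the source energy at every level of the tree; whether the published model normalises branch weights in this way is an assumption.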