CMOS Implementation of ANNs Based on Analog Optimization of N-Dimensional Objective Functions

Saved in:
Bibliographic Details
Main Authors: Alejandro Medina-Santiago, Carlos Arturo Hernández-Gracidas, Luis Alberto Morales-Rosales, Ignacio Algredo-Badillo, Monica Amador García, Jorge Antonio Orozco Torres
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects:
Online Access: https://doaj.org/article/793ffac73a24459690c215081bc79033
Description
Summary: The design of neural network architectures is carried out using methods that optimize a particular objective function, seeking a point that minimizes it. Previously reported works focused only on software simulations or commercial complementary metal-oxide-semiconductor (CMOS) devices, neither of which guarantees the quality of the solution. In this work, we designed a hardware architecture using individual neurons as building blocks, based on the optimization of n-dimensional objective functions, namely obtaining the bias and synaptic weight parameters of an artificial neural network (ANN) model using the gradient descent method. The ANN-based architecture has a 5-3-1 configuration and is implemented on a 1.2 μm technology integrated circuit, with a total power consumption of 46.08 mW, using nine neurons and 36 CMOS operational amplifiers (op-amps). We show the results obtained from applying the integrated circuits for ANNs, simulated in PSpice, to the classification of digital data, demonstrating that the optimization method successfully obtains the synaptic weights and bias values generated by the learning algorithm (steepest descent) for the design of the neural architecture.
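
For orientation, the sketch below shows, in Python/NumPy, how steepest-descent (gradient descent) training of a 5-3-1 feed-forward network can produce the synaptic weight and bias values that would then be mapped onto analog neurons. The sigmoid activation, learning rate, number of epochs, and toy majority-vote dataset are illustrative assumptions and are not taken from the paper or its circuit design.

```python
# Minimal sketch: steepest-descent training of a 5-3-1 feed-forward ANN.
# Activation, learning rate, epochs, and toy data are assumptions for
# illustration only, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy digital-data task (assumption): classify 5-bit inputs by majority of ones.
X = rng.integers(0, 2, size=(32, 5)).astype(float)
y = (X.sum(axis=1) >= 3).astype(float).reshape(-1, 1)

# 5-3-1 configuration: 5 inputs, 3 hidden neurons, 1 output neuron.
W1 = rng.normal(scale=0.5, size=(5, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)

lr = 0.5  # assumed learning rate
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden layer (3 neurons)
    out = sigmoid(h @ W2 + b2)    # output neuron
    err = out - y                 # error term of the mean squared error

    # Backward pass: gradients of the MSE objective
    d_out = err * out * (1 - out)             # output-layer delta
    d_hid = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta

    # Steepest-descent updates: step opposite the gradient
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X); b1 -= lr * d_hid.mean(axis=0)

# W1, b1, W2, b2 are the learned synaptic weights and biases that an
# analog implementation of this kind would realize in hardware.
print("final MSE:", float(np.mean((out - y) ** 2)))
```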