Design of universal convolutional layer IP core based on FPGA


Bibliographic Details
Main Authors: Guochen AN, Hongtuo YUAN, Xiulu HAN, Xiaojun WANG, Yujia HOU
Format: article
Language: Chinese (ZH)
Published: Hebei University of Science and Technology, 2021
Subjects:
T
Online Access: https://doaj.org/article/8b62227a47d046a5af79d9621ec59363
Description
Summary: To address the insufficient computing speed and poor portability encountered when miniaturizing and parallelizing convolutional neural networks, this paper proposes a high-speed universal convolutional-layer IP core written in VHDL, based on the characteristics of convolutional neural networks and of FPGA devices. Starting from the convolution computation of a layer, the convolution core is designed as a parallel, pipelined module: each image row fed into the convolution core is connected to a FIFO to improve data flow and reduce address jumps during operation, and a control core is added so that the convolution can be adapted to the image size and convolution-window size given by the layer parameters, generating different convolutional layers. Finally, the convolutional layer is combined with the AXI-Stream (AXIS) protocol and encapsulated as an IP core. At a working frequency of 50 MHz, the convolution of a 100×100 image with a 2×2 convolution kernel is carried out: the utilization of each FPGA resource is below 1%, the processing time is 204 μs, and the theoretical calculation speed reaches a maximum of 5 MF/s. This IP-core structure of the convolutional layer not only improves the portability of the convolution module but also maintains computing speed, providing a feasible way to implement convolutional neural networks on miniaturized devices.
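The abstract describes a streaming datapath in which each image row drives a FIFO, so that a 2×2 window can be formed from the incoming pixel stream without re-reading memory or jumping addresses. The authors' implementation is in VHDL and is not given here; the Python sketch below is only a behavioral model of that line-buffer idea under the assumption of one pixel per clock in raster order, and the function name `stream_conv2x2` and its interface are illustrative, not the paper's design.

```python
from collections import deque

def stream_conv2x2(pixels, width, k):
    """Streaming 2x2 sliding-window sum over a raster-order pixel stream.

    k = [[k00, k01], [k10, k11]] multiplies the window
        [[above_left, above], [left, current]].
    Yields one output per valid window position, mimicking a line-buffer
    datapath: one row FIFO plus single-cycle delay registers.
    """
    line_fifo = deque()            # models the row FIFO (depth = image width)
    left = above = above_left = 0  # one-cycle delay registers
    row = col = 0
    for p in pixels:
        # The pixel that entered `width` cycles ago sits directly above `p`.
        above = line_fifo.popleft() if len(line_fifo) == width else 0
        if row >= 1 and col >= 1:
            yield (k[0][0] * above_left + k[0][1] * above
                   + k[1][0] * left + k[1][1] * p)
        line_fifo.append(p)        # current pixel becomes next row's "above"
        above_left, left = above, p
        col += 1
        if col == width:
            col, row = 0, row + 1

# Toy check: a 100x100 image with a 2x2 window gives 99x99 outputs,
# matching the test case quoted in the abstract.
image = [[(r * 100 + c) % 256 for c in range(100)] for r in range(100)]
stream = (px for line in image for px in line)
outputs = list(stream_conv2x2(stream, 100, [[1, 0], [0, 1]]))
assert len(outputs) == 99 * 99
```

If, as this model assumes, the core accepts one pixel per clock, a 100×100 image takes about 10,000 cycles, i.e. roughly 200 μs at 50 MHz, which is of the same order as the 204 μs reported in the abstract once a small pipeline and control overhead is allowed for.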