Stable, Low Power and Bit-Interleaving Aware SRAM Memory for Multi-Core Processing Elements
Saved in:
Main Authors:
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects:
Online Access: https://doaj.org/article/ba7d2eb2ee1949f5af18969e8295843e
Summary: Machine learning and convolutional neural network (CNN)-based artificial intelligence accelerators require extensive parallel data processing from the cache memory. A separate read port is commonly used to design built-in computational memory (CRAM) and reduce the data-processing bottleneck, but the resulting multi-port read and write operation degrades stability and reliability. In this paper, we propose a self-adaptive 12T SRAM cell that increases read stability for multi-port operation. The self-adaptive technique improves stability and reliability by refreshing the storage node during the read operation, and it also prevents the bit-interleaving problem. Furthermore, we present a butterfly-inspired SRAM bank organization that increases performance and reduces power dissipation. The proposed SRAM saves 12% of total power compared with a state-of-the-art 12T SRAM cell-based SRAM, and it improves write performance by 28.15% relative to the same state-of-the-art 12T design. The total area of the proposed architecture is only 1.9 times that of a conventional 6T SRAM cell-based SRAM.
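
As a minimal sketch, with placeholder numbers that are not from the paper, the relative figures quoted above (percentage power saving, percentage write-performance improvement, and area ratio against a 6T-based design) would typically be computed from raw simulation results as follows:

```python
# Sketch only: placeholder values, NOT measurements from the paper.
# Shows the standard way relative metrics in the abstract are derived.

def percent_improvement(reference: float, proposed: float) -> float:
    """Relative improvement of `proposed` over `reference`, in percent (lower raw value is better)."""
    return (reference - proposed) / reference * 100.0

# Hypothetical raw results (reference 12T design vs. proposed design).
power_ref_uW, power_prop_uW = 100.0, 88.0                # total power
write_delay_ref_ps, write_delay_prop_ps = 200.0, 143.7   # write access delay
area_6t, area_prop = 1.0, 1.9                            # normalized layout area

print(f"Total power saving:      {percent_improvement(power_ref_uW, power_prop_uW):.1f}%")
print(f"Write-delay improvement: {percent_improvement(write_delay_ref_ps, write_delay_prop_ps):.2f}%")
print(f"Area vs. 6T baseline:    {area_prop / area_6t:.1f}x")
```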