An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations

In standard computing architectures, memory and logic circuits are separated, a feature that slows the matrix operations vital to deep learning algorithms. Here, the authors present an alternative in-memory architecture based on two-dimensional semiconductors and demonstrate a feasible approach to analog matrix multiplication.
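
As a rough illustration of the operation such an architecture targets, the sketch below emulates in ordinary digital code the analog multiply-accumulate that a memory crossbar performs: stored conductances play the role of matrix weights, applied voltages play the role of inputs, and each output current is an accumulated sum of products. The function name and the numerical values are purely illustrative assumptions and are not taken from the paper.

```python
# Conceptual sketch (not the authors' implementation): an idealized crossbar
# array performs analog matrix-vector multiplication in a single step. Each
# memory cell stores a conductance G[i][j]; applying input voltages V[j]
# yields, by Ohm's and Kirchhoff's laws, output currents
# I[i] = sum_j G[i][j] * V[j], i.e. one multiply-accumulate per output row.

import numpy as np

def crossbar_mac(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Ideal analog MAC: output currents as the product of a conductance
    matrix (stored weights) and an input voltage vector (activations)."""
    return conductances @ voltages

# Hypothetical values, purely for illustration.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6]])   # siemens, playing the role of weights
V = np.array([0.2, 0.1])           # volts, playing the role of inputs
I = crossbar_mac(G, V)             # amps; each entry is one accumulated sum
print(I)
```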

Saved in:
Bibliographic Details
Main Authors: Yin Wang, Hongwei Tang, Yufeng Xie, Xinyu Chen, Shunli Ma, Zhengzong Sun, Qingqing Sun, Lin Chen, Hao Zhu, Jing Wan, Zihan Xu, David Wei Zhang, Peng Zhou, Wenzhong Bao
Format: Article
Language: English
Published: Nature Portfolio, 2021
Subjects:
Q
Online Access: https://doaj.org/article/95c950a5eea5402c9ba88b7eef5a5b8c