An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations
In standard computing architectures, memory and logic circuits are physically separated, a feature that slows the matrix operations vital to deep-learning algorithms. Here, the authors present an alternative in-memory computing architecture based on two-dimensional semiconductors and demonstrate a feasible approach to analog matrix multiplication.
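The multiply-accumulate (MAC) operation at the heart of the work can be illustrated with a minimal sketch. In an analog in-memory scheme, weights are stored as device conductances G and inputs applied as voltages V, so each output current is the sum of G·V products along a column (Ohm's law plus Kirchhoff's current law). The function and values below are illustrative, not taken from the article:

```python
def crossbar_mac(conductances, voltages):
    """Matrix-vector multiply as summed column currents.

    Each output current I[i] = sum_j G[i][j] * V[j], mimicking how an
    analog crossbar array performs a MAC operation in place.
    """
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

# Hypothetical device conductances (weights) and input voltages, arbitrary units.
G = [[0.5, 1.0], [2.0, 0.25]]
V = [1.0, 2.0]
print(crossbar_mac(G, V))  # [2.5, 2.5]
```

A digital processor would fetch each weight from memory before multiplying; here the "memory" (the conductance matrix) performs the computation directly, which is the latency and energy advantage the abstract refers to.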
| Field | Value |
|---|---|
| Main Authors | Yin Wang, Hongwei Tang, Yufeng Xie, Xinyu Chen, Shunli Ma, Zhengzong Sun, Qingqing Sun, Lin Chen, Hao Zhu, Jing Wan, Zihan Xu, David Wei Zhang, Peng Zhou, Wenzhong Bao |
| Format | article |
| Language | English |
| Published | Nature Portfolio, 2021 |
| Online Access | https://doaj.org/article/95c950a5eea5402c9ba88b7eef5a5b8c |
Similar Items
- FPGA-Based Convolutional Neural Network Accelerator with Resource-Optimized Approximate Multiply-Accumulate Unit
  by: Mannhee Cho, et al.
  Published: (2021)
- Mean oscillation and boundedness of multilinear operator related to multiplier operator
  by: Zhao Qiaozhen, et al.
  Published: (2021)
- Dimensional crossover in semiconductor nanostructures
  by: Matthew P. McDonald, et al.
  Published: (2016)
- Orlicz-Pettis theorems for multiplier convergent operator valued series
  by: Swartz, Charles
  Published: (2003)
- Orlicz-Pettis theorems for multiplier convergent operator valued series
  by: Swartz, Charles
  Published: (2004)