An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations
In standard computing architectures, memory and logic circuits are separated, a feature that slows the matrix operations vital to deep learning algorithms. Here, the authors present an alternative in-memory architecture and demonstrate a feasible approach for analog matrix multiplication.
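As an illustration of the operation the summary refers to, the sketch below models an idealized analog in-memory multiply-accumulate (MAC) array: matrix weights are stored as device conductances, inputs are applied as voltages, and each column current accumulates one dot product via Ohm's law and Kirchhoff's current law. This is a minimal conceptual sketch, not the authors' implementation; the array shapes and variable names (G, V, I) are assumptions chosen for illustration, and device non-idealities are ignored.

```python
import numpy as np

# Illustrative sketch (not from the paper): an idealized analog in-memory MAC array.
# Weights are stored as conductances G, inputs applied as voltages V, and each
# output column current is the accumulated sum I_j = sum_i V_i * G_ij.

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))   # hypothetical conductance matrix (stored weights)
V = rng.uniform(0.0, 0.2, size=4)        # hypothetical input voltage vector

# The multiply-accumulate happens inside the memory array itself:
# every column produces one dot product in a single analog read step.
I = V @ G                                # column currents, one MAC result per column

print("column currents (MAC results):", I)
```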
Main authors: Yin Wang, Hongwei Tang, Yufeng Xie, Xinyu Chen, Shunli Ma, Zhengzong Sun, Qingqing Sun, Lin Chen, Hao Zhu, Jing Wan, Zihan Xu, David Wei Zhang, Peng Zhou, Wenzhong Bao
Format: article
Language: EN
Published: Nature Portfolio, 2021
Online access: https://doaj.org/article/95c950a5eea5402c9ba88b7eef5a5b8c
Similar items
- FPGA-Based Convolutional Neural Network Accelerator with Resource-Optimized Approximate Multiply-Accumulate Unit
  by: Mannhee Cho, et al.
  Published: (2021)
- Mean oscillation and boundedness of multilinear operator related to multiplier operator
  by: Zhao Qiaozhen, et al.
  Published: (2021)
- Dimensional crossover in semiconductor nanostructures
  by: Matthew P. McDonald, et al.
  Published: (2016)
- ORLICZ - PETTIS THEOREMS FOR MULTIPLIER CONVERGENT OPERATOR VALUED SERIES
  by: SWARTZ, CHARLES
  Published: (2003)
- ORLICZ - PETTIS THEOREMS FOR MULTIPLIER CONVERGENT OPERATOR VALUED SERIES
  by: SWARTZ, CHARLES
  Published: (2004)