Title | A 1.041-Mb/mm² 27.38-TOPS/W Signed-INT8 Dynamic-Logic-Based ADC-less SRAM Compute-in-Memory Macro in 28nm with Reconfigurable Bitwise Operation for AI and Embedded Applications |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | B. Yan, J. L. Hsu, P. C. Yu, C. C. Lee, Y. Zhang, W. Yue, G. Mei, Y. Yang, H. Li, Y. Chen, and R. Huang |
Conference Name | Digest of Technical Papers - IEEE International Solid-State Circuits Conference (ISSCC) |
Date Published | 01/2022 |
Abstract | Advanced intelligent embedded systems perform cognitive tasks with highly efficient vector-processing units for deep neural network (DNN) inference and other vector-based signal processing under limited power budgets. SRAM-based compute-in-memory (CIM) achieves high energy efficiency for vector-matrix multiplications, offers <1ns read/write speed, and greatly reduces repeated memory accesses. However, prior SRAM CIM macros require a large area for compute circuits (either ADCs for analog CIM [1-4] or CMOS static logic for all-digital CIM [5-6]), support limited CIM functions, and use fixed vector-processing dimensions that cause a low spatial-utilization rate when deploying DNNs (Fig. 11.7.1). |
DOI | 10.1109/ISSCC42614.2022.9731545 |
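The abstract refers to signed-INT8 vector-matrix multiplication as the workload a CIM macro accelerates. The sketch below is a minimal software reference for that operation class, assuming a bit-serial decomposition of the activations into bit planes with shift-accumulation and a negative weight on the MSB (two's complement); it illustrates the kind of bitwise partial-sum arithmetic that digital/ADC-less CIM designs typically map onto the array, not the specific circuit described in this paper.

```python
# Hypothetical reference model: bit-serial signed-INT8 vector-matrix multiply.
# Each activation bit-plane produces a bitwise partial sum per output column,
# and the planes are shift-accumulated; the MSB plane is weighted negatively
# because INT8 values are two's complement. This is an illustration only.
import numpy as np

def bit_serial_int8_vecmat(x, W):
    """x: (N,) int8 activations, W: (N, M) int8 weights -> (M,) int64 result."""
    x16 = x.astype(np.int16)                       # widen so bit extraction is safe
    W64 = W.astype(np.int64)
    acc = np.zeros(W.shape[1], dtype=np.int64)
    for b in range(8):
        bit = ((x16 >> b) & 1).astype(np.int64)    # one activation bit-plane
        partial = bit @ W64                        # per-column bitwise partial sum
        scale = -(1 << b) if b == 7 else (1 << b)  # MSB carries the sign
        acc += scale * partial                     # shift-and-accumulate
    return acc

# Check against a plain integer matrix product.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=64, dtype=np.int8)
W = rng.integers(-128, 128, size=(64, 16), dtype=np.int8)
assert np.array_equal(bit_serial_int8_vecmat(x, W),
                      x.astype(np.int64) @ W.astype(np.int64))
```

In hardware, the `bit @ W64` step corresponds to the in-array bitwise operation and the shift-accumulate loop to the digital accumulator; the software model simply verifies that the decomposition reproduces the exact INT8 dot product.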