A 1.041-Mb/mm² 27.38-TOPS/W Signed-INT8 Dynamic-Logic-Based ADC-less SRAM Compute-in-Memory Macro in 28nm with Reconfigurable Bitwise Operation for AI and Embedded Applications

Title: A 1.041-Mb/mm² 27.38-TOPS/W Signed-INT8 Dynamic-Logic-Based ADC-less SRAM Compute-in-Memory Macro in 28nm with Reconfigurable Bitwise Operation for AI and Embedded Applications
Publication Type: Conference Paper
Year of Publication: 2022
Authors: B Yan, JL Hsu, PC Yu, CC Lee, Y Zhang, W Yue, G Mei, Y Yang, H Li, Y Chen, and R Huang
Conference Name: Digest of Technical Papers - IEEE International Solid-State Circuits Conference
Date Published: 01/2022
Abstract

Advanced intelligent embedded systems perform cognitive tasks with highly efficient vector-processing units for deep neural network (DNN) inference and other vector-based signal processing under tight power budgets. SRAM-based compute-in-memory (CIM) achieves high energy efficiency for vector-matrix multiplications, offers <1ns read/write speed, and greatly reduces repeated memory accesses. However, prior SRAM CIM macros require a large area for compute circuits (either using ADCs for analog CIM [1-4] or CMOS static logic for all-digital CIM [5-6]), support only limited CIM functions, and use fixed vector-processing dimensions that cause low spatial utilization when deploying DNNs (Fig. 11.7.1).
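For context on the kind of computation such a macro accelerates, the following is a minimal sketch of a bit-serial signed-INT8 dot product built from bitwise ANDs and popcounts, which is how digital ADC-less CIM schemes are commonly described conceptually. It is an illustrative assumption, not the circuit or dataflow of this paper; the function names (`to_bits_lsb_first`, `bit_serial_dot`) are hypothetical.

```python
import numpy as np

def to_bits_lsb_first(v, n_bits=8):
    """Two's-complement bit planes (LSB first) of a signed integer array."""
    u = np.asarray(v, dtype=np.int64) & ((1 << n_bits) - 1)   # encode as unsigned two's complement
    return np.stack([(u >> i) & 1 for i in range(n_bits)])     # shape: (n_bits, len(v))

def bit_serial_dot(x, w, n_bits=8):
    """Signed-INT8 dot product from bitwise AND + popcount partial sums,
    loosely mimicking a digital (ADC-less) CIM accumulation scheme (assumed, not the paper's)."""
    xb, wb = to_bits_lsb_first(x, n_bits), to_bits_lsb_first(w, n_bits)
    # Place value of each bit; the MSB carries negative weight in two's complement.
    place = [(-(1 << i) if i == n_bits - 1 else (1 << i)) for i in range(n_bits)]
    acc = 0
    for i in range(n_bits):          # input bit-planes streamed serially
        for j in range(n_bits):      # weight bit-planes resident in the array
            popcount = int(np.sum(xb[i] & wb[j]))   # bitwise AND and count along the column
            acc += place[i] * place[j] * popcount   # shift-and-add with sign handling
    return acc

# Usage: matches a conventional signed-INT8 dot product.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=16)
w = rng.integers(-128, 128, size=16)
assert bit_serial_dot(x, w) == int(np.dot(x, w))
```

The sketch only illustrates why bitwise operations suffice for signed multi-bit MACs; area, reconfigurability, and dynamic-logic aspects claimed in the title are beyond its scope.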

DOI: 10.1109/ISSCC42614.2022.9731545