MobiLattice: A Depth-wise DCNN Accelerator with Hybrid Digital/Analog Nonvolatile Processing-In-Memory Block

Abstract

Nonvolatile Processing-In-Memory (NVPIM) architecture is a promising technology for energy-efficient inference of Deep Convolutional Neural Networks (DCNNs). One major advantage of NVPIM is that vector dot-product operations can be completed efficiently by analog computing inside a Nonvolatile Memory (NVM) crossbar. However, its inference efficiency degrades severely when processing depth-wise convolution layers, which are widely employed in many lightweight DCNNs. A major challenge is that cell utilization is extremely low when a depth-wise convolution layer is mapped to a crossbar. To overcome this problem, we propose a novel hybrid-mode NVPIM architecture, namely MobiLattice. With moderate hardware overhead, MobiLattice enables both analog- and digital-mode operations on NVM crossbars. While conventional convolution layers are computed efficiently in the analog mode, the computation efficiency of depth-wise convolution layers is substantially improved in the digital mode, which eliminates the redundant memory space in the NVM crossbars. Experimental results show that, compared to prior approaches where only the analog mode is supported by the NVPIM architecture, MobiLattice speeds up the processing of typical depth-wise DCNNs by 2–5× on average and by up to 30× when combined with extreme quantization schemes.
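To make the cell-utilization claim concrete, the sketch below is a hypothetical back-of-the-envelope illustration (not from the paper) assuming the common mapping in which each output channel's filter occupies one crossbar column; the 128×128 crossbar size, kernel size, and channel counts are example values only.

```python
# Hypothetical illustration of crossbar cell utilization (assumed mapping, not
# the paper's exact scheme): each output channel's filter occupies one column,
# so a column holds K*K*C_in cells for a standard convolution but only K*K
# cells for a depth-wise convolution, whose filter sees a single input channel.

def crossbar_utilization(rows_used_per_column, columns_used,
                         crossbar_rows=128, crossbar_cols=128):
    """Fraction of crossbar cells that store useful weights."""
    used = rows_used_per_column * columns_used
    total = crossbar_rows * crossbar_cols
    return used / total

K, C_out = 3, 128  # 3x3 kernels, 128 channels (example values)

# Standard convolution: columns can be packed until the crossbar is full;
# a fully packed 128x128 slice reaches ~100% utilization.
std_util = crossbar_utilization(rows_used_per_column=128, columns_used=128)

# Depth-wise convolution: each column stores only K*K = 9 weights,
# leaving the remaining rows of that column empty.
dw_util = crossbar_utilization(rows_used_per_column=K * K,
                               columns_used=min(C_out, 128))

print(f"standard conv slice utilization : {std_util:.1%}")  # ~100%
print(f"depth-wise conv utilization     : {dw_util:.1%}")   # ~7% (9/128)
```

Under these illustrative assumptions, fewer than one in ten cells carries a useful weight for a 3×3 depth-wise layer; this wasted capacity is the gap that MobiLattice's digital mode targets.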

DOI
10.1145/3400302.3415666
Year