ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition

Title: ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition
Publication Type: Conference Paper
Year of Publication: 2021
Authors: S Li, E Hanson, X Qian, HH Li, and Y Chen
Conference Name: Proceedings of the Annual International Symposium on Microarchitecture, MICRO
Date Published: 10/2021

The ever-growing parameter size and computation cost of Convolutional Neural Network (CNN) models hinder their deployment on resource-constrained platforms. Network pruning techniques have been proposed to remove the redundancy in CNN parameters and produce a sparse model. Sparsity-aware accelerators have also been proposed to reduce the computation cost and memory bandwidth requirements of inference by leveraging model sparsity. The irregularity of sparsity patterns, however, limits the efficiency of those designs. Researchers have proposed to address this issue by creating regular sparsity patterns through hardware-aware pruning algorithms. However, the pruning rate of these solutions is largely limited by the enforced sparsity patterns. This limitation motivates us to explore compression methods beyond pruning. With its two decoupled computation stages, we found that kernel decomposition can potentially take the processing of the sparsity pattern off the critical path of inference and achieve a high compression ratio without enforcing sparsity patterns. To exploit these advantages, we propose ESCALATE, an algorithm-hardware co-design approach based on kernel decomposition. At the algorithm level, ESCALATE reorganizes the two computation stages of the decomposed convolution to enable stream processing of the intermediate feature map. We propose a hybrid quantization scheme to exploit the different reuse frequencies of each part of the decomposed weights. At the architecture level, ESCALATE adopts a novel 'Basis-First' dataflow and a corresponding microarchitecture design to maximize the benefits brought by the decomposed convolution. We evaluate ESCALATE with four representative CNN models on both the CIFAR-10 and ImageNet datasets and compare it against previous sparse accelerators and pruning algorithms. Results show that ESCALATE achieves compression ratios of up to 325× and 11× for models on CIFAR-10 and ImageNet, respectively.
Compared with previous dense and sparse accelerators, the ESCALATE accelerator boosts energy efficiency by 8.3× and 3.77× on average, and reduces latency by 17.9× and 2.16×, respectively.
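The two decoupled stages the abstract refers to follow from the linearity of convolution: if each kernel is approximated as a weighted sum of a small shared basis, the input can first be convolved with the dense basis kernels, and the results can then be combined with per-kernel coefficients (where the sparsity or compression lives). The sketch below illustrates this idea in NumPy; all names and shapes are illustrative assumptions, not ESCALATE's actual implementation.

```python
import numpy as np

def conv2d(x, k):
    # Minimal "valid" 2-D cross-correlation, as a reference implementation.
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
basis = rng.standard_normal((4, 3, 3))        # hypothetical: 4 shared basis kernels
coeffs = rng.standard_normal(4)               # hypothetical: per-kernel coefficients
kernel = np.tensordot(coeffs, basis, axes=1)  # W = sum_k c_k * B_k

x = rng.standard_normal((8, 8))

# Stage 1: convolve the input with every basis kernel (dense, regular compute).
stage1 = np.stack([conv2d(x, b) for b in basis])
# Stage 2: combine the intermediate maps with the (possibly sparse) coefficients,
# keeping sparsity handling off the convolution's critical path.
decomposed = np.tensordot(coeffs, stage1, axes=1)

# Equivalent, by linearity, to convolving with the reconstructed kernel directly.
direct = conv2d(x, kernel)
assert np.allclose(direct, decomposed)
```

Compression comes from storing a few basis kernels plus coefficients instead of full kernels; Stage 1's output can be streamed into Stage 2, which is consistent with the paper's "Basis-First" framing.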