Prosperity: Accelerating Spiking Neural Networks via Product Sparsity

Abstract

Spiking Neural Networks (SNNs) are highly efficient due to their spike-based activation, which inherently produces bit-sparse computation patterns. Existing hardware implementations of SNNs exploit this sparsity to skip wasteful zero-value computations, yet this approach fails to fully capitalize on the potential efficiency of SNNs. This study introduces a novel sparsity paradigm called Product Sparsity, which leverages combinatorial similarities within matrix multiplication operations to reuse inner product results and reduce redundant computations. Compared to traditional bit sparsity methods, Product Sparsity significantly enhances sparsity in SNNs without altering the original computation results. For instance, on the SpikeBERT SNN model, Product Sparsity achieves a density of only 1.23% and reduces computation by 11×, whereas bit sparsity yields a density of 13.19%. To implement Product Sparsity efficiently, we propose Prosperity, an architecture that addresses the challenges of identifying and eliminating redundant computations in real time. Compared to the prior SNN accelerator PTB and the A100 GPU, Prosperity achieves average speedups of 7.4× and 1.8×, respectively, along with energy efficiency improvements of 8.0× and 193×, respectively. The code for Prosperity is available at https://github.com/dubcyfor3/Prosperity.

GitHub: https://github.com/dubcyfor3/Prosperity
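To make the abstract's idea concrete, the sketch below contrasts bit sparsity (skipping zero spikes) with Product Sparsity (reusing an earlier row's inner product when its spike pattern is a subset of the current row's, then accumulating only the difference). This is a minimal NumPy illustration under our own assumptions: the function names, the greedy subset search, and the operation counting are hypothetical and not taken from the Prosperity codebase, which performs this detection in hardware in real time.

import numpy as np

def bit_sparse_matmul(spikes, weights):
    """Bit sparsity: skip multiply-accumulates for zero (non-spiking) inputs."""
    out = np.zeros((spikes.shape[0], weights.shape[1]))
    ops = 0
    for i in range(spikes.shape[0]):
        for k in np.flatnonzero(spikes[i]):    # only spiking positions contribute
            out[i] += weights[k]
            ops += 1
    return out, ops

def product_sparse_matmul(spikes, weights):
    """Product sparsity (simplified): if an earlier row's spike pattern is a
    subset of the current row's, reuse its result and accumulate only the delta."""
    out = np.zeros((spikes.shape[0], weights.shape[1]))
    ops = 0
    for i in range(spikes.shape[0]):
        # Greedily pick the earlier subset row with the most spikes
        # (a software stand-in for Prosperity's real-time detection logic).
        best, best_cnt = None, 0
        for j in range(i):
            if np.all(spikes[i] >= spikes[j]) and spikes[j].sum() > best_cnt:
                best, best_cnt = j, int(spikes[j].sum())
        if best is None:
            delta = spikes[i]
        else:
            out[i] = out[best]                 # reuse the inner product result
            delta = spikes[i] - spikes[best]   # subset relation keeps this 0/1
        for k in np.flatnonzero(delta):        # accumulate only the remaining spikes
            out[i] += weights[k]
            ops += 1
    return out, ops

# Toy example: a binary spike matrix multiplied by a dense weight matrix.
rng = np.random.default_rng(0)
spikes = (rng.random((64, 32)) < 0.2).astype(np.int8)
weights = rng.standard_normal((32, 16))
ref, ops_bit = bit_sparse_matmul(spikes, weights)
res, ops_pro = product_sparse_matmul(spikes, weights)
assert np.allclose(ref, res)                   # identical results, fewer accumulations
print(f"bit-sparse accumulations: {ops_bit}, product-sparse: {ops_pro}")

In this toy setting both paths produce the same output, while the product-sparse path performs fewer accumulations; the actual gains reported above (e.g., 11× computation reduction on SpikeBERT) come from the paper's hardware implementation, not from this sketch.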

 

Methodology

Coming soon in HPCA 2025

Results

 

Citation