Efficient neural network using pointwise convolution kernels with linear phase constraint

Abstract

In current efficient convolutional neural networks, 1 × 1 convolution is widely used. However, the computation and parameters of the 1 × 1 convolution layers account for a large part of these models' cost. In this paper, we propose linear-phase pointwise convolution kernels (LPPC kernels) to reduce the computational complexity and storage cost of such networks. We design four types of LPPC kernels based on the parity of the number of input channels and the symmetry of the pointwise convolution weights. Experimental results show that Type-I LPPC kernels compress several popular networks better than the other types, with only a small reduction in accuracy. LPPC kernels can serve as new 1 × 1 convolution kernels for designing efficient neural network architectures in the future. Moreover, they are well suited to low-power hardware accelerator design, offering lower memory access cost and a smaller model size.
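As a rough illustration of the weight-symmetry idea (not the authors' implementation), the sketch below applies a hypothetical Type-I-style linear-phase constraint to a 1 × 1 convolution: weights of each output channel are mirrored along the input-channel axis, so mirrored input channels can be pre-summed, and each output channel needs only (C+1)/2 stored weights and roughly half the multiplications when the number of input channels C is odd. The function name `lppc_type1` and all shapes are assumptions made for the example.

```python
import numpy as np

def lppc_type1(x, half_w):
    """Hypothetical Type-I-style linear-phase 1x1 convolution.

    x:      input feature map of shape (C, H, W), with C odd.
    half_w: free weights of shape (out_channels, (C + 1) // 2);
            the full weight vector is assumed symmetric, w[k] == w[C-1-k].
    """
    C = x.shape[0]
    assert C % 2 == 1, "Type-I assumes an odd number of input channels"
    mid = C // 2
    # Pre-add mirrored channel pairs; the centre channel stays alone.
    folded = x[:mid] + x[:mid:-1]                             # (C//2, H, W)
    folded = np.concatenate([folded, x[mid:mid + 1]], axis=0)  # ((C+1)//2, H, W)
    # One multiplication per folded channel instead of per original channel.
    return np.tensordot(half_w, folded, axes=([1], [0]))       # (out, H, W)

# Usage: 5 input channels, 4 output channels, ~half the multiply count.
x = np.random.randn(5, 8, 8)
w_half = np.random.randn(4, 3)
y = lppc_type1(x, w_half)
print(y.shape)  # (4, 8, 8)
```

Pre-adding the mirrored channels is the same trick used when implementing linear-phase FIR filters, whose Type I–IV classification by coefficient symmetry and even/odd length appears to be the origin of the four LPPC kernel types named in the abstract.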

DOI
10.1016/j.neucom.2020.10.067
Year