Exploring the optimal learning technique for IBM TrueNorth platform to overcome quantization loss

Title: Exploring the optimal learning technique for IBM TrueNorth platform to overcome quantization loss
Publication Type: Conference Paper
Year of Publication: 2016
Authors: HP Cheng, W Wen, C Song, B Liu, H Li, and Y Chen
Conference Name: Proceedings of the 2016 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH 2016)
Date Published: 09/2016

As the first large-scale commercial spiking-based neuromorphic computing platform, the IBM TrueNorth chip has received tremendous attention. However, one known issue in the TrueNorth design is the limited precision of synaptic weights, each of which can take only one of four integer values. The current workaround is to run multiple copies of the neural network such that the average value of each synaptic weight across the copies approximates the corresponding weight in the original network. To improve computation accuracy and reduce the incurred hardware cost, in this work we investigate seven different regularization functions in the cost function of the learning process on the TrueNorth platform. The hypothesis is that the quantization loss incurred when mapping a trained network in floating-point format onto the TrueNorth chip with its limited integer values can be minimized if the discrepancy between the trained weights and the quantized weights is reduced by optimizing the training process. Our experimental results show that the proposed techniques considerably improve the computation accuracy of the TrueNorth platform and reduce the incurred hardware and performance overheads. Among all the tested methods, L1TEA regularization achieved the best result, namely, up to 2.74% accuracy enhancement when deploying the MNIST application onto the TrueNorth platform.
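The core idea in the abstract, adding a regularization term that penalizes the gap between each trained floating-point weight and its nearest allowed integer level, can be sketched as follows. This is an illustrative approximation only: the specific L1TEA formulation, the exact set of allowed TrueNorth weight levels, and the penalty coefficient `lam` are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical set of four allowed integer weight levels; TrueNorth synapses
# draw from four integers, but the exact set is configuration-dependent.
LEVELS = np.array([-2, -1, 0, 1])

def nearest_level(w):
    """Map each floating-point weight to its closest allowed integer level."""
    idx = np.abs(w[..., None] - LEVELS).argmin(axis=-1)
    return LEVELS[idx]

def quantization_regularizer(w, p=1):
    """L_p penalty on the gap between trained weights and their quantized values.

    With p=1 this is an L1-style distance-to-quantized-weights term, in the
    spirit of (but not identical to) the regularizers studied in the paper.
    """
    return np.sum(np.abs(w - nearest_level(w)) ** p)

# During training, the total cost would combine the task loss with this
# penalty, e.g. total_loss = task_loss + lam * quantization_regularizer(w).
w = np.array([0.9, -1.4, 0.1, -0.6])
lam = 0.01
penalty = lam * quantization_regularizer(w)  # pulls w toward {-2, -1, 0, 1}
```

Minimizing such a penalty during training keeps the learned weights close to representable integer values, so the subsequent rounding onto the chip discards less information.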