Toward Efficient and Adaptive Design of Video Detection System with Deep Neural Networks

Title: Toward Efficient and Adaptive Design of Video Detection System with Deep Neural Networks
Publication Type: Journal Article
Year of Publication: 2022
Authors: J Mao, Q Yang, A Li, KW Nixon, H Li, and Y Chen
Journal: ACM Transactions on Embedded Computing Systems
Volume: 21
Issue: 3
Date Published: 05/2022
Abstract

In the past decade, Deep Neural Networks (DNNs), e.g., Convolutional Neural Networks, achieved human-level performance in vision tasks such as object classification and detection. However, DNNs are known to be computationally expensive and thus hard to deploy in real-time and edge applications. Many previous works have focused on DNN model compression to obtain smaller parameter sizes and, consequently, lower computational cost. Such methods, however, often introduce noticeable accuracy degradation. In this work, we optimize a state-of-the-art DNN-based video detection framework, Deep Feature Flow (DFF), from the cloud end using three proposed ideas. First, we propose Asynchronous DFF (ADFF) to asynchronously execute the neural networks. Second, we propose a Video-based Dynamic Scheduling (VDS) method that decides the detection frequency based on the magnitude of movement between video frames. Last, we propose Spatial Sparsity Inference (SSI), which performs inference on only part of the video frame and thus reduces the computation cost. According to our experimental results, ADFF reduces the bottleneck latency from 89 to 19 ms, VDS increases the detection accuracy by 0.6% mAP without increasing computation cost, and SSI further saves 0.2 ms at the cost of a 0.6% mAP degradation in detection accuracy.
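The VDS idea described above can be illustrated with a minimal sketch. The abstract does not give the exact movement metric or scheduling rule, so the mean absolute frame difference, the accumulation logic, and the threshold below are all illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def movement_magnitude(prev_frame, frame):
    # Assumed metric: mean absolute pixel difference between consecutive frames.
    return float(np.mean(np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))))

def schedule_keyframes(frames, threshold=8.0):
    """Illustrative dynamic schedule: run full detection (keyframe) once the
    accumulated movement since the last keyframe exceeds a threshold; otherwise
    take the cheap path (propagate features from the last keyframe, as in DFF).
    Returns one boolean decision per frame."""
    decisions = [True]  # first frame always gets full detection
    accumulated = 0.0
    for prev, cur in zip(frames, frames[1:]):
        accumulated += movement_magnitude(prev, cur)
        if accumulated >= threshold:
            decisions.append(True)   # large motion: run the detection network
            accumulated = 0.0
        else:
            decisions.append(False)  # small motion: reuse propagated features
    return decisions
```

Under this sketch, static stretches of video are served almost entirely by cheap feature propagation, while scenes with rapid motion trigger detection more often, which is the intuition behind VDS's accuracy gain at unchanged cost.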

DOI: 10.1145/3484946
Short Title: ACM Transactions on Embedded Computing Systems