Improving Out-of-Distribution Detection by Learning from the Deployment Environment

Abstract

Recognition systems in the remote sensing domain often operate in 'open-world' environments, where they must accurately classify data from the in-distribution categories while simultaneously detecting and rejecting anomalous/out-of-distribution (OOD) inputs. However, most modern designs perform this recognition function with deep neural networks (DNNs) that are trained under 'closed-world' assumptions in offline-only environments. As a result, by construction, these systems are ill-equipped to handle anomalous inputs and have no mechanism for improving their OOD detection abilities during deployment. In this work, we address these weaknesses from two directions. First, we introduce advanced DNN training methods that co-design for accuracy and OOD detection in the offline training phase. We then propose a novel 'learn-online' workflow for updating the DNNs during deployment using a small library of carefully collected samples from the operating environment. To show the efficacy of our methods, we experiment with two popular recognition tasks in remote sensing: scene classification in electro-optical satellite images and automatic target recognition in synthetic aperture radar imagery. In both tasks, we find that our two primary design contributions each improve detection performance individually and are complementary when combined. Additionally, we find that detection performance on difficult and highly granular OOD samples can be drastically improved using only tens or hundreds of samples collected from the environment. Finally, our analysis shows that the logic for adding and removing samples from the collection library is of key importance, and that using a proper learning rate during the model-update step is critical.
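
The learn-online workflow summarized above can be illustrated concretely. The PyTorch sketch below is a hypothetical illustration under our own assumptions, not the authors' published algorithm: the `SampleLibrary` class, its keep-the-most-confident add/remove policy, the `learn_online_update` function, and the outlier-exposure-style uniform-prediction loss are all names and choices introduced here for clarity. It reflects the two findings the abstract emphasizes: a deliberate policy governing which collected samples enter the fixed-capacity library, and a small learning rate during the model-update step.

```python
# Hypothetical sketch of a "learn-online" update loop; all names and the
# specific add/remove policy and loss are illustrative assumptions, not the
# paper's exact method.
import torch
import torch.nn.functional as F


class SampleLibrary:
    """Fixed-capacity store of OOD samples collected during deployment."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.samples = []  # list of (input_tensor, ood_score) pairs

    def maybe_add(self, x, ood_score, threshold=0.5):
        # The add/remove logic is key (per the paper's findings). One simple
        # policy, assumed here: keep only the most confidently OOD samples.
        if ood_score < threshold:
            return
        self.samples.append((x, ood_score))
        self.samples.sort(key=lambda s: s[1], reverse=True)
        del self.samples[self.capacity:]  # evict lowest-scoring overflow


def learn_online_update(model, in_dist_loader, library, lr=1e-4, ood_weight=0.5):
    """One model-update step: fine-tune on labeled in-distribution data while
    pushing library samples toward a uniform (maximum-entropy) prediction,
    in the style of outlier exposure. A small learning rate is critical so
    the update does not destroy the offline-trained representation."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ood_batch = (torch.stack([x for x, _ in library.samples])
                 if library.samples else None)
    model.train()
    for x_in, y_in in in_dist_loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(x_in), y_in)
        if ood_batch is not None:
            # Cross-entropy to the uniform distribution: penalize confident
            # predictions on samples collected as OOD.
            loss = loss + ood_weight * -(F.log_softmax(model(ood_batch), dim=1).mean())
        loss.backward()
        opt.step()
```

In this sketch, the maximum-entropy term on library samples borrows from the well-known outlier-exposure technique; the fixed capacity and score-based eviction stand in for whatever collection logic a deployed system would actually use.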

DOI
10.1109/JSTARS.2022.3146362
Year
2022