Identification of Out-of-Distribution Cases of CNN using Class-Based Surprise Adequacy (Poster)
Machine learning models are vulnerable to incorrect classification of cases that lie outside the distribution observed during training and calibration.
To identify out-of-distribution (OOD) cases, we propose to use Surprise Adequacy Deep Learning Likelihood (SADL), instantiated per output class, to measure the likelihood that a classification performed by a network is in-distribution or out-of-distribution.
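The per-class likelihood idea above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes activation traces of the penultimate layer are available, and the function and variable names (`fit_class_kdes`, `surprise`, `is_ood`, the threshold) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_kdes(train_acts, train_labels):
    """Fit one kernel density estimate per output class over the
    activation traces of correctly labeled training inputs.
    (Hypothetical sketch of class-based likelihood surprise.)"""
    kdes = {}
    for c in np.unique(train_labels):
        acts_c = train_acts[train_labels == c]
        # gaussian_kde expects data with shape (n_features, n_samples)
        kdes[c] = gaussian_kde(acts_c.T)
    return kdes

def surprise(kdes, act, predicted_class):
    """Likelihood-based surprise: low density under the KDE of the
    predicted class means the input is surprising (likely OOD)."""
    density = kdes[predicted_class](act.reshape(-1, 1))[0]
    return -np.log(max(density, 1e-300))

def is_ood(kdes, act, predicted_class, threshold):
    """Flag an input as OOD when its surprise for the predicted
    class exceeds a (per-class) threshold."""
    return surprise(kdes, act, predicted_class) > threshold
```

A point far from the training activations of its predicted class yields a much higher surprise value than an in-distribution point, so a simple threshold on the per-class surprise separates the two without any secondary classifier.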
Out-of-distribution cases are not drawn from the same distribution as the training sets; in our experiments they were created using affine transformations of legitimate inputs and adversarial attacks.
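Generating OOD variants by affine transformation can be sketched as below. This is an assumed illustration, not the paper's exact procedure: the rotation angle and translation offset are arbitrary example values, and `make_ood_variants` is a hypothetical helper name.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def make_ood_variants(image, angle=30.0, offset=(4, 4)):
    """Create OOD variants of a legitimate input by affine
    transformations: a rotation and a translation. The nominal
    label is unchanged, but the input is pushed away from the
    training distribution. (Illustrative parameter values.)"""
    rotated = rotate(image, angle, reshape=False, mode="constant", cval=0.0)
    shifted = shift(image, offset, mode="constant", cval=0.0)
    return rotated, shifted
```

Applying such transformations to held-out inputs yields cases a classifier still labels confidently, which makes them useful probes for an OOD detector.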
The presented experimental results show that this approach detects 70% to 90% of OOD cases. This detection ratio is comparable with results reported in the literature that use SADL in conjunction with a secondary training stage and classifier for adversarial attack filtering, but the class-based approach achieves that performance without the need for a secondary classifier.
The identification of OOD computations may be beneficial in sensitive and critical domains such as aerospace, medicine, and cyber-security, where it may be hard to obtain proper and representative samples of unknown or unexpected cases.