Outlier Exposure

This repository contains the essential code for the paper Deep Anomaly Detection with Outlier Exposure (ICLR 2019).

Requires Python 3+ and PyTorch 0.4.1+.

Overview

Outlier Exposure (OE) is a method for improving the anomaly detection performance of deep learning models. Using an auxiliary out-of-distribution dataset, we fine-tune a classifier so that the model learns heuristics that distinguish anomalies from in-distribution samples. Crucially, these heuristics generalize to new, unseen anomaly distributions. This repository contains a subset of the calibration and multiclass classification experiments. Please consult the paper for the full results and method descriptions.

Specifically, this repository contains code for the multiclass classification and calibration experiments on SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet; a sketch of the training objective follows below.
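As a concrete illustration of the fine-tuning described above: for the multiclass experiments, OE adds a term that pushes the model's softmax output on outlier samples toward the uniform distribution over classes, while keeping the usual cross-entropy loss on in-distribution data. The PyTorch sketch below shows one way to write that objective, together with the maximum-softmax-probability anomaly score used at test time. It is a minimal illustration rather than this repository's training code; the function names (oe_loss, anomaly_score) and the default lambda_oe=0.5 weight are illustrative choices.

import torch
import torch.nn.functional as F

def oe_loss(model, x_in, y_in, x_out, lambda_oe=0.5):
    """Outlier Exposure objective for one batch (illustrative sketch).

    x_in, y_in : labelled in-distribution batch
    x_out      : unlabelled batch from the auxiliary outlier dataset
    lambda_oe  : weight on the outlier term (treated here as a tunable hyperparameter)
    """
    logits_in = model(x_in)
    logits_out = model(x_out)

    # Standard cross-entropy on in-distribution data.
    loss_in = F.cross_entropy(logits_in, y_in)

    # Cross-entropy between the softmax output on outliers and the uniform
    # distribution over K classes: -(1/K) * sum_k log p_k(x_out).
    loss_out = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()

    return loss_in + lambda_oe * loss_out

def anomaly_score(model, x):
    """Negative maximum softmax probability: higher means more anomalous."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return -probs.max(dim=1).values

In practice, each fine-tuning step draws one in-distribution batch and one outlier batch, computes oe_loss, and backpropagates as usual; detection then thresholds anomaly_score on held-out data.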

Citation

If you find this useful in your research, please consider citing:

@article{hendrycks2019oe,
  title={Deep Anomaly Detection with Outlier Exposure},
  author={Hendrycks, Dan and Mazeika, Mantas and Dietterich, Thomas},
  journal={Proceedings of the International Conference on Learning Representations},
  year={2019}
}

Outlier Datasets

These experiments make use of numerous outlier datasets. Links to the less common ones are as follows: 80 Million Tiny Images, Icons-50, Textures, Chars74K, and Places365.
