
Distributionally Robust Post-hoc Classifiers under Prior Shifts [ICLR 2023]

This repository is the official TensorFlow implementation of [Distributionally Robust Post-hoc Classifiers under Prior Shifts](https://openreview.net/forum?id=3KUfbI9_DQE), accepted by ICLR 2023.

Title: Distributionally Robust Post-hoc Classifiers under Prior Shifts [pdf]
Authors: Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, Abhishek Kumar
Institute: University of California, Santa Cruz & Google Research
One sentence summary: We propose a method for scaling the model predictions at test time for improved distributional robustness to prior shifts.

Experimental code for the DROPS project on class-imbalanced learning tasks.

Example Usage

(1) Two-step variant for training DROPS

Step 1: run synthetic long-tailed CIFAR experiments with the CE loss to obtain the base model

python main_lt.py --dataset 'cifar10' --loss 'ce' --imb_ratio 0.1 --dro_div 'kl'

Step 2: apply Distributionally RObust PoSt-hoc (DROPS) scaling to the prediction logits saved from the pre-trained model

python drops_test_time.py --dataset 'cifar10' --loss 'drops' --imb_ratio 0.1 --eta_lambda 10 --num_iters 25 --eta_lambda_mult 0.95 --prior_type 'train' --cal_type 'none' --eps 0.9

(2) One-step variant for training DROPS

To run synthetic long-tailed CIFAR experiments with the DROPS loss in a single step, run

python main_lt.py --dataset 'cifar10' --loss 'drops' --imb_ratio 0.1 --dro_div 'kl'

Motivation 1:
We care about how a method performs under a generic evaluation metric: when the target distribution r is the uniform vector, such a metric covers the mean accuracy and the worst-case accuracy as its two extremes.
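
As a rough sketch (our own notation, not necessarily the paper's exact definition), one can read such a metric as a worst-case weighted accuracy over class-weight vectors $w$ within a divergence ball of radius $\delta$ around the target distribution $r$:

$$\mathrm{Acc}_{r,\delta}(h) = \min_{w:\, D(w\,\|\,r)\le\delta} \sum_{k} w_k\, A_k(h),$$

where $A_k(h)$ is the per-class accuracy of classifier $h$ and $D$ is the chosen divergence (e.g. KL, matching the `--dro_div 'kl'` flag). With uniform $r$, taking $\delta = 0$ recovers the mean (balanced) accuracy, while letting $\delta \to \infty$ recovers the worst-class accuracy, which is how the two extremes above arise.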

Motivation 2:
Deep neural nets can capture representations well, so we can inherit the learned representations and introduce a post-hoc approach to achieve model robustness under prior shifts; DROPS, described below, is such an approach.

Drops:
We propose a Distributionally RObust PoSt-hoc method (DROPS), which enables the reuse of a pre-trained model for different robustness requirements by simply scaling the model predictions. DROPS obtains the optimal per-class/group weights for scaling by maximizing the delta-worst accuracy.
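
As a minimal illustration (not the repository's implementation; the weight update below is a simple exponentiated-gradient-style heuristic and all names are placeholders), post-hoc scaling amounts to re-weighting the class probabilities of a frozen model before taking the argmax:

```python
import numpy as np

def rescale_predictions(probs, class_weights):
    """Scale a frozen model's class probabilities by per-class weights
    and predict the argmax of the re-weighted scores."""
    return np.argmax(probs * class_weights[None, :], axis=1)

def fit_class_weights(val_probs, val_labels, num_classes, num_iters=25, eta=1.0):
    """Illustrative heuristic only: upweight classes with low validation
    accuracy (exponentiated-gradient-style update), then renormalize."""
    w = np.ones(num_classes) / num_classes
    for _ in range(num_iters):
        preds = rescale_predictions(val_probs, w)
        # per-class accuracy under the current weights
        acc = np.array([
            (preds[val_labels == k] == k).mean() if np.any(val_labels == k) else 0.0
            for k in range(num_classes)
        ])
        w = w * np.exp(-eta * acc)  # push weight toward poorly classified classes
        w = w / w.sum()
    return w

# toy usage: 1000 validation examples, 10 classes
rng = np.random.default_rng(0)
val_probs = rng.dirichlet(np.ones(10), size=1000)
val_labels = rng.integers(0, 10, size=1000)
weights = fit_class_weights(val_probs, val_labels, num_classes=10)
test_preds = rescale_predictions(val_probs, weights)
```

The repository's actual optimization targets the delta-worst accuracy under the chosen divergence (see `drops_test_time.py`); the loop above only conveys the overall shape of a post-hoc procedure that keeps the pre-trained model frozen.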

Experiments:
A special case of prior shift is class-imbalanced learning. The code of DROPS on the imbalanced CIFAR-10 and CIFAR-100 datasets will be released soon!

Cite Drops
If you find the code useful in your research, please consider citing our paper:

@inproceedings{
wei2023distributionally,
title={Distributionally Robust Post-hoc Classifiers under Prior Shifts},
author={Jiaheng Wei and Harikrishna Narasimhan and Ehsan Amid and Wen-Sheng Chu and Yang Liu and Abhishek Kumar},
booktitle={International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=3KUfbI9_DQE}
}
