A simple PyTorch implementation that fools your retrieval model by modifying the query ONLY, accepted by IJCV. The pre-print is available at https://arxiv.org/abs/1809.02681.
The main idea underpinning our method is simple yet effective: make the query feature conduct a U-turn ↪️.
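The U-turn idea can be sketched as a PGD-style perturbation that pushes the query's feature toward the opposite direction of its original feature. This is a minimal toy sketch, not the repository's actual code; the function name `u_turn_attack`, the tiny linear "model", and the budget values are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def u_turn_attack(model, query, steps=20, eps=0.1, alpha=0.02):
    """Sketch of an opposite-direction (U-turn) query attack.

    Perturbs `query` under an L_inf budget `eps` so that its feature
    aligns with the NEGATIVE of its original feature. Hypothetical
    helper, not the official implementation.
    """
    model.eval()
    with torch.no_grad():
        # Reversed (opposite-direction) target feature.
        target = -F.normalize(model(query), dim=1)
    adv = query.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        feat = F.normalize(model(adv), dim=1)
        # Maximise cosine similarity with the reversed feature.
        loss = (feat * target).sum()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            # Project back into the L_inf ball around the clean query.
            # (Real images would also be clamped to the valid pixel range.)
            adv = query + (adv - query).clamp(-eps, eps)
    return adv.detach()

# Toy check with a tiny linear "model" standing in for a feature extractor.
torch.manual_seed(0)
model = torch.nn.Linear(16, 8)
x = torch.randn(2, 16)
x_adv = u_turn_attack(model, x)
cos = F.cosine_similarity(model(x), model(x_adv)).mean()
print(cos.item())  # similarity drops as the feature turns away
```

The attacked query looks almost unchanged in input space (perturbation bounded by `eps`), but its feature points away from the original, so nearest-neighbour retrieval ranks the true matches last.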
Please check the step-by-step tutorial at https://github.com/layumi/Person_reID_baseline_pytorch
Try all four attack methods with one line. Please change the dataset path before running it.
python experiment.py
We attach the training code, which is based on the excellent code from TPAMI 2018. Our goal is to fool this TPAMI model, and we succeed: https://github.com/layumi/Oxford-Paris-Attack
Please check the subfolders:
Food: https://github.com/layumi/U_turn/tree/master/Food
CUB: https://github.com/layumi/U_turn/tree/master/cub
We attach the training code, which is borrowed from Wide-ResNet (with Random Erasing).
https://github.com/layumi/A_reID/tree/master/cifar
@article{zheng2022query,
  title={U-turn: Crafting Adversarial Queries with Opposite-direction Features},
  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi and Wu, Fei},
  journal={International Journal of Computer Vision (IJCV)},
  year={2022}
}