
Adversarial example generation by FGSM: different normalization of training vs test images? #1032

Closed
hookxs opened this issue Jun 17, 2020 · 3 comments · Fixed by #2419
Assignees
Labels
Adversarial Training (Issues relating to the adversarial example generation tutorial) · docathon-h1-2023 (A label for the docathon in H1 2023) · medium

Comments

@hookxs

hookxs commented Jun 17, 2020

In the Adversarial example generation tutorial the classifier from https://github.com/pytorch/examples/tree/master/mnist is used. However, this classifier is trained with input normalization transforms.Normalize((0.1307,), (0.3081,)) while in the FGSM tutorial no normalization is used and the perturbed images are clamped to [0,1] - is this not a contradiction?
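One way the two can be reconciled (this is a hedged sketch, not the tutorial's actual code): keep the perturbation and the `[0, 1]` clamp in raw pixel space, and convert to and from the normalized space the model expects. The `denormalize`/`normalize` helpers below are hypothetical names introduced for illustration; because normalization is an affine map with a positive scale, the sign of the gradient is the same in both spaces, so FGSM's `sign()` step is unaffected.

```python
import torch

# MNIST training-set statistics from transforms.Normalize((0.1307,), (0.3081,))
MEAN, STD = 0.1307, 0.3081

def denormalize(x):
    # Map a normalized tensor back to [0, 1] pixel space.
    return x * STD + MEAN

def normalize(x):
    # Map a [0, 1] pixel-space tensor to the model's input space.
    return (x - MEAN) / STD

def fgsm_attack(image, epsilon, data_grad):
    # `image` is in normalized space (what the model consumed);
    # `data_grad` is the loss gradient w.r.t. that input.
    pixel = denormalize(image)
    # Perturb and clamp in pixel space so the result is a valid image.
    perturbed = torch.clamp(pixel + epsilon * data_grad.sign(), 0.0, 1.0)
    # Re-normalize before feeding the adversarial example back to the model.
    return normalize(perturbed)
```

With this structure the `[0, 1]` clamp is meaningful (it bounds actual pixel intensities) while the classifier still receives inputs normalized the same way as during training.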

@holly1238 holly1238 added the Adversarial Training Issues relating to the adversarial example generation tutorial label Jul 27, 2021
@Melcfrn

Melcfrn commented Jun 9, 2022

Hi,
Furthermore, the dimensions of the convolution layers differ between the two models. The model definition in the tutorial should be updated; the link to the saved model weights, however, is still correct.

Clément

@svekars svekars added medium docathon-h1-2023 A label for the docathon in H1 2023 labels May 31, 2023
@QasimKhan5x
Contributor

QasimKhan5x commented Jun 2, 2023

As per my understanding, the following changes are needed in this tutorial:

  1. The Net in the Adversarial example generation tutorial does not match the one in the MNIST example, so the model definition needs to be changed.
  2. The Adversarial example generation tutorial does not apply transforms.Normalize((0.1307,), (0.3081,)) in its test_loader, whereas the dataloaders in the MNIST example do. Therefore, this transform should be added.

@svekars if I'm understanding this correctly, may I make these changes and proceed with a pull request?

@QasimKhan5x
Contributor

/assigntome

5 participants