I really appreciate your work!
I tried to perform adversarial training on the CIFAR10 dataset by modifying tutorial_train_mnist.py: I changed get_mnist_train_loader and get_mnist_test_loader to get_cifar10_train_loader and get_cifar10_test_loader, and adjusted the LeNet5 model's input dimensions accordingly. The problem is that the loss doesn't decrease, and both the clean acc and the adv acc stay at 10%.
I also tried a larger model like ResNet, but the problem is the same.
Any ideas on why the loss doesn't decrease on CIFAR10?
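Roughly, these were the only changes I made (a paraphrased sketch from memory; the exact import path and argument names may differ from the script):

```python
# Paraphrased sketch of my edits to tutorial_train_mnist.py
# (import path and keyword names may not match the repo exactly).
from advertorch_examples.utils import (
    get_cifar10_train_loader, get_cifar10_test_loader)

train_batch_size = 50
test_batch_size = 1000

# Swapped the MNIST loaders for their CIFAR10 counterparts.
train_loader = get_cifar10_train_loader(batch_size=train_batch_size)
test_loader = get_cifar10_test_loader(batch_size=test_batch_size)

# I also edited LeNet5 so the first conv takes 3 input channels and the
# first fully connected layer matches 32x32 inputs instead of 28x28.
```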
@Originofamonia It's a bit hard for me to understand what's going on without looking at the specifics. Hopefully I'll put up a tested CIFAR10 adversarial training script later this week, and maybe that'll be helpful. Will reply here once that's up.
The common settings for CIFAR-10 under the L-inf threat model are eps=0.031, nb_iter=7 or 10, and eps_iter=0.007. The settings in the tutorial are for MNIST; that perturbation budget is much too large to defend against on CIFAR-10, which is likely why training collapses to 10% accuracy.
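For concreteness, here is a rough sketch of how the PGD adversary could be constructed with these CIFAR-10 hyperparameters, assuming advertorch's LinfPGDAttack interface; the small CNN below is only a placeholder so the snippet stands on its own, not a recommendation for the actual model:

```python
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder CIFAR-10 model so the snippet is self-contained; in practice
# a ResNet / wide ResNet is the usual choice for adversarial training.
class SmallCIFAR10CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 32 x 16 x 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 64 x 8 x 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCIFAR10CNN()

# L-inf PGD adversary with the CIFAR-10 settings above (eps ~ 8/255,
# eps_iter ~ 2/255, 10 steps) instead of the much larger MNIST budget
# used in tutorial_train_mnist.py.
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.031, nb_iter=10, eps_iter=0.007,
    rand_init=True, clip_min=0.0, clip_max=1.0, targeted=False)
```

During training, each clean batch would then be replaced with adversary.perturb(data, target) before the forward pass, roughly as the MNIST tutorial does.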
Hello,
Here is the training log from my run:
```
Train Epoch: 1 [0/50000 (0%)] Loss: 2.420085
Train Epoch: 1 [20000/50000 (40%)] Loss: 2.302042
Train Epoch: 1 [40000/50000 (80%)] Loss: 2.303400
Test set: avg cln loss: 2.3025, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3035, adv acc: 1000/10000 (10%)
Train Epoch: 2 [0/50000 (0%)] Loss: 2.300893
Train Epoch: 2 [20000/50000 (40%)] Loss: 2.303464
Train Epoch: 2 [40000/50000 (80%)] Loss: 2.303012
Test set: avg cln loss: 2.3026, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3027, adv acc: 1000/10000 (10%)
Train Epoch: 3 [0/50000 (0%)] Loss: 2.301586
Train Epoch: 3 [20000/50000 (40%)] Loss: 2.301844
Train Epoch: 3 [40000/50000 (80%)] Loss: 2.303260
Test set: avg cln loss: 2.3025, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3031, adv acc: 999/10000 (10%)
Train Epoch: 4 [0/50000 (0%)] Loss: 2.303174
Train Epoch: 4 [20000/50000 (40%)] Loss: 2.302358
Train Epoch: 4 [40000/50000 (80%)] Loss: 2.302135
Test set: avg cln loss: 2.3025, cln acc: 1008/10000 (10%)
Test set: avg adv loss: 2.3029, adv acc: 1000/10000 (10%)
Train Epoch: 5 [0/50000 (0%)] Loss: 2.303104
Train Epoch: 5 [20000/50000 (40%)] Loss: 2.303405
Train Epoch: 5 [40000/50000 (80%)] Loss: 2.301460
Test set: avg cln loss: 2.3023, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3032, adv acc: 1000/10000 (10%)
Train Epoch: 6 [0/50000 (0%)] Loss: 2.303206
Train Epoch: 6 [20000/50000 (40%)] Loss: 2.300870
Train Epoch: 6 [40000/50000 (80%)] Loss: 2.303452
Test set: avg cln loss: 2.3025, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3028, adv acc: 1000/10000 (10%)
Train Epoch: 7 [0/50000 (0%)] Loss: 2.302966
Train Epoch: 7 [20000/50000 (40%)] Loss: 2.302667
Train Epoch: 7 [40000/50000 (80%)] Loss: 2.303157
Test set: avg cln loss: 2.3025, cln acc: 1238/10000 (12%)
Test set: avg adv loss: 2.3028, adv acc: 724/10000 (7%)
Train Epoch: 8 [0/50000 (0%)] Loss: 2.302794
Train Epoch: 8 [20000/50000 (40%)] Loss: 2.302416
Train Epoch: 8 [40000/50000 (80%)] Loss: 2.302629
Test set: avg cln loss: 2.3025, cln acc: 1000/10000 (10%)
Test set: avg adv loss: 2.3027, adv acc: 1000/10000 (10%)
Train Epoch: 9 [0/50000 (0%)] Loss: 2.302886
Train Epoch: 9 [20000/50000 (40%)] Loss: 2.302330
Train Epoch: 9 [40000/50000 (80%)] Loss: 2.302263
```
Thanks!