
Can differential privacy's protective effect be verified? #108

Open
MrLinNing opened this issue Jul 2, 2023 · 1 comment

Comments

@MrLinNing

Your work is excellent and provides a great verification tool for security and privacy researchers. I would like to ask whether your method can be combined with existing differential privacy defense frameworks, such as Opacus. Would it be possible to create a tutorial demonstrating how to verify the effectiveness of differential privacy in defending against your membership inference attack (MIA)? Thank you!
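
For concreteness, here is a minimal sketch of what such a combination could look like, assuming Opacus's `PrivacyEngine.make_private` API and placeholder model/data (none of this is taken from this repository):

```python
# Hedged sketch, not from this repository: assumes Opacus >= 1.0
# (PrivacyEngine.make_private); the model, data and hyperparameters below
# are placeholders chosen only for illustration.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Stand-in "CIFAR-10-like" member data (replace with the real training split).
x_train = torch.randn(512, 3, 32, 32)
y_train = torch.randint(0, 10, (512,))
train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=64)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Wrap the model/optimizer/loader so training runs DP-SGD.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # more noise -> stronger privacy, usually lower accuracy
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(5):
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))
# The DP-trained model can then be fed to the same MIA pipeline as the
# non-private baseline, and the two ROC curves / AUCs compared.
```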

@MrLinNing
Author

Additionally, there is a puzzling issue in this tutorial. On the CIFAR-10 dataset, although the training accuracy is relatively high (over 80%), the test accuracy is quite poor (below 50%). This is an overfitting phenomenon, and such a model has little practical value. However, if we raise the test accuracy by changing the model architecture or hyperparameters (learning rate, batch size), the resulting MIA ROC curve is almost indistinguishable from random guessing. In that case the MIA seems to become meaningless. How should we understand this situation?
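
To make the link between the generalization gap and attack success concrete, here is a hedged sketch of a simple loss-threshold MIA (in the spirit of Yeom et al., not the attack implemented in this repository); the model and data are stand-ins:

```python
# Hedged sketch: a loss-threshold membership inference baseline, illustrating
# why the ROC collapses toward random guessing once the train/test gap shrinks.
# The untrained toy model and random data are placeholders; substitute the
# trained target model and the real member / non-member splits.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import roc_auc_score

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in target model
member_loader = DataLoader(
    TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))), batch_size=64
)
nonmember_loader = DataLoader(
    TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))), batch_size=64
)

criterion = nn.CrossEntropyLoss(reduction="none")

def per_sample_losses(net, loader):
    net.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            losses.append(criterion(net(x), y))
    return torch.cat(losses)

member_loss = per_sample_losses(model, member_loader)        # training samples
nonmember_loss = per_sample_losses(model, nonmember_loader)  # held-out samples

# Members tend to have lower loss, so use the negative loss as the attack score.
scores = torch.cat([-member_loss, -nonmember_loss]).numpy()
labels = [1] * len(member_loss) + [0] * len(nonmember_loss)

# AUC stays near 0.5 when member and non-member losses overlap (little
# overfitting) and rises as the generalization gap grows.
print("MIA AUC:", roc_auc_score(labels, scores))
```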
