Enhanced MIA #99
With this new version of Privacy Meter, you can reproduce many of the results in the Enhanced MIA paper, as well as in other papers. We will soon add more information, plus access to the code from the older paper.
Okay, thanks. Does this new version support Python >= 3.6? I got this error on Colab when following the installation instructions: Package 'privacy-meter' requires a different Python: 3.8.10 not in '>=3.9.0'
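The constraint in that error can be checked before installing; here is a minimal sketch (the helper name is illustrative, and the `>=3.9.0` bound comes from the error message above, not from inspecting the package):

```python
import sys

# 'privacy-meter' declares Requires-Python >= 3.9.0, per the pip error above.
REQUIRED = (3, 9)

def supports_privacy_meter(version_info=sys.version_info):
    """Return True if this interpreter satisfies the >= 3.9 requirement."""
    return tuple(version_info[:2]) >= REQUIRED

# Colab's default interpreter at the time was 3.8.10, which fails the check:
print(supports_privacy_meter((3, 8, 10)))  # False
print(supports_privacy_meter((3, 9, 0)))   # True
```

Running this before `pip install privacy-meter` makes the failure mode explicit instead of surfacing it as a resolver error.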
Hi @Ty0ng, I wanted to follow up on the issues you reported. We have provided a pointer to the previous Enhanced MIA implementation in the research/readme.md file; please refer to that for further details. Regarding the issues on Colab, we have included a workaround in the tutorial/readme.md file. Could you please check whether the provided solution resolves your problem? If you have any further questions or concerns, please let us know.
Yes, the code worked. Thank you! I have another question about the reference attack. According to the paper, the target dataset used to train the target model should be different from the reference dataset used for the reference models. What happens if the reference dataset is a subset of the target dataset?
Hi @Ty0ng, to evaluate the privacy risk of a machine learning model, it's important to understand the security game being played, as outlined in Section 3.1 of the paper. The privacy loss of the target model with respect to its training dataset is the adversary's success in winning the security game over multiple repetitions. The attack error depends on various factors listed in Section 3.2 of the paper.

Regarding your question: if the reference dataset is a subset of the target dataset, you are giving the adversary additional information and changing the security game. Specifically, in this scenario the adversary's objective would be to infer membership information about the target point given knowledge of a subset of the target model's training dataset. The results of the attack would therefore have a different meaning than the reference attack evaluated in the paper. For a more thorough discussion of this topic, please refer to Section 4 of the paper. I hope this explanation helps.
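The disjointness requirement described above can be made concrete with a small sketch. This is illustrative only (plain NumPy index bookkeeping, not Privacy Meter's actual API):

```python
import numpy as np

# Illustrative setup: draw disjoint target and reference index sets
# from one data pool, as the paper's reference attack assumes.
rng = np.random.default_rng(0)
population = np.arange(10_000)  # indices into the full data pool

perm = rng.permutation(population)
target_idx = perm[:5_000]       # trains the target model
reference_idx = perm[5_000:]    # trains the reference models

# Disjointness keeps the security game unchanged: the reference models
# give the adversary no direct knowledge of the target training set.
overlap = np.intersect1d(target_idx, reference_idx)
print(overlap.size)  # 0
```

If instead `reference_idx` were drawn from inside `target_idx`, the overlap would be nonzero and the attack would be measuring success in a different, easier game than the one evaluated in the paper.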
Hi, what happened to the code/folder for Enhanced MIA?