Description
This change transforms the audio dirty-label backdoor attack into a truly robust clean-label attack.
Please include a summary of the change, the motivation, and which issue is fixed. Any dependency changes should also be included.
Fixes # (issue)
Type of change
This class implements a clean-label backdoor attack, i.e., a poisoning attack in which the poisoned samples keep their original, correct labels. The main contributions of this change are as follows:
Robust clean-label backdoor attack!
Please check all relevant options.
Test Configuration:
Checklist
My code follows the style guidelines of this project
This code defines a class "PoisoningAttackCleanLabelBackdoor" that performs a true clean-label, robust backdoor attack. When the poison method is called, it applies the trigger function to the input data and returns the poisoned data with the same clean labels as the original data. An alpha blending factor keeps the attack imperceptible even when the audio trigger has a high volume.
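To illustrate the idea, here is a minimal sketch of alpha-blended clean-label poisoning on audio. The function name `poison_with_clean_labels` and the array shapes are hypothetical and only stand in for the class described above; the point is that the trigger is mixed in with a small alpha and the labels are returned unchanged.

```python
import numpy as np

def poison_with_clean_labels(x, y, trigger, alpha=0.1):
    """Blend an audio trigger into each sample while keeping the original labels.

    x: array of shape (n_samples, n_timesteps), clean audio
    y: labels, returned unchanged (clean-label poisoning)
    trigger: array of shape (n_timesteps,), the backdoor audio trigger
    alpha: blending factor; a small value keeps the trigger imperceptible
           even if the trigger itself is loud.
    """
    # Convex blend: each poisoned sample deviates from the clean one
    # by at most alpha * |trigger - x|, so small alpha bounds audibility.
    x_poisoned = (1.0 - alpha) * x + alpha * trigger  # trigger broadcasts over samples
    return x_poisoned, y  # labels are untouched: this is what makes it clean-label

# Example usage with random stand-in audio (16 kHz, 1-second clips).
x = np.random.randn(4, 16000).astype(np.float32)
y = np.array([0, 1, 0, 1])
trigger = np.random.randn(16000).astype(np.float32)
x_p, y_p = poison_with_clean_labels(x, y, trigger, alpha=0.05)
```

With alpha near zero the poisoned audio is nearly indistinguishable from the original, which is why the attack stays imperceptible regardless of the trigger's own volume.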