Indirect Invisible Poisoning Attacks on Domain Adaptation (updated Jul 12, 2021; Python)
TensorFlow implementation of APT (Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training, SIGIR 2021)
TensorFlow implementation of TrialAttack (Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems, KDD 2021)
A paper collection for federated learning: accepted papers from conferences and journals (2019 to 2021), hot topics, notable research groups, and paper summaries.
A hacking tool for local networks: man-in-the-middle attacks, host scanning, ARP poisoning, and router/DNS poisoning.
The official implementation of the CCS'23 paper Narcissus, a clean-label backdoor attack that needs only three images to poison a face recognition dataset and achieves a 99.89% attack success rate.
Guides on poisoning continuous integration and continuous delivery (CI/CD) pipelines.
Poisoning attack methods against adversarial training algorithms
Workshop on adversarial machine learning (originally in Spanish).
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
A Python library for Secure and Explainable Machine Learning
FedAnil++ is a Privacy-Preserving and Communication-Efficient Federated Deep Learning Model to address non-IID data, privacy concerns, and communication overhead. This repo hosts a simulation for FedAnil++ written in Python.
FedAnil is a secure blockchain-enabled Federated Deep Learning Model to address non-IID data and privacy concerns. This repo hosts a simulation for FedAnil written in Python.
M. Anisetti, C. A. Ardagna, A. Balestrucci, N. Bena, E. Damiani, and C. Y. Yeun, "On the Robustness of Random Forest Against Data Poisoning: An Ensemble-Based Approach," IEEE TSUSC, vol. 8, no. 4.
Official Website of https://github.com/tamlhp/awesome-recsys-poisoning
A test tool that simulates two types of poisoning attacks on an AI model.
This project uses Python and machine learning to classify plant species as poisonous or non-poisonous. It aims to provide an efficient way to identify safe and harmful plants, useful for botanists, hikers, and the agricultural sector.