The open-sourced Python toolbox for backdoor attacks and defenses.
Neural Network Verification Software Tool
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Open-source framework for uncertainty and deep learning models in PyTorch 🌱
[ICML 2022 Long Talk] Official PyTorch implementation of "To Smooth or Not? When Label Smoothing Meets Noisy Labels"
A project that adds scalable, state-of-the-art out-of-distribution detection (open set recognition) support by changing two lines of code. Performs efficient inference (no increase in inference time) and detection without a drop in classification accuracy, hyperparameter tuning, or collecting additional data.
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Framework for Adversarial Malware Evaluation.
PyTorch package to train and audit ML models for Individual Fairness
Papers and online resources related to machine learning fairness
MERLIN is a global, model-agnostic, contrastive explainer for any tabular or text classifier. It provides contrastive explanations of how the behaviour of two machine learning models differs.
[Findings of EMNLP 2022] Holistic Sentence Embeddings for Better Out-of-Distribution Detection
SyReNN: Symbolic Representations for Neural Networks
Privacy-Preserving Machine Learning (PPML) Tutorial
Code from PLDI '21 paper "Provable Repair of Deep Neural Networks."
Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022)
A project that improves out-of-distribution detection (open set recognition) and uncertainty estimation by changing a few lines of code in your project. Performs efficient inference (no increase in inference time) without repetitive model training, hyperparameter tuning, or collecting additional data.
TRIAGE: Characterizing and auditing training data for improved regression (NeurIPS 2023)
A project for training a model from scratch, or fine-tuning a pretrained one, with the losses provided in this library to improve out-of-distribution detection and uncertainty estimation. Calibrate your model to produce better uncertainty estimates, and detect out-of-distribution data using the chosen score type and threshold.
Morphence: An implementation of a moving-target defense against adversarial example attacks, demonstrated on image classification models trained on MNIST and CIFAR-10.