Attention-based Counterfactual Explanation for Multivariate Time Series (DaWaK 2023)
This repository is a curated collection of resources (keywords, papers, libraries, books, etc.) about counterfactual explanations 🙃
Repository for "Endogenous Macrodynamics in Algorithmic Recourse" (Altmeyer et al., 2023)
Pytorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024)
Code for "Robust counterfactual explanations for random forests"
Diffusion-driven Counterfactual Explanation for Functional MRI (https://arxiv.org/abs/2307.09547)
CELS: Counterfactual Explanation for Time Series Data via Learned Saliency Maps (Big Data 2023)
Creating a pipeline for generating semi-factual and counter-factual explanations for computer vision tasks.
SG-CF: Shapelet-Guided Counterfactual Explanation for Time Series Data (Big Data 2022)
Easiest way to generate counterfactual explanations
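To illustrate what such libraries automate, here is a minimal toy sketch of counterfactual generation: starting from an input, perturb it along the score gradient of a hand-written logistic classifier until the predicted label flips. The weights and inputs are hypothetical, purely for illustration; real tools add sparsity, plausibility, and actionability constraints on top of this idea.

```python
import numpy as np

# Hypothetical logistic classifier (illustration only, not from any repo above).
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    """Hard 0/1 label from a logistic score."""
    return int(1 / (1 + np.exp(-(x @ w + b))) >= 0.5)

def counterfactual(x, max_steps=200, lr=0.05):
    """Nudge x along the linear score's gradient until the label flips."""
    target = 1 - predict(x)
    cf = x.astype(float).copy()
    for _ in range(max_steps):
        if predict(cf) == target:
            return cf
        # The score x @ w + b has gradient w, so step toward the target class.
        cf += lr * w if target == 1 else -lr * w
    return None  # no counterfactual found within the step budget

x = np.array([-1.0, 0.5])   # classified as 0 by the toy model
cf = counterfactual(x)      # a nearby point classified as 1
```

The counterfactual `cf` is the "nearest" flipped input along this search direction; libraries in this list differ mainly in how they define and constrain that search.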
This project implements the paper "Robustness Implies Fairness in Causal Algorithmic Recourse" in the R language.
Official code for "Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach" (WACV 2024).
Motif-guided time series counterfactual explanations (ICPR 2022)
An XGBoost model in Python that classifies whether a customer will cancel their hotel booking. Counterfactuals guided by prototypes, from the Alibi package, are used to explore the minimum changes needed to flip a prediction from canceled to not canceled and vice versa.
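The prototype-guided idea behind that workflow can be sketched in a few lines: pick the nearest training point of the target class as a prototype and interpolate toward it until the prediction flips. The threshold model and the feature values below are hypothetical stand-ins for the XGBoost booking classifier; Alibi's actual `CounterfactualProto` explainer uses an autoencoder-based prototype loss rather than this naive interpolation.

```python
import numpy as np

# Toy threshold model standing in for the booking classifier (1 = canceled).
def predict(x):
    return int(x[0] + 0.5 * x[1] > 1.0)

# Hypothetical training data (two features per customer, illustration only).
X_train = np.array([[0.2, 0.1], [0.3, 0.4], [1.2, 0.9], [1.5, 0.2]])

def prototype_guided_cf(x, target, steps=50):
    """Interpolate toward the nearest training point predicted as `target`."""
    candidates = X_train[[predict(p) == target for p in X_train]]
    proto = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]
    for alpha in np.linspace(0.0, 1.0, steps):
        cf = (1 - alpha) * x + alpha * proto
        if predict(cf) == target:
            return cf  # first (smallest) interpolation that flips the label
    return None

x = np.array([0.2, 0.1])        # predicted 0 (not canceled)
cf = prototype_guided_cf(x, 1)  # minimal interpolation flipping it to canceled
```

Anchoring the search on an in-distribution prototype is what keeps such counterfactuals plausible, rather than producing an adversarial-style point just across the decision boundary.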
Tensorflow implementation of "Born Identity Network: Multi-way Counterfactual Map Generation to Explain a Classifier's Decision"
[Autumn 2022] Specialization project leading up to main thesis in MSc Applied Physics and Mathematics at NTNU.
counterfactual explanations for XGBoost and tree ensemble models - counterfactual reasoning - model interpretability
Official implementation of our work "Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations", published in ACM TOMM 2022.
Experiments for the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". Two explainers are tested against privacy attacks.