Model interpretability and understanding for PyTorch
Feature selector based on a self-selected algorithm, loss function, and validation method
XAI - An eXplainability toolbox for machine learning
ProphitBet is a machine learning soccer bet prediction application. It analyzes team form, computes match statistics, and predicts the outcome of a match using advanced machine learning (ML) methods. The supported algorithms in this application are Neural Networks, Random Forests & Ensemble Models.
Leave One Feature Out Importance
This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors on a given dataset. It can be used for key driver analysis or marginal resource allocation models.
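Shapley Value Regression assigns each predictor its average marginal contribution to the model's R² over all subsets of the other predictors, so the per-predictor values sum exactly to the full model's R². A hedged NumPy sketch of the idea (exponential in the number of predictors, so only viable for small p; function names are illustrative, not this package's API):

```python
from itertools import combinations
from math import factorial
import numpy as np

def r2(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

def shapley_r2(X, y):
    """Shapley value of each predictor: weighted average of its marginal
    R^2 contribution over every subset S of the remaining predictors."""
    p = X.shape[1]
    vals = np.zeros(p)
    for j in range(p):
        rest = [k for k in range(p) if k != j]
        for size in range(p):
            # standard Shapley coalition weight |S|!(p-|S|-1)!/p!
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            for S in combinations(rest, size):
                with_j = r2(X[:, list(S) + [j]], y)
                without = r2(X[:, list(S)], y) if S else 0.0
                vals[j] += w * (with_j - without)
    return vals

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2 * X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=300)  # X[:, 2] irrelevant
sv = shapley_r2(X, y)
# sv sums to the full-model R^2 (the Shapley efficiency property)
```

This averaging over subsets is what distinguishes Shapley Value Regression from simply reading off standardized coefficients, which misattribute importance when predictors are correlated.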
In this project I aim to apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Adding feature_importances_ property to sklearn.cluster.KMeans class
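scikit-learn's KMeans exposes no feature_importances_ of its own. One simple way to define such a score (an assumption for illustration, not necessarily this repo's method) is the between-cluster sum of squares along each feature: coordinates on which the centroids disagree most are the ones driving the clustering. A NumPy-only sketch with a minimal Lloyd's loop:

```python
import numpy as np

def kmeans_fit(X, k, n_iter=50, seed=0):
    """Minimal k-means: farthest-first initialization, then Lloyd's iterations."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):  # next center = point farthest from existing centers
        d = ((X[:, None, :] - np.array(centers)) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
    return centers, labels

def feature_importances(X, centers, labels):
    """Between-cluster sum of squares per feature, normalized to sum to 1:
    a feature is important if the centroids are spread out along it."""
    counts = np.bincount(labels, minlength=len(centers))
    grand = X.mean(0)
    bss = (counts[:, None] * (centers - grand) ** 2).sum(0)
    return bss / bss.sum()

# two clusters separated only along feature 0; feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.3, size=(100, 2)),
               rng.normal([5, 0], 0.3, size=(100, 2))])
centers, labels = kmeans_fit(X, k=2)
imp = feature_importances(X, centers, labels)
# nearly all of the between-cluster variance lies along feature 0
```

The same `feature_importances` function works unchanged on a fitted `sklearn.cluster.KMeans` by passing its `cluster_centers_` and `labels_` attributes.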
Variance-based Feature Importance in Neural Networks
Analyzing the features that lead to heart disease and visualizing model performance and important features using eli5, shap and pdp.
Beta Machine Learning Toolkit
CancelOut is a special layer for deep neural networks that can help identify a subset of relevant input features for streaming or static data.
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Predicted and identified the drivers of Singapore HDB resale prices (2015-2019) with 0.96 R² and $20,000 MAE. Web app deployed with Streamlit for user price prediction.
Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri. Explainability methods and Adversarial Robustness metrics for RNNs for Intrusion Detection Systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl").
Official repository of the paper "Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance", M. Carletti, M. Terzi, G. A. Susto.
This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.
An R package for computing asymmetric Shapley values to assess causality in any trained machine learning model