The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction. The SHAP explanation method computes Shapley values from coalitional game theory. The feature values of a data instance act as players in a coalition.
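A minimal sketch of how such an explanation could be computed with the `shap` package; the dataset, model, and package versions here are illustrative assumptions, not part of this repository's setup.

```python
# Sketch (assumed setup): explaining a single prediction with TreeSHAP.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple model whose predictions we want to explain.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# Each value is the contribution of one feature to this instance's prediction,
# relative to the expected (average) model output.
print(shap_values)
```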
Experiments for the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". Two explainers are tested against privacy attacks.
Individual Conditional Expectation (ICE) plots display one line per instance, showing how that instance's prediction changes as a feature changes. The Partial Dependence Plot (PDP) shows the average effect of a feature; it is a global method because it does not focus on specific instances but on the overall average.
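A minimal sketch of ICE curves together with their PDP average using scikit-learn's `PartialDependenceDisplay`; the dataset, model, and feature choice are assumptions for illustration only.

```python
# Sketch (assumed setup): ICE lines per instance plus their PDP average.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" draws one ICE line per instance and the PDP (their average).
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="both")
plt.show()
```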