Akash-07DL/Explainable-AI

What is XAI?

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

As AI becomes more advanced, humans are challenged to comprehend and retrace how an algorithm came to a result. The whole calculation process turns into what is commonly referred to as a "black box" that is impossible to interpret. These black box models are created directly from the data, and not even the engineers or data scientists who create them can explain what exactly is happening inside or how the algorithm arrived at a specific result.

LIME---SIMPLE-IMAGE-CLASSIFICATION-EXPLAINER

What is Local Interpretable Model-Agnostic Explanations (LIME)?

LIME, the acronym for local interpretable model-agnostic explanations, is a technique that approximates any black box machine learning model with a local, interpretable model to explain each individual prediction.

The acronym itself should give you an intuition about the core idea behind LIME. LIME is:

Local: LIME explains your black-box model by approximating its local, linear behavior around one individual prediction.

Interpretable: LIME gives you a way to understand why your model behaves the way it does for that prediction.

Model-agnostic: LIME is model-independent; it can explain any black-box classifier you can think of.
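The three properties above come from one loop: perturb the input, query the black box on each perturbation, weight the samples by proximity to the original input, and fit a weighted linear surrogate whose coefficients are the explanation. Below is a minimal NumPy sketch of that loop on a toy "image" of four superpixels; the `black_box` function, the kernel width, and all other parameter values are illustrative assumptions, not the repository's code (for real images, the `lime` package's `lime_image.LimeImageExplainer` does this over segmented superpixels).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box classifier over an "image" made of 4 superpixels.
# Its score depends mostly on superpixel 2 (an assumed stand-in model).
def black_box(masks):
    return 0.1 * masks[:, 0] + 0.9 * masks[:, 2]

num_samples, num_segments = 500, 4

# 1. Perturb: sample binary masks that switch superpixels on/off.
masks = rng.integers(0, 2, size=(num_samples, num_segments)).astype(float)

# 2. Query the black box on each perturbed sample.
preds = black_box(masks)

# 3. Weight samples by proximity to the original image (all segments on),
#    using an exponential kernel as LIME does.
kernel_width = 2.0
distances = np.sqrt(((masks - 1.0) ** 2).sum(axis=1))
weights = np.exp(-(distances ** 2) / kernel_width ** 2)

# 4. Fit a weighted ridge regression (the interpretable local surrogate)
#    via the normal equations: (X'WX + lam*I) c = X'Wy.
lam = 1e-3
W = np.diag(weights)
A = masks.T @ W @ masks + lam * np.eye(num_segments)
b = masks.T @ W @ preds
coefs = np.linalg.solve(A, b)

# Each coefficient says how much turning that superpixel on moves the
# prediction locally; the largest one marks the most important region.
print("surrogate coefficients:", np.round(coefs, 3))
```

Because the surrogate is fit only on samples near the original input, its coefficients are a local explanation: here they recover superpixel 2 as the dominant region, which is exactly what a LIME image explanation highlights.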

Links:

  1. https://insights.sei.cmu.edu/blog/what-is-explainable-ai/ -- XAI

  2. https://towardsdatascience.com/interpreting-image-classification-model-with-lime-1e7064a2f2e5#:~:text=LIME%20stands%20for%20Local%20Interpretable,data%2C%20images%2C%20or%20texts. -- LIME Image Explainer

  3. https://medium.datadriveninvestor.com/xai-with-lime-for-cnn-models-5560a486578 -- LIME Explainer for Digit Classification