Lime with Relative Input Stability and Local Lipschitz Estimate #325

Open · ddurmaz97 opened this issue Jan 18, 2024 · 1 comment
@ddurmaz97
Description

I used the tabular "Adult Census" dataset with my MLP model and wanted to evaluate the LIME XAI method with the Quantus metrics.
Here is the MLP model:

```python
import numpy as np
import torch
import torch.nn as nn

# Define the MLP model
class MLPModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLPModel, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.fc2(x)
        return self.softmax(x)
```

```python
# Instantiate the model with appropriate input, hidden, and output sizes
input_size = train_features.shape[1]
hidden_size = 64
output_size = 2

mlp_model = MLPModel(input_size, hidden_size, output_size)

# Define the loss function and optimizer
# (note: nn.CrossEntropyLoss applies log-softmax internally, and the model
# above already applies nn.Softmax in forward())
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(mlp_model.parameters(), lr=0.01)

# Training parameters
num_epochs = 500

# Training loop
for epoch in range(num_epochs):
    # Set the model to training mode
    mlp_model.train()

    # Convert train features and labels to tensors
    train_input_tensor = torch.from_numpy(train_features.astype(np.float32))
    train_input_tensor = train_input_tensor.type(torch.FloatTensor)
    train_labels_tensor = torch.from_numpy(train_labels.astype(np.int64))

    # Zero the gradients
    optimizer.zero_grad()

    # Forward pass
    train_outputs = mlp_model(train_input_tensor)

    # Calculate the loss
    loss = criterion(train_outputs, train_labels_tensor)

    # Backward pass and optimization
    loss.backward()
    optimizer.step()

    # Print the loss for every epoch
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item()}')

# Set the model to evaluation mode
mlp_model.eval()

# Test the model on the test set
test_input_tensor = torch.from_numpy(test_features.astype(np.float32))
test_input_tensor = test_input_tensor.type(torch.FloatTensor)
out_probs = mlp_model(test_input_tensor).detach().numpy()
out_classes = np.argmax(out_probs, axis=1)

# Convert test labels to a long tensor
test_labels_tensor = torch.from_numpy(test_labels.astype(np.int64))
```

Then I tried to use the Relative Input Stability metric:

```python
import quantus

metric = quantus.RelativeInputStability()
scores_L_RIS = metric(
    model=mlp_model,
    x_batch=test_features,
    y_batch=test_labels,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Lime"},
)
scores_L_RIS
```
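For reference, the metric's `__call__` also accepts precomputed attributions via `a_batch`, as its signature in the traceback below shows, so the explanations can be generated and inspected before scoring. A minimal sketch, assuming the variables above:

```python
# Minimal sketch: precompute attributions and pass them via a_batch
# (a_batch appears in RelativeInputStability.__call__'s signature below).
a_batch = quantus.explain(
    mlp_model,
    test_features.astype(np.float32),
    test_labels.astype(np.int64),
    **{"method": "Lime"},
)
print(a_batch.shape, a_batch.min(), a_batch.max())  # inspect before scoring

scores_L_RIS = metric(
    model=mlp_model,
    x_batch=test_features,
    y_batch=test_labels,
    a_batch=a_batch,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Lime"},
)
```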

I always get this error when I run this code with the Lime method:

```
AssertionError                            Traceback (most recent call last)
Cell In[33], line 2
      1 metric = quantus.RelativeInputStability()
----> 2 scores_L_RIS= metric(
      3     model=mlp_model,
      4     x_batch=test_features,
      5     y_batch=test_labels,
      6     explain_func=quantus.explain,
      7     explain_func_kwargs={"method": "Lime"}
      8 )
      9 scores_L_RIS

File ~/miniforge3/envs/denizdurmaz/lib/python3.8/site-packages/quantus/metrics/robustness/relative_input_stability.py:211, in RelativeInputStability.__call__(self, model, x_batch, y_batch, model_predict_kwargs, explain_func, explain_func_kwargs, a_batch, device, softmax, channel_first, batch_size, **kwargs)
    156 def __call__(  # type: ignore
    157     self,
    158     model: tf.keras.Model | torch.nn.Module,
   (...)
    169     **kwargs,
    170 ) -> List[float]:
    171     """
    172     For each image x:
    173      - Generate num_perturbations perturbed xs in the neighborhood of x.
   (...)
    209     float in case return_aggregate=True, otherwise np.ndarray of floats
...
    193     "metrics rely on ordering."
    194     "Recompute the explanations."
    195 )

AssertionError: The elements in the attribution vector are all equal to zero, which may cause inconsistent results since many metrics rely on ordering. Recompute the explanations.
```

I get the same error when applying the Local Lipschitz Estimate metric (see the equivalent call sketched below). Other XAI methods, such as GradientShap and DeepLift, do not raise this error.
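For reference, a minimal sketch of that equivalent call, assuming the `quantus.LocalLipschitzEstimate` metric class and the same inputs as above:

```python
# Equivalent call with the Local Lipschitz Estimate metric
# (assumption: quantus.LocalLipschitzEstimate; same inputs as the RIS call).
metric = quantus.LocalLipschitzEstimate()
scores_LLE = metric(
    model=mlp_model,
    x_batch=test_features,
    y_batch=test_labels,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Lime"},
)
```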

Could you please help me with this issue?

@annahedstroem (Member)

Hi @ddurmaz97, thanks for your question!

It looks like you have an issue with the explanation method itself.

For validation, can you please use `quantus.explain` to generate the LIME explanations directly and inspect them? Check that they are not just zeros, and update the hyperparameters of your explanation function if they are.

E.g., you can run

```python
explanations = quantus.explain(
    model,
    inputs,
    targets,
    **{"method": "Lime", "xai_lib": "captum", "xai_lib_kwargs": {<ALL YOUR KWARGS TO PASS TO CAPTUM LIME>}},
)
```

and look into the explanations!
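For a concrete starting point, here is a minimal sketch of that check, assuming the variables from the issue; `n_samples` is a `captum.attr.Lime.attribute` keyword, raised here on the guess that too few perturbation samples can produce all-zero LIME attributions on tabular data:

```python
import numpy as np
import quantus

# Generate LIME explanations directly for inspection
# (assumes mlp_model, test_features, test_labels from the issue above).
explanations = quantus.explain(
    mlp_model,
    test_features.astype(np.float32),
    test_labels.astype(np.int64),
    **{
        "method": "Lime",
        "xai_lib": "captum",
        # n_samples is forwarded to captum.attr.Lime.attribute; 200 is a guess.
        "xai_lib_kwargs": {"n_samples": 200},
    },
)

# Flag samples whose attribution vector is identically zero.
flat = explanations.reshape(len(explanations), -1)
zero_rows = np.where(~flat.any(axis=1))[0]
print(f"{len(zero_rows)} of {len(flat)} explanations are all zero")
```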

@annahedstroem self-assigned this Jan 23, 2024