I am trying to use DeepExplain to get attribution scores for my LSTM inputs directly with LRP.
My inputs are 500-dimensional feature vectors fed into a BiLSTM (with a maximum sequence length of 30), followed by some dense layers and a softmax activation at the end.
Suppose I want to explain an input of sequence length 25, where each of the 25 timesteps is a 500-dimensional feature vector; the sequence is padded to the maximum length of 30. The issue is that my model has no embedding lookup, and I want attribution scores for each feature vector (there are no tokens/words as input to my model).
How do I do this?
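For context, since there is no embedding lookup here, the attributions DeepExplain returns (e.g. from `de.explain('elrp', ...)`) have the same shape as the padded input, `(batch, 30, 500)`. A minimal NumPy sketch of the post-processing I have in mind, assuming a hypothetical attribution array `attr` of that shape and a true length of 25: mask out the padded timesteps and sum over the 500 feature dimensions to get one score per feature vector.

```python
import numpy as np

# Hypothetical attributions, stand-in for what a call like
#   attr = de.explain('elrp', target_tensor, input_tensor, xs)
# would return -- same shape as the padded input: (batch, max_len, feat_dim).
attr = np.random.randn(1, 30, 500)

seq_len = 25  # true (unpadded) length of this example

# Zero out attributions on the padded timesteps.
mask = np.zeros((1, 30, 1))
mask[:, :seq_len, :] = 1.0
attr = attr * mask

# One relevance score per 500-dim feature vector:
# sum the attributions over the feature dimension.
per_step_scores = attr.sum(axis=-1)  # shape (1, 30)

print(per_step_scores.shape)
```

The scores for timesteps 25..29 come out as zero, so the padding contributes nothing to the per-timestep relevance.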