This repository contains the code to run the experiments presented in this paper. The code here is frozen to what it was when we originally wrote the paper. If you're interested in using LIME, check out this repository, where we have packaged it up, improved the code quality, and added visualizations, among other improvements.

Running the commands below should be enough to get all of the results. You need specific versions of python, sklearn, numpy, and scipy. Install the requirements in a virtualenv using:

      pip install -r requirements.txt
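
For example, a minimal setup (a sketch, assuming virtualenv is available on your PATH) might look like:

      # create and activate an isolated environment, then install the pinned dependencies
      virtualenv venv
      source venv/bin/activate
      pip install -r requirements.txt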

If we forgot something, please email the first author.

Experiment in section 5.2:

  • DATASET -> 'multi_polarity_books', 'multi_polarity_kitchen', 'multi_polarity_dvd', 'multi_polarity_electronics'

  • ALGORITHM -> 'l1logreg', 'tree'

  • EXPLAINER -> 'lime', 'parzen', 'greedy' or 'random'

      python evaluate_explanations.py --dataset DATASET --algorithm ALGORITHM --explainer EXPLAINER 
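
To reproduce the full grid, a sketch of a POSIX-shell loop over the values listed above (it uses only the documented options):

      for DATASET in multi_polarity_books multi_polarity_kitchen multi_polarity_dvd multi_polarity_electronics; do
        for ALGORITHM in l1logreg tree; do
          for EXPLAINER in lime parzen greedy random; do
            # one run per dataset/algorithm/explainer combination
            python evaluate_explanations.py --dataset "$DATASET" --algorithm "$ALGORITHM" --explainer "$EXPLAINER"
          done
        done
      done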
    

Experiment in section 5.3:

  • DATASET -> 'multi_polarity_books', 'multi_polarity_kitchen', 'multi_polarity_dvd', 'multi_polarity_electronics'

  • ALGORITHM -> 'logreg', 'random_forest', 'svm', 'tree' or 'embforest', although you would need to set up word2vec for embforest

  • NUM_ROUNDS -> desired number of rounds

      python data_trusting.py -d DATASET -a ALGORITHM -k 10 -u .25 -r NUM_ROUNDS
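
For example, a single run on the books dataset with a random forest, using 100 rounds (100 is an arbitrary example value for NUM_ROUNDS, not a value prescribed by the paper):

      python data_trusting.py -d multi_polarity_books -a random_forest -k 10 -u .25 -r 100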
    

Experiment in section 5.4:

  • NUM_ROUNDS -> Desired number of rounds

  • DATASET -> 'multi_polarity_books', 'multi_polarity_kitchen', 'multi_polarity_dvd', 'multi_polarity_electronics'

  • PICK -> 'submodular' or 'random'

Run the following with the desired number of rounds:

      mkdir out_comparing
    
      python generate_data_for_compare_classifiers.py -d DATASET -o out_comparing/ -k 10 -r NUM_ROUNDS
    
      python compare_classifiers.py -d DATASET -o out_comparing/ -k 10 -n 10 -p PICK
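
For example, on the books dataset with submodular pick and 100 rounds (again, 100 is an arbitrary example value for NUM_ROUNDS; mkdir -p just makes the sequence safe to re-run):

      mkdir -p out_comparing

      python generate_data_for_compare_classifiers.py -d multi_polarity_books -o out_comparing/ -k 10 -r 100

      python compare_classifiers.py -d multi_polarity_books -o out_comparing/ -k 10 -n 10 -p submodular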
    

Religion dataset:

Available here

Multi-polarity datasets:

I got them from here