Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Extension to multiclass classification and regression #2

Open
wants to merge 9 commits into
base: master
Choose a base branch
from

Conversation

bvanberl
Copy link

Extending MACE for multiclass classification and regression

Overview

It was suggested in the authors' original paper that MACE could be naturally extended to multiclass classification and regression scenarios. I made changes to the repository to support generation of counterfactuals for multiclass and regression problems. Model characteristic formulae and counterfactual formulae were modified to accommodate these scenarios.

Multiclass Classification

The idea of a counterfactual was taken to be an example that is different from the class of the factual outcome. By default, the closest counterfactual from any class is discovered. The user has the option of specifying an alternative predicted class; in that case, the counterfactual formula enforces that the prediction of the counterfactual example matches the class specified by the user. Two benchmark datasets (along with preprocessing scripts) were added: the Iris Data Set and the Poker Data Set.

Regression

A counterfactual was taken to be an example whose prediction is greater than or equal to some prespecified distance r from the factual outcome. For instance, if the factual outcome is 5.0 and r = 0.5, then an acceptable counterfactual outcome is any real number <= 4.5 or >= 5.5. The user can specify r as a command argument. The user can also specify an alternative predicted value o, such that the counterfactual outcome is in the range [o-r, o+r]. Two benchmark datasets (along with preprocessing scripts) were added: the Wine Quality Data Set and the Boston House Prices Data Set.

Issues

I noticed that occasionally a precision discrepancy arose between the PySMT models and the scikit-learn models when using MACE to find counterfactuals in regression and multiclass scenarios. My (temporary) solution was to apply decimal rounding to feature values and slightly lowering the threshold for the characteristic formulae of tree-based models.

Feature Tweaking and Minimum Observable

I updated the FT and MO methods to accommodate multiclass classification and regression models. I wanted to compare those methods with MACE for these problem types. My findings showed that MACE continued to produce closer counterfactuals for these problem types.

Thank you for your great work developing MACE! I am excited to follow future development and evaluation of the method!

Copy link
Collaborator

@amirhk amirhk left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

hi @bvanberl

i'm delighted to see your contribution and thank you for improving the codebase!

after checking the comments, may I ask that you run the githooks before pushing as well? would be good to add some tests as well so we don't break your contributions later :)

also, if you have other suggestions for the code, please feel free to tell us or create another diff!

@@ -127,7 +134,7 @@ def generateExplanations(
raise Exception(f'{approach_string} not recognized as a valid `approach_string`.')


def runExperiments(dataset_values, model_class_values, norm_values, approaches_values, batch_number, sample_count, gen_cf_for, process_id):
def runExperiments(dataset_values, model_class_values, norm_values, approaches_values, batch_number, sample_count, gen_cf_for, process_id, regression_min_diff, outcome):
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

nit; s/outcome/target
for target class

@@ -167,6 +174,8 @@ def runExperiments(dataset_values, model_class_values, norm_values, approaches_v

# save some files
dataset_obj = loadData.loadDataset(dataset_string, return_one_hot = one_hot, load_from_cache = False, debug_flag = False)
if dataset_obj.n_classes > 2 and model_class_string == 'lr':
raise Exception(f'{model_class_string} cannot be used for non-binary ground truth. {dataset_string} dataset has {dataset_obj.n_classes} classes.')
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

+1

@@ -211,6 +220,10 @@ def runExperiments(dataset_values, model_class_values, norm_values, approaches_v
else:
raise Exception(f'{gen_cf_for} not recognized as a valid `gen_cf_for`.')

# if desired counterfactual outcome is specified, remove all examples with that predicted label
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

would you please clarify this for me? not sure i follow

counterfactual_sample = dict(zip(factual_sample.keys(), es_instance))
counterfactual_sample['y'] = counterfactual_label
counterfactual_sample['y'] = ensemble_classifier.predict(es_instance.reshape(1, -1))
distance = normalizedDistance.getDistanceBetweenSamples(
factual_sample,
counterfactual_sample,
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

did not particularly understand the refactoring here; please explain briefly in case i'm missing anything

for i in range(n_classes)
#balanced_data_frame[balanced_data_frame.loc[:,output_col] == 0].sample(number_of_subsamples_in_each_class, random_state = RANDOM_SEED),
#balanced_data_frame[balanced_data_frame.loc[:,output_col] == 1].sample(number_of_subsamples_in_each_class, random_state = RANDOM_SEED),
]).sample(frac = 1, random_state = RANDOM_SEED)
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

good stuff; let's try to remove unused/commented lines (sorry, i had some before as well)

print('[INFO] done.\n', file=log_file)
print('[INFO] done.\n')
assert accuracy_score(y_train, model_trained.predict(X_train)) > 0.70
#assert accuracy_score(y_train, model_trained.predict(X_train)) > 0.70 # TODO uncomment
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

let's keep this in if possible
or replace with a big warning in the console that indicates the model (classifier/regressor) isn't a good model to begin with...

@@ -277,8 +318,8 @@ def lr2formula(model, model_symbols):
])
),
Real(0)),
EqualsOrIff(model_symbols['output']['y']['symbol'], TRUE()),
EqualsOrIff(model_symbols['output']['y']['symbol'], FALSE())
EqualsOrIff(model_symbols['output']['y']['symbol'], Int(1)),
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

necessary?

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants