
Create transparency with respect to the metric’s data and model applicability #279

Merged: 19 commits into main from metric-marking, Jul 3, 2023

Conversation

@annahedstroem (Member) commented Jun 30, 2023

Addressed issues:

Minimum acceptance criteria

  • Specify what is necessary for the PR to be merged with the main branch.
  • @mention the person best suited to review these changes, e.g., @annahedstroem

@annahedstroem changed the title from Metric marking to https://github.com/understandable-machine-intelligence-lab/Quantus/issues/280 (Jun 30, 2023)
@annahedstroem changed the title from https://github.com/understandable-machine-intelligence-lab/Quantus/issues/280 to Create transparency with respect to the metric’s individual data and model applicability (Jun 30, 2023)
@annahedstroem changed the title from Create transparency with respect to the metric’s individual data and model applicability to Create transparency with respect to the metric’s data and model applicability (Jun 30, 2023)
@aaarrti (Collaborator) commented Jun 30, 2023


Just to make sure we are on the same page, the goals of this task are:

  • make it easier for users to understand if their data and metric of choice are compatible
  • systematize the available metrics, pre-processing features, supported models, etc.

Am I right?

(Four resolved review threads on quantus/metrics/base.py, now outdated.)
@annahedstroem (Member, Author) replied:


> Just to make sure we are on the same page, the goals of this task are:
>
>   • make it easier for users to understand if their data and metric of choice are compatible
>   • systematize available metrics, pre-processing features, supported models, etc.
>
> Am I right?

Yes, this is a starting point for this effort. We can then complement this PR with a more in-depth ModelType and DataType checker as the need arises.
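For illustration, such per-metric applicability metadata could be sketched as below. This is a hypothetical sketch, not Quantus's actual API: the enum members, the `data_applicability` property, and the `PixelFlipping` subclass are all assumed names used only to show the pattern under discussion.

```python
from enum import Enum
from typing import Set


class ModelType(Enum):
    """Hypothetical enum of supported model frameworks."""
    TORCH = "torch"
    TENSORFLOW = "tensorflow"


class DataType(Enum):
    """Hypothetical enum of supported data modalities."""
    IMAGE = "image"
    TIMESERIES = "timeseries"
    TABULAR = "tabular"


class Metric:
    """Minimal base class exposing applicability metadata."""

    @property
    def model_applicability(self) -> Set[ModelType]:
        # Default in the base class: all metrics support both frameworks.
        return {ModelType.TORCH, ModelType.TENSORFLOW}

    @property
    def data_applicability(self) -> Set[DataType]:
        # Each concrete metric declares the data modalities it handles.
        raise NotImplementedError


class PixelFlipping(Metric):
    """Illustrative metric that only applies to image data."""

    @property
    def data_applicability(self) -> Set[DataType]:
        return {DataType.IMAGE}


metric = PixelFlipping()
print(DataType.IMAGE in metric.data_applicability)    # True
print(ModelType.TORCH in metric.model_applicability)  # True
```

With metadata like this, a user can check compatibility up front instead of discovering mid-evaluation that a metric does not support their data.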

```python
def evaluation_category(self):
    raise NotImplementedError

@property
```
Collaborator:
This one can safely be implemented in the base class:

```python
@property
def model_applicability(self) -> Set[ModelType]:
    return {ModelType.Torch, ModelType.TensorFlow}
```

Since all metrics support both, afaik.

Member Author:

Sure, can be done.
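A base-class default like the one suggested above could also back a fail-fast compatibility check. The following is an illustrative sketch under assumed names: `assert_model_supported`, the enum members, and the error message are hypothetical, not Quantus's real interface.

```python
from enum import Enum
from typing import Set


class ModelType(Enum):
    """Hypothetical enum of supported model frameworks."""
    TORCH = "torch"
    TENSORFLOW = "tensorflow"


def assert_model_supported(supported: Set[ModelType], model_type: ModelType) -> None:
    """Raise a clear error before evaluation if the framework is unsupported."""
    if model_type not in supported:
        names = sorted(m.value for m in supported)
        raise ValueError(
            f"Metric does not support '{model_type.value}' models; supported: {names}"
        )


# Passes silently: both frameworks are in the default applicability set.
assert_model_supported({ModelType.TORCH, ModelType.TENSORFLOW}, ModelType.TORCH)
```

Raising early with the list of supported frameworks gives users an actionable message rather than an opaque failure deep inside the evaluation loop.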

(Resolved review thread on quantus/metrics/base.py, now outdated.)
@codecov-commenter commented:
Codecov Report

Merging #279 (5a9ee4b) into main (72af154) will increase coverage by 0.49%.
The diff coverage is 96.72%.


```
@@            Coverage Diff             @@
##             main     #279      +/-   ##
==========================================
+ Coverage   92.87%   93.36%   +0.49%
==========================================
  Files          60       62       +2
  Lines        3200     3468     +268
==========================================
+ Hits         2972     3238     +266
- Misses        228      230       +2
```
| Impacted Files | Coverage Δ |
|---|---|
| quantus/functions/loss_func.py | 83.33% <0.00%> (ø) |
| quantus/functions/normalise_func.py | 74.07% <ø> (ø) |
| ...ics/randomisation/model_parameter_randomisation.py | 82.89% <64.70%> (+1.46%) ⬆️ |
| quantus/metrics/faithfulness/irof.py | 98.33% <87.50%> (+0.18%) ⬆️ |
| quantus/metrics/base_perturbed.py | 93.33% <93.33%> (ø) |
| quantus/helpers/enums.py | 100.00% <100.00%> (ø) |
| quantus/metrics/__init__.py | 100.00% <100.00%> (ø) |
| quantus/metrics/axiomatic/completeness.py | 95.12% <100.00%> (+0.83%) ⬆️ |
| quantus/metrics/axiomatic/input_invariance.py | 100.00% <100.00%> (ø) |
| quantus/metrics/axiomatic/non_sensitivity.py | 100.00% <100.00%> (ø) |

... and 31 more

@annahedstroem annahedstroem merged commit b44d542 into main Jul 3, 2023
7 checks passed
@annahedstroem annahedstroem deleted the metric-marking branch November 27, 2023 14:01
3 participants