Create transparency with respect to the metric’s data and model applicability #279
Conversation
Just to make sure we are on the same page, the goals of this task are:
Am I right?
Yes, this is a starting point for this effort. Then we can complement this PR with a more in-depth ModelType and DataType checker as the need arises.
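As a rough illustration of what such a checker could build on, here is a minimal sketch. All names in it (the ModelType/DataType enum members and the check_applicability helper) are hypothetical stand-ins for illustration, not the API introduced by this PR.

from enum import Enum
from typing import Set

class ModelType(Enum):
    # Hypothetical members; the real enum may differ.
    Torch = "torch"
    TensorFlow = "tensorflow"

class DataType(Enum):
    # Hypothetical members; the real enum may differ.
    Image = "image"
    Timeseries = "timeseries"
    Tabular = "tabular"

def check_applicability(
    supported_models: Set[ModelType],
    supported_data: Set[DataType],
    model_type: ModelType,
    data_type: DataType,
) -> None:
    # Fail early if a metric is applied outside its declared scope.
    if model_type not in supported_models:
        raise ValueError(f"Metric does not support models of type {model_type}.")
    if data_type not in supported_data:
        raise ValueError(f"Metric does not support data of type {data_type}.")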
quantus/metrics/base.py
Outdated
def evaluation_category(self):
    raise NotImplementedError

@property
This one can safely be implemented in the base class:

@property
def model_applicability(self) -> Set[ModelType]:
    return {ModelType.Torch, ModelType.TensorFlow}

Since all metrics support both, afaik.
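For context, a self-contained version of that suggestion could look as follows. The imports and the ModelType enum body are filled in here as assumptions for illustration; the actual definitions in the codebase may differ.

from enum import Enum
from typing import Set

class ModelType(Enum):
    # Assumed members, matching the names used in the comment above.
    Torch = "torch"
    TensorFlow = "tensorflow"

class Metric:
    @property
    def model_applicability(self) -> Set[ModelType]:
        # Base-class default: every metric is assumed to support
        # both PyTorch and TensorFlow models.
        return {ModelType.Torch, ModelType.TensorFlow}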
Sure, can be done.
Codecov Report
@@            Coverage Diff             @@
##             main     #279      +/-   ##
==========================================
+ Coverage   92.87%   93.36%   +0.49%
==========================================
  Files          60       62       +2
  Lines        3200     3468     +268
==========================================
+ Hits         2972     3238     +266
- Misses        228      230       +2
Addressed issues:
- Renamed last_results to evaluation_scores
- Renamed all_results to all_evaluation_scores
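One way such a rename can stay backward compatible (a hypothetical sketch only; the PR may simply rename the attributes outright) is to keep the old names as deprecated alias properties:

import warnings

class Metric:
    def __init__(self):
        self.evaluation_scores = []
        self.all_evaluation_scores = []

    @property
    def last_results(self):
        # Deprecated alias; forwards to the new attribute name.
        warnings.warn(
            "'last_results' is deprecated; use 'evaluation_scores' instead.",
            DeprecationWarning,
        )
        return self.evaluation_scores

    @property
    def all_results(self):
        # Deprecated alias; forwards to the new attribute name.
        warnings.warn(
            "'all_results' is deprecated; use 'all_evaluation_scores' instead.",
            DeprecationWarning,
        )
        return self.all_evaluation_scores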
Minimum acceptance criteria