Systematic analysis of the impact of label noise correction on ML Fairness

This Python package implements the empirical methodology proposed in [1] for systematically evaluating how effectively label noise correction techniques ensure the fairness of models trained on biased datasets. The methodology works by manipulating the amount of label noise injected into the data, and it can be applied both to fairness benchmarks and to standard ML datasets. Experiments are tracked with mlflow.
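
To make the methodology concrete, the sketch below is a minimal illustration assuming only numpy and scikit-learn; it does not use this package's API. It injects label noise at several rates into a synthetic dataset, applies a simple confidence-based relabeling rule as a stand-in for the correction methods under evaluation, and compares the accuracy and demographic parity difference of models trained on the noisy versus the corrected labels. The helper names (inject_label_noise, demographic_parity_diff) and the synthetic sensitive attribute are illustrative assumptions, not part of this package.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

def inject_label_noise(y, rate, rng):
    # Flip a fraction `rate` of the binary labels uniformly at random.
    y_noisy = y.copy()
    flip = rng.random(len(y)) < rate
    y_noisy[flip] = 1 - y_noisy[flip]
    return y_noisy

def demographic_parity_diff(y_pred, group):
    # Absolute gap in positive prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, random_state=0)
group = rng.integers(0, 2, size=len(y))  # synthetic sensitive attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

for rate in (0.0, 0.1, 0.2, 0.3):
    y_noisy = inject_label_noise(y_tr, rate, rng)

    # Stand-in correction method: relabel training points on which an
    # out-of-fold model disagrees confidently with the (noisy) label.
    proba = cross_val_predict(RandomForestClassifier(random_state=0),
                              X_tr, y_noisy, cv=5,
                              method="predict_proba")[:, 1]
    y_corrected = np.where(proba > 0.9, 1, np.where(proba < 0.1, 0, y_noisy))

    for name, labels in (("noisy", y_noisy), ("corrected", y_corrected)):
        clf = LogisticRegression(max_iter=1000).fit(X_tr, labels)
        y_pred = clf.predict(X_te)
        print(f"rate={rate:.1f} {name:9s} "
              f"acc={(y_pred == y_te).mean():.3f} "
              f"dp_diff={demographic_parity_diff(y_pred, g_te):.3f}")

In the actual methodology, the relabeling rule would be replaced by the label noise correction technique under study; running the same comparison across noise rates is what the evaluation systematizes.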

Installation

You can install the package using pip:

pip install fair_lnc_evaluation

Usage

Examples of how to use this package can be found in the examples folder.
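
Since experiment tracking is done with mlflow, the parameters and metrics logged by a run can be browsed locally with the standard mlflow UI (assuming the default local tracking store):

mlflow ui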

References

Contributing

Contributions to this package are welcome! If you have bug reports or feature requests, or would like to contribute code improvements, please open an issue or a pull request on the GitHub repository.

License

This package is distributed under the MIT License.


