SeqExplainer (Sequence explainability tools)

SeqExplainer is a Python package for interpreting sequence-to-function machine learning models. Most of the core functionality is for post-hoc analysis of a trained model.
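To give a concrete (if hedged) sense of what post-hoc analysis of a sequence model looks like, the sketch below computes per-nucleotide attributions for a toy PyTorch model with Captum, one of the dependencies listed under Requirements. The model architecture, tensor shapes, and choice of InputXGradient are illustrative assumptions for this example, not SeqExplainer's own API.

```python
# Illustrative only: a toy sequence-to-function model plus Captum attributions.
import torch
import torch.nn as nn
from captum.attr import InputXGradient

# Stand-in for a trained model: one-hot sequences (4 x 100) -> scalar output.
model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 100, 1),
)
model.eval()

# One-hot encode a random batch: (batch, 4 channels, sequence length).
seqs = torch.eye(4)[torch.randint(0, 4, (2, 100))].permute(0, 2, 1)

# Gradient-times-input scores highlight positions driving each prediction.
attributions = InputXGradient(model).attribute(seqs)
print(attributions.shape)  # torch.Size([2, 4, 100])
```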

Requirements

The main dependencies of SeqExplainer are:

python
torch
captum
numpy
matplotlib
logomaker
scikit-learn
shap
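These are all available from PyPI, and installing SeqExplainer itself should pull them in (e.g. `pip install seqexplainer`, assuming the distribution name matches the repository name). As a hedged illustration of how two of these dependencies fit together, the sketch below renders a stand-in attribution matrix as a sequence logo with logomaker and matplotlib; the array shape and the A/C/G/T column order are assumptions made for the example, not SeqExplainer's own conventions.

```python
# Illustrative only: visualizing per-base attribution scores as a sequence logo.
import numpy as np
import pandas as pd
import logomaker
import matplotlib.pyplot as plt

# Stand-in for an attribution matrix: 4 bases (A, C, G, T) x 30 positions.
attrs = np.random.randn(4, 30)

# logomaker expects a positions-by-characters DataFrame.
df = pd.DataFrame(attrs.T, columns=["A", "C", "G", "T"])
logo = logomaker.Logo(df)
logo.ax.set_xlabel("Position")
logo.ax.set_ylabel("Attribution")
plt.show()
```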

Contributing

This section was modified from https://github.com/pachterlab/kallisto.

All contributions, including bug reports, documentation improvements, and enhancement suggestions, are welcome. Everyone in the community is expected to abide by our code of conduct.

As we work towards a stable v1.0.0 release, we typically develop on feature branches, which are merged into dev once sufficiently tested. dev is the latest stable development branch.

main is used only for official releases and is considered stable. If you submit a pull request, please make sure it targets dev and NOT main.
