SparseML v0.12.1 Patch Release

@jeanniefinks released this 05 May 18:39
· 2 commits to release/0.12 since this release (d82e3bd)

This is a patch release for 0.12.0 that contains the following changes:

  • Disabling distillation modifiers via `--distillation_teacher disable` no longer crashes Hugging Face Transformers integrations.
  • Distillation modifiers now use `log_softmax` instead of `softmax` for improved numeric stability.
  • Addressed accuracy and performance issues with quantized graphs in image classification and NLP.
  • Fixed crashes when using mixed precision with quantized image classification recipes.
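The `log_softmax` change above follows a standard numerical trick. A minimal, illustrative sketch in plain Python (not SparseML's actual implementation, which relies on framework primitives) of why computing `log(softmax(x))` naively fails while a log-sum-exp formulation stays finite:

```python
import math

def naive_log_softmax(logits):
    # Naive approach: exponentiate raw logits, normalize, then take the log.
    # math.exp overflows for large logits, so this raises OverflowError
    # (or produces inf/nan with floating-point arrays).
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [math.log(e / total) for e in exps]

def stable_log_softmax(logits):
    # Stable approach (log-sum-exp trick): subtract the max logit first,
    # so the largest exponent is exp(0) = 1 and overflow cannot occur.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]

logits = [1000.0, 999.0, 998.0]  # large logits, e.g. unscaled teacher outputs

try:
    naive_log_softmax(logits)
except OverflowError:
    print("naive log(softmax(x)) overflows")

print(stable_log_softmax(logits))  # all values finite
```

The stable version returns the same mathematical result wherever the naive one succeeds, which is why swapping `softmax` for `log_softmax` in a distillation loss changes stability without changing the loss being computed.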