
[QuantizationModifier] take ownership of add_observers_, unit test fixes #1261

Merged

Conversation

@bfineran (Contributor) commented Dec 15, 2022

this PR addresses several issues found in testing of the new QuantizationModifier:

  1. fixes a break in the recipe template tests
  2. uses FakeQuantizeBase for inheritance comparisons where possible (it is introduced as the default base class in later versions of torch)
  3. torch 1.9 tests revealed that torch quantization injects its own activation post-process hooks into certain activations with no way to disable them. This PR adds its own add-observers implementation that does not include that override (fixing quantization of Sigmoid and other activations)
  4. previously, the new modifier still deferred to torch for quantization of output activations - this was a known hole, since we try to quantize "everything" while torch mostly sticks to a whitelist. As part of the activation fix in (3), the add-observers implementation now quantizes the output activations of all target modules as determined by QuantizationScheme propagation (see the sketch after this list)
  5. fixes compatibility with the FloatFunctional module used for ResNet quantization

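roughly, the new pass does something like the following (an illustrative sketch, not the exact code from this PR - the helper name, the output_activation_post_process attribute, and the FloatFunctional/FakeQuantizeBase handling shown here are simplified assumptions; the real logic is driven by QuantizationScheme propagation inside the modifier):

```python
import torch
from torch import quantization as torch_quantization
from torch.nn.quantized import FloatFunctional

# prefer the shared FakeQuantizeBase parent class when this torch build exposes
# it; older releases only ship the concrete FakeQuantize class
_FAKE_QUANTIZE_CLS = getattr(
    torch_quantization, "FakeQuantizeBase", torch_quantization.FakeQuantize
)


def attach_output_observer(module: torch.nn.Module):
    # hypothetical helper: attach an output fake-quantize observer built from
    # the module's qconfig so output activations are quantized even for module
    # types that torch's own observer insertion would skip (e.g. Sigmoid)
    if getattr(module, "qconfig", None) is None:
        return  # no quantization scheme was propagated to this module
    if isinstance(module, FloatFunctional):
        # FloatFunctional already routes its outputs through its own
        # activation_post_process; re-wrapping it would double-quantize
        return
    if isinstance(module, _FAKE_QUANTIZE_CLS):
        return  # never observe the observers themselves

    # build the activation fake-quantize from the qconfig factory and register
    # it as a submodule so it shows up in state_dict and module traversal
    observer = module.qconfig.activation()
    module.add_module("output_activation_post_process", observer)

    def _observe_output(mod, inputs, output):
        # a forward hook that returns a value replaces the module's output
        return mod.output_activation_post_process(output)

    module.register_forward_hook(_observe_output)
```

applied over model.modules() after scheme propagation has set per-module qconfigs, a pass like this quantizes the output activations of every targeted module rather than only torch's whitelisted types, which is the intent of item (4).
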
test_plan:

  • tests updated to check that output activation quantization exists and adheres to the configured scheme (an illustrative check follows)
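
a rough illustration of the kind of check the tests make (not the actual test code; output_activation_post_process and the 8-bit default are assumptions carried over from the sketch above):

```python
import torch


def assert_output_quantization(module: torch.nn.Module, expected_bits: int = 8):
    # the module should carry an output fake-quantize observer whose
    # quantization range matches the scheme (8 bits -> 255 steps by default)
    observer = getattr(module, "output_activation_post_process", None)
    assert observer is not None, "output activation observer was not attached"
    assert observer.quant_max - observer.quant_min == 2 ** expected_bits - 1
```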

@bfineran bfineran self-assigned this Dec 15, 2022
@corey-nm (Contributor) commented:

> torch 1.9 tests revealed that torch quantization will inject its own activation post process hooks into certain activations without any way to disable.

lol

@corey-nm (Contributor) left a comment:

lgtm, your version is way easier to understand than pytorch lol 🚀 💪

@bfineran merged commit b805b62 into quantization-refactor/main on Dec 15, 2022
@bfineran deleted the quantization-refactor/gha-patch-1 branch on December 15, 2022 20:48
@bfineran mentioned this pull request on Dec 16, 2022
bfineran added a commit that referenced this pull request Dec 16, 2022
…xes (#1261)

* [QuantizationModifier] take ownership of add_observers_, unit test fixes

* suggestion from review - with quality override

* review - suggested comment

* fixes for FloatFunctional support (resnet50 broke)
bfineran added a commit that referenced this pull request Dec 19, 2022
* [QuantizationModifier] refactor base - move deprecated code to legacy file, add object routing for yaml load (#1059)

* move existing ModifierQuantization and tests to legacy file

* [QuantizationModifier] refactor base - move deprecated code to legacy file, add object routing for yaml load

* [QuantizationModifier] pydantic classes for defining quantization schemes to generate QConfigs (#1061)

* [QuantizationModifier] pydantic classes for defining quantization schemes to generate QConfigs

* review response

* [WIP][QuantizationModifier] base refactor flow - quantize entire module from QuantizationScheme (#1185)

* [QuantizationModifier] base refactor flow - quantize entire module from QuantizationScheme

* review response

* testing - lifecycle + QAT application

* activate qat tests

* [QuantizationModifier] improved quantization flow - control of propagation with schemes and stronger testing (#1198)

* [QuantizationModifier] exclude_module_types list modifier param to disable module types from quantization (#1199)

* [QuantizationModifier] submodule_schemes property impl - target specific submodules by scheme (#1201)

* [QuantizationModifier] submodule_schemes property impl - target specific submodules by scheme

* generalize helper fn name + quality

* [QuantizationModifier] module_type_schemes - override quantization scheme by layer type (#1202)

* [QuantizationModifier] module_type_schemes - override quantization scheme by layer type

* yaml pydoc example

* [QuantizationModifier] target hardware support (#1203)

* [QuantizationModifier] freeze bn stats and disable observers for QAT finetuning support (#1206)

* [QuantizationModifier] num_calibration_steps support (PTQ) (#1208)

* [QuantizationModifier] override params for model fuse step (#1209)

* [QuantizationModifier] refactor QuantizationScheme to its own file (#1223)

* [QuantizationModifier] QATWrapper support (#1226)

* [QuantizationModifier] logging support (#1231)

* [QuantizationModifier] logging support

* fake quantize bits logging

* [QuantizationModifier] potentially re-load quantization schemes on qconfig load (#1236)

* [QuantizationModifier] UX refactor - submodule_overrides and ignore (#1239)

* rename modifier default_scheme -> scheme

* refactor set_quantization_schemes (tests passing with existing UX)

* exclude_module_types -> ignore ; adds submodule exclusion

* refactor submodule and module type schemes into unified submodule_overrides

* [QuantizationModifier] strict mode - raise if unmatched submodules or types (#1241)

* [QuantizationModifier] take ownership of add_observers_, unit test fixes (#1261)

* [QuantizationModifier] take ownership of add_observers_, unit test fixes

* suggestion from review - with quality override

* review - suggested comment

* fixes for FloatFunctional support (resnet50 broke)

* [rebase] merge in changes to legacy modifier quantization