
[QuantizationModifier] override params for model fuse step #1209

Merged

Conversation

bfineran
Member

@bfineran bfineran commented Dec 5, 2022

  • Ports the model fuse function overrides from the legacy modifier
  • Adds a target_hardware field to QuantizationScheme for tracking and serializing platform-dependent changes

Test plan:

  • changes pass unit tests
  • yaml tests updated
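As a hedged illustration of the second change, a minimal pydantic sketch of a quantization scheme carrying a target_hardware tag is shown below. The field and class names here mirror the PR description, but the exact fields and defaults are assumptions, not the actual sparseml API.

```python
# Sketch only: field names/defaults are assumptions, not the sparseml implementation.
from typing import Optional

from pydantic import BaseModel, Field


class QuantizationArgs(BaseModel):
    """Quantization settings for one tensor type (activations or weights)."""

    num_bits: int = 8
    symmetric: bool = False


class QuantizationScheme(BaseModel):
    """Scheme for a module; target_hardware is serialized alongside the
    quantization args so platform-dependent tweaks can be tracked."""

    input_activations: Optional[QuantizationArgs] = Field(
        default_factory=QuantizationArgs
    )
    weights: Optional[QuantizationArgs] = Field(
        default_factory=lambda: QuantizationArgs(symmetric=True)
    )
    target_hardware: Optional[str] = None  # e.g. "vnni"; hypothetical value


scheme = QuantizationScheme(target_hardware="vnni")
print(scheme.dict())  # target_hardware round-trips through serialization
```

Because target_hardware is a plain model field, it survives dict()/yaml round-trips with no extra serialization code, which is what "tracking and serialization" in the description amounts to.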

@bfineran bfineran self-assigned this Dec 5, 2022
@bfineran bfineran mentioned this pull request Dec 5, 2022
18 tasks
@bfineran bfineran force-pushed the quantization-refactor/calibration-ptq branch from 2435fa1 to fcba7fd Compare December 12, 2022 21:00
Base automatically changed from quantization-refactor/calibration-ptq to quantization-refactor/main December 12, 2022 21:00
@bfineran bfineran force-pushed the quantization-refactor/module-fuse-fn branch from db2be88 to 25808e7 Compare December 12, 2022 21:01
@bfineran bfineran merged commit 6146ade into quantization-refactor/main Dec 12, 2022
@bfineran bfineran deleted the quantization-refactor/module-fuse-fn branch December 12, 2022 21:01
bfineran added a commit that referenced this pull request Dec 19, 2022
* [QuantizationModifier] refactor base - move deprecated code to legacy file, add object routing for yaml load (#1059)

* move existing ModifierQuantization and tests to legacy file

* [QuantizationModifier] refactor base - move deprecated code to legacy file, add object routing for yaml load

* [QuantizationModifier] pydantic classes for defining quantization schemes to generate QConfigs (#1061)

* [QuantizationModifier] pydantic classes for defining quantization schemes to generate QConfigs

* review response

* [WIP][QuantizationModifier] base refactor flow - quantize entire module from QuantizationScheme (#1185)

* [QuantizationModifier] base refactor flow - quantize entire module from QuantizationScheme

* review response

* testing - lifecycle + QAT application

* activate qat tests

* [QuantizationModifier] improved quantization flow - control of propagation with schemes and stronger testing (#1198)

* [QuantizationModifier] exclude_module_types list modifier param to disable module types from quantization (#1199)

* [QuantizationModifier] submodule_schemes property impl - target specific submodules by scheme (#1201)

* [QuantizationModifier] submodule_schemes property impl - target specific submodules by scheme

* generalize helper fn name + quality

* [QuantizationModifier] module_type_schemes - override quantization scheme by layer type (#1202)

* [QuantizationModifier] module_type_schemes - override quantization scheme by layer type

* yaml pydoc example

* [QuantizationModifier] target hardware support (#1203)

* [QuantizationModifier] freeze bn stats and disable observers for QAT finetuning support (#1206)

* [QuantizationModifier] num_calibration_steps support (PTQ) (#1208)

* [QuantizationModifier] override params for model fuse step (#1209)

* [QuantizationModifier] refactor QuantizationScheme to its own file (#1223)

* [QuantizationModifier] QATWrapper support (#1226)

* [QuantizationModifier] logging support (#1231)

* [QuantizationModifier] logging support

* fake quantize bits logging

* [QuantizationModifier] potentially re-load quantization schemes on qconfig load (#1236)

* [QuantizationModifier] UX refactor - submodule_overrides and ignore (#1239)

* rename modifier default_scheme -> scheme

* refactor set_quantization_schemes (tests passing with existing UX)

* exclude_module_types -> ignore ; adds submodule exclusion

* refactor submodule and module type schemes into unified submodule_overrides

* [QuantizationModifier] strict mode - raise if unmatched submodules or types (#1241)

* [QuantizationModifier] take ownership of add_observers_, unit test fixes (#1261)

* [QuantizationModifier] take ownership of add_observers_, unit test fixes

* suggestion from review - with quality override

* review - suggested comment

* fixes for FloatFunctional support (resnet50 broke)

* [rebase] merge in changes to legacy modifier quantization
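The fuse step that this PR lets the modifier override ultimately performs standard torch module fusion. As a hedged sketch (the sequential model and module names below are illustrative, not taken from the PR), fusing a Conv-BN-ReLU chain with torch's public API looks like:

```python
# Illustrative only: shows the kind of fusion a model fuse step performs,
# using torch's public fuse_modules API, not the PR's override mechanism.
import torch
from torch import nn
from torch.ao.quantization import fuse_modules

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.BatchNorm2d(8),
    nn.ReLU(),
).eval()  # conv-bn fusion requires eval mode

# Fuse the named chain "0" -> "1" -> "2" into a single fused module;
# the BatchNorm and ReLU slots are replaced with nn.Identity.
fused = fuse_modules(model, [["0", "1", "2"]])
```

Overriding the fuse function (as the legacy modifier allowed and this PR ports forward) lets a recipe substitute hardware-specific fusion patterns while the rest of the quantization flow stays unchanged.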