
[Exporter Refactor] Convert Quantizable Matmul #1219

Conversation

dbogunowicz
Contributor

Adds:

  • convert_quantizable_matmul transform
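For context, a minimal library-free sketch of the idea behind this transform (the node/graph shapes and helper names here are hypothetical stand-ins, not SparseML's actual API): find MatMul nodes whose two inputs each come from a QuantizeLinear parent, and rewrite them to an integer-domain MatMul.

```python
# Hypothetical sketch of a quantizable-MatMul conversion pass.
# Node/graph shapes are illustrative stand-ins, not SparseML's real API.

from dataclasses import dataclass, field


@dataclass
class Node:
    op_type: str
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)


def parent_of(graph, tensor_name):
    """Return the node that produces `tensor_name`, or None."""
    return next((n for n in graph if tensor_name in n.outputs), None)


def convert_quantizable_matmuls(graph):
    """Rewrite MatMul nodes whose inputs both come from QuantizeLinear
    parents into MatMulInteger nodes (quantized, integer-domain matmul)."""
    for node in graph:
        if node.op_type != "MatMul":
            continue
        parents = [parent_of(graph, i) for i in node.inputs]
        if all(p is not None and p.op_type == "QuantizeLinear" for p in parents):
            node.op_type = "MatMulInteger"
    return graph
```

For example, a toy graph `QuantizeLinear(a) -> qa`, `QuantizeLinear(b) -> qb`, `MatMul(qa, qb)` would have its MatMul rewritten to MatMulInteger, while a MatMul fed by an unquantized tensor is left untouched.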

@dbogunowicz dbogunowicz changed the base branch from main to feature/damian/export_pipeline_refactor December 8, 2022 13:42

@corey-nm corey-nm left a comment


easy 🔥


@corey-nm corey-nm left a comment


🚀

bogunowicz@arrival.com and others added 3 commits December 8, 2022 17:43
…efactor' into feature/damian/transform_convert_quantizable_matmul
…efactor' into feature/damian/transform_convert_quantizable_matmul

@bfineran bfineran left a comment


Looks great. One important thing to note: since this is the case where both inputs are activations, we should also check that neither input to the QuantizeLinear parents is an initializer.

# Convert
model = convert_matmul_to_quantized(match, model)
remove_node_and_params_from_graph(model, match.node)
ONNXGraph(model).sort_nodes_topologically()


Optional: I think this can go outside the loop, though it shouldn't matter either way.

Comment on lines +144 to +145
if graph.get_init_by_name(quantize_linear_parent.input[0]):
continue
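The `continue` above skips matches whose QuantizeLinear parent is fed by an initializer (i.e. a stored weight rather than an activation), since this transform only targets the activation-activation case. A stand-alone sketch of that guard (the set-based lookup stands in for `ONNXGraph.get_init_by_name`; names here are illustrative):

```python
# Illustrative sketch of the initializer guard used when matching quantizable
# MatMuls; a name set stands in for ONNXGraph.get_init_by_name.

def both_inputs_are_activations(quantize_parents, initializer_names):
    """Return True only when every QuantizeLinear parent of the MatMul is fed
    by an activation, i.e. none of their first inputs are stored initializers."""
    for parent in quantize_parents:
        if parent["inputs"][0] in initializer_names:
            # Weight-fed branch: not the activation-activation case.
            return False
    return True
```

With `initializer_names = {"weight"}`, a pair of parents reading `x` and `y` passes the guard, while a pair reading `x` and `weight` is rejected and the match is skipped.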

We should note in the docstring that it expects non-initializers as parents.

bogunowicz@arrival.com added 2 commits December 9, 2022 16:01
…efactor' into feature/damian/transform_convert_quantizable_matmul
@dbogunowicz dbogunowicz merged commit aa49d17 into feature/damian/export_pipeline_refactor Dec 9, 2022
@dbogunowicz dbogunowicz deleted the feature/damian/transform_convert_quantizable_matmul branch December 9, 2022 15:04
bfineran pushed a commit that referenced this pull request Dec 29, 2022
…refactor (#1192)

* initial commit

* [Exporter Refactor] `BaseTransform` and `OnnxTransform` (#1210)

* initial commit

* PR comments

* initial commit

* Delete test_fold_identity_initializers.py

* Delete __init__.py

* Delete __init__.py

* Update src/sparseml/exporters/transforms/base_transform.py

* fix docstrings

* few improvements and tests

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Onnx sub graph matching (#1211)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* Addressing review comments

* Removing extra file from merge

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Fixing match initializer logic

* [Exporter Refactor] Fold Identity Initializers Transform (#1194)

* initial commit

* PR comments

* initial commit

* Delete test_fold_identity_initializers.py

* Delete __init__.py

* Delete __init__.py

* Adding onnx graph structural matching

* Styling

* Update src/sparseml/exporters/transforms/base_transform.py

* fix docstrings

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* ready for review

* Update src/sparseml/exporters/transforms/fold_identity_initializers.py

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* some nits according to Bens comments

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* Adding ConstantsToInitializers pass (#1227)

* Adding UnwrapBatchNorms transform (#1230)

* [Export Refactor] Adding InitializersToUint8 transform (#1228)

* Adding InitializersToUint8 transform

* Update src/sparseml/exporters/transforms/initializers_to_uint8.py

* [Exporter Refactor] Quantizable Conv Integer (#1220)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* Update convert_quantizable_conv_integer.py

Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Convert Quantizable Matmul (#1219)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* Delete base_exporter.py

* Update src/sparseml/exporters/transforms/convert_quantizable_matmul.py

* beautify

* check for initializers

* add docstring

Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* fix quality

* [Export Refactor] Adding ConvToQLinearConv transform (#1221)

* Adding ConvToQLinearConv transform

* Responding to review comments

* Respond to reviews

* [Export Refactor] Adding FlattenQParams transform (#1229)

* Adding FlattenQParams transform

* Respond to review

* Adding GemmToMatMulIntegerAddCastMul transform (#1237)

* Adding MatMulToMatMulIntegerAddCastMul transform (#1238)

* Adding FoldReLUQuants transform (#1240)

* Adding PropagateEmbeddingQuantization transform (#1242)

* Adding RemoveDuplicateQuantizeOps transform (#1243)

* [Export Refactor]Adding  GemmToQLinearMatMul transform (#1225)

* Adding ConvToQLinearConv transform

* Responding to review comments

* Adding GemmToQLinearMatMul

* Styling

* [Export Refactor] Quantize QAT Embedding (#1234)

* initial commit

* initial commit

* PR comments

* fix errors

* Apply suggestions from code review

* update helpers

* matching of conv integer pass

* second implementation done, needs some polishing

* Adding match_structure and iter_structural_matches

* Using structural matching for quantizable_conv_integer

* initial commit

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* ready for PR

* beautify

* Delete test_helpers.py

* Delete base_exporter.py

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* [Exporter Refactor] Adding FoldConvDivBn (#1235)

* Adding FoldConvDivBn

* Expanding docstring

* Adding RemoveDuplicateQConvWeights transform (#1244)

* [Export Refactor] Delete Trivial Onnx Adds (#1233)

* initial commit

* get transform into the correct format

* ready for review

* fix naming in test

* Fixing trivial onnx adds

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

* [Exporter Refactor] Adding QuantizeResiduals transform (#1245)

* Adding QuantizeResiduals transform

* Adding tests

* Styling

* Fixing conv-integer transform and add sorting to core ops (#1252)

* Don't print out onnx model on validation error (#1253)

* FoldReluQuants now modifies all children of relu node (#1254)

* Adding shape check for weight comparison in duplicate-qconv-weights (#1255)

* [Exporter Refactor] Adding DeleteRepeatedQdq transform (#1257)

* Adding DeleteRepeatedQdq transform

* Adding unit test for delete repeated qdq

* Using assert_node_type

* Update src/sparseml/exporters/transforms/delete_repeated_qdq.py

* [Exporter Refactor] Adding SkipInputQuantize transform (#1256)

* Adding SkipInputQuantize transform

* add tests

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Fixing matching logic of qlinear transforms (#1251)

* Fixing matching logic of qlinear transforms

* Adding folding of input/output quants to qlinears

* [Exporter Refactor] Base Exporter class and example implementations (#1249)

* Initial commit of exporters

* Styling

* Fixing SkipInputQuantize

* Adding validation methods

* Clean up ONNXToDeepsparse

* Moving TorchToONNX to pytorch

* Adding inplace and saving pre optimized model to ONNXToDeepsparse

* Adding sketch of tests

* Regression tests against simple models

* resnet50 regression tests passing

* resnet50 exporters are all equivalent

* Moving FoldConvDivBn under initializer folding

* Adding yolov5 tests

* Apply suggestions from code review

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Review response

* Adding notes from review

* uncomment asserts... oops

* yolo & resnet tests passing

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* [Exporter Refactor] Adding `any_of` for get_structural_matches and MatchResult to str (#1262)

* Adding any_of and MatchResult to str

* Fixing docstring of get_structural_matches

* Adding add_node_deferred and delete_node_deferred to OnnxTransform (#1263)

* [Exporter Refactor] Standardize trivial transforms (#1264)

* Standardization of some transforms

* Adding logging methods to OnnxTransform class

* [Exporter Refactor] Standardize non core transforms (#1265)

* Standardizing transforms with node removals

* Using log_match

* [Exporter Refactor] Standardizing MatMulToQLinearMatMul (#1266)

* Standardizing MatMulToQLinearMatMul

* Using log_match

* [Exporter Refactor] Standardizing ConvToConvIntegerAddCastMul (#1267)

* Standardizing ConvToConvIntegerAddCastMul

* Using log_match

* [Exporter Refactor] Standardize qlinears (#1268)

* Standardizing qlinear transforms

* Using log_match

* [ExporterRefactor] Standardizing XToMatMulIntegerAddCastMul transforms (#1269)

* Standardizing MatMulIntegerAddCastMul transforms

* Using log_match and any_of

* [Exporter Refactor] Standardize qat embedding (#1270)

* Standardizing QuantizeQATEmbedding

* Add log_match

* Using renamed versions of transforms (#1271)

* Removing unused tests

* [Exporters Refactor] Regression test Bert (#1258)

* initial commit

* Apply suggestions from code review

* Update tests/sparseml/pytorch/test_torch_to_onnx_exporter.py

* Fixing bert exporters

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

* [Exporters Refactor] Fix failing base tests (#1260)

* initial commit

* PR edits

* Delete recipe.yaml

* fix onnx problem

* Fixing torch import issue and numpy attr error

* Another attempt at fixing get_numpy_dtype

* Fix numpy.float usage

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>