
[Exporter Refactor] Onnx sub graph matching #1211

Merged
merged 9 commits into feature/damian/export_pipeline_refactor from onnx-matching on Dec 8, 2022

Conversation

@corey-nm (Contributor) commented Dec 7, 2022

This adds a function (get_structural_matches) that allows us to find all sub-graphs in an ONNX graph that match a specific structure. Much of the code in the existing passes exists just to match sub-graphs, and it all follows a similar pattern. This function lets us compactly and explicitly specify the structure we want to match against. It also retrieves the actual nodes for us and makes using them simple.

See docstrings for more information.
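As a rough illustration, a pass might use it like this. This is a minimal sketch: the matching module path comes from this PR's diff, but the ONNXGraph import path, the MatMul/QuantizeLinear pattern, and the `match.node.name` access are illustrative assumptions.

```python
import onnx

from sparseml.exporters.transforms.utils.matching import get_structural_matches
from sparseml.onnx.utils import ONNXGraph

model = onnx.load("model.onnx")  # hypothetical model path
graph = ONNXGraph(model)

# Find every MatMul whose two inputs each come from a DequantizeLinear
# and whose output feeds into a QuantizeLinear.
matches = get_structural_matches(
    graph,
    op_type="MatMul",
    parent_ops=[["DequantizeLinear"], ["DequantizeLinear"]],
    children_ops=[["QuantizeLinear"]],
)
for match in matches:
    # the matched nodes are retrieved for us; no manual graph walking needed
    print(match.node.name)
```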

Testing plan

Unit tests; we also converted two complex passes to use this function (see #1200), which are covered by unit tests.

@corey-nm corey-nm changed the base branch from main to feature/damian/export_pipeline_refactor December 7, 2022 14:30
@dbogunowicz (Contributor) left a comment:

Sweet

@dbogunowicz changed the title from "Onnx sub graph matching" to "[Export Refactor] Onnx sub graph matching" Dec 7, 2022
@dbogunowicz changed the title from "[Export Refactor] Onnx sub graph matching" to "[Exporter Refactor] Onnx sub graph matching" Dec 7, 2022
@bfineran (Contributor) left a comment:

Looks amazing @corey-nm; most comments are around structure.

Also suggest adding support for "or"s, maybe by handling tuples in this case, where something like ("Gemm", "MatMul") would come in handy.
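(An any_of helper along these lines was added later in #1262, per the commit list below. A hypothetical sketch of the suggested "or"; the exact call shape is an assumption:)

```python
# hypothetical: match nodes that are either a Gemm or a MatMul
matches = get_structural_matches(
    graph,
    op_type=any_of("Gemm", "MatMul"),
)
```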

Two resolved review threads on src/sparseml/exporters/transforms/utils/matching.py (outdated).

def _match_op_type(match: MatchResult, node: NodeOrInit, op_type: str) -> bool:
op_type, is_optional = _is_optional_node(op_type)

Contributor:

Small overall nit: the logic flow might be a little cleaner splitting by type instead of going by negations (for example, here we need to check for initializer stuff when we really just care about node protos).
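A sketch of the suggested flow, branching on the concrete type up front; INITIALIZER is the initializer-pattern sentinel mentioned later in this thread, and the helper name here is hypothetical:

```python
import onnx

def _match_op_type_by_type(node, op_type) -> bool:
    # split by concrete type first instead of negating checks
    if isinstance(node, onnx.TensorProto):
        # initializers have no op_type; only the INITIALIZER pattern can match them
        return op_type is INITIALIZER
    # from here on, node is a NodeProto and we only care about its op_type
    return node.op_type == op_type
```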


parents = graph.get_node_parents(node)
assert len(parents) == len(node.input)
for parent, expected_op_sequence in zip(parents, parent_ops):
Contributor:

Do we want to protect more against user input error here? It might be especially tricky for ops with lots of inputs (think maybe QLinearConv). Is it a requirement to have an entry for each input?

@corey-nm (Author) replied Dec 8, 2022:

It's actually necessary not to match exactly against length; e.g. "QuantizeLinear" has 3 inputs, but with our recursion we only ever match against 1 input.

So the only checks right now just ensure that the length is at least the length of the actual parents.

Contributor:

Ah, I see; so the only issue would be if we want to skip over any of the first k inputs to match against later ones, in which case they'd need to add a placeholder [].

Like, if we want to match a QuantizeLinear with an initializer zero point, for example, it would be [[], [], [INITIALIZER]]?

@corey-nm (Author) replied:

Yep exactly!
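Spelled out, that placeholder pattern would look roughly like this (a sketch; INITIALIZER is the sentinel from the matching module, and the input ordering follows the ONNX QuantizeLinear spec):

```python
# Match QuantizeLinear nodes whose zero point (input 2) is an initializer;
# the empty lists are placeholders for inputs we don't constrain.
matches = get_structural_matches(
    graph,
    op_type="QuantizeLinear",
    parent_ops=[
        [],             # input 0 (x): unconstrained
        [],             # input 1 (y_scale): unconstrained
        [INITIALIZER],  # input 2 (y_zero_point): must be an initializer
    ],
)
```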

assert len(matches) == 0

matches = get_structural_matches(onnx_graph, op_type="Identity")
assert len(matches) == 2
Contributor:

Any reason not to assert == against the actual expected lists, or even expected match objects? That way we could pull everything into a parameterized test function.

@corey-nm (Author) replied:

Since the elements of parents/children are nodes, I wasn't sure how to match against them without transforming them in some way.

Contributor:

Makes sense. We can still combine this logic if we accept a list of expected node names, parent names, and children names. Not necessary now if crunched for time, but it will make refactors and adding new tests a lot easier.

@corey-nm (Author) replied:

Yeah, I like that idea; will do when more rigorous testing is added for this function.
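For reference, a hedged sketch of what that parameterized test might look like; the onnx_graph fixture, the node names, and the MatchResult.node field are assumptions:

```python
import pytest

from sparseml.exporters.transforms.utils.matching import get_structural_matches

@pytest.mark.parametrize(
    "op_type,expected_names",
    [
        ("Identity", ["id1", "id2"]),  # hypothetical node names in the fixture graph
        ("MatMul", []),                # no MatMul nodes expected here
    ],
)
def test_structural_matches(onnx_graph, op_type, expected_names):
    matches = get_structural_matches(onnx_graph, op_type=op_type)
    # compare plain name lists so match objects don't need a custom __eq__
    assert [m.node.name for m in matches] == expected_names
```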

@bfineran (Contributor) left a comment:

LGTM pending ACK of comments


@corey-nm corey-nm merged commit 0168571 into feature/damian/export_pipeline_refactor Dec 8, 2022
@corey-nm corey-nm deleted the onnx-matching branch December 8, 2022 16:17
bfineran pushed a commit that referenced this pull request Dec 29, 2022
…refactor (#1192)

* initial commit

* [Exporter Refactor] `BaseTransform` and `OnnxTransform` (#1210)

* initial commit

* PR comments

* initial commit

* Delete test_fold_identity_initializers.py

* Delete __init__.py

* Delete __init__.py

* Update src/sparseml/exporters/transforms/base_transform.py

* fix docstrings

* few improvements and tests

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Onnx sub graph matching (#1211)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* Addressing review comments

* Removing extra file from merge

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Fixing match initializer logic

* [Exporter Refactor] Fold Identity Initializers Transform (#1194)

* initial commit

* PR comments

* initial commit

* Delete test_fold_identity_initializers.py

* Delete __init__.py

* Delete __init__.py

* Adding onnx graph structural matching

* Styling

* Update src/sparseml/exporters/transforms/base_transform.py

* fix docstrings

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* ready for review

* Update src/sparseml/exporters/transforms/fold_identity_initializers.py

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* some nits according to Ben's comments

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* Adding ConstantsToInitializers pass (#1227)

* Adding UnwrapBatchNorms transform (#1230)

* [Export Refactor] Adding InitializersToUint8 transform (#1228)

* Adding InitializersToUint8 transform

* Update src/sparseml/exporters/transforms/initializers_to_uint8.py

* [Exporter Refactor] Quantizable Conv Integer (#1220)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* Update convert_quantizable_conv_integer.py

Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Convert Quantizable Matmul (#1219)

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* Delete base_exporter.py

* Update src/sparseml/exporters/transforms/convert_quantizable_matmul.py

* beautify

* check for initializers

* add docstring

Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* fix quality

* [Export Refactor] Adding ConvToQLinearConv transform (#1221)

* Adding ConvToQLinearConv transform

* Responding to review comments

* Respond to reviews

* [Export Refactor] Adding FlattenQParams transform (#1229)

* Adding FlattenQParams transform

* Respond to review

* Adding GemmToMatMulIntegerAddCastMul transform (#1237)

* Adding MatMulToMatMulIntegerAddCastMul transform (#1238)

* Adding FoldReLUQuants transform (#1240)

* Adding PropagateEmbeddingQuantization transform (#1242)

* Adding RemoveDuplicateQuantizeOps transform (#1243)

* [Export Refactor] Adding GemmToQLinearMatMul transform (#1225)

* Adding ConvToQLinearConv transform

* Responding to review comments

* Adding GemmToQLinearMatMul

* Styling

* [Export Refactor] Quantize QAT Embedding (#1234)

* initial commit

* initial commit

* PR comments

* fix errors

* Apply suggestions from code review

* update helpers

* matching of conv integer pass

* second implementation done, needs some polishing

* Adding match_structure and iter_structural_matches

* Using structural matching for quantizable_conv_integer

* initial commit

* Adding onnx graph structural matching

* Styling

* Adding missing init.py

* Update src/sparseml/exporters/transforms/utils/matching.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Updating docstring of structural_matches

* Adding __all__

* initial commit

* ready for PR

* beautify

* Delete test_helpers.py

* Delete base_exporter.py

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* [Exporter Refactor] Adding FoldConvDivBn (#1235)

* Adding FoldConvDivBn

* Expanding docstring

* Adding RemoveDuplicateQConvWeights transform (#1244)

* [Export Refactor] Delete Trivial Onnx Adds (#1233)

* initial commit

* get transform into the correct format

* ready for review

* fix naming in test

* Fixing trivial onnx adds

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

* [Exporter Refactor] Adding QuantizeResiduals transform (#1245)

* Adding QuantizeResiduals transform

* Adding tests

* Styling

* Fixing conv-integer transform and add sorting to core ops (#1252)

* Don't print out onnx model on validation error (#1253)

* FoldReluQuants now modifies all children of relu node (#1254)

* Adding shape check for weight comparison in duplicate-qconv-weights (#1255)

* [Exporter Refactor] Adding DeleteRepeatedQdq transform (#1257)

* Adding DeleteRepeatedQdq transform

* Adding unit test for delete repeated qdq

* Using assert_node_type

* Update src/sparseml/exporters/transforms/delete_repeated_qdq.py

* [Exporter Refactor] Adding SkipInputQuantize transform (#1256)

* Adding SkipInputQuantize transform

* add tests

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>

* [Exporter Refactor] Fixing matching logic of qlinear transforms (#1251)

* Fixing matching logic of qlinear transforms

* Adding folding of input/output quants to qlinears

* [Exporter Refactor] Base Exporter class and example implementations (#1249)

* Initial commit of exporters

* Styling

* Fixing SkipInputQuantize

* Adding validation methods

* Clean up ONNXToDeepsparse

* Moving TorchToONNX to pytorch

* Adding inplace and saving pre optimized model to ONNXToDeepsparse

* Adding sketch of tests

* Regression tests against simple models

* resnet50 regression tests passing

* resnet50 exporters are all equivalent

* Moving FoldConvDivBn under initializer folding

* Adding yolov5 tests

* Apply suggestions from code review

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Review response

* Adding notes from review

* uncomment asserts... oops

* yolo & resnet tests passing

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* [Exporter Refactor] Adding `any_of` for get_structural_matches and MatchResult to str (#1262)

* Adding any_of and MatchResult to str

* Fixing docstring of get_structural_matches

* Adding add_node_deferred and delete_node_deferred to OnnxTransform (#1263)

* [Exporter Refactor] Standardize trivial transforms (#1264)

* Standardization of some transforms

* Adding logging methods to OnnxTransform class

* [Exporter Refactor] Standardize non core transforms (#1265)

* Standardizing transforms with node removals

* Using log_match

* [Exporter Refactor] Standardizing MatMulToQLinearMatMul (#1266)

* Standardizing MatMulToQLinearMatMul

* Using log_match

* [Exporter Refactor] Standardizing ConvToConvIntegerAddCastMul (#1267)

* Standardizing ConvToConvIntegerAddCastMul

* Using log_match

* [Exporter Refactor] Standardize qlinears (#1268)

* Standardizing qlinear transforms

* Using log_match

* [ExporterRefactor] Standardizing XToMatMulIntegerAddCastMul transforms (#1269)

* Standardizing MatMulIntegerAddCastMul transforms

* Using log_match and any_of

* [Exporter Refactor] Standardize qat embedding (#1270)

* Standardizing QuantizeQATEmbedding

* Add log_match

* Using renamed versions of transforms (#1271)

* Removing unused tests

* [Exporters Refactor] Regression test Bert (#1258)

* initial commit

* Apply suggestions from code review

* Update tests/sparseml/pytorch/test_torch_to_onnx_exporter.py

* Fixing bert exporters

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

* [Exporters Refactor] Fix failing base tests (#1260)

* initial commit

* PR edits

* Delete recipe.yaml

* fix onnx problem

* Fixing torch import issue and numpy attr error

* Another attempt at fixing get_numpy_dtype

* Fix numpy.float usage

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>

Co-authored-by: bogunowicz@arrival.com <bogunowicz@arrival.com>
Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Corey Lowman <corey@neuralmagic.com>