Support tuple inputs in extract_submodel #2267

Open · wants to merge 3 commits into main
Conversation

@smpanaro (Contributor) commented Jul 7, 2024:

In the extract_submodel debugging util, the input reachability check doesn't handle ops that take list/tuple inputs, e.g. concat and stack.

For example, this script:

import coremltools as ct
from coremltools.converters.mil.debugging_utils import extract_submodel
import torch
from torch import nn
import numpy as np

class Net(nn.Module):
    def forward(self, x):
        x = x * x

        chunks = x.chunk(5, dim=-1)
        transformed = []
        for i in range(len(chunks)):
            transformed.append(chunks[i] * i)

        x = torch.cat(transformed, dim=-1)
        x = x ** 0.5
        return x

sample_input = torch.randn(1,32,1,512)
full_model = ct.convert(torch.jit.trace(Net().eval(), sample_input),
                        inputs=[ct.TensorType(shape=sample_input.shape, dtype=np.float16)],
                        minimum_deployment_target=ct.target.iOS16,
                        convert_to="mlprogram")
print("Full model:")
print(full_model._mil_program)

submodel = extract_submodel(full_model, outputs=["var_22"], inputs=["x_cast_fp16"])
print("Submodel:")
print(submodel._mil_program)

On 8.0b1:

Full model:

main[CoreML6](%x_1: (1, 32, 1, 512, fp16)(Tensor)) {
  block0() {
    %x_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = mul(x=%x_1, y=%x_1, name="x_cast_fp16")
    %var_3_cast_fp16_0: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_1: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_2: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_3: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_4: (1, 32, 1, 100, fp16)(Tensor) = split(x=%x_cast_fp16, split_sizes=[103, 103, 103, 103, 100], axis=-1, name="op_3_cast_fp16")
    %var_9_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_0, y=0.0, name="op_9_cast_fp16")
    %var_13_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_2, y=2.0, name="op_13_cast_fp16")
    %var_15_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_3, y=3.0, name="op_15_cast_fp16")
    %var_17_cast_fp16: (1, 32, 1, 100, fp16)(Tensor) = mul(x=%var_3_cast_fp16_4, y=4.0, name="op_17_cast_fp16")
    %var_20_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = concat(values=(%var_9_cast_fp16, %var_3_cast_fp16_1, %var_13_cast_fp16, %var_15_cast_fp16, %var_17_cast_fp16), axis=-1, interleave=False, name="op_20_cast_fp16")
    %var_22: (1, 32, 1, 512, fp16)(Tensor) = pow(x=%var_20_cast_fp16, y=0.5, name="op_22_cast_fp16")
  } -> (%var_22)
}

Traceback (most recent call last):
  File "/[removed]/submodel.py", line 28, in <module>
    submodel = extract_submodel(full_model, outputs=["var_22"], inputs=["x_cast_fp16"])
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/debugging_utils.py", line 168, in extract_submodel
    validate_inputs(func, input_vars)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/debugging_utils.py", line 85, in validate_inputs
    raise ValueError(f"output {output} not reachable from inputs")
ValueError: output var_22 not reachable from inputs
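The membership test behind this error (shown in the validate_inputs diff later in this PR) checks each entry of op.inputs directly against the set of reachable vars. For concat, the `values` entry is a tuple of vars, so the tuple itself never matches and everything downstream appears unreachable. A minimal stand-alone simulation, using plain strings as hypothetical stand-ins for MIL Var objects (not the actual coremltools classes):

```python
reachable_vars = {"x_cast_fp16", "var_9", "var_13"}

# mul: every input is a single var, so the membership test works.
mul_inputs = {"x": "x_cast_fp16", "y": "var_9"}
assert all(x in reachable_vars for x in mul_inputs.values())

# concat: the `values` input is a *tuple* of vars. The tuple itself is
# never a member of reachable_vars, so concat is never marked reachable,
# and the downstream output (var_22 above) looks unreachable.
concat_inputs = {"values": ("var_9", "var_13")}
assert not all(x in reachable_vars for x in concat_inputs.values())
```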

On this branch:

Full model:

main[CoreML6](%x_1: (1, 32, 1, 512, fp16)(Tensor)) {
  block0() {
    %x_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = mul(x=%x_1, y=%x_1, name="x_cast_fp16")
    %var_3_cast_fp16_0: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_1: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_2: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_3: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_4: (1, 32, 1, 100, fp16)(Tensor) = split(x=%x_cast_fp16, split_sizes=[103, 103, 103, 103, 100], axis=-1, name="op_3_cast_fp16")
    %var_9_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_0, y=0.0, name="op_9_cast_fp16")
    %var_13_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_2, y=2.0, name="op_13_cast_fp16")
    %var_15_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_3, y=3.0, name="op_15_cast_fp16")
    %var_17_cast_fp16: (1, 32, 1, 100, fp16)(Tensor) = mul(x=%var_3_cast_fp16_4, y=4.0, name="op_17_cast_fp16")
    %var_20_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = concat(values=(%var_9_cast_fp16, %var_3_cast_fp16_1, %var_13_cast_fp16, %var_15_cast_fp16, %var_17_cast_fp16), axis=-1, interleave=False, name="op_20_cast_fp16")
    %var_22: (1, 32, 1, 512, fp16)(Tensor) = pow(x=%var_20_cast_fp16, y=0.5, name="op_22_cast_fp16")
  } -> (%var_22)
}

Running MIL frontend_milinternal pipeline: 0 passes [00:00, ? passes/s]
Running MIL default pipeline: 100% 78/78 [00:00<00:00, 12916.25 passes/s]
Running MIL backend_mlprogram pipeline: 100% 12/12 [00:00<00:00, 12023.81 passes/s]
/[removed]/env/lib/python3.10/site-packages/coremltools/models/model.py:419: RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Error reading protobuf spec. validator error: Description of multiarray feature 'x_cast_fp16' has FLOAT16 dataType, which is only valid in specification version >= 7. This model has version 6".
  _warnings.warn(
Submodel:

main[CoreML6](%x_cast_fp16: (1, 32, 1, 512, fp16)(Tensor)) {
  block0() {
    %var_3_cast_fp16_0: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_1: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_2: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_3: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_4: (1, 32, 1, 100, fp16)(Tensor) = split(x=%x_cast_fp16, split_sizes=[103, 103, 103, 103, 100], axis=-1, name="op_3_cast_fp16")
    %var_9_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_0, y=0.0, name="op_9_cast_fp16")
    %var_13_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_2, y=2.0, name="op_13_cast_fp16")
    %var_15_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_3, y=3.0, name="op_15_cast_fp16")
    %var_17_cast_fp16: (1, 32, 1, 100, fp16)(Tensor) = mul(x=%var_3_cast_fp16_4, y=4.0, name="op_17_cast_fp16")
    %var_20_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = concat(values=(%var_9_cast_fp16, %var_3_cast_fp16_1, %var_13_cast_fp16, %var_15_cast_fp16, %var_17_cast_fp16), axis=-1, interleave=False, name="op_20_cast_fp16")
    %var_22: (1, 32, 1, 512, fp16)(Tensor) = pow(x=%var_20_cast_fp16, y=0.5, name="op_22_cast_fp16")
  } -> (%var_22)
}

@@ -170,6 +177,6 @@ def replace_inputs(func, input_vars):
     PASS_REGISTRY["common::dead_code_elimination"](prog)

     prog.skip_all_passes = True
-    submodel = ct.convert(prog, convert_to=backend, compute_units=model.compute_unit)
+    submodel = ct.convert(prog, convert_to=backend, compute_units=model.compute_unit, minimum_deployment_target=func.opset_version)
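Why passing minimum_deployment_target matters here: without it, the submodel is converted at an older specification version even though the program uses fp16 inputs, and the validator rejects FLOAT16 multiarray features below spec version 7 (per the error text above). A plain-Python illustration of the constraint, not the coremltools API; the version numbers are taken from the validator message and the well-known iOS16 → spec 7 mapping:

```python
# Per the validator error: FLOAT16 multiarray features need spec version >= 7.
FP16_MIN_SPEC_VERSION = 7

spec_version_without_target = 6  # what the RuntimeWarning reports
spec_version_with_target = 7     # func.opset_version is iOS16 here

assert spec_version_without_target < FP16_MIN_SPEC_VERSION   # triggers the warning
assert spec_version_with_target >= FP16_MIN_SPEC_VERSION     # validator accepts
```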
@smpanaro (Contributor, Author) commented:
Not directly related but fixes an error I get with my example conversion script:

RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Error reading protobuf spec. validator error: Description of multiarray feature 'x_cast_fp16' has FLOAT16 dataType, which is only valid in specification version >= 7. This model has version 6".

Happy to put this up as a separate PR if that's preferred.

A Collaborator commented:

This is a correct fix :)

@YifanShenSZ (Collaborator) commented:
@jakesabathia2 who added extract_submodel

@@ -77,7 +77,14 @@ def validate_inputs(func, input_vars):
     reachable_vars.add(op.outputs[0])

 for op in func.operations:
-    if all([x in reachable_vars for x in op.inputs.values()]):
+    input_values = []
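The hunk above is truncated after its first added line (`input_values = []`). A plausible sketch of the flattening it presumably introduces, so that tuple/list inputs like concat's `values` are unpacked before the membership test; the function name and exact structure are my assumptions, not the merged patch:

```python
def op_inputs_reachable(op_inputs, reachable_vars):
    """Return True if every input var of an op is already reachable.

    Each value in op_inputs is either a single var or a tuple/list of
    vars (e.g. the `values` input of concat/stack); flatten both cases
    uniformly before testing membership.
    """
    input_values = []
    for value in op_inputs.values():
        if isinstance(value, (list, tuple)):
            input_values.extend(value)
        else:
            input_values.append(value)
    return all(x in reachable_vars for x in input_values)

# With flattening, concat's tuple input no longer defeats the check.
assert op_inputs_reachable({"values": ("a", "b")}, {"a", "b"})
assert not op_inputs_reachable({"values": ("a", "c")}, {"a", "b"})
```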
A Collaborator commented:

Thank you so much @smpanaro for putting up this PR!
In order to get it merged, could you also add a unit test for this change?

@smpanaro (Contributor, Author) replied:

No problem. Just added one (and also fixed one that was failing because of the opset_version change).

@smpanaro (Contributor, Author) commented:

@jakesabathia2 Let me know if anything else is needed here!
