
Static quantization tutorial missing a step #1235

Closed · rfejgin opened this issue Nov 13, 2020 · 8 comments

rfejgin commented Nov 13, 2020

The tutorial sets the qconfig with:

    per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

If I understand correctly, one also needs to set:

    torch.backends.quantized.engine = 'fbgemm'

I tried to quantize a model without this step and got strange errors about certain operations not being supported on the FBGEMM backend. They go away with this step added.
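
For context, a minimal self-contained sketch of the eager-mode flow with both steps included (the tiny model, its layer sizes, and the dummy calibration input are illustrative placeholders, not the tutorial's code):

    import torch
    import torch.nn as nn

    # A tiny eager-mode quantizable module with the usual
    # QuantStub/DeQuantStub wrappers (illustrative only).
    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = torch.quantization.QuantStub()
            self.fc = nn.Linear(8, 4)
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            return self.dequant(self.fc(self.quant(x)))

    model = TinyModel().eval()

    # Step already in the tutorial: pick the default qconfig for fbgemm.
    model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

    # The step this issue reports as missing: select the matching engine.
    torch.backends.quantized.engine = 'fbgemm'

    torch.quantization.prepare(model, inplace=True)   # insert observers
    with torch.no_grad():
        model(torch.randn(2, 8))                      # calibration pass (dummy data)
    torch.quantization.convert(model, inplace=True)   # swap in quantized modules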

cc @jerryzh168 @jianyuh @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen

@holly1238 added the quantization label on Jul 27, 2021
@svekars added the module: quantization label and removed the quantization label on Mar 14, 2023
@svekars added the medium and docathon-h1-2023 labels on May 31, 2023
Samsonboadi (Contributor) commented

/assigntome

svekars commented Jun 1, 2023

Can you please close one of the PRs for this issue?

Samsonboadi (Contributor) commented

Done

svekars commented Oct 24, 2023

This issue has been unassigned due to inactivity. If you are still planning to work on it, you can send a PR referencing this issue.

@svekars added the docathon-h2-2023 label and removed the docathon-h1-2023 label on Oct 30, 2023
Viditagarwal7479 (Contributor) commented

In the current version of the tutorials, this Python file has been replaced with a .rst file, with the fbgemm backend changed to x86. Shouldn't this issue be closed?

krishnakalyan3 (Contributor) commented

/assigntome

svekars commented Nov 3, 2023

You can still add it to the .rst file, but with the updated backend, e.g. torch.backends.quantized.engine = 'x86'.
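
For reference, a minimal sketch of the updated backend selection (assuming a recent PyTorch release where the 'x86' engine and its default qconfig are available):

    import torch

    # Select the x86 quantized engine (the successor to fbgemm for
    # server CPUs) and the matching default qconfig.
    torch.backends.quantized.engine = 'x86'
    qconfig = torch.quantization.get_default_qconfig('x86')
    print(torch.backends.quantized.engine)  # -> x86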

krishnakalyan3 commented Nov 4, 2023

@svekars this change has already been made to the .rst (https://github.com/pytorch/tutorials/blob/main/advanced_source/static_quantization_tutorial.rst?plain=1#L462), and the published doc reflects it: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html

This issue is pointing to a stale branch.

@svekars closed this as completed on Nov 4, 2023