
Add usage of int8-mixed-bf16 quantization with X86InductorQuantizer #2668

Conversation

leslie-fang-intel (Contributor) commented on Nov 10, 2023

Description

Add usage of int8-mixed-bf16 quantization in X86InductorQuantizer.
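
For context, the tutorial change boils down to running the standard PT2 Export quantization flow and then compiling and executing the converted model under CPU autocast. Below is a minimal sketch of that flow, assuming PyTorch ~2.1-era APIs (the `capture_pre_autograd_graph` entry point used at the time has since been superseded by `torch.export` in newer releases); the model and input shapes are illustrative only:

```python
import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import X86InductorQuantizer

# Illustrative model and input; any FP32 eval-mode model works here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# 1. Export the model into a graph that the quantizer can annotate.
exported_model = capture_pre_autograd_graph(model, example_inputs)

# 2. Configure the X86 Inductor quantizer with its default static config.
quantizer = X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())

# 3. Insert observers, run a calibration pass, and convert to int8.
prepared_model = prepare_pt2e(exported_model, quantizer)
prepared_model(*example_inputs)  # calibration
converted_model = convert_pt2e(prepared_model)

# 4. int8-mixed-bf16: compiling and running under CPU autocast lets the
#    Inductor backend keep quantized ops (e.g. QConv, QLinear) in int8
#    while non-quantized ops run in bfloat16.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16), torch.no_grad():
    optimized_model = torch.compile(converted_model)
    optimized_model(*example_inputs)
```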

Checklist

  • The issue being fixed is referenced in the description (see above "Fixes #ISSUE_NUMBER")
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR fixes are added to this pull request
  • No unnecessary issues are included in this pull request

cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen

pytorch-bot (bot) commented on Nov 10, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/2668

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e7e2525 with merge base 56c7b4e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

leslie-fang-intel (Author) commented:

cc @jgong5 @jerryzh168

leslie-fang-intel force-pushed the leslie/enable_x86_inductor_int8_mixed_bf16_quantization branch from 3be20b6 to d396e90 on November 11, 2023 01:30
jerryzh168 (Contributor) left a review comment:

LG, thanks!

svekars (Contributor) commented on Nov 15, 2023

Can you please update the branch so we can test and merge this?

leslie-fang-intel force-pushed the leslie/enable_x86_inductor_int8_mixed_bf16_quantization branch from 48d84b5 to e7e2525 on November 15, 2023 00:40
leslie-fang-intel (Author) commented:

> Can you please update the branch so we can test and merge this?

@svekars rebased onto the latest main branch. Please take another look.

svekars merged commit ceed926 into pytorch:main on Nov 15, 2023; 20 checks passed.