
Operator combination of model structures #3056

Open
zhihaofan opened this issue Jun 14, 2024 · 0 comments

Comments


zhihaofan commented Jun 14, 2024

Hello,
I am using AIMET for QAT, but when I call fold_all_batch_norms, the model's outputs show a large discrepancy before and after folding.
When I tried ./Examples/torch/quantization/qat.ipynb, fold_all_batch_norms also produced some differences, but there the results were acceptable.

Both models are built from Conv, BatchNorm, and ReLU layers with residual connections. Is there a guide on how operators should be paired for folding? I suspect that an incorrect Conv/BatchNorm pairing somewhere is causing the problem.
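For reference, here is a minimal sketch of the kind of check I run to measure the discrepancy. The fold_all_batch_norms import path and its input_shapes argument are assumptions based on the aimet_torch batch-norm-folding API, and the model/input names are hypothetical:

```python
import copy
import torch
from aimet_torch.batch_norm_fold import fold_all_batch_norms  # assumed import path

def compare_fold_outputs(model, dummy_input):
    """Measure the output discrepancy introduced by batch-norm folding."""
    # Folding is only mathematically equivalent in eval mode (running stats frozen).
    model.eval()
    reference = copy.deepcopy(model)

    with torch.no_grad():
        out_before = reference(dummy_input)

    # Fold BatchNorm layers into the preceding Conv layers in place.
    # input_shapes is assumed to match the model's expected input shape.
    fold_all_batch_norms(model, input_shapes=tuple(dummy_input.shape))

    with torch.no_grad():
        out_after = model(dummy_input)

    max_abs_diff = (out_before - out_after).abs().max().item()
    print(f"Max abs difference after folding: {max_abs_diff:.6f}")
    return max_abs_diff

# Hypothetical usage with a ResNet-style model:
# model = MyResNet()
# compare_fold_outputs(model, torch.randn(1, 3, 224, 224))
```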
