When I reached the last step of exporting the model, I encountered the following error:

ValueError: Only ACTIVE QcQuantizeOpMode is supported while using StaticGridQuantWrapper.

I am not sure whether this happens because the quantization parameters are never calculated during QAT training, in other words, because the sim.compute_encodings method is not called during the training process.

If so, please tell me the correct place to call this method. If not, please tell me the possible causes and solutions. Thank you very much!
Hello,
I am doing QAT training and following the tutorial below: https://quic.github.io/aimet-pages/releases/latest/api_docs/torch_quantsim.html#code-example-quantization-aware-training-qat
```python
ImageNetDataPipeline.finetune(quant_sim.model, epochs=1, learning_rate=5e-7,
                              learning_rate_schedule=[5, 10], use_cuda=use_cuda)

# Determine simulated accuracy
accuracy = ImageNetDataPipeline.evaluate(quant_sim.model, use_cuda)
print(accuracy)

quant_sim.export(path='./', filename_prefix='quantized_resnet18', dummy_input=dummy_input.cpu())
```
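For reference, in the AIMET tutorial flow `compute_encodings` is called on the `QuantizationSimModel` after it is created and before fine-tuning or export; it takes a forward-pass callback that runs a few calibration batches through the sim model so activation ranges can be observed. Below is a minimal, hedged sketch of such a callback; the names `pass_calibration_data`, `calib_loader`, and the batch count are assumptions for illustration, not part of the tutorial.

```python
def pass_calibration_data(model, args):
    """Forward-pass callback handed to quant_sim.compute_encodings.

    AIMET invokes this with the sim model; we just push a handful of
    unlabeled calibration batches through it (no backprop needed).
    """
    data_loader, num_batches = args
    model.eval()
    for i, (images, _) in enumerate(data_loader):
        if i >= num_batches:
            break
        model(images)

# Called once before fine-tuning (and therefore before export), e.g.:
# quant_sim.compute_encodings(forward_pass_callback=pass_calibration_data,
#                             forward_pass_callback_args=(calib_loader, 10))
```

If `compute_encodings` is never called, the quantize wrappers have no encodings when `export` runs, which matches the ACTIVE-mode error above.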