Improvements in the quantizer and dequantization kernel #1061
This PR has two contributions, both working together for what is hopefully better quantization performance across the board.

The first is a change to how the quantizer picks the scale and bias (a sketch follows the list):
a. We set the bias to the min or the max value, depending on which has the higher absolute value.
b. We set the scale to go from min to max or from max to min, respectively.
c. We adjust the scale to make sure that 0 is quantized exactly as 0.
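A minimal sketch of that logic in NumPy, for illustration only; the function names and edge-case handling here are my assumptions, not the actual MLX implementation:

```python
import numpy as np

def quantize_block(w, n_bits=4):
    # Number of quantization steps, e.g. 15 for 4 bits.
    n_steps = 2**n_bits - 1
    w_min, w_max = float(w.min()), float(w.max())

    # (a) The bias is whichever endpoint has the larger magnitude, so the
    #     largest-magnitude weight is represented exactly.
    # (b) The scale then goes from that endpoint toward the other one,
    #     which makes it negative when the bias is the max.
    if abs(w_min) > abs(w_max):
        bias, scale = w_min, (w_max - w_min) / n_steps
    else:
        bias, scale = w_max, (w_min - w_max) / n_steps
    if scale == 0:
        scale = 1.0  # degenerate block where all values equal the bias

    # (c) Nudge the scale so 0 lands exactly on a quantization level:
    #     dequantizing q_zero = round(-bias / scale) then yields exactly 0.
    q_zero = round(-bias / scale)
    if 0 < q_zero <= n_steps:
        scale = -bias / q_zero

    q = np.clip(np.round((w - bias) / scale), 0, n_steps).astype(np.uint8)
    return q, scale, bias

def dequantize_block(q, scale, bias):
    # w_hat = q * scale + bias reconstructs the weights.
    return q.astype(np.float32) * scale + bias
```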
The second is a change to the dequantization kernel and qmv, where everything is computed in float32.
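The kernel itself is Metal, but the gist of "everything is float32" can be sketched in NumPy: promote the packed values, scales, and biases to float32 for the arithmetic and only cast back at the end (an illustration, not the actual kernel):

```python
import numpy as np

def dequantize_fp32(q, scale, bias, out_dtype=np.float16):
    # Do the multiply-add in float32 even when scales/biases are stored
    # in float16, casting down to the output dtype only at the end.
    w = q.astype(np.float32) * np.float32(scale) + np.float32(bias)
    return w.astype(out_dtype)
```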
Quantization performance

This is the quantization performance on the Wikitext-2 test set. The Q4_0 performance is computed by quantizing and dequantizing the weights in place with absmax quantization and a block size of 32.
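For context, here is my reading of that Q4_0-style absmax baseline as a round trip; details such as the clipping range are assumptions:

```python
import numpy as np

def absmax_roundtrip(w, n_bits=4, block_size=32):
    # Symmetric quantization: each block is scaled by its max absolute value.
    blocks = w.reshape(-1, block_size).astype(np.float32)
    q_max = 2 ** (n_bits - 1) - 1                 # 7 for 4 bits
    scale = np.abs(blocks).max(axis=1, keepdims=True) / q_max
    scale = np.where(scale == 0, 1.0, scale)      # avoid dividing by zero
    q = np.clip(np.round(blocks / scale), -q_max - 1, q_max)
    return (q * scale).reshape(w.shape)           # dequantize in place
```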
Regarding the block size discussion (which I cannot find now, @ivanfioravanti), I think 64 is a good compromise for a default, and 32 should be evaluated and used if the performance at 64 is not adequate. Wdyt @awni and @jagrit06?
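For reference, the block size under discussion corresponds to the group_size argument in the Python API, so switching between the two candidates would look roughly like this (assuming the current mx.quantize / mx.dequantize signatures):

```python
import mlx.core as mx

w = mx.random.normal((512, 512))

# Proposed default of 64; drop to 32 when quality at 64 is not adequate.
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)
w_hat = mx.dequantize(w_q, scales, biases, group_size=64, bits=4)
```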
Throughput
The kernel change actually has no performance degradation whatsoever.
Before
After
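A before/after comparison like the one above could be timed along these lines; this is a sketch assuming the mx.quantized_matmul API, not the exact benchmark used here:

```python
import time
import mlx.core as mx

w = mx.random.normal((4096, 4096))
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)
x = mx.random.normal((1, 4096))

# Warm up so kernel compilation is not included in the timing.
for _ in range(10):
    mx.eval(mx.quantized_matmul(x, w_q, scales, biases, group_size=64, bits=4))

n_iters = 100
tic = time.perf_counter()
for _ in range(n_iters):
    y = mx.quantized_matmul(x, w_q, scales, biases, group_size=64, bits=4)
    mx.eval(y)
toc = time.perf_counter()
print(f"{n_iters / (toc - tic):.1f} qmv calls per second")
```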