Scale chroma distortion. #1583
Conversation
@tdaede Thank you for this. I will start another collection of stats.

I adapted my "Configurable chroma dc_qi and ac_qi interpolation" patch and collected data for subset3 at speed preset 4. Repeating the previous approach and iterating over subsets to find a confidence interval gives 0.316±0.001, or 81 in Q8, essentially the original value from Daala.
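The Q8 value quoted above is plain fixed-point arithmetic: multiply the real-valued factor by 256 and round. A minimal sketch (the helper name is hypothetical, not from rav1e):

```rust
// Hypothetical helper, not rav1e code: convert a real-valued scale factor
// to Q8 fixed point (8 fractional bits), as in 0.316 -> 81 above.
fn to_q8(x: f64) -> i64 {
    (x * 256.0).round() as i64
}
```

For example, 0.316 × 256 = 80.896, which rounds to 81.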
I tested this combination on AWCY and the result was not great.

I also tested with --tune Psnr on AWCY, which was not any better.

For reference, intra-only tells a different story.

I repeated the collection and analysis with the fix. Results for subset1 on AWCY seem reasonable; results for objective-1-fast on AWCY are terrible for some sequences.
Force-pushed from dcf890e to 2081fe3.
src/rate.rs (outdated):

    -  let (offset_u, offset_v) = chroma_offset(log_target_q);
       let quantizer_u = bexp64(log_target_q + offset_u + scale);
       let quantizer_v = bexp64(log_target_q + offset_v + scale);
    +  let (offset_u, offset_v) = chroma_offset(log_target_q, uv_m_q8);
It looks like uv_m_q8 slipped in by a rebase error.
Suggested change:

    -  let (offset_u, offset_v) = chroma_offset(log_target_q, uv_m_q8);
    +  let (offset_u, offset_v) = chroma_offset(log_target_q);
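The bexp64 calls in the src/rate.rs hunk above exponentiate a log-domain quantizer, so adding an offset in the log domain corresponds to multiplying the linear quantizer. A toy float sketch of that identity (rav1e's bexp64 itself operates on Q57 fixed-point values, which this deliberately glosses over; the function name here is illustrative only):

```rust
// Toy stand-in for log-domain exponentiation: adding offsets before
// exponentiating is the same as multiplying the exponentiated results.
// Uses f64 natural exp, unlike rav1e's Q57 fixed-point bexp64.
fn toy_bexp(log_q: f64) -> f64 {
    log_q.exp()
}
```

So bexp64(log_target_q + offset_u + scale) effectively scales the base quantizer by a per-plane multiplicative factor.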
src/encoder.rs (outdated):

    @@ -2017,7 +2021,7 @@ pub fn write_tx_tree<T: Pixel>(
           || skip
           || dist >= 0
         );
    -    tx_dist += dist;
    +    tx_dist += ((dist as f64) * fi.dist_scale[p]) as i64;
Ok, so luma and chroma distortions can be scaled in tx-domain as well.
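The change in the src/encoder.rs hunk above amounts to weighting each plane's transform-domain distortion by a per-plane factor before accumulating. A minimal sketch of that accumulation (not rav1e's actual code; names mirror the diff):

```rust
// Sketch of per-plane tx-domain distortion scaling: each plane's raw
// distortion is multiplied by its entry in dist_scale before summing,
// as in `tx_dist += ((dist as f64) * fi.dist_scale[p]) as i64` above.
fn scaled_tx_dist(per_plane_dist: &[i64; 3], dist_scale: &[f64; 3]) -> i64 {
    per_plane_dist
        .iter()
        .zip(dist_scale.iter())
        .map(|(&d, &s)| ((d as f64) * s) as i64)
        .sum()
}
```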
    lambda: (::std::f64::consts::LN_2 / 6.0)
      * ((log_target_q as f64) * Q57_SQUARE_EXP_SCALE).exp(),
    lambda,
    dist_scale: [1.0, lambda / lambda_u, lambda / lambda_v],
Hmm, but the scalers for YUV are derived from the lambdas?! Isn't that essentially equivalent to applying different lambda values when computing rd_cost? If so, I am afraid I need to ask why we want to scale distortions at all.
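The equivalence being questioned above can be made concrete: with dist_scale = lambda_y / lambda_c, the scaled cost scale·D + lambda_y·R equals scale·(D + lambda_c·R), a constant multiple of the cost computed with a per-plane lambda, so per-plane RD decisions come out the same. A sketch under that assumption (function names are illustrative, not rav1e's API):

```rust
// Cost using a single luma lambda but chroma distortion pre-scaled by
// lambda_y / lambda_c, as the dist_scale array above implies.
fn cost_scaled(d: f64, r: f64, lambda_y: f64, lambda_c: f64) -> f64 {
    let scale = lambda_y / lambda_c;
    scale * d + lambda_y * r
}

// Conventional rd_cost with a per-plane lambda.
fn cost_per_plane_lambda(d: f64, r: f64, lambda_c: f64) -> f64 {
    d + lambda_c * r
}
```

Since the two differ only by the constant factor lambda_y / lambda_c, minimizing one minimizes the other within a plane; the distinction matters once per-plane distortions are summed into one cost.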
src/rdo.rs (outdated):

    @@ -422,6 +425,8 @@ fn compute_tx_distortion<T: Pixel>(
      (tx_dist as f64 * bias) as u64
I think tx_dist here has already been scaled by fi.dist_scale[0] when it is passed from encode_block_post_cdef()?
src/rdo.rs (outdated):

    @@ -422,6 +425,8 @@ fn compute_tx_distortion<T: Pixel>(
        (tx_dist as f64 * bias) as u64
      };

    +  distortion = (fi.dist_scale[0] * distortion as f64) as u64;
Thus, for a non-skipped block it seems wrong to scale here, since the distortion has already been scaled. Instead, scaling the luma distortion is only required for the skipped case, i.e. at line 405 (old) above. (Maybe the scale factor for luma is 1.0, hence no noticeable side effects were observed.)
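The hazard described above is plain compounding: if the luma distortion is multiplied by dist_scale[0] once when accumulated in the tx domain and again afterwards in compute_tx_distortion, the non-skip path effectively sees the square of the scale. A toy illustration (not rav1e code):

```rust
// Applying the same per-plane scale twice compounds it: a block scaled
// once by s and then scaled again ends up scaled by s*s. With s = 1.0
// (a plausible luma scale) the bug is invisible, which may be why no
// side effects were observed.
fn scale_once(dist: i64, scale: f64) -> i64 {
    ((dist as f64) * scale) as i64
}
```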
As I commented inline, it seems to me that the tx_dist for luma is scaled twice in the non-skip case. Please check this.
@ycho I have updated this PR with @barrbrain's fixes, so it should now be correct. The only thing that I didn't include is the stats-gathering and interpolation patch, as I think that should be a separate PR.
@tdaede, can you run this PR on AWCY please, so that we can see the image frames as I requested?

Thanks.
Apart from subjective quality gains, do we think -2.22 CIEDE2000 is worth a 0.62 PSNR loss (1 frame only)?
@ycho yes, not because I think this particular tradeoff is worth it, but because we can make a better tradeoff, like @barrbrain's retuning on top of this patch.
Okay, then I have no objection to landing this PR, but I'd like to re-remind us to please check the visual quality with our own eyes whenever we change subjective quality.
AWCY run for objective-1-fast:

Look at dark70p_60f.y4m, which has a significant 26% loss in PSNR.
Yes, that's been the longstanding issue with this changeset. It suffers less after @barrbrain's follow-up patch, but still isn't great. I've basically accepted that it's random fallout from the large change in chroma distortion: it has pushed lambda around in an unfavorable direction on this clip.
@barrbrain, can you rerun your tuning on top of this patch?