
Scale chroma distortion. #1583

Merged 2 commits into xiph:master on Oct 11, 2019

Conversation

tdaede
Collaborator

@tdaede tdaede commented Aug 23, 2019

@barrbrain can you rerun your tuning on top of this patch

@tdaede tdaede requested a review from barrbrain August 23, 2019 20:20
@coveralls
Collaborator

coveralls commented Aug 23, 2019

Coverage Status

Coverage increased (+0.06%) to 71.107% when pulling d946584 on tdaede:chroma_dist into 8b19a94 on xiph:master.

@barrbrain
Collaborator

> @barrbrain can you rerun your tuning on top of this patch

@tdaede Thank you for this. I will start another collection of stats.

@barrbrain
Collaborator

I adapted my Configurable chroma dc_qi and ac_qi interpolation patch and collected stats for subset3 at speed preset 4.

Repeating the previous approach and iterating over subsets to find a confidence interval: 0.316±0.001 or 81 in Q8, essentially the original value from Daala.
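
As an aside, the Q8 figures here are just 8-bit fixed-point encodings of the fitted factor; a quick sketch of the conversion (illustrative, not code from the patch):

```rust
// Convert a real-valued chroma balance factor to Q8 fixed point
// (8 fractional bits): multiply by 2^8 and round.
fn to_q8(x: f64) -> i64 {
    (x * 256.0).round() as i64
}

fn main() {
    println!("{}", to_q8(0.316)); // 81, the value quoted above
    println!("{}", to_q8(0.290)); // 74, the value after the scale fix
}
```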

(chroma-balance plot)

@barrbrain
Collaborator

I tested this combination on AWCY and the result was not great:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 1.9315 | 7.2280 | 8.6807 | 2.0982 | 1.5924 | 1.8361 | 5.1297 |

I also tested with --tune Psnr on AWCY which was not any better:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 3.2891 | N/A | 17.9523 | 4.2194 | 2.7868 | 3.4516 | 7.9285 |

@barrbrain
Collaborator

For reference, intra-only tells a different story:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 3.0248 | -8.2096 | -13.2757 | 3.1171 | 3.1988 | 3.1970 | 0.6644 |

src/rate.rs (outdated, resolved)
@barrbrain
Collaborator

I repeated the collection and analysis with the fix for scale: 0.290±0.001 or 74 in Q8:

(chroma-balance plot)

Results for subset1 on AWCY seem reasonable:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 2.2579 | -22.1874 | -25.1877 | 2.3843 | 2.2387 | 2.3291 | -3.4330 |

Results for objective-1-fast on AWCY are terrible for some sequences:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 7.1934 | -8.0033 | -14.5493 | 6.7523 | 7.9560 | 7.5724 | 0.8623 |

src/encoder.rs (outdated, resolved)
src/encoder.rs (outdated, resolved)
src/encoder.rs (outdated, resolved)
@tdaede tdaede force-pushed the chroma_dist branch 2 times, most recently from dcf890e to 2081fe3 on August 28, 2019
src/rate.rs (outdated)

```rust
let (offset_u, offset_v) = chroma_offset(log_target_q);
let quantizer_u = bexp64(log_target_q + offset_u + scale);
let quantizer_v = bexp64(log_target_q + offset_v + scale);
let (offset_u, offset_v) = chroma_offset(log_target_q, uv_m_q8);
```
Collaborator

It looks like uv_m_q8 slipped in by a rebase error.

Suggested change:

```diff
-let (offset_u, offset_v) = chroma_offset(log_target_q, uv_m_q8);
+let (offset_u, offset_v) = chroma_offset(log_target_q);
```

src/encoder.rs (outdated)

```diff
@@ -2017,7 +2021,7 @@ pub fn write_tx_tree<T: Pixel>(
       || skip
       || dist >= 0
   );
-  tx_dist += dist;
+  tx_dist += ((dist as f64) * fi.dist_scale[p]) as i64;
```
Collaborator

Ok, so luma and chroma distortions can be scaled in tx-domain as well.
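
To illustrate the idea (a sketch with hypothetical names, not the actual rav1e code): with `dist_scale = [1.0, λ/λ_u, λ/λ_v]`, each plane's tx-domain distortion is rescaled into luma-equivalent units as it is accumulated:

```rust
// Sketch: accumulate per-plane tx-domain distortions with a per-plane
// scale factor, mirroring the changed line
//   tx_dist += ((dist as f64) * fi.dist_scale[p]) as i64;
// Index 0 is luma (scale 1.0); 1 and 2 are the chroma planes.
fn accumulate_tx_dist(dists: &[(usize, i64)], dist_scale: &[f64; 3]) -> i64 {
    let mut tx_dist = 0i64;
    for &(p, dist) in dists {
        tx_dist += ((dist as f64) * dist_scale[p]) as i64;
    }
    tx_dist
}

fn main() {
    // Example: chroma distortion weighted at half the luma weight.
    let total = accumulate_tx_dist(&[(0, 100), (1, 100), (2, 100)], &[1.0, 0.5, 0.5]);
    println!("{}", total); // 200
}
```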

```diff
-  lambda: (::std::f64::consts::LN_2 / 6.0)
-    * ((log_target_q as f64) * Q57_SQUARE_EXP_SCALE).exp(),
+  lambda,
+  dist_scale: [1.0, lambda / lambda_u, lambda / lambda_v],
```
Collaborator

@ycho ycho Oct 2, 2019

Hmm, but the scalers for YUV are derived from the lambdas?! Isn't that essentially equivalent to applying different lambda values when computing rd_cost? If so, I am afraid I need to ask why we want to scale distortions?
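
For what it's worth, the two formulations are equivalent for decisions confined to a single plane, since they differ only by a positive constant factor and so rank candidates identically; a sketch of the algebra (hypothetical helper names):

```rust
// RD cost with a single lambda and distortion pre-scaled by lambda / lambda_p.
fn cost_scaled(dist: f64, rate: f64, lambda: f64, lambda_p: f64) -> f64 {
    (lambda / lambda_p) * dist + lambda * rate
}

// RD cost with unscaled distortion and a per-plane lambda.
fn cost_per_plane(dist: f64, rate: f64, lambda_p: f64) -> f64 {
    dist + lambda_p * rate
}

fn main() {
    let (dist, rate, lambda, lambda_p) = (100.0, 10.0, 2.0, 4.0);
    // cost_scaled is exactly (lambda / lambda_p) times cost_per_plane,
    // so minimizing either picks the same candidate for that plane.
    println!("{}", cost_scaled(dist, rate, lambda, lambda_p));
    println!("{}", (lambda / lambda_p) * cost_per_plane(dist, rate, lambda_p));
}
```

The difference shows up in joint decisions, where luma and chroma distortions are summed against a shared rate: scaling folds all planes into one comparable cost under a single lambda.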

src/rdo.rs (outdated)

```diff
@@ -422,6 +425,8 @@ fn compute_tx_distortion<T: Pixel>(
     (tx_dist as f64 * bias) as u64
```
Collaborator

I think `tx_dist` here has already been scaled by `fi.dist_scale[0]` when it is passed from `encode_block_post_cdef()`?

src/rdo.rs (outdated)

```diff
@@ -422,6 +425,8 @@ fn compute_tx_distortion<T: Pixel>(
     (tx_dist as f64 * bias) as u64
   };

+  distortion = (fi.dist_scale[0] * distortion as f64) as u64;
```
Collaborator

@ycho ycho Oct 2, 2019

Thus, for a non-skipped block, it seems wrong to scale here, since the distortion has already been scaled.
Instead, scaling the luma distortion is only required in the skipped case, i.e. at line 405 (old) above.
(Maybe the luma scale factor is 1.0, so no noticeable side effects were observed.)
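
A sketch of the control flow being suggested (hypothetical signature, not the actual `compute_tx_distortion` body): scale the luma distortion only on the skip path, where it comes from the pixel domain and has not yet been scaled; the non-skip tx-domain value was already scaled in `encode_block_post_cdef()`:

```rust
// Sketch: apply fi.dist_scale[0] only when the block is skipped.
fn compute_distortion(
    skip: bool, pixel_dist: u64, tx_dist: i64, dist_scale0: f64, bias: f64,
) -> u64 {
    if skip {
        // Pixel-domain distortion: not yet scaled, so scale it here.
        (dist_scale0 * pixel_dist as f64) as u64
    } else {
        // Tx-domain distortion: already scaled upstream; apply only the bias.
        (tx_dist as f64 * bias) as u64
    }
}

fn main() {
    println!("{}", compute_distortion(true, 100, 0, 1.0, 1.0));  // 100
    println!("{}", compute_distortion(false, 0, 200, 1.0, 1.5)); // 300
}
```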

Collaborator

@ycho ycho left a comment

As I commented inline, it seems to me that the luma `tx_dist` is scaled twice in the non-skip case. Please check this.

@tdaede
Collaborator Author

tdaede commented Oct 9, 2019

@ycho I have updated this PR with @barrbrain's fixes, so it should now be correct. The only thing I didn't include is the stats-gathering and interpolation patch, as I think that should be a separate PR.

@ycho
Collaborator

ycho commented Oct 10, 2019

@tdaede, can you run this PR on AWCY please, so that we can see the image frames as I requested?

@tdaede tdaede changed the title WIP: Scale chroma distortion. Scale chroma distortion. Oct 10, 2019
@ycho
Collaborator

ycho commented Oct 10, 2019

> @ycho https://beta.arewecompressedyet.com/?job=master-2019-10-09_182754-8b19a94&job=chroma_dist-2019-10-09_182802-d946584

Thanks.
I've checked several images and am seeing mostly improvements. This PR also increased the bitstream size a little.
For example, what do you think of Clovis?

@ycho
Collaborator

ycho commented Oct 10, 2019

> @ycho https://beta.arewecompressedyet.com/?job=master-2019-10-09_182754-8b19a94&job=chroma_dist-2019-10-09_182802-d946584
(1 frame only)

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 0.6297 | -12.1790 | -14.6372 | 0.8292 | 0.6245 | 0.7480 | -2.2254 |

Apart from the subjective quality gains, do we think a -2.22 CIEDE 2000 gain is worth a 0.62 PSNR loss (1 frame only)?

@tdaede
Collaborator Author

tdaede commented Oct 10, 2019

@ycho yes, not because I think this particular tradeoff is worth it, but because we can make a better tradeoff - like @barrbrain 's retuning on top of this patch.

@ycho
Collaborator

ycho commented Oct 10, 2019

> @ycho yes, not because I think this particular tradeoff is worth it, but because we can make a better tradeoff - like @barrbrain 's retuning on top of this patch.

Okay, then I have no objection to landing this PR, but I'd like to remind us to please check the visual quality with our own eyes whenever we change subjective quality.

@ycho
Collaborator

ycho commented Oct 10, 2019

AWCY run for objective-1-fast:

| PSNR | PSNR Cb | PSNR Cr | PSNR HVS | SSIM | MS SSIM | CIEDE 2000 |
|---|---|---|---|---|---|---|
| 2.3408 | -14.4691 | -22.1420 | 2.3058 | 2.3652 | 2.3527 | -3.1967 |

Look at dark70p_60f.y4m, which has a significant 26% loss in PSNR.

@tdaede
Collaborator Author

tdaede commented Oct 11, 2019

Yes, that's been the longstanding issue with this changeset. It suffers less after @barrbrain's follow-up patch, but still isn't great. I've basically accepted that it's random fallout from the large change in chroma distortion: it pushed lambda around in an unfavorable direction on this clip.

@tdaede tdaede merged commit 4a0a0b1 into xiph:master Oct 11, 2019