
port horizontal flip tests #7703

Merged: 5 commits into pytorch:main on Jun 28, 2023

Conversation

@pmeier (Contributor) commented Jun 27, 2023

@pytorch-bot bot commented Jun 27, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/7703

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures

As of commit 95c80d2:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

```diff
@@ -43,7 +43,8 @@ def horizontal_flip_image_tensor(image: torch.Tensor) -> torch.Tensor:
     return image.flip(-1)


-horizontal_flip_image_pil = _FP.hflip
+def horizontal_flip_image_pil(image: PIL.Image.Image) -> PIL.Image.Image:
```
@pmeier (Contributor Author):

Without this thin wrapper, the dispatch test fails, because `horizontal_flip_image_pil` is never called; only `_FP.hflip` is.

The old tests handled this by allowing the user to set another name for mocking, but I don't want to bring this complexity into the new tests. Since we need to make v2 "standalone" from v1 at some point anyway, we might as well do it here now.

Member:

I'm a bit concerned in general about modifying the code only to make tests easier. I guess that is OK in this case because it doesn't add much complexity to the code, but let's keep an eye on this.

@pmeier (Contributor Author):

Indeed. The issue here is that at the beginning of the v2 transform kernels, many of them were just aliased to their v1 equivalents. The "old" v2 test framework accounted for that with

```python
# Defaults to `kernel.__name__`. Should be set if the function is exposed under a different name
# TODO: This can probably be removed after roll-out since we shouldn't have any aliasing then
kernel_name=None,
```

e.g.

```python
KernelInfo(
    F.horizontal_flip_image_tensor,
    kernel_name="horizontal_flip_image_tensor",
```

The one above is obsolete now, since we removed the aliasing some time ago in #6983.

So basically here I'm just doing something we already planned to do in the first place, just a little earlier.

```diff
@@ -493,7 +484,7 @@ def test_kernel_video(self):

     @pytest.mark.parametrize("size", OUTPUT_SIZES)
     @pytest.mark.parametrize(
-        "input_type_and_kernel",
+        ("input_type", "kernel"),
```
@pmeier (Contributor Author):

Just a QoL improvement that saves us a line in the test below.
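A minimal sketch of the two parametrize styles (toy parameter values, not the real ones): with a single combined name the test body has to unpack the tuple itself, while a tuple of argument names lets pytest inject each value directly.

```python
import pytest

# Before: one combined parameter that every test has to unpack by hand.
@pytest.mark.parametrize("input_type_and_kernel", [(list, reversed)])
def test_combined(input_type_and_kernel):
    input_type, kernel = input_type_and_kernel  # the extra line the PR removes
    assert callable(kernel)

# After: a tuple of argument names makes pytest unpack each parameter set.
@pytest.mark.parametrize(("input_type", "kernel"), [(list, reversed)])
def test_unpacked(input_type, kernel):
    assert callable(kernel)
```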

```diff
-        expected_bboxes = expected_bboxes[0]
-
-    return expected_bboxes
+    return torch.stack([transform(b) for b in bounding_box.reshape(-1, 4).unbind()]).reshape(bounding_box.shape)
```
@pmeier (Contributor Author):

IMO, the old logic was harder to parse. Basically all we are doing here is breaking a batched tensor into its individual boxes, applying the helper to each, and reversing the process.

Member:

`bounding_box.reshape(-1, 4)`

The only reason we need this reshape is because we may pass non-2D boxes (i.e. a single box as 1D tensor), right?

@pmeier (Contributor Author):

Yup. We could factor this out into a decorator that we put onto reference functions so they only need to handle the unbatched case. For now we only have the affine bbox helper, so I left it as is. I'll look into the decorator again if we need it elsewhere.
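The flatten/apply/restore pattern under discussion can be illustrated with a plain-Python analogue (illustration only; the PR does the same thing with `torch.stack`, `reshape`, and `unbind` on tensors):

```python
# Analogue of:
#   torch.stack([transform(b) for b in bbox.reshape(-1, 4).unbind()]).reshape(bbox.shape)
# A bounding box may arrive as a single 1D box [x1, y1, x2, y2] or as a 2D
# batch of boxes; normalizing to a list of boxes means `transform` only ever
# has to handle the unbatched case.
def apply_per_box(transform, bounding_box):
    single = not isinstance(bounding_box[0], list)       # 1D input: one box
    boxes = [bounding_box] if single else bounding_box   # "reshape(-1, 4)"
    out = [transform(box) for box in boxes]              # per-box reference
    return out[0] if single else out                     # restore the shape

# Example transform: horizontal flip of (x1, y1, x2, y2) on a width-10 image.
hflip_box = lambda b: [10 - b[2], b[1], 10 - b[0], b[3]]
```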

@NicolasHug (Member) left a comment:

Thanks Philip, minor Qs and suggestions but LGTM.

Should we also add a helper to test randomness as done in test_randomness()?
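As a hedged sketch of what such a shared helper could check (illustrative only, not the actual torchvision `test_randomness`): degenerate probabilities must behave deterministically, and seeded runs must be reproducible.

```python
import random

def random_flip(seq, p, rng):
    """Toy random transform: reverse ``seq`` with probability ``p``."""
    return seq[::-1] if rng.random() < p else seq

def check_randomness(transform, inp):
    # p=0 must be a no-op and p=1 must always transform.
    assert transform(inp, 0.0, random.Random(0)) == inp
    assert transform(inp, 1.0, random.Random(0)) != inp
    # The same seed must reproduce the same decision.
    first = transform(inp, 0.5, random.Random(42))
    second = transform(inp, 0.5, random.Random(42))
    assert first == second
```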



```python
class TestHorizontalFlip:
    def _make_input(self, input_type, *, dtype=None, device="cpu", spatial_size=(17, 11), **kwargs):
```
Member:

Is this mostly the same as the one in TestResize?

@pmeier (Contributor Author):

Yes. If that's OK with you, I'll copy-paste it for now until I have a handful of transforms ported. If it turns out we never or rarely use anything else, I'll factor it out as a public function.


@pmeier pmeier merged commit 25c8a3a into pytorch:main Jun 28, 2023
@pmeier pmeier deleted the port/horizontal-flip branch June 28, 2023 10:50
This was referenced Jun 30, 2023
facebook-github-bot pushed a commit that referenced this pull request Jul 3, 2023
Reviewed By: vmoens

Differential Revision: D47186579

fbshipit-source-id: 5077d10522cf36ba99aac3863cd55bb967eb8c89