
port AA tests #7927

Merged
merged 15 commits into pytorch:main from port/aa
Sep 4, 2023

Conversation

@pmeier (Contributor) commented Sep 1, 2023

Follow up to #7839 (comment).

Some questions that are up for discussion:

  1. Since we have changed the random sampling in v2, we cannot easily compare the AA transforms to their v1 equivalent. It is doable, but leads to complicated monkeypatching / mocking, e.g.

    le = len(t._AUGMENTATION_SPACE)
    keys = list(t._AUGMENTATION_SPACE.keys())
    randint_values = []
    for i in range(le):
        # Stable API, op_index random call
        randint_values.append(i)
        # Stable API, if signed there is another random call
        if t._AUGMENTATION_SPACE[keys[i]][1]:
            randint_values.append(0)
        # New API, _get_random_item
        randint_values.append(i)
    randint_values = iter(randint_values)
    mocker.patch("torch.randint", side_effect=lambda *args, **kwargs: torch.tensor(next(randint_values)))
    mocker.patch("torch.rand", return_value=1.0)

    IMO, it was a good thing that we implemented this thorough check while we were actively working on implementing the AA transforms for v2, to avoid accidental breakages. However, now that they are ported and checked, we can probably drop them. We do not guarantee BC for randomly sampled parameters, so these tests offer little value. Worse, they might break in the future if we change something about the random sampling.

  2. The checks for the v1 transforms call the transform multiple times:

    I presume this is to get better coverage of the randomly sampled ops. However, it also leads to quite long test times, e.g.

    time pytest test/test_transforms.py::test_augmix &> /dev/null
    pytest test/test_transforms.py::test_augmix &> /dev/null  1143,96s user 71,33s system 1156% cpu 1:45,11 total

    Since we have a frozen RNG seed, different parametrizations will not increase coverage of the sampled ops. One option could be to reproducibly set the seed based on the parametrization, i.e. seed = hash(("interpolation", InterpolationMode.NEAREST)). Since we run multiple parametrizations for each transform, that should give us good coverage without needing to run the transform multiple times for each parametrization.
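As a sketch of that option (hypothetical helper name, not code from this PR), one could derive the seed from a stable digest of the parametrization. Note that Python's built-in hash() is salted per process for strings, so a hashlib-based digest is more reproducible across test runs:

```python
import hashlib

def seed_from_parametrization(*params):
    """Map a parametrization like ("interpolation", "NEAREST") to a stable 64-bit seed."""
    digest = hashlib.sha256(repr(params).encode()).digest()
    return int.from_bytes(digest[:8], "little")

# In a test one would then seed the RNG once per parametrization, e.g.:
#   torch.manual_seed(seed_from_parametrization("interpolation", interpolation))
```

Different parametrizations would then exercise different random draws, while each individual test stays reproducible.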

Thoughts?

cc @vfdev-5


pytorch-bot bot commented Sep 1, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/7927

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Merge Blocking SEV

There is 1 active merge blocking SEV. Please view it below:

If you must merge, use @pytorchbot merge -f.

❌ 1 New Failure

As of commit a4de6f0 with merge base d0e16b7:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@@ -232,7 +232,7 @@ def _check_transform_v1_compatibility(transform, input, *, rtol, atol):
"""If the transform defines the ``_v1_transform_cls`` attribute, checks if the transform has a public, static
``get_params`` method that is the v1 equivalent, the output is close to v1, is scriptable, and the scripted version
can be called without error."""
-    if type(input) is not torch.Tensor or isinstance(input, PIL.Image.Image):
+    if not (type(input) is torch.Tensor or isinstance(input, PIL.Image.Image)):
@pmeier (author) commented:

This was spotted by @NicolasHug. We only want to test the v1 compatibility for pure tensors and PIL images. Fixing this led to two more changes that I'll flag below.
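To make the operator-precedence bug concrete, here is a minimal, self-contained illustration (stand-in classes are used so neither torch nor PIL is needed): `not A or B` is not the same as `not (A or B)`, and the original condition wrongly skipped the v1 check for PIL images:

```python
class Tensor:  # stand-in for torch.Tensor
    pass

class SubTensor(Tensor):  # stand-in for a tensor subclass such as a tv_tensor
    pass

class Image:  # stand-in for PIL.Image.Image
    pass

def skip_buggy(input):
    # Original condition: `not A or B` binds as `(not A) or B`,
    # so PIL images also hit the skip branch.
    return type(input) is not Tensor or isinstance(input, Image)

def skip_fixed(input):
    # Fixed condition: skip everything that is neither a pure tensor
    # nor a PIL image.
    return not (type(input) is Tensor or isinstance(input, Image))

# skip_buggy(Image()) -> True  (PIL image wrongly skipped)
# skip_fixed(Image()) -> False (PIL image checked, as intended)
```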

Comment on lines +2778 to +2781
input = make_input(device=device)
check_transform(
transforms.RandomErasing(p=1), input, check_v1_compatibility=not isinstance(input, PIL.Image.Image)
)
@pmeier (author) commented:

RandomErasing in v1 does not support PIL images.

# here and only check if we can script the v2 transform and subsequently call the result.
check_transform(transform, input, check_v1_compatibility=False)

if type(input) is torch.Tensor and dtype is torch.uint8:
@pmeier (author) commented:

The `and dtype is torch.uint8` clause is specific to AA, since some ops did not support uint8 in v1 (in both scripting and eager mode). This is no longer an issue in v2.
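Read as a predicate (hedged stand-ins below, since torch is not imported), the guard runs the extra comparison only for pure torch.Tensor inputs with uint8 dtype; tensor subclasses and other dtypes are excluded:

```python
class Tensor:  # stand-in for torch.Tensor
    pass

class ImageTVTensor(Tensor):  # stand-in for a tv_tensors-style subclass
    pass

def run_extra_v1_check(input, dtype):
    # Mirrors `type(input) is torch.Tensor and dtype is torch.uint8`:
    # subclasses fail the exact-type check, other dtypes fail the second clause.
    # The strings "u8"/"f32" stand in for torch.uint8/torch.float32.
    return type(input) is Tensor and dtype == "u8"

# run_extra_v1_check(Tensor(), "u8")        -> True
# run_extra_v1_check(ImageTVTensor(), "u8") -> False
# run_extra_v1_check(Tensor(), "f32")       -> False
```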

pmeier and others added 2 commits September 4, 2023 15:36
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
@NicolasHug (Member) left a comment:

LGTM when green, thanks Philip

@pmeier pmeier merged commit 1f94320 into pytorch:main Sep 4, 2023
62 of 63 checks passed
@pmeier pmeier deleted the port/aa branch September 4, 2023 18:25

github-actions bot commented Sep 4, 2023

Hey @pmeier!

You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py

facebook-github-bot pushed a commit that referenced this pull request Sep 26, 2023
Summary: Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>

Reviewed By: matteobettini

Differential Revision: D49600791

fbshipit-source-id: abf058e28a949717be7ad343e5417fca098d4078