
Speed up pytorch_captum tests by moving calculation of torch.cuda.is_available() up to module level #402

Closed
cspanda wants to merge 1 commit

Conversation


@cspanda cspanda commented Jun 10, 2020

Summary:
In this diff we prevent the tests in `test_jit.py` and `test_data_parallel.py` from being listed when CUDA is not available, by moving the CUDA check up to module level. This way we don't have to discover these tests and then attempt to run them, only to hit this exact code path somewhere in the setup stage of every test case.

The main reason this takes so long is that `torch.cuda.is_available()` requires importing the `torch._C` module, which we have previously profiled at anywhere between 15 and 20 seconds. Paying that cost for over 1,000 tests every time we run `//pytorch/captum:attributions` is expensive, both in time and in resources.
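
The pattern amounts to performing the CUDA check once when the test module is imported and skipping the whole module if it fails, so the individual test cases are never collected. Below is a minimal sketch of that idea rather than the exact diff; the class and test names are hypothetical and the real `test_jit.py` / `test_data_parallel.py` may differ.

```python
import unittest

import torch

# Evaluate the expensive CUDA check once, at module import time, instead of
# once per test case during setUp. Raising unittest.SkipTest at module level
# makes the loader skip the whole file, so its tests are never listed.
# (Illustrative sketch only; not the actual Captum test code.)
if not torch.cuda.is_available():
    raise unittest.SkipTest("CUDA is not available, skipping GPU-only tests.")


class TestDataParallelExample(unittest.TestCase):  # hypothetical test class
    def test_tensor_on_gpu(self) -> None:
        x = torch.rand(4, device="cuda")
        self.assertEqual(tuple(x.shape), (4,))


if __name__ == "__main__":
    unittest.main()
```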


Differential Revision: D21966198

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D21966198

cspanda pushed a commit to cspanda/captum that referenced this pull request Jun 10, 2020
…available() up to module level (pytorch#402)

Summary:
Pull Request resolved: pytorch#402

Reviewed By: vivekmig

Differential Revision: D21966198

fbshipit-source-id: 58c1bf77000169c07f831472e8eb54fb06381b99

@facebook-github-bot
Contributor

This pull request has been merged in 1eb5dbb.

p16i pushed a commit to p16i/captum that referenced this pull request Jun 20, 2020
…available() up to module level (pytorch#402)

NarineK pushed a commit to NarineK/captum-1 that referenced this pull request Nov 19, 2020
…available() up to module level (pytorch#402)
