Speed up pytorch_captum tests by moving calculation of torch.cuda.is_available() up to module level #402
Closed
Conversation
This pull request was exported from Phabricator. Differential Revision: D21966198
cspanda pushed a commit to cspanda/captum that referenced this pull request on Jun 10, 2020:
…available() up to module level (pytorch#402)

Summary: Pull Request resolved: pytorch#402

In this diff we attempt to prevent listing of the tests in `test_jit.py` and `test_data_parallel.py` if CUDA is not available, by moving the CUDA check up to the module level. This way we don't have to discover these tests and then attempt to run them, only for each test case to trigger this exact code path somewhere in its setup stage. The main reason this takes so long is that `torch.cuda.is_available()` requires importing the module `torch._C`, which we've profiled in the past to take anywhere between 15 and 20 seconds. Doing this for over 1000 tests every single time we test `//pytorch/captum:attributions` is expensive (both time-wise and resource-wise).

Reviewed By: vivekmig
Differential Revision: D21966198
fbshipit-source-id: 58c1bf77000169c07f831472e8eb54fb06381b99
cspanda force-pushed the export-D21966198 branch from 667811a to 4a4e1c1 on June 10, 2020 19:43
This pull request was exported from Phabricator. Differential Revision: D21966198
cspanda force-pushed the export-D21966198 branch from 4a4e1c1 to 8f6c1b7 on June 11, 2020 19:33
This pull request was exported from Phabricator. Differential Revision: D21966198
This pull request has been merged in 1eb5dbb.
p16i pushed a commit to p16i/captum that referenced this pull request on Jun 20, 2020
NarineK pushed a commit to NarineK/captum-1 that referenced this pull request on Nov 19, 2020
Summary:
In this diff we attempt to prevent listing of the tests in `test_jit.py` and `test_data_parallel.py` if CUDA is not available, by moving the CUDA check up to the module level. This way we don't have to discover these tests and then attempt to run them, only for each test case to trigger this exact code path somewhere in its setup stage.

The main reason this takes so long is that `torch.cuda.is_available()` requires importing the module `torch._C`, which we've profiled in the past to take anywhere between 15 and 20 seconds. Doing this for over 1000 tests every single time we test `//pytorch/captum:attributions` is expensive (both time-wise and resource-wise).

Differential Revision: D21966198
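The pattern described above can be sketched as follows. This is a minimal illustration, not the actual Captum diff: the test class and method names are hypothetical, and a stand-in function replaces `torch.cuda.is_available()` so the sketch runs without PyTorch installed. The key point is that the expensive capability check runs once at module import, and `unittest.skipUnless` then skips the whole class without ever entering each test's setup.

```python
import unittest


def _expensive_capability_check() -> bool:
    # Stand-in for torch.cuda.is_available(), whose import of torch._C
    # was profiled at roughly 15-20 seconds. Here it simply reports
    # that no GPU is present.
    return False


# Evaluated exactly once, when the module is imported -- not once per
# test case inside setUp().
CUDA_AVAILABLE = _expensive_capability_check()


@unittest.skipUnless(CUDA_AVAILABLE, "CUDA not available")
class TestJitOnGpu(unittest.TestCase):
    # Hypothetical GPU-only test; with the class-level skip above it is
    # recorded as skipped without running any setup code.
    def test_attribution_on_gpu(self) -> None:
        self.assertTrue(CUDA_AVAILABLE)


if __name__ == "__main__":
    unittest.main()
```

With the check inside each test's `setUp()`, the runner would still construct and enter every test case before skipping it; hoisting the check to module level pays the cost once per test file instead of once per test.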