Initial version of async attribution with torch.futures #1295
Conversation
This pull request was exported from Phabricator. Differential Revision: D56764316
Summary: Pull Request resolved: pytorch#1295

Currently Captum doesn't support async forward functions. The Ads R&P team would like this feature so they can replace their custom variant (D56655643) of Feature Ablation with Captum while maintaining similar performance. PyTorch provides a futures API ([link](https://pytorch.org/docs/stable/futures.html)), which we can adopt for feature_ablation.py as a first step.

Details:
- The initial evaluation returns a future; save it.
- Each evaluation of each feature for each input returns an attribution result (plus a corresponding weight, if applicable). Save all of these results separately, since futures cannot be added up directly.
- When all of the futures above are done, add up the evaluation results into the final outcome, one Tensor per input.
- Since common._run_forward is used by other attribution methods, some type hacking is needed there. If users attempt to use those other methods asynchronously, they will fail until Captum supports async for those methods.

TODO: Extend FeatureAttributor to support `torch.futures`

Differential Revision: D56764316
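The save-then-combine flow described above can be sketched with the `torch.futures` API. This is a minimal illustration, not the PR's actual implementation: `async_forward` is a hypothetical stand-in for an async forward function, and the combine step simply subtracts the initial output from each per-input evaluation.

```python
import torch
from torch.futures import Future


# Hypothetical async forward: returns a Future resolving to a scalar output.
# In practice the result would be set later, e.g. by an RPC or worker callback.
def async_forward(inp: torch.Tensor) -> Future:
    fut: Future = Future()
    fut.set_result(inp.sum())
    return fut


inputs = [torch.ones(3), torch.full((3,), 2.0)]

# Initial evaluation returns a future; save it.
initial_fut = async_forward(torch.zeros(3))

# One future per evaluation; results are kept separately because
# futures cannot be added up directly.
eval_futs = [async_forward(inp) for inp in inputs]


# When all futures are done, combine the saved results into one tensor.
def combine(all_futs: Future) -> torch.Tensor:
    initial, *evaluated = [f.value() for f in all_futs.value()]
    return torch.stack([out - initial for out in evaluated])


result_fut = torch.futures.collect_all([initial_fut] + eval_futs).then(combine)
print(result_fut.wait())  # tensor([3., 6.])
```

`collect_all` returns a future that completes only when every saved future is done, which is the "when all futures above are done" step; `then` attaches the combine callback without blocking the caller.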
This pull request has been merged in 3543414.