
Initial version of async attribution with torch.futures #1295

Closed · wants to merge 1 commit

Conversation

@yucu (Contributor) commented Jun 6, 2024

Differential Revision: D56764316

@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D56764316

yucu added a commit to yucu/captum that referenced this pull request on Jun 11, 2024
Summary: Pull Request resolved: pytorch#1295

Differential Revision: D56764316
yucu added a commit to yucu/captum that referenced this pull request on Jun 12, 2024
Summary:
Pull Request resolved: pytorch#1295

Captum does not currently support async forward functions. The Ads R&P team would like this feature so it can replace its custom variant (D56655643) of Feature Ablation with Captum while maintaining similar performance.

TODO: Extend FeatureAttributor to support `torch.futures`

Differential Revision: D56764316
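For context, a minimal sketch of what "async forward function" means here: a forward that returns a `torch.futures.Future` instead of a Tensor. The name `async_forward` is illustrative, not part of Captum's API, and the future is resolved inline purely for demonstration.

```python
import torch
from torch.futures import Future


def async_forward(inputs: torch.Tensor) -> Future:
    fut: Future = Future()
    # A real implementation would resolve the future from another thread
    # or an RPC callback; here we resolve it immediately for illustration.
    fut.set_result(inputs.sum(dim=1))
    return fut


out = async_forward(torch.rand(4, 3)).wait()  # blocks until the result is set
```
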
yucu added a commit to yucu/captum that referenced this pull request on Jun 29, 2024
Summary:
Pull Request resolved: pytorch#1295

Captum does not currently support async forward functions. The Ads R&P team would like this feature so it can replace its custom variant (D56655643) of Feature Ablation with Captum while maintaining similar performance.

PyTorch provides a futures API ([link](https://pytorch.org/docs/stable/futures.html)), so we can adopt it in feature_ablation.py as a first step.

Details:
- The initial evaluation returns a future; save it.
- Each evaluation of each feature for each input returns an attribution result (plus a corresponding weight, if applicable). Save all of those results separately, since futures cannot be added up directly.
- Once all of the futures above are done, add up the evaluation results into the final outcome, one Tensor per input (see the sketch after this summary).
- Since common._run_forward is used by other attribution methods, some type hacking is needed there. If users attempt to run those methods asynchronously, they will fail until Captum supports async for them as well.

TODO: Extend FeatureAttributor to support `torch.futures`

Differential Revision: D56764316
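A minimal sketch of the flow described in the Details above, assuming a hypothetical Future-returning forward (`async_forward`) and a simple zeroing ablation (`ablate_feature`); neither name is Captum API. It saves the initial future, saves one result future per feature, and combines everything once all futures are done via `torch.futures.collect_all`.

```python
import torch
from torch.futures import Future, collect_all


def async_forward(inputs: torch.Tensor) -> Future:
    fut: Future = Future()
    fut.set_result(inputs.sum(dim=1))  # stand-in for a real async model call
    return fut


def ablate_feature(inputs: torch.Tensor, idx: int) -> torch.Tensor:
    ablated = inputs.clone()
    ablated[:, idx] = 0  # zero out one feature column as the baseline
    return ablated


inputs = torch.rand(4, 3)

# 1. The initial evaluation returns a future; save it.
initial_fut = async_forward(inputs)

# 2. One evaluation per feature; save each result future separately,
#    since futures cannot be added up directly.
feature_futs = [
    async_forward(ablate_feature(inputs, i)) for i in range(inputs.shape[1])
]


# 3. Once all futures are done, combine the results into one
#    attribution tensor per input.
def combine(done: Future) -> torch.Tensor:
    initial, *ablated = [f.value() for f in done.wait()]
    # attribution[:, j] = output change when feature j is ablated
    return torch.stack([initial - out for out in ablated], dim=1)


attribution_fut = collect_all([initial_fut] + feature_futs).then(combine)
print(attribution_fut.wait())  # one (4, 3) attribution Tensor for this input
```

The `then` callback runs only after `collect_all` completes, which is what lets the per-feature results be summed into a single Tensor at the end rather than eagerly, matching the "save futures, combine when done" pattern above.
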
@facebook-github-bot (Contributor)
This pull request has been merged in 3543414.
