
I am a researcher at Microsoft Research. When I used apex with MMDetection, the following error occurred. We look forward to your answer. Thank you very much. #1227

Closed
xianglei3 opened this issue Nov 25, 2021 · 4 comments


@xianglei3

The error is:
if cached_x.grad_fn.next_functions[1][0].variable is not x:

The full log is:
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
self.padding, self.dilation, self.groups)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/apex/amp/wrap.py", line 21, in wrapper
args[i] = utils.cached_cast(cast_fn, args[i], handle.cache)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/apex/amp/utils.py", line 97, in cached_cast
if cached_x.grad_fn.next_functions[1][0].variable is not x:
IndexError: tuple index out of range
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 25385) of binary: /home/kny/anaconda3/envs/mmd/bin/python
Traceback (most recent call last):
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/kny/anaconda3/envs/mmd/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

./tools/train.py FAILED

Failures:
<NO_OTHER_FAILURES>

@Tomsen1410

Tomsen1410 commented Dec 14, 2021

I get the same "tuple index out of range" error when running my model on Google Colab:

File ...
...
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
  return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 301, in forward
  return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 298, in _conv_forward
  self.padding, self.dilation, self.groups)
File "/usr/local/lib/python3.7/dist-packages/apex/amp/wrap.py", line 21, in wrapper
  args[i] = utils.cached_cast(cast_fn, args[i], handle.cache)
File "/usr/local/lib/python3.7/dist-packages/apex/amp/utils.py", line 97, in cached_cast
  if cached_x.grad_fn.next_functions[1][0].variable is not x:
IndexError: tuple index out of range

Edit:
It seems to work after applying this hotfix. Please note that I did not invest time in understanding what's going on there; I applied it as a quick fix in order to train my model.
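
For context, the failing check lives in apex's cached_cast helper, which verifies that a cached half-precision copy still has the original tensor as its autograd parent. Workarounds circulating for this IndexError usually just guard the index access before dereferencing next_functions[1]. A minimal sketch of that idea, assuming it is applied inside apex/amp/utils.py near the line shown in the traceback (the hotfix linked above may differ):

```python
# Sketch of a guarded version of the check in apex/amp/utils.py (cached_cast).
# Assumption: this mirrors the common workaround, not necessarily the linked hotfix.
if x.requires_grad and cached_x.requires_grad:
    # On some torch versions the cast node's grad_fn.next_functions has fewer
    # than two entries, which is what raises "IndexError: tuple index out of range".
    has_parent_entry = len(cached_x.grad_fn.next_functions) > 1
    if has_parent_entry and cached_x.grad_fn.next_functions[1][0].variable is not x:
        raise RuntimeError("x and cache[x] both require grad, but x is not "
                           "cache[x]'s parent.  This is likely an error.")
```

Skipping the check when the entry is missing only disables a sanity assertion; it does not change how the cast itself behaves.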

@ptrblck
Contributor

ptrblck commented Aug 3, 2022

apex.amp is deprecated and you should use the native torch.cuda.amp implementation as described here. Closing.
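
For anyone migrating, a minimal sketch of the native mixed-precision loop that torch.cuda.amp provides; model, optimizer, and data_loader below are placeholders, not objects from this issue:

```python
import torch
import torch.nn.functional as F

# Native PyTorch AMP: autocast for the forward pass, GradScaler for the backward pass.
# `model`, `optimizer`, and `data_loader` are placeholders for your own objects.
scaler = torch.cuda.amp.GradScaler()

for inputs, targets in data_loader:
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()

    # Run the forward pass and loss computation in mixed precision.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = F.cross_entropy(outputs, targets)

    # Scale the loss to avoid fp16 gradient underflow, then step and update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

This replaces apex.amp's initialize/scale_loss wrappers and does not require apex to be installed.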

@ThelilinNB

Hello, have you solved your problem?

@cucdengjunli

mark
