Hello, thank you for the great work!

While studying the implementation, I suspected that this line, https://github.com/tatsu-lab/alpaca_farm/blob/main/src/alpaca_farm/rl/rl_trainer.py#L150, which zeroes the gradients during gradient accumulation, could zero out all gradients except those from the final accumulation step (the one where accelerator.sync_gradients is True), because policy.zero_grad is used instead of optimizer.zero_grad.
If so, the gradients from every accumulation micro-step except the step with sync_gradients=True would be ignored. Could you let me know whether this is a real problem? Thank you!
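For concreteness, here is a minimal sketch of the pattern I mean (my own reconstruction with placeholder names, not the actual trainer code; `policy`, `optimizer`, `batches`, `accum_steps`, and `compute_loss` are all placeholders):

```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=accum_steps)
# prepare() wraps the optimizer in an AcceleratedOptimizer
policy, optimizer = accelerator.prepare(policy, optimizer)

for batch in batches:
    with accelerator.accumulate(policy):
        loss = compute_loss(policy, batch)
        accelerator.backward(loss)
        optimizer.step()  # the wrapper skips the real step on non-sync micro-batches
        # Suspected bug: nn.Module.zero_grad is not wrapped by Accelerate, so
        # this clears the partially accumulated gradients on every micro-batch,
        # leaving only the final micro-batch's gradient at the sync step.
        policy.zero_grad(set_to_none=True)
```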
Oh, thanks for pointing this out. You're absolutely correct. The AcceleratedOptimizer object has the right wrappers to make zero_grad a no-op on non-sync steps, so the optimizer, rather than the wrapped model, should be used for clearing gradients.
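To make that concrete, here is a minimal sketch of the fix under the same placeholder names as the snippet above, clearing gradients through the wrapped optimizer instead:

```python
for batch in batches:
    with accelerator.accumulate(policy):
        loss = compute_loss(policy, batch)
        accelerator.backward(loss)
        optimizer.step()                       # real step only on sync steps
        optimizer.zero_grad(set_to_none=True)  # wrapper: no-op while accumulating
```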
Looking at the history of our internal codebase, we were actually calling self.optimizer.zero_grad(set_to_none=True) until a commit on May 7, 2023 changed this to call it on the model. Note that we have trained similarly performing PPO models both before and after that commit, so I think it's safe to say this bug hasn't affected the results in noticeable ways.
Note that the Quark trainer doesn't have this problem, since I'm not using the accelerator context manager there.
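For comparison, manual accumulation without accelerator.accumulate(...) looks roughly like the sketch below (placeholder names again, not the actual Quark trainer code); clearing gradients on the model is safe there because it only happens right after a real optimizer step:

```python
for step, batch in enumerate(batches):
    loss = compute_loss(policy, batch) / accum_steps  # scale for accumulation
    accelerator.backward(loss)
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        policy.zero_grad(set_to_none=True)  # safe: gradients already applied
```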
Thanks for fixing this! However, I find that removing self.policy.zero_grad(set_to_none=True) immediately causes a CUDA out-of-memory error. I'm running PPO training on 8 A100 GPUs. Have you encountered this issue?