Make the run directory rank-local; fix checkpoints saving and restoring #215
Conversation
- Sharing the run directory across ranks won't work in multi-node training. This change makes the run directory rank-local (see the sketch after this list).
- Fixed callbacks and loggers to support rank-local run directories. Specifically, wandb and the run directory uploader now run on all ranks, not just rank zero.
- When using DeepSpeed with ZeRO stage 1+, each rank writes to the checkpoint folder. Previously, only rank zero's data was being stored. Now, each rank's data is stored by the rank-local run directory uploader. The checkpoint loader takes a checkpoint path that is parameterized by the rank, so each node will load only the shards of the checkpoint that it needs.
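As a rough sketch of the rank-local layout described above (the environment variable names and the `get_run_directory` resolution shown here are illustrative assumptions, not necessarily composer's actual behavior):

```python
import os

def get_run_directory() -> str:
    """Sketch: resolve a rank-local run directory.

    Assumes the launcher sets RANK, and that a base directory may be
    provided via an environment variable; both names are illustrative.
    """
    base = os.environ.get("COMPOSER_RUN_DIRECTORY", os.path.join("runs", "default"))
    rank = int(os.environ.get("RANK", "0"))
    # Suffix the base with the global rank so every rank -- on the same
    # node or a different one -- writes to its own directory.
    path = os.path.join(base, f"rank_{rank}")
    os.makedirs(path, exist_ok=True)
    return path

# Callbacks and loggers then join filenames onto this path, e.g.:
# os.path.join(get_run_directory(), "wandb")
```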
…omposer into ravi/rank_local_run_directory
There's a major issue with the intra-node communication in multi-node settings, and I'll want to take another look after that's fixed. Other than that, it's looking good!
Removed run_directory.get_relative_to_run_directory
Looks good now! Thanks for cleaning all that up!
Fixed the run directory and DeepSpeed checkpointing:

- Creating a run directory by default, if one was not set via an environment variable
- Removed `run_directory.get_relative_to_run_directory(...)`; replaced it with `os.path.join(run_directory.get_run_directory(...))`
- Having one run directory per node (i.e. sharing the run directory across local ranks) won't work in multi-node training. This change makes the run directory rank-local (so each rank sees its own run directory)
- Added a helper method `run_directory.get_node_run_directory()`. This should not really be used directly.
- Switched the run directory uploader and wandb to run on every rank instead of just rank zero. This ensures that all artifacts will be properly stored.
- Fixed DeepSpeed checkpointing. When using DeepSpeed with ZeRO stage 1+, each rank wrote to the folder specified, but only rank 0's data was being stored, meaning we were losing rank 1+'s state (i.e. the optimizer state); the DeepSpeed checkpointing tests were broken. Now, each rank's data is stored by the rank-local run directory uploader (see the bullet above), so there are N files per checkpoint.
- To restore a checkpoint, you need the global rank zero checkpoint (which contains the model) and the rank-n checkpoint (which contains the optimizer state). To support this, the checkpoint loader takes a checkpoint path that is parameterized by the rank, so each rank can fetch the files it needs. As an optimization, the LOCAL rank zero on each node is responsible for downloading the GLOBAL rank zero checkpoint (it would be redundant to download this file multiple times per node). A restore sketch follows this list.
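A minimal restore sketch under the scheme above. The `{rank}` placeholder, the `fetch` helper, and the `LOCAL_RANK` environment variable are illustrative assumptions, not composer's actual API:

```python
import os
import shutil
import torch
import torch.distributed as dist

def fetch(remote_path: str, local_dir: str) -> str:
    """Stand-in for the run directory uploader's download; here a plain copy."""
    os.makedirs(local_dir, exist_ok=True)
    local_path = os.path.join(local_dir, os.path.basename(remote_path))
    shutil.copy(remote_path, local_path)
    return local_path

def load_sharded_checkpoint(path_template: str, local_dir: str):
    """Restore a rank-sharded checkpoint (DeepSpeed ZeRO stage 1+).

    path_template is parameterized by rank, e.g.
    "checkpoints/ep1_rank_{rank}.pt" (an assumed format).
    """
    rank = dist.get_rank() if dist.is_initialized() else 0
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))

    # Only LOCAL rank zero downloads the GLOBAL rank zero shard (the
    # model weights), so each node fetches it once rather than N times.
    if local_rank == 0:
        fetch(path_template.format(rank=0), local_dir)
    if dist.is_initialized():
        dist.barrier()  # ensure the shared shard exists before reading it

    # Every rank additionally fetches its own shard (its optimizer state).
    own_shard = fetch(path_template.format(rank=rank), local_dir)

    model_state = torch.load(
        os.path.join(local_dir, os.path.basename(path_template.format(rank=0))),
        map_location="cpu",
    )
    optim_state = torch.load(own_shard, map_location="cpu")
    return model_state, optim_state
```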
TODO: