
Make the run directory rank-local; fix checkpoints saving and restoring #215

Merged: 12 commits into dev from ravi/rank_local_run_directory on Jan 14, 2022

Conversation

@ravi-mosaicml (Contributor) commented Jan 11, 2022

Fixed the run directory and deepspeed checkpointing:

  • Creating a run directory by default, if one was not set via an environment variable

  • Removed run_directory.get_relative_to_run_directory(...); replaced it with os.path.join(run_directory.get_run_directory(), ...)

  • Having one run directory per node (i.e. sharing the run directory across local ranks) won't work in multi-node training. This change makes the run directory rank-local, so each rank sees its own run directory (see the sketch after this list).

  • Added a helper method run_directory.get_node_run_directory(). This should not generally be used directly.

  • Switched the run directory uploader and wandb to run on every rank instead of just rank zero. This ensures that all artifacts will be properly stored.

  • Fixed deepspeed checkpointing:

    Specifically, when using deepspeed with ZeRO stage 1 or higher, each rank wrote to the specified checkpoint folder, but only rank 0's data was being stored, so we were losing the state of ranks 1 and up (i.e. the optimizer state), and the deepspeed checkpointing tests were broken. Now each rank's data is stored by the rank-local run directory uploader (see the bullet above), so there are N files per checkpoint, one per rank.

    To restore a checkpoint, you need the global rank zero checkpoint (which contains the model) and the rank n checkpoint (which contains the optimizer state). To support this, the checkpoint loader takes a checkpoint path that is parameterized by the rank, so each rank can fetch the files it needs. As an optimization, the LOCAL rank zero on each node is responsible for downloading the GLOBAL rank zero checkpoint (it would be redundant to download this file multiple times per node).
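For concreteness, here is a minimal sketch of what a rank-local run directory layered under a node-level directory could look like, and how plain os.path.join replaces the old get_relative_to_run_directory helper. The environment variable names and layout below are assumptions for illustration, not the actual composer implementation.

```python
import os


def get_node_run_directory() -> str:
    """Run directory shared by all local ranks on a node, with a default
    used when the (assumed) environment variable is unset."""
    return os.environ.get("COMPOSER_RUN_DIRECTORY", os.path.join("runs", "default"))


def get_run_directory() -> str:
    """This rank's private run directory: a per-rank subfolder of the node directory."""
    global_rank = int(os.environ.get("RANK", "0"))
    run_dir = os.path.join(get_node_run_directory(), f"rank_{global_rank}")
    os.makedirs(run_dir, exist_ok=True)
    return run_dir


# Instead of run_directory.get_relative_to_run_directory("checkpoints"):
checkpoint_folder = os.path.join(get_run_directory(), "checkpoints")
```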

TODO:

  • Do a wandb run to see what groups look like
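As a rough illustration of what per-rank wandb runs grouped under one job might look like (the grouping scheme the TODO above is meant to sanity-check), here is a hypothetical sketch; the project, group, and naming choices are assumptions, not composer's actual wandb logger.

```python
import os

import wandb

global_rank = int(os.environ.get("RANK", "0"))

wandb.init(
    project="composer",                       # illustrative project name
    group=os.environ.get("RUN_NAME", "run"),  # assumed: all ranks share one group
    name=f"rank_{global_rank}",               # assumed: one wandb run per rank
)

wandb.log({"loss": 0.0})  # every rank logs and uploads its own artifacts
wandb.finish()
```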

- Sharing the run directory across ranks won't work in multi-node training. This change makes the run directory rank-local
- Fixed callbacks and loggers to support rank-local run directories. Specifically, wandb and the run directory uploader now run on all ranks, not just rank zero
- When using deepspeed with ZeRO stage 1 or higher, each rank writes to the checkpoint folder. Previously, only rank zero's data was being stored. Now, each rank's data is stored by the rank-local run directory uploader. The checkpoint loader takes a checkpoint path that is parameterized by the rank, so each node will load only the checkpoint shards it needs (sketched below).
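To make the restore flow concrete, here is a sketch (with assumed function and variable names, not composer's actual checkpoint loader) of resolving a rank-parameterized checkpoint path: every rank needs the global rank zero file (the model) plus its own file (its optimizer state), and only local rank zero fetches the shared rank zero file for its node.

```python
import os
import shutil


def checkpoint_files_to_load(template: str, global_rank: int, local_rank: int,
                             node_run_directory: str) -> list:
    """Return the checkpoint files this rank should load: the global rank zero
    checkpoint (model weights) plus this rank's own checkpoint (optimizer state)."""
    rank_zero_ckpt = template.format(rank=0)      # e.g. "ep10/rank_0.pt"
    own_ckpt = template.format(rank=global_rank)  # e.g. "ep10/rank_3.pt"

    # Only local rank zero materializes the rank zero checkpoint into a folder
    # shared by all ranks on the node, so it is fetched once per node.
    shared_copy = os.path.join(node_run_directory, os.path.basename(rank_zero_ckpt))
    if local_rank == 0 and not os.path.exists(shared_copy):
        os.makedirs(node_run_directory, exist_ok=True)
        shutil.copy(rank_zero_ckpt, shared_copy)  # stand-in for a remote download
    # (In practice, non-zero local ranks would wait on a barrier before reading it.)

    return [shared_copy, own_ckpt]
```

For example, checkpoint_files_to_load("ep10/rank_{rank}.pt", global_rank=3, local_rank=1, node_run_directory="/tmp/node_shared") would return the node-shared copy of rank 0's checkpoint and rank 3's own shard.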
@jbloxham (Contributor) left a comment


There's a major issue with the intra-node communication in multinode settings, and I'll want to take another look after that's fixed. Other than that, it's looking good!

Review threads (resolved) on:
- composer/loggers/logger_hparams.py
- composer/loggers/wandb_logger.py
- composer/utils/dist.py
- composer/trainer/checkpoint.py
@jbloxham (Contributor) left a comment


Looks good now! Thanks for cleaning all that up!

@ravi-mosaicml merged commit 1d77070 into dev on Jan 14, 2022
@ravi-mosaicml deleted the ravi/rank_local_run_directory branch on January 14, 2022 at 23:07
coryMosaicML pushed a commit to coryMosaicML/composer that referenced this pull request Feb 23, 2022
…ng (mosaicml#215)

Successfully merging this pull request may close these issues.

Checkpoint loader should download checkpoints only on rank 0