
Fix drop_first checking in partitioning to account for world_size divisibility #706

Merged · 15 commits into main · Jun 18, 2024

Conversation

@snarayan21 (Collaborator) commented Jun 18, 2024

Description of changes:

This PR loosens the epoch_size check in the sample partition creation function. If we create a partition over epoch_size samples but epoch_size is not divisible by the world size, we repeat a few samples so that the effective epoch size becomes divisible by the world size. The existing epoch_size check did not account for this padding. An explanation of why we have to repeat a few samples is below.

When we partition all samples over nodes/ranks/workers, we need to repeat some samples to make sure that the number of samples in the epoch is divisible by the world size. This need arises because:

  1. Users don't want samples to be unnecessarily dropped during training.
  2. In distributed training, dataloaders across all devices should have the same behavior when a partial global batch is being processed.

For 1., conversations with stakeholders and customers indicated that repeating just a few samples is a better tradeoff than dropping them entirely. For example, suppose someone sets epoch_size to 500 with a world size of 32. Then 12 samples are repeated to bring the epoch size to 512, making it divisible by the world size so that every GPU is assigned the same number of samples (see the sketch below).
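
A minimal Python sketch of this padding arithmetic (hypothetical helper name, not the library's actual API), assuming the padded epoch size is the smallest multiple of the world size that is at least epoch_size:

```python
# Hypothetical sketch: pad the epoch so its size divides evenly by the world size.
import math

def pad_epoch(epoch_size: int, world_size: int) -> tuple[int, int]:
    """Return (padded_epoch_size, num_repeated_samples)."""
    padded_epoch_size = math.ceil(epoch_size / world_size) * world_size
    return padded_epoch_size, padded_epoch_size - epoch_size

padded, repeated = pad_epoch(epoch_size=500, world_size=32)
print(padded, repeated)  # 512 12 -> each of the 32 GPUs gets exactly 16 samples
```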

We need 2. because imagine a training step where some GPUs have a full batch of samples while others have only a partial batch. If drop_last=True for the dataloader, the GPUs with a full batch will attempt to train, but the GPUs with a partial batch will drop it, causing an error during training since some GPUs take a step while others don't. Repeating samples so that epoch_size is divisible by the world size takes care of this: each GPU has the same number of samples in every single global batch, so either all GPUs have a full batch or all GPUs have a partial batch. As a result, we never run into a case where some GPUs attempt to train while others don't. The toy comparison below illustrates this.
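
As a toy illustration (assumed names and numbers, not library code), here is how per-rank batch counts diverge without padding and agree with it:

```python
# Hypothetical illustration of why divisibility matters for drop_last consistency.
def batches_per_rank(samples_on_rank: int, batch_size: int, drop_last: bool) -> int:
    full, partial = divmod(samples_on_rank, batch_size)
    return full if drop_last else full + (1 if partial else 0)

world_size, batch_size, epoch_size = 32, 4, 500

# Unpadded: 500 samples over 32 ranks -> 20 ranks get 16 samples, 12 ranks get 15.
unpadded = [epoch_size // world_size + (1 if r < epoch_size % world_size else 0)
            for r in range(world_size)]
print({batches_per_rank(n, batch_size, drop_last=True) for n in unpadded})  # {3, 4}: ranks disagree

# Padded to 512: every rank gets exactly 16 samples, so every rank sees 4 batches.
padded = [512 // world_size] * world_size
print({batches_per_rank(n, batch_size, drop_last=True) for n in padded})    # {4}: ranks agree
```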

A manual test run (resumption-issue-testing-500ep-WoeCrv) successfully resumed, addressing the bug report that raised this issue in the first place.

Originally reported here.

Issue #, if available:

Merge Checklist:

Put an x (without spaces) in the boxes that apply. If you are unsure about any checklist item, please don't hesitate to ask. We are here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

  • I have read the contributor guidelines
  • This is a documentation change or typo fix. If so, skip the rest of this checklist.
  • I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the MosaicML team.
  • I have updated any necessary documentation, including README and API docs (if appropriate).

Tests

  • I ran pre-commit on my change. (check out the pre-commit section of prerequisites)
  • I have added tests that prove my fix is effective or that my feature works (if appropriate).
  • I ran the tests locally to make sure they pass. (check out testing)
  • I have added unit and/or integration tests as appropriate to ensure backward compatibility of the changes.

@dakinggg (Contributor) left a comment:

lgtm, wouldn't mind another pair of eyes since I haven't looked at this code in a long time

@mvpatel2000 (Contributor) left a comment:

LGTM

Review thread on streaming/base/partition/__init__.py (outdated, resolved)
Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
@karan6181 (Collaborator) left a comment:

LGTM. Thank You!

@snarayan21 enabled auto-merge (squash) June 18, 2024 18:57
@snarayan21 merged commit 27d61d8 into main Jun 18, 2024
8 checks passed
@snarayan21 deleted the saaketh/partition_repetitions branch June 18, 2024 19:02