
Remove evaluator state #3339

Merged: snarayan21 merged 1 commit into dev from saaketh/eval_dataset_state_nuke on May 29, 2024

Conversation

@snarayan21 snarayan21 (Contributor) commented May 29, 2024

What does this PR do?

We don't actually use eval state: currently we set the eval dataloader's state only if the dataset is a StreamingDataset, and we always set the current sample to 0, meaning eval starts from the beginning. This is unnecessary, and it causes errors when resuming training on a different number of GPUs. Since we iterate through the entire eval dataloader every time we run eval, there is no state that needs to be preserved for eval datasets. This PR removes that functionality completely.
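For context, the removed behavior amounted to roughly the following. This is a hypothetical sketch rather than the exact Composer code: `_load_eval_dataloader_state` is an invented name, and the state dict passed to `StreamingDataset.load_state_dict` is abbreviated.

```python
# Hypothetical sketch of the removed behavior -- NOT the exact Composer code.
from streaming import StreamingDataset


def _load_eval_dataloader_state(eval_dataloader) -> None:
    """On checkpoint resume, reset the eval dataset's position to sample 0."""
    dataset = eval_dataloader.dataset
    if isinstance(dataset, StreamingDataset):
        # Eval always restarted from the beginning, so this state carried no
        # information -- yet loading it still forced Streaming to re-partition
        # the current global batch over the world size of the original run.
        dataset.load_state_dict({
            'epoch': 0,
            'sample_in_epoch': 0,  # start eval from the first sample
            # (abbreviated: real Streaming state dicts include more fields,
            # e.g. num_canonical_nodes and shuffle_seed)
        })
```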

Specifically, this caused errors when resuming on a different number of GPUs with device_eval_batch_size set. When going from 24 to 32 GPUs, with both runs specifying a device_eval_batch_size of 2, Streaming would try to resume using a global eval batch size of 64 samples, but the state dict recorded that 24 devices had been used in the initial run. To resume deterministically, Streaming partitions the current global batch size over the initial number of devices, and 64 samples cannot be split evenly across 24 devices, so an error is thrown.
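To make the failure arithmetic concrete, here is a self-contained illustration (not Streaming's actual partitioning code):

```python
# Resumption arithmetic behind the error described above.
device_eval_batch_size = 2
new_num_gpus = 32       # world size of the resumed run
initial_num_gpus = 24   # world size recorded in the saved state dict

# Both runs use the same per-device eval batch size, so the resumed run's
# global eval batch size is:
global_eval_batch_size = device_eval_batch_size * new_num_gpus  # 64 samples

# For deterministic resumption, Streaming partitions the current global batch
# over the *initial* number of devices, which requires even divisibility:
assert global_eval_batch_size % initial_num_gpus == 0, (
    f'{global_eval_batch_size} samples cannot be partitioned evenly over '
    f'{initial_num_gpus} initial devices'
)
# -> AssertionError: 64 % 24 == 16, so resumption fails
```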

Test runs have succeeded:

  • 1b-dense-fsdp-shardgradop-lion8b-fullckpt-resume-s44ELm: resumes from an old checkpoint correctly
  • 1b-dense-fsdp-shardgradop-lion8b-fullckpt-start-HDVoIC: saves a new checkpoint with this PR
  • 1b-dense-fsdp-shardgradop-lion8b-fullckpt-resume-VPH2X6: resumes from the new checkpoint correctly

What issue(s) does this change relate to?

Before submitting

  • Have you read the contributor guidelines?
  • Is this change a documentation change or typo fix? If so, skip the rest of this checklist.
  • Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so.
  • Did you update any related docs and document your change?
  • Did you update any related tests and add any new tests related to your change? (see testing)
  • Did you run the tests locally to make sure they pass?
  • Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@mvpatel2000 mvpatel2000 (Contributor) left a comment

Can you describe the errors this caused?

@snarayan21 snarayan21 (Contributor, Author)

@mvpatel2000 updated the PR description. I'm not going to merge this until Kushal can resolve the issue with this branch.

@snarayan21 snarayan21 (Contributor, Author)

Test runs succeeded (see PR description); merging.

@snarayan21 snarayan21 merged commit 85f7778 into dev May 29, 2024
15 checks passed
@snarayan21 snarayan21 deleted the saaketh/eval_dataset_state_nuke branch May 29, 2024 20:30