Update numpy and pytorch seeding for dataloader and multiple processes per machine. #299
Summary:
Current state:
Currently we set a different seed per node, but the same seed for all training processes on a node. Meanwhile, each DataLoader worker's seed differs per epoch, but is non-deterministic.
Proposed State:
A different random number seed for each dist_rank, and a different, deterministic seed for each DataLoader worker per epoch.
https://fb.quip.com/hVIcAahpVLo2
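Below is a minimal sketch of what such a scheme could look like; it is not the exact implementation in this PR, and `base_seed`, `dist_rank`, `epoch`, and `num_workers` are assumed to come from the training setup.

```python
import random

import numpy as np
import torch


def set_process_seed(base_seed: int, dist_rank: int) -> None:
    # Each distributed rank seeds its training process differently.
    seed = base_seed + dist_rank
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def make_worker_init_fn(base_seed: int, dist_rank: int, epoch: int, num_workers: int):
    # Build a worker_init_fn so every DataLoader worker gets a seed that is
    # deterministic and unique across (epoch, rank, worker_id).
    def worker_init_fn(worker_id: int) -> None:
        seed = base_seed + epoch * 1_000_000 + dist_rank * num_workers + worker_id
        seed %= 2**32  # keep within numpy's accepted seed range
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    return worker_init_fn
```

In a sketch like this, rebuilding the `worker_init_fn` at the start of each epoch is what makes the per-epoch worker seeds deterministic instead of inherited from the parent process, which also addresses the fork start-method issue noted under Effects below.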
Effects:
Fixes randomization for a few losses, hooks, and trunks. Fixes randomization when using the fork multiprocessing start method for transformations. Fixes the collapse of all seeds to 0 when the config seed is set to 0.
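As a hypothetical illustration of that last point: if per-rank seeds were derived multiplicatively from the config seed, a config seed of 0 would collapse every rank's seed to 0, whereas an additive combination keeps them distinct. The function names below are illustrative, not taken from the codebase.

```python
def buggy_seed(config_seed: int, dist_rank: int) -> int:
    # Multiplicative derivation: every rank collapses to 0 when config_seed == 0.
    return config_seed * (dist_rank + 1)


def fixed_seed(config_seed: int, dist_rank: int) -> int:
    # Additive derivation: ranks stay distinct even when config_seed == 0.
    return config_seed + dist_rank


assert {buggy_seed(0, r) for r in range(4)} == {0}
assert len({fixed_seed(0, r) for r in range(4)}) == 4
```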
There are 3 changes, summarized as:
Differential Revision: D27784137