Some more context: we have a dataset with ~1.2 billion samples at ~1 MB/sample. The index.json file of the merged dataset will be tens of GB, which makes the dataset prohibitively slow to initialize.
Hey, we have seen index.json load times be slow. I think that this is because we download the index file on every single rank, rather than downloading it on just one rank and then broadcasting its contents to other ranks. Downloading a file that's a few GB from cloud storage just on one rank should be relatively fast. This would be a good enhancement but isn't high priority for us right now -- if it's not too much of a hassle, mind submitting a PR?
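The download-on-one-rank-then-broadcast idea described above could be sketched roughly as follows. This is not Streaming's actual API; the function name and the `download_and_parse` callback are hypothetical, and it assumes `torch.distributed` is the process-group backend:

```python
import torch.distributed as dist


def load_index_once(download_and_parse):
    """Fetch index.json on rank 0 only, then broadcast the parsed
    contents to every other rank instead of downloading per rank.

    `download_and_parse` is a hypothetical callable that downloads
    the remote index.json and returns it as a Python dict.
    """
    rank = dist.get_rank() if dist.is_initialized() else 0
    holder = [None]
    if rank == 0:
        holder[0] = download_and_parse()
    if dist.is_initialized() and dist.get_world_size() > 1:
        # Pickles the object on rank 0 and sends it to all other ranks.
        dist.broadcast_object_list(holder, src=0)
    return holder[0]
```

Under multi-node training this would turn N downloads of a multi-GB file into one download plus an in-cluster broadcast, which is usually much cheaper than N round trips to cloud storage.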
🚀 Feature Request
Large index.json files are slow to load. Currently, I am trying to increase the shard size so that stream.py#L473 will be faster (hopefully).

Motivation
These two steps are very slow for large index.json files, especially with large-scale datasets (e.g., billions of samples).
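To illustrate why initialization time grows with dataset size, here is a small synthetic sketch. The shard-entry fields are illustrative rather than Streaming's exact index schema; the point is that parse time scales linearly with the number of shard entries, so an index covering billions of samples can take minutes to load:

```python
import json
import time


def make_index(num_shards):
    """Build a synthetic index dict with one entry per shard.
    (Field names are illustrative, not the real schema.)"""
    return {
        "version": 2,
        "shards": [
            {
                "format": "mds",
                "samples": 1000,
                "raw_data": {"basename": f"shard.{i:05}.mds"},
            }
            for i in range(num_shards)
        ],
    }


# Serialize, then time only the parse step.
blob = json.dumps(make_index(10_000))
t0 = time.perf_counter()
index = json.loads(blob)
elapsed = time.perf_counter() - t0
print(len(index["shards"]), f"{elapsed:.3f}s")
```

Scaling the shard count up by a few orders of magnitude (or fattening each entry) extrapolates roughly linearly, which is consistent with the tens-of-GB index described above being prohibitively slow.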