GPU utilisation drop between epochs #643
Cycling/interleaving/etc. multiple StreamingDatasets/StreamingDataLoaders has the potential to result in complicated coordination situations. Instead, why not just use Streams? The sample space of a StreamingDataset is the concatenation of one or more Streams, each of which corresponds to a serialized Streaming dataset directory, i.e. they are sub-datasets. (Technically, if you use StreamingDataset you are already using Streams: when you don't provide Streams explicitly, SD creates a single Stream implicitly behind the scenes and hands various arguments off to it.)

```python
from streaming import Stream, StreamingDataset

first = Stream(local=..., remote=...)
second = Stream(local=..., remote=...)
dataset = StreamingDataset(streams=[first, second], batch_size=...)
```
@rishabhm12 You should also make sure to set persistent_workers=True on your DataLoader. EDIT: using the Composer launcher instead of TorchDistributor, and not setting persistent_workers=True, seems to address the problem in my testing.
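For reference, a minimal sketch of the persistent_workers setting being discussed, with a toy map-style dataset standing in for a StreamingDataset (the dataset and sizes here are made up for illustration):

```python
# Sketch: with persistent_workers=True, DataLoader worker processes
# (and the dataset copies they hold) survive between epochs instead of
# being torn down and re-created at each epoch boundary.
# ToyDataset is a stand-in for your real StreamingDataset.
from torch.utils.data import DataLoader, Dataset


class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return idx


loader = DataLoader(
    ToyDataset(),
    batch_size=4,
    num_workers=2,            # persistent_workers requires num_workers > 0
    persistent_workers=True,  # keep workers alive across epochs
)

for epoch in range(2):
    for batch in loader:
        pass  # training step would go here
```

Note that persistent_workers only helps when worker (re)initialization is the expensive part; it does nothing about shard download time itself.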
@knighton the same behaviour persists even with a single data loader when streaming from remote. @snarayan21 will keeping the workers alive completely solve the utilization drop, or only slightly improve the downtime (say from 30 min to 25 min)? Have you tried changing the argument? I was of the opinion that this behaviour shows up when forward + back prop time << time taken to download shards from remote to local.
@rishabhm12 This should solve the utilization drop if the issue was re-creating the worker StreamingDatasets. As I don't have your script, I don't know the exact improvement it will give you, but we've seen this in the past and it has addressed the problem. Also, since your job is continuing to run on the same set of GPUs/nodes, StreamingDataset is smart about partitioning shard downloads between epochs, so no shard files will have to be re-downloaded. Let us know if setting persistent_workers=True helps!
@snarayan21 we did try setting the persistent_workers flag, but the utilization drop persists.
@rishabhm12 Ah, that's not good. Mind sending over a version of your training script we can repro? Would love to get to the bottom of this. Also, I don't think your screenshot uploaded fully 😅 Thanks!
@miguelalba96 @rishabhm12 Can you make sure the correct batch_size is being passed to your StreamingDatasets? Any minimal repro script either of you could provide so we can debug this would be greatly appreciated!
I'm having the same issue too. @snarayan21 setting persistent workers didn't help.
@snarayan21 I am passing the local batch size to StreamingDatasets. I have shared the scripts with the Databricks team; they will get in touch with you.
@snarayan21 @knighton Here is my code. I have only one stream source:
And I think the wait between epochs is also batch-size dependent: the higher the batch size, the higher the wait time.
@rishabhm12 Yes, and it only happens on multi-GPU; on a single GPU it is fine. I'm using AMD ROCm RCCL.
@rishabhm12 @smilenaderi Are both of you using pre-processing functions for each sample/batch before training? Also, how big are your batch sizes and epoch sizes?
@miguelalba96 In the past we've seen that treating GCSFuse as "local" can be slow. Have you tried treating it as remote, or moving your data to local disk?
@snarayan21 yeah, there's light preprocessing happening: basically an O(1) lookup and converting NumPy arrays to torch tensors.
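As an aside, the NumPy-to-tensor conversion step is unlikely to be the bottleneck if done zero-copy. A small sketch of the difference (array contents are made up for illustration):

```python
# Sketch: converting NumPy arrays to torch tensors.
# torch.from_numpy shares memory with the array (zero-copy),
# while torch.tensor(...) allocates and copies.
import numpy as np
import torch

arr = np.arange(4, dtype=np.float32)

view = torch.from_numpy(arr)  # zero-copy view of the same buffer
copy = torch.tensor(arr)      # independent copy of the data
```

Mutating `arr` in place is visible through `view` but not through `copy`, which confirms which conversion copies.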
You could log your actual disk read/write speed and see if your DataLoaders are IO-bound. I had this issue as well when loading big images from a local drive while also cropping them (throwing a lot of data away right away) while still wanting big batch sizes. Increasing the number of workers even decreased read MB/sec because (I guess) the head of my hard drive now had to jump between locations.
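A quick way to get the read-speed number mentioned above, using only the standard library; `measure_read_mb_per_s` is a hypothetical helper, not part of any library:

```python
# Sketch: measure sequential read throughput of a file to check whether
# the DataLoader is IO-bound. Hypothetical helper for illustration.
import time


def measure_read_mb_per_s(path, block_size=1 << 20):
    """Read `path` sequentially in 1 MiB blocks; return throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return (total / 1e6) / elapsed
```

Comparing this number against the bytes/sec your training loop actually consumes tells you whether storage is the bottleneck (note that OS page caching inflates the result on a second read of the same file).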
I transferred the data (1.1 TB) locally to the instance and the low GPU utilisation problem was solved. It's cheaper to pay extra for a TB of SSD storage than for GPU hours.
@miguelalba96 some things you could try, given that local disk works well where the FUSE mount doesn't:
let me know if that helps!
I am curious, what launchers are people using? I have reproduced the issue of low utilization between epochs when using TorchDistributor, but the issue goes away with the Composer launcher. |
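For anyone comparing launchers, a Composer launch looks roughly like this; this is a hedged command-line fragment, with the GPU count and script name as placeholders rather than values from this thread:

```shell
# Hypothetical invocation: run train.py on 8 local GPUs with the
# Composer launcher instead of TorchDistributor.
composer -n 8 train.py
```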
Hey @rishabhm12 @Matagi1996 @miguelalba96 @smilenaderi -- |
Hi Team,
I am training a model for a retail use case, and I have three streaming data loaders which are interleaved (multitask setup). After the first epoch, I see a GPU utilisation drop. How can I keep utilisation close to 100%? I observe the same behaviour if I load from local disk too, by providing remote=None and local=local_path for all three data loaders. When I train with just one data loader loading from local disk, I do not observe this behaviour. Ideally, I want the data to be streamed from the remote location (GCS in this case) with utilisation close to 100%. The drop leads to a lot of idle time and high training cost.
I also tried playing with the predownload param, but there was no improvement as such.