I observe that the validation phase is much slower than the training phase on large validation sets and multi-GPU machines #13142

Open
1 task done
ASharpSword opened this issue Jun 27, 2024 · 5 comments
Labels
question Further information is requested

Comments

@ASharpSword

Search before asking

Question

Hello, dear author. I observed that validation runs on only one GPU and is very slow, no matter how many GPUs the machine has. Here is a question I'd like to ask from a novice perspective: why not make the validation phase multi-GPU parallel as well? Is it impossible, unnecessary, or simply something you haven't had time to do? Since I have recently been looking for a way to reduce validation time, I was wondering whether there is an existing solution that could save me some time. If not, I will try multi-GPU parallel validation, just like multi-GPU training. Would that work? Please forgive me if I have caused any offence.

Additional

No response

@ASharpSword ASharpSword added the question Further information is requested label Jun 27, 2024
Contributor

👋 Hello @ASharpSword, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

@ASharpSword
Author

I am trying to create val_loader with the same parameters as train_loader and remove the restriction that only the master process can create val_loader. Next, I removed the constraint that validate.run() should only be run by the master process, and I removed the tqdm bar from validate.run() so that the progress bars of different processes don't interfere with each other and print too much output. However, with these changes each process only produces scattered validation results for part of the validation set instead of the complete set, so I have to combine the partial results from the different GPU processes to get a complete validation result. I don't know whether there is anything wrong with this approach; if so, I would ask the author to point it out.
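
Here is a minimal sketch of the kind of per-process val_loader I mean, using plain PyTorch's DistributedSampler rather than YOLOv5's own dataloader helpers (val_dataset, batch_size, and num_workers are placeholders):

import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler

def build_val_loader(val_dataset, batch_size, num_workers=4):
    # Give each rank a distinct, non-overlapping shard of the validation set.
    # shuffle=False keeps the evaluation order deterministic.
    sampler = DistributedSampler(
        val_dataset,
        num_replicas=dist.get_world_size(),
        rank=dist.get_rank(),
        shuffle=False,
        drop_last=False,
    )
    return DataLoader(
        val_dataset,
        batch_size=batch_size,
        sampler=sampler,
        num_workers=num_workers,
        pin_memory=True,
    )

One caveat: DistributedSampler pads the dataset so that every rank receives the same number of samples, so a few images can be evaluated twice; those duplicates would need to be filtered out if the metrics are to match single-GPU validation exactly.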

@ASharpSword
Author

I think I already know what I need to do. In training, the dataset is split equally across the n GPU processes, but only the progress of the master process (rank 0) is displayed, presented as the overall progress with pbar = tqdm(total=nb). I could present the partial validation progress of process 0 as the total progress in the same way with pbar = tqdm(total=nb), but I would have to rewrite the mAP calculation and the subsequent steps so that they work across multiple processes. A rough sketch of the gather step I have in mind is below.
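
This is untested and assumes each process has accumulated a list of per-image statistic tuples like the stats list built in val.py (the tuple layout is my assumption and should be checked against val.py):

import torch.distributed as dist

def gather_stats(local_stats):
    # local_stats: list of per-image tuples, e.g. (correct, conf, pred_cls, target_cls),
    # mirroring the stats list in val.py (names assumed, to be verified).
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, local_stats)  # every rank receives every list
    if dist.get_rank() != 0:
        return None  # only the master process runs the metric computation
    # Flatten back into one list, as if a single process had seen the whole validation set;
    # from here the existing single-process mAP code (e.g. ap_per_class) should run unchanged.
    return [item for rank_stats in gathered for item in rank_stats]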


@glenn-jocher
Member

Hello,

Thank you for your detailed observations and for sharing your approach to addressing the validation phase's performance on multi-GPU setups. Your insights are valuable and show a deep understanding of the underlying processes.

Indeed, the validation phase in YOLOv5 currently runs on a single GPU, which can become a bottleneck, especially with large validation sets. Your idea of distributing the validation workload across multiple GPUs is a promising approach to mitigate this issue.

Here are a few points to consider and some suggestions to help you refine your implementation:

  1. Distributed Validation: As you mentioned, splitting the validation set across multiple GPUs and aggregating the results is a viable solution. This approach requires careful handling of the results to ensure the final metrics (e.g., mAP) are correctly computed.

  2. Synchronization: Ensure that all GPU processes synchronize their results before computing the final metrics. This can be achieved using torch.distributed utilities to gather results from all processes.

  3. Progress Bar: Using tqdm for progress indication can be tricky in a multi-process environment. One approach is to update the progress bar only from the master process, as you suggested. Alternatively, you can use a custom logging mechanism to aggregate progress updates from all processes.

  4. Code Example: Here's a basic outline of how you might structure the validation loop with distributed processing (a small launch sketch follows after this list):

    import torch
    import torch.distributed as dist
    from tqdm import tqdm
    
    def validate(model, dataloader, device):
        model.eval()
        results = []
        with torch.no_grad():
            # Show the progress bar only on the master process (rank 0)
            for batch in tqdm(dataloader, desc="Validation", disable=dist.get_rank() != 0):
                inputs, targets = batch
                inputs = inputs.to(device)
                outputs = model(inputs)
                # Move predictions to CPU so the gathered objects don't hold CUDA tensors
                results.append((outputs.cpu(), targets))
    
        # Gather each rank's partial result list on every process (requires an initialized process group)
        all_results = [None] * dist.get_world_size()
        dist.all_gather_object(all_results, results)
        
        # Flatten the list of results
        all_results = [item for sublist in all_results for item in sublist]
        
        # Compute metrics (e.g., mAP) on the aggregated results
        metrics = compute_metrics(all_results)
        return metrics
    
    def compute_metrics(results):
        # Implement your metric computation logic here
        pass
  5. Testing and Debugging: Ensure you test your implementation thoroughly to verify that the distributed validation produces consistent and accurate results. You might want to start with a smaller dataset to simplify debugging.

  6. Community Contributions: If you achieve a robust solution, consider contributing it back to the YOLOv5 repository. The YOLO community would greatly benefit from improvements in multi-GPU validation performance.
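
As a rough complement to the outline in point 4, here is how that loop might be launched. The process-group setup is assumed rather than shown above, and build_model, build_val_loader, and val_ddp.py are placeholders, not YOLOv5 functions:

# Launch with, e.g.: torchrun --nproc_per_node=4 val_ddp.py
import os

import torch
import torch.distributed as dist

def main():
    # torchrun sets LOCAL_RANK (and RANK/WORLD_SIZE) for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)

    model = build_model().to(device)  # placeholder: load your model/weights here
    dataloader = build_val_loader()   # placeholder: a per-rank loader (e.g. with DistributedSampler)
    metrics = validate(model, dataloader, device)

    if dist.get_rank() == 0:
        print(metrics)
    dist.destroy_process_group()

if __name__ == "__main__":
    main()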

For further details on multi-GPU training and validation, you can refer to the Multi-GPU Training Tutorial.

Thank you again for your contributions and for pushing the boundaries of what's possible with YOLOv5. If you have any more questions or need further assistance, feel free to ask!
