
Multi-GPU, subprocess.CalledProcessError #3663

Closed
DeanZag opened this issue Jun 17, 2021 · 7 comments
Labels
question (Further information is requested), Stale

Comments


DeanZag commented Jun 17, 2021

I am trying to perform DL training with YOLOv5 on an HPC with two GPUs. I have followed all of the steps in the YOLOv5 Multi-GPU DistributedDataParallel Mode tutorial. The command I used to run the training is as follows:

```
(base) amrcnw@amrcnw-G482-Z54-00:~/Olive_project/Yolo_V5/yolov5$ python -m torch.distributed.launch --master_port 9963 --nproc_per_node 2 train.py --data ./data.yaml --cfg yolov5s.yaml --weights yolov5s.pt --epochs 3 --batch-size 64 --device 0,1
#*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/cuda/__init__.py:125: UserWarning:
A100-PCIE-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the A100-PCIE-40GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/cuda/__init__.py:125: UserWarning:
A100-PCIE-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the A100-PCIE-40GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Your branch is behind 'origin/master' by 457 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)

Using torch 1.6.0 CUDA:0 (A100-PCIE-40GB, 40536MB)
                    CUDA:1 (A100-PCIE-40GB, 40536MB)

Traceback (most recent call last):
  File "train.py", line 483, in <module>
    dist.init_process_group(backend='nccl', init_method='env://')  # distributed backend
  File "/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 422, in init_process_group
    store, rank, world_size = next(rendezvous_iterator)
  File "/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 172, in _env_rendezvous_handler
    store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
Traceback (most recent call last):
  File "/home/amrcnw/anaconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/amrcnw/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 261, in <module>
    main()
  File "/home/amrcnw/anaconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 256, in main
    raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/amrcnw/anaconda3/bin/python', '-u', 'train.py', '--local_rank=1', '--data', './data.yaml', '--cfg', 'yolov5s.yaml', '--weights', 'yolov5s.pt', '--epochs', '3', '--batch-size', '64', '--device', '0,1']' returned non-zero exit status 1.
```

I have also used the following command, which gave me the same error:

`python -m torch.distributed.launch --nproc_per_node 2 train.py --img 416 --batch 16 --epochs 300 --data ./data.yaml --cfg ./models/yolov5s.yaml --weights '' --name Test_3_10Fit yolov5s.pt`

There is an error with `subprocess.CalledProcessError`. Can someone explain to me what is wrong?
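
For reference, the `RuntimeError: Address already in use` line suggests the `--master_port` chosen for the rendezvous (9963 above) may already be bound by another process on the node, for example a stale run that never exited. A minimal sketch to check whether a port is free before launching (standard library only, not part of YOLOv5):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in (9963, 1234):
        print(port, "free" if port_is_free(port) else "already in use")
```

If the port is taken, picking a different `--master_port` (or killing the stale process) avoids the `TCPStore` failure shown in the traceback.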

DeanZag added the question label on Jun 17, 2021
Contributor

github-actions bot commented Jun 17, 2021

👋 Hello @DeanZag, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

Member

glenn-jocher commented Jun 17, 2021

@DeanZag your error is probably related to the warnings displayed on your console about a mismatch in your PyTorch installation.
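
A quick way to confirm this from Python (a minimal sketch; `torch.cuda.get_arch_list()` only exists in newer PyTorch releases, so it is guarded here):

```python
import torch

print("torch:", torch.__version__)            # e.g. 1.6.0 in your log
print("built for CUDA:", torch.version.cuda)  # needs to be 11.x for sm_80 (A100)
print("cuda available:", torch.cuda.is_available())

for i in range(torch.cuda.device_count()):
    # (major, minor) compute capability; an A100 reports (8, 0)
    print(i, torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))

# Newer PyTorch builds expose the architectures they were compiled for
if hasattr(torch.cuda, "get_arch_list"):
    print("compiled arch list:", torch.cuda.get_arch_list())
```

An A100 reports compute capability (8, 0), so the installed wheel needs to be a CUDA 11.x build; the sm_37–sm_75 list in your warning shows the current one is not.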

For best results I would recommend training in the Docker image:

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.

Author

DeanZag commented Jun 18, 2021

Hi @glenn-jocher

I had my images arranged by Roboflow, and they were working fine on a previous GPU (a GTX 1080). Currently, I am running the same images for training on an HPC containing two NVIDIA RTX A6000 GPUs with 2 TB of memory.

I re-cloned YOLOv5 using the following commands: `git clone https://github.com/ultralytics/yolov5` (clone repo), then `cd yolov5` and `pip install -r requirements.txt`.

Here's the output I got
[screenshot of training output]

Nothing happens after "Model Summary: 283 layers, 7063542 parameters, 7063542 gradients, 16.4 GFLOPs" (I pressed Ctrl+Z to stop the process).

[screenshot]

I am not sure what is wrong, as it worked fine a few weeks ago when I ran --evolve.

Additional question:

When I run the multi-GPU command `python -m torch.distributed.launch --master_port 1234 --nproc_per_node 2 train.py --batch-size 64 --data ./data.yaml --weights yolov5s.pt` it displays the output below.

[screenshot]

Thanks

@glenn-jocher
Member

@DeanZag your PyTorch install is incompatible with your hardware. This has nothing to do with YOLOv5. Visit https://pytorch.org/get-started/locally/

Author

DeanZag commented Jun 19, 2021

Thanks @glenn-jocher
I just fixed PyTorch using the following command:
`conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia`

I had to restart the HPC to resolve the NCCL issue, and it is working now.
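
For the NCCL side, a quick check I can run to confirm the new build ships a usable NCCL backend (a minimal sketch of my own, not part of YOLOv5):

```python
import torch
import torch.distributed as dist

# True when this PyTorch build includes the NCCL backend
print("NCCL backend available:", dist.is_nccl_available())

if torch.cuda.is_available():
    # Version of the NCCL library bundled with this PyTorch build
    print("NCCL version:", torch.cuda.nccl.version())
```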

Surprisingly, I used the original command to run training:

`python train.py --img 416 --batch 64 --epochs 300 --data ./data.yaml --cfg models/yolov5s.yaml --weights '' --name yolov5s_HPC_Train_results --cache`

and it is working fine.
[screenshot of training running]

I am not sure whether both GPUs are being used or just one, since I did not use the multi-GPU command `python -m torch.distributed.launch --master_port 1234 --nproc_per_node 2 train.py --batch-size 64 --data ./data.yaml --weights yolov5s.pt`.

Any ideas?

@glenn-jocher
Member

@DeanZag current code requires opt-in to any devices you want to use other than device 0:

`python -m torch.distributed.run --nproc_per_node 2 train.py --data coco128.yaml --device 0,1`
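
To double-check how many GPUs a given run can actually see from Python, a minimal sketch (independent of YOLOv5) is:

```python
import torch

# Number of CUDA devices visible to this process (respects CUDA_VISIBLE_DEVICES)
print("visible GPUs:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```

Watching `nvidia-smi` during training also shows which of the two cards have memory allocated and non-zero utilization.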

Contributor

github-actions bot commented Jul 20, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
