
How to do inference by multiple GPUs? #78

Closed
ZhengMengbin opened this issue Jul 5, 2019 · 6 comments

Comments

@ZhengMengbin

I want to use more than one GPU for inference. What should I do? Should I use the same setup as for training?

@tianzhi0549
Owner

Yes.

@super-wcg

@ZhengMengbin How did you set up multi-GPU training for the model? Can you show me your code?

@tianzhi0549
Owner

@super-wcg please use the following command line for multi-GPU inference.

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port=$((RANDOM + 10000)) \
    tools/test_net.py \
    --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
    MODEL.WEIGHT FCOS_R_50_FPN_1x.pth \
    TEST.IMS_PER_BATCH 8

@LIUhansen

How to choose a specific GPU?

@tianzhi0549
Owner

@LIUhansen please use export CUDA_VISIBLE_DEVICES="<GPU_ID>".
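For example, a sketch combining the two answers above: restrict the process to specific GPUs with CUDA_VISIBLE_DEVICES, then launch the same distributed inference command with nproc_per_node matching the number of visible GPUs. The GPU IDs (0 and 2) and batch size here are hypothetical; adjust them to your machine.

```shell
# Expose only physical GPUs 0 and 2 to the process;
# inside the program they appear as cuda:0 and cuda:1.
export CUDA_VISIBLE_DEVICES="0,2"

# Launch inference across the two visible GPUs,
# matching the multi-GPU command given earlier in this thread.
python -m torch.distributed.launch \
    --nproc_per_node=2 \
    --master_port=$((RANDOM + 10000)) \
    tools/test_net.py \
    --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
    MODEL.WEIGHT FCOS_R_50_FPN_1x.pth \
    TEST.IMS_PER_BATCH 2
```

Note that CUDA_VISIBLE_DEVICES must be set before the process starts; setting it inside a running Python process after CUDA has been initialized has no effect.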

@Tigerwander

But the dataloader speed is an issue: when I tested on coco_2017_val, the tqdm progress bar stayed at 0%, and when I debugged it, it seemed to be stuck at enumerate(tqdm(dataloader)).
