diff --git a/efficientdet/README.md b/efficientdet/README.md
index 7db802a0..59b0eff5 100644
--- a/efficientdet/README.md
+++ b/efficientdet/README.md
@@ -335,7 +335,7 @@ If you want to do inference for custom data, you can run
 
 You should check more details of runmode which is written in caption-4.
 
-## 9. Train on multi GPUs.
+## 9. Train on single-node GPUs.
 
 Create a config file for the PASCAL VOC dataset called voc_config.yaml and put this in it.
 
diff --git a/efficientdet/tf2/README.md b/efficientdet/tf2/README.md
index a31d62b2..74fe8e69 100644
--- a/efficientdet/tf2/README.md
+++ b/efficientdet/tf2/README.md
@@ -260,11 +260,11 @@ Finetune needs to use --pretrained_ckpt.
 If you want to continue to train the model, simply re-run the above command because the `num_epochs` is a maximum number of epochs.
 For example, to reproduce the result of efficientdet-d0, set `--num_epochs=300` then run the command multiple times until the training is finished.
 
-## 9. Train on multi GPUs.
+## 9. Train on single-node GPUs.
 
 Just add ```--strategy=gpus```
 
-## 10. Train on multi node GPUs.
+## 10. Train on multi-node GPUs.
 
 Following scripts will start a training task with 2 nodes.
 Start Chief training node.