
Darknet Polynomial LR Curve #18

Closed · glenn-jocher opened this issue Sep 24, 2018 · 8 comments
Labels: question (Further information is requested)

Comments

glenn-jocher (Member) commented Sep 24, 2018

I found darknet's polynomial learning rate curve here:

case POLY:
    return net->learning_rate * pow(1 - (float)batch_num / net->max_batches, net->power);

https://github.com/pjreddie/darknet/blob/680d3bde1924c8ee2d1c1dea54d3e56a05ca9a26/src/network.c#L111

If I use power = 4 from parser.c, then I plot the following curve (in MATLAB), assuming max_batches = 1563360 (160 epochs at batch_size 12, for 9771 batches/epoch). This leaves the final lr(1563360) = 0. This means that it is impossible for anyone to begin training a model from the official YOLOv3 weights and expect to resume training at lr = 0.001 with no problems. The model will clearly bounce out of its local minimum back into the huge gradients it first saw at epoch 0.

>> batch = 0:(9771*160);
>> lr = 1e-3 * (1 - batch./1563360).^4;
>> figure; plot(batch, lr, '.-'); xlabel('batch'); ylabel('learning rate');
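For reference, the same poly curve as a minimal Python sketch; the base lr, power and max_batches values are the ones assumed above, and poly_lr is a hypothetical helper, not darknet code:

# Hypothetical reproduction of darknet's POLY policy from network.c
def poly_lr(batch_num, lr0=1e-3, max_batches=1563360, power=4):
    return lr0 * (1 - batch_num / max_batches) ** power

print(poly_lr(0))        # 0.001 at the start of training
print(poly_lr(1563360))  # 0.0 at max_batches, so a resumed run restarts at lr = 0.001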

glenn-jocher added the question label on Sep 24, 2018
okanlv commented Sep 24, 2018

yolov3 uses the 'steps' policy to adjust the learning rate. At the end of training, lr = 0.00001, so it should converge with this learning rate using SGD. Why did you try to use the polynomial policy?
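For reference, a rough Python sketch of darknet's 'steps' policy; the values below (lr0 = 0.001, steps at 400000 and 450000, 0.1 scales) are assumptions taken from the stock yolov3.cfg discussed later in this thread:

# Hypothetical sketch of darknet's STEPS policy: apply every scale whose step has already passed
def steps_lr(batch_num, lr0=1e-3, steps=(400000, 450000), scales=(0.1, 0.1)):
    lr = lr0
    for step, scale in zip(steps, scales):
        if batch_num >= step:
            lr *= scale
    return lr

print(steps_lr(0))       # 0.001
print(steps_lr(400000))  # 0.0001
print(steps_lr(450000))  # 0.00001, the final lr mentioned above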

glenn-jocher (Member Author) commented Sep 24, 2018

Ohhhhh. I read about the polynomial lr curve in the v2 paper and thought it had carried over to v3. I'll implement the steps policy from the cfg file instead.

But something is odd. I thought yolov3 was trained for 160 epochs, but maybe not. It looks like in yolov3.cfg batch = 16 (batch_size, I think), and max_batches = 500200. trainvalno5k.txt has 117264 images in it, or 117264 / 16 = 7329 batches/epoch. 500200 / 7329 ≈ 68 epochs. Do you think this means yolov3 is fully trained in 68 epochs?
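A quick sanity check of that arithmetic in Python (numbers taken from the comment above):

# batches/epoch and epoch estimate from the cfg and image-list values quoted above
images, batch_size, max_batches = 117264, 16, 500200
batches_per_epoch = images // batch_size      # 7329
print(max_batches / batches_per_epoch)        # ~68.25 epochs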

okanlv commented Sep 24, 2018

Could you point out where the authors specified the epoch count in the yolov3 paper (or somewhere else)? I might have missed that.

glenn-jocher (Member Author) commented Sep 24, 2018

Section 3 of the yolov2 paper (aka YOLO "9000") has many training details. The v3 paper is completely missing these details though, which is why everyone is so confused translating it to PyTorch. I think I finally found the right loss function to use though; my latest commit can continue training at lr = 1e-5 without performance losses, I think. I haven't tested a full epoch yet, but the first ~2000 batches show stable P and R values. The main change I made was to merge the obj and noobj confidence loss terms (a rough sketch follows the quote below). I think you or @ydixon might have recommended the same change a few days ago. I'm hoping this is the missing link.

https://pjreddie.com/media/files/papers/YOLO9000.pdf
"Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.1, polynomial rate decay with a power of 4, weight decay of 0.0005 and momentum of 0.9 using the Darknet neural network framework [13]."

glenn-jocher (Member Author) commented:

Ah, I forgot to mention: in the spirit of this issue, I've implemented the correct yolov3 step lr policy now. This assumes 68 total epochs, with 0.1 lr drops at 80% and 90% completion, just like the cfg.

yolov3/train.py, lines 106 to 114 at commit 7416c18:

# Update scheduler (manual)
if epoch < 54:
    lr = 1e-3
elif epoch < 61:
    lr = 1e-4
else:
    lr = 1e-5
for g in optimizer.param_groups:
    g['lr'] = lr
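For comparison, the same drop-at-54-and-61 schedule could also be written with PyTorch's built-in MultiStepLR; the model and optimizer below are just placeholders to keep the sketch self-contained, not the repo's actual setup:

import torch
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(10, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# 10x lr drops at epochs 54 and 61 (~80% and ~90% of 68 epochs)
scheduler = MultiStepLR(optimizer, milestones=[54, 61], gamma=0.1)

for epoch in range(68):
    # ... train for one epoch ...
    scheduler.step()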

okanlv commented Sep 24, 2018

Ahh, they probably did not use the same training config in yolov3. I hope the training converges with the new loss term. Btw, you referenced the training of the classification network, not the detection network. The detection training in yolov2 should be:

We train the network for 160 epochs with a starting learning rate of 10^-3, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.0005 and momentum of 0.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC.

okanlv commented Sep 25, 2018

It seems fine to schedule the learning rate by the total number of epochs. You probably already know this, but darknet schedules the learning rate by the total number of batches processed during training. I am not sure which one is the better practice, although both methods give the same result for the standard .cfg file.

glenn-jocher (Member Author) commented:

@okanlv yes darknet tracks total batches, with 16 images per batch. I tracked the epochs instead. There's probably not much effect one way or the other.
