
is image_weights automatically disabled in the last epochs? #11999

Closed · 1 task done
tino926 opened this issue Aug 17, 2023 · 2 comments
Labels: question (Further information is requested), Stale

tino926 (Contributor) commented Aug 17, 2023

Search before asking

Question

I am new to using the --image-weights flag to train a model. Training goes well except for the last 25 epochs: during these epochs the training and validation losses continue to improve, but the mAPs start to decrease. Is there some mechanism (such as --image-weights) that is automatically disabled in the last few epochs? I have searched through the code but could not find anything related.

[three attached images: training/validation loss and mAP curves]
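
For reference, the mechanism being asked about lives near the top of the epoch loop in train.py. A paraphrased sketch (variable names follow the repository and may differ slightly between versions; opt, model, maps, nc, and dataset all come from train.py's surrounding scope):

import random  # already imported in train.py

# Runs once per epoch, for every epoch -- there is no condition on the epoch number.
if opt.image_weights:
    # class weights: classes with lower current mAP (maps) get proportionally larger weights
    cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc
    # per-image weights derived from which classes each image's labels contain
    iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)
    # resample the training indices for this epoch, biased toward high-weight images
    dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)

Because the weights are recomputed from the latest per-class mAPs every epoch, the sampling keeps shifting toward currently weak classes, which may itself make late-epoch mAP noisier.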

Additional

The training command:

CUDA_VISIBLE_DEVICES=0 python train.py \
  --project ... --cfg ... --data ... --img 480 --batch-size 64 --workers 16 --weights ... \
  --hyp ./data/hyps/hyp.scratch-low_mosYes_0.0072.yaml \
  --image-weights \
  --patience 200 --epoch 400 \
  --name test_imWgt \
  --resume ".../last.pt"

The content of hyp.scratch-low_mosYes_0.0072.yaml:

lr0: 0.0072  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 0.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.001  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)
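
For context on the schedule these values imply: with lr0=0.0072, lrf=0.01, and --epoch 400, the cosine one-cycle schedule has essentially bottomed out over the last 25 epochs. A minimal, self-contained sketch, assuming YOLOv5's one_cycle lambda from utils/general.py:

import math

def one_cycle(y1=0.0, y2=1.0, steps=100):
    # cosine ramp from y1 to y2 over `steps` epochs (as defined in utils/general.py)
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

lf = one_cycle(1.0, 0.01, 400)        # y2 = lrf, steps = epochs from this run
for epoch in (0, 375, 399):
    print(epoch, 0.0072 * lf(epoch))  # lr0 * multiplier
# 0   -> 0.0072
# 375 -> ~1.4e-4
# 399 -> ~7.2e-5

So by epoch 375 the learning rate is already within a factor of two of its final value, which makes an LR-driven explanation for the last-25-epochs behavior less likely on its own.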
tino926 added the question (Further information is requested) label Aug 17, 2023
github-actions (bot) commented

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions bot added the Stale label Sep 17, 2023
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 28, 2023
glenn-jocher (Member) commented

@tino926 the --image-weights flag is not automatically disabled in the last epochs; it continues to be applied throughout the entire training process. Other factors, such as the learning rate schedule, data augmentation, or model capacity, are more likely to be affecting your mAP in the end stages. Feel free to tweak these hyperparameters to further diagnose the issue. If you have other questions, check out the Ultralytics Docs.
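
One way to test this empirically would be to gate the per-epoch resampling on the epoch counter. A hypothetical tweak to the train.py snippet shown earlier (DISABLE_LAST is an invented knob, not a YOLOv5 option; the uniform fallback assumes dataset.indices defaults to range(dataset.n) as in LoadImagesAndLabels):

DISABLE_LAST = 25  # assumption: mirror the window where mAP degraded
if opt.image_weights and epoch < epochs - DISABLE_LAST:
    # ...recompute cw/iw and resample dataset.indices as in train.py...
    pass
else:
    # fall back to uniform sampling for the final epochs
    dataset.indices = range(dataset.n)

If mAP stops degrading with the weighting gated off, the biased resampling, rather than the LR schedule, would be the likely cause.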
