
YOLO-World-S fine-tuned on COCO cannot reproduce the reported results, and validation mAP trends downward #160

Open
shupinghu opened this issue Mar 20, 2024 · 20 comments
Labels: bug (Something isn't working), Working on it now!

Comments

@shupinghu

[Reproduction steps]
Hi, I downloaded yolo_world_s_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_train_pretrained-18bea4d2.pth and loaded it as the pre-trained model. My config file was copied from configs/finetune_coco/yolo_world_l_dual_vlpan_2e-4_80e_8gpus_finetune_coco.py and then modified: in _base_, I replaced yolov8_l_syncbn_fast_8xb16-500e_coco.py with yolov8_s_syncbn_fast_8xb16-500e_coco.py, and added "mixup_prob = 0.1" to yolov8_s_syncbn_fast_8xb16-500e_coco.py to fix an mmengine error.

In addition, I trained on a single V100. Since a single GPU's effective batch size is far from the 8-GPU setup, I increased the per-GPU batch size from 16 to 32.

[Observations]
With no other changes, the validation mAP started to drop from epoch 15 onward (from 0.414 / 0.576 at epoch 10 to 0.410 / 0.574 at epoch 15; training is now at epoch 55 and it has dropped to 0.377 / 0.541).

[Question and analysis]
Is there something wrong with my reproduction? (I checked, and it seems that after switching directly to the S model, mixup is not enabled. That could be a factor, but I don't think it alone would make the validation mAP trend downward.)

@wondervictor
Collaborator

Hi @shupinghu, we have also met the same problem: fine-tuning YOLO-World on COCO without mask-refine leads to performance degradation. We're checking it. However, you can enable mask-refine=True for better results currently.

@shupinghu
Author

> Hi @shupinghu, we have also met the same problem: fine-tuning YOLO-World on COCO without mask-refine leads to performance degradation. We're checking it. However, you can enable mask-refine=True for better results currently.

Does the validation mAP in your experiment also gradually decrease during fine-tuning?

In my experiment, not only does the mAP fail to reach the values in the paper, but the bigger problem is that it keeps getting worse as fine-tuning goes on.

@wondervictor
Collaborator

@shupinghu, w/ mask-refine, the fine-tuning results are normal and consistent with the results from the paper. However, removing mask-refine will produce abnormal results.

@shupinghu
Author

> @shupinghu, w/ mask-refine, the fine-tuning results are normal and consistent with the results from the paper. However, removing mask-refine will produce abnormal results.

OK, I will try this config file and report back with the experiment results. Does using "mask-refine" mean that the segmentation annotations are used to refine the bbox annotations?

@wondervictor
Collaborator

mask-refine provides box refinements and supports copypaste during training.
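In mmyolo-style configs, this corresponds roughly to a pipeline fragment like the one below: masks are loaded alongside boxes, CopyPaste operates on the instance masks, and the affine augmentation regenerates tight boxes from the transformed masks. The transform names exist in mmyolo, but the exact arguments vary across versions, so treat this as an illustrative sketch rather than the repo's actual config.

```python
# Illustrative mask-refine pipeline fragment (mmyolo-style config).
# Assumes the dataset provides instance masks.
pre_transform = [
    dict(type='LoadImageFromFile'),
    # Load masks as well, and (re)derive the boxes from the masks:
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True,
         mask2bbox=True),
]

train_pipeline = [
    *pre_transform,
    dict(type='Mosaic', img_scale=(640, 640), pre_transform=pre_transform),
    # CopyPaste needs instance masks, which is why it is only enabled
    # together with mask-refine:
    dict(type='YOLOv5CopyPaste', prob=0.3),
    dict(type='YOLOv5RandomAffine',
         scaling_ratio_range=(0.1, 2.0),
         border=(-320, -320),
         border_val=(114, 114, 114),
         # Regenerate tight boxes from the affine-transformed masks:
         use_mask_refine=True),
    # ... color, flip, and formatting transforms omitted ...
]
```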

@taofuyu
Contributor

taofuyu commented Mar 21, 2024

  1. I'm confused: in the YOLOv5RandomAffine transform, use_mask_refine is deprecated in your version of mmyolo, so shouldn't it have no influence on the result?

  2. Also, custom datasets usually don't have segmentation annotations. Does that mean fine-tuning on a custom dataset can never yield good results?

@wondervictor
Collaborator

@taofuyu

  1. Compared with the w/o mask-refine setting, the w/ mask-refine setting adds an extra CopyPaste augmentation.
  2. It should also work well on datasets without segmentation annotations; we need to find out what's wrong under this setting.

@taofuyu
Contributor

taofuyu commented Mar 21, 2024

@wondervictor
Thanks. For me, the problem is the decline in open-vocabulary ability after fine-tuning on a custom dataset.

@wondervictor
Collaborator

@taofuyu I'll add it in TODO and fix it soon.

@wondervictor
Collaborator

Hi @shupinghu and @taofuyu, I've uploaded the fine-tuned weights and logs for models with mask-refine=True in configs/finetune_coco.

@wondervictor added the bug (Something isn't working) and Working on it now! labels on Mar 21, 2024
@wondervictor wondervictor pinned this issue Mar 21, 2024
@shupinghu
Author

> @shupinghu, w/ mask-refine, the fine-tuning results are normal and consistent with the results from the paper. However, removing mask-refine will produce abnormal results.
>
> OK, I will try this config file and report back with the experiment results. Does using "mask-refine" mean that the segmentation annotations are used to refine the bbox annotations?

Using "mask-refine" is OK.

@wondervictor
Collaborator

Update: the performance is much worse without the CopyPaste augmentation.

@wondervictor
Collaborator

wondervictor commented Mar 22, 2024

[Failed Update] : using SGD, lr=1e-3, wd=0.0005 seems good for fine-tuning.

```python
optim_wrapper = dict(optimizer=dict(
    _delete_=True,
    type='SGD',
    lr=1e-3,
    momentum=0.937,
    nesterov=True,
    weight_decay=0.0005,
    batch_size_per_gpu=train_batch_size_per_gpu))
```

@wondervictor
Collaborator

Hi all (@taofuyu, @shupinghu): happy to share a milestone.
I've now tried a new setting with SGD and fewer augmentation epochs, and fine-tuning without mask-refine or copypaste works.

  1. Reduce mosaic epochs, increase normal epochs:

```python
max_epochs = 40  # Maximum training epochs
close_mosaic_epochs = 30
```

  2. Use the SGD optimizer, add weight decay for BN and bias:

```python
optim_wrapper = dict(
    optimizer=dict(_delete_=True,
                   type='SGD',
                   lr=1e-3,
                   momentum=0.937,
                   nesterov=True,
                   weight_decay=weight_decay,
                   batch_size_per_gpu=train_batch_size_per_gpu),
    paramwise_cfg=dict(custom_keys={'logit_scale': dict(weight_decay=0.0)}),
    constructor='YOLOWv5OptimizerConstructor')
```

Under this setting, YOLO-World-Large without mask-refine achieves 52.8 AP on COCO (better than YOLOv8), improving on the former, incorrect baseline (48.6). BTW, fine-tuning with mask-refine now achieves 53.9 AP.

This is a milestone but not the end; we are still working toward a better fine-tuning setting!

These updates will be pushed within a day.

@wondervictor
Collaborator

Hi all (@taofuyu, @shupinghu), we have preliminarily explored the errors about pre-training without mask-refine and fixed this issue. With mask-refine, YOLO-World performs significantly better than the paper version. Without mask-refine, YOLO-World still obtains competitive performance, e.g., YOLO-World-L obtains 52.8 AP on COCO.

You can find more details in configs/finetune_coco, especially for the version without mask-refine.

@JiayuanWang-JW

JiayuanWang-JW commented Mar 27, 2024

Hi @wondervictor, I ran into the same issue fine-tuning on my own dataset: mAP50 decreases after 15 epochs. Do you have any idea why? I have tried two fine-tuning config files, yolo_world_l_dual_vlpan_2e-4_80e_8gpus_finetune_coco.py and yolo_world_v2_l_vlpan_bn_2e-4_80e_8gpus_finetune_coco_womixup.py (I think you deleted this file in the current version). With both, mAP decreases after 15 epochs, although performance increases again in the last 10 epochs. I only modified img_scale to (1280, 960) and max_epochs to 100; the other parameters are the same as in your configs.

Unfortunately, my dataset does not include masks, so I cannot use mask-refine.

@wondervictor
Collaborator

Hi @JiayuanWang-JW, could you try out the latest config for your custom data? I've preliminarily fixed the above issues. The new config does not require mask-refine and obtains steady improvement. Hope for your feedback :)

@JiayuanWang-JW

> Hi @JiayuanWang-JW, could you try out the latest config for your custom data? I've preliminarily fixed the above issues. The new config does not require mask-refine and obtains steady improvement. Hope for your feedback :)

Thanks for your rapid response. I have finished the experiment on my own dataset. The results are much better than with the previous config. The current result is shown below:

[screenshot: results with the new config]

The previous result:

[screenshot: results with the old config]

If I continue fine-tuning for more epochs, should I change close_mosaic_epochs, and which other parameters (such as base_lr, weight_decay, etc.)? I want to try 100 epochs. The best mAP50 is 0.607, which I honestly think is not enough: some classical detection methods do much better than this, with far fewer parameters than YOLO-World-L. Do you have any idea how to further improve the performance?

@wondervictor
Collaborator

Hi @JiayuanWang-JW, is there any update?
I'm sorry for not getting back to you sooner. There are several ways to improve the fine-tuning performance:
(1) replace with a better pre-trained model;
(2) increase the input resolution to 800 or higher (e.g., 1280);
(3) increase the training epochs with a larger learning rate, and increase the mosaic epochs.
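As an illustration only, suggestions (2) and (3) could translate into config overrides like the sketch below. All concrete values here are assumptions for the sake of the example, not settings recommended in this thread.

```python
# Hypothetical fine-tuning overrides for suggestions (2) and (3).
img_scale = (1280, 1280)   # (2) raise input resolution (800+ or 1280)

max_epochs = 80            # (3) train longer...
close_mosaic_epochs = 20   # ...keeping mosaic on for 80 - 20 = 60 epochs
base_lr = 2e-3             # (3) larger learning rate for the longer run

train_cfg = dict(
    max_epochs=max_epochs,
    val_interval=5,
    # Validate every epoch once mosaic is closed:
    dynamic_intervals=[((max_epochs - close_mosaic_epochs), 1)])
```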

@JiayuanWang-JW

> Hi @JiayuanWang-JW, is there any update? I'm sorry for not getting back to you sooner. There are several ways to improve the fine-tuning performance: (1) replace with a better pre-trained model; (2) increase the input resolution to 800 or higher (1280); (3) increase the training epochs with a larger learning rate, and increase the mosaic epochs.

Hi @wondervictor, thanks for your reply. No, I didn't get better YOLO-World results on my dataset.

Actually, I already used the (1280, 960) input size and different learning-rate strategies (such as cosine annealing with different T_max values) to fine-tune the detection task. I tried 100 epochs and different mosaic epochs. However, the best performance only reached 60.7% AP50, after which it decreased. Two examples:

[screenshot: example run 1]
[screenshot: example run 2]

YOLOv8 achieved 75.1% on my dataset, so there is a large gap.

Anyway, I will continue to explore it and update here if I find any useful information.

@wondervictor wondervictor unpinned this issue May 23, 2024