
Training from scratch on COCO with sparse_inst_r50vd_dcn_giam_aug.yaml: AP is more than ten points below the reported 37.9 #108

Open
116022017144 opened this issue Feb 16, 2023 · 2 comments

Comments

@116022017144

Without fp16, batch_size=32, BASE_LR: 0.00005, STEPS: (210000, 250000), MAX_ITER: 270000, WEIGHT_DECAY: 0.05; all other config options were left unchanged. After training for 212559 iterations, AP is only 12.6. What could be going wrong here? Does fp16 have a large effect on the final accuracy?
[02/16 06:24:14] d2.evaluation.fast_eval_api INFO: Evaluate annotation type segm
[02/16 06:24:34] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 19.81 seconds.
[02/16 06:24:34] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
[02/16 06:24:36] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 2.17 seconds.
[02/16 06:24:36] d2.evaluation.coco_evaluation INFO: Evaluation results for segm:

| AP | AP50 | AP75 | APs | APm | APl |
| --- | --- | --- | --- | --- | --- |
| 12.604 | 22.949 | 12.255 | 3.839 | 12.047 | 19.988 |
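
For anyone trying to reproduce this setup, the overrides described in the report would correspond to a detectron2-style YAML fragment roughly like the following. This is only a sketch: the key paths are assumed from detectron2's standard solver config conventions, not copied from the SparseInst repository's actual config file.

```yaml
# Sketch of the reported overrides (detectron2-style solver config;
# key paths assumed, not taken from the repo's sparse_inst_r50vd_dcn_giam_aug.yaml).
SOLVER:
  IMS_PER_BATCH: 32          # batch_size=32
  BASE_LR: 0.00005
  STEPS: (210000, 250000)    # LR decay milestones
  MAX_ITER: 270000
  WEIGHT_DECAY: 0.05
```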
@wondervictor
Member

Hi @116022017144, FP16 is not crucial to the final performance. Could you share the log file with me?

@lsm140

lsm140 commented Aug 11, 2023

> Without fp16, batch_size=32

batch_size=32 without fp16? That's impressive! Without fp16 I can only fit bs=4, and bs=8 with it enabled. Even on a 3090, I can still only reach bs=8 without fp16.
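
For anyone hitting the same memory limit, mixed precision can usually be switched on from the command line in detectron2-based repos via the `SOLVER.AMP.ENABLED` flag. This is a sketch only: the entry-point name `train_net.py` and the config path are assumptions based on a common detectron2 project layout, not confirmed from this thread.

```shell
# Hypothetical invocation: enable fp16 (AMP) and raise the per-GPU batch size.
# train_net.py and the config path are assumed, not taken from this thread.
python train_net.py \
    --config-file configs/sparse_inst_r50vd_dcn_giam_aug.yaml \
    --num-gpus 1 \
    SOLVER.AMP.ENABLED True \
    SOLVER.IMS_PER_BATCH 8
```

Note that if you change `IMS_PER_BATCH` from the config's default, the learning rate and schedule typically need to be rescaled accordingly (the linear scaling rule), which may also be relevant to the low-AP result reported above.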
