
Cannot reproduce finetuning results on FMOW-RGB #6

techmn opened this issue Oct 23, 2023 · 1 comment

@techmn

techmn commented Oct 23, 2023

Hi,
Thanks for sharing the interesting work.
I tried to finetune the model on the FMOW-RGB dataset using the 800-epoch pretrained weights provided in the repository. I used the default hyperparameters in main_linprobe.py and set
args.finetune=True
args.nb_classes=62
args.eval_path=./fmow-rgb/val
args.eval_dataset=fmow
args.batch_size=16

and, as mentioned in the paper,
args.blr=5e-3
args.weight_decay=5e-3

However, I am not able to reproduce the results reported in the paper; the best accuracy I reach is 62%.
Using blr=5e-3 gives NaN loss, so I reduced its value until training ran without issues.
Surprisingly, the training loss decreases while the test loss increases, and after the 2nd epoch of finetuning the accuracy started dropping from 62% to 58%.
Could you please share the exact hyperparameter values and settings? And what could be the reason for the finetuning results not reproducing?
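
For reference, MAE-style finetuning/linear-probe scripts usually derive the absolute learning rate from the base learning rate and the effective batch size, so the same blr behaves differently depending on GPU count and per-GPU batch size. Below is a minimal sketch of that convention, assuming ScaleMAE follows the upstream MAE recipe (an assumption, not verified against this repository's code); batch_size, accum_iter, and world_size are taken from the log below.

# Sketch of the MAE-style LR scaling convention (assumed, not confirmed for ScaleMAE).
batch_size = 16    # per-GPU batch size from the log below
accum_iter = 1     # gradient accumulation steps from the log below
world_size = 8     # number of GPUs from the log below

eff_batch_size = batch_size * accum_iter * world_size   # 128

blr = 5e-3                           # base LR reported in the paper
lr = blr * eff_batch_size / 256      # 2.5e-3 absolute LR actually applied

print(f"effective batch size: {eff_batch_size}, absolute lr: {lr:.2e}")

Under this convention, the blr from the paper translates into a fairly large absolute learning rate at my batch-size/GPU setup, which may be related to the NaN loss.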

Here are the parameter settings from the log file:
Namespace(accum_iter=1,
base_resolution=2.5,
batch_size=16,
blr=7e-05,
checkpoint_interval=10,
checkpoint_path='scalemae-vitlarge-800.pth',
config='./config/fmow.yaml',
device='cuda',
dist_backend='nccl',
dist_eval=False,
dist_on_itp=False,
dist_url='env://',
distributed=True,
drop_path=0.2,
epochs=50,
eval=False,
eval_base_resolution=1.0,
eval_dataset='fmow',
eval_gsd=False,
eval_only=False,
eval_path='./fmow-rgb/val',
eval_reference_resolution=224,
eval_scale=224,
finetune=True,
global_pool=False,
gpu=0,
input_size=224,
layer_decay=0.75,
linear_layer_scale=1.0,
local_rank=0,
log_dir='./finetune_dir',
lr=None,
mask_ratio=0.75,
min_lr=0.0,
model='vit_large_patch16',
name='',
nb_classes=62,
no_autoresume=False,
norm_pix_loss=False,
num_workers=10,
output_dir='./finetune_dir',
pin_mem=True,
print_freq=20,
rank=0,
restart=False,
resume='',
scale_max=1.0,
scale_min=0.5,
seed=0,
source_size=[224],
start_epoch=0,
target_size=[224],
wandb_id=None,
warmup_epochs=0,
weight_decay=0.005,
world_size=8)

@Jerry-jing

Hi, you can try changing eval_base_resolution to 2.5; it may give unexpectedly good results.
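
For context, the log above shows base_resolution=2.5 but eval_base_resolution=1.0, so this suggestion amounts to matching the eval-time resolution to the training one. A minimal sketch of that change, assuming the script exposes these values as argparse flags mirroring the Namespace keys above (an assumption, not verified against main_linprobe.py):

# Hypothetical override: flag names are assumed to mirror the Namespace keys above.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--base_resolution", type=float, default=2.5)
parser.add_argument("--eval_base_resolution", type=float, default=1.0)

# Passing --eval_base_resolution 2.5 on the command line aligns the two values.
args = parser.parse_args(["--eval_base_resolution", "2.5"])
assert args.eval_base_resolution == args.base_resolution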
