
Bad results when evaluating pretrained checkpoints #47

Open
huangzhengxiang opened this issue Oct 3, 2022 · 1 comment

@huangzhengxiang

Hi, thanks for your great work.
I followed the instructions in README.md to extract the nuScenes dataset, then ran evaluate.py with the official pretrained checkpoint (https://github.com/wayveai/fiery/releases/download/v1.0/fiery.ckpt), but got the following output:
iou: 53.5 & 28.6
pq:  39.8 & 18.0
sq:  69.4 & 66.3
rq:  57.4 & 27.1
Is there something wrong? These numbers are much lower than the results you reported.
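For reference, the steps I followed were roughly the ones below. This is a hedged sketch: the `--checkpoint` and `--dataroot` flag names follow my reading of the fiery README and should be verified against your copy of the repo; the download line is commented out, and the script only prints the command it would run.

```shell
#!/bin/sh
# URL of the official pretrained checkpoint (from the fiery release page).
CKPT_URL="https://github.com/wayveai/fiery/releases/download/v1.0/fiery.ckpt"

# Download the pretrained weights (uncomment to actually fetch them).
# wget -O fiery.ckpt "$CKPT_URL"

# Evaluation command; flag names are assumptions based on the README,
# and ./nuscenes is a placeholder for your extracted dataset root.
CMD="python evaluate.py --checkpoint fiery.ckpt --dataroot ./nuscenes"

# Dry run: print the command instead of executing it.
echo "$CMD"
```

Running the printed command against the extracted nuScenes dataset is what produced the metrics above on my machine.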

@gaohao-dev


I'm running into the same problem. Have you solved it?
