Some errors happened in the evaluate mode in test.py #11

Answered by Dylan-H-Wang
BugMaker2002 asked this question in Q&A
The provided weights are SSL pre-trained weights, and they are expected to be fine-tuned on a downstream dataset before evaluation.
The '--evaluate' flag is used for inference, which means the weights passed to '--pretrained' should come from a fine-tuned model.
In short, first run test.py with the provided pre-trained weights and without the '--evaluate' flag. Then run test.py with '--evaluate', using the output weights from that first run.
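The two-step procedure above can be sketched as the following commands. This is a hedged sketch: the weight filenames and the output path are hypothetical placeholders, and test.py may take additional required arguments — check the repository's README for the exact invocation.

```shell
# Step 1: fine-tune starting from the provided SSL pre-trained weights
# (no --evaluate flag). This run produces fine-tuned weights as output.
# "ssl_pretrained.pth" is a placeholder for the provided weight file.
python test.py --pretrained ssl_pretrained.pth

# Step 2: run inference with --evaluate, pointing --pretrained at the
# fine-tuned weights written by step 1 ("output/finetuned.pth" is a
# placeholder for wherever the first run saved its checkpoint).
python test.py --evaluate --pretrained output/finetuned.pth
```

The key point is that '--pretrained' means different things in the two runs: in step 1 it receives the SSL checkpoint to start fine-tuning from, while in step 2 it receives the fine-tuned checkpoint to evaluate.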

Answer selected by Dylan-H-Wang