
I trained a model to eval another category .... #16

Open
simo-an opened this issue Dec 16, 2021 · 1 comment

Comments


simo-an commented Dec 16, 2021

I trained a model on the wood category using the command:

python run_training.py --head_layer 2 --type wood

Then, in the evaluation process, I used the model to evaluate the bottle category, and I got a surprisingly good result:

AUC: 0.9949454642192073

Is there any problem here?


I just used the following hard-coded call to evaluate (in eval.py line 258):

roc_auc = eval_model(model_name, 'bottle', save_plots=args.save_plots, device=device, head_layer=args.head_layer, density=density())

Thanks!

Runinho (Owner) commented Dec 21, 2021

I just used the following hard-coded call to evaluate (in eval.py line 258):

roc_auc = eval_model(model_name, 'bottle', save_plots=args.save_plots, device=device, head_layer=args.head_layer, density=density())

Thanks!

You hard-coded bottle as the second argument, so I suspect you are getting the AUC for that class.

Instead of hardcoding, you could also use the eval script:

python eval.py --head_layer 2 --type wood
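
For reference, a minimal sketch of what the call around eval.py line 258 could look like without the hard-coded category, assuming the --type flag is parsed into args.type (the exact attribute name in eval.py may differ):

# Sketch only: take the category from the --type flag instead of a hard-coded string.
# Assumes args.type holds the value passed via --type (e.g. "wood").
roc_auc = eval_model(model_name, args.type,
                     save_plots=args.save_plots,
                     device=device,
                     head_layer=args.head_layer,
                     density=density())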
