
Question about cross-validation #13

Open
beautifulnight opened this issue May 28, 2020 · 2 comments
@beautifulnight

Sorry to bother you!
I am wondering: did you train an MLP for each fold?
If there were 10 MLP models for 10 folds, how could the models fit well on the entire dataset?
I am really confused about it. Looking forward to your reply, thank you!

@anibalsolon
Member

Hi,

for each fold, we trained the MLP on [dataset - fold] and computed the accuracy on [fold].
We reported the average accuracy across the 10 folds. I hope that makes things clearer.
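
For anyone landing here later, a minimal sketch of that scheme (assuming scikit-learn and toy data in place of the real dataset; the variable names are illustrative, not the repository's code):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# Toy data standing in for the real dataset (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(int)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_accuracies = []
for train_idx, test_idx in kf.split(X):
    # A fresh MLP is created for every fold: the 10 models are independent.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])                          # train on [dataset - fold]
    fold_accuracies.append(model.score(X[test_idx], y[test_idx]))  # accuracy on [fold]

# The reported number is the mean of the 10 per-fold accuracies.
print(np.mean(fold_accuracies))
```

No single model is ever fit on the entire dataset; the averaged per-fold accuracy is the estimate of how well this training procedure generalizes.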

@beautifulnight
Author

Thanks for your quick reply.
I'm a newbie to machine learning, so maybe I am wrong.
I read the code and found that you pre-trained two AEs in each individual fold and fine-tuned the MLP in each individual fold. It seems like you trained 10 independent MLPs and reported the mean accuracy of those 10 MLPs as the average accuracy.
My question is: how can I pre-train the AEs and fine-tune the model using cross-validation? I mean, shouldn't the parameters of a single model be updated continually across the folds?
Thanks again for your time.
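
In case it helps other readers, a minimal per-fold pre-train/fine-tune sketch (again scikit-learn on toy data, with a single AE standing in for the two stacked AEs; this is not the repository's actual code). The point is that the AE and the classifier are both re-initialized inside every fold, so nothing is "updated constantly" across folds:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor, MLPClassifier

# Toy data standing in for the real dataset (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X.sum(axis=1) > 0).astype(int)

def encode(ae, X):
    # Forward pass through the AE's (ReLU) hidden layer to get the learned features.
    return np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])

accs = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # Pre-train an autoencoder on this fold's training split only, so no
    # information from the held-out fold leaks into pre-training.
    ae = MLPRegressor(hidden_layer_sizes=(6,), max_iter=1000, random_state=0)
    ae.fit(X[train_idx], X[train_idx])
    # Fine-tune a classifier on the encoded training features.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    clf.fit(encode(ae, X[train_idx]), y[train_idx])
    accs.append(clf.score(encode(ae, X[test_idx]), y[test_idx]))

print(np.mean(accs))
```

Carrying parameters over from one fold to the next would let each model indirectly see its own test fold during earlier training rounds, which defeats the purpose of cross-validation.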
