training accuracy 3.41? #12

Open
liqidong opened this issue Mar 24, 2022 · 5 comments

Comments

@liqidong

I followed the training steps, but my final accuracy (NME) never got below 3.8. How can I reproduce the reported accuracy of 3.41?

@choyingw (Owner)

We chose the best checkpoint among all epochs. You can save more checkpoints during training and evaluate them to find the best one.
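A minimal sketch of that save-and-evaluate loop, assuming a standard PyTorch training script (SynergyNet is PyTorch-based); `train_one_epoch`, `evaluate_nme`, and the loader/model names are illustrative placeholders, not functions from this repo:

```python
import os
import torch

os.makedirs("checkpoints", exist_ok=True)
best_nme = float("inf")

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)   # placeholder: one training pass
    nme = evaluate_nme(model, aflw2000_loader)        # placeholder: AFLW2000-3D evaluation

    # Save every epoch so any checkpoint can be re-evaluated later.
    torch.save(
        {"epoch": epoch, "state_dict": model.state_dict(), "nme": nme},
        f"checkpoints/epoch_{epoch:03d}.pth",
    )

    # Keep a separate copy of the best checkpoint (lower NME is better).
    if nme < best_nme:
        best_nme = nme
        torch.save(model.state_dict(), "checkpoints/best.pth")
```

With all per-epoch checkpoints on disk, re-evaluating them on AFLW2000-3D afterwards is just a loop over the `checkpoints/` directory.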

@liqidong (Author)

I tried loading the pre-trained model and saving a checkpoint for each batch. The best accuracy I get is about 3.58, and it never reaches 3.41. I also loaded the best model (3.41) and continued training from it, but the accuracy usually decreases. How can I solve this problem?
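For context, "loading the best model for iterative training" means roughly the following (a sketch assuming a standard PyTorch setup; the path, checkpoint keys, and hyperparameters are illustrative, not taken from this repo):

```python
import torch

# Load the released best checkpoint and continue training from it.
# (Assumes the file holds either a plain state_dict or a dict with a
# "state_dict" key; adjust to the actual checkpoint format.)
ckpt = torch.load("pretrained/best.pth", map_location="cpu")
state_dict = ckpt["state_dict"] if "state_dict" in ckpt else ckpt
model.load_state_dict(state_dict)

# When continuing from an already-converged model, a much smaller learning
# rate is usually needed; otherwise the NME tends to get worse at first.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4,
                            momentum=0.9, weight_decay=5e-4)
```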

@vujadeyoon commented Oct 27, 2022

Dear all,

I am interested in SynergyNet.
I also encountered this issue and wonder whether @liqidong has solved it.

Following what @choyingw mentioned above, I saved a checkpoint every epoch and also kept the checkpoint with the best performance in terms of NME. However, I unfortunately could not reproduce the reported result (i.e., AFLW2000-3D All NME: 3.41); in my case, the lowest NME is 3.628.

I understand there may be some discrepancy between the original results and mine because of the random seed, although I set a fixed seed.
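(For reference, the fixed seeding I mean is along these lines; a minimal sketch for a standard PyTorch script, where `set_seed` is just an illustrative helper. Even with this, DataLoader workers and non-deterministic CUDA/cuDNN kernels can still cause some run-to-run variance.)

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix the common sources of randomness in a PyTorch training script."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning and non-deterministic kernels can still cause drift.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```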
Given the above, I have two questions:

  1. Is the code released in this repository the original code corresponding to the paper, Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry?
    In the paper, the experiments are conducted on four NVIDIA GTX 1080 Ti GPUs, whereas according to the README.md in this repository the experiments may have been performed on a single NVIDIA RTX 3090.
    Also, the performance of the released pretrained model is unfortunately not equal to that reported in the paper.

  2. As mentioned in question 1, I suspect that the uploaded checkpoint is not the one corresponding to the official paper.
    Could I get the official checkpoint that produces the same results as those reported in the paper?

Best regards,
Vujadeyoon

@choyingw (Owner) commented Nov 4, 2022

This is the official repo, and the released checkpoints correspond to the results we reported in the paper.

The code base we tested was run on newer available computing hardware, and the computing hardware shouldn't affect the results.

@starhiking commented Mar 27, 2023

In my reproduction, the whole training process is very unstable, and the NME also only gets to about 3.6 with the mobilenet_v2 backbone.

Could you please share the training log file for the 3.41 NME run? @choyingw
