
Best pose estimation model #14

Open · ghost opened this issue Apr 18, 2022 · 6 comments

ghost commented Apr 18, 2022

Hi @choyingw,
I am trying to use your pose estimation model, the one that reproduces the results in the paper (https://drive.google.com/file/d/13LagnHnPvBjWoQwkR3p7egYC6_MVtmG0/view?usp=sharing), but I get the same fixed pose angles predicted for images with different poses.
With the regular model you published (https://drive.google.com/file/d/1BVHbiLTfX6iTeJcNbh-jgHjWDoemfrzG/view?usp=sharing) this doesn't happen and I get more plausible results (though not SOTA for pose estimation).

I was wondering whether this happens on your end as well, and whether there is a problem with the model you published?

ghost (Author) commented Apr 19, 2022

@choyingw
I also took a look at the state_dict of "best_pose.pth.tar" and found keys with an "IGM" prefix (instead of "I2P"). After cleaning up this inconsistency and comparing the rest of the state dict against the currently published PyTorch model class, we found keys that do not appear in what you published, for example:
'.classifier_pitch.1.bias', '.classifier_pitch.1.weight', '.classifier_roll.1.bias', '.classifier_roll.1.weight', '.classifier_scale.1.bias', '.classifier_scale.1.weight', '.classifier_texture.1.bias', '.classifier_texture.1.weight', '.classifier_trans.1.bias', '.classifier_trans.1.weight', '.classifier_yaw.1.bias', '.classifier_yaw.1.weight'
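
For reference, a minimal sketch of the kind of inspection described above (it assumes the checkpoint nests its weights under a 'state_dict' key, which is common for .pth.tar checkpoints; the prefix rename is illustrative):

```python
import torch

# Load the checkpoint on the CPU; the nesting under 'state_dict'
# is an assumption about how this checkpoint was saved.
ckpt = torch.load('best_pose.pth.tar', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)

# Print every key so prefix mismatches ('IGM' vs. 'I2P') are visible.
for key in sorted(state_dict):
    print(key, tuple(state_dict[key].shape))

# Rename the 'IGM' prefix to 'I2P' before loading into the model class.
renamed = {k.replace('IGM', 'I2P', 1): v for k, v in state_dict.items()}
```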

We believe that the model state dict you published for pose estimation doesn't match the PyTorch model class.

Can you please clarify what to do?

choyingw (Owner) commented

Oops. I uploaded the wrong model. I've updated the readme and the link. Please check.

ghost (Author) commented Apr 21, 2022

@choyingw Thanks for your quick response. I still get results that differ from what you reported on AFLW2000 (Yaw: 5.537, Pitch: 8.978, Roll: 6.132 || MAE: 6.882).

My implementation is similar to https://github.com/vitoralbiero/img2pose/blob/main/evaluation/jupyter_notebooks/aflw_2000_3d_evaluation.ipynb:
load image and pose (from the .mat file) -> face detection -> bbox + margin (as you did) -> crop -> extract pose
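
A rough sketch of that pipeline, for concreteness (detect_face() and estimate_pose() are placeholders for whatever detector and pose model are in use, and the margin value is illustrative):

```python
import cv2
import numpy as np
import scipy.io as sio

def eval_image(img_path, mat_path, margin=0.2):
    # AFLW2000-3D .mat files store the ground-truth pose in 'Pose_Para';
    # the first three entries are pitch, yaw, roll in radians.
    pose_gt = sio.loadmat(mat_path)['Pose_Para'][0][:3]

    img = cv2.imread(img_path)
    # detect_face() and estimate_pose() are placeholders for the
    # detector and pose model being evaluated.
    x1, y1, x2, y2 = detect_face(img)
    # Expand the detected box by a margin before cropping.
    w, h = x2 - x1, y2 - y1
    crop = img[max(0, int(y1 - margin * h)):int(y2 + margin * h),
               max(0, int(x1 - margin * w)):int(x2 + margin * w)]

    pose_pred = estimate_pose(crop)
    # Per-angle absolute error in degrees.
    return np.abs(np.degrees(pose_gt) - np.asarray(pose_pred))
```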

I tried to follow your evaluation code, but it references several files you didn't share, for example './aflw2000_data/AFLW2000-3D_crop', './aflw2000_data/AFLW2000-3D_crop.list', './aflw2000_data/eval/ALFW2000-3D_pose_3ANG_excl.npy', and './aflw2000_data/eval/ALFW2000-3D_pose_3ANG_skip.npy'.

Could you please share how you created the 'aflw2000_data' folder (ideally the folder itself, via Google Drive), or alternatively publish full evaluation code that estimates pose from the original data? That would be great.

choyingw (Owner) commented Apr 21, 2022

AFLW2000-3D is shared via the link in the README (Single Image Inference Demo, Step 4: Download the data).

Run python benchmark.py -w "pathToPoseModel" and you'll get the reported numbers.

bigdelys commented

@choyingw Even with the latest "best_pose.pth.tar", I am still getting a constant pose for every input image:
[ 0.35037747 -4.45007563 -0.32709743]
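
Given the key mismatch reported earlier in this thread, one quick sanity check is whether the checkpoint keys actually match the model (a sketch; 'model' and 'state_dict' stand in for the instantiated model class and the loaded checkpoint weights):

```python
# Any keys reported below were silently skipped by the non-strict load
# and thus never loaded from the checkpoint.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
```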

choyingw (Owner) commented Feb 4, 2023

@bigdelys Hi, I can't reproduce this issue on my end. When I print out the poses on AFLW2000-3D, the head pose angles differ across images.

[Screenshot from 2023-02-03 20-44-04: printed head pose angles on AFLW2000-3D]
