
Both versions of the benchmark annotations are inaccurate; did I do anything wrong? #30

Open
ken881015 opened this issue Jun 27, 2023 · 4 comments

Comments

@ken881015

ken881015 commented Jun 27, 2023

  • Hello, I really appreciate the work you completed. SynergyNet is not only lightweight but also keeps acceptable accuracy on AFLW2000-3D.

  • Although I'm also one of the trainers who can't reproduce the 3.4% NME (my best is 3.674% after fixing the code problem mentioned here) on the original benchmark annotation, I keep trying to analyze which kinds of images the model fails on and to improve them through the training process.

  • So I sorted the NME of the 2000 images, made a grid of the 48 worst images with the model's alignment drawn on them, and show the ground-truth annotation beside each (in each pair, the left is the model output and the right is the ground truth, from the reannotated version).
    (image: grid_of_worst_alignment_0~47_re_v2_fix_loss_problem_80)

  • As you can see, some of the ground-truth annotations are not accurate. With indices starting from 1, pairs (1,1) and (1,2) clearly have annotations that are not worth using as a reference; but other pairs (e.g. (8,6)) show that the large NME is due to model performance rather than inaccurate annotation, i.e., there is still room for improvement.

  • Here is part of my code to post-process the files (roi_box, pts68, ...) you offer in the repo and visualize the alignment on an image. Regarding the inaccuracy problem: did I do anything wrong, or is there any opinion you can share with us? I would really appreciate it.

    # put this code in ./aflw2000_data/ and you can run it
    import matplotlib.pyplot as plt
    import numpy as np
    from pathlib import Path
    
    # you can select by image name
    img_name = "image02156.jpg"
    
    img = plt.imread("./AFLW2000-3D_crop/"+img_name)
    
    # choose the version of benchmark annotation (ori or re);
    # keep exactly one load uncommented -- loading both would silently
    # overwrite the first array with the second
    # pts68 = np.load("./eval/AFLW2000-3D.pts68.npy")              # original
    pts68 = np.load("./eval/AFLW2000-3D-Reannotated.pts68.npy")    # reannotated
    
    bbox = np.load("./eval/AFLW2000-3D_crop.roi_box.npy")
    fname_list = Path("./AFLW2000-3D_crop.list").read_text().strip().split('\n')
    
    # map landmarks from original-image coordinates into the 120x120 crop
    pts68[:,0,:] = (pts68[:,0,:] - bbox[:,[0]]) / (bbox[:,[2]] - bbox[:,[0]]) * 120
    pts68[:,1,:] = (pts68[:,1,:] - bbox[:,[1]]) / (bbox[:,[3]] - bbox[:,[1]]) * 120
    
    fig, ax = plt.subplots()
    
    # plot image
    ax.imshow(img)
    
    # scatter landmarks
    idx = fname_list.index(img_name)
    ax.scatter(pts68[idx,0,:], pts68[idx,1,:])
    
    fig.savefig("alignment.jpg")
    
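For clarity, here is roughly how I computed the per-image NME I sorted by (a sketch on my end: normalizing by the bounding-box size sqrt(w*h) is the usual AFLW2000-3D protocol, not something I took verbatim from this repo):

```python
import numpy as np

def nme(pred, gt, bbox):
    """Per-image NME: mean point-to-point error over 68 landmarks,
    normalized by bounding-box size sqrt(w * h).
    pred, gt: (2, 68) landmark arrays; bbox: [x1, y1, x2, y2]."""
    w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
    norm = np.sqrt(w * h)                      # normalization factor
    dists = np.linalg.norm(pred - gt, axis=0)  # per-landmark L2 error
    return dists.mean() / norm
```

Sorting `[nme(pred[i], pts68[i], bbox[i]) for i in range(len(fname_list))]` in descending order is how I picked the 48 worst images.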
@choyingw
Owner

choyingw commented Jun 27, 2023 via email

@ken881015
Author

  • Thanks for the reply.
  • The picture I showed is the reannotated version of the benchmark. However, what surprises me is that it still has some bad annotations (such as pairs (1,1) and (1,2)). So maybe I will look for a new face-alignment dataset for validation. Thank you for your suggestions.
  • Regarding occlusion, I am currently trying an augmentation technique that randomly erases parts of the input image, to improve the model's robustness to occlusions.
  • Lastly, while tuning parameters and fixing some issues in the code, I recorded the NME (Normalized Mean Error) throughout the training process. I have a few questions for you:
    (image: NME curves recorded during training)
    • Coincidentally, between epochs 25 and 50 almost all of the runs show a hill-shaped curve.
    • Surprisingly, the best NME in each run occurs after the milestones (default: 48, 64).
  • Do you think this phenomenon is explainable, or is it just heuristic?
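The random-erasing augmentation I mentioned can be sketched as follows (a minimal sketch; the function name and parameters are my own, not from the SynergyNet repo): pick a random rectangle and fill it with noise to simulate occlusion.

```python
import numpy as np

def random_erase(img, max_frac=0.3, rng=None):
    """Erase a random rectangle of an HxWxC uint8 image with uniform noise.
    max_frac bounds the erased height/width as a fraction of the image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    eh = int(rng.integers(1, max(2, int(h * max_frac))))  # erased height
    ew = int(rng.integers(1, max(2, int(w * max_frac))))  # erased width
    y = int(rng.integers(0, h - eh + 1))                  # top-left corner
    x = int(rng.integers(0, w - ew + 1))
    out = img.copy()
    out[y:y+eh, x:x+ew] = rng.integers(0, 256, (eh, ew) + img.shape[2:],
                                       dtype=img.dtype)
    return out
```

Applying this to each 120x120 crop with some probability during training is the idea; whether noise fill or constant fill works better here is something I still need to test.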

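For reference, by "milestones" I mean a step-decay schedule in the style of PyTorch's MultiStepLR: the learning rate is multiplied by gamma at each milestone epoch. A plain-Python sketch (the base LR and gamma values here are placeholders, not the repo's defaults):

```python
def lr_at(epoch, base_lr=0.08, gamma=0.1, milestones=(48, 64)):
    """Learning rate at a given epoch under a step-decay schedule:
    multiply by gamma once per milestone already passed."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)
```

So the LR drops by 10x at epochs 48 and 64, which is why I found it surprising that the best NME consistently shows up only after those drops.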
@choyingw
Owner

choyingw commented Jun 28, 2023 via email

@choyingw
Owner

choyingw commented Jun 28, 2023 via email
