
End-to-End Trainable Multi-Instance Pose Estimation with Transformers

POET

A PyTorch implementation of POET for end-to-end multi-instance pose estimation.

Getting Started

COCO Dataset

Download the COCO dataset and arrange it in the folder structure shown below (a small sanity-check script follows the layout).

+ data 
    + annotations   
        - 1.xml
        - 2.xml
        .
        .
    + train2017 
        - 1.jpg
        - 2.jpg
        .
        .
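
Before launching training, it can help to confirm the data folders are where the training script will look for them. The sketch below is a minimal check that only assumes the directory names shown above (data/annotations and data/train2017 relative to the repository root); adjust the paths if your data lives elsewhere.

# check_data_layout.py -- minimal sanity check for the expected COCO layout.
from pathlib import Path

def check_layout(root: str = "./data") -> None:
    root_dir = Path(root)
    for sub in ("annotations", "train2017"):
        folder = root_dir / sub
        if not folder.is_dir():
            raise FileNotFoundError(f"Missing folder: {folder}")
        n_files = sum(1 for p in folder.iterdir() if p.is_file())
        print(f"{folder}: {n_files} files")

if __name__ == "__main__":
    check_layout()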

Train

Once the dataset is in place, start training:

python -m torch.distributed.launch --nproc_per_node=<number-of-gpus> --use_env main.py --coco_path ./data/ --batch_size <batch-size>

I trained on 2 Tesla V100 GPUs with a batch size of 6:

python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --coco_path ./data/ --batch_size 6

Resume from a checkpoint

python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --coco_path ./data/ --batch_size <batch-size> --resume ./snapshots/model.pth

Inference

python inference.py
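
The README does not document what inference.py expects, so the sketch below only illustrates a typical single-image flow for a DETR-style PyTorch model: build the network, load a checkpoint like the one passed to --resume, preprocess an image, and run a forward pass. The build_poet_model placeholder, the "model" checkpoint key, and the file names are assumptions for illustration, not this repository's confirmed API.

import torch
import torchvision.transforms as T
from PIL import Image

def build_poet_model():
    # Placeholder: construct the network exactly as main.py does.
    # The README does not document the constructor, so it is left to the reader.
    raise NotImplementedError("instantiate the POET model as in main.py")

# Standard ImageNet preprocessing; the repository may use different sizes/statistics.
preprocess = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = build_poet_model()
checkpoint = torch.load("./snapshots/model.pth", map_location="cpu")
model.load_state_dict(checkpoint["model"])   # assumes a DETR-style {"model": state_dict} checkpoint
model.to(device).eval()

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0).to(device)
with torch.no_grad():
    outputs = model(image)                   # per-query keypoint and class predictions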

To Do

Evaluation script

References

End-to-End Trainable Multi-Instance Pose Estimation with Transformers, Lucas Stoffl, Maxime Vidal, Alexander Mathis, 2021.

License

This project is licensed under the Apache License.
