This is a forked modification of StrongSORT-YOLO.
This repository contains a highly configurable two-stage tracker that adapts to different deployment scenarios. Detections generated by YOLOv7 are passed to StrongSORT, which combines motion and appearance information (based on OSNet) to track the objects. It can track any object that your YOLOv7 model was trained to detect. The algorithm uses a forked version of YOLOv7, since the original is no longer maintained.
- Clone the repository recursively:

  ```shell
  git clone --recurse-submodules https://github.com/nelioasousa/strongsort-yolo.git
  ```

  If you already cloned and forgot to use `--recurse-submodules`, you can run `git submodule update --init`.
- Make sure that you fulfill all the requirements: Python 3.8 or 3.9; torch (>=1.7.0, !=1.12.0) and torchvision (>=0.8.1, !=0.13.0) with a compatible CUDA driver; and all other required dependencies installed. Check `requirements.txt` for more information.
Tracking can be run on any video compatible with OpenCV's `VideoCapture` class. Image sequences are also supported, as long as the frames can be opened with `cv2.imread()`.
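One pitfall with image sequences: plain lexicographic sorting puts `frame10.jpg` before `frame2.jpg`. A small stdlib-only sketch of a natural sort key (the helper name is illustrative, not part of this repo) that orders frame files numerically:

```python
import re

def natural_key(name: str):
    """Split a filename into text and integer chunks so 'frame2' sorts before 'frame10'."""
    return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", name)]

frames = ["frame10.jpg", "frame2.jpg", "frame1.jpg"]
print(sorted(frames, key=natural_key))  # ['frame1.jpg', 'frame2.jpg', 'frame10.jpg']
```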
There is a clear trade-off between model inference speed and accuracy.
```shell
$ python track.py ... --yolo-weights weights/yolov7-tiny.pt --img 640
                                             yolov7.pt          1280
                                             yolov7x.pt          ...
                                             yolov7-w6.pt
                                             yolov7-e6.pt
                                             yolov7-d6.pt
                                             yolov7-e6e.pt
                                             ...
```
By default the tracker tracks all classes. If you want to track a subset of the classes, add their corresponding indices after the `--classes` flag. The indexing is zero-based.

```shell
python track.py ... --classes 16 17  # tracks classes with ids 16 and 17
```
To draw the trajectory lines of the objects in the output video, pass both the `--save-vid` and `--draw-trajectory` flags. Without `--draw-trajectory`, the output video will contain only the tracking bounding boxes. Customize the bounding-box labels with `--hide-labels`, `--hide-conf`, and `--hide-class`.

```shell
$ python track.py --source test.mp4 --yolo-weights weights/*.pt --save-vid --draw-trajectory
```
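Conceptually, drawing trajectories amounts to buffering the recent box centers for each track ID and connecting them frame by frame. A rough stdlib-only sketch of that bookkeeping (names and the buffer size are illustrative, not the repo's actual code):

```python
from collections import defaultdict, deque

# Keep at most 30 recent center points per track id (illustrative buffer size).
trajectories = defaultdict(lambda: deque(maxlen=30))

def update_trajectory(track_id: int, bbox: tuple) -> list:
    """Append the bbox center (x1, y1, x2, y2 -> cx, cy) and return the polyline so far."""
    x1, y1, x2, y2 = bbox
    trajectories[track_id].append(((x1 + x2) // 2, (y1 + y2) // 2))
    return list(trajectories[track_id])

# Two updates for track 7 give a two-point polyline, e.g. for cv2.polylines.
update_trajectory(7, (0, 0, 10, 10))
print(update_trajectory(7, (10, 10, 20, 20)))  # [(5, 5), (15, 15)]
```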
MOT compliant results can be saved to `project/name/labels/example.txt` by passing both the `--save-txt` and `--mot-format` flags.

```shell
python track.py ... --source example.mp4 --project results --name exp1 --save-txt --mot-format
```

The above snippet will save the MOT compliant annotations into `results/exp1/labels/example.txt`.
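Assuming the saved file follows the standard MOTChallenge layout (one comma-separated `frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z` record per line; check your own output to confirm), a small parser sketch for downstream processing:

```python
from typing import NamedTuple

class MotRow(NamedTuple):
    frame: int
    track_id: int
    bb_left: float
    bb_top: float
    bb_width: float
    bb_height: float
    conf: float

def parse_mot_line(line: str) -> MotRow:
    """Parse one comma-separated MOT line, ignoring the trailing x, y, z columns."""
    parts = line.strip().split(",")
    return MotRow(int(parts[0]), int(parts[1]), *map(float, parts[2:7]))

row = parse_mot_line("1,3,794.27,247.59,71.245,174.88,0.9,-1,-1,-1")
print(row.frame, row.track_id, row.conf)  # 1 3 0.9
```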
This project was only possible thanks to the efforts of the following:
```bibtex
@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}

@article{du2023strongsort,
  title={{StrongSORT}: Make {DeepSORT} great again},
  author={Du, Yunhao and Zhao, Zhicheng and Song, Yang and Zhao, Yanyun and Su, Fei and Gong, Tao and Meng, Hongying},
  journal={IEEE Transactions on Multimedia},
  year={2023},
  publisher={IEEE}
}

@article{torchreid,
  title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
  author={Zhou, Kaiyang and Xiang, Tao},
  journal={arXiv preprint arXiv:1910.10093},
  year={2019}
}

@misc{luiten2020trackeval,
  author={Jonathon Luiten and Arne Hoffhues},
  title={TrackEval},
  howpublished={https://github.com/JonathonLuiten/TrackEval},
  year={2020}
}
```
Others
- https://github.com/bharath5673/StrongSORT-YOLO
- https://github.com/AlexeyAB/darknet
- https://github.com/WongKinYiu/yolor
- https://github.com/WongKinYiu/PyTorch_YOLOv4
- https://github.com/WongKinYiu/ScaledYOLOv4
- https://github.com/Megvii-BaseDetection/YOLOX
- https://github.com/ultralytics/yolov3
- https://github.com/ultralytics/yolov5
- https://github.com/DingXiaoH/RepVGG
- https://github.com/JUGGHM/OREPA_CVPR2022
- https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose