This fork's `docker-onnx` branch is an experimental environment for trying out Docker-based execution and inference with onnxruntime.
## Inference test

```shell
git clone https://github.com/PINTO0309/SMILEtrack && cd SMILEtrack
docker pull docker.io/pinto0309/smiletrack:latest
docker run --rm -it --gpus all \
-v `pwd`:/workdir \
docker.io/pinto0309/smiletrack:latest

cd BoT-SORT
# Tracking test
# - Weights are automatically downloaded at runtime.
# - Image data sets for verification are not automatically downloaded.
python tools/track.py \
/workdir/BoT-SORT/MOT17Det/train/MOT17-04/img1 \
--default-parameters \
--with-reid \
--benchmark MOT17 \
--eval test \
--fp16 \
--fuse \
--save-frames
```
output.mp4
## ONNX export

Edit `BoT-SORT/fast_reid/fastreid/modeling/meta_arch/baseline.py` so that `preprocess_image` is called with `onnx_export=True`:

```python
def forward(self, batched_inputs):
    images = self.preprocess_image(batched_inputs, onnx_export=True)
```
## Similarity validation

| Comparison patterns (image.1 vs image.2) | Comparison patterns (image.1 vs image.2) |
|---|---|
| 30 vs 31 ⬇️ | 1 vs 2 ⏫ |
| 30 vs 1 ⬇️ | 1 vs 3 ⏫ |
| 31 vs 2 ⬇️ | 1 vs 4 ⏫ |

```shell
python validation.py
```
| Model | 30 vs 31 ⬇️ | 30 vs 1 ⬇️ | 31 vs 2 ⬇️ | 1 vs 2 ⏫ | 1 vs 3 ⏫ | 1 vs 4 ⏫ |
|---|---|---|---|---|---|---|
| mot17_sbs_S50_NMx3x256x128_post | 0.148 | 0.046 | 0.219 | 0.359 | 0.611 | 0.543 |
| mot17_sbs_S50_NMx3x288x128_post | 0.154 | 0.036 | 0.223 | 0.375 | 0.643 | 0.562 |
| mot17_sbs_S50_NMx3x320x128_post | 0.093 | 0.002 | 0.180 | 0.386 | 0.635 | 0.631 |
| mot17_sbs_S50_NMx3x352x128_post | 0.057 | 0.000 | 0.153 | 0.366 | 0.642 | 0.649 |
| mot17_sbs_S50_NMx3x384x128_post | 0.044 | 0.000 | 0.139 | 0.359 | 0.629 | 0.686 |
| mot20_sbs_S50_NMx3x256x128_post | 0.406 | 0.318 | 0.309 | 0.538 | 0.727 | 0.778 |
| mot20_sbs_S50_NMx3x288x128_post | 0.393 | 0.288 | 0.324 | 0.544 | 0.724 | 0.770 |
| mot20_sbs_S50_NMx3x320x128_post | 0.372 | 0.253 | 0.293 | 0.543 | 0.701 | 0.775 |
| mot20_sbs_S50_NMx3x352x128_post | 0.351 | 0.243 | 0.301 | 0.578 | 0.695 | 0.756 |
| mot20_sbs_S50_NMx3x384x128_post | 0.325 | 0.226 | 0.289 | 0.559 | 0.698 | 0.757 |
| osnet_x1_0_msmt17_combineall_256x128_amsgrad_NMx3x256x128 (OSNet) | 0.341 | 0.285 | 0.265 | 0.476 | 0.686 | 0.504 |
| resnet50_msmt17_combineall_256x128_amsgrad_NMx3x256x128 | 0.418 | 0.373 | 0.329 | 0.593 | 0.810 | 0.752 |
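The repository's `validation.py` is not reproduced here, but a similarity check of this kind typically compares ReID embeddings with cosine similarity: same-person pairs (⏫, e.g. 1 vs 2) should score higher than different-person pairs (⬇️, e.g. 30 vs 31). A minimal sketch with synthetic embeddings (the vectors and the 128-dimensional size are stand-ins, not real model outputs):

```python
# Sketch of the metric behind a ReID similarity validation: cosine similarity
# between embedding vectors. The embeddings here are synthetic stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
base = rng.normal(size=128)                        # embedding of image 1
same_person = base + 0.1 * rng.normal(size=128)    # slightly perturbed view
other_person = rng.normal(size=128)                # unrelated identity

hi = cosine_similarity(base, same_person)
lo = cosine_similarity(base, other_person)
print(f"same person: {hi:.3f}, different person: {lo:.3f}")
```

A larger gap between the ⏫ and ⬇️ scores indicates more discriminative embeddings, which is what the table above compares across models.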
This code is based on the implementations of ByteTrack and BoT-SORT.
SMILEtrack: SiMIlarity LEarning for Multiple Object Tracking
The preprint will appear soon.

The SMILEtrack code is based on ByteTrack and BoT-SORT; visit their installation guides for more setup options.
PRBNet MOT17 weight link
PRBNet MOT20 weight link
SLM weight link
Download MOT17 from the official website and place it in the following structure:
```
<datasets_dir>
│
├── MOT17
│   ├── train
│   └── test
│
├── crowdhuman
│   ├── Crowdhuman_train
│   ├── Crowdhuman_val
│   ├── annotation_train.odgt
│   └── annotation_val.odgt
│
├── MOT20
│   ├── train
│   └── test
│
└── Cityscapes
    ├── images
    └── labels_with_ids
```
## Single GPU training

```shell
cd <prb_dir>
python train_aux.py --workers 8 --device 0 --batch-size 4 --data data/mot.yaml --img 1280 1280 --cfg cfg/training/PRB_Series/yolov7-PRB-2PY-e6e-tune-auxpy1.yaml --weights './yolov7-prb-2py-e6e.pt' --name yolov7-prb --hyp data/hyp.scratch.p6.yaml --epochs 100
```
```
<datasets_dir>
├── A
├── B
├── label
└── list
```
- `A`: images of the t1 phase
- `B`: images of the t2 phase
- `label`: label maps
- `list`: contains `train.txt`, `val.txt`, and `test.txt`; each file records the image names (`XXX.png`) in the change detection dataset
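The `list` files can be generated with a short script. A sketch under assumptions (the 8:1:1 split, the fixed seed, and file names of the form `0000.png` are illustrative choices, not the dataset's actual convention):

```python
# Sketch: build train/val/test list files for the change-detection layout
# above. The 8:1:1 split ratio and sample file names are assumptions.
import os
import random

def write_splits(image_names, out_dir, seed=0):
    """Shuffle image names and write train.txt / val.txt / test.txt."""
    random.Random(seed).shuffle(image_names)
    n = len(image_names)
    splits = {
        "train.txt": image_names[: int(0.8 * n)],
        "val.txt": image_names[int(0.8 * n): int(0.9 * n)],
        "test.txt": image_names[int(0.9 * n):],
    }
    os.makedirs(out_dir, exist_ok=True)
    for fname, names in splits.items():
        with open(os.path.join(out_dir, fname), "w") as f:
            f.write("\n".join(names))

names = [f"{i:04d}.png" for i in range(10)]  # e.g. 0000.png .. 0009.png
write_splits(names, "list")
```

Each line of a list file is one image name, matching the layout the training code expects.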
For more details on the training settings, you can follow the BIT_CD training code.
By submitting the txt files produced in this step to the MOTChallenge website, you can get the same results as in the paper. Tuning the tracking parameters carefully could lead to higher performance. In the paper we apply ByteTrack's calibration.
```shell
cd <BoT-SORT_dir>
python3 tools/track.py <datasets_dir/MOT17> --default-parameters --with-reid --benchmark "MOT17" --eval "test" --fp16 --fuse
python3 tools/interpolation.py --txt_path <path_to_track_result>
```
```shell
cd <BoT-SORT_dir>
python3 tools/track_prb.py <datasets_dir/MOT17> --default-parameters --with-reid --benchmark "MOT17" --eval "test" --fp16 --fuse
python3 tools/interpolation.py --txt_path <path_to_track_result>
```
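`tools/interpolation.py` post-processes the tracking txt files: when a track temporarily disappears for a few frames, the missing bounding boxes are filled in by linear interpolation between the surrounding detections. A toy sketch of that idea (this is not the repository's script, which operates on full MOT-format files with `frame, id, x, y, w, h, score, ...` rows):

```python
# Sketch of the gap-filling idea behind tools/interpolation.py: linearly
# interpolate a track's bounding box across frames where it was not detected.
def interpolate_gap(first, last):
    """Linearly interpolate boxes between two (frame, x, y, w, h) samples."""
    f0, *b0 = first
    f1, *b1 = last
    rows = []
    for f in range(f0 + 1, f1):
        t = (f - f0) / (f1 - f0)  # fraction of the way through the gap
        rows.append((f, *[a + t * (b - a) for a, b in zip(b0, b1)]))
    return rows

# Track seen at frame 10 and again at frame 14; frames 11-13 are filled.
filled = interpolate_gap((10, 100.0, 50.0, 40.0, 80.0),
                         (14, 120.0, 58.0, 40.0, 80.0))
for row in filled:
    print(row)
```

Filling short gaps this way reduces identity fragmentation in the submitted results.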
MOT17 test set:

| Tracker | MOTA | IDF1 | HOTA |
|---|---|---|---|
| SMILEtrack | 81.06 | 80.5 | 65.28 |

MOT20 test set:

| Tracker | MOTA | IDF1 | HOTA |
|---|---|---|---|
| SMILEtrack | 78.19 | 77.53 | 65.28 |
A large part of the code, ideas, and results is borrowed from PRBNet, ByteTrack, BoT-SORT, and YOLOv7; thanks for their excellent work!