
error when inference on mp4 #9

Closed
mrfsc opened this issue Apr 6, 2022 · 18 comments

@mrfsc

mrfsc commented Apr 6, 2022

I meet an error when I run demo.py with:

python demo.py --config-file configs/sparse_inst_r50_giam.yaml --video-input test.mp4 --output results --opt MODEL.WEIGHTS sparse_inst_r50_giam_aug_2b7d68.pth INPUT.MIN_SIZE_TEST 512

it returns:

[ERROR:0@4.053] global /io/opencv/modules/videoio/src/cap.cpp (595) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.5.5) /io/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): results in function 'icvExtractPattern'

0%| | 0/266 [00:00<?, ?it/s]/home/user/InstanceSeg/detectron2/detectron2/structures/image_list.py:114: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
max_size = (max_size + (stride - 1)) // stride * stride
/home/user/anaconda3/envs/seg/lib/python3.7/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
0%| | 0/266 [00:00<?, ?it/s]
Traceback (most recent call last):
File "demo.py", line 160, in <module>
for vis_frame in tqdm.tqdm(demo.run_on_video(video, args.confidence_threshold), total=num_frames):
File "/home/user/anaconda3/envs/seg/lib/python3.7/site-packages/tqdm-4.63.1-py3.7.egg/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/user/InstanceSeg/SparseInst/sparseinst/d2_predictor.py", line 138, in run_on_video
yield process_predictions(frame, self.predictor(frame))
File "/home/user/InstanceSeg/SparseInst/sparseinst/d2_predictor.py", line 106, in process_predictions
frame, predictions)
File "/home/user/InstanceSeg/detectron2/detectron2/utils/video_visualizer.py", line 86, in draw_instance_predictions
for i in range(num_instances)
File "/home/user/InstanceSeg/detectron2/detectron2/utils/video_visualizer.py", line 86, in
for i in range(num_instances)
TypeError: 'NoneType' object is not subscriptable

@wondervictor
Member

Hi @mrfsc, this problem arises because detectron2 requires bounding boxes to compute IoU and associate objects between frames when assigning colors. You can refer to this line: https://github.com/facebookresearch/detectron2/blob/221448e4dcfbebf215b8d21ae7e4b1dfbf422d29/detectron2/utils/video_visualizer.py#L104
However, SparseInst does not predict bounding boxes. The problem can be solved by associating objects through the predicted masks instead, in detectron2/utils/video_visualizer.py, line 85:

if boxes is None:
    masks_rles = mask_util.encode(
        np.asarray(np.asarray(masks.tensor.permute(1, 2, 0)), dtype=np.uint8, order="F")
    )
    detected = [
        _DetectedInstance(classes[i], None, mask_rle=masks_rles[i], color=None, ttl=8)
        for i in range(num_instances)
    ]
else:
    detected = [
        _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=None, ttl=8)
        for i in range(num_instances)
    ]

@mrfsc
Author

mrfsc commented Apr 6, 2022

It's been solved by altering video_visualizer.py, thanks very much!

By the way, there is a small bug in demo.py line 104:
time.time() -,

It works with this modification:
time.time() - start_time,
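For context, the pattern being fixed is plain elapsed-time measurement around per-frame inference; a self-contained sketch of the corrected expression (the workload here is a stand-in, not the real predictor):

```python
import time

start_time = time.time()
result = sum(i * i for i in range(100_000))  # stand-in for one frame's inference
elapsed = time.time() - start_time           # the corrected expression
print("frame processed in {:.3f}s".format(elapsed))
```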

@wondervictor
Member

Thanks and I will fix it!

@WXLL579

WXLL579 commented Jun 14, 2022

if boxes is None:
    masks_rles = mask_util.encode(
        np.asarray(np.asarray(masks.tensor.permute(1, 2, 0)), dtype=np.uint8, order="F")
    )
    detected = [
        _DetectedInstance(classes[i], None, mask_rle=masks_rles[i], color=None, ttl=8)
        for i in range(num_instances)
    ]
else:
    detected = [
        _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=None, ttl=8)
        for i in range(num_instances)
    ]

Where should this code go??? In my video_visualizer.py, line 85 reads:
[True] * len(predictions)
detectron2 0.6
Please take a look, thanks!!

@mrfsc
Author

mrfsc commented Jun 14, 2022

if boxes is None:
    masks_rles = mask_util.encode(
        np.asarray(np.asarray(masks.tensor.permute(1, 2, 0)), dtype=np.uint8, order="F")
    )
    detected = [
        _DetectedInstance(classes[i], None, mask_rle=masks_rles[i], color=None, ttl=8)
        for i in range(num_instances)
    ]
else:
    detected = [
        _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=None, ttl=8)
        for i in range(num_instances)
    ]

Where should this code go??? In my video_visualizer.py, line 85 reads: [True] * len(predictions). detectron2 0.6. Please take a look, thanks!!

The required version of detectron2 is v0.3; please confirm the version of your detectron2. You may have installed the latest version of detectron2 :)
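A quick way to check which detectron2 is installed, using only the standard library (`importlib.metadata`); the package name is the only assumption here:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# SparseInst's demo targets detectron2 v0.3; later versions renumber
# video_visualizer.py, so the "line 85" reference above moves around.
print(installed_version("detectron2"))
```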

@WXLL579

WXLL579 commented Jun 15, 2022

thx!

@BibibNanana

I met a different error; what should I do to fix this problem?
/opt/conda/lib/python3.7/site-packages/detectron2/structures/image_list.py:88: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
max_size = (max_size + (stride - 1)) // stride * stride
/opt/conda/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1634272168290/work/aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
26%|█████████████████████████████████████ | 256/994 [00:29<01:26, 8.56it/s]
Traceback (most recent call last):
File "demo.py", line 160, in <module>
for vis_frame in tqdm.tqdm(demo.run_on_video(video, args.confidence_threshold), total=num_frames):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/sparseinst_new/SparseInst-main/sparseinst/d2_predictor.py", line 136, in run_on_video
yield process_predictions(frame, self.predictor(frame))
File "/sparseinst_new/SparseInst-main/sparseinst/d2_predictor.py", line 104, in process_predictions
frame, predictions)
File "/opt/conda/lib/python3.7/site-packages/detectron2/utils/video_visualizer.py", line 87, in draw_instance_predictions
np.asarray(np.asarray(masks.tensor.permute(1, 2, 0)), dtype=np.uint8, order="F")
AttributeError: 'Tensor' object has no attribute 'tensor'

@fshamsafar

(quoting @wondervictor's fix above)

I got this error solved by using masks.permute(1, 2, 0) instead of masks.tensor.permute(1, 2, 0). :)
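The root cause of the `AttributeError` is the type of `masks`: detectron2's `BitMasks` wraps the data in a `.tensor` attribute, while SparseInst's predictor hands the visualizer a plain `torch.Tensor`. A small defensive sketch that accepts both (the helper name is made up for illustration, not from the repo):

```python
import torch

def masks_to_hwn(masks):
    """Return masks as (H, W, N), whether given a plain (N, H, W) Tensor or a
    BitMasks-like object that exposes the tensor via .tensor."""
    t = masks.tensor if hasattr(masks, "tensor") else masks  # unwrap if wrapped
    return t.permute(1, 2, 0)

# A SparseInst-style plain tensor: 3 instances over an 8x8 frame.
plain = torch.zeros(3, 8, 8, dtype=torch.bool)
print(tuple(masks_to_hwn(plain).shape))  # (8, 8, 3)
```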

@BibibNanana

Oooh, thanks, this solution also solved my problem.

@BibibNanana

(quoting @fshamsafar's reply above)

But when I finished running my code, I can't find my result. I want to know how to save my video result.

@fshamsafar

But when I finished running my code, I can't find my result. I want to know how to save my video result.

If you run demo.py with arguments like the ones below, the frames will be saved in ./results:

python demo.py --config-file configs/sparse_inst_r50_giam.yaml --video-input ./video.avi --output results --opt MODEL.WEIGHTS sparse_inst_r50_giam_aug_2b7d68.pth

@BibibNanana

(quoting @fshamsafar's reply above)

This is my command:
python demo.py --config-file configs/sparse_inst_r50_dcn_giam_aug.yaml --video-input fissure.mp4 --output results --opt MODEL.WEIGHTS model_0144999.pth INPUT.MIN_SIZE_TEST 512

But when it finished running, nothing was saved here:

[screenshot]

@fshamsafar

python demo.py --config-file configs/sparse_inst_r50_dcn_giam_aug.yaml --video-input fissure.mp4 --output results --opt MODEL.WEIGHTS model_0144999.pth INPUT.MIN_SIZE_TEST 512

I believe you should see the results in ./output if you change the argument to --output output.

@BibibNanana

(quoting @fshamsafar's reply above)

Oooh, I changed my command like this:
python demo.py --config-file configs/sparse_inst_r50_dcn_giam_aug.yaml --video-input fissure.mp4 --output output --opt MODEL.WEIGHTS model_0144999.pth INPUT.MIN_SIZE_TEST 512

But I still can't find any result in my project:
[screenshots]

@BibibNanana

(quoting @fshamsafar's reply above)

Can you help me? I still can't find my video result.

@BibibNanana

(quoting @wondervictor's fix above)

I have a problem; this is my command:
python demo.py --config-file configs/sparse_inst_r50_dcn_giam_aug.yaml --video-input fissure.mp4 --output video --opt MODEL.WEIGHTS model_water_leaf_fissure.pth INPUT.MIN_SIZE_TEST 512
But when it finished, I can't find my result. Can you help me?

@MrL-CV

MrL-CV commented Nov 3, 2022

(quoting @BibibNanana's question above)

You can try replacing "--output video" with "--output video01.mp4".

@adityashukla17

video_visualizer

Can you tell exactly which changes you made in video_visualizer.py?
