
How can i use ZED camera for this approach? #1566

Closed
1 task done
AloysiusChua0822 opened this issue Aug 5, 2024 · 3 comments
Labels
question Further information is requested

Comments

@AloysiusChua0822

Search before asking

  • I have searched the Yolo Tracking issues and found no similar bug report.

Question

Hi, I am currently experimenting with a ZED camera. Is it possible to integrate this tracker with the ZED camera? I would highly appreciate your guidance.

@AloysiusChua0822 AloysiusChua0822 added the question Further information is requested label Aug 5, 2024
@mikel-brostrom
Owner

Maybe something like this?

import cv2
import numpy as np
from pathlib import Path
import pyzed.sl as sl

from boxmot import DeepOCSORT

# Initialize the ZED camera
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_mode = sl.DEPTH_MODE.ULTRA

if not zed.is_opened():
    print("Opening ZED Camera...")
status = zed.open(init_params)
if status != sl.ERROR_CODE.SUCCESS:
    print(f"Error opening ZED camera: {status}")
    exit(1)

runtime_params = sl.RuntimeParameters()
mat = sl.Mat()

tracker = DeepOCSORT(
    model_weights=Path('osnet_x0_25_msmt17.pt'),  # which ReID model to use
    device='cuda:0',
    fp16=False,
)

while True:
    # Grab an image from the ZED camera
    if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_image(mat, sl.VIEW.LEFT)
        im = mat.get_data()
        # ZED frames are 4-channel BGRA; drop the alpha channel for OpenCV and the tracker
        im = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)

        # substitute with your object detector; output has to be N X (x, y, x, y, conf, cls)
        dets = np.array([[144, 212, 578, 480, 0.82, 0],
                         [425, 281, 576, 472, 0.56, 65]])

        # the tracker expects an (N, 6) array even when there are no detections
        if dets.size == 0:
            dets = np.empty((0, 6))  # empty N X (x, y, x, y, conf, cls)
        tracker.update(dets, im)  # --> M X (x, y, x, y, id, conf, cls, ind)
        tracker.plot_results(im, show_trajectories=True)

        # break on pressing q or space
        cv2.imshow('BoxMOT detection', im)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(' ') or key == ord('q'):
            break

zed.close()
cv2.destroyAllWindows()

You of course need an object detector as well.
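As a minimal sketch of that last step: whatever detector you use, its output has to be reshaped into the N x (x, y, x, y, conf, cls) array the snippet above feeds to tracker.update. The helper below (the name to_dets is mine, not part of boxmot) does that with plain NumPy; the commented Ultralytics lines at the end assume you use that package for detection.

```python
import numpy as np

def to_dets(xyxy, conf, cls):
    """Stack boxes, confidences and class ids into the
    N x (x, y, x, y, conf, cls) array tracker.update expects."""
    xyxy = np.asarray(xyxy, dtype=float).reshape(-1, 4)
    conf = np.asarray(conf, dtype=float).reshape(-1, 1)
    cls = np.asarray(cls, dtype=float).reshape(-1, 1)
    if xyxy.shape[0] == 0:
        # keep the (0, 6) shape so the tracker still gets a valid array
        return np.empty((0, 6))
    return np.hstack([xyxy, conf, cls])

# With an Ultralytics YOLO model (assuming that package), the pieces
# would come from something like:
#   results = model(im)[0]
#   dets = to_dets(results.boxes.xyxy.cpu().numpy(),
#                  results.boxes.conf.cpu().numpy(),
#                  results.boxes.cls.cpu().numpy())
```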


👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

@github-actions github-actions bot added the Stale label Aug 18, 2024
@AloysiusChua0822
Author

Thanks! It works really well.
