modes/track/ #7906
Replies: 106 comments 262 replies
-
Can I run two models simultaneously on one video? I want the two models to work at the same time with cumulative results. Is it possible? Please let me know. Thanks in advance!!
-
Hi, first of all, I have been loving working with YOLOv8. Great tool! However, I have been having difficulty with a certain task. I want to use model.track on videos that I have, and then use save_crop=True, but save with a naming convention that lets me track each person's ID. Currently, save_crop just gives me the cropped images of the detected objects, but there is no way to know which frame of the video each crop came from, or which ID is attached to which cropped image. The visualization through cv2.imshow shows the IDs across the different frames, but I can't find a way to save them. The naming convention I am looking for is something like this: "frame_30_ID_1.jpg". My current code looks something like this:
```
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load model
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
ret = True
while ret:
    ret, frame = cap.read()
    if ret:
        results = model.track(frame, persist=True, save_crop=True)
cap.release()
```
Any help would be greatly appreciated! Thanks!
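One way to get crops named like "frame_30_ID_1.jpg" is to skip save_crop and write the crops yourself from the tracked boxes. A minimal sketch, assuming the current Results API (boxes.id and boxes.xyxy tensors); `crop_name` and `save_id_crops` are hypothetical helper names, not Ultralytics APIs:

```python
import os

def crop_name(frame_idx, track_id):
    """Build the crop filename for a given frame index and track ID."""
    return f"frame_{frame_idx}_ID_{track_id}.jpg"

def save_id_crops(frame, result, frame_idx, out_dir="crops"):
    """Crop each tracked box out of `frame` and save it under an ID-based name."""
    import cv2  # imported lazily; only needed when actually saving crops
    os.makedirs(out_dir, exist_ok=True)
    boxes = result.boxes
    if boxes.id is None:  # the tracker may not have assigned IDs yet
        return
    for xyxy, tid in zip(boxes.xyxy.tolist(), boxes.id.int().tolist()):
        x1, y1, x2, y2 = map(int, xyxy)
        crop = frame[y1:y2, x1:x2]
        cv2.imwrite(os.path.join(out_dir, crop_name(frame_idx, tid)), crop)
```

You would call something like `save_id_crops(frame, results[0], frame_idx)` once per frame inside the tracking loop, incrementing `frame_idx` yourself.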
-
Hi @pderrenger. Can I run the model using my phone's camera? Can you please share the code to invoke my mobile's camera to test the model? Thanks in advance.
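One common approach is to stream the phone camera over the local network (for example with an IP-webcam app) and point the tracker at the stream URL. A sketch under that assumption; the `http://<ip>:8080/video` URL format is typical of such apps, not an Ultralytics API:

```python
def phone_stream_url(ip, port=8080):
    """Build an MJPEG stream URL as exposed by common IP-webcam phone apps."""
    return f"http://{ip}:{port}/video"

def track_phone_camera(ip):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    model = YOLO("yolov8n.pt")
    # OpenCV-backed sources accept network stream URLs directly:
    return model.track(phone_stream_url(ip), show=True)
```

When the phone is attached over USB or is the machine's default camera, passing `source=0` works as with any webcam.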
-
Hi, help me understand why I get this error when tracking with a segmentation model. My ultimate goal is to use a custom car-plate segmentation model for tracking. Thank you very much.
-
YOLOv8 is very practical overall. Can I implement tracking with two cameras? I would like a car tracked by camera A to keep the same track ID when it moves to camera B, but currently an ID switch always happens. Is it because of the model's accuracy?
```
def cam2():
    cap = cam
    ...

a = threading.Thread(target=cam1)
a.start()
```
-
Hi, I saw that I can use an OpenVINO IR format model just like any other PyTorch model and then run tracking as normal. I was wondering how I would load the IR '.xml' and '.bin' files as arguments into YOLO(), or whether I should load my model using the OpenVINO library instead? Thanks.
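A sketch of the usual Ultralytics workflow: `export(format="openvino")` writes the .xml/.bin pair into a `<stem>_openvino_model` folder, and you pass that folder (not the individual files) to YOLO(). The folder name below is the default for yolov8n.pt; your own export may differ:

```python
from pathlib import Path

def openvino_model_dir(weights):
    """Ultralytics exports OpenVINO IR into a '<stem>_openvino_model' folder."""
    return f"{Path(weights).stem}_openvino_model"

def track_with_openvino(source):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    # Export once: YOLO("yolov8n.pt").export(format="openvino")
    # Then point YOLO() at the exported folder that holds the .xml and .bin:
    model = YOLO(openvino_model_dir("yolov8n.pt"))
    return model.track(source)
```

There is no need to load the IR through the OpenVINO library yourself; YOLO() dispatches to the OpenVINO backend based on the folder contents.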
-
Can I use a YOLOv8 model to track and re-identify a person, with the same ID assigned to them across multiple camera feeds?
-
How can we track only moving objects in the "Plotting Tracks Over Time" code?
```
from collections import defaultdict

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
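One way to restrict the plot to moving objects is to measure each track's displacement over its stored history and skip near-stationary tracks. A sketch, assuming the docs-style `track_history` dict of (x, y) centre points per track ID; `is_moving` and `min_disp` are hypothetical names:

```python
import math

def is_moving(history, min_disp=5.0):
    """True if the track's first and last centres are farther apart than min_disp pixels."""
    if len(history) < 2:
        return False
    (x0, y0), (x1, y1) = history[0], history[-1]
    return math.hypot(x1 - x0, y1 - y0) > min_disp

# Inside the frame loop, only draw the polyline for tracks where
# is_moving(track_history[track_id]) is True.
```

Comparing first and last points is deliberately cheap; for jittery detections you may prefer the total path length or a per-frame velocity threshold.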
-
```
import cv2
from ultralytics import YOLO

model = YOLO('yolov8_custom_train.engine', task="detect")

# Path to the input video file
input_video_path = '/content/gdrive/MyDrive/yolov8-tensorrt/inference/output_video.mp4'
# Path to the output video file
output_video_path = 'outputtest_video.mp4'

# Define the coordinates of the polygon
polygon_points = [(670, 66), (1237, 550), (514, 1054), (161, 295)]

# Open the input video file
cap = cv2.VideoCapture(input_video_path)

# Get video properties
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

# Function for finding the centroid
def calculate_centroid(box):
    ...

# Function to check if two bounding boxes overlap
def check_overlap(box1, box2):
    ...

# Read until video is completed
while cap.isOpened():
    ...

# Release video objects
cap.release()
# Close all OpenCV windows
cv2.destroyAllWindows()
```
Here I am tracking the label "Person", but the IDs change within the next 2 to 3 frames; is there any solution for this?
-
What is the difference between these attributes of results[0].boxes:
-
Is it possible to use our own weights as the model to track, or must we use yolov8n.pt?
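Any Ultralytics-format weights file loads and tracks the same way as the pretrained checkpoints. A short sketch; `"path/to/best.pt"` is a placeholder for your own training run's weights, and `weights_for_tracking` is a hypothetical helper:

```python
def weights_for_tracking(custom):
    """Pick custom weights when provided, else fall back to the pretrained model."""
    return custom or "yolov8n.pt"

def run_tracking(source, weights="path/to/best.pt"):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    model = YOLO(weights_for_tracking(weights))  # custom weights load identically
    return model.track(source, persist=True)
```

The only requirement is that the tracker receives detections, so any detect, segment, or pose model you trained yourself works as the source of boxes.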
-
So I am using YOLOv8 for my current project; it's been a breeze so far. I do have a question on the tracking method provided by YOLOv8. When I am using the generic yolov8n model (or even a custom model trained on a few objects), I know I can filter out things that don't interest me by their ID, as below:
But when I catch an object that I am interested in, can I, at that time or at that frame, issue a track command to start tracking it? If it can be done, can you tell me how? A short example would be even better! Thanks in advance.
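For the filtering part, the `classes` argument of model.track restricts which detections the tracker ever sees. A sketch; the class names in `keep` and the `wanted_classes` helper are illustrative assumptions:

```python
def wanted_classes(names, keep=("person", "car")):
    """Map class-name strings to the integer IDs model.track expects."""
    return [i for i, n in names.items() if n in keep]

def track_filtered(source):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    model = YOLO("yolov8n.pt")
    ids = wanted_classes(model.names)  # model.names maps id -> class name
    # Only detections of these classes are passed to the tracker:
    return model.track(source, classes=ids, persist=True)
```

Starting a track on demand mid-stream is a different matter; the built-in trackers assign IDs to every detection of the allowed classes, so "start tracking this one now" usually means filtering the results down to the chosen track ID after the fact.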
-
Hi, I want some detailed help and guidance on how to use custom tracker models with my custom YOLOv8 pose model. I am facing a re-identification problem when using bytetrack.yaml, so I think I should use StrongSORT or DeepSORT. I would therefore like the Ultralytics team to help me select a tracker model (or use multiple tracker models) and guide me properly on how to use them with my custom-trained YOLOv8 model.
-
```
import random

import cv2
from ultralytics import YOLO

# Opening the file in read mode
my_file = open("utils/coco.txt", "r")
# Reading the file
data = my_file.read()
# Splitting the text when a newline ('\n') is seen
class_list = data.split("\n")
my_file.close()

# Generate random colors for class list
detection_colors = []

# Load a pretrained YOLOv8n model
model = YOLO("weights/yolov8n.pt", "v8")

# Vals to resize video frames | small frame optimises the run
frame_wid = 640

def CarBehaviour(frame, color_threshold=1100):
    ...

def detect_and_draw(frame, model, class_list, detection_colors):
    ...

# Open video capture
cap = cv2.VideoCapture("/home/opencv_env/Vehicle-rear-lights-analyser-master/testing_data/road_2.mp4")
if not cap.isOpened():
    ...

while True:
    ...

# When everything is done, release the capture
cap.release()
```
-
Is there a way to change the text color, or to remove a bounding box and label background color? An object is occasionally displayed in white with white text, which is not visible on the displayed image. All suggestions are appreciated.
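Instead of relying on `result.plot()` defaults, you can draw each box yourself with Ultralytics' `Annotator` helper and pick contrasting colours. A sketch; the fixed red box colour and the `pick_colours` luma heuristic are assumptions of this example, not library behaviour:

```python
def pick_colours(bgr_box):
    """Choose a text colour that contrasts with the box colour (simple luma test)."""
    b, g, r = bgr_box
    luma = 0.114 * b + 0.587 * g + 0.299 * r
    return (0, 0, 0) if luma > 127 else (255, 255, 255)

def annotate(frame, result):
    from ultralytics.utils.plotting import Annotator  # lazy import
    annotator = Annotator(frame)
    for box in result.boxes:
        colour = (0, 0, 255)  # example: fixed red boxes instead of per-class colours
        label = result.names[int(box.cls)]
        annotator.box_label(box.xyxy[0], label,
                            color=colour, txt_color=pick_colours(colour))
    return annotator.result()
```

Because you control `color` and `txt_color` per call, white-on-white can never occur, and skipping `box_label`'s label argument leaves a plain box with no label background at all.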
-
I am tracking five small objects that are being transferred from one bowl to another using a grasping instrument. I need to detect if an object falls from the instrument during the transfer. I am using Ultralytics YOLO for tracking, but I encounter an issue: during the transfer, when an object is held by the instrument or falls, its ID changes and a new one is assigned. This ID change is preventing me from proceeding further. How can I address this issue and ensure consistent tracking of the objects?
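ID switches during brief occlusion (the object held or falling) can often be reduced by raising the tracker's lost-track buffer. A sketch, assuming you copy the stock bytetrack.yaml, edit `track_buffer` (a real ByteTrack config key), and point `tracker=` at your copy; `my_bytetrack.yaml` and `patched_tracker_cfg` are hypothetical names:

```python
def patched_tracker_cfg(cfg, buffer_frames=120):
    """Return a tracker config dict with a longer lost-track buffer."""
    out = dict(cfg)  # copy so the original config is left untouched
    out["track_buffer"] = buffer_frames
    return out

def track_with_buffer(source):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    model = YOLO("yolov8n.pt")
    # `tracker` accepts a path to your edited YAML copy:
    return model.track(source, tracker="my_bytetrack.yaml", persist=True)
```

Lowering the detector's confidence threshold for small objects and training on more fall/occlusion frames also helps keep the same ID through the transfer.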
-
I see that the examples here use multi-threading for concurrent processing, but would that be necessary if I have a GPU which can handle processing in parallel rather than concurrently? Maybe I'm missing something. I'm trying to process (track) multiple streams with access to one Nvidia GPU, and I have the following questions:
Any help would be much appreciated. Thanks!
-
I really need help to write this program:
-
```
import os
from collections import defaultdict

import cv2
import numpy as np
import pandas as pd
from google.colab import drive
from ultralytics import YOLO

# Mount Google Drive
drive.mount('/content/drive')

# Camera calibration parameters (example values, should be adjusted for your camera)
fx, fy, cx, cy = 1000, 1000, 640, 360  # example focal lengths and optical centers

# Homography matrix (example values, should be derived from calibration)
H = np.array([[1, 0, 0],
              ...])

# Function to convert pixels to meters
def pixel_to_meter(x, y, H):
    ...

# Function to apply a simple moving average
def moving_average(data, window_size):
    ...

# Function to interpolate missing frames
def interpolate(data, timestamps):
    ...

# Load the YOLOv8 model trained on the VisDrone dataset
model_path = "/content/drive/MyDrive/drone/runs/detect/train3/weights/best.pt"
model = YOLO(model_path)

# Open the video file
video_path = "/content/drive/MyDrive/drone/les résultats/vidéojdidzone1.mp4"
cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS)

# Create a VideoWriter object to save the output video
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

# Drone altitude
drone_height = 100  # meters

# Camera field of view
fov_degrees = 62.2
fov_radians = np.deg2rad(fov_degrees)

# Calculate the visible width in meters at the given altitude
visible_width = 2 * (drone_height * np.tan(fov_radians / 2))

# Dictionary to store vehicle information
vehicle_data = defaultdict(lambda: {'timestamps': [], 'positions': [], 'speeds': [], 'accelerations': []})

# Vehicle class IDs (based on YOLO's COCO dataset)
vehicle_classes = {2, 3, 5, 8}  # COCO classes: car, motorcycle, bus, truck

while cap.isOpened():
    ...

cap.release()

# Apply interpolation and smoothing to the positions
window_size = 5  # window size for the moving average

# Calculate speed and acceleration based on smoothed positions
for obj_id, info in vehicle_data.items():
    ...

# Convert the data to a DataFrame
data = []
df = pd.DataFrame(data, columns=['Timestamp', 'ID', 'Position X (m)', 'Position Y (m)', 'Speed (m/s)', 'Acceleration (m/s^2)'])

# Save the DataFrame to an Excel file
excel_path = '/content/drive/MyDrive/drone/les résultats/excelzone1.xlsx'

# Check if the file exists and download it
if os.path.exists(excel_path):
    ...
```
-
Please help me with the logic to connect many cameras:
```
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

camera_ids = [
    'rtsp://admin:admin12345@172.17.67.20:554',
]
caps = [cv2.VideoCapture(camera_id) for camera_id in camera_ids]
roi_start = [(0, 0)] * len(caps)
tracked_persons = [[] for _ in range(len(caps))]

def draw_rectangle(event, x, y, flags, param):
    ...

for idx in range(len(caps)):
    ...

def is_inside_roi(x, y, roi_start, roi_end):
    ...

while True:
    ...

for cap in caps:
    cap.release()
```
-
I want to track multiple persons across multiple cameras, at least 3. If a person moves from one camera to another, he should be assigned the same ID he had in the first. Please help me: how can I do this?
-
Hello. I have now wrapped the classification and tracking functionality into a Flask app that returns the counted IDs when accessed by a REST call like GET, meaning it can be accessed by a separate app for processing. I want to be able to reset the counting through some other request. My camera feed is always live, so I need a way to reset the track IDs to zero when necessary; they just keep counting until I exit the program.
-
Yes, I actually managed to do so by reloading the model. I coupled this to a
button in my React frontend through a Flask endpoint I created:
```
from flask import Flask, jsonify
import cv2
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("best.pt")
track_ids = []
cap = cv2.VideoCapture(0)  # placeholder source; mine is a live camera feed

def reset_tracker():
    global model
    model = YOLO("best.pt")  # reloading the model resets the tracker state
    global track_ids
    track_ids = []

@app.route('/track-ids', methods=['GET'])
def get_track_ids():
    return jsonify({"track_ids": track_ids})

# Route
@app.route('/reset-tracker', methods=['GET'])
def reset_tracker_endpoint():
    reset_tracker()
    return jsonify({"message": "Tracker has been reset."})

def process_video():
    global track_ids
    # Loop through the video frames
    while cap.isOpened():
        # Read a frame
        success, frame = cap.read()
        if success:
            results = model.track(frame, persist=True, conf=0.95)
            track_ids = results[0].boxes.id.tolist() if results[0].boxes.id is not None else []
            annotated_frame = results[0].plot()
            cv2.imshow("Bag Counter", annotated_frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()
```
…On Fri, 19 Jul 2024, 13:00 vivekpayasi wrote:
@glenn-jocher <https://github.com/glenn-jocher> Thanks for your reply.
I tried the following code, giving a .streams file which contains paths to 5 videos.
```
model = YOLO("yolov8x-worldv2.pt")
classes_list = ["person", "helmet", "gloves", "mask", "shoes", "glasses"]
model.set_classes(classes_list)
model.to('cuda')
results = model.track(source='video_sources.streams',
                      half=True, device='cuda:0',
                      stream=True, show=False, vid_stride=vid_stride)
frame_idx = 0
for result in results:
    check_ppe_compliance(result, frame_idx)
    frame_idx += 1
```
But only the first frame of each video seems to be processed, not the rest of the frames of the videos. (Screenshot 2024-07-19 at 15.27.59 attached.)
Am I doing this correctly? Should I spawn a separate thread for each stream/video and initialize a model for each thread? I've seen that approach in many docs and examples. What is the difference between giving the .streams file directly vs spawning a thread for each stream separately? Thanks!
-
How can I track only one class in a YOLOv8 model while still detecting all classes?
-
```
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)
        # Visualize the results on the frame
        annotated_frame = results[0].plot()
        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)
        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
How can I track only one class while detecting all classes in this code?
To track only one class while detecting all classes in a YOLOv8 model, you can filter the tracking results by class after detection. This way, the model detects all classes, but the tracker only processes the specified class. You can achieve this by modifying the tracking code to include a class filter.
…On Sat, 27 Jul 2024, 9:01 am Glenn Jocher wrote:
@mariswarycharan <https://github.com/mariswarycharan> to track only one class while detecting all classes in a YOLOv8 model, you can filter the tracking results by class after detection. For more detailed guidance, please refer to the Ultralytics documentation on tracking <https://docs.ultralytics.com/modes/track/>.
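The post-detection filter described above can be sketched as a small helper that pairs each track ID with its class and keeps only the wanted class; `filter_tracks` and `keep_class` are hypothetical names, and class 0 ("person" in COCO) is just an example:

```python
def filter_tracks(class_ids, track_ids, keep_class=0):
    """Pair each track ID with its class and keep only the wanted class."""
    return [tid for cls, tid in zip(class_ids, track_ids) if cls == keep_class]

# Inside the tracking loop you would call it like:
# boxes = results[0].boxes
# person_ids = filter_tracks(boxes.cls.int().tolist(), boxes.id.int().tolist())
```

Everything is still detected and plotted; only the downstream logic (counting, saving, alerts) sees the single class you care about.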
-
I am extracting features from a person's bounding box and then matching these features when a person reappears. If the features match, how can I assign a custom ID to the detected person?
-
Can I use tracking with YOLOv9 / YOLOv10?
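In recent Ultralytics releases, the same YOLO class loads v8, v9 and v10 weights, and .track() works identically for all of them. A sketch; the weight filenames below are examples and your installed version decides which are actually available:

```python
EXAMPLE_WEIGHTS = {"v8": "yolov8n.pt", "v9": "yolov9c.pt", "v10": "yolov10n.pt"}

def weights_for(version):
    """Look up an example weights file for a model generation."""
    return EXAMPLE_WEIGHTS[version]

def track(source, version="v10"):
    from ultralytics import YOLO  # lazy import: only needed to actually track
    model = YOLO(weights_for(version))
    # The tracker only consumes detections, so the detector generation
    # does not change the .track() call:
    return model.track(source, persist=True)
```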
-
In the default setting of Ultralytics tracking, each person's ID is counted only once, even if they cross from the left side to the right side of a line.
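If you want every crossing counted rather than one count per ID, you can keep each track's last centre position and count a crossing whenever it changes sides of the line. A sketch; `crossed`, `update_counts` and the `line_x=640` value are illustrative assumptions, not Ultralytics APIs:

```python
def crossed(prev_x, curr_x, line_x=640):
    """True when a track's centre moved from one side of the line to the other."""
    return (prev_x - line_x) * (curr_x - line_x) < 0

def update_counts(last_seen, counts, track_id, cx, line_x=640):
    """Count every crossing, not just the first one per ID."""
    if track_id in last_seen and crossed(last_seen[track_id], cx, line_x):
        counts[track_id] = counts.get(track_id, 0) + 1
    last_seen[track_id] = cx
    return counts
```

Feed it the centre x of each tracked box per frame; a track that walks back and forth over the line then accumulates one count per crossing.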
-
Is it possible to speed up inference by processing video frames in batches?
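Batched forward passes do raise GPU throughput for plain detection; note that the built-in trackers expect frames in temporal order, so this sketch applies to detection rather than .track(). `batched` and `detect_in_batches` are hypothetical helper names:

```python
def batched(frames, batch_size=8):
    """Yield consecutive chunks of frames, preserving order."""
    for i in range(0, len(frames), batch_size):
        yield frames[i:i + batch_size]

def detect_in_batches(frames):
    from ultralytics import YOLO  # lazy import: only needed to actually detect
    model = YOLO("yolov8n.pt")
    all_results = []
    for chunk in batched(frames):
        all_results.extend(model(chunk))  # one forward pass per chunk
    return all_results
```

Batching trades latency for throughput: each frame waits for its chunk to fill before being processed.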
-
modes/track/
Learn how to use Ultralytics YOLO for object tracking in video streams. Guides to use different trackers and customise tracker configurations.
https://docs.ultralytics.com/modes/track/