modes/predict/ #7932
Replies: 111 comments 248 replies
-
Thanks, it's amazing.
-
Can we set different confidence thresholds for different classes? It would be a nice addition.
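As far as I know there is no built-in per-class `conf` argument in predict, but you can run with a low global `conf` and filter afterwards. A minimal sketch of the filtering logic on plain `(class_id, confidence)` pairs (the thresholds and data here are made up for illustration):

```python
def filter_by_class_conf(detections, thresholds, default=0.25):
    """Keep only detections whose confidence meets the threshold for their class.

    detections: list of (class_id, confidence) pairs
    thresholds: dict mapping class_id -> minimum confidence
    default: threshold for classes not listed in `thresholds`
    """
    return [
        (cls, conf)
        for cls, conf in detections
        if conf >= thresholds.get(cls, default)
    ]

# Made-up example: class 0 needs 0.5, class 1 needs 0.8
dets = [(0, 0.6), (0, 0.4), (1, 0.7), (1, 0.9)]
print(filter_by_class_conf(dets, {0: 0.5, 1: 0.8}))  # [(0, 0.6), (1, 0.9)]
```

With real Ultralytics results you would build the pairs from `result.boxes.cls` and `result.boxes.conf` after inference.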
-
Can you please share more details on this? I understand that by default we receive 160x160 masks in segmentation models, but what does setting the above option to True do?
-
Hi, I am trying to train a simple YOLO model, and every time I try predicting it doesn't detect anything. Here is what I am doing: I download this dataset, train a model on it, and then, once it's trained, try to predict with it. I pasted all of my code so that you can replicate it: https://universe.roboflow.com/damage-4yhkc/damaged-bchyj/dataset/5/download

```python
from roboflow import Roboflow
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')  # build from YAML and transfer weights

# Predict
```
-
Hello, if I want to know which object these pixel coordinates represent, what should I do? For example, this is my code, and it gives me the pixel coordinates of objects, so how can I learn which coordinate belongs to which object?
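Assuming the coordinates come from `result.masks.xy`, the polygons are index-aligned with the detections in `result.boxes`, so each polygon can be paired with its class by index. A minimal sketch of that pairing on made-up data (the polygon lists and names dict are placeholders):

```python
def label_polygons(polygons, class_ids, names):
    """Pair each polygon (index-aligned with the detections) with its class name."""
    return [(names[cls], poly) for cls, poly in zip(class_ids, polygons)]

# Made-up example: two detections with index-aligned polygons and class ids
polygons = [[(10, 10), (20, 10), (20, 20)], [(50, 50), (60, 50), (60, 60)]]
class_ids = [0, 2]
names = {0: 'person', 1: 'bicycle', 2: 'car'}
for name, poly in label_polygons(polygons, class_ids, names):
    print(name, poly)
```

With real results, `class_ids` would come from `result.boxes.cls.int().tolist()` and `names` from `model.names`.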
-
With that code:
it prints these results:
-
How can we count the total number of objects in a set of images after training the model?
-
Thank you so much for your response, I appreciate it.
I want to know whether we can use this code on a directory, because I have 760 images in the directory and I want to count the total predicted boxes all at once.
Thanks
…On Tue, Feb 13, 2024, 9:55 PM Glenn Jocher ***@***.***> wrote:
Hey there! 👋 To count the total number of objects across a set of images after training your model, you can use the predict mode to process your images and then sum up the detections. Here's a quick example using Python:

```python
from ultralytics import YOLO

# Load your trained model
model = YOLO('path/to/your/trained_model.pt')

# List of images to run inference on
images = ['image1.jpg', 'image2.jpg', ...]

# Run inference
results = model(images)

# Count total objects
total_objects = sum(len(result.boxes) for result in results)
print(f'Total objects detected: {total_objects}')
```

This will give you the total count of objects detected across all your images. Happy counting! 😊
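For a whole directory, the predict docs list a directory path as a valid source, typically combined with `stream=True` so the 760 results are not all held in memory at once. The counting part is just an accumulation over the (possibly lazy) stream of per-image detection counts; a minimal sketch with made-up counts:

```python
def count_stream(detections_per_image):
    """Accumulate a running total over a (possibly lazy) stream of counts,
    mirroring: for r in model('path/to/dir/', stream=True): total += len(r.boxes)
    """
    total = 0
    for n in detections_per_image:
        total += n
    return total

# Made-up per-image detection counts for a few images
print(count_stream(iter([3, 0, 7, 2, 1])))  # 13
```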
-
If I want to run predictions on several images at once, should I use a tensor as the source, and in what format?
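For batched inference the source can be a single float BCHW tensor with values in [0, 1]. A sketch of building that shape with NumPy as a stand-in for `torch.stack` of normalized CHW images (the image sizes here are illustrative, and all images in a batch must share one size):

```python
import numpy as np

def to_bchw_batch(images_hwc):
    """Stack HWC uint8 images into a normalized float32 BCHW batch."""
    batch = np.stack([img.transpose(2, 0, 1) for img in images_hwc])  # B, C, H, W
    return batch.astype(np.float32) / 255.0  # values in [0, 1]

# Two fake 640x640 RGB images
imgs = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(2)]
batch = to_bchw_batch(imgs)
print(batch.shape)  # (2, 3, 640, 640)
```

With torch this would be `torch.stack(...)`, after which the batch can be passed directly as the predict source.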
-
This isn't working:

```python
import torch
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO('yolov8n.pt')

# Create a random torch tensor of BCHW shape (1, 3, 640, 640) with values in range [0, 1] and type float32
source = torch.rand(1, 3, 640, 640, dtype=torch.float32)

# Run inference on the source
results = model(source)  # list of Results objects
```

ERROR:
-
Hello, I exported the model (yolov8-seg.onnx) and I use this model in React. How can I get the outline coordinates, like `masks.xy`, in a React app?
-
I want to use YOLOv8 for multi-class image classification, but yolov8n-cls has 1,000 classes, which is way more than I want. I want to classify into fewer than 80 classes. For example, if an image contains a Chihuahua and a catfish, the model must return only "dog" and "fish", not the specific kinds. How can I do that?
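Besides training a custom classifier on your own coarse classes, one option is to keep the pretrained 1,000-class model and collapse its fine-grained predictions to coarse labels in post-processing. A minimal sketch with a made-up mapping table (you would fill it out for the classes you care about):

```python
# Hypothetical mapping from fine-grained ImageNet-style labels to coarse classes
FINE_TO_COARSE = {
    'Chihuahua': 'dog',
    'golden_retriever': 'dog',
    'catfish': 'fish',
    'goldfish': 'fish',
}

def coarsen(fine_labels):
    """Map fine-grained predicted labels to coarse classes, dropping unknowns."""
    return sorted({label: None for label in
                   (FINE_TO_COARSE[l] for l in fine_labels if l in FINE_TO_COARSE)})

print(coarsen(['Chihuahua', 'catfish']))  # ['dog', 'fish']
```

The fine labels would come from the top predictions of the classification result (e.g. the names behind `probs.top5`).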
-
How can I get masks only when the confidence is over 0.8, with real-time capture from a webcam?
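The simplest route is to pass `conf=0.8` to predict so low-confidence detections are dropped before masks are produced. If you need to filter afterwards instead, masks are index-aligned with box confidences, so you can select by index; a sketch on made-up values:

```python
def high_conf_indices(confidences, threshold=0.8):
    """Indices of detections whose confidence exceeds the threshold."""
    return [i for i, c in enumerate(confidences) if c > threshold]

# Made-up per-detection confidences (would come from result.boxes.conf)
conf = [0.95, 0.40, 0.82]
keep = high_conf_indices(conf)
print(keep)  # [0, 2]
# With real results: kept_masks = result.masks.data[keep]
```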
-
Hey! I was wondering if there's any way I can achieve this by changing some code in the plot method, since it's completely hard-coded. Thank you!
-
Hey, when I run this code:

```python
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = '/Users/aastha/Desktop/yolov8/video_sample2.mp4'

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
```

it processes each and every frame and is therefore very slow. How can I make it faster? I don't want to randomly skip frames, as that is not very efficient. So is there any way to process the video very fast without losing accuracy?
-
Good afternoon. Please advise me on what arguments I should specify when using the model. I use a pipeline with 4 cameras and perform processing on all of them; the video card is only loaded at 25 percent, but there is a significant delay and lag behind real-time playback. What should I do in this case?
-
Hey, how can we integrate DeepSparse into this code?

```python
while cap.isOpened():
    ...
cap.release()
```
-
I have a Raspberry Pi 5 with a 64-bit operating system installed, and a Camera Module V3 Wide attached. When I run the `results = model.predict(0, show=True, save=True)` code you specified in the video with the source set to 0, 1, or 2, the camera does not work. Could you write this code for the Raspberry Pi 5 (64-bit) with Camera Module V3?
-
For this code, I'm getting the error given below. Error:
-
Hello, I recently installed CUDA version 12, and after that I am getting this type of error. I've never seen this type of error while performing tracking before; it started right after the CUDA installation. May I know what kind of 'NotImplementedError' this is?

```
NotImplementedError                       Traceback (most recent call last)
File c:\Users\ashis\Desktop\THESIS\DT_flow\det_models\detect_track
File c:\Users\ashis\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\utils\_contextlib.py:35, in _wrap_generator..generator_context(*args, **kwargs)
File c:\Users\ashis\AppData\Local\Programs\Python\Python312\Lib\site-packages\torchvision\ops\boxes.py:41, in nms(boxes, scores, iou_threshold)
File c:\Users\ashis\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\ops.py:854, in OpOverloadPacket.call(self, *args, **kwargs)
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.
'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```

Thank you
-
Can I set a different confidence threshold for different labels?
-
Hello, I'm new to machine learning. Can you help fix my code?

```python
import os
import cv2
from ultralytics import YOLO

cap = cv2.VideoCapture('rtsp://admin:Admin123456@192.168.178.145/H264?ch=1&subtype=0')

# Load a custom model
model_path = os.path.join('.', 'runs', 'detect', 'train', 'weights', 'last.pt')
model = YOLO(model_path)

threshold = 0.5

while ret:
    ...

cap.release()
```
-
Hi. I am developing a model for device (phone, tablet) and hand detection. I collected data in different environments (dark, indoors with/without windows, corridors, outdoors) and then trained a model with YOLOv8n. The metrics were good, but when I tested on a live webcam, the boxes flicker. What can I do to make the boxes stable? Also, kindly let me know how I can reduce false positives. Thanks.
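A common trick for flicker, besides using `model.track()` so detections carry IDs between frames, is to smooth box coordinates over time with an exponential moving average. A minimal sketch of the smoothing itself, on made-up coordinates for a single tracked object (the alpha value is a tunable assumption):

```python
def ema_smooth(prev_box, new_box, alpha=0.3):
    """Blend the new box toward the previous one; lower alpha = steadier boxes.

    Boxes are (x1, y1, x2, y2) tuples for the same tracked object.
    """
    if prev_box is None:
        return new_box
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev_box, new_box))

# Jittery detections of the same object across three frames
box = None
for detected in [(100, 100, 200, 200), (104, 98, 206, 199), (97, 101, 201, 203)]:
    box = ema_smooth(box, detected)
print(tuple(round(v, 1) for v in box))
```

For false positives, raising the `conf` threshold and adding hard-negative background images to the training set are the usual first steps.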
-
I trained a YOLOv8 model on my custom dataset, but when I predict on an image, it doesn't show any of the class names from my dataset; it shows "person", "bicycle", etc. How can I fix this?
-
Hi there, I am working on a multi-label segmentation problem. During prediction, the colors assigned to some labels can be confusingly similar to the image background. Are there parameters in predict mode that can be used to customize label colors for instance segmentation? Can anyone help?
-
Hi, I want to detect objects in a video using OpenVINO and YOLOv8, but on CPU. I have implemented that, but is it possible to make it even faster? It currently takes a minimum of 4 seconds to detect objects in a 4-second video, which is not good. I tried implementing DeepSparse but couldn't figure out a way to do it.
-
Hi, I want to record, for each object detected in a video, the time at which it appeared and its confidence score in a file. I am able to do this using YOLOv8, but it is very slow, so I thought of integrating it with DeepSparse (as it claims up to 10x better performance on CPU). But the output format of DeepSparse is completely different from YOLOv8's, so I'm not able to store the info. Can you please help me out?
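Whichever engine produces the detections, the logging part reduces to: frame index divided by fps gives the timestamp, and each detection becomes one output row. A minimal sketch with made-up detections (the tuple format is a placeholder you would adapt to either engine's output):

```python
import csv
import io

def log_detections(rows, fps=30.0):
    """Write (frame_idx, class_name, confidence) rows as CSV with timestamps."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(['time_s', 'class', 'confidence'])
    for frame_idx, name, conf in rows:
        writer.writerow([round(frame_idx / fps, 3), name, conf])
    return buf.getvalue()

# Made-up detections: (frame index, class name, confidence)
print(log_detections([(0, 'car', 0.91), (45, 'person', 0.88)]))
```

Writing to a real file instead of `io.StringIO` is a one-line change; the point is that the adapter from each engine's output to `(frame_idx, name, conf)` tuples is the only engine-specific code.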
-
I have used:

```python
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("runs/barrel/yolov8n_custom/weights/best.pt")

# Define the path or URL to the image
image_path = "path/to/image"

# Run inference on the image
results = model(image_path)

# Get the first result (assuming only one image was passed)
result = results[0]
keypoints_data = result.keypoints.xy.cpu().numpy()
```

Is there any option to extract and print both the keypoint labels as well as the corresponding keypoint IDs?
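The model only outputs keypoint coordinates in a fixed order; the names have to come from your own dataset definition (e.g. your dataset YAML). Pairing IDs and labels with the coordinates is then a simple `enumerate`; a sketch with a hypothetical three-keypoint skeleton:

```python
def format_keypoints(coords, labels):
    """Format each keypoint's ID, label, and (x, y) for one detected object.

    coords: list of (x, y) pairs in the model's fixed keypoint order
    labels: keypoint names in that same order (e.g. from your dataset YAML)
    """
    return [
        f'{kp_id}: {name} ({x:.1f}, {y:.1f})'
        for kp_id, ((x, y), name) in enumerate(zip(coords, labels))
    ]

# Hypothetical 3-keypoint skeleton for one object
coords = [(120.0, 80.5), (130.2, 95.0), (118.7, 110.3)]
labels = ['top', 'middle', 'bottom']
print('\n'.join(format_keypoints(coords, labels)))
```

With real results, `coords` would be one row of `result.keypoints.xy.cpu().numpy()` (one object's keypoints).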
-
modes/predict/
Discover how to use YOLOv8 predict mode for various tasks. Learn about different inference sources like images, videos, and data formats.
https://docs.ultralytics.com/modes/predict/