help/FAQ/ #7928
Replies: 29 comments 84 replies
-
Suppose I am fine-tuning a YOLOv8 model with a custom dataset. Can the fine-tuned model detect the newly trained classes as well as the pretrained classes? If yes, how?
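For what it's worth, a model fine-tuned only on new classes tends to forget the pretrained (COCO) classes, so the training data must still contain examples of the old classes. A minimal sketch of that approach; combined_dataset.yaml is a hypothetical dataset file listing both the COCO classes and the new ones:

```python
def finetune_keep_old_classes(data_yaml="combined_dataset.yaml"):
    """Fine-tune COCO-pretrained YOLOv8 weights without losing the original
    classes. combined_dataset.yaml is a hypothetical file: it must list BOTH
    the 80 COCO classes and the new custom classes, with labelled images for
    each; classes the model never sees during fine-tuning are forgotten."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    model = YOLO("yolov8n.pt")  # start from pretrained weights
    model.train(data=data_yaml, epochs=100, imgsz=640)
    return model
```

An alternative is to run two models side by side: the stock pretrained model for the COCO classes and a fine-tuned one for the new classes.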
-
How can I convert my trained YOLOv8 model to .tflite format while keeping the output tensors intact ('locations', 'classes', 'scores', 'number of detections') in the converted .tflite file?
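The export call itself is simple, but note (as far as I can tell) that the Ultralytics TFLite export emits a single combined output tensor (boxes plus class scores) rather than the separate 'locations'/'classes'/'scores'/'number of detections' tensors of the TensorFlow Object Detection API, so those have to be reconstructed in your own post-processing. A sketch, with best.pt standing in for your trained weights:

```python
def export_to_tflite(weights="best.pt", imgsz=640):
    """Export trained YOLOv8 weights to TFLite.
    The exported .tflite file has ONE combined output tensor that you decode
    yourself; named 'locations'/'classes'/'scores' tensors are not produced."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    model = YOLO(weights)
    model.export(format="tflite", imgsz=imgsz)
```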
-
In YOLOv8, when training with the option pretrained=True, is the pretraining dataset the COCO dataset, or was it pretrained on a different dataset?
-
Hello, I want to train the pretrained YOLOv8 detection model to detect only one class, then use the base YOLO segmentation model to segment the detected object. This class is included in the segmentation model's classes, so I am hoping I won't have to train the segmentation model. Is this possible? Basically, my problem is that the YOLOv8 detection model doesn't detect my object accurately enough, but segmentation works very well. Is there another solution that I'm not seeing?
-
I wonder about the documentation for the parameter. I have tested this and it seems untrue: when I changed it from the default of 0.7 to 0.9 I got more overlapping boxes, and when I decreased it to 0.5 I got fewer overlapping boxes. I am wondering if there is an issue with the documentation or the backend.
-
Very simple question, but I can't find the answer anywhere :-) In the YOLOv8 small model, the output tensor "output0" has a last layer name of "/model.22/concat_5". The tensor output is: "output0 (data type: Float_32; tensor dimension: [1, 84, 8400]; tensor type: APP_READ)". Can you explain the data structure of [1, 84, 8400] so I can decode the inference results? I'm working in C++.
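For the standard 80-class detection models, [1, 84, 8400] means: batch of 1; 84 values per prediction (4 box coordinates cx, cy, w, h in input-image pixels followed by 80 class scores; YOLOv8 has no separate objectness score); and 8400 candidate predictions. You transpose, take the best class per column, threshold, convert to corner coordinates, then run NMS. A NumPy sketch of the decoding (the same layout applies in C++; the 0.25 threshold is an arbitrary example value):

```python
import numpy as np


def decode_output0(output0, conf_thres=0.25):
    """Decode a [1, 84, 8400] YOLOv8 detection output.
    Per column: rows 0-3 are (cx, cy, w, h) in input-image pixels,
    rows 4-83 are the 80 class scores (no separate objectness score).
    Returns (boxes_xyxy, scores, class_ids) for candidates above the
    threshold; NMS still has to be applied afterwards."""
    preds = output0[0].T                 # -> [8400, 84], one row per candidate
    class_scores = preds[:, 4:]          # [8400, 80]
    class_ids = class_scores.argmax(axis=1)
    scores = class_scores.max(axis=1)
    keep = scores > conf_thres
    cx, cy, w, h = preds[keep, :4].T     # unpack kept (cx, cy, w, h) columns
    boxes_xyxy = np.stack([cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2], axis=1)
    return boxes_xyxy, scores[keep], class_ids[keep]
```

The surviving boxes still need non-maximum suppression before they are usable detections.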
-
I am going through your training scripts but I am a little confused by the Python script. I see you create a new model with yolov8n.yaml and then train it using coco128.yaml. Is yolov8n.yaml just used to initialize the model for the script to work, and is coco128.yaml your "custom" dataset?
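That reading is essentially right. A sketch of the two roles (coco128.yaml is just the example dataset; the epoch count is arbitrary):

```python
def build_and_train():
    """Illustrate the two yaml roles in the training script."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    # yolov8n.yaml = ARCHITECTURE definition -> a new, untrained model
    model_from_scratch = YOLO("yolov8n.yaml")

    # yolov8n.pt = pretrained WEIGHTS -> the more common starting point
    model = YOLO("yolov8n.pt")

    # coco128.yaml = DATASET definition (image paths + class names);
    # substitute your own dataset yaml here to train on custom data
    model.train(data="coco128.yaml", epochs=3)
    return model_from_scratch, model
```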
-
I have trained a YOLOv8 model on images of road surface cracks, annotated with masks. I run prediction and save the results to a JSON file using prediction[0].tojson(). When I compare the image produced by the plot() method with an image I plot from the JSON results, the predictions differ whenever the image contains several cracks, although the bounding rectangles are correct. I plotted points on the image using the xy coordinates from the Segments object in the JSON file. Why do I get different prediction results when using the predictions from the JSON file?
-
I want to perform classification using YOLOv8 based on the number of boxes predicted in an image: if the model detects a single bounding box it is class A, and if it detects two bounding boxes it belongs to class B. Is this possible, and if yes, how?
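This is doable with ordinary detection plus a counting rule on the result. The rule itself is trivial; the question does not say what zero or three-plus boxes should mean, so the 'unknown' fallback below is an assumption:

```python
def classify_by_box_count(num_boxes):
    """Map a detection count to a class label: 1 box -> 'A', 2 boxes -> 'B'.
    The question does not specify 0 or 3+ boxes, so 'unknown' is an assumed
    fallback for those cases."""
    if num_boxes == 1:
        return "A"
    if num_boxes == 2:
        return "B"
    return "unknown"


# With Ultralytics this would be driven by a normal prediction, e.g.:
#   from ultralytics import YOLO
#   results = YOLO("yolov8n.pt")("image.jpg")
#   label = classify_by_box_count(len(results[0].boxes))
```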
-
I have used the same image size, 640, for both training and prediction. Have you downloaded the weight file and run the test yourself?
Best, Aslak
…On Friday, March 15, 2024, 12:44 PM, Glenn Jocher wrote:
@aslakm<https://github.com/aslakm> hey there! 👋 Thanks for sharing your model and the details of your setup. If the issue persists even with imgsz=640 during prediction on an image of size 2448x2048, it's possible that there's a discrepancy in how the model is handling different image sizes.
Given the size difference, here's something you might try to ensure consistency: use the same imgsz for both training and prediction. Since you trained with an imgsz of 640, predictions should theoretically align better when the input images are resized to the same dimensions. However, given the nature of your issue, there seems to be something else at play.
# Example prediction command
model.predict('path/to/image.jpg', imgsz=640)
Could you also check if your annotations/masks are correctly scaled or if there's any preprocessing step that might be affecting the results unexpectedly?
In the meantime, I'll take a look at your weight file and investigate further. Thanks for flagging this, and let's aim to resolve it soon! 🚀
-
I'm attempting to generate a heatmap across multiple streams in my instance. To achieve this, I've created a separate heatmap instance for each stream. However, I'm encountering a floating-point issue when fitting the heatmap. The heatmap generates for a single stream at a time, but I'm unsure how to address this problem across multiple streams. Any assistance in resolving this would be greatly appreciated.
-
Hello, I am working with a custom-trained YOLOv8 model in Colab, using an image as the source. How can I write text on the result image? I want to write a summary of the prediction at the top of the result image. Please help.
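Results.plot() returns the rendered prediction as a BGR NumPy array, which you can annotate with cv2.putText before saving or displaying. A sketch; the "2 person, 1 car" summary format and the text position/colour are arbitrary choices:

```python
from collections import Counter


def prediction_summary(class_names, class_ids):
    """Build a short summary such as '2 person, 1 car' from predicted class
    ids (class_names maps id -> name, as in result.names)."""
    counts = Counter(class_ids)
    return ", ".join(f"{n} {class_names[c]}" for c, n in counts.items())


def annotate_result(result, text):
    """Draw `text` in the top-left corner of the rendered prediction image.
    result.plot() returns the annotated image as a BGR NumPy array."""
    import cv2  # imported here; only needed for the drawing step

    img = result.plot()
    cv2.putText(img, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)  # green text, thickness 2
    return img
```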
-
How can I impose my own optimizer and learning rate in my YOLOv8-OBB script? The optimizer is always set to 'auto' and the LR is chosen automatically.
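train() accepts optimizer and learning-rate arguments that override the 'auto' behaviour. A sketch for an OBB model; the dataset yaml name and the hyperparameter values are example assumptions, not recommendations:

```python
def train_with_fixed_optimizer(data_yaml="my_obb_data.yaml"):
    """Train a YOLOv8-OBB model with an explicit optimizer and learning rate.
    Passing optimizer= overrides the default 'auto' selection, which otherwise
    also picks lr0 and momentum for you. my_obb_data.yaml is a hypothetical
    dataset file."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    model = YOLO("yolov8n-obb.pt")
    model.train(
        data=data_yaml,
        optimizer="SGD",  # e.g. 'SGD', 'Adam', 'AdamW' instead of 'auto'
        lr0=0.01,         # initial learning rate
        lrf=0.01,         # final LR as a fraction of lr0
        momentum=0.937,
        epochs=100,
    )
    return model
```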
-
Anyone seeing any difference when using the Apple GPU with the following? With it I get no validation predictions; without it I do, although it's much slower. With the same test data, sample output of an epoch with MPS (notice some values are not populated, such as cls_loss and dfl_loss). Sample output without it, just using the CPU. I have verified that MPS is available with the following script: python metalTest.py; cat metalTest.py. Many thanks.
-
Hello, I have a PCB that includes 1 LED, 1 red wire, and 1 black wire. I want to detect good/reject LEDs and do color recognition on the wires using OpenCV. Is it possible to do both at the same time and show both bounding boxes (the object-detection box and the color-recognition box) in the same frame? Please help; maybe I need example code. Thank you.
-
Hello! I want to get metric values such as the mAP score for object detection for hyperparameter tuning using a YOLOv8 model. I used the same code as above for fine-tuning with the ASHA and HyperOpt search algorithms. In the training function I returned the evaluation score, but in the results I got the same metric values in every trial. Can you please help me with this?
-
Hello, if I set resume=False the training starts with my current settings. So is this error occurring because I previously trained with two devices? If yes, can I somehow change this?
-
Hi all, I want to ask regarding supervision.
-
Thank you for your response. I have one doubt: I can detect and count objects vertically, but I am not able to count them horizontally; I only get a single object for both vertical and horizontal. In this code, the in/out counts work vertically but not horizontally. Can you please share code for real-time object detection, tracking, and counting that also finds duplicate objects? I used a video file (the video moves up and down); tracking and counting work, but I am stuck on duplication and horizontal counting. Please share a solution.
Regards,
Amruta Patil
…On Sat, 4 May 2024, 11:15 pm, Glenn Jocher wrote:
@Amu1620 <https://github.com/Amu1620> you're welcome! If you need further
assistance or have more questions, feel free to reach out. Happy coding! 😊
If you need more detailed guidance, don't forget to check out our FAQ
section <https://docs.ultralytics.com/help/FAQ/> for common issues and
solutions.
-
Thank you so much, Glenn, for the logic you provided; I implemented it. I need one more piece of logic: can I apply stitching or any other method to calculate the count of objects per wall? I am working with room data, and I want to find the different objects on each wall and the count of those objects. Can you give me any logic or method?
…On Mon, May 6, 2024 at 12:19 AM, Glenn Jocher wrote:
Hello Amruta,
For counting objects horizontally in addition to vertically, you'll need
to modify your approach based on the direction of object movement and area
counting. You may use different tracking IDs assigned to each unique object
to handle duplication concerns effectively.
Here’s a basic example to adjust tracking for both horizontal and vertical movements:

from ultralytics import YOLO

# Load a pretrained model
model = YOLO('yolov8n.pt')

# Track objects in a video
results = model.track('path/to/your/video.mp4', stream=True)

horizontal_count = 0
vertical_count = 0
track_ids = set()

for result in results:
    if result.boxes.id is None:  # no tracks in this frame
        continue
    for box, track_id in zip(result.boxes.xywh, result.boxes.id.int().tolist()):
        if track_id not in track_ids:
            if box[2] > box[3]:  # box wider than tall: count as horizontal
                horizontal_count += 1
            else:  # box taller than wide: count as vertical
                vertical_count += 1
            track_ids.add(track_id)

print(f'Horizontal count: {horizontal_count}, Vertical count: {vertical_count}')
This code checks the orientation of detected bounding boxes to determine
the direction of object movement and increments the respective counters.
The track_id helps in recognizing new and existing objects, avoiding
duplicates in your counts.
If you need further customization, you might need to adjust detection
thresholds, area of interest (AOI) settings, or use more advanced tracking
algorithms like Deep SORT for better handling of occlusions and object
interactions.
Don't hesitate to revisit the FAQ <https://docs.ultralytics.com/help/FAQ/>
or check out our more detailed guides on object tracking.
-
How can I handle the problem where I have a press machine and I want to see whether it is running and how many presses it made within a particular time in a video feed?
-
I'm trying to build a model to detect knives and handguns (pistols and revolvers). The training set is about 1000 images (I know it's small). My question: one picture shows a handgun on a desk, entirely visible, and another shows a man holding a handgun, with only the upper part of the pistol visible (and sometimes the trigger). Should I make another class for this situation (only the upper part), or leave it as one class?
-
Hello, could you give me some help and advice? I hope I'm not bothering you too much.
-
from ultralytics import YOLO
model = YOLO("best.pt")

Hi, I have to export the model to .tflite format, but the error I'm getting is given below. Please give suggestions on this.

TensorFlow SavedModel: export failure ❌ 19.1s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
ImportError Traceback (most recent call last)
11 frames
ImportError: generic_type: cannot initialize type "StatusCode": an object with that name is already defined.
-
Hello, I am using the YOLOv5 model to perform object detection on a pile of stacked parts. However, there have been instances where the recognition of individual parts is not good, as well as instances where the val/obj loss increases and the val/cls loss first decreases and then increases. I don't know how to solve this; may I ask how? I really appreciate it.
-
How can I avoid catastrophic forgetting when fine-tuning a YOLO model?
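Common mitigations, sketched below: keep images of the old classes in the fine-tuning set, freeze the early backbone layers, and use a small learning rate so the pretrained features drift slowly. The dataset yaml name and the particular values are example assumptions:

```python
def gentle_finetune(data_yaml="mixed_old_and_new.yaml"):
    """Reduce catastrophic forgetting when fine-tuning:
    - train on a dataset that still contains the old classes
      (mixed_old_and_new.yaml is a hypothetical file name),
    - freeze the first 10 layers so backbone features stay put,
    - use a small initial learning rate so the rest changes slowly."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    model = YOLO("yolov8n.pt")
    model.train(data=data_yaml, freeze=10, lr0=0.001, epochs=50)
    return model
```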
-
Hello! I'm using YOLOv8 and I just can't use it from the CLI. I have installed ultralytics as below, but every time I run the yolo command I get: 'yolo' is not recognized as an internal or external command, operable program or batch file.
-
Hello, I am trying to use the YOLOv8 model to track bikes and pedestrians passing a defined line, using the code available here on Ultralytics (pasted below). It seems to be working, however it takes a very long time: the video I am analyzing is one hour long, and so far the script has been running for over 3 hours and it's still not finished. Is there anything I can do to speed up the process?

import cv2
from ultralytics import YOLO, solutions

model = YOLO("yolov8n.pt")
line_points = [(1127, 256), (1131, 306)]  # line or region points

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Init Object Counter
counter = solutions.ObjectCounter(

while cap.isOpened():

cap.release()
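A few levers that usually help with long videos, sketched as track() arguments (the particular values are examples, and device=0 assumes a CUDA GPU is available):

```python
def faster_tracking(video_path="path/to/video.mp4"):
    """Common ways to speed up video tracking/counting:
    - device=0      run on a CUDA GPU instead of the CPU
    - half=True     FP16 inference (GPU only)
    - vid_stride=3  process only every 3rd frame
    - imgsz=480     smaller inference size than the default 640
    Skipping the annotated output video also saves significant time."""
    from ultralytics import YOLO  # imported locally so the sketch stays self-contained

    model = YOLO("yolov8n.pt")
    return model.track(video_path, stream=True, device=0, half=True,
                       vid_stride=3, imgsz=480)
```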
-
Hello, my YOLOv8 application scenario is a farmland scene. Training with only the RGB three-channel data did not meet my expectations, so I want to add some vegetation-index information as a fourth channel, on top of the original RGB channels, to assist training. Does YOLOv8 currently support four-channel data for training? If I want to train YOLOv8 on four-channel image data, what should I do?
-
help/FAQ/
Find solutions to your common Ultralytics YOLO related queries. Learn about hardware requirements, fine-tuning YOLO models, conversion to ONNX/TensorFlow, and more.
https://docs.ultralytics.com/help/FAQ/