models/yolov8/ #10285
Replies: 28 comments 56 replies
-
I can't find the score of yolov8x6.pt...
-
Hi!
-
I used YOLOv8 to successfully detect objects in the nuScenes camera image dataset for autonomous driving. However, I am finding it difficult to extract or retrieve the bounding boxes, classes/labels, and confidence scores from the processed images. I will need to use this information (bounding-box coordinates, confidence scores, labels). I tried using the [xmin, ymin, xmax, ymax] format and logic that relies on the 'xyxy' attribute, but to no avail. @pderrenger I really need your help, as I need to move on to the next task. Thanks
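The attributes being asked about can be read straight off each result: in the Ultralytics API, result.boxes exposes xyxy, conf, and cls. The unpacking itself is plain array handling; a minimal sketch, where the helper name and dummy arrays are illustrative rather than part of the API:

```python
import numpy as np

def unpack_detections(xyxy, conf, cls, names):
    """Turn parallel detection arrays into a list of dicts.

    With Ultralytics results, the inputs would come from, e.g.:
        r = model("image.jpg")[0]
        xyxy  = r.boxes.xyxy.cpu().numpy()   # [xmin, ymin, xmax, ymax] per box
        conf  = r.boxes.conf.cpu().numpy()   # confidence per box
        cls   = r.boxes.cls.cpu().numpy()    # class index per box
        names = r.names                      # index -> label mapping
    """
    detections = []
    for (x1, y1, x2, y2), c, k in zip(xyxy, conf, cls):
        detections.append({
            "bbox": [float(x1), float(y1), float(x2), float(y2)],
            "confidence": float(c),
            "label": names[int(k)],
        })
    return detections

# Dummy arrays standing in for real model output
print(unpack_detections(np.array([[10.0, 20.0, 110.0, 220.0]]),
                        np.array([0.91]), np.array([2.0]), {2: "car"}))
```

Each dict then carries exactly the trio needed for downstream tasks: coordinates, confidence, and label.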
-
Hi!
-
Does YOLOv8 have any inherent object tracking across images?
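For context, tracking is invoked separately from plain detection: model.track() with trackers such as BoT-SORT or ByteTrack (both come up later in this thread) assigns persistent IDs across frames. The per-ID bookkeeping around those IDs is ordinary Python; a minimal sketch, where the helper name and history format are illustrative:

```python
from collections import defaultdict

# With Ultralytics, per-frame IDs and centers would come from, e.g.:
#   results = model.track(frame, persist=True)
#   ids = results[0].boxes.id.int().tolist()
#   centers = [(float(x), float(y)) for x, y, w, h in results[0].boxes.xywh]
# The bookkeeping below is independent of the library.

def update_histories(histories, ids, centers, maxlen=30):
    """Append each track's current center, keeping only the last maxlen points."""
    for track_id, center in zip(ids, centers):
        histories[track_id].append(center)
        if len(histories[track_id]) > maxlen:
            histories[track_id] = histories[track_id][-maxlen:]
    return histories

histories = defaultdict(list)
update_histories(histories, [1, 2], [(10, 10), (50, 60)])
update_histories(histories, [1], [(12, 11)])
print(dict(histories))
```

Per-track histories like these are the usual input for speed estimation and line-crossing logic discussed elsewhere in this thread.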
-
I have an issue with class loading and labels.
-
Hi, I have to export my model to TFLite format, but I'm getting the error below. Please give suggestions on this.

from ultralytics import YOLO
model = YOLO("best.pt")

TensorFlow SavedModel: export failure ❌ 19.1s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
ImportError                               Traceback (most recent call last)
(11 frames)
ImportError: generic_type: cannot initialize type "StatusCode": an object with that name is already defined.
-
Hi!
-
Hi! I am currently developing a vehicle detection system, with a particular focus on determining whether vehicles are parked or in motion through pixel speed estimation. I have been experimenting with the speed_estimator function, in terms of both kilometers per hour and pixels per frame, but so far I have not achieved satisfactory results. Could any of you suggest advanced methodologies or configuration adjustments that could improve the accuracy of the detection? Any recommendations on libraries, algorithms, or alternative approaches would also be greatly appreciated. I thank you in advance for any guidance or advice you can provide. Best regards.
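For reference, the two units mentioned above are related through the frame rate and the ground sampling distance (meters per pixel), which must be calibrated for the specific camera. A minimal sketch with illustrative numbers:

```python
def px_per_frame_to_kmh(speed_px_per_frame, fps, meters_per_pixel):
    """Convert a pixel-per-frame speed to km/h.

    px/frame * frames/s -> px/s; * m/px -> m/s; * 3.6 -> km/h.
    meters_per_pixel must be calibrated for your camera geometry.
    """
    meters_per_second = speed_px_per_frame * fps * meters_per_pixel
    return meters_per_second * 3.6

# e.g. 5 px/frame at 30 fps with 0.05 m per pixel
print(px_per_frame_to_kmh(5, 30, 0.05))  # -> 27.0
```

Note that with a fixed camera, meters_per_pixel varies with distance from the camera, so a single constant is only a rough approximation.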
-
Thank you very much!

def plot_box_and_track(self, track_id, box, cls, track):
    """Plots track and bounding box."""
    speed = self.dist_data.get(track_id, 0)
    status_label = "Stopped" if speed < 1 else f"Moving at {speed:.2f} px/frame"  # confidence range
    bbox_color = (0, 255, 0) if speed < 1 else (0, 0, 255)
    # Draw bounding box
    cv2.rectangle(self.im0, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), bbox_color, 2)
    cv2.putText(self.im0, status_label, (int(box[0]), int(box[1]) - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, bbox_color, 2)
    cv2.polylines(self.im0, [self.trk_pts], isClosed=False, color=bbox_color, thickness=1)
    cv2.circle(self.im0, (int(track[-1][0]), int(track[-1][1])), 5, bbox_color, -1)
    print(f"Plotted bounding box at ({box[0]}, {box[1]}, {box[2]}, {box[3]}) with label '{status_label}'")

def calculate_speed(self, trk_id, track):
    """Calculates the speed of an object in pixels per frame."""
    if len(track) < 2:
        self.dist_data[trk_id] = 0
        return
    previous_point = track[-2]
    current_point = track[-1]
    distance = np.sqrt((current_point[0] - previous_point[0]) ** 2 +
                       (current_point[1] - previous_point[1]) ** 2)
    self.dist_data[trk_id] = distance

This was the modification I made, and it works quite well. Now I'm having problems when vehicles start moving away from the camera: the bounding boxes get smaller and the code starts determining that the vehicle is not moving. I'm working on it.
Thank you very much for taking the time to respond. I am really fascinated with everything you are doing, incredible. I like it a lot, thank you.
On Mon, Jun 3, 2024 at 4:22 PM, Glenn Jocher wrote:
Hello,
Thank you for reaching out with your query on vehicle detection and speed
estimation using the speed_estimator function. To enhance the accuracy of
your system, you might consider a few advanced methodologies and
adjustments:
1. *Model Fine-tuning:* If you haven't already, fine-tuning your YOLOv8 model on a dataset specifically annotated with vehicle speeds and states (parked or in motion) could significantly improve detection accuracy.
2. *Optical Flow Techniques:* For estimating pixel speed, optical flow methods can be very effective. Libraries like OpenCV offer functions like calcOpticalFlowFarneback, which might provide more precise speed estimations.
3. *Data Augmentation:* Incorporating variations in vehicle speeds and lighting conditions during training can help the model generalize better over different real-world scenarios.
4. *Temporal Models:* Consider using LSTM networks or 3D ConvNets that can leverage temporal information across frames to better estimate speeds and detect motion.
5. *Ensemble Methods:* Combining predictions from multiple models or different configurations of the same model can sometimes yield better results.
For libraries, aside from OpenCV, you might look into PyTorch and
TensorFlow for implementing and training any deep learning models. Both
frameworks support the advanced techniques mentioned above and are
compatible with YOLOv8.
I hope these suggestions help you enhance your vehicle detection system.
If you have further questions or need more detailed assistance, feel free
to ask.
-
Thank you so much! The truth is I have a million questions; if you tell me I can ask, I could give you an enormous list of questions. Just kidding!
I am truly pleasantly surprised by the community, the support, and the documentation. It's all very interesting, and I am very grateful; you have a budding enthusiast for the whole world of computer vision, thanks to you!
Right now, as I mentioned, I am mainly working on collision detection (it still needs improvement, but it's going quite well), speeding vehicles, and crosswalk detection, which will help me detect infractions. I would also like to start working on vehicles running red lights.
Well, as I said, many questions, but those are the main things I am working on and want to start working on! Any help, tips, functions that might be useful, or anything I can use as a guide, I will be very attentive and more than grateful.
I am also open to any collaboration, or anything I can contribute; given my experience it won't be much, but the enthusiasm is enormous!
Regards!
Roberto Schaefer
On Tue, Jun 4, 2024 at 9:58 AM, Glenn Jocher wrote:
Hi! Thanks for sharing your modifications and for your kind words. 😊
Regarding the problem you mention with vehicles moving away from the camera, one possible solution could be to adjust the scale of the bounding boxes based on estimated depth or perspective. This could help keep the bounding-box size consistent as the vehicles move.
Another option would be to implement a more robust tracking filter that can adapt to rapid changes in object size and position, such as a Kalman filter, which is common in tracking applications.
I hope these suggestions are useful. Keep experimenting, and don't hesitate to ask if you need more help! 🚀
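The scale adjustment suggested here can be sketched by normalizing each track's per-frame displacement by its bounding-box height, so a vehicle whose box shrinks as it drives away still registers motion. The helper below is illustrative, not part of any library:

```python
import math

def relative_speed(prev_center, center, box_height):
    """Per-frame displacement expressed in box-heights, roughly scale-invariant."""
    dx = center[0] - prev_center[0]
    dy = center[1] - prev_center[1]
    displacement = math.hypot(dx, dy)
    return displacement / max(box_height, 1e-6)

# A distant car: small pixel displacement, but also a small box
print(relative_speed((100, 50), (102, 50), box_height=20))    # -> 0.1
# The same relative motion up close: larger displacement, larger box
print(relative_speed((100, 200), (120, 200), box_height=200))  # -> 0.1
```

Thresholding this relative value (instead of raw pixels per frame) should keep the "Stopped"/"Moving" decision consistent across the depth of the scene.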
Glenn Jocher
-
Very interesting, what you say!
If you have anything you can share regarding red-light detection, that would be great, and I have been studying Kalman filters to improve detection accuracy! Any specific documentation would be very useful for me; I am looking to keep learning.
On Wed, Jun 5, 2024 at 12:49 AM, Glenn Jocher wrote:
@roscha10 <https://github.com/roscha10> Hi Roberto!
Thank you very much for your kind words and for sharing your enthusiasm for computer vision. It's great to hear about your projects in collision detection and other traffic applications. 🚗💡
For your current and future projects, I would recommend exploring YOLOv8's tracking and detection capabilities, which can be very useful for detecting infractions such as running red lights. In addition, using filters such as the Kalman filter mentioned earlier can improve accuracy when detecting moving objects.
If you have specific questions or need advice about particular functions, don't hesitate to ask. The community is here to help you. Also, any contribution or idea you want to share will be welcome; enthusiasm matters as much as experience.
Best wishes and success with your projects!
-
Hello, I use the yolov8n, yolov8s, and yolov8m models to detect people in thermal images, but when I train all three models and the results come out, the yolov8n model has Precision=0.8645568 while yolov8s has only 0.8404626 and yolov8m only 0.7639084.
-
Hello! I'm using YOLOv8 and I just can't use it from the CLI. I have installed ultralytics as below, but every time I try to use the yolo command, I get: 'yolo' is not recognized as an internal or external command, operable program or batch file
-
With YOLOv8 (or the other versions), can I access the .pt model weights after training on a custom dataset? Thank you :)
-
Hi, I'm having a problem when training a detection network. I have built a classification DNN and it works very well; now I want to build a detection network, but I get the following error: RuntimeError: Dataset '/home/perez/Desktop/Master_particulas/Segundo_cuatrimestre/TFM/Neural_Network/dataset' error ❌ [Errno 21] Is a directory: '/home/perez/Desktop/Master_particulas/Segundo_cuatrimestre/TFM/Neural_Network/dataset/' The code I have is the following: DETECTION
Do I have to do something to the dataset? Thanks
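For context, Ultralytics detection training expects the data argument to be a dataset YAML file rather than the dataset directory itself, which matches the "[Errno 21] Is a directory" error above. A minimal sketch of such a file, where all paths and class names are placeholders:

```yaml
# dataset.yaml -- pass this file to training, e.g. model.train(data="dataset.yaml")
path: /path/to/dataset    # dataset root directory
train: images/train       # training images, relative to 'path'
val: images/val           # validation images, relative to 'path'
names:
  0: particle             # class index -> class name (placeholder)
```

Label .txt files then live alongside the images (images/ replaced by labels/), one row per object.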
-
I encountered the warning "Corrupt JPEG data: 608 extraneous bytes before marker 0xfe" for several images during training on a custom dataset. The images are all in .jpg format; I can open all of them and they do not seem to be corrupted. Is there any fix for this?
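That "extraneous bytes before marker" warning comes from the JPEG decoder: the files display fine, but they carry stray bytes that some readers tolerate and others flag. A common workaround, assuming Pillow is installed, is to re-encode the affected images, which rewrites them without the extraneous bytes (the folder path is a placeholder):

```python
from pathlib import Path
from PIL import Image

def reencode_jpegs(folder):
    """Re-save every .jpg under folder, stripping extraneous bytes."""
    for jpg in Path(folder).rglob("*.jpg"):
        img = Image.open(jpg)
        # convert("RGB") forces a full decode; saving re-encodes cleanly
        img.convert("RGB").save(jpg, "JPEG", quality=95)

# reencode_jpegs("datasets/custom/images/train")
```

Note that re-encoding at quality=95 is slightly lossy, so keep backups if the originals matter.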
-
I would appreciate a little guidance regarding counting people entering a supermarket. I would like to know how to define the line, or how to create a band, that determines whether they are entering or leaving the supermarket, in order to count the people who are inside.
On Thu, Jun 20, 2024 at 9:54 AM, GaviraghiElia wrote:
Hello, can you tell me the difference when training with 30 images, 300 images, and 3000 images? If there is a difference, what is it? Thanks
It depends on the task and the dataset, but in general a larger dataset can help you generalize better: the more (quality) images you have relative to the number of classes, the better the performance you get from your model.
|
-
It works very well; I already had it. Thank you very much, it is super useful for me.
But what I really want to do is count the people entering and leaving an establishment, to know how many people are inside at a given time.
On Thu, Jun 20, 2024 at 10:58 PM, Glenn Jocher wrote:
Hello @roscha10 <https://github.com/roscha10>,
Thank you for your question! To count people entering and exiting a
supermarket, you can use a combination of object detection and tracking.
Here's a step-by-step guide to help you set this up:
1. *Define the Entry/Exit Line*: You can define a virtual line or region in your video frame that people must cross to be counted as entering or exiting. This can be done using simple coordinates.
2. *Object Detection*: Use YOLOv8 to detect people in each frame of your video. YOLOv8 is highly efficient and accurate for real-time object detection tasks.
3. *Object Tracking*: Track the detected people across frames to determine their movement direction. You can use trackers like BoT-SORT or ByteTrack, which are supported by YOLOv8.
4. *Count Logic*: Implement logic to count people based on their movement across the defined line. If a person crosses the line from one side to the other, increment the count for entering or exiting accordingly.
Here’s a basic example in Python to get you started:
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Define the entry/exit line coordinates (a horizontal line here)
line_coords = [(100, 200), (400, 200)]  # Example coordinates

# Initialize video capture
cap = cv2.VideoCapture("path/to/your/video.mp4")

# Initialize counters
enter_count = 0
exit_count = 0

# Last known center point per track ID
prev_centers = {}

# Function to check if a movement from prev_point to point crosses the line
def crosses_line(prev_point, point, line_coords):
    (x1, y1), (x2, y2) = line_coords
    x, y = point
    _, prev_y = prev_point
    within_x = min(x1, x2) <= x <= max(x1, x2)
    return within_x and (prev_y - y1) * (y - y1) < 0  # sign change = crossed

# Loop through video frames
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Run YOLOv8 tracking (plain inference carries no IDs; track() assigns them)
    results = model.track(frame, persist=True)

    # Extract bounding boxes and track IDs
    boxes = results[0].boxes.xywh.cpu()
    if results[0].boxes.id is None:
        continue
    track_ids = results[0].boxes.id.int().cpu().tolist()

    for box, track_id in zip(boxes, track_ids):
        x, y, w, h = box
        center_point = (int(x), int(y))

        # Check if the person crossed the line, and in which direction
        if track_id in prev_centers and crosses_line(prev_centers[track_id], center_point, line_coords):
            if center_point[1] > prev_centers[track_id][1]:
                enter_count += 1
            else:
                exit_count += 1
        prev_centers[track_id] = center_point

    # Visualize the line
    cv2.line(frame, line_coords[0], line_coords[1], (0, 255, 0), 2)

    # Display the frame
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
cap.release()
cv2.destroyAllWindows()
This script sets up a basic framework for counting people entering and
exiting a supermarket. You'll need to refine the counting logic based on
the direction of movement and possibly improve the tracking to handle
occlusions and re-identifications.
For more detailed information and advanced configurations, you can refer
to the YOLOv8 documentation <https://docs.ultralytics.com/models/yolov8/>.
If you encounter any issues or have further questions, feel free to ask!
-
from ultralytics import YOLO URL stream video dari ESP32-CAMesp32_cam_url = 'http://192.168.100.12/stream' model = YOLO('runs/detect/train2/weights/best.pt') Hello Everyone, I want to integrate my model with esp32-cam. i've tried with this code and nothing happen. please help me. |
-
Hello, I have some negative data that I want to train on as background. How do I do that in the YOLO data format? Do I just make the txt file empty?
-
Hi,
-
Hello! I wonder how I can adjust the default anchor sizes of YOLO. Can you help with that? Thanks a lot.
-
Hello to the Ultralytics group.
-
Hey, I have trained a YOLOv8 image segmentation model, but somehow it is not able to print the labels and class names on the resulting image. I also want the output as follows: first classify the different items in the image, and then give the segmentation result of the image.
-
I don't have separate models for classification and segmentation; I just trained a segmentation model:
This one: model = YOLO(r"F:\Anuj\YoloV8\YoloV8m-segmentCustom.pt")
From this model I want it to give me the classification result as well as the segmentation result. What I am expecting as output:
1. Classified items in the image
2. Segmentation masks of those classified items
using one model. Please help me with that through code.
On Sat, 6 Jul 2024 at 16:33, Paula Derrenger wrote:
Hello again,
Thank you for updating the package and applying the new code. I appreciate
your patience as we work through this issue.
To further diagnose the problem, could you please provide a minimum
reproducible example? This will help us better understand the context and
pinpoint the issue. You can find guidelines on creating a reproducible
example here: Minimum Reproducible Example
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Additionally, please ensure that the issue persists with the latest
versions of all relevant packages. Sometimes, dependencies or updates can
resolve unexpected behavior.
Here's a refined approach to ensure we correctly access the class
probabilities:
from ultralytics import YOLO

# Load a pretrained YOLOv8 model
model = YOLO("yolov8n.pt")

# Run inference on an image
results = model("path/to/image.jpg")

# Access the results
for result in results:
    for box in result.boxes:
        top_class = box.cls  # Most likely class
        top_conf = box.conf  # Confidence of the most likely class
        print(f"Top class: {top_class}, Confidence: {top_conf}")

    # Access class probabilities
    class_probs = result.probs
    if class_probs is not None:
        print(f"Class probabilities: {class_probs}")
    else:
        print("Class probabilities: None")
If the class probabilities are still None, it might indicate that the
model or the specific configuration does not support this feature. In such
cases, providing the reproducible example will be crucial for us to assist
you further.
Feel free to reach out if you have any more questions or need additional
assistance. We're here to help! 😊
-
I am trying to determine whether a person is walking or running inside a
shopping center, but the options I see are to draw a line and determine
their speed. I would like to know if it is possible to identify the person
and, through their tracker_id, be able to determine if they are running or
walking anywhere in the video. Is this possible?
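For context, once each person carries a tracker_id, motion can be classified anywhere in the frame by thresholding the track's average per-frame displacement, with no counting line involved. A minimal sketch; the thresholds are illustrative and would need calibration (ideally on displacement normalized by box height, to compensate for perspective):

```python
import math

def motion_label(track, walk_thresh=2.0, run_thresh=8.0):
    """Classify a track (list of (x, y) centers) by mean per-frame displacement in pixels."""
    if len(track) < 2:
        return "unknown"
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    mean_step = sum(steps) / len(steps)
    if mean_step < walk_thresh:
        return "standing"
    return "walking" if mean_step < run_thresh else "running"

print(motion_label([(0, 0), (1, 0), (2, 0)]))    # 1 px/frame -> standing
print(motion_label([(0, 0), (5, 0), (10, 0)]))   # 5 px/frame -> walking
print(motion_label([(0, 0), (10, 0), (20, 0)]))  # 10 px/frame -> running
```

The track lists would be accumulated per tracker_id from model.track() output, as in the counting example earlier in this thread.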
On Mon, Jul 8, 2024 at 7:34 AM, Paula Derrenger wrote:
@p1anuj2 <https://github.com/p1anuj2> hi Anuj,
Thank you for reaching out! To achieve both classification and
segmentation results using your trained YOLOv8 segmentation model, you can
follow these steps. YOLOv8 models are versatile and can provide both
segmentation masks and classification results from a single model.
Here's a Python code example to help you get started:
from ultralytics import YOLO

# Load your custom-trained YOLOv8 segmentation model
model = YOLO(r"F:\Anuj\YoloV8\YoloV8m-segmentCustom.pt")

# Run inference on an image
results = model("path/to/your/image.jpg")

# Process the results
for result in results:
    # Get segmentation masks
    masks = result.masks
    if masks is not None:
        print("Segmentation masks:", masks)

    # Get classification results
    for box in result.boxes:
        top_class = box.cls  # Most likely class
        top_conf = box.conf  # Confidence of the most likely class
        print(f"Classified item: {top_class}, Confidence: {top_conf}")

    # Optionally, visualize the results
    result.show()
This script will load your segmentation model, run inference on an image,
and print out both the segmentation masks and the classification results.
The result.show() method will display the image with the segmentation
masks and bounding boxes.
If you encounter any issues, please ensure you are using the latest
version of the Ultralytics package. If the problem persists, providing a
minimum reproducible example would be very helpful. You can find guidelines
on creating one here: Minimum Reproducible Example
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Feel free to reach out if you have any further questions! 😊
-
models/yolov8/
Explore the thrilling features of YOLOv8, the latest version of our real-time object detector! Learn how advanced architectures, pre-trained models and optimal balance between accuracy & speed make YOLOv8 the perfect choice for your object detection tasks.
https://docs.ultralytics.com/models/yolov8/