help/ #8027
Replies: 97 comments 315 replies
-
I want to save images in which an object is detected and images with no detections into different folders.
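A minimal sketch of one way to do this split, assuming the Ultralytics Python API (a Results object with a .boxes collection); the folder and weight names are placeholders:

```python
from pathlib import Path
import shutil

def dest_for(num_boxes: int, hit_dir: Path, miss_dir: Path) -> Path:
    """Route an image: any detections -> hit_dir, none -> miss_dir."""
    return hit_dir if num_boxes > 0 else miss_dir

def sort_images(model, src_dir: Path, hit_dir: Path, miss_dir: Path) -> None:
    """Copy each image into a 'detected' or 'not detected' folder."""
    hit_dir.mkdir(parents=True, exist_ok=True)
    miss_dir.mkdir(parents=True, exist_ok=True)
    for img in sorted(src_dir.glob("*.jpg")):
        results = model(str(img), verbose=False)   # one Results object per image
        n = len(results[0].boxes)                  # number of detected boxes
        shutil.copy(img, dest_for(n, hit_dir, miss_dir) / img.name)

# Usage (requires the ultralytics package; paths are placeholders):
#   from ultralytics import YOLO
#   sort_images(YOLO("yolov8n.pt"), Path("images"), Path("detected"), Path("not_detected"))
```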
-
I'm seeking clarification regarding the imgsz parameter in YOLO (You Only Look Once) and its impact on image resizing and bounding boxes. In my dataset, all images have a consistent size of 1920x1080 pixels. If I set the imgsz parameter to 640, will the images be internally downscaled to 640x640 pixels by YOLO during training or inference? In the context of this resizing, I'm curious about the effect on bounding boxes. Do their coordinates change, or does YOLO handle the calculation of new bounding boxes internally to accommodate the downscaled images? I want to ensure that I understand how YOLO manages the resizing process and its implications for object detection accuracy. Any insights or pointers to relevant documentation would be greatly appreciated. Thank you.
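For reference, YOLO letterboxes rather than squashes: the longest side is scaled to imgsz, the aspect ratio is kept, and the short side is padded; labels are stored normalized, so they scale automatically and no manual box recalculation is needed. A small arithmetic sketch of this behaviour (an approximation of the commonly described letterbox, assuming a model stride of 32):

```python
import math

def letterbox_shape(h: int, w: int, imgsz: int = 640, stride: int = 32):
    """Scaled (h, w) and padded (h, w) under letterbox resizing: the longest
    side goes to imgsz, the aspect ratio is kept, and each side is padded up
    to a multiple of the model stride. Approximation for illustration."""
    r = imgsz / max(h, w)                       # scale ratio
    new_h, new_w = round(h * r), round(w * r)   # aspect-preserving resize
    pad_h = math.ceil(new_h / stride) * stride  # pad short side to stride multiple
    pad_w = math.ceil(new_w / stride) * stride
    return (new_h, new_w), (pad_h, pad_w)

# A 1920x1080 frame at imgsz=640 is scaled to 640x360 and padded to 640x384,
# rather than being squashed to 640x640.
```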
-
Hi, with regard to inference/predict in YOLOv8, how can you obtain the run number, or the folder /run/detect/predictXX where XX is the sequential number? The reason is that I would like to automatically get the image with all the bounding boxes drawn, for example /run/detect/predict46. Using:
Thanks
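One workaround, sketched under the assumption that saved runs land under runs/detect/: after calling predict(save=True), pick the most recently modified predict folder. (Recent Ultralytics versions also expose the folder directly on the results, e.g. results[0].save_dir, which is the more direct route when available.)

```python
from pathlib import Path

def newest_predict_dir(root: str = "runs/detect"):
    """Return the most recently modified predict* folder under root, or None."""
    dirs = [d for d in Path(root).glob("predict*") if d.is_dir()]
    return max(dirs, key=lambda d: d.stat().st_mtime, default=None)

# Usage sketch: run prediction with save=True, then locate the output folder:
#   model.predict("image.jpg", save=True)
#   print(newest_predict_dir())   # e.g. runs/detect/predict46
```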
-
@glenn-jocher thanks for being so responsive and helpful. It's really impressive. My question is: what color does YOLOv8 use as infill when a training image is not square? Maybe I'm misunderstanding, but if the algorithm pads the image to a square size, I'm just curious what color it pads with (zeros, i.e. black?). If it makes any difference, I'm currently training a classification model.
-
Is it possible to combine two YOLOv8 weights?
-
Hello, I have a question regarding the
-
Hello, I was wondering if you could give me some clarification on the freeze parameter for YOLOv8. When training begins, the training script automatically prints the layers of the model. There seem to be 23 blocks but over 100 different components. In your examples you always use … In my personal training I used …
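For context, freeze is a train() argument: an integer N freezes the first N modules of the model (a list of indices also works). The helper below only illustrates the idea of a per-module trainable mask; the exact mapping from "blocks" to modules inside YOLOv8 is an assumption here, not taken from the library code:

```python
def freeze_mask(num_modules: int, freeze: int):
    """Trainable mask over modules 0..num_modules-1: the first `freeze`
    modules are frozen (False), the rest stay trainable (True)."""
    return [i >= freeze for i in range(num_modules)]

# With Ultralytics this is just a train() argument (sketch, paths hypothetical):
#   model.train(data="your_data.yaml", epochs=100, freeze=10)  # freeze first 10 modules
```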
-
I was using YOLOv8 for number detection (meter readings), and it worked pretty well, but I need some small help.
-
Dear community, thank you to each of the members. I want to extract tree crown boundaries using the YOLOv8 model. After training the model, when I predict on an RGB image, each tree has multiple polygons, but there should be a single polygon per tree. Do you have any idea how to address this issue?
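Two things usually help here: tightening non-maximum suppression at predict time (e.g. a lower iou threshold, or class-agnostic NMS; parameter names assumed from common Ultralytics usage, such as model.predict(iou=0.5, agnostic_nms=True)), and merging the remaining overlapping fragments as a post-process. A plain-Python sketch of the merge step, simplified to axis-aligned boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter or 1)

def merge_overlapping(boxes, thr=0.3):
    """Greedily union boxes that overlap more than thr, so each tree
    ends up with a single merged box."""
    merged = []
    for b in sorted(boxes):
        for i, m in enumerate(merged):
            if iou(b, m) > thr:
                merged[i] = (min(b[0], m[0]), min(b[1], m[1]),
                             max(b[2], m[2]), max(b[3], m[3]))
                break
        else:
            merged.append(tuple(b))
    return merged
```

For true polygon output the same greedy idea applies with a polygon union (e.g. shapely's unary_union) in place of the box union.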
-
In my YAML file I have 11 labels with their respective values. But when I use the save_txt command, the label indices are saved in a text file, whereas I want the exact values to be saved, because label 10 has the value ".", which is important for the meter reading. How can I save the values in the text file rather than the label indices? Below is my YAML file: names:
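save_txt writes class indices by design, so a small post-processing step is needed to translate them through the names map from the data YAML. A sketch, assuming digits should be read left to right (sorted by x position) and a names dict like {0: '0', ..., 9: '9', 10: '.'} as in the question:

```python
def reading_from_detections(detections, names):
    """detections: iterable of (x_center, class_id) pairs.
    Returns the meter reading as a string, reading left to right."""
    ordered = sorted(detections, key=lambda d: d[0])          # left-to-right order
    return "".join(str(names[int(cls)]) for _, cls in ordered)

# Usage sketch with Ultralytics results (attribute names assumed):
#   dets = [(float(b.xywh[0][0]), int(b.cls)) for b in results[0].boxes]
#   open("reading.txt", "w").write(reading_from_detections(dets, names))
```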
-
I have not found any resolution to the following issue with YOLOv8: whenever I start training my model, the val-set score is immediately 1 for both Precision and Recall. I believe this is an error, and it makes it really hard to monitor training. Here's an example:
-
I want to load the model before doing prediction, like model.load_state_dict(torch.load(opt.saved_model, map_location=device)). I want to load the model once, right when I run the file. I'm currently using a custom YOLOv8 model.
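A minimal lazy-singleton sketch: load the model once at startup (or on first use) and reuse it for every prediction. The default path assumes the ultralytics package; the loader parameter exists only so the pattern can be illustrated and tested without the package installed:

```python
_model = None

def get_model(weights="best.pt", loader=None):
    """Load the model once (lazily) and reuse it on every later call."""
    global _model
    if _model is None:
        if loader is None:                 # default: the Ultralytics loader
            from ultralytics import YOLO   # requires `pip install ultralytics`
            loader = YOLO
        _model = loader(weights)           # expensive load happens only once
    return _model

# Usage sketch: every call after the first returns the cached model.
#   results = get_model("best.pt")("image.jpg")
```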
-
Hi guys at Ultralytics. Still, there's the general route of training by defining everything as torch variables and then writing a training loop.
-
hello
-
I have trained YOLOv8n on a deck of cards. It was detecting most of the cards well, but there was confusion between similar cards such as 5 and 3. So I tried to fine-tune that best.pt model by providing a dataset of only the 5 and 3 cards, but now it only detects 5 and 3 and not the other cards. I have many decks of cards to train on, and I want a single model to detect them all. I have already tried retraining best.pt, but it forgets the previously trained deck. I could not find an appropriate answer on the official website. Is this possible, and if so, how? Please reply as soon as possible.
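Fine-tuning on only the 5s and 3s overwrites what the network previously learned (catastrophic forgetting). The usual fix is to retrain once on a merged dataset whose images and labels cover every class, old and new. A sketch of what the combined data YAML might look like (all names and paths here are hypothetical):

```yaml
# combined_cards.yaml -- hypothetical merged dataset covering ALL card classes
path: datasets/cards_all
train: images/train   # must include examples of every class, old and new
val: images/val
names:
  0: 3_of_spades
  1: 5_of_spades
  2: ace_of_hearts
  # ... one entry per card in the full deck
```

Training from best.pt on this merged set keeps the learned features while refreshing every class, instead of letting the new subset crowd the old ones out.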
-
Hello,
-
Thank you for your response. It is working now😊
On Wed, 19 Jun, 2024, 3:26 pm Glenn Jocher, ***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for reaching out! 😊 It sounds like you're encountering an environment issue where Jupyter Notebook isn't recognizing the torch module, while it works fine in IDLE. Let's troubleshoot this step-by-step:
1. Verify CUDA Installation: ensure that CUDA is correctly installed by running:
       nvcc --version
   This should display the CUDA version installed on your system.
2. Check PyTorch Installation: make sure PyTorch is installed in the same environment that Jupyter Notebook is using. You can verify this by running:
       import torch
       print(torch.cuda.is_available())
       print(torch.cuda.get_device_name(0))
   This should return True and the name of your GPU if everything is set up correctly.
3. Update Packages: ensure you are using the latest versions of torch and ultralytics. You can update them using:
       pip install --upgrade torch ultralytics
4. Check Jupyter Kernel Environment: ensure that your Jupyter Notebook kernel is running in the same environment where CUDA and PyTorch are installed. You can check this by running:
       !which python
   This should point to the Python executable in your environment with CUDA and PyTorch.
5. Set the Device in Your Training Script: explicitly set the device to GPU in your training script. Here's an example:
       from ultralytics import YOLO
       # Load the model
       model = YOLO('yolov8n.pt')
       # Set up training arguments
       args = {
           'data': 'path/to/your_data.yaml',
           'epochs': 100,
           'batch': 12,
           'imgsz': 640,
           'device': 'cuda'  # Ensure the model uses the GPU
       }
       # Train the model
       results = model.train(**args)
If the issue persists, please provide a minimum reproducible example (MRE) of your code so we can better assist you. You can find more details on creating an MRE here <https://docs.ultralytics.com/help/minimum_reproducible_example>.
Feel free to reach out if you need further assistance. We're here to help! 🚀
-
Thanks for your support.
After training, the results were saved in the runs/detect/train13 folder. All the images given for validation are detected, but they are arranged in tile format (12 images were given for validation, and they are all combined into a single 4x3 tiled image).
I also loaded my custom model (best.pt) for testing on images. Here is the code snippet:
    model = YOLO('best.pt')
    img = 'test.tif'
    result = model(img)
    result.print()
But it raises an error saying print is not available. How can I display my input image with the bounding boxes?
Kindly help.
On Wed, Jun 19, 2024 at 11:20 PM, Paula Derrenger ***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for the update! I'm glad to hear that everything is working now 😊.
If you encounter any further issues or have additional questions, please don't hesitate to reach out. For more detailed guides and resources, you can always visit our Help Page <https://docs.ultralytics.com/help/>.
Happy coding! 🚀
--
Thanks & Regards
Swarnalatha YV
Sci/ Eng - SC
VSSC, Trivandrum.
Ph: 0471-256-4375
-
Hi, I'm trying to load and run inference with a YOLOv8s model in OpenCV C++, and I've hit a bug: the result seems empty. On the exact same image I get full results when loading it with Ultralytics in Python.
My code:
    DNN* net;
    Mat blob, prob;
    *net = readNet("yolov8.onnx");
    setNetInputSize("yolov8.onnx");
    net->setPreferableBackend(DNN_BACKEND_CUDA);
    net->setPreferableTarget(DNN_TARGET_CUDA);
    std::cout << "loading succeeded" << std::endl;
    blob = blobFromImage(image, 1.0, Size(640, 640), (0,0,0), true);
    net->setInput(blob);
    prob = net->forward();
Then I try to print the result:
    std::cout << "prob.rows=" << prob.rows << " prob.cols=" << prob.cols << '\n';
    for (int i = 0; i < prob.rows; i++) {
        for (int j = 0; j < prob.cols; j++) {
            std::cout << (int)prob.at<uchar>(i, j) << " ";
        }
        std::cout << std::endl;
    }
and I get:
    prob.rows=-1 prob.cols=-1
What am I doing wrong?
On Thursday, 20 June 2024 at 12:39, Paula Derrenger <***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for reaching out and for your support! 😊
To address your issue with displaying the input image with bounding boxes, it seems like there might be a small confusion with the method names. The result.print() method is not available, but you can use the result.show() method to display the image with bounding boxes. Here's how you can modify your code snippet:
    from ultralytics import YOLO
    # Load your custom model
    model = YOLO('best.pt')
    # Specify the image for testing
    img = 'test.tif'
    # Run the model on the image
    result = model(img)
    # Display the image with bounding boxes
    result.show()
This should display your input image with the detected bounding boxes.
Regarding the arrangement of validation images in a tile format, this is a default behavior for visualizing multiple images together. If you prefer to view them individually, you can save the results and view them separately:
    # Save the results to a directory
    result.save(save_dir='path/to/save/directory')
This will save each image with bounding boxes in the specified directory.
If you encounter any further issues or have additional questions, please don't hesitate to reach out. For more detailed guides and resources, you can always visit our Help Page <https://docs.ultralytics.com/help/>.
Happy coding! 🚀
--
Eti Cohen (A. Shaul)
-
    import cv2
    # Email settings
    password = ""  # Your email password
    server = smtplib.SMTP("smtp.gmail.com: 587")
    def send_email(to_email, from_email, object_detected=1):
    class ObjectDetection:
    # Usage
    # region_points = [(51, 832), (287, 820), (567, 452), (159, 428)]
Hello Glenn, I'm working on a trespassing project. My goal is to detect an object that has touched or entered that region. I'm getting alerts when an object is detected, but I'm not able to change the color of the bounding boxes of detected objects to red when they cross or touch the region I have defined. Can you help me adjust the code?
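To turn a box red once it touches the region, test a reference point of each box (its bottom centre works well for people or vehicles) against the region polygon and pick the color from the result. A dependency-free ray-casting sketch; with OpenCV available, cv2.pointPolygonTest does the same job:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt=(x, y) inside the polygon (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edge crossings of a horizontal ray going right from pt.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# In the detection loop (sketch; BGR colors, box corners x1, y1, x2, y2):
#   anchor = ((x1 + x2) / 2, y2)                          # bottom centre of the box
#   color = (0, 0, 255) if point_in_polygon(anchor, region_points) else (0, 255, 0)
#   cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
```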
-
Hello, I would like to know about the business license.
-
Does Oriented Bounding Box Object Detection support annotations using non-rectangular shapes, such as trapezoids or any 4-point polygons? I want to train for License Plate Detection where the license plates are not viewed from the front angle and need to transform the 4 points to a front-facing angle.
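As I understand it, OBB training fits rotated rectangles, so arbitrary trapezoids get approximated; but for rectification you can still take the four predicted corner points and warp them to a front-facing view. A sketch of the corner-ordering step that a perspective warp needs (the OpenCV calls in the comment are the standard ones; W and H are placeholder target sizes):

```python
def order_quad(pts):
    """Order four (x, y) points as top-left, top-right, bottom-right,
    bottom-left -- the order a perspective rectification expects."""
    s = sorted(pts, key=lambda p: p[0] + p[1])     # TL has min x+y, BR has max
    tl, br = s[0], s[3]
    a, b = s[1], s[2]
    tr, bl = (a, b) if a[0] - a[1] > b[0] - b[1] else (b, a)  # TR has max x-y
    return [tl, tr, br, bl]

# Then rectify with OpenCV (sketch; W, H are the desired plate size):
#   M = cv2.getPerspectiveTransform(np.float32(order_quad(quad)),
#                                   np.float32([[0, 0], [W, 0], [W, H], [0, H]]))
#   flat = cv2.warpPerspective(image, M, (W, H))
```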
-
Hi Ultralytics team, I'm currently using YOLOv8, which provides a pre-trained .pt model that detects 80 objects. I have a dataset containing new objects that I want to add to this model, but I do not have the dataset for the original 80 objects. Could you please guide me on how to add these new objects to the existing YOLOv8 model? What is the best approach to fine-tune the model so that it can detect both the original 80 classes and the new classes? Additionally, do I need the dataset for the original 80 classes to successfully train the model with the new objects?
-
Hi Ultralytics team, I have been trying to use YOLOv10 on Windows recently, but unfortunately I haven't gotten it working because of a small problem I ran into. I have searched for all similar problems, but their solutions do not solve mine. I hope you can help me, thank you very much!
My code:
    from ultralytics import YOLOv10
    classes = {
    model = YOLOv10(r'D:\python3_12\yolov10-main\weights\yolov10b.pt')
    image = cv2.imread("E:\coco_test_renamed")
    results = model(source=image, conf=0.25, verbose=False)[0]
    labels = [
    cv2.imshow('result', annotated_image)
After I run the code, the program reports an ERROR and terminates:
    D:\python3_12\Anaconda\anaconda3\envs\yolov10_torch\python.exe D:\python3_12\yolov10-main\test.py
    Process finished with exit code 1
I have tried everything to solve this without success, including changing file permissions on Windows, using Python to change file permissions, and running PyCharm with administrator permissions, but all have failed; the problem only occurs when running YOLO. If you can help me, I will appreciate it very much. Thank you.
-
How to perform inference on video, using the model:
    # Load the YOLO model
    model = YOLO('/content/best.pt')  # Adjust the model path as necessary
    def process_video(input_video_path, output_video_path):
    # Example usage
    input_video_path = '/content/855262-hd_1280_720_25fps.mp4'
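A sketch of a complete process_video, assuming opencv-python and ultralytics are installed; it relies on Results.plot() returning the frame with boxes drawn, which is the documented Ultralytics behaviour:

```python
def process_video(input_video_path, output_video_path, weights="best.pt"):
    """Run YOLO on every frame and write an annotated copy of the video.
    Requires opencv-python and ultralytics (imported lazily below)."""
    import cv2
    from ultralytics import YOLO

    model = YOLO(weights)
    cap = cv2.VideoCapture(input_video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0     # fall back if fps is unreadable
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(output_video_path,
                          cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:                              # end of video
            break
        annotated = model(frame, verbose=False)[0].plot()  # frame with boxes drawn
        out.write(annotated)
    cap.release()
    out.release()

# Usage (paths from the question):
#   process_video('/content/855262-hd_1280_720_25fps.mp4', 'out.mp4', '/content/best.pt')
```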
-
Hello,
    cap = cv2.VideoCapture(video_path)
    codec = cv2.VideoWriter_fourcc(*"mp4v")
    ...with_suffix('.mp4'))
-
I have followed the video, but when executing it in Visual Studio Code I get this error: cannot import name 'YOLO' from 'ultralytics'. How can I solve it?
-
I updated my package many times, but every time I get this error; I tried on Linux and on Windows as well:
    ----> 7 model.export(format="ncnn")  # creates '/yolov8n_ncnn_model'
    File ~/Desktop/venv/lib/python3.10/site-packages/ultralytics/yolo/engine/model.py:342, in YOLO.export(self, **kwargs)
    File ~/Desktop/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator..decorate_context(*args, **kwargs)
    File ~/Desktop/venv/lib/python3.10/site-packages/ultralytics/yolo/engine/exporter.py:155, in Exporter.call(self, model)
    Invalid export format='ncnn'. Valid formats are ('torchscript', 'onnx', 'openvino', 'engine', 'coreml', 'saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs', 'paddle')
-
Can anyone share a screenshot comparing normal person-detection benchmarks, including CPU and GPU utilization, for YOLOv8 in Python and in C++?
-
Hi YOLO support, for a project I am trying to use my own model configuration with pretrained weights from YOLOv8. I simply replace the Conv layer with my SpinningConv layer. See my YAML file:
This is my SpinningConv class:
However, I don't get the layers right, because I get the following error:
I'm using the Conv class inside my SpinningConv class, so I suspect everything should be the same. Do you see any mistakes?
-
Find comprehensive guides and documents on Ultralytics YOLO tasks. Includes FAQs, contributing guides, CI guide, CLA, MRE guide, code of conduct & more.
https://docs.ultralytics.com/help/