modes/val/ #8153
Replies: 47 comments 132 replies
-
This line of code, `metrics.box.map50`, gives an error saying "'NoneType' object has no attribute 'box'". My code is described below:

```python
# Load a YOLOv8 model
model = YOLO('yolov8n.pt')

# Train the model
results_train = model.train(data='japan5.yaml', epochs=1, imgsz=600)

# Validate the model
metrics = model.val()  # no arguments needed, dataset and settings remembered
```

Please suggest.
-
I was trying to train my model, but after training I can't find the predictions file, nor best.pt.
-
After training YOLOv8, I got my metrics results in a CSV file. However, when I ran validation, I could not get the results in CSV format, only images (PNG & JPEG). How can I get the validation results as CSV?
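One way to get validation numbers into a CSV is a minimal sketch with the stdlib `csv` module. The metric names and values below are hypothetical placeholders; in practice you would fill the dict from the object returned by `model.val()`:

```python
import csv

# Hypothetical metrics dict; in practice, populate it from the object
# returned by model.val() in your Ultralytics version.
val_metrics = {
    "precision": 0.91,
    "recall": 0.88,
    "mAP50": 0.90,
    "mAP50-95": 0.72,
}

with open("val_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(val_metrics.keys())    # header row
    writer.writerow(val_metrics.values())  # single row of values
```

The same pattern extends to one row per class if you loop over per-class values.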
-
Hey, I am trying to get metrics such as recall, and also trying to save some images during validation. I am using coco.yaml. Any idea how I might do it?
-
I want to get the per-class values from the metrics and save them to a CSV. Thank you very much.
-
Hey, I want to evaluate my model. I already get the detection images using Ignition Gazebo and ROS 2, but now I want to evaluate the model with metrics like accuracy, mAP, or recall. How do I do that? I do not have any dataset or annotations; I am using a pretrained YOLOv8 model with coco8.yaml.
-
Hello, I have two questions.
-
Hello, I have two questions; kindly help me out.

First: when I have a custom-trained model, I load it and run detection validation with `!yolo detect val model=/weights/best.pt data=/data.yaml save=True save_json=True conf=0.85 iou=0.5 split=val`. Can I give it a new YAML file other than the one the model was trained with? For example, I used an old dataset to train the model. Now I have new data on which I want to validate it, so I put the images in the valid folder and point the model at their YAML file (these are new images added to valid after training). Will it give me accurate results for the new dataset against the trained model? I added 10k images to my valid folder and want to run detection on those new images and see how the model validates.

Second: does the conf threshold matter during validation? And can you please explain how the confusion matrix is built after validation? It is predictions against classes, so does it go and compare each prediction with the ground truth? Please explain. Thank you.
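On the confusion-matrix question, the general idea can be shown with a toy sketch (not the exact Ultralytics implementation): each prediction is matched to a ground-truth box by IoU; matched pairs increment cell [gt_class][pred_class], unmatched predictions count against a background row, and unmatched ground truths against a background column:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def confusion_matrix(gts, preds, num_classes, iou_thres=0.5):
    # Extra row/column at index num_classes represents "background".
    m = [[0] * (num_classes + 1) for _ in range(num_classes + 1)]
    used = set()
    for pcls, pbox in preds:
        best, best_i = 0.0, None
        for i, (gcls, gbox) in enumerate(gts):
            if i in used:
                continue
            o = iou(pbox, gbox)
            if o > best:
                best, best_i = o, i
        if best >= iou_thres and best_i is not None:
            used.add(best_i)
            m[gts[best_i][0]][pcls] += 1   # matched prediction
        else:
            m[num_classes][pcls] += 1      # false positive
    for i, (gcls, _) in enumerate(gts):
        if i not in used:
            m[gcls][num_classes] += 1      # missed detection
    return m

# Two ground truths, one good prediction and one stray prediction.
gts = [(0, (0, 0, 10, 10)), (1, (20, 20, 30, 30))]
preds = [(0, (1, 1, 10, 10)), (1, (50, 50, 60, 60))]
print(confusion_matrix(gts, preds, num_classes=2))
```

This also shows why the conf threshold matters: predictions below it never enter the matching, so raising conf trades false positives for missed detections.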
-
Hi, I want to use the validate function to fine-tune my trained model and see how it performs against real-life pictures. There is, however, a difference in the parameters available between "predict" mode and "val" mode. For instance, I would like to use specific settings for augment and retina_masks as part of the validation. Is this possible? And out of curiosity, why are the default settings for val different from predict?
-
Hi, I am completing a vision project for an automated industrial packing cell. Currently I have trained three custom models for the different tasks. I want to test the performance of each in different scenarios, such as bright, dim, natural, and artificial lighting, and with varied backgrounds or similar objects. What would be the best method to do so? Would it be best to create standard test datasets for these scenarios, link them with a YAML, and then use the val function? Or should I use predict, count correct/incorrect detections, and work out the metrics from there? Once I have done this, I also want to investigate performance with different model architecture sizes and training times, so it has to be easily repeatable. What is the best way to implement this? Thanks in advance!
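For the predict-and-count approach mentioned above, the standard metrics follow directly from the per-scenario counts. A minimal sketch (the counts below are made up):

```python
def prf(tp, fp, fn):
    # Precision, recall, and F1 from true-positive, false-positive,
    # and false-negative counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one lighting scenario.
p, r, f1 = prf(tp=90, fp=10, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Keeping one such count table per scenario and per model size makes the comparison repeatable.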
-
Hi, an IoU threshold is used during detection to prevent duplicate bounding boxes.
-
Hi, the error is
-
Hi,
-
I use my own post-processing logic on the txt files that I get from running inference on the validation set. So now I have the post-processed txts for the validation set, and the original validation txts that I used for training. How can I compute the validation performance results using these txt files?
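One way to score prediction `.txt` files against ground-truth `.txt` files in YOLO format (`class cx cy w h`, normalized) is sketched below. The greedy IoU matching and the 0.5 threshold are simplifying assumptions, not the Ultralytics evaluator:

```python
def parse_yolo_line(line):
    # "class cx cy w h" (normalized) -> (class, (x1, y1, x2, y2)).
    cls, cx, cy, w, h = line.split()[:5]
    cx, cy, w, h = map(float, (cx, cy, w, h))
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def score(gt_lines, pred_lines, iou_thres=0.5):
    # Greedily match each prediction to the first unmatched ground
    # truth of the same class above the IoU threshold.
    gts = [parse_yolo_line(l) for l in gt_lines]
    preds = [parse_yolo_line(l) for l in pred_lines]
    matched, tp = set(), 0
    for pcls, pbox in preds:
        for i, (gcls, gbox) in enumerate(gts):
            if i not in matched and gcls == pcls and iou(pbox, gbox) >= iou_thres:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(preds) - tp, len(gts) - tp
    return tp, fp, fn

# One ground truth; one close prediction plus one stray prediction.
tp, fp, fn = score(["0 0.5 0.5 0.2 0.2"],
                   ["0 0.5 0.5 0.22 0.2", "1 0.1 0.1 0.05 0.05"])
print(tp, fp, fn)
```

Reading the lines with `open(path).read().splitlines()` per image and summing the counts over the set gives overall precision/recall for the post-processed outputs.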
-
When I convert the YOLOv8 model weights to int16 and validate, I get 0 accuracy, but with float32 model weights I get 0.87 accuracy.
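One possible explanation (an assumption, not a diagnosis of this specific export): most trained weights lie roughly in [-1, 1], so a plain cast to an integer type truncates nearly everything to 0, whereas fixed-point quantization stores `weight / scale` and multiplies the scale back at inference. A stdlib illustration (the scale value is arbitrary):

```python
# Hypothetical weight values in the typical [-1, 1] range.
weights = [0.87, -0.42, 0.05]

# Plain cast: everything below 1.0 in magnitude truncates to 0.
naive = [int(w) for w in weights]
print(naive)

# Fixed-point style: divide by a scale before rounding to integers,
# multiply back when using them. Scale maps [-1, 1] onto int16 range.
SCALE = 1 / 32767
quantized = [round(w / SCALE) for w in weights]
restored = [q * SCALE for q in quantized]
print([round(r, 4) for r in restored])
```

So a 0-accuracy int16 model often points at a missing or wrong scale/zero-point in the conversion rather than int16 being inherently insufficient.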
-
I am trying to run model.val() on GPU with batch=-1 for auto-adjustment of the batch size, but I encounter the following error message: "ValueError: batch_size should be a positive integer value, but got batch_size=-1". Using the auto-adjusted batch size batch=-1 with model.train() on GPU works fine.
-
Hello. When I run the default YOLOv8 train code, the val_batch_pred image automatically generated in the "train" folder is the same as the val_batch_pred image in the "val" folder that is generated when I run the val function. However, when I run the YOLOv8 train code using Ray Tune, the val_batch_pred image generated in the "train" folder and the one generated when I run the val function are different. I questioned this because I thought the val_batch_pred image produced by the train function should be the same as the one produced by the val function. Why do the two images differ when I use Ray Tune? Below is the Ray Tune code I used. Thank you.

```python
best_model_tuning = YOLO("yolov8n.pt")
result_grid = best_model_tuning.tune(
best_result = result_grid.get_best_result(metric=metric, mode="max")
best_model = YOLO("yolov8n.pt")
```
-
I am using a YOLO classification model, which I believe doesn't require a data.yaml file. How can I turn off all augmentation settings for experimental purposes?
-
Hi there, I wrote the following function:
-
Hi there, in the resulting confusion matrix PNG there are no values in the chart (inside the small rectangle boxes). How can I get those?
-
```python
model = YOLO("./runs/detect/train/weights/best.pt")
test_image = '../base/images/test'
```

I used the code above to check the mAP50 performance on the test dataset and saved the prediction results. However, the val_batch_pred.jpg generated by running the val function (`.val`) on the test dataset shows a completely different result from the predicted image saved using the prediction function (`.predict`).
-
Hello
-
Hello, how do I use VFLoss? I used `loss[1] = self.varifocal_loss(pred_scores, target_scores, target_labels) / target_scores_sum  # VFL way`, but I get an error: "The size of tensor a (7) must match the size of tensor b (8400) at non-singleton dimension 2".
-
Is there any way to output the results of the validation (specifically the validation images with the ground-truth labels and predicted labels) without the class labels and confidence scores drawn on the objects?
-
I ran my validation with batch=-1 for AutoBatch as suggested, and I got the following error: "ValueError: batch_size should be a positive integer value, but got batch_size=-1". This is the command I ran, "
-
Hi! We want to use statistical analysis to determine whether there is a significant difference in precision, recall, and mAP50-95 for each class. How can we obtain the data points from the validation process for all images to perform the statistical analysis? We need to extract the detailed per-image metrics; we may need to modify the validation code or use additional tools to capture these data. How can we achieve this?
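Once per-image scores have been extracted (however that is done, e.g. by validating image-by-image or by instrumenting the validator), the stdlib `statistics` module covers the summary numbers needed before running a significance test. A sketch with made-up values:

```python
import statistics

# Hypothetical per-image recall values; in practice, collect one entry
# per validation image for each metric and class of interest.
per_image_recall = {"img1.jpg": 0.90, "img2.jpg": 0.75, "img3.jpg": 0.85}

values = list(per_image_recall.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation
print(f"n={len(values)} mean={mean:.3f} stdev={stdev:.3f}")
```

With one such list per class, these means and standard deviations feed directly into a t-test or ANOVA in your analysis tool of choice.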
-
What does setting the split="train" parameter mean? Since the model trained on the training dataset is then validated on that same training dataset, wouldn't all the evaluation metrics (P, R, mAP) unconditionally come out as 1?
-
modes/val/
Guide for Validating YOLOv8 Models. Learn how to evaluate the performance of your YOLO models using validation settings and metrics with Python and CLI examples.
https://docs.ultralytics.com/modes/val/