
Train YOLOv5 on Tiff (image and label format) #11840

Closed
1 task done
HelenVe opened this issue Jul 9, 2023 · 24 comments
Labels: question (Further information is requested), Stale

Comments

@HelenVe

HelenVe commented Jul 9, 2023

Search before asking

Question

Hello!

I have a question about training YOLOv5 with .tiff files.
I have 10 grayscale images, each corresponding to a class. I stack them and save them in a .tif file; the shape per .tif is [10, image_width, image_height]. Some of these grayscale images can also be empty. I have one label file per .tif, containing all the class bounding boxes.
However, when I start training, my metric values are very low (see screenshot), so I must be doing something wrong. Is my logic incorrect? Should I be saving the images in a different way?

Thank you for your time!

[screenshot: training metrics plots]

Additional

No response

@HelenVe added the question label Jul 9, 2023
@glenn-jocher
Member

@HelenVe hi there!

Thank you for reaching out with your question. Training YOLOv5 with .tiff files should be possible. However, based on your description and the screenshot you provided, it seems that you might be encountering some issues with your data preparation or file format.

To help you further, could you please provide more details? Specifically:

  1. How are you currently converting and saving the grayscale images into the .tif file? Are you using any specific library or method?
  2. Could you please share an example of your label file formatting, so I can better understand how the bounding boxes are represented?

Having these additional details will allow me to provide you with more accurate guidance. Looking forward to assisting you!

Best regards.

@HelenVe
Author

HelenVe commented Jul 9, 2023

@glenn-jocher Thank you very much for the quick reply!

I am saving the .tif files with tifffile; this is how I create them:
[screenshot: code that creates the .tif files]

  1. The labels follow the YOLOv5 format; one .tif corresponds to one .txt file, and each grayscale image represents exactly one class. Here is an example:

1 0.17875 0.5375234521575984 0.010625 0.03470919324577861
3 0.1525 0.5590994371482176 0.026875 0.06660412757973734
5 0.176875 0.5121951219512195 0.04875 0.012195121951219513
0 0.161875 0.5234521575984991 0.015 0.0028142589118198874
0 0.19125 0.525328330206379 0.01125 0.001876172607879925
8 0.175625 0.5375234521575984 0.0425 0.03470919324577861
4 0.19875 0.5675422138836773 0.021875 0.04971857410881801
9 0.180625 0.5656660412757973 0.028125 0.009380863039399626
2 0.175 0.550656660412758 0.071875 0.08818011257035648
7 0.154375 0.5572232645403377 0.03 0.07035647279549719
6 0.1975 0.5581613508442776 0.024375 0.06848030018761726

@glenn-jocher
Member

@HelenVe You're welcome!

Thank you for providing more information. Your approach to saving the .tif files using the tifffile library seems correct.

Regarding the label file formatting, it appears to follow the YOLOv5 format where each line represents a bounding box annotation. Each line starts with the class index (e.g., 1, 3, 5) followed by the normalized coordinates of the bounding box (x_center, y_center, width, height).
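As a quick sanity check, a label line in that format can be parsed and validated with a few lines (a minimal sketch; assumes whitespace-separated values with one box per line):

```python
def parse_yolo_label(line):
    """Parse one YOLO label line: class x_center y_center width height."""
    fields = line.split()
    cls = int(fields[0])
    x_c, y_c, w, h = map(float, fields[1:])
    # All box values should be normalized to [0, 1]
    if not all(0.0 <= v <= 1.0 for v in (x_c, y_c, w, h)):
        raise ValueError(f"box not normalized: {line!r}")
    return cls, x_c, y_c, w, h

print(parse_yolo_label("1 0.17875 0.53752 0.010625 0.034709"))
```

Running this over every line of every label file before training catches out-of-range boxes early.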

From what you've described and the example you provided, there don't seem to be any issues with the file formats or labels. However, other factors might be affecting your training performance.

To investigate this further, here are a few suggestions:

  1. Ensure your dataset is diverse and representative: Make sure you have a sufficient number of training samples for each class and that they cover different variations, scales, and perspectives. This helps the model generalize better.

  2. Check for class imbalance: If some classes have significantly fewer instances than others, it can affect the model's ability to learn and classify objects accurately. Consider augmenting your dataset or using techniques like class weighting to address this.

  3. Experiment with different training configurations: Adjusting hyperparameters such as learning rate, batch size, and image size can significantly impact training performance. Try experimenting with different values to find the optimal configuration for your dataset.

  4. Train for longer: If the model's performance is still low after the initial run, consider increasing the number of epochs. This gives the model more time to learn and improve.
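Point 2 can be checked concretely by tallying per-class instance counts across the YOLO label files (a minimal sketch; the label directory path is an example to adapt to your dataset layout):

```python
from collections import Counter
from pathlib import Path

def class_counts(label_dir):
    """Count box instances per class across YOLO .txt label files."""
    counts = Counter()
    for txt in Path(label_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            if line.strip():
                # First field of each label line is the class index
                counts[int(line.split()[0])] += 1
    return counts

# e.g. print(class_counts("dataset/labels/train"))
```

A heavily skewed count here is a strong hint to augment the rare classes or apply class weighting.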

Please give these suggestions a try and let me know if you observe any improvements in your training results. If you have any further questions or concerns, don't hesitate to ask.

Happy training!

@RanwanWu

RanwanWu commented Aug 1, 2023

@glenn-jocher Hello, I have the same issue. I trained my model on .tiff images, but the results are extremely poor. After running detect.py, the predictions contain no bounding boxes at all unless I lower the conf-threshold to 0. I know there must be something wrong with my project, but I cannot find it. Help me, please!

@glenn-jocher
Member

@RanwanWu hi,

Thank you for reaching out and bringing this issue to our attention. I understand that you have trained your YOLOv5 model using .tiff images, but the results are not satisfactory.

To address this, let's try to troubleshoot the problem together. Here are a few suggestions to get you started:

  1. Data preparation: Double-check the conversion and saving process of your .tiff images. Ensure that the images are properly converted, stacked, and saved in the expected format.

  2. Labeling accuracy: Verify the accuracy of your label files and annotations. Ensure that the bounding box annotations are correctly aligned with the corresponding objects in the images.

  3. Dataset diversity: Confirm that your dataset is diverse enough, with an adequate number of training samples for each class. This helps the model generalize better and improves detection accuracy.

  4. Hyperparameter tuning: Experiment with different training configurations, such as adjusting the learning rate, batch size, and image size. Sometimes, small tweaks to these parameters can significantly impact model performance.

  5. Evaluation metrics: While adjusting the confidence threshold can help improve detections, it is crucial to evaluate the performance using appropriate metrics such as precision, recall, and mAP (mean Average Precision).

Please try these suggestions and let us know if you observe any improvements in your results. If the issue persists, provide us with more specific information about your project setup and any error messages you encounter, so that we can further assist you.

Keep in mind that the YOLOv5 model is a product of the collective efforts of the YOLO community and the Ultralytics team. We are here to support you in troubleshooting and finding a solution.

Thank you for your patience, and we'll do our best to help you resolve this issue.

Glenn Jocher
Ultralytics YOLOv5 Team

@HelenVe HelenVe closed this as completed Aug 1, 2023
@HelenVe
Author

HelenVe commented Aug 1, 2023

@RanwanWu for me, the training and validation results are now very good. I adjusted the channel number in the yolo.py file for my case, and in dataloaders.py I changed cv2.imread() in the load_image() function to tifffile.imread() so that the correct number of channels is read. cv2.imread() always converts images to 3-channel images, so if you have grayscale TIFFs it will convert them. However, when running detection I also get poor results, but maybe that's the model not performing well. Hope this relates to your issue.
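For reference, the modified load_image() described above might look roughly like this (a sketch, not the exact YOLOv5 code; resizing/letterboxing are omitted, and the channels-first check is an assumption about how the stacks were saved):

```python
import numpy as np

def to_channels_last(img):
    """Move a channels-first stack (C, H, W) to channels-last (H, W, C)."""
    # Heuristic: a 10-channel stack saved as (10, H, W) has a small first axis
    if img.ndim == 3 and img.shape[0] < min(img.shape[1:]):
        img = np.transpose(img, (1, 2, 0))
    return img

def load_image(path):
    # tifffile preserves all channels; cv2.imread would force 3-channel BGR
    import tifffile  # lazy import, only needed when actually reading a file
    return to_channels_last(tifffile.imread(path))
```

The channel count set in yolo.py must then match the last axis of the array this returns.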

@HelenVe HelenVe reopened this Aug 1, 2023
@HelenVe HelenVe changed the title Train OYLOv5 on Tiff (image and label format) Train YOLOv5 on Tiff (image and label format) Aug 1, 2023
@glenn-jocher
Member

@HelenVe, I'm glad to hear that you were able to improve the training and validation results by making adjustments to the code. The modifications you mentioned in yolo.py and dataloader.py are indeed important for handling grayscale TIFF images correctly.

Regarding poor results during detection, it's possible that the model's performance might be a factor. Keep in mind that YOLOv5's performance can vary depending on several factors, such as the dataset, training configuration, and model architecture. It's recommended to experiment with different training configurations, hyperparameters, and model architectures to potentially improve the detection results.

Additionally, evaluating the model's performance using appropriate metrics like precision, recall, and mAP can help provide a better understanding of its capabilities.

If you have any further questions or need assistance with any specific aspect, please let us know. We're here to help you.

Glenn Jocher
Ultralytics YOLOv5 Team

@HelenVe
Author

HelenVe commented Aug 1, 2023

@glenn-jocher You're right, could you maybe give me some insight on the training results? I trained a YOLOv5m model from scratch, by stacking 10 heatmaps produced by the GradCAM++ explainability algorithm. I also did the same for HiresCAM (which has more precise heatmaps). These are my training results for the GradCAM base. I didn't use any data augmentation.
[plot: training results (results.png)]

@glenn-jocher
Member

@HelenVe training YOLOv5 models from scratch with stacked heatmaps from the GradCAM++ algorithm is an interesting approach. However, it's important to note that the training results you shared indicate that the model is currently not performing well.

To further investigate and improve the training results, I would recommend considering the following points:

  1. Dataset diversity: Ensure that your dataset is diverse enough, covering a wide range of object variations, scales, and perspectives. This helps the model generalize better and perform well on unseen data.

  2. Labeling accuracy: Verify the accuracy of your label files and annotations. Ensure that the bounding box annotations align accurately with the objects in the images to avoid any misalignments during training.

  3. Hyperparameter tuning: Experiment with different training configurations, such as adjusting the learning rate, batch size, and number of training steps. These hyperparameters can significantly impact the model's performance and convergence.

  4. Data augmentation: Consider applying data augmentation techniques to increase the variability in your training data. This can include random transformations like rotation, scaling, and flipping, which can help improve model generalization.

  5. Loss function: Consider experimenting with different loss functions, such as focal loss or CIoU loss, to help improve training convergence and detection accuracy.

By addressing these points, you may be able to enhance the training results of your YOLOv5m model. Remember to evaluate the model's performance using appropriate metrics like mAP (mean Average Precision) and precision/recall to assess its detection capabilities accurately.

If you have any further questions or need additional guidance, please don't hesitate to ask. We're here to help you.

Glenn Jocher
Ultralytics YOLOv5 Team

@HelenVe
Author

HelenVe commented Aug 1, 2023

Thank you for the reply. I am using only one class to train the model, which I specify in the include_class array in dataloaders.py, but in the YAML files I have more classes defined. Also, I was wondering: how can you tell from the plots that the model isn't performing well?

@glenn-jocher
Member

@HelenVe thank you for your question. Regarding using only one class to train the model, if you have specified the desired class in the include_class array in dataloaders.py, the model should focus on that specific class during training. However, it's important to note that the presence of other classes defined in the YAML files might still have an impact on the model's behavior during training.
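To illustrate, the effect of an include_class filter can be sketched as follows (a simplified illustration of the idea, not the exact dataloaders.py code):

```python
def filter_labels(labels, include_class):
    """Keep only boxes whose class id is in include_class.

    labels: list of (cls, x_c, y_c, w, h) tuples in YOLO format.
    An empty include_class list keeps everything, mirroring the
    default behaviour.
    """
    if not include_class:
        return labels
    return [row for row in labels if row[0] in include_class]

boxes = [(0, 0.5, 0.5, 0.1, 0.1), (3, 0.2, 0.2, 0.05, 0.05)]
print(filter_labels(boxes, [3]))  # only the class-3 box remains
```

Note that the class indices in the YAML still define the model's output head, so the extra classes remain present in the network even if their boxes are filtered out.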

As for the performance evaluation, the plots you provided do not directly indicate whether the model is performing well or not. More detailed evaluation is typically required to assess the model's performance accurately. This can include calculating metrics like mAP (mean Average Precision), precision, recall, and/or using an evaluation tool such as CocoEval.

To better understand the model's performance, you might want to consider evaluating it on a separate validation or test dataset. This evaluation can help assess the model's ability to accurately detect and classify objects.

If you have any further questions or need additional guidance, please feel free to ask. We're here to assist you.

@github-actions
Contributor

github-actions bot commented Sep 1, 2023

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Sep 1, 2023
@github-actions github-actions bot closed this as not planned (won't fix / can't repro / duplicate / stale) Sep 12, 2023
@mpapadomanolaki

> @RanwanWu for me, the training and validation results are now very good. I adjusted the channel number in the yolo.py file for my case, and in dataloaders.py I changed cv2.imread() in the load_image() function to tifffile.imread() so that the correct number of channels is read. cv2.imread() always converts images to 3-channel images, so if you have grayscale TIFFs it will convert them. However, when running detection I also get poor results, but maybe that's the model not performing well. Hope this relates to your issue.

Hello @HelenVe , I am also trying to train yolov5 with more than 3 band images.
Did you make any other changes except in the yolo.py file and the dataloader.py?
I made the same changes but it keeps saying that these tif images are corrupt :/

Thank you for your time.

@glenn-jocher
Member

Hello @mpapadomanolaki,

Regarding the poor detection results mentioned in the quoted comment, it might be beneficial to review the detection settings, such as the confidence threshold and non-max suppression threshold, to ensure they are optimally configured for your specific use case.

For the error about corrupt TIFF images, it sounds like there might be an issue with how the images are being read or processed. Ensure that tifffile.imread() is correctly implemented in your load_image() function. Here's a quick snippet to check if your images are being loaded correctly:

import tifffile

image = tifffile.imread('path_to_your_image.tif')
print(image.shape)

If this prints the expected dimensions without errors, your images are likely fine, and the issue might be elsewhere in the data handling or preprocessing steps.

If you continue to face issues, please provide more details about the error messages you're seeing, and we can troubleshoot further!

@hafidhhusna

Hi, I'd also like to ask about retrieving bounding box coordinates from a detected TIFF image using YOLOv5. Is it possible to run detection on a TIFF image with YOLOv5 and get the bounding box coordinates, so that I can display them with geospatial reference data in software such as ArcGIS and QGIS?

@glenn-jocher
Member

Hello @hafidhhusna,

Thank you for your question! Yes, it is indeed possible to retrieve the bounding box coordinates from detected TIFF images using YOLOv5. Here's a step-by-step guide to help you achieve this:

  1. Load the Model and Image: First, ensure you have a trained YOLOv5 model and the necessary libraries installed. You can load your model and TIFF image as follows:

    import torch
    import tifffile
    
    # Load the YOLOv5 model
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='path_to_your_model.pt')
    
    # Load the TIFF image
    image = tifffile.imread('path_to_your_image.tif')
  2. Perform Inference: Use the model to perform inference on the image. YOLOv5 will return the bounding box coordinates along with other information such as confidence scores and class labels.

    # Perform inference
    results = model(image)
    
    # Print results
    results.print()  # Print results to console
  3. Extract Bounding Box Coordinates: You can extract the bounding box coordinates from the results. The coordinates are typically in the format [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner of the bounding box.

    # Extract bounding box coordinates
    bounding_boxes = results.xyxy[0].numpy()  # xyxy format
    for bbox in bounding_boxes:
        x1, y1, x2, y2, conf, cls = bbox
        print(f"Bounding Box: ({x1}, {y1}), ({x2}, {y2})")
  4. Geospatial Reference Data: To display the bounding boxes with geospatial reference data in software like ArcGIS or QGIS, you'll need to convert the pixel coordinates to geographic coordinates. This typically involves using the metadata from the TIFF file and possibly a transformation library like rasterio.

    import rasterio
    
    # Open the TIFF file with rasterio to get geospatial metadata
    with rasterio.open('path_to_your_image.tif') as src:
        transform = src.transform
    
    # Convert pixel coordinates to geographic coordinates
    for bbox in bounding_boxes:
        x1, y1, x2, y2, conf, cls = bbox
        lon1, lat1 = rasterio.transform.xy(transform, y1, x1)
        lon2, lat2 = rasterio.transform.xy(transform, y2, x2)
        print(f"Geographic Bounding Box: ({lon1}, {lat1}), ({lon2}, {lat2})")

By following these steps, you should be able to retrieve the bounding box coordinates from detected TIFF images and use them in geospatial software like ArcGIS and QGIS.

If you encounter any issues or have further questions, please provide a minimum reproducible code example so we can better assist you. You can refer to our minimum reproducible example guide for more details. Also, make sure you are using the latest versions of torch and https://github.com/ultralytics/yolov5 to ensure compatibility and access to the latest features and fixes.

Happy coding! 😊

@hafidhhusna

Thank you for your quick response! I also want to ask: if I'm going to perform inference with TIFF files as input, do I have to train the model with TIFF files as well? If so, can you give me a guide for that?

@glenn-jocher
Member

Hello @hafidhhusna,

Thank you for your question! 😊

To answer your query: Yes, if you plan to perform inference with TIFF files as input, it is generally a good practice to train the model with TIFF files as well. This ensures that the model learns the specific characteristics and nuances of your TIFF images, leading to better performance during inference.

Here's a step-by-step guide to help you train YOLOv5 with TIFF files:

  1. Prepare Your Dataset:

    • Ensure your TIFF images and corresponding annotation files are organized properly.
    • YOLOv5 expects images and labels to be in specific directories. Typically, you would have a structure like this:
      /dataset
        /images
          /train
          /val
        /labels
          /train
          /val
      
  2. Modify Data Loading:

    • Since TIFF files are not natively supported by OpenCV (which YOLOv5 uses), you need to modify the data loading code to handle TIFF files. You can use the tifffile library for this purpose.
    • Update the dataloaders.py file to use tifffile.imread() instead of cv2.imread().
  3. Adjust Model Channels:

    • If your TIFF images have more than 3 channels, you need to adjust the input channels in the yolo.py file to match the number of channels in your TIFF images.
  4. Training:

    • Once the above modifications are done, you can proceed with training as usual. Use the train.py script to start training your model with the TIFF images.

Here's a brief example of how you might modify the data loading part:

import tifffile

def load_image(path):
    # Load TIFF image
    img = tifffile.imread(path)
    return img
  5. Inference:
    • For inference, you can use the same modified data loading function to read TIFF files and pass them to the model for prediction.
# Load the YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path_to_your_model.pt')

# Load the TIFF image
image = tifffile.imread('path_to_your_image.tif')

# Perform inference
results = model(image)

# Print results
results.print()

If you encounter any issues or have further questions, please provide a minimum reproducible code example so we can better assist you. You can refer to our minimum reproducible example guide for more details. Also, make sure you are using the latest versions of torch and https://github.com/ultralytics/yolov5 to ensure compatibility and access to the latest features and fixes.

I hope this helps! If you have any more questions, feel free to ask. Happy coding! 🚀

@hafidhhusna

Thank you for the response @glenn-jocher !
I have a problem: my annotations are in GeoJSON format. How can I make those labels work with YOLOv5? Or is there an alternative labelling format besides GeoJSON for my TIFF files?

@glenn-jocher
Member

Hello @hafidhhusna,

Thank you for reaching out! 😊

To use your GeoJSON annotations with YOLOv5, you'll need to convert them into the YOLO format, which consists of text files where each line represents a bounding box in the format: class x_center y_center width height. These values are normalized between 0 and 1 relative to the image dimensions.

Here's a step-by-step guide to help you convert your GeoJSON annotations to YOLO format:

  1. Parse GeoJSON:

    • First, read and parse your GeoJSON file to extract the bounding box coordinates and class labels.
  2. Convert Coordinates:

    • Convert the bounding box coordinates from GeoJSON format to YOLO format. This involves normalizing the coordinates relative to the image dimensions.
  3. Save to YOLO Format:

    • Save the converted annotations to text files in the appropriate directory structure expected by YOLOv5.

Here's a sample script to help you get started:

import json
import os

def convert_geojson_to_yolo(geojson_path, output_dir, image_width, image_height):
    with open(geojson_path, 'r') as f:
        geojson_data = json.load(f)

    for feature in geojson_data['features']:
        # Extract class label and bounding box coordinates
        # (assumes 'class' is already a numeric YOLO class index)
        class_label = feature['properties']['class']
        bbox = feature['geometry']['coordinates'][0]

        # Convert to YOLO format
        # (assumes an axis-aligned rectangular polygon in pixel coordinates,
        #  with opposite corners at ring positions 0 and 2)
        x_min, y_min = bbox[0]
        x_max, y_max = bbox[2]
        x_center = (x_min + x_max) / 2 / image_width
        y_center = (y_min + y_max) / 2 / image_height
        width = (x_max - x_min) / image_width
        height = (y_max - y_min) / image_height

        # Save to YOLO format
        yolo_label = f"{class_label} {x_center} {y_center} {width} {height}\n"
        output_file = os.path.join(output_dir, f"{feature['properties']['image_id']}.txt")
        with open(output_file, 'a') as f_out:
            f_out.write(yolo_label)

# Example usage
convert_geojson_to_yolo('path_to_your_geojson.geojson', 'path_to_output_labels', image_width=1024, image_height=1024)

Alternative Labeling Tools:
If you prefer to use a different labeling tool that directly supports YOLO format, you might consider tools like LabelImg or Roboflow. These tools allow you to annotate images and export the annotations in YOLO format.

Next Steps:

  1. Verify Conversion: Ensure that the converted annotations are correct by visualizing them on the images.
  2. Training: Once the annotations are in YOLO format, you can proceed with training your model using the train.py script in YOLOv5.
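For step 1, converting each YOLO box back to pixel corners makes it easy to draw it on the image for a visual check (a small sketch; the cv2 drawing lines are illustrative and commented out):

```python
def yolo_to_pixels(box, img_w, img_h):
    """Convert a normalized YOLO box (x_c, y_c, w, h) to pixel corners (x1, y1, x2, y2)."""
    x_c, y_c, w, h = box
    return ((x_c - w / 2) * img_w, (y_c - h / 2) * img_h,
            (x_c + w / 2) * img_w, (y_c + h / 2) * img_h)

# To eyeball alignment, draw each converted box on the image, e.g.:
# import cv2
# x1, y1, x2, y2 = map(int, yolo_to_pixels((0.5, 0.5, 0.2, 0.2), 1024, 1024))
# cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
```

Boxes that land in the wrong place here point to a conversion bug rather than a training problem.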

If you encounter any issues or need further assistance, please provide a minimum reproducible code example so we can better assist you. You can refer to our minimum reproducible example guide for more details. Also, ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5 to benefit from the latest features and fixes.

I hope this helps! If you have any more questions, feel free to ask. Happy coding! 🚀

@hafidhhusna

But labelling tools such as LabelImg and Roboflow don't support TIFF files, do they? I once tried to label my TIFF file using Roboflow, but Roboflow just doesn't accept TIFF files as input.

@glenn-jocher
Member

Hello @hafidhhusna,

Thank you for your insightful question! 😊

You are correct that some popular labeling tools like LabelImg and Roboflow may not natively support TIFF files. However, there are a few workarounds and alternative approaches you can consider:

  1. Convert TIFF to PNG/JPEG:

    • One approach is to convert your TIFF files to a more commonly supported format like PNG or JPEG for the purpose of annotation. After labeling, you can convert the annotations back to match your original TIFF files.
    • Here's a quick example using Python to convert TIFF to PNG:
    from PIL import Image
    import os
    
    def convert_tiff_to_png(tiff_path, output_path):
        # Note: PIL handles single-band or RGB(A) TIFFs; stacks with more
        # channels need per-band conversion instead
        with Image.open(tiff_path) as img:
            img.save(output_path, format='PNG')
    
    # Example usage
    convert_tiff_to_png('path_to_your_image.tif', 'output_image.png')
  2. Use QGIS for Annotation:

    • Given that you are working with geospatial data, you might find QGIS to be a powerful tool for annotation. QGIS supports TIFF files and allows you to create and export annotations in various formats, including GeoJSON.
    • Once you have your annotations in GeoJSON format, you can use the conversion script provided earlier to convert them to YOLO format.
  3. Custom Annotation Tool:

    • If you have specific requirements, you might consider developing a custom annotation tool using libraries like tifffile and matplotlib in Python. This allows you to tailor the tool to your exact needs and handle TIFF files directly.
  4. Community Tools:

    • There are also community-developed tools that might support TIFF files. It's worth exploring repositories on GitHub or forums where other users might have developed solutions for similar use cases.

If you decide to convert your TIFF files to PNG or JPEG for annotation, remember to maintain the same aspect ratio and dimensions to ensure the annotations remain accurate when you convert them back.

If you encounter any issues during this process or have further questions, please provide a minimum reproducible code example so we can better assist you. You can refer to our minimum reproducible example guide for more details. Also, ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5 to benefit from the latest features and fixes.

I hope this helps! If you have any more questions, feel free to ask. Happy coding! 🚀

@hafidhhusna

If the TIFF file is converted to PNG, does it keep the geospatial references?

@glenn-jocher
Member

Hello @hafidhhusna,

Thank you for your question! 😊

When converting a TIFF file to PNG, the geospatial references (metadata) are typically not preserved in the PNG file. TIFF files can store extensive metadata, including geospatial information, which is crucial for applications like GIS. However, PNG files do not support this kind of metadata natively.

If maintaining geospatial references is essential for your workflow, you have a couple of options:

  1. Separate Metadata Handling:

    • You can extract the geospatial metadata from the TIFF file before conversion and store it separately. After performing your annotations on the PNG files, you can then re-associate the metadata with the annotations.
    import rasterio
    from PIL import Image
    
    def convert_tiff_to_png_with_metadata(tiff_path, png_path):
        # Open the TIFF file with rasterio to get geospatial metadata
        with rasterio.open(tiff_path) as src:
            metadata = src.meta
            img = src.read()  # shape: (bands, H, W)
        
        # Move bands last for PIL and drop a singleton band axis so that
        # single-band (grayscale) TIFFs also work with Image.fromarray
        img = img.transpose(1, 2, 0).squeeze()
        Image.fromarray(img).save(png_path, format='PNG')
        
        return metadata
    
    # Example usage
    metadata = convert_tiff_to_png_with_metadata('path_to_your_image.tif', 'output_image.png')
    print(metadata)
  2. Use QGIS for Annotation:

    • As mentioned earlier, QGIS is a powerful tool for handling geospatial data and supports TIFF files directly. You can perform your annotations in QGIS and export the annotations in a format that retains the geospatial references.
  3. GeoTIFF:

    • If you need to maintain geospatial references throughout your workflow, consider using GeoTIFF format, which is an extension of TIFF that includes geospatial metadata. Many GIS tools and libraries support GeoTIFF.

If you encounter any issues or need further assistance, please provide a minimum reproducible code example so we can better assist you. You can refer to our minimum reproducible example guide for more details. Also, ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5 to benefit from the latest features and fixes.

I hope this helps! If you have any more questions, feel free to ask. Happy coding! 🚀
