modes/export/ #7933
Replies: 49 comments 154 replies
-
Where can we find working examples of a tf.js exported model?
-
How can I use an exported .engine file for inference on a directory of images?
-
I trained a custom model starting from yolov8n.pt (backbone) and I want to register the model in MLflow in the .engine format. Is this possible directly, without the export step? Has anyone dealt with something similar? Thanks for your help!
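A hedged sketch of one workable route: the export step itself cannot be skipped, because the .engine file is what export() produces, but the resulting file can then be logged to MLflow as a run artifact. Paths and the run name below are hypothetical.

```python
if __name__ == "__main__":
    import mlflow
    from ultralytics import YOLO

    # Export produces the .engine file and returns its path.
    model = YOLO("runs/detect/train/weights/best.pt")
    engine_path = model.export(format="engine")

    # Record the engine file in MLflow as an artifact of a run.
    with mlflow.start_run(run_name="yolov8n-engine"):
        mlflow.log_artifact(engine_path, artifact_path="model")
```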
-
Hi, I really appreciate the awesome work within Ultralytics. I have a simple question: what is the difference between
-
Hello @pderrenger, can you please help me out with how I can use the PaddlePaddle format to extract text from images? Your response is very important to me; I am waiting for your reply.
-
My code:
from ultralytics import YOLO
model = YOLO('yolov8n_web_model/yolov8n.pt')  # load an official model
model = YOLO('/path_to_model/best.pt')
I got an error. The trace log is below:
What you should do instead is wrap
ERROR: input_onnx_file_path: /home/ubuntu/Python/runs/detect/train155/weights/best.onnx
TensorFlow SavedModel: export failure ❌ 7.4s: SavedModel file does not exist at: /home/ubuntu/Python/runs/detect/train155/weights/best_saved_model/{saved_model.pbtxt|saved_model.pb}
What is wrong and what do I need to do to fix it? Thanks a lot
-
Hello! The error I get is "TypeError: Model.export() takes 1 positional argument but 2 were given".
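For what it's worth, export() takes keyword arguments only, so a positional call like model.export("onnx") raises exactly this TypeError. A small sketch:

```python
def export_kwargs(fmt):
    """Pure helper: build the keyword arguments export() expects."""
    return {"format": fmt}

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    # Correct: keyword argument.  model.export("onnx") would raise the
    # "takes 1 positional argument but 2 were given" TypeError.
    model.export(**export_kwargs("onnx"))  # i.e. model.export(format="onnx")
```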
-
Are there any examples of getting the output of a pose estimation model in C++ using a TorchScript file? I'm getting an output of shape (1, 56, 8400) for an input of size (1, 3, 640, 640) with two people in the sample picture. How should I interpret/post-process this output?
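A Python/numpy sketch of the post-processing (the same arithmetic ports directly to C++): assuming the default YOLOv8 pose head, the 56 channels per candidate are 4 box values (cx, cy, w, h), 1 confidence, and 17 keypoints × (x, y, visibility), over 8400 candidate anchors. NMS is still required on the surviving candidates afterwards.

```python
import numpy as np

def decode_pose(output, conf_thres=0.25):
    """Split a (1, 56, 8400) YOLOv8-pose output into boxes, scores, keypoints.
    56 = 4 box values (cx, cy, w, h) + 1 confidence + 17 keypoints * (x, y, vis).
    NMS must still be applied to the surviving candidates."""
    preds = output[0].T                        # (8400, 56)
    scores = preds[:, 4]
    keep = scores > conf_thres
    boxes_cxcywh = preds[keep, :4]
    kpts = preds[keep, 5:].reshape(-1, 17, 3)  # (N, 17, (x, y, visibility))
    return boxes_cxcywh, scores[keep], kpts
```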
-
I trained a yolov5 detection model a little while ago and have successfully converted it to TensorFlow.js. That tfjs model works as expected in code only slightly modified from the example available at https://github.com/zldrobit/tfjs-yolov5-example. My version of the relevant section:
I have now trained a yolov8 detection model on very similar data. The comments in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py#L45-L49 led me to expect the same usage. However, that does not seem to be the case. The v5 model output is a length-4 array of tensors (which is why the destructuring assignment works), but the v8 model output is a single tensor of shape [1, X, 8400], so the example code raises an error complaining that the model result is non-iterable when attempting to destructure. From what I understand, [1, X, 8400] is the expected output shape of the v8 model. Is further processing of the v8 model required, or did I do something wrong during the pt -> tfjs export?
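For anyone hitting the same wall: the single-tensor v8 output can be decoded with a transpose and a per-class max, instead of v5's destructuring. Here is a numpy sketch of the steps (the equivalent tf.transpose/tf.max calls apply in tfjs), assuming a [1, 4 + nc, 8400] detection output:

```python
import numpy as np

def decode_v8(output, conf_thres=0.25):
    """YOLOv8 exports one (1, 4 + nc, 8400) tensor instead of YOLOv5's
    four-tensor output. Decode by transposing, then taking per-class maxima
    (v8 has no separate objectness channel). NMS still follows."""
    preds = output[0].T              # (8400, 4 + nc)
    boxes = preds[:, :4]             # cx, cy, w, h
    class_scores = preds[:, 4:]
    scores = class_scores.max(axis=1)
    classes = class_scores.argmax(axis=1)
    keep = scores > conf_thres
    return boxes[keep], scores[keep], classes[keep]
```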
-
I was wondering if anyone could help me with this code. I exported my custom-trained yolov8n.pt model to .onnx with model.export(format='onnx', int8=True, dynamic=True), but now my code is not working: I am having trouble using the outputs after running inference. My code defines load_image(image_path), draw_bounding_boxes(image, detections, confidence_threshold=0.5), and main(model_path, image_path), with an if __name__ == "__main__": entry point. Error:
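In case a comparison helps, here is a minimal onnxruntime inference sketch. Paths are hypothetical, and plain resizing is used for brevity, although letterbox preprocessing matches Ultralytics more closely:

```python
import numpy as np

def to_blob(image_rgb):
    """Pure helper: HWC uint8 RGB image -> (1, 3, H, W) float32 in [0, 1]."""
    x = image_rgb.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None]

if __name__ == "__main__":
    import cv2
    import onnxruntime as ort

    session = ort.InferenceSession("best.onnx")           # hypothetical path
    img = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (640, 640))                     # letterboxing is closer to Ultralytics
    (out,) = session.run(None, {session.get_inputs()[0].name: to_blob(img)})
    print(out.shape)  # a detection model yields (1, 4 + nc, 8400)
```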
-
Is "batch_size" no longer among the arguments, as it was in previous versions?
-
I converted the model I trained with custom data to TFLite format. Before converting, I set the int8 argument to true, but when I examined the TFLite file on the Netron website, I saw that the input is still float32. Is this normal, or is it a bug? Also, thank you very much for answering every question without getting bored.
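If it helps, the input dtype and quantization parameters can also be checked from Python rather than Netron. A sketch assuming TensorFlow is installed and a hypothetical best_int8.tflite; note that int8 TFLite exports often keep float32 input/output tensors and quantize internally, so a float32 input is not by itself a sign that quantization failed:

```python
import numpy as np

def describe_dtype(np_dtype):
    """Pure helper: human-readable note for an input tensor dtype."""
    return "quantized I/O" if np_dtype in (np.int8, np.uint8) else "float I/O"

if __name__ == "__main__":
    import tensorflow as tf

    interp = tf.lite.Interpreter(model_path="best_int8.tflite")  # hypothetical
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    # Even int8 exports commonly report float32 here, with quantize/
    # dequantize ops inside the graph.
    print(inp["dtype"], inp["quantization"], describe_dtype(inp["dtype"]))
```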
-
!yolo export model=/content/drive/MyDrive/best-1-1.pt format=tflite
export failure ❌ 33.0s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
-
Hi, I have tried all the TFLite export routes to convert best.pt to .tflite but none is working. I have also checked my runtime and all the latest imports (pip install -U ultralytics), and I have also tried the code you gave to someone in the comments, but the issue is not resolving.
Step 1: Export to TensorFlow SavedModel
!yolo export model='/content/drive/MyDrive/best-1-1.pt' format=saved_model
Step 2: Convert the exported SavedModel to TensorFlow Lite
import tensorflow as tf
Save the TFLite model:
with open('/content/drive/MyDrive/yolov8_model.tflite', 'wb') as f:
but the same error comes back.
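For reference, the two steps above can be chained in one script. A sketch assuming the SavedModel export itself succeeds; paths match the Colab example and are hypothetical:

```python
if __name__ == "__main__":
    import tensorflow as tf
    from ultralytics import YOLO

    # Step 1: export to a TensorFlow SavedModel directory
    saved_dir = YOLO("/content/drive/MyDrive/best-1-1.pt").export(format="saved_model")

    # Step 2: convert the SavedModel to TFLite
    converter = tf.lite.TFLiteConverter.from_saved_model(str(saved_dir))
    tflite_model = converter.convert()

    # Save the TFLite model
    with open("/content/drive/MyDrive/yolov8_model.tflite", "wb") as f:
        f.write(tflite_model)
```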
-
Can we export the SAM/MobileSAM model to TensorRT or ONNX?
-
Hi! Export YOLOv8 model to TensorRT:
yolo export model="../../weights/M04-best_V2.pt" format=engine half=True device=0
TensorRT: export failure ❌ 6.3s: 'tensorrt_bindings.tensorrt.IBuilderConfig' object has no attribute 'max_workspace_size'
Any idea? Info:
-
Suppose I have uploaded a model to the cloud; can I load the .pt model through a URL?
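Downloading the file first always works, regardless of whether a given ultralytics version accepts URLs directly. A sketch with a hypothetical URL:

```python
from urllib.parse import urlparse

def filename_from_url(url):
    """Pure helper: derive a local filename from a URL."""
    return urlparse(url).path.rsplit("/", 1)[-1] or "model.pt"

if __name__ == "__main__":
    import urllib.request
    from ultralytics import YOLO

    url = "https://example.com/weights/best.pt"   # hypothetical URL
    local = filename_from_url(url)
    urllib.request.urlretrieve(url, local)        # download once, then load
    model = YOLO(local)
```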
-
How can we set the confidence when using ncnn_model()? I also want to see the frame, so can anyone help me please? I am using this code:
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("best.pt")
# Export the model to NCNN format
model.export(format="ncnn")  # creates '/yolov8n_ncnn_model'
# Load the exported NCNN model
ncnn_model = YOLO("./best_ncnn_model")
# Run inference
results = ncnn_model("bus.jpg")
Also, tell me whether we can further improve the speed, as I am running it on a Raspberry Pi 4 Model B.
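In case it is useful: conf is accepted directly as a predict argument, and results[0].plot() returns an annotated frame. A minimal sketch assuming the NCNN model directory from the code above:

```python
import cv2
from ultralytics import YOLO

ncnn_model = YOLO("./best_ncnn_model")  # exported NCNN model directory

# conf= sets the confidence threshold; a smaller imgsz (a multiple of 32)
# can also speed inference up on a Raspberry Pi.
results = ncnn_model("bus.jpg", conf=0.5, imgsz=320)

annotated = results[0].plot()  # BGR frame with boxes drawn on it
cv2.imshow("frame", annotated)
cv2.waitKey(0)
```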
-
Can we use FP8 for NCNN for YOLOv8n? If yes, then please tell me the procedure and code. Currently I am using this code:
# Load the YOLOv8 model
# Export the model to NCNN format
model.export(format="ncnn")  # creates '/yolov8n_ncnn_model'
-
When I reduce imgsz to 312, it shows many objects detected (like 200) in one frame while there is only one object in the frame to be detected, so kindly tell me the error and also provide the code. Also, tell me whether I should set imgsz=312 in the converting process or the detecting process, as I have the same issue in both.
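One likely culprit, stated with a hedge: YOLOv8's largest stride is 32, so imgsz should be a multiple of 32, and 312 is not (ultralytics normally rounds it, e.g. to 320). Using the same valid size for both export and inference avoids mismatches between the two steps. A small sketch:

```python
def valid_imgsz(size, stride=32):
    """YOLOv8's max stride is 32, so imgsz must be a multiple of 32;
    312 is not. Round up to the next valid size."""
    return ((size + stride - 1) // stride) * stride

# Use the same (valid) size in both places, e.g.:
#   model.export(format="ncnn", imgsz=320)
#   results = ncnn_model("bus.jpg", imgsz=320)
```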
-
When I run my code, the first detection takes 3.5 s and then 0.4 s; what is the reason? Also, my camera starts lagging 3-4 s behind the frame that should be the current one when the servo motor moves. Here is my code; please guide me, as I am making a real-time object detection and tracking robot.

import cv2
# Initialize model and filter
ncnn_model = YOLO("./best_ncnn_model")
def detection(frame):
def kalman_filter_plot(frame, x1, y1):
def main():
cap = cv2.VideoCapture(0)

import pigpio  # uses BCM pin numbering only
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
# Pin assignments
IN1 = 17
IN2 = 27
IN3 = 23
IN4 = 24
GPIO.setup(IN1, GPIO.OUT)
GPIO.setup(IN2, GPIO.OUT)
GPIO.setup(IN3, GPIO.OUT)
GPIO.setup(IN4, GPIO.OUT)
servo1_pin = 5  # for camera up and down movement
servo_main_pin = 18  # for camera left-right movement
object_distance = 15  # 15 cm distance to stop
def move_forward():
def move_backward():
def move_left():
def move_right():
def stop():
def movement(frame_center_x, frame_center_y, x_box, y_box, ultrasonic, speed, dutycycle_s1):
    if frame_center_y != y_box:  # camera up-down
    if ultrasonic > object_distance:
    elif ultrasonic == object_distance:
    else:  # robot rotation on a single point
    return dutycycle_s1
def set_servo_angle(dutycycle, servo_pin):
def camera_servo_reset():
def main_servo_reset():
def main_servo(dutycycle):
GPIO.cleanup()
pigpio.pi().stop()

And the distance measurement from the ultrasonic sensor:
import RPi.GPIO as GPIO
GPIO.setwarnings(False)
# Define GPIO pins for trigger and echo signals
trig_pin = 21  # trigger pin
# Set up GPIO pins (BCM numbering)
GPIO.setmode(GPIO.BCM)
def measure_distance():

It also gives an error when it comes to measuring distance, as it says "0: 640x640 1 ball, 455.3ms". Please guide me.
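The two symptoms described above usually have separate causes: the slow first call pays one-time model loading/initialization costs, and the lag typically comes from OpenCV's capture buffer serving stale frames. A hedged sketch, with a warm-up call and a buffer workaround (the NCNN model path matches the code above; buffer behavior is driver-dependent):

```python
import numpy as np

if __name__ == "__main__":
    import cv2
    from ultralytics import YOLO

    model = YOLO("./best_ncnn_model")
    # Warm up once: the first call includes one-time initialization,
    # which explains the 3.5 s first detection.
    model(np.zeros((320, 320, 3), dtype=np.uint8), imgsz=320, verbose=False)

    cap = cv2.VideoCapture(0)
    # The lag usually comes from buffered frames; shrinking the buffer
    # is driver-dependent but often helps.
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        cap.grab()  # discard a buffered frame so read() is closer to "now"
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, imgsz=320, verbose=False)
```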
-
I have been reading the documentation carefully and I have managed to export a model to ONNX. The results using the tool provided are exactly the ones I need; however, I would like to use the model with only the onnx library plus one other library for resizing the image. When I resize the image and run model inference, the results are not the same as those given by the ultralytics library. I have tried to review all the steps where this could occur, and I believe it is due to how I am providing the data. I have used PIL and cv2 to resize the images but I do not get the result I want; I have even used the LetterBox method, which adds gray pixels without distorting the image, but despite this the results are still not the same as what ultralytics gives me. I have also reviewed various related community projects but I can't get the results I need. Anyway, thanks in advance.
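Here is a sketch of preprocessing that mirrors what Ultralytics does before inference: aspect-preserving resize, gray (value 114) padding, BGR to RGB, scale to [0, 1], NCHW float32. The exact padding rounding may differ slightly from the library's LetterBox, so treat this as an approximation:

```python
import numpy as np

def letterbox_params(h, w, new=640):
    """Pure geometry: scale and padding for Ultralytics-style letterboxing."""
    r = min(new / h, new / w)
    nh, nw = round(h * r), round(w * r)
    top = (new - nh) // 2
    left = (new - nw) // 2
    return r, (nh, nw), (top, left)

def preprocess(img_bgr, new=640):
    """Letterbox onto a new x new gray (114) canvas, BGR->RGB, normalize
    to [0, 1], and reorder to (1, 3, H, W) float32."""
    import cv2
    h, w = img_bgr.shape[:2]
    r, (nh, nw), (top, left) = letterbox_params(h, w, new)
    canvas = np.full((new, new, 3), 114, dtype=np.uint8)
    canvas[top:top + nh, left:left + nw] = cv2.resize(img_bgr, (nw, nh))
    x = canvas[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB
    return np.ascontiguousarray(x.transpose(2, 0, 1))[None]
```

Remember that the box coordinates coming out of the model are then in letterboxed space, so the same r/top/left values are needed to map them back to the original image.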
-
When using ONNX, something went wrong:
-
Describe the issue
I am currently facing significant challenges while attempting to execute YOLOv8-seg.onnx with dynamic batch sizes on GPU using ONNX Runtime for Web. Specifically, the model runs correctly only when the batch size is set to 1. However, increasing the batch size results in false detections and incorrect outputs. Notably, both output0 and output1 terminate with zeros in their data under these conditions.
To reproduce
To optimize performance using GPU acceleration, I am utilizing ONNX Runtime for Web with WebGPU as the execution provider.
1. Export the YOLOv8-seg model to ONNX format, supporting dynamic batch sizes. I attempted to export the model using torch.onnx.export as well, but encountered the same issue.
2. Load the ONNX model using the provided JavaScript snippet, specifying WebGPU as the execution provider.
3. Perform inference with various dynamic batch sizes (e.g., 1, 2, 4).
Execution provider: CUDA (AMD Radeon(TM) R5 Graphics)
@pderrenger
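One way to narrow this down (a sketch, assuming onnxruntime for Python is available; file names follow the default export naming): if the same model gives correct multi-batch outputs in native onnxruntime, the problem is more likely in the WebGPU execution provider than in the export itself.

```python
import numpy as np

def batch_input(bs, imgsz=640):
    """Pure helper: a zero NCHW input tensor for a given batch size."""
    return np.zeros((bs, 3, imgsz, imgsz), dtype=np.float32)

if __name__ == "__main__":
    import onnxruntime as ort
    from ultralytics import YOLO

    # Export with a dynamic batch axis, then sanity-check two batch
    # sizes natively before debugging the browser/WebGPU path.
    YOLO("yolov8n-seg.pt").export(format="onnx", dynamic=True)
    sess = ort.InferenceSession("yolov8n-seg.onnx")
    name = sess.get_inputs()[0].name
    for bs in (1, 2):
        outs = sess.run(None, {name: batch_input(bs)})
        print(bs, [o.shape for o in outs])
```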
-
Hi, I trained a custom segmentation model, but when I want to export the .pt file to TFLite format I get the following error.
-
Hello, when I used the example code:
from ultralytics import YOLO
there is a bug:
PS D:\document\2024-6-29\transfer> & "D:/Program Files (x86)/anaconda/envs/tflite/python.exe" d:/document/2024-6-29/transfer/trans.py
PyTorch: starting from 'yolov8n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)
TensorFlow SavedModel: starting export with tensorflow 2.13.0...
ONNX: starting export with onnx 1.16.1 opset 17...
ERROR: input_onnx_file_path: yolov8n.onnx
The fact is: I don't know how to deal with it :(
-
I use a Python 3.8 environment on the Windows 11 operating system, ultralytics version 8.2.45.
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt")  # load an official model
model = YOLO("./best.pt")  # I trained the model myself
# Export the model
model.export(format="tflite")
Error message:
TensorFlow SavedModel: starting export with tensorflow 2.12.0...
ONNX: starting export with onnx 1.16.1 opset 17...
ERROR: input_onnx_file_path: best.onnx
-
If I have my best.pt file from a YOLOv8 custom-dataset training, how could I export it to TFLite?
-
Hello,
I tried the async version, but it took the same amount of time.
-
Why does it keep getting stuck at this point when I convert .pt to TFLite:
-
modes/export/
Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML and more for deployment. Explore now!
https://docs.ultralytics.com/modes/export/