Inference with ONNX exported model #11882

Answered by karlhm76
karlhm76 asked this question in Q&A

I have worked it out.

The model input doesn't need to be normalised.

The outputs don't need to be adjusted with a sigmoid function.

I have another issue now, but I don't know whether it is a problem with YOLO or ORT.

The final code was:

import torch
import torchvision
import onnx
import onnxruntime as ort
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Load the YOLOv5 model in ONNX format
ort_session = ort.InferenceSession("best.onnx")

# Preprocess the input image for YOLOv5
image_path = "20221128_00001L_00423.jpg"
image = Image.open(image_path)  # Load the image using PIL
original_width = image.width
original_heig…
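
For completeness, below is a minimal sketch of how the rest of the pipeline might look. It assumes a standard YOLOv5 export with a 640x640 input and a single output of shape (1, num_predictions, 5 + num_classes); the confidence threshold, the plain resize (no letterbox padding), and the output layout are assumptions, and non-max suppression is omitted.

import numpy as np
import onnxruntime as ort
from PIL import Image

INPUT_SIZE = 640        # assumed export size; adjust to match the model
CONF_THRESHOLD = 0.25   # arbitrary example threshold

ort_session = ort.InferenceSession("best.onnx")

image = Image.open("20221128_00001L_00423.jpg").convert("RGB")
original_width, original_height = image.size

# Resize to the network input size and convert HWC uint8 -> NCHW float32.
# As noted above, the pixel values are fed in without further normalisation.
resized = image.resize((INPUT_SIZE, INPUT_SIZE))
input_tensor = np.asarray(resized, dtype=np.float32)
input_tensor = np.transpose(input_tensor, (2, 0, 1))[np.newaxis, ...]

# A YOLOv5 export typically returns one output per image with rows of
# (x, y, w, h, objectness, class scores...), already in input-pixel coordinates.
input_name = ort_session.get_inputs()[0].name
outputs = ort_session.run(None, {input_name: input_tensor})
predictions = outputs[0][0]

# Filter by objectness and rescale boxes from the 640x640 input
# back to the original image size.
keep = predictions[:, 4] > CONF_THRESHOLD
boxes = predictions[keep, :4]
scores = predictions[keep, 4]
class_ids = predictions[keep, 5:].argmax(axis=1)

scale_x = original_width / INPUT_SIZE
scale_y = original_height / INPUT_SIZE
x1 = (boxes[:, 0] - boxes[:, 2] / 2) * scale_x
y1 = (boxes[:, 1] - boxes[:, 3] / 2) * scale_y
x2 = (boxes[:, 0] + boxes[:, 2] / 2) * scale_x
y2 = (boxes[:, 1] + boxes[:, 3] / 2) * scale_y

for bx1, by1, bx2, by2, score, cls in zip(x1, y1, x2, y2, scores, class_ids):
    print(f"class {cls}: ({bx1:.0f}, {by1:.0f}) -> ({bx2:.0f}, {by2:.0f}), score {score:.2f}")

Overlapping detections would still need a non-max suppression step (for example torchvision.ops.nms on the rescaled corner boxes) before the results are usable.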
