
How can a model exported to ONNX be used by OpenCV? #49

Open
wwzh2015 opened this issue Jul 8, 2022 · 27 comments

Comments
@wwzh2015 commented Jul 8, 2022

opencv 4.6.0

@YaoQ commented Jul 10, 2022

When I export the official YOLOv7 model to ONNX with export.py from the u5 git branch and then run inference with OpenCV DNN, it works well.

But when I train YOLOv7 (using cfg/training/yolov7.yaml, i.e. the IDetect head) on a custom dataset, I export it to ONNX in the same way.

OpenCV then reports the following error when it loads the ONNX model:

create yolov7 object detection ....
[ERROR:0@0.270] global /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp (906) handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 1 outputs: [Mul]:(461) from domain='ai.onnx'
Traceback (most recent call last):
  File "/home/jingfeng/project/yolov7/weights/yolov7_dnn.py", line 188, in <module>
    cocoDetect = yolov7()
  File "/home/jingfeng/project/yolov7/weights/yolov7_dnn.py", line 73, in __init__
    self.net = cv2.dnn.readNetFromONNX('yolov7_gun.onnx')
cv2.error: OpenCV(4.5.5) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:928: error: (-2:Unspecified error) in function 'handleNode'
> Node [Mul@ai.onnx]:(461) parse error: OpenCV(4.5.5) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1899: error: (-213:The function/feature is not implemented) Different shapes case is not supported with constant inputs: Mul in function 'parseMul'

I also simplified the ONNX model and tried different opsets, but I still failed to fix the issue.

Something is different between the official YOLOv7 model and my trained model, but I have not found it yet.

If anyone has an idea, I would appreciate it.

@WongKinYiu (Owner)

We use the equation in Section 4.4 of the YOLOR paper to re-parameterize IDetect into Detect by merging the implicit knowledge into the convolutional layers. So the training-time cfgs are in cfg/training, and the inference-time cfgs are in cfg/deploy.
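For anyone curious what that merge looks like in code, here is a minimal PyTorch sketch of folding the ImplicitA/ImplicitM layers of an IDetect 1x1 conv into a plain conv. It mirrors the math used in the repository's reparameterization notebook, but the function name, argument names, and shapes below are illustrative, not part of the project's API:

import torch

def merge_implicit(conv_w, conv_b, ia, im):
    """Fold YOLOR implicit knowledge into a 1x1 detection conv.

    IDetect computes y = im * conv(x + ia) per output channel, which equals a
    plain conv with W' = im * W and b' = im * (b + W @ ia).

    conv_w: (out_c, in_c, 1, 1), conv_b: (out_c,)
    ia:     (1, in_c, 1, 1),     im:     (1, out_c, 1, 1)
    """
    im_vec = im.view(-1)                       # (out_c,)
    ia_vec = ia.view(-1)                       # (in_c,)
    w2d = conv_w.view(conv_w.shape[0], -1)     # (out_c, in_c)
    new_b = im_vec * (conv_b + w2d @ ia_vec)   # b' = im * (b + W·ia)
    new_w = conv_w * im_vec.view(-1, 1, 1, 1)  # W' = im * W (per output channel)
    return new_w, new_b

After this fold, the IDetect head behaves exactly like a plain Detect head, which is why the cfg/deploy configs can drop the implicit layers.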

@pewdspie24

@YaoQ can you upload and send me the ONNX converted file that works with OpenCV DNN, please? I've converted the base model (YOLOv7) and the YOLOv7x to ONNX but none of that works.

@wwzh2015 (Author)

@YaoQ can you upload and send me the ONNX converted file that works with OpenCV DNN, please? I've converted the base model (YOLOv7) and the YOLOv7x to ONNX but none of that works.

Is your OpenCV 4.6.0? That also doesn't support yolov5s.onnx, but OpenCV 4.5.5 worked fine!
opencv/opencv#22222

@pewdspie24

@YaoQ can you upload and send me the ONNX converted file that works with OpenCV DNN, please? I've converted the base model (YOLOv7) and the YOLOv7x to ONNX but none of that works.

Is your OpenCV 4.6.0? That also doesn't support yolov5s.onnx, but OpenCV 4.5.5 worked fine! opencv/opencv#22222

I use OpenCV v4.5.5. YOLOv4 Darknet and YOLOv5 exported to ONNX are working fine on my system; only v7 is not working at the moment.

@wwzh2015 (Author)

@YaoQ can you upload and send me the ONNX converted file that works with OpenCV DNN, please? I've converted the base model (YOLOv7) and the YOLOv7x to ONNX but none of that works.

Is your OpenCV 4.6.0? That also doesn't support yolov5s.onnx, but OpenCV 4.5.5 worked fine! opencv/opencv#22222

I use OpenCV v4.5.5. YOLOv4 Darknet and YOLOv5 exported to ONNX are working fine on my system; only v7 is not working at the moment.

Just try opencv 4.6.0 please.

@xinsuinizhuan commented Jul 12, 2022

Same here. I found some differences: #99

@YaoQ commented Jul 14, 2022

Please check this issue; someone there exported an ONNX model that works with OpenCV DNN and ONNX Runtime in C++ and Python.

#145

@msly commented Jul 15, 2022

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with
https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb
and change these lines in the notebook to match your own class count:

nc = 1  # change to your nc
model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device)
for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model.
3. Run it the same way as YOLOv5 6.1:
https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py
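As a rough illustration of step 3, here is a minimal Python sketch of loading the exported model with OpenCV DNN and decoding a YOLOv5-style (1, 25200, 5 + nc) output. The file names, input size, and thresholds are placeholders, and the letterboxing and coordinate rescaling done in main_dnn.py are omitted:

import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("yolov7-reparam.onnx")   # placeholder file name

img = cv2.imread("test.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
pred = net.forward()[0]          # (25200, 5 + nc): cx, cy, w, h, obj, class scores

boxes, scores, class_ids = [], [], []
for row in pred:
    obj = float(row[4])
    if obj < 0.25:
        continue
    cls_scores = row[5:]
    cls = int(np.argmax(cls_scores))
    conf = obj * float(cls_scores[cls])
    if conf < 0.25:
        continue
    cx, cy, w, h = row[:4]       # still in the 640x640 network input space
    boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
    scores.append(conf)
    class_ids.append(cls)

keep = cv2.dnn.NMSBoxes(boxes, scores, 0.25, 0.45)
for i in np.array(keep).flatten():
    print(class_ids[i], scores[i], boxes[i])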

@YaoQ commented Jul 16, 2022

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

Yes, it works!

@wwzh2015 (Author)

OpenCV DNN successfully runs a custom YOLOv7.
1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb
nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):
2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

Yes, it works!

Please try to use opencv 4.6.0!

@knoppmyth

Please try to use opencv 4.6.0!
I can confirm, this does work with OpenCV 4.6.0.

@wwzh2015 (Author) commented Jul 18, 2022

Please try to use opencv 4.6.0!
I can confirm, this does work with OpenCV 4.6.0.

When I used the following C++ code to detect, it did not work.

The code:

#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>
#include <sstream>
#include <iomanip>
#include <chrono>
#include <cstring>

std::vector<std::string> load_class_list()
{
std::vector<std::string> class_list;
std::ifstream ifs("class.names");
std::string line;
while (getline(ifs, line))
{
class_list.push_back(line);
}
return class_list;
}

void load_net(cv::dnn::Net &net, bool is_cuda)
{
auto result = cv::dnn::readNet("yolov5s.onnx");
if (is_cuda)
{
std::cout << "Attempty to use CUDA\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);
}
else
{
std::cout << "Running on CPU\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
}
net = result;
}

const std::vector<cv::Scalar> colors = {cv::Scalar(255, 255, 0), cv::Scalar(0, 255, 0), cv::Scalar(0, 255, 255), cv::Scalar(255, 0, 0)};

const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.2;
const float NMS_THRESHOLD = 0.4;
const float CONFIDENCE_THRESHOLD = 0.4;

struct Detection
{
int class_id;
float confidence;
cv::Rect box;
};

cv::Mat format_yolov5(const cv::Mat &source) {
int col = source.cols;
int row = source.rows;
int _max = MAX(col, row);
cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
source.copyTo(result(cv::Rect(0, 0, col, row)));
return result;
}

void detect(cv::Mat &image, cv::dnn::Net &net, std::vector<Detection> &output, const std::vector<std::string> &className) {
cv::Mat blob;

auto input_image = format_yolov5(image);

cv::dnn::blobFromImage(input_image, blob, 1./255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);
net.setInput(blob);
std::vector<cv::Mat> outputs;
net.forward(outputs, net.getUnconnectedOutLayersNames());

float x_factor = input_image.cols / INPUT_WIDTH;
float y_factor = input_image.rows / INPUT_HEIGHT;

float *data = (float *)outputs[0].data;

const int dimensions = 85;
const int rows = 25200;

std::vector<int> class_ids;
std::vector<float> confidences;
std::vector<cv::Rect> boxes;

for (int i = 0; i < rows; ++i) {

float confidence = data[4];
if (confidence >= CONFIDENCE_THRESHOLD) {

    float * classes_scores = data + 5;
    cv::Mat scores(1, className.size(), CV_32FC1, classes_scores);
    cv::Point class_id;
    double max_class_score;
    minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
    if (max_class_score > SCORE_THRESHOLD) {

        confidences.push_back(confidence);

        class_ids.push_back(class_id.x);

        float x = data[0];
        float y = data[1];
        float w = data[2];
        float h = data[3];
        int left = int((x - 0.5 * w) * x_factor);
        int top = int((y - 0.5 * h) * y_factor);
        int width = int(w * x_factor);
        int height = int(h * y_factor);
        boxes.push_back(cv::Rect(left, top, width, height));
    }

}

data += 85;

}

std::vector<int> nms_result;
cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);
for (int i = 0; i < nms_result.size(); i++) {
int idx = nms_result[i];
Detection result;
result.class_id = class_ids[idx];
result.confidence = confidences[idx];
result.box = boxes[idx];
output.push_back(result);
}
}

int main(int argc, char **argv)
{

std::vector<std::string> class_list = load_class_list();

cv::Mat frame;
cv::VideoCapture capture("1.mp4");
if (!capture.isOpened())
{
std::cerr << "Error opening video file\n";
return -1;
}

bool is_cuda = argc > 1 && strcmp(argv[1], "cuda") == 0;

cv::dnn::Net net;
load_net(net, is_cuda);

auto start = std::chrono::high_resolution_clock::now();
int frame_count = 0;
float fps = -1;
int total_frames = 0;

while (true)
{
capture.read(frame);
if (frame.empty())
{
std::cout << "End of stream\n";
break;
}

std::vector<Detection> output;
detect(frame, net, output, class_list);

frame_count++;
total_frames++;

int detections = output.size();

for (int i = 0; i < detections; ++i)
{

    auto detection = output[i];
    auto box = detection.box;
    auto classId = detection.class_id;
    const auto color = colors[classId % colors.size()];
    cv::rectangle(frame, box, color, 3);

    cv::rectangle(frame, cv::Point(box.x, box.y - 20), cv::Point(box.x + box.width, box.y), color, cv::FILLED);
    cv::putText(frame, class_list[classId].c_str(), cv::Point(box.x, box.y - 5), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
}

if (frame_count >= 30)
{

    auto end = std::chrono::high_resolution_clock::now();
    fps = frame_count * 1000.0 / std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();

    frame_count = 0;
    start = std::chrono::high_resolution_clock::now();
}

if (fps > 0)
{

    std::ostringstream fps_label;
    fps_label << std::fixed << std::setprecision(2);
    fps_label << "FPS: " << fps;
    std::string fps_label_str = fps_label.str();

    cv::putText(frame, fps_label_str.c_str(), cv::Point(10, 25), cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(0, 0, 255), 2);
}

cv::imshow("output", frame);

if (cv::waitKey(1) != -1)
{
    capture.release();
    std::cout << "finished by user\n";
    break;
}

}

std::cout << "Total frames: " << total_frames << "\n";

return 0;
}

@msly commented Jul 19, 2022

Please try to use opencv 4.6.0!

cv2.__version__
'4.6.0'

@linuxfedora2020 commented Jul 20, 2022

I used transfer learning on YOLOv7 to train on my custom dataset with 3 classes.

Transfer learning: (Changed batch-size to 8)
python train.py --workers 8 --device 0 --batch-size 8 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml

Inference:
python detect.py --weights ./runs/train/yolov7-custom/weights/last.pt --conf 0.5 --img-size 640 --source inference/images/test_image.jpg

The resulting test image is OK.

Then I converted best.pt to ONNX:
python export.py --weights runs/train/yolov7-custom/weights/best.pt --include torchscript onnx

That gave me best.onnx,

but when I use OpenCV 4.6.0 readNet to load this best.onnx, I get an exception.

However, if I download the pre-trained yolov7.pt and convert it to ONNX with the same command:
python export.py --weights ../../yolov7/yolov7.pt --include torchscript onnx

the output yolov7.onnx loads in OpenCV 4.6.0 readNet (same source) without exception. So the test OpenCV code should work, but the converted custom YOLOv7 ONNX does not work with it.

Does anyone know why? Thanks

My custom yaml, pt, and onnx files can be downloaded from:
https://drive.google.com/drive/folders/1ukP2zAY2vwYhe75peCHG60ITnXFbqQsW?usp=sharing
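For anyone reproducing this, a minimal check of whether OpenCV can parse a given export is just the load call (the file name below is a placeholder); it raises cv2.error naming the offending ONNX node when an operator is unsupported:

import cv2

net = cv2.dnn.readNet("best.onnx")   # throws cv2.error on unsupported nodes
print("loaded:", not net.empty())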

@terryll commented Jul 29, 2022

Thanks for sharing the files.
I was able to use your best.pt and make it run on OpenCV 4.6.0 without error.
I think you are missing this step:
Re-parameterize the trained best.pt: https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb
Can you provide an image with the custom objects? I can verify the detection.
Attached is the .pt file after reparam.

@linuxfedora2020

An image with a custom object: https://drive.google.com/file/d/1pw72rCi8U3TNRkAVRD4G5RzbiujUz635/view?usp=sharing
Where is your attached pt file?

I tried to create a .py file with the following code to do the reparam:

# import
from copy import deepcopy
from models.yolo import Model
import torch
from utils.torch_utils import select_device, is_parallel
import yaml

nc = 3  # change with your nc
device = select_device('0', batch_size=1)

# model trained by cfg/training/*.yaml
ckpt = torch.load('cfg/training/yolov7-custom.pt', map_location=device)

# reparameterized model in cfg/deploy/*.yaml
model = Model('cfg/training/yolov7-custom.yaml', ch=3, nc=nc).to(device)

with open('cfg/training/yolov7-custom.yaml') as f:
    yml = yaml.load(f, Loader=yaml.SafeLoader)
anchors = len(yml['anchors'])

# copy intersect weights
state_dict = ckpt['model'].float().state_dict()
exclude = []
intersect_state_dict = {k: v for k, v in state_dict.items() if k in model.state_dict() and not any(x in k for x in exclude) and v.shape == model.state_dict()[k].shape}
model.load_state_dict(intersect_state_dict, strict=False)
model.names = ckpt['model'].names
model.nc = ckpt['model'].nc

# reparametrized YOLOR
for i in range((nc + 5) * 3):
    model.state_dict()['model.105.m.0.weight'].data[i, :, :, :] *= state_dict['model.105.im.0.implicit'].data[:, i, ::].squeeze()
    model.state_dict()['model.105.m.1.weight'].data[i, :, :, :] *= state_dict['model.105.im.1.implicit'].data[:, i, ::].squeeze()
    model.state_dict()['model.105.m.2.weight'].data[i, :, :, :] *= state_dict['model.105.im.2.implicit'].data[:, i, ::].squeeze()
model.state_dict()['model.105.m.0.bias'].data += state_dict['model.105.m.0.weight'].mul(state_dict['model.105.ia.0.implicit']).sum(1).squeeze()
model.state_dict()['model.105.m.1.bias'].data += state_dict['model.105.m.1.weight'].mul(state_dict['model.105.ia.1.implicit']).sum(1).squeeze()
model.state_dict()['model.105.m.2.bias'].data += state_dict['model.105.m.2.weight'].mul(state_dict['model.105.ia.2.implicit']).sum(1).squeeze()
model.state_dict()['model.105.m.0.bias'].data *= state_dict['model.105.im.0.implicit'].data.squeeze()
model.state_dict()['model.105.m.1.bias'].data *= state_dict['model.105.im.1.implicit'].data.squeeze()
model.state_dict()['model.105.m.2.bias'].data *= state_dict['model.105.im.2.implicit'].data.squeeze()

# model to be saved
ckpt = {'model': deepcopy(model.module if is_parallel(model) else model).half(),
        'optimizer': None,
        'training_results': None,
        'epoch': -1}

# save reparameterized model
torch.save(ckpt, 'cfg/deploy/yolov7-custom-reparam.pt')

After running it, I got cfg/deploy/yolov7-custom-reparam.pt:
https://drive.google.com/file/d/1ac_Y3BFvtVtbhiU4To-E4_WSuhmcqAfK/view?usp=sharing

convert to onnx
python export.py --weights ../../yolov7/cfg/deploy/yolov7-custom-reparam.pt --include onnx

Got yolov7-custom-reparam.onnx
https://drive.google.com/file/d/1SA5o9OHyNrj7GXP-EORmPxKbd2Rpb0B0/view?usp=sharing

And when I try to load yolov7-custom-reparam.onnx in OpenCV 4.6.0 (CUDA, DNN),
I get an exception on this->net = readNet(config.modelpath);

[ERROR:0@9.701] global C:\opencv\opencv-4.6.0\modules\dnn\src\onnx\onnx_importer.cpp (1021) cv::dnn::dnn4_v20220524::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Range]:(onnx_node!Range_341) from domain='ai.onnx'

@holger-prause

In the section "# reparameterized model in cfg/deploy/*.yaml"
you do "with open('cfg/training/yolov7-custom.yaml') as f:".

Shouldn't you take a config file from the deploy folder there? Also, why not use this script: https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

I am still new and will try things out in the next days; I just stumbled over this thread as I need to run on OpenCV too.
Hope it helped.

@Nuwan1654 commented Aug 23, 2022

We use the equation in Section 4.4 of the YOLOR paper to re-parameterize IDetect into Detect by merging the implicit knowledge into the convolutional layers. So the training-time cfgs are in cfg/training, and the inference-time cfgs are in cfg/deploy.

Can someone briefly explain the difference between the Detect and IDetect functions? I used the u5 branch to export the ONNX model and was able to run inference with OpenCV without re-parameterization, but I used the yolov7.pt trained on the COCO dataset. Does this re-parameterization affect the inference speed?

@Nuwan1654

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

I am getting (-215:Assertion failed) total(srcShape, srcRange.start, srcRange.end) == maskTotal in function 'computeShapeByReshapeMask' after the re-parameterization. Any idea why?

@KONGYOUYL

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

Really appreciated, it works. My env: OpenCV 4.5.5, export.py (torch > 1.7.0 && torch < 1.12.0).

@JimXu1989

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

Hi, I use the official yolov7.pt and export the model with export.py from the u5 branch, and it shows the following error:

net = cv2.dnn.readNet('yolov7.onnx')
[ERROR:0@26.489] global /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/onnx/onnx_importer.cpp (1018) handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [Identity]:(onnx_node!Identity_0) from domain='ai.onnx'
Traceback (most recent call last):
File "", line 1, in
cv2.error: OpenCV(4.6.0) /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/onnx/onnx_importer.cpp:1040: error: (-2:Unspecified error) in function 'handleNode'
Node [Identity@ai.onnx]:(onnx_node!Identity_0) parse error: OpenCV(4.6.0) /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/layer.cpp:246: error: (-215:Assertion failed) inputs.size() in function 'getMemoryShapes'

Should I reparam the model? But when I reparam the official model, it shows the following error:
[ERROR:0@0.253] global /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/onnx/onnx_importer.cpp (1018) handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [Identity]:(onnx_node!Identity_0) from domain='ai.onnx'
Traceback (most recent call last):
File "/home/xss/SoftWare/yolov7-opencv-onnxrun-cpp-py/opencv/main.py", line 163, in
yolov7_detector = YOLOv7(args.modelpath, conf_thres=args.confThreshold, iou_thres=args.nmsThreshold)
File "/home/xss/SoftWare/yolov7-opencv-onnxrun-cpp-py/opencv/main.py", line 13, in init
self.net = cv2.dnn.readNet(path)
cv2.error: OpenCV(4.6.0) /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/onnx/onnx_importer.cpp:1040: error: (-2:Unspecified error) in function 'handleNode'

Node [Identity@ai.onnx]:(onnx_node!Identity_0) parse error: OpenCV(4.6.0) /home/xss/SoftWare/opencv/opencv-4.6.0/modules/dnn/src/layer.cpp:246: error: (-215:Assertion failed) inputs.size() in function 'getMemoryShapes'

@Nuwan1654

OpenCV DNN successfully runs a custom YOLOv7.
1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb
nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):
2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

I am getting (-215:Assertion failed) total(srcShape, srcRange.start, srcRange.end) == maskTotal in function 'computeShapeByReshapeMask' after the re-parameterization. Any idea why?

This was due to the wrong --imgsz being used when exporting to ONNX.
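In other words, the size passed to blobFromImage (and the fixed-size Reshape baked into the export) has to match the --imgsz used at export time; a mismatch typically shows up as the computeShapeByReshapeMask assertion above. As a hypothetical example, a model exported with --imgsz 640 640 must be fed a 640x640 blob:

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)  # size must equal the export --imgsz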

@ajeshmonr7777

@YaoQ can you upload and send me the ONNX converted file that works with OpenCV DNN, please? I've converted the base model (YOLOv7) and the YOLOv7x to ONNX but none of that works.

@YaoQ Can you please upload the working ONNX file to Google Drive and post it here? I'm not able to convert. I need YOLOv7.onnx and YOLOv7-tiny.onnx.

@ajeshmonr7777

Please try to use opencv 4.6.0!
I can confirm, this does work with OpenCV 4.6.0.

@knoppmyth Do you have .onnx files of yolov7 and yolov7-tiny? If you have them, please upload.

@ajeshmonr7777

I used transfer learning on YOLOv7 to train on my custom dataset with 3 classes.

Transfer learning (batch size changed to 8): python train.py --workers 8 --device 0 --batch-size 8 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml

Inference: python detect.py --weights ./runs/train/yolov7-custom/weights/last.pt --conf 0.5 --img-size 640 --source inference/images/test_image.jpg

The resulting test image is OK.

Then I converted best.pt to ONNX: python export.py --weights runs/train/yolov7-custom/weights/best.pt --include torchscript onnx

That gave me best.onnx,

but when I use OpenCV 4.6.0 readNet to load this best.onnx, I get an exception.

However, if I download the pre-trained yolov7.pt and convert it to ONNX with the same command: python export.py --weights ../../yolov7/yolov7.pt --include torchscript onnx

the output yolov7.onnx loads in OpenCV 4.6.0 readNet (same source) without exception. So the test OpenCV code should work, but the converted custom YOLOv7 ONNX does not work with it.

Does anyone know why? Thanks

My custom yaml, pt, and onnx files can be downloaded from: https://drive.google.com/drive/folders/1ukP2zAY2vwYhe75peCHG60ITnXFbqQsW?usp=sharing

@linuxfedora2020 If you have that yolov7.onnx file, can you upload it?

@ZainabHomoud commented Oct 30, 2022

OpenCV DNN successfully runs a custom YOLOv7.

1. Re-parameterize the trained best.pt with https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb

nc = 1  # change to your nc; model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=nc).to(device); for i in range((nc + 5) * 3):

2. Use the branch https://github.com/WongKinYiu/yolov7/tree/u5 to export the ONNX model. 3. Run it the same way as YOLOv5 6.1: https://github.com/VITA-Alchemy/yolov5_6.0_opencvdnn_python/blob/main/main_dnn.py

Thank you so much! This actually works very well. I followed these steps and now I can use the YOLOv7 ONNX file in OpenCV. If anyone wants to use an ONNX file in OpenCV, these steps are safe to follow.
