
Exporting ONNX Model with Fixed Batch Size of 1 Using export_tensorrt_engine #508

Closed · laugh12321 opened this issue Dec 31, 2023 · 0 comments · Fixed by #509

Comments

@laugh12321 (Contributor)

🐛 Describe the bug

Issue Overview

When using export_tensorrt_engine to export an ONNX model with the EfficientNMS_TRT plugin inserted, the exported model's output batch_size remains fixed at 1, regardless of the batch_size set in the input_sample.

Steps to Reproduce

  1. Use export_tensorrt_engine to export an ONNX model.
  2. Set different batch_size values in the input_sample.
  3. Observe the output batch_size of the exported ONNX model (see the repro sketch below).
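
For concreteness, a minimal reproduction sketch follows. Only the function name export_tensorrt_engine appears in this report, so the import path and keyword names below are hypothetical; adjust them to the actual API.

```python
import torch

# Hypothetical import -- the real package path is not given in this report:
# from <package> import export_tensorrt_engine

# Stand-in for a detection model whose export inserts EfficientNMS_TRT.
model = ...

# Step 2: pick a batch_size larger than 1 for the input_sample (here 4).
input_sample = torch.randn(4, 3, 640, 640)

# Step 1: export to ONNX. The keyword names here are assumptions.
# export_tensorrt_engine(model, input_sample=input_sample,
#                        onnx_path="model_with_nms.onnx")

# Step 3: inspect the exported model's outputs (see the snippet under
# "Actual Behavior"); the batch dimension comes back as 1 instead of 4.
```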

Expected Behavior

The exported ONNX model should adjust its output batch_size dynamically to match the input_sample's batch_size, instead of being fixed at 1.
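
For reference, this is how a dynamic batch dimension is normally declared when exporting through plain torch.onnx.export; a sketch of the expected behavior on a toy model, not the library's actual export code:

```python
import torch

model = torch.nn.Conv2d(3, 16, 3)  # toy model for illustration
dummy = torch.randn(4, 3, 640, 640)

# Marking dim 0 of the input and output as dynamic lets the exported ONNX
# model accept any batch_size rather than baking in a fixed one.
torch.onnx.export(
    model,
    dummy,
    "toy_dynamic_batch.onnx",
    input_names=["images"],
    output_names=["features"],
    dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},
)
```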

Actual Behavior

The exported ONNX model's outputs always have a fixed batch_size of 1.
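
The fixed output batch dimension can be confirmed by inspecting the exported model's graph outputs with the onnx package (1.14.0 in the environment below); the file name here is a placeholder:

```python
import onnx

# Load the exported model (path is a placeholder for your exported file).
model = onnx.load("model_with_nms.onnx")

# Print each graph output's shape. With the bug present, the first (batch)
# dimension of every output is 1 even when input_sample used a larger batch.
for output in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)
```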

Additional Information

example

Versions

PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders) 13.2.0
Clang version: 17.0.5
CMake version: version 3.27.8
Libc version: N/A

Python version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 546.33
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=2100
DeviceID=CPU0
Family=198
L2CacheSize=2048
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2100
Name=12th Gen Intel(R) Core(TM) i7-12700
ProcessorType=3
Revision=

Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] onnx==1.14.0
[pip3] onnx-graphsurgeon==0.3.12
[pip3] onnxruntime==1.16.3
[pip3] onnxsim==0.4.35
[pip3] paddle2onnx==1.0.6
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[conda] blas 1.0 mkl defaults
[conda] mkl 2023.1.0 h6b88ed4_46358 defaults
[conda] mkl-service 2.4.0 py310h2bbff1b_1 defaults
[conda] mkl_fft 1.3.8 py310h2bbff1b_0 defaults
[conda] mkl_random 1.2.4 py310h59b6b97_0 defaults
[conda] numpy 1.23.0 pypi_0 pypi
[conda] pytorch-cuda 11.7 h16d0643_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.1+cu117 pypi_0 pypi
[conda] torchaudio 2.0.2+cu117 pypi_0 pypi
[conda] torchvision 0.15.2+cu117 pypi_0 pypi
