Optimum Workspace Value Depending on GPU Memory During TensorRT Export #14038

Answered by pderrenger
u-uzun asked this question in Q&A

Greetings!

Thank you for reaching out and for your detailed questions. It's great to hear about your progress with deploying YOLOv8 on your Jetson Orin Nano 4GB. Let's address your queries one by one:

Confirming Export on the Target Device

You are correct. Exporting the model to TensorRT should ideally be done on the device where it will run inference. This ensures that the calibration and optimization processes are tailored to the specific hardware, which can significantly improve performance.
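As a minimal sketch of what that looks like in practice (the model file and workspace value here are illustrative, not a recommendation for the 4GB Orin Nano specifically), the export would be run on the Jetson itself with the Ultralytics CLI:

```shell
# Run this on the Jetson Orin Nano, not on a desktop machine:
# TensorRT selects and times kernels for the GPU it sees at build time,
# so an engine built elsewhere is not portable or optimal.
yolo export model=yolov8n.pt format=engine device=0 half=True workspace=2
```

The resulting `.engine` file is then loaded for inference on that same device.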

Optimum Workspace Size

Regarding the workspace size during TensorRT export, you're right that there's a balance to strike. The workspace size should be large enough to allow TensorRT to explore var…
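As a rough illustration of that balance (the function and the 50% fraction below are assumptions for demonstration, not an Ultralytics or TensorRT recommendation), one could cap the workspace at a share of the memory left free after the model loads:

```python
# Hypothetical sizing helper: give TensorRT a generous slice of free GPU
# memory to try tactics in, while keeping a small floor so export can run.
def workspace_gib(free_gib: float, fraction: float = 0.5, floor: float = 0.25) -> float:
    """Return a workspace size in GiB: `fraction` of free memory, at least `floor`."""
    return max(floor, free_gib * fraction)

# On a 4 GB Orin Nano with roughly 2 GB free after the OS and model:
print(workspace_gib(2.0))  # 1.0
```

On a unified-memory device like the Orin Nano, the workspace competes with everything else on the board, so erring on the smaller side and only increasing it if export warns about insufficient workspace is a reasonable strategy.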

Replies: 1 comment

Answer selected by u-uzun