
[Feature Request] SpaceToDepth & DepthToSpace integer implementations #21287

Open
mcollinswisc opened this issue Jul 8, 2024 · 1 comment
Labels
contributions welcome (lower priority issues for the core ORT teams), feature request (request for unsupported feature or enhancement)

Comments

@mcollinswisc
Contributor

mcollinswisc commented Jul 8, 2024

@mcollinswisc mcollinswisc added the feature request (request for unsupported feature or enhancement) label Jul 8, 2024
@github-actions github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) label Jul 8, 2024
mcollinswisc added a commit to mcollinswisc/onnxruntime that referenced this issue Jul 8, 2024
No integer implementations are present, so they need to stay in
floating-point.
microsoft#21287
@yufenglee yufenglee removed the ep:CUDA (issues related to the CUDA execution provider) label Jul 9, 2024
@skottmckay
Contributor

Should be relatively simple to try out given the implementations are templatized.

You could extend the list of supported types in the type constraints for the latest opset and add a new branch in the Compute.

```cpp
.TypeConstraint("T", {DataTypeImpl::GetTensorType<float>(),
                      DataTypeImpl::GetTensorType<double>()}),
```

```cpp
.TypeConstraint("T", {DataTypeImpl::GetTensorType<float>(),
                      DataTypeImpl::GetTensorType<double>()}),
```
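A minimal sketch of what the extended constraint could look like for the latest-opset CPU kernels; the integer tensor types appended at the end are the proposal in this issue, not something already in the code:

```cpp
// Sketch: extend the existing type constraint with 8-bit integer tensor types.
// Only the list of supported types changes; the surrounding kernel
// registration stays as-is.
.TypeConstraint("T", {DataTypeImpl::GetTensorType<float>(),
                      DataTypeImpl::GetTensorType<double>(),
                      DataTypeImpl::GetTensorType<uint8_t>(),
                      DataTypeImpl::GetTensorType<int8_t>()}),
```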

Ideally uint8 and int8 are handled in the same branch given they're the same data size (i.e. we don't want to pay the binary size cost of two implementations moving 8-bit data around).
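For instance, the new branch in Compute could route both 8-bit types through a single instantiation of whatever templated helper the float/double branches already call (the helper name and arguments below are placeholders, not the actual ones):

```cpp
// Sketch only: SpaceDepthOpCpuImpl stands in for the existing templated
// helper. int8 data is dispatched through the uint8_t instantiation because
// the op only rearranges elements, so only the element size matters.
} else if (input.IsDataType<uint8_t>() || input.IsDataType<int8_t>()) {
  ORT_RETURN_IF_ERROR(
      SpaceDepthOpCpuImpl<uint8_t>(/* same arguments as the float branch */));
}
```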

The CUDA implementation seems to be pretty generic already and may just need the addition of the data types in the type constraints.
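If that holds, the CUDA-side change might be limited to the registration's type constraint, along these lines (a sketch; the macro arguments and the types already listed should be checked against the actual CUDA EP source rather than taken from here):

```cpp
// Sketch of the CUDA EP registration with 8-bit tensor types appended to the
// type constraint. Verify the existing entries against the real registration.
ONNX_OPERATOR_KERNEL_EX(
    SpaceToDepth,
    kOnnxDomain,
    13,
    kCudaExecutionProvider,
    (*KernelDefBuilder::Create())
        .TypeConstraint("T", {DataTypeImpl::GetTensorType<float>(),
                              DataTypeImpl::GetTensorType<double>(),
                              DataTypeImpl::GetTensorType<MLFloat16>(),
                              DataTypeImpl::GetTensorType<uint8_t>(),
                              DataTypeImpl::GetTensorType<int8_t>()}),
    SpaceToDepth);
```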

The kernel registrations in the EPs aren't typed (i.e. the kernel implementation internally handles the different supported data types), so you shouldn't need to do anything there.

@skottmckay skottmckay added the contributions welcome (lower priority issues for the core ORT teams) label Jul 25, 2024