
Add SIMD operations that use f16 and f128 #125440

Open
tgross35 opened this issue May 23, 2024 · 7 comments
Labels
A-simd - Area: SIMD (Single Instruction Multiple Data)
C-feature-request - Category: A feature request, i.e. not implemented / a PR
E-help-wanted - Call for participation: Help is requested to fix this issue
F-f16_and_f128 - `#![feature(f16)]`, `#![feature(f128)]`
T-libs - Relevant to the library team, which will review and decide on the PR/issue

Comments

@tgross35 (Contributor) commented May 23, 2024:

Eventually we will want to be able to make use of SIMD operations for f16 and f128, now that we have primitives to represent them. Possibilities that I know of:

Probably some work/research overlap with adding assembly support for these types (#125398).

Tracking issue: #116909
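
For a concrete picture of what "SIMD operations for f16" means at the lowest level, here is a minimal sketch using the unstable pieces that exist today (`#[repr(simd)]` plus the compiler-internal `core::intrinsics::simd` functions). This is illustration only: the exact intrinsic paths and their unsafety vary between nightlies, and whether the resulting IR lowers well (or at all) depends on the target's f16 support, which is exactly what this issue is about.

```rust
// Sketch only: relies on unstable features and compiler-internal intrinsics.
#![feature(f16, repr_simd, core_intrinsics)]
#![allow(internal_features)]

use core::intrinsics::simd::simd_add;

// A hand-rolled 4-lane f16 vector; core::arch or portable_simd would
// normally provide a type like this.
#[repr(simd)]
#[derive(Copy, Clone)]
struct F16x4([f16; 4]);

fn add(a: F16x4, b: F16x4) -> F16x4 {
    // Lane-wise addition via the generic SIMD intrinsic. Whether this is
    // `unsafe` depends on the nightly version, and whether it produces good
    // (or any) code depends on the target's f16 support.
    unsafe { simd_add(a, b) }
}

fn main() {
    let a = F16x4([1.0, 2.0, 3.0, 4.0]);
    let b = F16x4([0.5; 4]);
    let _sum = add(a, b);
}
```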

@rustbot added the needs-triage label May 23, 2024
@tgross35 (Contributor, Author) commented May 23, 2024:

@rustbot label +A-simd +T-libs +F-f16_and_f128 +E-help-wanted +C-feature-request -needs-triage

@rustbot added the A-simd, F-f16_and_f128, T-libs, and C-feature-request labels and removed the needs-triage label May 23, 2024
@kjetilkjeka (Contributor) commented:

Nvidia PTX (--target nvptx64-nvidia-cuda) also supports arithmetic instructions for f16 and for f16x2 SIMD.

Making this work is an important step toward making the PTX target "feature complete" relative to languages traditionally used for GPGPU. Let me know if there's anything I can do to support this.
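
For flavor, here is a rough inline-asm sketch of the packed-half add instruction. This is untested and makes assumptions: it targets nvptx64-nvidia-cuda on nightly with `asm_experimental_arch`, assumes sm_53 or newer, and the operand handling (two f16 lanes passed as raw `u32` bits in a `.b32` register) is my reading of the PTX ISA docs rather than verified code.

```rust
// Sketch only: nvptx inline asm is gated behind asm_experimental_arch.
#![feature(asm_experimental_arch)]

#[cfg(target_arch = "nvptx64")]
unsafe fn add_f16x2_bits(a: u32, b: u32) -> u32 {
    // `a` and `b` each hold two packed f16 lanes as the raw bits of a
    // .b32 register, which is how PTX represents f16x2 operands.
    let d: u32;
    core::arch::asm!(
        "add.rn.f16x2 {d}, {a}, {b};",
        d = out(reg32) d,
        a = in(reg32) a,
        b = in(reg32) b,
        options(pure, nomem, nostack),
    );
    d
}
```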

@tgross35 (Contributor, Author) commented:

Thanks, I'll add that to the top list.

It looks like it might not be too hard to add new SIMD intrinsics on that platform? I have no clue, but https://github.com/rust-lang/stdarch/blob/df3618d9f35165f4bc548114e511c49c29e1fd9b/crates/core_arch/src/nvptx/mod.rs is pretty straightforward if you want to give it a shot at some point.
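
To illustrate, a binding in the style of that file for a packed-f16 operation could look roughly like the sketch below. To be clear, this is not the code that eventually landed in stdarch: the `f16x2` layout, the feature gates, the `unadjusted` ABI, and the `llvm.nvvm.fma.rn.f16x2` intrinsic name are all assumptions on my part.

```rust
// Sketch of the core_arch/src/nvptx binding style applied to a packed-f16
// intrinsic. Not actual stdarch code; everything here is an assumption.
#![feature(f16, repr_simd, simd_ffi, link_llvm_intrinsics, abi_unadjusted)]
#![allow(non_camel_case_types, improper_ctypes)]

/// Two `f16` lanes packed the way PTX's f16x2 instructions expect (assumed layout).
#[repr(simd)]
#[derive(Copy, Clone)]
pub struct f16x2(pub [f16; 2]);

extern "unadjusted" {
    // NVVM fused multiply-add on packed halves; availability depends on the
    // LLVM version and on targeting sm_53 or newer.
    #[link_name = "llvm.nvvm.fma.rn.f16x2"]
    fn nvvm_fma_rn_f16x2(a: f16x2, b: f16x2, c: f16x2) -> f16x2;
}

/// Lane-wise `a * b + c` on both f16 lanes at once.
#[inline]
pub unsafe fn fma_f16x2(a: f16x2, b: f16x2, c: f16x2) -> f16x2 {
    nvvm_fma_rn_f16x2(a, b, c)
}
```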

@kjetilkjeka (Contributor) commented:

I just tested f16 on nvptx now, and I don't think I realized how many of the pieces were already put together. That's great!

I looked around a bit at the SIMD instructions for other arches, and I think this is, as you say, pretty straightforward. I will give it a shot; hopefully I will get around to creating a PR next week.

@tgross35 (Contributor, Author) commented:

That is great news! Note that, unfortunately, math symbols aren't yet available on all targets, so testing with the new types is kind of awkward sometimes, but hopefully that will be resolved in a week or so with a compiler_builtins update.

@kjetilkjeka (Contributor) commented:

It took me a bit longer than I originally hoped, but I ended up creating a PR for (most of) the nvptx f16x2 intrinsics and getting it merged: rust-lang/stdarch#1626

I have also noticed that we're lacking portable_simd variants of f16, and that isn't being tracked by this issue. Is that outside the scope of this issue, or just not added yet? Is anyone already coordinating with the portable_simd project, or is it simply blocked by other features that need to land first?

@tgross35 (Contributor, Author) commented:

> It took me a bit longer than I originally hoped, but I ended up creating a PR for (most of) the nvptx f16x2 intrinsics and getting it merged: rust-lang/stdarch#1626

Awesome news, thanks for the update! It looks like there is an open PR to pull in the new changes: #128866.

> I have also noticed that we're lacking portable_simd variants of f16, and that isn't being tracked by this issue. Is that outside the scope of this issue, or just not added yet? Is anyone already coordinating with the portable_simd project, or is it simply blocked by other features that need to land first?

I'll add it to the issue; there's no particular reason beyond it being lower priority than the intrinsics.
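
For reference, the ask is essentially the f16/f128 analog of what portable_simd already provides for the existing float types. The sketch below uses f32 because `Simd<f16, N>` does not exist yet; supporting f16/f128 there would need `SimdElement` impls plus working codegen for those element types.

```rust
// What portable_simd offers for f32 today (nightly); the request is the same
// API surface for f16 and f128.
#![feature(portable_simd)]
use std::simd::Simd;

fn add(a: [f32; 8], b: [f32; 8]) -> [f32; 8] {
    // Lane-wise addition via the overloaded `+` operator on Simd<f32, 8>.
    (Simd::from_array(a) + Simd::from_array(b)).to_array()
}

fn main() {
    let x = add([1.0; 8], [2.0; 8]);
    assert_eq!(x, [3.0; 8]);
}
```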
