
[RFC] Add GlobalBuffer proposal #282

Closed

Conversation

charles-r-earp (Contributor)

This proposal stems from #249 and attempts to address #232. It does not intend to conflict with #8, but barriers may be inherently necessary to allow mutable access. It may be possible to enable specific access patterns for trivial cases where no synchronization is necessary, but that is intended as a potential next step.

Rendered

charles-r-earp changed the title from "Add GlobalBuffer proposal" to "[proposed RFC] Add GlobalBuffer proposal" on Nov 27, 2020
charles-r-earp changed the title from "[proposed RFC] Add GlobalBuffer proposal" to "[RFC] Add GlobalBuffer proposal" on Nov 27, 2020
Jasper-Bekkers (Contributor) left a comment


It does not intend to conflict with #8, but barriers may be inherently necessary to allow mutable access.

The reason I brought up #8 was mostly because it's spiraled a bit out of control in terms of scope, and one of its goals has become to add proper acquire and release semantics to all buffer loads, so we know memory is in a well-defined state at all times. Therefore, I would really like to see @Tobski's feedback on this PR, also because the ideas in this PR map pretty closely to a private discussion @Tobski and I had about iterator-style access to buffers.

docs/src/rfcs/00X-global-buffer.md
Comment on lines 286 to 293
type T = f32;
const N: usize = 1024;

#[allow(unused_attributes)]
#[spirv(gl_compute(local_size=64))]
pub fn scaled_add(
#[spirv(descriptor_set=1, binding=0)] x: GlobalBuffer<[T; N]>,
#[spirv(descriptor_set=1, binding=1)] mut y: GlobalBufferMut<[T; N]>,
Jasper-Bekkers (Contributor)

I have some slight trouble understanding this as written - does this mean that each thread gets 1024 items available to itself to iterate over, or does it mean that N here is the total number of threads launched? In the latter case, shouldn't this be GlobalBuffer<[T]> instead, from a usability point of view? I see in your proposal that this is used to create a slice N elements long; however, this would pose problems for most use cases, since they won't know the size of the GlobalBuffer at compile time, right?

It's kind of unfortunate that we can't rely on Vulkan's built-in robust buffer access support, since it just silently ignores out-of-bounds accesses.

charles-r-earp (Contributor, Author)


The intent is to have both arrays and runtime arrays, i.e. GlobalBuffer<[T; N]> and GlobalBuffer<[T]>. Right now, slices can't be used in entry parameters.
In this simple example, the buffers each hold 1024 f32s. I haven't proposed any way to validate that the buffers actually have that length; you are right that in general these would be runtime arrays with some runtime length.
The compute shader sees 1024 items in each buffer. It is up to GlobalBufferMut, through some safe interface, to ensure that each thread / invocation only sees some exclusive piece of it. In this case, the zip_mut_with function just indexes both buffers by the global invocation index and applies the provided function, which operates on a single pair of scalars. If the global index is out of bounds, the function is not applied.
I think I will edit this to show slices instead, since the fixed-size array is confusing.
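
For illustration, here is a minimal CPU-side model of those zip_mut_with semantics. The signature and names here are hypothetical (the RFC doesn't pin down an exact interface), but the sketch captures the bounds-checked, one-element-pair-per-invocation behavior described above:

// Hypothetical model of the proposed zip_mut_with semantics: each
// invocation touches at most one element pair, indexed by its global
// invocation id; out-of-bounds ids are silently skipped.
fn zip_mut_with(y: &mut [f32], x: &[f32], global_id: usize, f: impl Fn(&mut f32, &f32)) {
    if global_id < y.len() && global_id < x.len() {
        f(&mut y[global_id], &x[global_id]);
    }
}

fn main() {
    let x = vec![1.0f32; 1024];
    let mut y = vec![2.0f32; 1024];
    // Simulate a dispatch of 17 workgroups of 64 invocations (1088 total);
    // the last 64 ids fall out of bounds and are ignored.
    for id in 0..1088 {
        zip_mut_with(&mut y, &x, id, |y, x| *y += 2.0 * *x);
    }
    assert!(y.iter().all(|&v| v == 4.0));
}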

khyperia (Contributor) commented Apr 1, 2021

Closing this due to inactivity; if someone would like to start pushing on this again, feel free to reopen.

khyperia closed this Apr 1, 2021