
KHR_node_selectability Draft Proposal #2422

Open · wants to merge 3 commits into main
Conversation

@lexaknyazev (Member)
As discussed in the Interactivity DTSG.

@javagl (Contributor) left a comment

Looks straightforward to me. No real objections.

I cannot say much about the connection to KHR_interactivity at a low technical level.

On a high level, I'd mention possible alternatives for the output values: when there is a selectionRayOrigin, one could consider some selectionPointDistance (i.e., the distance along the ray), a selectionRayDirection, or a selectionLine. But there are several ways of deriving one from the other...

  • selectionLine = selectionPoint - selectionRayOrigin
  • selectionRayDirection = normalize(selectionLine)
  • selectionPoint = selectionRayOrigin + selectionLine
  • selectionPointDistance = distance(selectionRayOrigin, selectionPoint)
  • selectionLine = selectionPointDistance * selectionRayDirection
  • selectionPoint = selectionRayOrigin + selectionPointDistance * selectionRayDirection
  • ...

Which one is "the best"? I don't know. Others might even want to throw in some "picked triangle" and the barycentric coordinates of the hit point in there...

In the past, I modeled the 'result of a picking operation' as result = { ray, distance, pickedObject } with ray = { origin, direction }, but others may have different views on that.
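For illustration, the derivations listed above can be sketched with plain vector math. The snake_case names mirror the proposed outputs; the numeric values are made up:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(v, s): return [x * s for x in v]
def length(v): return math.sqrt(sum(x * x for x in v))
def normalize(v): return scale(v, 1.0 / length(v))

# Made-up example values for a selection event.
selection_ray_origin = [0.0, 0.0, 0.0]
selection_point = [3.0, 0.0, 4.0]

# Each of the proposed outputs can be derived from the two points.
selection_line = sub(selection_point, selection_ray_origin)
selection_ray_direction = normalize(selection_line)
selection_point_distance = length(selection_line)

# Round trip: origin + distance * direction recovers the point.
recovered = add(selection_ray_origin,
                scale(selection_ray_direction, selection_point_distance))
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, selection_point))
```

This is the sense in which the two points are "atomic": every other quantity falls out of a subtraction, a normalization, or a length computation.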


A detail that might already be covered by some general "event handling" part of the KHR_interactivity specification (I still have to finish reading that one):

This interactivity event node is activated when a “select” event occurs on a glTF node nodeIndex or on any node in its subtree subject to the following propagation rule: the lowest node in the tree receives the select event first, and the event bubbles up the tree until a glTF node with an associated event/onSelect behavior graph node with its stopPropagation configuration value set to true is found.

When stopPropagation is false, the event bubbles up further, and a single select event could then cause multiple elements to be selected. Is that correct?

@lexaknyazev (Member, Author)

But there are some ways of deriving one from the other...

Sure. Two points seem to be the most atomic and fundamental values in this interconnected system. Various math nodes could be used to trivially derive everything else.

a single select event could then cause multiple elements to be selected

Yes, if they have event "listeners", i.e., event/onSelect nodes with the corresponding nodeIndex configuration values.
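A minimal sketch of that bubbling rule, assuming a simple parent-linked node structure (the Node class and callback shape here are invented for illustration, not taken from the specification):

```python
class Node:
    def __init__(self, parent=None, on_select=None, stop_propagation=False):
        self.parent = parent
        self.on_select = on_select          # listener callback, or None
        self.stop_propagation = stop_propagation

def dispatch_select(node):
    """Bubble a 'select' event from the lowest node up the tree."""
    fired = []
    while node is not None:
        if node.on_select is not None:
            node.on_select()
            fired.append(node)
            if node.stop_propagation:
                break   # a listener with stopPropagation=true ends bubbling
        node = node.parent
    return fired

# Two listeners on the path, neither stopping propagation:
root = Node(on_select=lambda: None)
child = Node(parent=root, on_select=lambda: None)
leaf = Node(parent=child)                    # no listener here
assert dispatch_select(leaf) == [child, root]   # both fire from one event

# With stopPropagation on the child, the root never sees the event:
child.stop_propagation = True
assert dispatch_select(leaf) == [child]
```

The first dispatch shows exactly the case asked about: one select event reaching two listeners because neither one stops propagation.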

@emackey (Member) commented Jul 10, 2024

Just a general question here from someone who hasn't been involved in the interactivity discussions: Do we need to assume a "selection ray" in this extension? I can easily imagine VR/XR scenarios where I could tap a "selection point" in space, or drag out a selection box of some kind, or use some not-as-yet-invented controller to indicate selection of a node by more futuristic means. Must it always be a ray?

@lexaknyazev (Member, Author)

tap a "selection point" in space, or drag out a selection box of some kind

This sounds like new, more advanced events.


We could probably allow returning NaN if the exact coordinates cannot be provided for some reason. @dwrodger WDYT?

@dwrodger

tap a "selection point" in space, or drag out a selection box of some kind

This sounds like new, more advanced events.

We could probably allow returning NaN if the exact coordinates cannot be provided for some reason. @dwrodger WDYT?

Yes, I think that NaN in the case that the ray can't be defined sounds fine. It may also be appropriate to update the language where it defines selection and what it means for an object to be invisible to selection. That language could just say that implementations that use systems other than ray-based selection are free to interpret "invisible to selection" in whatever way makes sense for their selection mechanics.
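One way an implementation might signal "no ray available", as discussed above: emit NaN components for selectionRayOrigin while still providing the point. The output names follow the proposal, but the dictionary shape and the has_ray helper are hypothetical:

```python
import math

# Hypothetical event outputs for a non-ray-based selection system:
# the point is known (e.g. a tap in space), but no ray can be defined.
event_outputs = {
    "selectionPoint": (1.0, 2.0, 3.0),
    "selectionRayOrigin": (math.nan, math.nan, math.nan),
}

def has_ray(outputs):
    """A behavior graph can test for NaN before deriving ray quantities."""
    return not any(math.isnan(c) for c in outputs["selectionRayOrigin"])

assert not has_ray(event_outputs)          # tap/box selection: no ray
event_outputs["selectionRayOrigin"] = (0.0, 0.0, 0.0)
assert has_ray(event_outputs)              # ray-based selection: ray present
```

Note that NaN propagates through arithmetic, so a graph that derives a direction or distance from a NaN origin would itself produce NaN rather than a silently wrong value.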

@javagl (Contributor) left a comment

I don't have substantial comments beyond what I already wrote in the first review. If the review was about the latest changes: I'm lacking some technical context there (e.g. about the 'controllers'). From a structural perspective, I could only brainstorm whether the absence of a point/origin should be modelled with NaN, or whether that should be differentiated into a "selection event" (without these properties) and a "picking event" (with these properties).

@hybridherbst

Maybe as additional input here, in WebXR each controller has a "ray pose" and a "grip pose" and both are very much needed for different use cases.

For example, visuals for controller models are usually aligned with the "grip pose", and consequently the action of "dragging an object" is often based on the "grip pose". The "ray pose" or "aim pose" is needed for clicking on things.

These two so far have survived a number of novel interaction mechanisms, including things like Apple Vision Pro where suddenly pointers are transient (they exist only temporarily while the user is interacting) and have very different "ray pose" (ray from the eye to where the user is looking) and "grip pose" (point and orientation for where in space the user has started the hand gesture for the selection).

@hybridherbst commented Aug 19, 2024

Reading a bit more in the spec, I wonder about this section:

In the case of multiple-controller systems, the controllerIndex output value MUST be set to the index of the controller that generated the event; in single-controller systems, this output value MUST be set to zero.

Maybe it should read "the unique ID of the controller that has generated the event" instead? The index of a controller can change; for example, in touch-based systems usually each new touch has a new pointer ID (see https://developer.mozilla.org/en-US/docs/Web/API/PointerEvent/pointerId). Or in WebXR, where users can connect and disconnect new controllers arbitrarily (e.g. switch from controllers to hands and back) or have transient pointers as well.

(I think it depends a bit on what expected usage for the returned controller index is – are there nodes that can get more data from a controller?)
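One way an implementation could reconcile transient platform IDs with the spec's index-based wording (a hypothetical sketch, not part of the proposal): keep a registry that assigns each new pointer ID a stable index for as long as that controller is known:

```python
class ControllerRegistry:
    """Maps transient platform pointer IDs (e.g. PointerEvent.pointerId,
    which changes with each new touch) to stable controllerIndex values."""

    def __init__(self):
        self._by_pointer_id = {}
        self._next_index = 0

    def index_for(self, pointer_id):
        # Reuse the stable index while the controller stays known;
        # assign the next free index for a pointer ID seen for the first time.
        if pointer_id not in self._by_pointer_id:
            self._by_pointer_id[pointer_id] = self._next_index
            self._next_index += 1
        return self._by_pointer_id[pointer_id]

registry = ControllerRegistry()
assert registry.index_for(1001) == 0   # first transient pointer
assert registry.index_for(1002) == 1   # second controller connects
assert registry.index_for(1001) == 0   # same pointer keeps its index
```

This does not settle the spec question (index vs. unique ID), but it shows the bookkeeping an implementation would need either way when controllers connect, disconnect, or appear transiently.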
