
Bubble Worlds


🔎About the feature

Bubble Worlds is the very first sensing-based augmented reality experience: a completely unexplored field of technology and haptics for us to discover and build together. Used in conjunction with a Lodestone, it lets you explore spherical "worlds" by sensing (touch using biomagnets). These "worlds" or "maps" can be incredibly complex, yet they are so easy to make that you could create one right now.

🕹Tutorial

Exploring a world

  • Open up Bubble Worlds and plug in your Lodestone. Now hold your device upside down with the Lodestone pointing outwards. Hold it so that the finger(s) with biomagnets are by the Lodestone.

  • You are now on the world selector screen. You should see a short tutorial followed by a selection of preloaded worlds. Choose the first world and you will be taken to the navigation screen. Extend your arm in front of you with the Lodestone and finger pointing out.

  • Imagine yourself inside a big sphere whose interior you are touching. Slowly move your arm around your shoulder (keeping it extended). Close your eyes and focus on what you're sensing. Map out a small area, get your bearings, then move on. You're doing it!

Helpful tips

It will inevitably take some time for your brain to adapt to this new method of navigation. This is brain plasticity at work, and it can vary from one person to another. There's no better way to get there than to train; here are some tips to optimize your training:

  • Close your eyes. Avoid using your surroundings or the on-screen visualizers as references. Rather try to form a mental map of what you're feeling. This will also make "sensor drift" much less noticeable.

  • Start small. Open a simple map and try to follow the contour of basic solid shapes. Try to guess what the shape is; making it a game is much more fun. Once you're good at it you can move on to navigating gradients, for example.

  • Try navigating unconsciously. As with playing an instrument, you know you've learned when you can focus on something else while doing it. Load a maze map and aimlessly navigate it while having a conversation, listening to a podcast, or watching a YouTube video. Unlocking this ability is a huge step forward! A portion of your brain is now specialized in understanding spatial sensing.

🛠Technical information

Glossary

  • sensor drift: Bubble Worlds relies on the device's sensors to keep track of its orientation in space. Over time small errors accumulate resulting in the map "drifting". That doesn't really affect the gameplay as long as the user is not using contextual spatial cues as reference points. This is why closing one's eyes is recommended in the beginning.
  • channel: Output audio channel to the magnet stimulator (i.e. mono, left, or right).
  • layer: The map's red, green, blue, and alpha layers that will be sampled.
  • sample: The set of values pulled from the layers for a user's position at time T.
  • layer properties: Set of rules defining how a sample should be processed into haptic signals.
  • trigger: An optional condition on the sampling that ends the experience with success or failure.
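
To make the glossary concrete, here is how those concepts could be modelled in code. This is a hypothetical Python sketch: the type names and fields are illustrative, not the app's actual data structures.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Channel(Enum):
    """Output audio channel driving the magnet stimulator."""
    MONO = "mono"
    LEFT = "left"
    RIGHT = "right"


@dataclass
class Sample:
    """Values pulled from the R, G, B and A layers at the user's position at time T."""
    r: float
    g: float
    b: float
    a: float


@dataclass
class Trigger:
    """Optional condition on the raw sample that ends the experience."""
    layer: str                        # "r", "g", "b" or "a"
    min_value: Optional[float] = None
    max_value: Optional[float] = None
    is_success: bool = True           # False means a failure trigger


@dataclass
class LayerProperties:
    """Rules defining how a layer's sample becomes haptic signal parameters."""
    channel: Channel = Channel.MONO
    effect: str = "linear"            # see the effect functions further down
    trigger: Optional[Trigger] = None
```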

Map construction

Bubble Worlds maps are encoded as .png images. The RGBA channels are treated separately.

To create a map you must provide a 2:1 (2048×1024 px recommended) RGBA texture in PNG format. The file name should preferably be in camelCase; it is used as the display name within the app. This flat texture will be mapped onto a sphere, and wrapping your head around the distortions this implies can be hard. I recommend using a 360° editing tool such as PanoPainter (free) to create your image, and working with separate layers for red, green, and blue plus an additional black layer for the alpha channel.
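
For instance, here is a minimal sketch of generating a valid map file, assuming Pillow and NumPy are available (the content and the file name equatorBand.png are just an example):

```python
# Minimal sketch: generate an empty 2:1 RGBA map texture (assumes Pillow + NumPy).
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 2048, 1024            # recommended 2:1 equirectangular size

# Four layers: red, green, blue, alpha. Start fully black and transparent.
texture = np.zeros((HEIGHT, WIDTH, 4), dtype=np.uint8)

# Example content: a solid green band around the "equator" of the sphere.
texture[HEIGHT // 2 - 50 : HEIGHT // 2 + 50, :, 1] = 255   # green layer
texture[HEIGHT // 2 - 50 : HEIGHT // 2 + 50, :, 3] = 255   # alpha layer

# The camelCase file name doubles as the display name inside the app.
Image.fromarray(texture, mode="RGBA").save("equatorBand.png")
```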

[Figure: spherical mapping distortion example]

Dimensionality

Initially, you might underestimate the vastness of a map. Let's go through some numbers:

We know the user won't stick to a perfect sphere: they might initially pivot around the shoulder joint, but at some point they will need to turn around. We will approximate the radius of this sphere as the distance from the index fingertip to the center of the chest, which for this example is 80 cm. That gives a circumference of just over 5 m, so let's round it to 5 m.

The map template HorizonMeters available in this repo represents these 5 meters along the horizon.

Based on these same numbers here are some more approximations:

 360° ≈ 5 m

 1° ≈ 5.7 px ≈ 1.4 cm

 1 px ≈ 2.4 mm
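
These figures follow directly from the 2048 px map width and the circumference rounded to 5 m; here is the arithmetic as a quick sanity check:

```python
# Quick sanity check of the dimensionality figures above.
MAP_WIDTH_PX = 2048      # recommended texture width
CIRCUMFERENCE_M = 5.0    # 2 * pi * 0.80 m ≈ 5.03 m, rounded to 5 m as above

px_per_degree = MAP_WIDTH_PX / 360                     # ~5.7 px per degree
cm_per_degree = CIRCUMFERENCE_M * 100 / 360            # ~1.4 cm per degree
mm_per_px = CIRCUMFERENCE_M * 1000 / MAP_WIDTH_PX      # ~2.4 mm per pixel

print(f"{px_per_degree:.1f} px/°, {cm_per_degree:.1f} cm/°, {mm_per_px:.1f} mm/px")
```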

Rendering loop

[Figure: rendering process diagram]

The rendering process pulls samples from the map at the user's current location at a fixed sampling rate. Each sample is then processed using the relevant layer properties. Finally, the resulting signal parameters are applied to the output.
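
In pseudo-structure, the loop could look like the sketch below. This is a hypothetical illustration: the sampling rate and the hook names (sample_map, process_sample, output_signal, is_running) are placeholders, not the app's actual API.

```python
# Hypothetical sketch of the rendering loop: sample -> process -> output.
import time

SAMPLING_RATE_HZ = 60                  # assumed fixed sampling rate
PERIOD_S = 1.0 / SAMPLING_RATE_HZ


def render_loop(sample_map, process_sample, output_signal, is_running):
    """sample_map / process_sample / output_signal / is_running are placeholder hooks."""
    while is_running():
        start = time.monotonic()

        sample = sample_map()              # pull R, G, B, A at the user's current location
        params = process_sample(sample)    # apply the relevant layer properties
        output_signal(params)              # drive the magnet stimulator output

        # Sleep off the remainder of the period to hold the sampling rate steady.
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))
```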

Custom layer properties

A .json file with the same name can optionally be provided to override the default layer properties. See Local Files for details. These properties define how each image layer is interpreted and, as a result, felt.
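
As a purely hypothetical illustration (the field names below are made up for the example; only globalAmplitudeEffect is mentioned on this page, so check Local Files for the real schema), such an override file could look like this when loaded:

```python
# Hypothetical example of a layer-properties override file.
# Field names other than globalAmplitudeEffect are illustrative only.
import json

overrides = json.loads("""
{
  "green": { "sampling": "simple",      "globalAmplitudeEffect": "linear", "channel": "mono" },
  "red":   { "sampling": "directional", "globalAmplitudeEffect": "easeIn", "channel": "left" }
}
""")

print(overrides["green"]["globalAmplitudeEffect"])   # -> "linear"
```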

The sampling can happen in two ways:

  • Simple: The value of the layer is taken at the pixel corresponding to the user's location. In this case, the user's z-rotation (along the arm's axis) does not impact the sample.

  • Directional: A Sobel filter is applied to the area where the user is located to extract the gradient direction at that point. The sample will then consist of the normalized difference between the gradient direction and the user's facing direction on the map.
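
For the directional case, here is a rough sketch of how a Sobel filter yields that normalized difference (plain NumPy; this is my illustration of the idea, not necessarily the app's exact implementation):

```python
# Rough sketch of directional sampling with a Sobel filter (NumPy only).
import numpy as np


def directional_sample(layer, x, y, facing_rad):
    """Normalized difference between the local gradient direction and the user's facing.

    layer: 2D float array for one colour layer, (x, y): user's pixel position
    (assumed to be an interior pixel), facing_rad: facing direction in radians.
    """
    # 3x3 Sobel kernels for the horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

    patch = layer[y - 1 : y + 2, x - 1 : x + 2]
    gx = float(np.sum(patch * kx))
    gy = float(np.sum(patch * ky))
    gradient_dir = np.arctan2(gy, gx)

    # Wrap the angular difference to [-pi, pi], then normalize to [0, 1].
    diff = (gradient_dir - facing_rad + np.pi) % (2 * np.pi) - np.pi
    return abs(diff) / np.pi
```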

The sample value must then influence one (or more) of the signal's parameters. This is done by applying an "effect function", i.e. X = effect(sampledValue). For example:

Setting globalAmplitudeEffect to linear in the properties of layer 1 (green) means that the amplitude of the stimulation signal will be equal to the green value at the spot pointed to on the map.

The available effect functions are Linear, EaseOut, EaseIn, EaseInOut, and Step (each was originally illustrated with its response curve).
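
For reference, here is what these curves amount to using the standard easing formulas (the app's exact curves may differ slightly; x is a sampled value normalized to [0, 1]):

```python
# Effect functions sketched with standard easing formulas; x is in [0, 1].
def linear(x):       return x
def ease_in(x):      return x ** 2
def ease_out(x):     return 1 - (1 - x) ** 2
def ease_in_out(x):  return 2 * x ** 2 if x < 0.5 else 1 - 2 * (1 - x) ** 2
def step(x, threshold=0.5):
    return 1.0 if x >= threshold else 0.0


# Example: a green value of 128/255 with a linear amplitude effect.
amplitude = linear(128 / 255)    # ~0.5
```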

Triggers and gamification

You will notice a set of variables in the layer properties related to "triggers". These are meant to provide the map creator with optional win and loss conditions.

Each layer can be parameterized to trigger loss or success when a minimum or maximum value is sampled on that layer. Note that this is applied to the raw sample, before any processing is done to create an output. Hence there is no direct link between triggers and user output: a completely silent layer can still be a trigger.

When success is triggered, the rendering will continue for 1 second to allow any corresponding feedback to happen. At the end of this period, the exploration will end with a success message. The process is equivalent for the failure scenario.

If both isSuccess and isFailure are on (which should be avoided) failure will take precedence.
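
Putting the trigger rules together, the evaluation could look roughly like this (a hypothetical sketch: the dictionary keys mirror the description above and are not guaranteed to be the actual property names):

```python
# Hypothetical sketch of trigger evaluation on the raw (unprocessed) sample.
def check_triggers(raw_sample, triggers):
    """raw_sample: layer name -> raw value; triggers: list of trigger settings.

    Returns "failure", "success" or None. Failure takes precedence if both fire.
    """
    failed = succeeded = False
    for trig in triggers:
        value = raw_sample[trig["layer"]]
        hit = ((trig.get("min") is not None and value <= trig["min"]) or
               (trig.get("max") is not None and value >= trig["max"]))
        if hit:
            if trig.get("isFailure"):
                failed = True
            elif trig.get("isSuccess"):
                succeeded = True
    if failed:
        return "failure"     # failure wins when both are triggered
    if succeeded:
        return "success"
    return None


# Example: the alpha layer acts as a silent "lava" failure trigger.
print(check_triggers({"a": 0.9}, [{"layer": "a", "max": 0.8, "isFailure": True}]))
```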

About spherical rendering and why I chose it

Here is a little write-up on the idea of magnetic rendering on a player-centered sphere and why I chose that option:

My ideal would be to track the user's hand (or have the phone track itself while held) in 3D space. This way I could measure the room and map pretty much anything in there, for example the model of a car.

You could walk around this virtual car, exploring it by touch. Of course, I would explore wilder things than a car. I’m thinking of all kinds of otherworldly landscapes of fields that the user could run around and feel like a child in a garden.

I was successful in doing that (the tracking and rendering) at desktop scale with a Leap Motion system for hand tracking, but nobody has a Leap Motion and the scale is quite restrictive. In other words, it has to be a phone and an app. The sad thing is that phones suck at tracking their position (at the sub-centimeter scale I require). Trust me, I tried GPS, sensors, Bluetooth beacons, and more, but all are either impractical, plain bad, or too expensive. So what's the next best thing?

Well, phones are bad at tracking position but they’re not too bad at tracking their orientation. Using a fusion of an accelerometer, gyroscope, and compass you can tell which way the phone faces at any time. It’s still not perfect and I have to deal with “drift” but it’s good enough.

So instead of a 3D alien world of wonders, let's scale things down to a flat 2D map projected onto a sphere. If you ask the user to stand still and extend their arm, you can use their shoulder as a fixed reference point and imagine this sphere having an arm-long radius and being centered on the shoulder. Using the phone's orientation you can tell where on that sphere you are and return the appropriate feedback. Yeah, it's not as great as it could be. I too can't wait for a reliable, affordable 3D spatial tracker, but this is still quite fun and, most of all, it gets you used to the idea of navigating through sensing. Something I want to explore further…
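
Concretely, that sphere mapping is just spherical coordinates: yaw and pitch pick a point on the shoulder-centered sphere, which corresponds to a pixel on the 2:1 equirectangular texture. A rough sketch under my assumptions (a plain equirectangular projection, not the app's exact code):

```python
# Rough sketch: phone orientation (yaw/pitch in degrees) -> pixel on a 2:1 map.
def orientation_to_pixel(yaw_deg, pitch_deg, width=2048, height=1024):
    u = ((yaw_deg % 360) / 360) * width        # yaw wraps around the horizon
    v = ((90 - pitch_deg) / 180) * height      # +90° (up) at the top row, -90° at the bottom
    return int(u) % width, min(max(int(v), 0), height - 1)


print(orientation_to_pixel(0, 0))    # -> (0, 512): on the horizon row of the map
```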

To end on a positive note, computer vision for AR on mobile is becoming decent at spatial awareness. Not good enough yet, but who knows… Also, the rendering is quite complex and interesting despite being flattened. Sensing is far from a binary "field / no field"; it has as many nuances as hearing: intensity, frequency, persistence… In other words, I can encode a whole lot of information in a simple 2D map, and that's only in mono. In stereo, you can do things like directional awareness, just as we use for hearing. This is in fact what I was exploring in the bit-sense project.