
Upgrade SDFGI to HDDAGI #86267

Open · wants to merge 3 commits into master
Conversation

@reduz (Member) commented Dec 17, 2023

Supersedes #86007

This is a new global illumination system that is a full upgrade to SDFGI. Key advantages are:

  • Significantly faster base frame time.
  • An order of magnitude faster updates (SDFGI drops frames very significantly when moving, as it has to regenerate its entire SDF even for small motions).
  • Higher quality (fewer light leaks, proper energy conservation, much better reflections, and more).
  • Occlusion remains the same as in SDFGI. I experimented with other systems (the one in DDGI), but everything has worse tradeoffs.

It is meant as a drop-in replacement, so games using SDFGI should get HDDAGI transparently.

TODO:

TODO AFTER MERGING:

  • High density probes support.
  • Dynamic objects support.

Improvement Screenshots

Motion performance:

By far the biggest improvement is performance when moving the camera. This is what makes SDFGI unusable on lower-end hardware, and it can even push FPS down significantly on higher-end hardware. This is because SDFGI needs to regenerate the entire SDF cascade using a jump flood even if the camera moved just a bit. HDDAGI uses HDDA, so it only performs local updates to the cascade, with no full regeneration on scrolling.

Here is what happens in SDFGI when the camera moves fast:

sdfgi_motion.mp4

As you can see, the FPS takes a large dip. This really hurts any kind of game where the camera moves fast, like a racing game or a fast shooter.

In comparison, HDDAGI is unaffected by camera motion:

hddagi_motion.mp4

Static performance:

When rendering a static view (camera not moving), HDDAGI is also considerably faster than SDFGI:

This is SDFGI; the first set of dispatches is the light update, the second is the probe raytracing. On a GeForce GTX 1650 it takes 2.43 ms:

image

In HDDAGI, the same task takes 0.93 ms. While not a huge difference on a 1650, this makes a much larger difference on integrated GPUs:
image

Quality of Indirect Light

SDFGI does not properly conserve light energy, while HDDAGI does. Here are comparison screenshots by @Jamsers:

SDFGI: Notice the light is more uneven in general, and there are some color issues due to the use of spherical harmonics.

image
image

HDDAGI: Notice the light is better and more evenly distributed:
image
image

Quality of reflections

SDFGI uses a very hacky way to obtain light from the SDF, which results in very strange lighting in the reflections; the SDF also gives them a very weird look (this is what you see in the reflections):
image

In contrast, HDDAGI filters voxels properly, so what is seen in reflections is more faithful to the actual geometry:
image
What is being reflected:
image

Here you can see, in a more realistic scenario, how close the reflections are to the actual geometry (note the tonemapping is different in the reflection, as this is a debug mode):
image

In motion

SDFGI suffers from jumping dark spots when lights or camera move. HDDAGI has a special probe filtering option (enabled by default) that gets rid of them.

SDFGI (notice the jumping dark spots):

filter_probes_light_off.mp4

HDDAGI (notice the general smoothness):

filter_probes_light_on.mp4

SDFGI (with a proper scene; note the jumping dark spots in the tunnel):

filter_probes_off.mp4

HDDAGI with filter probes:

filter_probes_on.mp4

Production edit: closes godotengine/godot-roadmap#32, closes #41154, closes godotengine/godot-proposals#3024

@Saul2022 commented Dec 17, 2023

Pretty good job; performance increased when moving. This time I decided to use the default cell size because, as you said, I exaggerated with that 1 cm. It seems pretty good, though the problem for me is still GI colliding with the geometry; for comparison, this is with an empty scene.

Vídeo sin título - Screen Recording - 17_12_2023, 22_17_50.webm
Vídeo sin título - Screen Recording - 17_12_2023, 22_20_37.webm

Vídeo sin título - Screen Recording - 17_12_2023, 22_24_17.webm
Vídeo sin título - Screen Recording - 17_12_2023, 22_28_04.webm

Comment on lines 72 to 78
enum DynamicGICascadeFormat {
	DYNAMIC_GI_CASCADE_FORMAT_16x16x16,
	DYNAMIC_GI_CASCADE_FORMAT_16x8x16,
	DYNAMIC_GI_CASCADE_FORMAT_MAX,
};

16×8×16 should be before 16×16×16 as it's faster (and lower quality), to match other Godot enums.

While this means the current default value of SDFGI_Y_SCALE_75_PERCENT (1) will map to DYNAMIC_CASCADE_GI_FORMAT_16x16x16, this isn't really an issue in practice because Godot doesn't save default values to scene/resource files. This means that if you were previously using the default value, you'll pick up the new default value even though it has to be 0 instead of 1. This is the same approach followed in #75468.

GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/sdfgi/frames_to_converge", PROPERTY_HINT_ENUM, "5 (Less Latency but Lower Quality),10,15,20,25,30 (More Latency but Higher Quality)"), 5);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/sdfgi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Slower),2,4,8,16 (Faster)"), 2);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_converge", PROPERTY_HINT_ENUM, "6 (Less Latency/Mem usage & Low Quality),12,18,24,32 (More Latency / Mem Usage & High Quality)"), 1);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Faster),2,4,8,16 (Slower)"), 2);
@Calinou (Member) commented Dec 17, 2023

Based on my testing, lower values run slower but provide lower latency, so the hint should be clarified:

Suggested change
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Faster),2,4,8,16 (Slower)"), 2);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Less Latency but Slower),2,4,8,16 (More Latency but Faster)"), 2);

GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/sdfgi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Slower),2,4,8,16 (Faster)"), 2);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_converge", PROPERTY_HINT_ENUM, "6 (Less Latency/Mem usage & Low Quality),12,18,24,32 (More Latency / Mem Usage & High Quality)"), 1);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_lights", PROPERTY_HINT_ENUM, "1 (Faster),2,4,8,16 (Slower)"), 2);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_inactive_probes", PROPERTY_HINT_ENUM, "1 (Faster),2,4,8,16 (Slower)"), 3);

Based on my testing, lower values run slower but provide lower latency, so the hint should be clarified:

Suggested change
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_inactive_probes", PROPERTY_HINT_ENUM, "1 (Faster),2,4,8,16 (Slower)"), 3);
GLOBAL_DEF(PropertyInfo(Variant::INT, "rendering/global_illumination/hddagi/frames_to_update_inactive_probes", PROPERTY_HINT_ENUM, "1 (Less Latency but Slower),2,4,8,16 (More Latency but Faster)"), 3);

@Calinou (Member) commented Dec 17, 2023

Streaks aligned with the X axis are visible when volumetric fog is enabled and GI Inject is greater than 0.0, even if no lights are visible in the scene:

simplescreenrecorder-2023-12-18_00.04.08.mp4

Minimal reproduction project: test_hddagi_volumetric_fog.zip

@WickedInsignia

The new Occlusion Bias setting helps solve some of the dark artifacting mentioned in the previous thread. I also feel the splotchiness of SDFGI is almost completely gone and no longer an issue.
Here are the existing problems in my tests:

  • Cascade shifting is far too noticeable. Default cascade 0 distance needs to be pushed back 3-4x more to prevent sudden shifts in GI as players move through the world, without reducing probe density so accuracy is not sacrificed (which causes a range of other issues).
  • Reflection jitter issue mentioned in the last thread.
  • Overdarkening is still a problem. Some areas go completely black when they should be receiving at least a little light. Normal maps on terrain and organic objects (rocks etc.) suffer especially badly, with extremely dark patches making some materials feel semi-metallic.
  • Spherical or curved forms (especially when overlapping with other geo) are still full of artifacts. The situation is improved, but sometimes it is impossible to find a sweet spot between bias settings that doesn't screw up other areas of the environment.
  • SSIL can help resolve artifacts but is far too smeary and noisy to be reliable at this point.

As mentioned in the previous thread, the ability for users to author probe positions manually in a domain (similar to Unity's Adaptive Probe Volumes or its aging Light Probes system) would allow artists to resolve most of these issues manually, and most users probably aren't making environments large enough or procedural enough to warrant a cascading solution (nor can Godot handle large streaming worlds). The option for both would be welcome.

I'm aware there are solutions in the works for these issues, just putting them here to keep record.

@Jamsers commented Dec 18, 2023

Cascade shifting is far too noticeable. Default cascade 0 distance needs to be pushed back 3-4x more to prevent sudden shifts in GI as players move through the world, without reducing probe density so accuracy is not sacrificed (which causes a range of other issues).

An even better solution is allowing users to set Cell Size and Cascade Distance independently, for each cascade - performance be damned. Hide it behind an advanced toggle or something if you must, but it always irked me that we never had full control over SDFGI cascades. The default, unchangeable ratios are horrendously crawly, no stability whatsoever, and make cascade 0 so close on smaller cell sizes that it becomes irrelevant for the majority of the scene.

But I suppose this merits a separate proposal.

@reduz (Member, Author) commented Dec 18, 2023

@WickedInsignia

Cascade shifting is far too noticeable. Default cascade 0 distance needs to be pushed back 3-4x more to prevent sudden shifts in GI as players move through the world, without reducing probe density so accuracy is not sacrificed (which causes a range of other issues).

This is mostly why I want to implement the high density probes option. The main problem right now is that, in small indoor scenes, the GI runs out of probes at a certain distance and hence stops receiving lighting. Or, sometimes, all 4 probes are occluded at that position. With high density that problem should mostly resolve itself, but it is independent of this PR, which aims to replace SDFGI first.

Reflection jitter issue mentioned in the last thread.

This is dependent on work Clay is doing to fix the normal buffer resolution. I can't merge this PR until that one is done.

Overdarkening is still a problem. Some areas go completely black when they should be receiving at least a little light. Normal maps on terrain and organic objects (rocks etc.) suffer especially badly, with extremely dark patches making some materials feel semi-metallic.
Spherical or curved forms (especially when overlapping with other geo) are still full of artifacts. The situation is improved, but sometimes it is impossible to find a sweet spot between bias settings that doesn't screw up other areas of the environment.

That's kind of the same issue, and it is the main problem with DDGI and these types of probe-based GI. I am hoping the situation improves with the high density probes option.

SSIL can help resolve artifacts but is far too smeary and noisy to be reliable at this point.

My plan is to have both high density probes and a screen-space component. SSIL is not designed for this kind of GI, so I need to write proper screen-space tracing that covers distances smaller than a single probe. With that, all pixels should get proper lighting.

But then again, the plan with this PR is to supersede SDFGI, since it's pretty large and difficult to keep up to date as-is. Take it as foundation work; I will then work on the other things I mentioned.

@reduz (Member, Author) commented Dec 18, 2023

@Jamsers

An even better solution is allowing users to set Cell Size and Cascade Distance independently, for each cascade - performance be damned. Hide it behind an advanced toggle or something if you must, but it always irked me that we never had full control over SDFGI cascades.

The problem is the density, not the distance. Cascades are always 16x16 regions, so customizing the cell size or cascade distance separately will always bite you one way or another. This is why my plan is to work on high density probes.

@Jamsers commented Dec 18, 2023

The problem is the density, not the distance. Cascades are always 16x16 regions

Yes, and my idea was to allow setting arbitrary probe densities by changing Cell Size and Cascade Distance independently; i.e. if you want more probe density in cascade 0, you keep Cascade Distance at the default 12.8 but set Cell Size smaller. Is the 16x16 region limitation something that can't be changed or solved?

@reduz (Member, Author) commented Dec 18, 2023

@Jamsers That is unfortunately not possible, because cascades use a ton of memory, and increasing density scales their memory use cubically, making them unusable pretty quickly. This is why I need a special, separate technique to increase the density without affecting memory so much.

@Saul2022

Vídeo sin título - Screen Recording - 17_12_2023, 22_17_50.webm Vídeo sin título - Screen Recording - 17_12_2023, 22_20_37.webm

Vídeo sin título - Screen Recording - 17_12_2023, 22_24_17.webm Vídeo sin título - Screen Recording - 17_12_2023, 22_28_04.webm

After trying it, I would say it's in a pretty good state (it costs roughly the same FPS as volumetric fog in an empty scene. Also, if I decrease cascades to 1 and set the light update to 16, it might match volumetric fog's FPS in a small scene with cubes). All pretty good; the issue now is with scenes like Sponza, where even though the geometry is simple I still see huge FPS drops (motion is not at fault here, as it happens statically too) when using the meshes: the bigger they are, the more HDDAGI consumes. Compare my videos above with the images below.

Can this be sorted out, or would the only way be the cascade setting? Because if simple scenes are like this, I'm not sure about big ones. Although maybe the issue is just my iGPU (Radeon Vega 3), as it's not as good.

editor_screenshot_2023-12-18T144403
editor_screenshot_2023-12-18T144422

@reduz (Member, Author) commented Dec 18, 2023

@Saul2022 Just so you understand the dependency on geometry better: HDDAGI performance does of course depend on the amount of geometry in the level; what it does not depend on is the geometry's complexity. This means that if a scene with 1 million polygons occupies the same physical space as a scene with 1,500 polygons, it is pretty much the same for HDDAGI.

@Saul2022

This means if a scene with 1 million polygons occupies the same physical space as a scene with 1500 polygons, it is pretty much the same for HDDAGI.

Alright, thank you for clearing that up.

@@ -82,6 +82,54 @@
<member name="background_mode" type="int" setter="set_background" getter="get_background" enum="Environment.BGMode" default="0">
The background mode. See [enum BGMode] for possible values.
</member>
<member name="dynamic_gi_bounce_feedback" type="float" setter="set_dynamic_gi_bounce_feedback" getter="get_dynamic_gi_bounce_feedback" default="1.0">
How much light bounces back to the probes. This increases the amount of indirect light received on surfaces.

This still seems to be the case as of the latest revision of this PR, so I suggest documenting it:

Suggested change
How much light bounces back to the probes. This increases the amount of indirect light received on surfaces.
How much light bounces back to the probes. This increases the amount of indirect light received on surfaces.
[b]Note:[/b] Values higher than [code]1.0[/code] may result in infinite feedback loops with bright surfaces. This can cause GI to appear extremely bright over time.

The amount of cascades used for global illumination. More cascades allows the global illumination to reach further away, but at the same time it costs more memory and GPU performance. Adjust this value to what you find necessary in your game.
</member>
<member name="dynamic_gi_enabled" type="bool" setter="set_dynamic_gi_enabled" getter="is_dynamic_gi_enabled" default="false">
Turns on Dynamic GI. This provides global illumination (indirect light and reflections) for the whole scene. Only static objects contribute to GI while dynamic objects can also recieve it (check whether your object is static, dynamic or disabled with [member GeometryInstance3D.gi_mode].
@Calinou (Member) commented Dec 18, 2023

Suggested change
Turns on Dynamic GI. This provides global illumination (indirect light and reflections) for the whole scene. Only static objects contribute to GI while dynamic objects can also recieve it (check whether your object is static, dynamic or disabled with [member GeometryInstance3D.gi_mode].
If [code]true[/code], dynamic global illumination is enabled for the whole scene (indirect light and reflections). Only static objects contribute to GI while dynamic objects can also receive it (check whether your object is static, dynamic or disabled with [member GeometryInstance3D.gi_mode]).

Adjust the amount of energy that geometry recieved from GI. Use this only as a last resort because it affects everything uniformly and decreases the quality. If needed, consider using [member Environment.dynamic_gi_bounce_feedback] or [member Light3D.light_indirect_energy] to inject more energy into the system.
</member>
<member name="dynamic_gi_filter_ambient" type="bool" setter="set_dynamic_gi_filter_ambient" getter="is_dynamic_gi_filtering_ambient" default="true">
Filter the ambient light, this results in higher quality transitions between the cascades.

Suggested change
Filter the ambient light, this results in higher quality transitions between the cascades.
If [code]true[/code], filters the ambient light added by the dynamic GI system. This results in higher quality transitions between the cascades.

Filter the ambient light, this results in higher quality transitions between the cascades.
</member>
<member name="dynamic_gi_filter_probes" type="bool" setter="set_dynamic_gi_filter_probes" getter="is_dynamic_gi_filtering_probes" default="true">
Filter the probes (averaging probes with neighbouring probes) to smooth out the light transitions. This option can be used safely, as occlusion between probes is considered when filtering, but it may also result on lower light frequency.

Suggested change
Filter the probes (averaging probes with neighbouring probes) to smooth out the light transitions. This option can be used safely, as occlusion between probes is considered when filtering, but it may also result on lower light frequency.
If [code]true[/code], filters the probes (averaging probes with neighbouring probes) to smooth out indirect light transitions that may appear on surfaces that are mostly indirectly lit. This option can be used without the risk of introducing light leaking, as occlusion between probes is considered when filtering. Enabling probe filtering may result in reduced high-frequency detail in indirect lighting.

@Jamsers commented Jul 12, 2024

For screen space radiance cascades however, that's a different story. Pretty much everyone agrees it's incredible - probably the best screen space GI the industry has to offer right now, both in terms of performance and quality.

@WickedInsignia

Juan supposing at a glance that radiance cascades are impractical for world space is not the same as them actually being impractical for world space. You could just as well take Alexander (the triple-A dev implementing GI techniques in a triple-A game) at his word on the validity of the technique.
It's fine for @fakhraldin to entertain the idea as an additional GI technique alongside HDDAGI. I'm not partial to radiance cascades; they're just pointless to dismiss in comparison to the failed/unfinished/experimental techniques Godot uses and has used. Let's move this to a different proposal, though.

@fakhraldin

My apologies to everyone who felt inconvenienced by discussing Godot's GI development here. But put yourself in my shoes for a moment. My intention is to improve GI in Godot, to make it more competitive and more attractive to artists. However, I try to keep it technical.

It would be a dream to achieve something like Ubisoft's latest GI solution used for Avatar: Frontiers of Pandora and the upcoming Star Wars Outlaws. Their technique is currently seen as the most sophisticated dynamic GI solution for cross-gen and next-gen realtime video games. I don't propose something like this for Godot.
But it may surprise many people that their solution actually shares a similar ground concept with "radiance cascades". They also use a probe-based system. Instead of using radiance cascades, they combine different techniques: world space, screen space for high-frequency details, and ray tracing via hardware RT or compute-shader RT as a fallback. Similar to radiance cascades, they implement additional layers to achieve a wider spectrum of GI. They capture the world's details at "different grades of detection", so to speak.
The "radiance cascades" solution is far less complicated, more performant, and more scalable across hardware. Just like with SDFGI and HDDAGI, we already use a probe grid. "Radiance Cascades" just adds hierarchical probe grids with different resolutions to the existing one. This step increases detail capture and quality tremendously at a cheap cost, even with ray tracing.
We don't even need to make the additional probe grids mandatory. They could be optional in the editor and even be offered as an in-game option. The more grid levels you add, the more quality you can achieve, according to your liking and your machine. It is highly flexible.
From a technical standpoint I really don't see insurmountable objections to this solution, as it doesn't even interfere dramatically with the existing one. Rather, it can serve as an additional, supportive, optional layer on top of the basic probe grid. If you don't want to apply it to world space, then don't. There are many other ways.
"Radiance Cascades" can be combined with world space and hardware RT or compute-shader RT to achieve results similar to Ubisoft's GI solution, if not better. Many features in Godot turned out to be short-lived, obsolete code, but I really don't see "Radiance Cascades" as such. It could rather serve as a basis for further development and options that build upon it.
Our resources are limited, and it would truly be a missed opportunity not to take advantage of this low-hanging fruit, from which a great tree could grow.

@WickedInsignia

That's all fair but a little outside the scope of this PR.
I'd love to see a proposal for this so related discussion could be continued somewhere more suitable :)

@Jamsers commented Jul 12, 2024

What you're describing is a much bigger, more complex task, akin to implementing a Lumen-style solution that combines multiple techniques to provide low-tradeoff, high-performance, high-quality real-time GI. That's way out of the scope of this PR; SDFGI/HDDAGI, or a standalone implementation of world space radiance cascades, can only ever be a component of such a feature.

And I'll be the one to burst your bubble: the rendering team has no plans to implement such a feature. The plan is to jump straight to GPU-driven rendering with real-time ray tracing, because the cost of developing a solution like that would be huge, and it might ultimately end up being just an interim solution; for all we know, the next generation of consoles might be able to do straight-up ray tracing.

And, as you say, "Our resources are limited", and "Many features in godot turned out to be short-lived obsolete code". Surely you wouldn't wanna add to that? 😉

@Jamsers commented Jul 12, 2024

If you really insist that this is an essential feature, you should create a separate proposal, and create a pull request with an implementation of the feature. If you don't have the skills to create such a pull request, you can hire a developer to implement it for you. If you don't have the resources to hire a developer, you can try to rally the community and hope someone skilled enough picks it up. We've done the same with per-pixel motion blur in Godot to great success - so rest assured this isn't just me trying to wave you off - if there is enough community interest, it's gonna get done.

@octanejohn

I think people state false info with confidence because they don't value Juan's reasons. The author even said they don't know if it works with non-frustum, non-fixed-camera, triangle-based meshes without large tradeoffs.
The info is in the GP Discord, where the development happens.

@Parsnip commented Jul 12, 2024

OK, since people seem to be allergic to reading the whole thread Juan posted for some reason, I will summarize for you all.

Summary is greatly appreciated.
In case you (or anyone else) didn't know, it's actually not trivial to read Twitter threads anymore if you don't have a Twitter account, and frankly I don't blame anyone for not having one these days.

@fakhraldin commented Jul 12, 2024

What you're describing is a much bigger more complex task, akin to implementing a Lumen style solution that combines multiple techniques to provide low tradeoff, high performance, high quality real time GI. That's way out of the scope of this PR - SDFGI/HDDAGI or a standalone implementation of world space radiance cascades can only, at most, be a component of said feature.

And I'll be the one to burst your bubble - the rendering team has no plans to implement such a feature - the plan is to jump straight to GPU driven rendering with real time ray tracing, because the cost of developing a solution like that would be huge, and it might ultimately end up just being an interim solution, because for all we know the next generation of consoles might just be able to do straight up ray tracing.

And, as you say, "Our resources are limited", and "Many features in godot turned out to be short-lived obsolete code". Surely you wouldn't wanna add to that? 😉

I don't know why you are taking my comment out of context, despite me explicitly stating that I don't propose a combination of multiple GI techniques for Godot right now. If you think "radiance cascades" would be the same as those, then you still don't understand the concept of RC.
I really recommend Alexander's tech talk about RC first, and reading his paper. It should be comprehensible even for people without programming skills. I can help you with questions. I mentioned at the end of my previous comment that Radiance Cascades could be combined with multiple GI techniques in future projects, not now. You really misunderstood my points here completely, despite my detailed explanation.

As for your objection that radiance cascades shouldn't be implemented because Godot plans a direct transition to hardware ray tracing anyway, you again seem to miss some substantial facts about the situation in graphics. If you really think we are just one GPU generation away from achieving 1024-samples-per-pixel ray tracing in mass-production GPUs, that is very delusional. Even a 4090 struggles with diffuse path tracing at high resolutions and high samples per pixel.
(Edit: I mean in real time, of course, for games. Some seem to have misunderstood this point. A 4090 can of course render with 1024 spp, but it still needs hours to days. That is not real time.)
Therefore even this GPU needs crutches and tricks like upscaling, denoising, ambient occlusion, limiting bounces, etc. (for real-time games!)
Why do you think the industry is using these and other tricks, like probe-based RT GI such as DDGI, instead of per-pixel ray tracing? We have a huge inefficiency problem with divergence in ray tracing, and many other challenges. Even if we could build a monster RT GPU, there is still the challenge of RT on mobile devices and on widespread GPUs and consoles.
Nearly ten years have passed since RTX GPUs appeared, and we are still stuck in cross-gen with just a couple of RT titles, which, by the way, still use the aforementioned tricks. Nearly all big graphics engine developers plan for the RT future but are well aware of RT's limitations and the needs of current GPU hardware. Don't be misled by marketing promises. We need realistic solutions.

Proposing to largely skip cross-gen and wait for RT in the future would be a big mistake. Future techniques build upon today's concepts and compatibility; skipping them is not how to gain traction, especially when one depends on the other. May I remind you of the success of Blender? Its project leaders listen to the needs and wishes of artists and devs and grow by that. They took full advantage of open source rather than unreasonably restricting it. This is how they earned respect and a solid place in the scene, where they can finally influence trends and developments.

Now, please, let's end this gate-keepy discussion and return to actually improving HDDAGI. Do you have any positive and constructive ideas?

@Jamsers commented Jul 12, 2024

This discussion has run its course. As I mentioned here, please create a proposal and accompanying PR if possible. A hard turn to radiance cascades from the current HDDAGI implementation isn't a reasonable course of action for this PR anyway, so even if we did decide to do radiance cascades you'll need the proposal and PR anyway.

@Raikiri commented Jul 12, 2024

World space radiance cascades has occlusion challenges (sound familiar?) and memory consumption challenges - just like pretty much all real time GI solutions.

Radiance Cascades author here. It's strange to see RC being discussed in a PR for a different GI implementation, and it seems kind of rude towards the author of the PR. But somebody linked me this discussion and I thought I just had to clarify a couple things.

First, I never pitched 3D RC as the ultimate GI solution. I never even pitched it as a good GI solution. I wouldn't even call it practically viable by my standards, to be honest. I'm just saying that it's a direct improvement (in pretty much all parameters) over anything that uses a regular grid (or nested grids) of radiance probes, DDGI for example.

Second, the screenspace version of RC is only limited to on-screen occluders and light sources if screenspace raymarching is used. However, screenspace cascades can store worldspace radiance intervals (including offscreen geometry) if you have a way of casting worldspace rays, using either a BVH, an SDF, or a voxel raymarcher of some sort. The main limitation of this approach is that it only allows storing radiance on the surface of the depth buffer, so you can't use it for, e.g., volumetric lighting.

@fakhraldin

This discussion has run its course. As I mentioned here, please create a proposal and an accompanying PR if possible. A hard turn from the current HDDAGI implementation to radiance cascades isn't a reasonable course of action for this PR anyway, so even if we did decide to adopt radiance cascades, you would still need the proposal and PR.

This seems to be a misunderstanding again. I never proposed Radiance Cascades to replace HDDAGI, but to complement it, in order to mitigate some of its drawbacks like light leaking and to increase visual detail without exponential performance cost or hacky workarounds.
Read what Juan replied to your proposition to "set arbitrary probe densities" here.

I highly appreciate HDDAGI, primarily as a vehicle to accelerate ray tracing and ray marching calculations. I don't want to dismiss it. But as for the quality improvement, Juan himself listed "High density probes support" as a needed solution in the TODO list. Radiance Cascades is a more than legitimate proposition for this challenge.
It doesn't make any sense to propose Radiance Cascades in a separate request while HDDAGI is still under discussion and not conceptually finished yet.

@fakhraldin

Ok, here comes a proposal tackling the light leaking issue more specifically: how about applying a Radial Gaussian Depth Function around the probes? A custom iteration of it is used in GIBS for its surfel technique. It is inspired by Variance Shadow Maps, based on the 2006 paper by Donnelly, and was used in DDGI by Majercik in 2019.
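For readers unfamiliar with the technique: the Chebyshev-style visibility test behind Variance Shadow Maps and DDGI's probe visibility can be sketched in a few lines. This is a hedged, minimal C++ illustration of the general principle; the function name, the variance floor, and the moment inputs are illustrative assumptions, not code from GIBS, DDGI, or Godot.

```cpp
#include <algorithm>

// Each probe stores two filtered depth moments per direction:
// mean = E[d] and mean_sq = E[d^2] of the occluder distances it saw.
// At shading time, Chebyshev's inequality bounds the probability that
// an occluder lies between the probe and the shaded point, which is
// what suppresses light leaking through thin walls.
float chebyshev_visibility(float dist, float mean, float mean_sq) {
    if (dist <= mean) {
        return 1.0f; // closer than the average occluder: treat as fully visible
    }
    // Variance floor avoids division blow-ups on perfectly flat depth.
    float variance = std::max(mean_sq - mean * mean, 1e-6f);
    float delta = dist - mean;
    // Upper bound on P(occluder depth >= dist).
    return variance / (variance + delta * delta);
}
```

The weight falls off smoothly once the shaded point is farther than the mean occluder depth, so probes behind a wall contribute almost nothing to surfaces in front of it.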

@jams3223

This comment was marked as off-topic.

@atirut-w
Contributor

atirut-w commented Aug 3, 2024

Ok, here comes a proposal tackling the light leaking issue more specifically: how about applying a Radial Gaussian Depth Function around the probes? A custom iteration of it is used in GIBS for its surfel technique. It is inspired by Variance Shadow Maps, based on the 2006 paper by Donnelly, and was used in DDGI by Majercik in 2019.

I take it the engine's real-time GI solution can be considered production-ready with this in combination with HDDAGI? HDDAGI solves the black stairsteps and improves performance, and RGDF would solve light leaking around thin geometry; both are the main pain points with SDFGI.

@octanejohn

Juan said he already tried DDGI, and by extension the Radial Gaussian Depth Function, so no.

@jams3223

jams3223 commented Aug 3, 2024

Juan said he already tried DDGI, and by extension the Radial Gaussian Depth Function, so no.

What do you mean, he tried DDGI? Is it the same thing as the Radial Gaussian Depth Function?

@atirut-w
Contributor

atirut-w commented Aug 4, 2024

How about applying a Radial Gaussian Depth Function around the probes? A custom iteration of it is used in GIBS for its surfel technique. It is inspired by Variance Shadow Maps, based on the 2006 paper by Donnelly, and was used in DDGI by Majercik in 2019.

@jams3223 there, read more slowly.

juan said he already tried ddgi so by extension Radial Gaussian Depth Function, so no

DDGI is a completely different technique, so maybe he could try just the RGDF?

@jams3223

This comment was marked as off-topic.

@viksl

This comment was marked as off-topic.

@RadiantUwU
Contributor

[image]

@RadiantUwU
Contributor

[image]
I've been noticing some weird behaviour where HDDAGI puts light on objects that aren't within the light distance if the camera is close to them.

@RadiantUwU
Contributor

RadiantUwU commented Sep 1, 2024

Turned on HDDAGI, baked a VoxelGI at subdiv 512, set normal bias to 6 on the VoxelGI, then turned HDDAGI off:

================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine - Radiant Fork v4.3.stable.radiant_fork.transrights<3 (102b4525d332313c3d8ee794b38e742e32609835)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /usr/lib/libc.so.6(+0x3d1d0) [0x7ac2a1d3a1d0] (??:0)
[2] RendererRD::GI::process_gi(Ref<RenderSceneBuffersRD>, RID const*, RID, RID, unsigned int, Projection const*, Vector3 const*, Transform3D const&, PagedArray<RID> const&) (/home/radiant/godot/./servers/rendering/renderer_rd/environment/gi.cpp:3511)
[3] RendererSceneRenderImplementation::RenderForwardClustered::_pre_opaque_render(RenderDataRD*, bool, bool, bool, RID const*, RID) (/home/radiant/godot/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp:1488)
[4] RendererSceneRenderImplementation::RenderForwardClustered::_render_scene(RenderDataRD*, Color const&) (/home/radiant/godot/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp:2030)
[5] RendererSceneRenderRD::render_scene(Ref<RenderSceneBuffers> const&, RendererSceneRender::CameraData const*, RendererSceneRender::CameraData const*, PagedArray<RenderGeometryInstance*> const&, PagedArray<RID> const&, PagedArray<RID> const&, PagedArray<RID> const&, PagedArray<RID> const&, PagedArray<RID> const&, PagedArray<RID> const&, RID, RID, RID, RID, RID, RID, RID, int, float, RendererSceneRender::RenderShadowData const*, int, RendererSceneRender::RenderHDDAGIData const*, int, RendererSceneRender::RenderHDDAGIUpdateData const*, RenderingMethod::RenderInfo*) (/home/radiant/godot/./servers/rendering/renderer_rd/renderer_scene_render_rd.cpp:1239)
[6] RendererSceneCull::_render_scene(RendererSceneRender::CameraData const*, Ref<RenderSceneBuffers> const&, RID, RID, RID, unsigned int, RID, RID, RID, RID, int, float, bool, RenderingMethod::RenderInfo*) (/home/radiant/godot/./servers/rendering/renderer_scene_cull.cpp:3411)
[7] RendererSceneCull::render_camera(Ref<RenderSceneBuffers> const&, RID, RID, RID, Vector2, unsigned int, float, RID, Ref<XRInterface>&, RenderingMethod::RenderInfo*) (/home/radiant/godot/./servers/rendering/renderer_scene_cull.cpp:2648)
[8] RendererViewport::_draw_3d(RendererViewport::Viewport*) (/home/radiant/godot/./servers/rendering/renderer_viewport.cpp:250)
[9] RendererViewport::_draw_viewport(RendererViewport::Viewport*) (/home/radiant/godot/./servers/rendering/renderer_viewport.cpp:317)
[10] RendererViewport::draw_viewports(bool) (/home/radiant/godot/./servers/rendering/renderer_viewport.cpp:810)
[11] RenderingServerDefault::_draw(bool, double) (/home/radiant/godot/./servers/rendering/rendering_server_default.cpp:88)
[12] RenderingServerDefault::draw(bool, double) (/home/radiant/godot/./servers/rendering/rendering_server_default.cpp:412)
[13] Main::iteration() (/home/radiant/godot/main/main.cpp:4123)
[14] OS_LinuxBSD::run() (/home/radiant/godot/platform/linuxbsd/os_linuxbsd.cpp:962)
[15] /home/radiant/godot/bin/godot.linuxbsd.editor.dev.x86_64.llvm(main+0x1bf) [0x65487ec0834f] (/home/radiant/godot/platform/linuxbsd/godot_linuxbsd.cpp:86)
[16] /usr/lib/libc.so.6(+0x25e08) [0x7ac2a1d22e08] (??:0)
[17] /usr/lib/libc.so.6(__libc_start_main+0x8c) [0x7ac2a1d22ecc] (??:0)
[18] /home/radiant/godot/bin/godot.linuxbsd.editor.dev.x86_64.llvm(_start+0x25) [0x65487ec080b5] (??:?)
-- END OF BACKTRACE --
================================================================

@RadiantUwU
Contributor

More details about the crash:

Process 369431 stopped
* thread #1, name = 'godot.linuxbsd.', stop reason = signal SIGSEGV: address not mapped to object (fault address: 0x1a8)
    frame #0: 0x000055555cd9ae37 godot.linuxbsd.editor.dev.x86_64.llvm`RendererRD::GI::process_gi(this=0x00005555637de818, p_render_buffers=Ref<RenderSceneBuffersRD> @ 0x00007fffffff8950, p_normal_roughness_slices=0x00007fffffffa580, p_voxel_gi_buffer=(_id = 1225139421193172), p_environment=(_id = 667631191326724), p_view_count=1, p_projections=0x00007fffffffaeb0, p_eye_offsets=0x00007fffffffae98, p_cam_transform=0x00007fffffffade4, p_voxel_gi_instances=0x0000555562cffca8) at gi.cpp:3511:44
   3508                 RID uniform_set = UniformSetCacheRD::get_singleton()->get_cache(
   3509                                 shader.version_get_shader(shader_version, 0),
   3510                                 0,
-> 3511                                 RD::Uniform(RD::UNIFORM_TYPE_IMAGE, 1, hddagi->voxel_bits_tex),
   3512                                 RD::Uniform(RD::UNIFORM_TYPE_IMAGE, 2, hddagi->voxel_region_tex),
   3513                                 RD::Uniform(RD::UNIFORM_TYPE_TEXTURE, 3, hddagi->voxel_light_tex),
   3514                                 RD::Uniform(RD::UNIFORM_TYPE_TEXTURE, 4, hddagi->lightprobe_specular_tex),

Contributor

@RadiantUwU RadiantUwU left a comment


This results in a segmentation violation when process_gi is called for VoxelGI.

@@ -3856,12 +3456,12 @@ void GI::process_gi(Ref<RenderSceneBuffersRD> p_render_buffers, const RID *p_nor
push_constant.proj_info[2] = (1.0f - p_projections[0].columns[0][2]) / p_projections[0].columns[0][0];
push_constant.proj_info[3] = (1.0f + p_projections[0].columns[1][2]) / p_projections[0].columns[1][1];

bool use_sdfgi = p_render_buffers->has_custom_data(RB_SCOPE_SDFGI);
bool use_hddagi = p_render_buffers->has_custom_data(RB_SCOPE_HDDAGI);
Contributor


In the case of HDDAGI not being used, we know that hddagi stays null.

vgiu.append_id(rbgi->voxel_gi_textures[i]);
}

RID uniform_set = UniformSetCacheRD::get_singleton()->get_cache(
Contributor


The issue is that when hddagi is null, we still dereference it, right here.
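To make the failure mode concrete, here is a hedged, self-contained C++ sketch of the crash pattern and the obvious guard; the struct and function names are illustrative stand-ins, not Godot's actual types from gi.cpp:

```cpp
#include <cstdint>

// Stand-in for the HDDAGI state that the renderer allocates lazily.
struct HDDAGI {
    uint64_t voxel_bits_tex = 0;
};

struct GIState {
    HDDAGI *hddagi = nullptr; // stays null when only VoxelGI is active
};

// The crashing code built the uniform set from gi.hddagi->... members
// unconditionally; with only VoxelGI active, hddagi is null and the
// dereference faults. Gating on the pointer (or on use_hddagi) avoids it.
uint64_t voxel_bits_tex_or_fallback(const GIState &gi, uint64_t fallback_tex) {
    if (gi.hddagi != nullptr) {
        return gi.hddagi->voxel_bits_tex;
    }
    return fallback_tex; // e.g. a dummy/default texture for the uniform set
}
```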

@izarii-dev

I have an idea: is it possible to add an option to include local lights entirely in the GI to improve performance?

@Jamsers

Jamsers commented Oct 3, 2024

I have an idea: is it possible to add an option to include local lights entirely in the GI to improve performance?

You mean like this proposal? godotengine/godot-proposals#8098

@izarii-dev

izarii-dev commented Oct 3, 2024

You mean like this proposal? godotengine/godot-proposals#8098

Yes, but I got this idea from NVIDIA's RTXDI and Epic's MegaLights.

@fakhraldin

Yes, but I got this idea from NVIDIA's RTXDI and Epic's MegaLights.

Epic's MegaLights seems to be based on a paper by Cem Yüksel called Stochastic Lightcuts for Sampling Many Lights. Here is the demo.

It seems like an additional, well-founded and performant solution for rendering local lights and local shadows, one that also tackles light leaking; otherwise MegaLights wouldn't make much sense. Sounds interesting.
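The principle these many-light techniques share can be illustrated with a minimal sketch: instead of shading every light per pixel, stochastically pick one light with probability proportional to an importance estimate and divide its contribution by that probability, which keeps the estimator unbiased. This hedged C++ illustration shows only the flat-CDF core idea; MegaLights, RTXDI, and Stochastic Lightcuts layer hierarchical light trees and spatiotemporal sample reuse on top of it, and the names here are assumptions, not their APIs:

```cpp
#include <cstddef>
#include <vector>

struct Light {
    float importance; // e.g. intensity attenuated by squared distance
};

// Pick one light index using a uniform random number u in [0, 1);
// writes the probability it was chosen with, so the caller can divide
// the light's contribution by it (importance sampling).
int sample_one_light(const std::vector<Light> &lights, float u, float *pdf) {
    float total = 0.0f;
    for (const Light &l : lights) {
        total += l.importance;
    }
    float target = u * total;
    float accum = 0.0f;
    for (std::size_t i = 0; i < lights.size(); ++i) {
        accum += lights[i].importance;
        if (target <= accum) {
            *pdf = lights[i].importance / total;
            return static_cast<int>(i);
        }
    }
    // Numerical fallback: return the last light.
    *pdf = lights.back().importance / total;
    return static_cast<int>(lights.size()) - 1;
}
```

Brighter or closer lights are sampled more often, so per-pixel cost stays constant no matter how many lights the scene contains.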
