
Next-gen rendering platform

Goals (cf. mission statement)

High quality rendering

  • Path tracing
  • Global illumination
  • Advanced shading

Ease of use

  • Web UI
  • Nodegraph editor
  • IPython/Jupyter interface

Accessible

  • Web-based
  • Remote rendering
  • Any client device

Scalable

  • Very large datasets
  • Super high resolution

Extensible

  • Modules
  • API
  • Plug-ins for analysis
  • Novel scientific rendering modes (electron microscopy, LFP, fluorescence microscopy, virtual MRI, …)

Immersive

  • OpenDeck
  • 3D stereoscopy
  • Head tracking

Unified rendering pipeline

  • Easier to maintain
  • Increased team focus
  • Decreased risk

Feature list

Architectural features

  • Client-server architecture
  • Handle massive datasets: out-of-core rendering and an LOD scheme (see e.g. VoxLOD) for very large datasets; a sketch of the selection idea follows this list
  • Modular approach
  • Load balancing
  • CPU- and GPU-based (CPU has highest priority)
  • Network rendering
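A minimal sketch of the out-of-core LOD idea above, in the spirit of VoxLOD: given a multi-resolution brick hierarchy, walk the levels from coarse to fine and stop at the first one whose voxels project to less than a pixel-error budget, so only those bricks need to be paged in from disk. All names here (`Brick`, the level layout) are hypothetical, not an existing API.

```python
import math
from dataclasses import dataclass

@dataclass
class Brick:
    center: tuple       # world-space centre of the brick
    voxel_size: float   # world-space size of one voxel at this LOD level
    path: str           # file to stream the brick from when it is selected

def projected_voxel_pixels(brick, cam_pos, fov_y, image_height):
    """Approximate on-screen size of one voxel, in pixels (pinhole camera)."""
    dist = math.dist(brick.center, cam_pos)
    pixels_per_unit = image_height / (2.0 * dist * math.tan(fov_y / 2.0))
    return brick.voxel_size * pixels_per_unit

def select_lod(levels, cam_pos, fov_y=math.radians(45),
               image_height=1080, error_budget_px=1.0):
    """levels[0] is finest; walk coarse -> fine and stop once voxels are
    smaller on screen than the error budget."""
    selected = levels[-1]                        # fall back to the coarsest level
    for bricks in reversed(levels):              # coarsest level first
        selected = bricks
        worst = max(projected_voxel_pixels(b, cam_pos, fov_y, image_height)
                    for b in bricks)
        if worst <= error_budget_px:
            break
    return selected   # only these bricks get paged in
```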

Web/cloud related features

  • Rich web-based UI
  • Python API (for Jupyter notebooks; a client sketch follows this list)
  • Node graph editor (see www.webglstudio.org for an open-source JS implementation) for easy “visual programming”
  • Easy deployment with Docker (or OpenShift)
  • Real-time streaming: JPEGs and PNGs, WebRTC (low priority)
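As a rough sketch of what the Jupyter-facing Python API could look like: request a frame from the remote render server over HTTP and show the returned JPEG inline in a notebook cell. The server address and the /render endpoint are invented for illustration; the actual protocol is still to be defined.

```python
import requests
from IPython.display import Image, display

RENDER_SERVER = "http://render-server.example.org:8200"  # hypothetical address

def render_frame(camera_position, resolution=(800, 600), samples=16):
    """Request one rendered frame; the server replies with a JPEG."""
    response = requests.post(
        f"{RENDER_SERVER}/render",               # hypothetical endpoint
        json={
            "camera": {"position": camera_position},
            "resolution": list(resolution),
            "samples_per_pixel": samples,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.content                      # raw JPEG bytes

# In a notebook cell:
# display(Image(data=render_frame([0.0, 0.0, 5.0])))
```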

Rendering features

  • Support for multiple geometry primitives: cylinders, cubes (to visualize bounding boxes), spheres, triangles, splines, NURBS, subdivision surfaces
  • Environment lighting
  • Area lights
  • Noise reduction (a sketch of a few of these techniques follows the feature list)
    • Cosine-weighted importance sampling
    • Quasi-Monte Carlo sampling (Sobol sequences, low-discrepancy sampling, Cranley-Patterson rotation)
    • Denoiser (machine learning)
    • Direct lighting + next event estimation
    • Russian roulette (kills rays with a certain probability; survivors are reweighted to keep the estimator unbiased)
    • HDR environment map importance sampling
    • (Portals)
    • Efficient many-lights rendering (light tree)
    • Bidirectional path tracing (BDPT)
    • VCM: vertex connection and merging
    • Metropolis light transport (MLT)
    • Stratified sampling on area lights
    • Glossy filter/clamp
    • Caustics filter/clamping
  • Volume rendering
    • Exposure Render
    • Volume irradiance caching (follow-up to Exposure Render)
  • Subsampling
  • Ray casting
  • AO mode
  • Object ID mode
  • Material mode
  • Different rendering modes (simple ray casting, shadows only, AO mode, GI mode)
  • Multiple camera views
  • Camera: orthographic, perspective, panoramic, fisheye lens (microscope)
  • Depth of field (camera aperture, focal distance)
  • Surface + volume rendering + hybrid (see Exposure Render) (file format?)
  • Efficient dynamic BVH building: selective rebuilding (keep static parts, rebuild dynamic ones)
  • Arbitrary clipping planes
  • Efficient transparency (investigate multi-hit ray traversal)
  • Efficient rendering of thin geometry (fibres, branches) -> also check multi-hit ray traversal (presentation and source code)
  • Selective visibility of subsets of cells (requires dynamic BVH? Or per ray object ID checking?)
  • Multi-scale rendering
  • On-demand geometry streaming/paging
  • Out-of-core
  • Level of detail rendering (see VoxLOD)
  • Subdivision surfaces to increase tessellation for close-by compartments
  • Tonemapping (Reinhard, “cinematic”)
  • Post-processing effects: contrast, color saturation, brightness, exposure
  • Instancing
  • Advanced shaders: Subsurface scattering, Disney PBR shader, GGX
  • Real-time anti-aliasing: adaptive supersampling, FXAA, something smarter?
  • Stereoscopic 3D rendering
  • Render region
  • Colour maps
  • Local Field Potential visualization (volume rendering?)
  • Displacement mapping for adding more relief
  • Spectral rendering for wavelength based effects
  • Render passes (normal, diffuse, object ID, material ID, AO pass, direct/indirect diffuse, direct/indirect glossy): low priority, but useful for compositing
  • Offline (batch) rendering for movies
  • Animation: Define camera paths, animate simulation data
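To make a few of the noise-reduction and tonemapping items above concrete, here is an illustrative sketch (not the renderer's implementation) of cosine-weighted hemisphere sampling, Russian-roulette path termination, and the Reinhard operator:

```python
import math
import random

def cosine_weighted_sample():
    """Direction in the local frame (z = surface normal) with
    pdf = cos(theta) / pi; this cancels the cosine factor in the diffuse
    rendering equation and lowers variance."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2   # point on the unit disk
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))            # lift disk point to hemisphere
    return (x, y, z)   # rotate into world space around the actual normal

def russian_roulette(throughput, depth, min_depth=3):
    """Kill a path with some probability; reweight survivors so the
    estimator stays unbiased. Returns (new_throughput, alive)."""
    if depth < min_depth:
        return throughput, True
    p_survive = min(1.0, max(throughput))        # e.g. max RGB component
    if random.random() > p_survive:
        return throughput, False                 # path terminated
    return tuple(c / p_survive for c in throughput), True

def reinhard(rgb):
    """Reinhard tonemapping, L / (1 + L), applied per channel."""
    return tuple(c / (1.0 + c) for c in rgb)
```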

Opendeck/Tide features

  • Omnidirectional stereo camera (a ray-generation sketch follows this list)
  • OpenDeck camera
  • VRPN tracking
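A sketch of omnidirectional-stereo (ODS) ray generation, the usual way to produce a 360° stereo panorama for a display such as the OpenDeck: each image column gets its own pair of eye positions on a circle whose diameter equals the interpupillary distance, so one panoramic stereo pair works for any viewing direction. The axis convention and eye-sign choice below are assumptions.

```python
import math

def ods_ray(u, v, eye, ipd=0.064):
    """Ray (origin, direction) for normalized panorama coords u, v in [0, 1).

    eye: -1 for the left eye, +1 for the right eye.
    """
    theta = (u - 0.5) * 2.0 * math.pi            # azimuth, full 360 degrees
    phi = (v - 0.5) * math.pi                    # elevation, -90..+90 degrees
    # View direction on the unit sphere (y up).
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(phi),
                 -math.cos(theta) * math.cos(phi))
    # Eye origin: offset perpendicular to the azimuth, on the IPD circle.
    radius = eye * ipd / 2.0
    origin = (radius * math.cos(theta), 0.0, radius * math.sin(theta))
    return origin, direction
```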

Scientific use cases (domain-driven, user-driven)

  • Topology viewer
  • Local Field Potential rendering
  • Visualize slices, sections, subsets of cells
  • ….

Experimental/research features

  • Spatiotemporal reprojection and rewarping
  • Denoiser (with and without machine learning)
  • ARCore experiments + lightfields
  • HoloLens experiment for real-time holographic rendering?
  • Rendering acceleration using deep-learning algorithms (upscaling, reprojection, denoising, pix2pix, more efficient computing, neuromorphic SpiNNaker)
  • Lightfield rendering (precomputed 3D, DOF effect as a post-process)
  • Neuromorphic chips, FPGAs
  • Filtering
  • Investigate 2 TB AMD GPUs
  • Novel scientific rendering modes (electron microscopy, LFP, fluorescence microscopy, virtual MRI, …)

Open questions

  • Do we still need local (WebGL-based) rendering, e.g. for single neuron morphologies? -> could use light fields instead, or …