Game Research

Practical Layered Reconstruction for Defocus and Motion Blur

We present several practical improvements to a recent layered reconstruction algorithm for defocus and motion blur. We leverage hardware texture filters, layer merging, and sparse statistics to reduce computational complexity. Furthermore, we restructure the algorithm for better load balancing on graphics processors, albeit at increased memory usage. We show performance gains of 2-5x with almost no difference in image quality, bringing this reconstruction technique into the real-time domain.
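One of the complexity reductions mentioned above is layer merging. The abstract does not spell out the merging criterion, so the following is only an illustrative sketch under an assumed policy: adjacent depth layers holding few samples are greedily merged until each merged group reaches a minimum sample count, reducing the number of layers that must be filtered.

```python
# Illustrative sketch (assumed policy, not the paper's exact method):
# greedily merge adjacent depth layers so each merged group holds at
# least `min_samples` samples, cutting per-layer filtering cost.

def merge_sparse_layers(layer_counts, min_samples):
    """layer_counts: number of samples in each depth layer, near to far.
    Returns a list of (start, end) inclusive index ranges describing
    the merged groups (the final group may fall short of the target)."""
    groups = []
    start = 0
    total = 0
    for i, count in enumerate(layer_counts):
        total += count
        if total >= min_samples:
            groups.append((start, i))  # group reached the target size
            start = i + 1
            total = 0
    if start < len(layer_counts):
        groups.append((start, len(layer_counts) - 1))  # leftover tail
    return groups
```

For example, with layer sample counts [10, 2, 3, 50, 1] and a target of 12, the first two sparse layers merge into one group while the well-populated fourth layer stays effectively on its own.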
  • Developers
  • Game Development
  • Game Research
  • Rendering Research
  • Graphics
  • Rendering
Layered Reconstruction for Defocus and Motion Blur

    Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either slow or too imprecise to use in high quality rendering. We extend previous work on real-time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high quality images even for a low number of input samples. Our algorithm is temporally robust, and is about two orders of magnitude faster than previous work, making it suitable for both real-time rendering and as a post-processing pass for offline rendering.
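The core idea of a sheared reconstruction filter can be conveyed with a heavily simplified 1D sketch. This is not the paper's derivation; it assumes each sample carries a screen position, a time value, and a per-sample motion vector, and that shearing by the first-order motion x(t) = x0 + v*t aligns samples from all times before they are averaged.

```python
# A loose conceptual sketch of a sheared reconstruction filter for
# motion blur (simplified 1D form; an assumption, not the paper's
# actual filter). Shearing sample positions by their motion removes
# the blur direction, so samples from different times line up.

def sheared_reconstruct(samples, pixel_x, radius):
    """samples: list of (x, t, v, color) tuples, with t in [0, 1] and
    v the per-sample motion vector. Averages the colors of samples
    whose sheared position x - v*t lies within `radius` of `pixel_x`."""
    total = 0.0
    count = 0
    for x, t, v, color in samples:
        x_sheared = x - v * t  # reproject the sample back to time t = 0
        if abs(x_sheared - pixel_x) <= radius:
            total += color
            count += 1
    return total / count if count else 0.0
```

Because the shear gathers many aligned samples into one estimate, even a low input sample count yields a far less noisy result than filtering in unsheared screen space.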
Coarse Pixel Shading

    We present a novel architecture for flexible control of shading rates in a GPU pipeline, and demonstrate substantially reduced shading costs for various applications. We decouple shading and visibility by restricting and quantizing shading rates to a finite set of screen-aligned grids, leading to simpler and fewer changes to the GPU pipeline compared to alternative approaches. Our architecture introduces different mechanisms for programmable control of the shading rate, which enables efficient shading in several scenarios, e.g., rendering for high pixel density displays, foveated rendering, and adaptive shading for motion and defocus blur. We also support shading at multiple rates in a single pass, which allows the user to compute different shading terms at rates better matching their frequency content.
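The quantization step above can be sketched in a few lines. The abstract does not list the actual grid set, so the rates below are an assumption: a rate of n means one shading sample per n-by-n pixel block, and a requested rate is snapped to the coarsest supported grid that does not shade more sparsely than asked for.

```python
# Illustrative sketch of quantizing a requested shading rate to a
# finite set of screen-aligned grids. The supported set is assumed
# (the abstract does not specify the exact grids).

SUPPORTED_RATES = [1, 2, 4]  # assumed grids: 1x1, 2x2, 4x4 pixels

def quantize_rate(requested):
    """Pick the coarsest supported rate that is still no coarser than
    the request, so shading is never sparser than the caller asked."""
    chosen = SUPPORTED_RATES[0]
    for rate in SUPPORTED_RATES:
        if rate <= requested:
            chosen = rate
    return chosen
```

Restricting control to a small quantized set like this is what keeps the pipeline changes simple: visibility still runs per pixel, and only the shading grid granularity varies per screen region.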
Layered Reflective Shadow Maps for Voxel-based Indirect Illumination

    We introduce a novel voxel-based algorithm that interactively simulates both diffuse and glossy single-bounce indirect illumination. Our algorithm generates high quality images similar to the reference solution while using only a fraction of the memory of previous methods. The key idea in our work is to decouple occlusion data, stored in voxels, from lighting and geometric data, encoded in a new per-light data structure called layered reflective shadow maps (LRSMs). We use voxel cone tracing for visibility determination and integrate outgoing radiance by performing lookups in a pre-filtered LRSM. Finally we demonstrate that our simple data structures are easy to implement and can be rebuilt every frame to support both dynamic lights and scenes.
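The visibility half of the algorithm, cone tracing through a pre-filtered (mip-mapped) voxel occlusion volume, can be sketched conceptually. The stepping and sampling details below are assumptions, and `occlusion_at` is a hypothetical lookup returning average occupancy in [0, 1] at a given position and mip level.

```python
# Conceptual sketch of cone-traced visibility over a pre-filtered
# voxel occlusion volume (assumed details, in the spirit of voxel
# cone tracing). As the cone widens with distance, coarser mip
# levels are sampled, and occlusion is composited front to back.
import math

def cone_trace_visibility(occlusion_at, origin, direction,
                          cone_ratio, max_dist, base_step=1.0):
    """occlusion_at(pos, level): hypothetical lookup of average
    occupancy in [0, 1]. Returns the fraction of light surviving
    along the cone."""
    visibility = 1.0
    dist = base_step
    while dist < max_dist and visibility > 0.01:
        diameter = max(base_step, cone_ratio * dist)  # cone footprint
        level = math.log2(diameter / base_step)       # wider -> coarser mip
        pos = (origin[0] + direction[0] * dist,
               origin[1] + direction[1] * dist,
               origin[2] + direction[2] * dist)
        visibility *= 1.0 - occlusion_at(pos, level)  # front-to-back
        dist += diameter * 0.5                        # step with footprint
    return visibility
```

Keeping only occupancy in the voxels is what makes the decoupling work: the cone trace answers "how much is blocked," while radiance itself comes from the separate pre-filtered LRSM lookup.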