Stereo rendering for 3D displays and virtual reality headsets provides several visual cues, including convergence angle and highlight disparity. Naïve stereo rendering effectively doubles the computational burden of image synthesis, so it is desirable to reuse as many computations as possible between the stereo image pair. Computing a single radiance for a point on a surface, to be used when synthesizing both the left and right images, results in the loss of highlight disparity. Our hypothesis is that the absence of highlight disparity does not impair perception of surface properties at larger distances. We verify this hypothesis with a user study and provide rendering guidelines to leverage our findings.
We present several practical improvements to a recent layered reconstruction algorithm for defocus and motion blur. We leverage hardware texture filters, layer merging, and sparse statistics to reduce computational complexity. Furthermore, we restructure the algorithm for better load balancing on graphics processors, albeit at increased memory usage. We show performance gains of 2-5x with almost no difference in image quality, bringing this reconstruction technique to the real-time domain.
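One of the listed optimizations, layer merging driven by sparse statistics, can be caricatured as collapsing adjacent depth layers that hold too few samples so that less per-pixel filtering work remains. The greedy criterion below is a hypothetical stand-in for illustration, not the paper's actual merge rule:

```python
def merge_sparse_layers(layer_counts, min_samples):
    """Greedily merge each depth layer into its predecessor while
    the predecessor is still under-populated. layer_counts lists
    per-layer sample counts front to back; min_samples is an
    assumed threshold, not a value from the paper."""
    merged = []
    for n in layer_counts:
        if merged and merged[-1] < min_samples:
            merged[-1] += n  # fold into the sparse layer in front
        else:
            merged.append(n)  # layer is dense enough; keep it
    return merged
```

With a threshold of 4, the sparse front layers `[1, 2, ...]` collapse into one, while a dense layer of 10 samples stays separate until trailing sparse layers are folded behind it.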
Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either slow or too imprecise to use in high quality rendering. We extend previous work on real-time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high quality images even for a low number of input samples. Our algorithm is temporally robust, and is about two orders of magnitude faster than previous work, making it suitable for both real-time rendering and as a post-processing pass for offline rendering.
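A sheared reconstruction filter reprojects each sample along the defocus and motion directions before filtering, so that samples from one surface line up under an axis-aligned kernel. A minimal sketch of that reprojection, with an assumed parameterization (`c` as the depth-derived defocus slope, `v` as screen-space motion; the names and the exact form are illustrative, not the paper's derivation):

```python
def shear(x, u, t, c, v):
    """Reproject a sample's screen coordinate x, taken at lens
    position u and time t, into the filter's canonical frame.
    After shearing, samples from a surface with defocus slope c
    and motion v align and can be averaged directly."""
    return x - c * u - v * t
```

For example, samples at `(x=1.0, u=0.5)` and `(x=2.0, u=1.0)` on a static surface with slope `c=2.0` both shear to the same coordinate, which is what lets a low sample count still yield a clean reconstruction.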
We present a novel architecture for flexible control of shading rates in a GPU pipeline, and demonstrate substantially reduced shading costs for various applications. We decouple shading and visibility by restricting and quantizing shading rates to a finite set of screen-aligned grids, leading to simpler and fewer changes to the GPU pipeline compared to alternative approaches. Our architecture introduces different mechanisms for programmable control of the shading rate, which enable efficient shading in several scenarios, e.g., rendering for high pixel density displays, foveated rendering, and adaptive shading for motion and defocus blur. We also support shading at multiple rates in a single pass, which allows the user to compute different shading terms at rates better matching their frequency content.
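A toy illustration of quantizing a desired shading rate to a finite set of screen-aligned grids: pick the coarsest grid whose cell size does not exceed the local blur footprint. Both the rate set and the blur-based heuristic are assumptions for illustration, not the architecture's actual policy:

```python
RATES = (1, 2, 4)  # allowed shading grid cell sizes in pixels (assumed set)

def pick_rate(blur_radius_px):
    """Return the coarsest screen-aligned shading grid whose cell
    size is no larger than the local blur footprint, so heavily
    blurred regions are shaded less often than sharp ones."""
    rate = RATES[0]
    for r in RATES:
        if r <= max(1.0, blur_radius_px):
            rate = r
    return rate
```

A sharp pixel (`blur_radius_px=0.5`) shades at full rate, while a region blurred over 5 pixels drops to one shading sample per 4x4 block.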
We introduce a novel voxel-based algorithm that interactively simulates both diffuse and glossy single-bounce indirect illumination. Our algorithm generates high quality images similar to the reference solution while using only a fraction of the memory of previous methods. The key idea in our work is to decouple occlusion data, stored in voxels, from lighting and geometric data, encoded in a new per-light data structure called layered reflective shadow maps (LRSMs). We use voxel cone tracing for visibility determination and integrate outgoing radiance by performing lookups in a pre-filtered LRSM. Finally, we demonstrate that our simple data structures are easy to implement and can be rebuilt every frame to support both dynamic lights and scenes.
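Voxel cone tracing for visibility amounts to marching along a cone, widening the lookup footprint with distance and accumulating occlusion front to back. The sketch below shows that accumulation loop; `sample_occlusion` is a hypothetical stand-in for a mipmapped voxel lookup, and the step sizes and termination thresholds are assumptions, not the paper's values:

```python
def cone_trace_occlusion(sample_occlusion, origin, direction, aperture, max_dist):
    """Front-to-back visibility accumulation along a cone.
    sample_occlusion(pos, radius) returns pre-filtered occlusion
    in [0, 1] for the given position and footprint radius, as a
    mip level of a voxel grid would. Returns total occlusion."""
    t = 0.1                # assumed start offset to skip self-occlusion
    transmittance = 1.0
    while t < max_dist and transmittance > 0.01:
        radius = aperture * t  # cone footprint grows with distance
        pos = tuple(o + d * t for o, d in zip(origin, direction))
        a = sample_occlusion(pos, radius)
        transmittance *= (1.0 - a)      # front-to-back compositing
        t += max(radius, 1e-3)          # step roughly one footprint
    return 1.0 - transmittance
```

Because the footprint (and hence the step) grows linearly with distance, the march takes logarithmically many steps, which is what makes per-frame cone-traced visibility tractable.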