Deep Shading Buffers on Commodity GPUs


By Petrik Clarberg and Jacob Munkberg
Intel Corporation


Real-time rendering with true motion and defocus blur remains an elusive goal for application developers. In recent years, substantial progress has been made in the areas of rasterization, shading, and reconstruction for stochastic rendering. However, we have yet to see an efficient method for decoupled sampling that can be implemented on current or near-future graphics processors. In this paper, we propose one such algorithm that leverages the capability of modern GPUs to perform unordered memory accesses from within shaders. Our algorithm builds per-pixel primitive lists in canonical shading space. All shading then takes place in a single, non-multisampled forward rendering pass using conservative rasterization. This pass exploits the rasterization and shading hardware to perform shading very efficiently, and only samples that are visible in the final image are shaded. Finally, the shading samples are gathered and filtered to create the final image. The input to our algorithm can be generated using a variety of methods; we show examples produced with interactive stochastic and interleaved rasterization, as well as ray tracing.
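To illustrate the core idea of decoupled sampling described above, the following is a minimal, hypothetical sketch (not the paper's actual implementation): many stochastic visibility samples of a moving primitive map back to a canonical shading space, where each quantized shading coordinate is shaded only once and the cached result is reused for every visibility sample that lands in it. The function names, the shading-space mapping, and the quantization scheme are all illustrative assumptions.

```python
# Sketch of decoupled sampling: visibility samples are mapped to a
# canonical shading space; each quantized shading cell is shaded once
# and reused. All names and mappings here are illustrative.

def shade(u, v):
    # Stand-in for an expensive shader evaluation at shading-space (u, v).
    return (u * 0.5, v * 0.5, 1.0)

def render(visibility_samples, to_shading_space, rate=4):
    """Shade each quantized shading-space cell once; reuse the result
    for all visibility samples that map into that cell."""
    cache = {}   # shading cache keyed by quantized shading coordinate
    image = []
    for (x, y, t) in visibility_samples:
        u, v = to_shading_space(x, y, t)
        key = (round(u * rate), round(v * rate))  # quantize to shading rate
        if key not in cache:
            cache[key] = shade(u, v)              # shade once per cell
        image.append(cache[key])
    return image, len(cache)

# Example: 16 stochastic time samples of a primitive translating at a
# constant screen-space velocity all map back to the same canonical
# shading coordinate, so only one shader invocation is needed.
velocity = 0.8
samples = [(0.5 + velocity * (i / 16.0), 0.25, i / 16.0) for i in range(16)]
img, num_shaded = render(samples, lambda x, y, t: (x - velocity * t, y))
```

In this toy example, 16 visibility samples produce a single shader invocation (`num_shaded == 1`), which is the shading-rate decoupling that motion and defocus blur rely on: shading cost stays proportional to the number of unique shading points, not the number of stochastic samples.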

Citation: Petrik Clarberg, Jacob Munkberg, “Deep Shading Buffers on Commodity GPUs”, ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2014), vol. 33(6), 2014.
