Intel® Open Image Denoise Library Saves Time, Boosts Quality

By Garret Romaine, Carson Brownlee, Attila Tamas Afra, Published: 03/15/2019, Last Updated: 03/15/2019

Game developers face a complex trade-off when using ray tracing to boost realism and immersion. Rendering can take hours to fully converge to a high-quality image, so denoising methods are often used to reduce the time to convergence, in many cases by multiple orders of magnitude. Denoising filters reduce noise and improve image quality, but developing high-quality, high-performance filters is difficult and requires domain expertise. To overcome these challenges, Intel has created a complete solution: a high-performance, open-source denoising filter for images rendered with ray tracing. Available in beta, it is integrated into the Unity* game development engine. In this white paper, we’ll discuss denoising, describe Intel’s solution, and show how it helps developers add complexity and image quality to their games.

New Denoise Library

Intel® Open Image Denoise is an open-source library that is part of the Intel® Rendering Framework, released under the Apache* 2.0 license. Its purpose is to provide developers with an open-source, high-quality solution that significantly reduces rendering times. It does so by filtering out Monte Carlo noise common to ray tracing methods such as path tracing.

The library:

  • helps reduce the number of necessary samples per pixel
  • incorporates a flexible C/C++ application programming interface (API)
  • contains extensive documentation
  • can be easily incorporated into most rendering solutions

Ray tracing, as David Bookout wrote for Intel, “is a rendering technique that generates an image by tracing light paths as pixels in an image plane, then simulates the effects as those paths encounter various objects. The results can be stunning, but the computational requirements are huge.”

Image noise is often a result of computational limitations, says Attila Áfra, a graphics software engineer with Intel. Áfra holds a PhD in computer science and is an expert on ray tracing-based rendering. He helped Intel develop open-source libraries dedicated to solving issues with visualization and rendering. He works on the Intel® Embree ray tracing library and the Intel® OSPRay project, a high fidelity visualization library, and is currently dedicated to the Intel Open Image Denoise library.

Figure 1. Original Amazon Lumberyard Bistro image, rendered at 64 samples per pixel (spp). Note the noise in the windows and their grainy appearance. (Interactive before-and-after comparisons of the images in this document are available in the Open Image Denoise Gallery.)

Figure 2. Denoised image, with less noise in the windows and a cleaner overall appearance.

“Most ray tracing algorithms are stochastic, which means that essentially you use random numbers, which causes noise in the resulting image,” says Áfra. The more you render, the more samples you collect, and the noise is reduced. In theory, given enough time and computing power, the image converges to the ground truth and noise is eliminated.
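The convergence behavior Áfra describes can be sketched with a toy Monte Carlo estimator (a standalone illustration, not OIDN code; the integrand and sample counts are arbitrary): the estimator’s error, the analogue of image noise, shrinks only as 1/√N, so halving the noise takes roughly four times as many samples.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Toy Monte Carlo estimator: integrate f(x) = x^2 over [0, 1].
// The exact answer is 1/3. The sample average converges to it as the
// sample count N grows, but the error (the "noise") shrinks only as
// 1/sqrt(N), which is why fully converging a render is so expensive.
double estimate(int numSamples, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        const double x = dist(rng);
        sum += x * x; // one stochastic sample, like one sample per pixel
    }
    return sum / numSamples;
}
```

Because the error falls off so slowly, the final increments of quality are the most expensive to compute; that last stretch is exactly the work a denoiser replaces.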

However, complete noise reduction is extremely costly, in time and computing power. “You would have to do this offline,” Áfra said, “because it takes so much time to create the fully converged image.”

Light Maps Accelerate Render Times

One shortcut to a photorealistic result is to map the light reflected from surfaces in the image. The light map isn’t the final image, but can be used to accelerate rendering.

“This is why light maps are used in games,” Áfra explains. “Creating converged, ray traced images in real time is not possible in most real-world cases on current hardware. So one option could be to precompute light maps, which are not dependent on any view.” The maps are a representation of geometry and light, and developers could then render the light map in real time because it is precomputed. From there, additional effects can enhance the image. Áfra points to solutions such as Unity and Unreal* game engines as examples of using precomputed light maps.

Another option is to not render a fully converged image, but to eliminate the noise with a denoising solution. However, such a solution may still introduce artifacts and some algorithms require substantial computing to produce just an approximation of the ideal image (known as the “ground truth”). “Depending on how high a quality denoising algorithm you use, that could be a significant amount of time,” Áfra warns. Additionally, most commercial denoising libraries are proprietary, hardware-limited, or specific to one rendering solution.

Áfra says implementing high-quality denoising algorithms is difficult and time consuming, and most developers don’t have the time to do it themselves. That’s why Áfra’s recent work is so important.

The Intel Open Image Denoise library runs on most CPUs produced during the past ten years. It is open source, so users can fork the code and tweak it as needed. Users also benefit from a large community of committed developers sharing insights, advances, and bug reports. More benefits will become available as the library matures.

“No matter what kind of ray tracing you do, whether offline or real-time, you need denoising for good performance,” says Áfra. “It is the technology that makes ray tracing much more practical, regardless of your use and time constraints.”

Integrated with Unity* 2019.2

At GDC 2019, Intel and Unity engineers will provide information about incorporating the Intel Open Image Denoise library in the Unity* 2019.2 game engine. Developers are scheduled to learn how the library’s artificial intelligence (AI) denoiser significantly improves fidelity over bilateral blur and greatly reduces convergence time for light map rendering. Other possible uses of Intel Open Image Denoise for general-purpose image denoising will also be discussed.
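To put the fidelity comparison in context, a bilateral blur can be sketched in a few lines (a minimal 1D illustration with made-up parameters, not Unity’s implementation): each sample is replaced by a weighted average of its neighbors, where the weight falls off both with spatial distance and with difference in value, so flat regions are smoothed while sharp edges survive.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1D bilateral filter. Weights combine a spatial Gaussian
// (nearby samples count more) with a range Gaussian (similar values
// count more), so noise in flat regions is averaged away while large
// jumps in value (edges) are left mostly intact.
std::vector<double> bilateral(const std::vector<double>& in, int radius,
                              double sigmaSpace, double sigmaValue)
{
    std::vector<double> out(in.size());
    const int n = static_cast<int>(in.size());
    for (int i = 0; i < n; ++i)
    {
        double sum = 0.0, weightSum = 0.0;
        for (int j = std::max(0, i - radius);
             j <= std::min(n - 1, i + radius); ++j)
        {
            const double ds = (i - j) / sigmaSpace;         // spatial distance
            const double dv = (in[i] - in[j]) / sigmaValue; // value difference
            const double w = std::exp(-0.5 * (ds * ds + dv * dv));
            sum += w * in[j];
            weightSum += w;
        }
        out[i] = sum / weightSum;
    }
    return out;
}
```

Filters like this need hand-tuned sigmas and still blur fine detail near edges, which is the kind of per-scene tuning a trained network avoids.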

Áfra explains that the library is implemented in Unity at the editor level. Using an efficient denoising library can decrease the time to produce converged light maps by multiple orders of magnitude, which can revolutionize how developers generate assets. When designers edit game levels, they need to inspect the lighting, and to do so repeatedly, before a final bake for the ultimate quality. Baking a single level can take hours or even days, depending on the settings, which disrupts an otherwise interactive workflow. The new denoise library can significantly reduce that time.

Figure 3. The Atrium Sponza Palace in Dubrovnik (remodeled by Frank Meinl at Crytek*, with inspiration from Marko Dabrovic's original), rendered at 16 spp, with grainy masonry and shadows.

Figure 4. Denoised image, with cleaner appearance. (More denoising examples are at Intel® Open Image Denoise)

Fully Trained AI

At the heart of the Intel Open Image Denoise library is a deep-learning-based denoising filter, trained to handle a wide range of settings, from 1 sample per pixel (spp) to almost fully converged. Thus, it is suitable for both preview and final-frame rendering.

Áfra’s team applied AI to the challenge of reducing noise, with good results. Earlier researchers had already applied AI denoising to ray tracing, and AI-based denoising is now the state of the art for achieving high quality with good performance.

The Intel team used pairs of images, one noisy and the other fully converged and noise-free, to train the AI. It learned how to denoise images based on what it gleaned from the developer-provided examples. The Intel Open Image Denoise library ships with a fully trained AI network that the team developed to work with path-traced images. Across a variety of uses, noise levels, scene setups, and light setups, the AI figures out the solution without requiring painstaking input of additional parameters.

The filter can denoise images using only the noisy color (“beauty”) buffer. Or, to preserve as much detail as possible, it can utilize auxiliary buffers, such as “albedo” and “normal.” Such buffers are supported by most renderers as arbitrary output variables (AOVs) or can usually be implemented with little effort.

Using the Intel Open Image Denoise library to reduce rendering time can free up developers to introduce more complex shading and higher quality graphics. Previously, a developer might hesitate to introduce more complexity to a scene or level, because the rendering could stretch for days. This new solution enables as much complexity and creativity as their vision allows.

The functionality supports Intel® 64 Architecture-based CPUs and compatible architectures, and automatically exploits instruction sets such as Intel SSE4, AVX2, and AVX-512. It runs on laptops, workstations, and compute nodes in high performance computing (HPC) systems. The flexible C/C++ API ensures that the library can be easily integrated into most rendering solutions.

Simplicity Is the Key

The main object in the API is the device, which is responsible for the actual denoising. Currently the only device type is the CPU, but as the library evolves it could also be a GPU or possibly a different kind of device altogether. Once the device object is created, developers can create buffers for the denoising. These buffers can contain attributes such as color and albedo.

The library is essentially a collection of filters, some closely related; developers can choose the one that best suits their needs. Filter objects perform the actual denoising. The initial version of the library features a generic filter for ray tracing called RT. Áfra indicates that subsequent releases will feature a filter named RTLightmap, which will provide even higher quality for light maps.

After creating the filter object, developers specify the buffers and denote the input images for the filter and the output image. A call to the filter’s execute function performs the actual denoising and produces the output.

In short: to use the API, create a device; from that device, create buffers as needed; create a filter object for which you specify the input and output images; then execute the filter.

Crack the Code

The Intel Open Image Denoise library provides a C99 API (also compatible with C++) and a C++11 wrapper API. The API has an object-oriented design, so it contains device objects (OIDNDevice type), buffer objects (OIDNBuffer type), and filter objects (OIDNFilter type).

All objects are reference-counted. Handles can be released by calling the appropriate release function (e.g. OIDNReleaseDevice) or retained by incrementing the reference count (e.g. OIDNRetainDevice).
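The retain/release pattern is classic reference counting; the following generic sketch shows the idea (illustrative names and a bare-bones handle, not OIDN’s internals):

```cpp
#include <cassert>
#include <cstddef>

// Generic reference-counted object, mimicking the handle pattern used by
// APIs such as Intel Open Image Denoise (names here are illustrative).
struct Handle
{
    std::size_t refCount = 1; // a freshly created handle holds one reference
};

// Retaining increments the count, signaling an additional owner.
Handle* retain(Handle* h)
{
    ++h->refCount;
    return h;
}

// Releasing decrements the count; the object is destroyed at zero.
void release(Handle*& h)
{
    if (--h->refCount == 0)
    {
        delete h;
        h = nullptr;
    }
}
```

In the C++ wrapper API, reference classes such as oidn::DeviceRef manage these counts automatically, so manual retain/release calls are mainly a C API concern.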

With a few exceptions, setting the parameters of objects does not have an immediate effect. Parameter changes take effect only after they are explicitly committed to the object. This means multiple small changes can be batched, and the user can specify exactly when the changes will take effect.

All API calls are thread-safe. However, operations that use the same device are serialized, so the number of API calls made from different threads should be minimized.
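Why thread-safe yet serialized? A generic sketch (illustrative, not OIDN’s implementation) is a device that guards its state with a lock, so concurrent calls are correct but take turns:

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative device whose operations are thread-safe because they are
// serialized by a per-device mutex: many threads may call in, but only
// one operation runs at a time.
struct ToyDevice
{
    std::mutex lock;
    long completedOps = 0;

    void execute()
    {
        std::lock_guard<std::mutex> guard(lock); // serializes all callers
        ++completedOps;
    }
};
```

Every call is accounted for, but threads spend part of their time waiting on the lock, which is why hammering one device from many threads wastes time and calls should be batched or minimized.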

The following is a simple C++11 example code snippet by Áfra, from GitHub*:

#include <OpenImageDenoise/oidn.hpp>
// Create an Open Image Denoise device
oidn::DeviceRef device = oidn::newDevice();

// Create a denoising filter
oidn::FilterRef filter = device.newFilter("RT"); // generic ray tracing filter
filter.setImage("color",  colorPtr,  oidn::Format::Float3, width, height);
filter.setImage("albedo", albedoPtr, oidn::Format::Float3, width, height); // optional
filter.setImage("normal", normalPtr, oidn::Format::Float3, width, height); // optional
filter.setImage("output", outputPtr, oidn::Format::Float3, width, height);
filter.set("hdr", true); // image is HDR

filter.commit();

// Filter the image
filter.execute();

// Check for errors
const char* errorMessage;
if (device.getError(errorMessage) != oidn::Error::None)
  std::cout << "Error: " << errorMessage << std::endl;

Time Versus Quality

Game coders face enormous time constraints, and getting a non-optimized system to render at 60 frames per second could take days of effort. “Nobody wants that,” Áfra notes. “Everyone wants faster rendering. Whether you want to run in real-time or offline, faster is always better.”

If you compare a noisy image to a denoised one, the difference could be slight or dramatic. “You could get very nice, clean images from horribly noisy ones in which you can barely see what's going on,” Áfra said. But the difference could also be minor, because the image may already be close to the highest possible quality. However, even a slightly noisy image can quickly benefit from the Intel Open Image Denoise library. The denoising will never perfectly reproduce the fully converged image, Áfra says, “but it could be an order of magnitude faster.”

The denoiser’s job is to get as close as possible to the “ground truth” – fast.

Library Near Production Phase

The code base on which Áfra is working is nearing production. The beta version at press time is 0.8.1, so it is still very new. The latest Intel Open Image Denoise library sources are available at its GitHub repository. The documentation contains step-by-step instructions, and should get developers up and running quickly.

Áfra believes there are good reasons for developers to try this state-of-the-art, AI-based denoising solution, built on an open-source code base that runs on almost any x86 CPU and produces excellent image quality. “Some denoising solutions need several minutes to denoise an image,” he explains. “Our solution could run in milliseconds, depending on the hardware.”

The aim is that developers can focus on building the most complex, immersive environments they can imagine, and iterate those designs quickly and painlessly. That’s a win for developers and users alike. Download the latest version and see what a difference it can make for your project.


Product and Performance Information


Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804