By Lucy Burton
This article goes further into the details of the various procedures associated with rendering in Autodesk* Softimage*. Rendering is as nuanced a task as lighting and incorporates many of the principles covered in previous articles. You don't want to spend hours creating detailed models and precise animation only to produce substandard imagery in your scene's final output, and there are many interrelated parameters and options to consider in this phase of scene creation. As with lighting, rendering is often one of the least-understood aspects of 3D imaging, but it's critical to achieving high-quality animations.
Ambient Occlusion Shading
Several rendering techniques can help you achieve a higher degree of realism, and one of the best is ambient occlusion. Ambient occlusion (see Figure 1) creates more defined and complex shading depth within a scene by estimating how much of the surrounding ambient light can reach each surface point, adding a softness that more accurately mimics reality. Rays are cast from the object's surface into the hemisphere above it, and areas hemmed in by dense geometry, where more of those rays are blocked, receive darker shading. This also helps to visually convey the relative proximity of objects via the contact shadows it produces.
Figure 1. The image on the left shows a prehistoric creature with standard Lambert shading. The image on the right has ambient occlusion shading parameters applied to it, providing much more nuanced shadow depth.
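The idea can be sketched outside of Softimage*. Below is a minimal, illustrative Monte Carlo ambient-occlusion estimator, not mental ray's actual implementation; the `is_occluded` callback is a hypothetical stand-in for whatever ray-cast query your scene provides.

```python
import math
import random

def sample_hemisphere(normal):
    """Pick a uniformly distributed unit direction in the hemisphere around `normal`."""
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        if 0 < sum(c * c for c in d) <= 1.0:
            break
    # Flip the sample into the hemisphere facing the normal.
    if sum(a * b for a, b in zip(d, normal)) < 0:
        d = [-c for c in d]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

def ambient_occlusion(point, normal, is_occluded, rays=64, max_dist=1.0):
    """Fraction of the hemisphere NOT blocked by nearby geometry (1.0 = fully open)."""
    hits = sum(1 for _ in range(rays)
               if is_occluded(point, sample_hemisphere(normal), max_dist))
    return 1.0 - hits / rays
```

The denser the geometry around a point, the more rays report a hit within `max_dist`, and the darker the resulting shading value.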
Achieving Real-world Light Reflections: HDRI and IBL
One thing that made computer-generated images (CGI) look different from so-called "real-world imagery" in the past was that the images themselves didn't have the same level of color depth that our eyes are used to seeing in nature. Though you are not consciously aware of how many secondary reflections your eye processes, when those reflections are missing in an image, it makes the image feel less real, even when the viewer cannot readily identify why. High dynamic range images (HDRIs) solve this problem.
A typical 8-bit image stores color component values in an integer range between 0 and 255, but HDRIs store data as floating-point values (for example, 25.3874) that allow them a much greater range of tonal values, from 0 to more than 100,000, thereby more closely mimicking the range of radiance values you're used to seeing in nature. You create these images by combining a number of photographs bracketed at varying exposures, ranging from very dark to very light, into a single image that uses a higher number of bits per color channel (16 bit or 32 bit) than traditional device-referred images. Many applications can help you create HDRIs, such as Autodesk* Stitcher* or Paul Debevec's HDR Shop, and now there are even camera mounts designed specifically to produce these images, like the GigaPan* EPIC* DSLR robotic camera mount system. But perhaps the most common tool is Adobe* Photoshop*, which (as of version CS3) natively includes HDRI creation within the application. Simply bring your bracketed images into Photoshop, and click File > Automate > Merge to HDR. The program automatically creates the merged image, with each exposure representing a new layer in the main image. Then, you can use Autodesk* Softimage* for environment mapping (see Figure 2).
Figure 2. The image of the bell and vajra on the left was created using final gather and global illumination alone. The image on the right has an HDRI mapped to the environment shader, providing more realistic reflections on the surface of the metallic objects.
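A merge of the kind Photoshop performs can be sketched in miniature. The snippet below is an illustrative, Debevec-style weighted average for a single pixel across bracketed 8-bit exposures, assuming a linear sensor response; real tools also recover the camera's response curve before merging.

```python
def hat_weight(z):
    """Down-weight 8-bit values near the sensor's limits (0 and 255),
    where the sample is under- or over-exposed and unreliable."""
    return z if z <= 127.5 else 255 - z  # triangle ("hat") function

def merge_exposures(pixels, exposure_times):
    """Estimate scene radiance for one pixel from bracketed samples.

    pixels: 8-bit values of the same pixel across exposures.
    exposure_times: shutter time of each exposure, in seconds.
    """
    num = den = 0.0
    for z, t in zip(pixels, exposure_times):
        w = hat_weight(z)
        num += w * (z / t)  # radiance estimate from this exposure
        den += w
    return num / den if den else 0.0
```

Dividing each value by its exposure time brings all brackets into a common radiance scale, which is why the merged result needs floating-point storage rather than an 8-bit integer range.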
In addition to the .hdr format, Autodesk* Softimage* supports the OpenEXR (.exr) format, with the added advantage that you can modify these files directly in the FX Tree, Softimage*'s embedded compositing application. Originally developed by Florian Kainz, Wojciech Jarosz, and Rod Bogart at Industrial Light & Magic, the EXR format is especially helpful during the compositing phase of production, because it stores arbitrary channels, such as specular, diffuse, and alpha, in one file. It also lets animators annotate each image with additional data (such as color timing, tracking data, and camera position) that is helpful further down the production pipeline.
Image-based lighting (IBL) uses these photographs themselves to light a scene by mapping them onto an environment surrounding it. To do so, you measure the illumination of your real-world scene via what is known as a light probe: typically, an omnidirectional photograph of a mirror-reflective sphere, whose data is used in combination with data obtained from another photograph of a diffuse white sphere. A background plate of that same scene is used as a reference plane behind other 3D objects, and then the light probe image or stitched HDRI is used to light the scene via the Environment Pass shader in Autodesk* Softimage*. When you make photo-based lighting calculations using final gather or global illumination, the probe itself lights the scene, casting reflections and shadows on all artificial 3D objects from the same direction as the natural light source that was present in the real world. As a result, your 3D characters and scene objects mesh better with the surrounding environment and seem more real.
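For the curious, the geometry behind reading a mirror-ball probe can be sketched in a few lines. Assuming an orthographic camera looking down the -Z axis (a simplifying assumption; real probe unwrapping also handles camera perspective), each normalized image coordinate on the ball maps to the world direction whose light the ball reflects toward the camera.

```python
import math

def probe_direction(u, v):
    """Map normalized mirror-ball image coordinates (u, v in [-1, 1],
    with u*u + v*v <= 1) to the environment direction reflected toward
    an orthographic camera looking down -Z."""
    nz = math.sqrt(max(0.0, 1.0 - u * u - v * v))  # sphere normal's z component
    # Reflect the view ray V = (0, 0, -1) about the normal N = (u, v, nz):
    # D = V - 2 (V . N) N, with V . N = -nz
    return (2 * u * nz, 2 * v * nz, 2 * nz * nz - 1)
```

The center of the ball maps back toward the camera and the rim maps directly behind the ball, which is why a single mirror-ball photograph captures nearly the full sphere of incoming light.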
Global Illumination: Fine-tuning Photon-based Rendering
Within Softimage*, you set individual lights to allow global illumination photon generation via the visibility parameters found in the Explorer. To tell the software to actually create those photons, however, double-click the light to open its dialog box, and select the Global Illumination option. If you want to add caustics as well, select that option both in the visibility parameters and in the shader dialog box.
The number of photons emitted determines the quality of the render, but higher counts also increase render times. Memory usage is proportional to the number of photons stored within the photon map.
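A quick back-of-envelope calculation helps when budgeting photon counts. The sketch below assumes a compact photon record of roughly 20 bytes, as in Jensen's classic photon-map representation; actual renderers vary, so treat the per-photon size as an adjustable assumption.

```python
def photon_map_megabytes(stored_photons, bytes_per_photon=20):
    """Rough memory estimate for a photon map.

    bytes_per_photon is an assumption (Jensen's compact photon record
    is about 20 bytes); real renderers store more or less per photon.
    """
    return stored_photons * bytes_per_photon / (1024 * 1024)
```

For example, two lights each storing 150,000 photons would occupy on the order of a few megabytes under this assumption, which is why memory only becomes a concern at much higher photon counts.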
In terms of optimization, consider that even in a photorealistic scene, it may not be necessary for all surfaces to generate global illumination values. In some cases, you could, for instance, turn off global illumination effects on a particular object and let ambient occlusion and/or final gather fill in the gaps (see Figure 3).
Figure 3. The image on the left was rendered with one non-photon-generating infinite light (used for creating sunlight shadows) and two spot lights generating 70,000 global illumination photons, with a global illumination accuracy setting of 400, a photon search radius of 1.1, a combined trace depth on each light of 20, and gauss filtering. This lower setting is useful for making sure that global illumination is functioning within the scene and that your photons are covering the space evenly prior to increasing them for final render quality. The image on the right shows the room with higher global illumination sampling applied. It has each spot generating 150,000 global illumination photons, with a global illumination accuracy setting of 550, a photon search radius of 6, a combined trace depth of 24, and Mitchell filtering. It also has ambient occlusion applied to the walls, curtains, and furniture, so the scene is noticeably smoother.
After the photons pass through the scene, a photon map is created: a three-dimensional record of where those photons are stored. Once a satisfactory version has been generated, I recommend saving the map and clearing the Rebuild Map check box on the Global Illumination tab so that the map isn't rebuilt with every frame.
Another important consideration is trace depth. As mentioned in the previous article, "3D Lighting in Softimage," where caustics were discussed, if you want glass to render clear, you must allow the photons generated within the scene to bounce through the surface of the glass, typically reflecting and refracting a minimum of five times. If you have a more complex transparent object or want additional reflections, you may need to increase those values; just remember that doing so also increases render time.
Combining Final Gather with Global Illumination
Global illumination uses photon energy to calculate direct and indirect lighting by simulating the real-world behavior of light itself: Photons travel in a straight line from a light source until they bounce off some other medium. In contrast, final gather calculates indirect illumination by measuring rays cast from all the illuminated points on the surface of a scene object itself. Final gather can create photorealistic lighting faster than global illumination, but when the two are used together, your scenes become even more realistic (see Figure 4).
You need to make a few adjustments to achieve crystal-clear results. In final gather, high-frequency noise appears as a tile-like pattern across your scene. You can reduce this noise by increasing the number of rays and reducing the Max Radius value using the sliders on the Final Gather tab of the Mental Ray Render Options dialog box. Low-frequency noise, in contrast, makes a scene appear blotchy. You can resolve this by decreasing the Max Radius value until that noise is eliminated, although anything below a value of 1 will increase your render times.
If artifacts like this persist, then increase the number of final gather rays generated by adjusting that slider in the same dialog box. In general, if you have a large scene with open spaces, such as an aircraft hangar or a stadium, it's wise to use a large Max Radius value; for scenes that show tighter spaces containing more detailed items, use a smaller Max Radius value.
To further smooth out the scene, click the Rendering tab in the Mental Ray Render Options dialog box, and change the Min/Max level sliders in the Aliasing section. You can also refine the look by clicking the Framebuffer tab and adjusting the Sample Filtering Type. Each filter type applies a different mathematical curve that the software uses to reduce jagged edges on images as they scale in size. The Box filter is the lowest quality; the default, Gauss, is a low-pass filter that is non-negative and non-oscillatory and therefore causes no ringing artifacts. I tend to prefer Mitchell, because it's a happy medium between Gaussian and Lanczos filtering, but your mileage may vary. Lanczos is excellent as well, but I've found that it does tend to make for longer render times and, depending on the scene, can cause clipping or ringing artifacts under some circumstances.
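To make those trade-offs concrete, here are illustrative one-dimensional versions of the Mitchell-Netravali and Lanczos kernels (standard formulations from the image-resampling literature, not Softimage's internal code). Note the negative lobes in both, which sharpen edges; in Lanczos they are larger, which is the source of the ringing and clipping just described.

```python
import math

def mitchell(x, b=1/3, c=1/3):
    """Mitchell-Netravali filter (the common B = C = 1/3 variant)."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3
                + (-18 + 12*b + 6*c) * x**2
                + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3
                + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x
                + (8*b + 24*c)) / 6
    return 0.0

def lanczos(x, a=2):
    """Lanczos windowed-sinc filter with support [-a, a]."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    # sinc(x) * sinc(x / a), written out explicitly
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

Plotting either function over [-2, 2] shows why Box (a flat kernel) blurs most, Gauss never dips below zero, and Lanczos oscillates: the dips below zero are exactly what can push reconstructed pixel values past their neighbors.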
Figure 4. The image above has all the global illumination parameters from the previous renders but now has both ambient occlusion and final gather applied, as well, to further smooth out the scene. These settings greatly improve the lighting detail around the curtains in particular, adding more subtle detail to the glossiness of the floor, as well.
When determining the level of accuracy, you should also be conscious of which mode you're operating in within the Mental Ray Render Options dialog box. If you have an animation, Multiframe is the proper selection, but if you're seeking to render an extremely high-quality, single-frame render for a print advertisement (for example), choose Legacy mode. You could also choose Exact mode, but be aware that this mode vastly increases render times, as it dispenses with any cached final gather data and calculates every sample from scratch rather than interpolating from information stored during previous test renders.
When you've done an initial render with final gather and global illumination, you can refine the lighting in one of two ways. First, in the Render Options dialog box, on the Final Gathering tab, select Only use FG points from File from the Map File Settings drop-down menu. Doing so freezes that setting so that you can adjust other parameters without re-rendering the entire final gather calculation. Then, either adjust the brightness of the scene using the Intensity slider on the GI & Caustics tab of the lights themselves, or change the setting at the pass level by clicking Edit > Edit Current Pass and then, on the Pass Shaders tab, selecting Overwrite Lens Shader from the Lens drop-down menu. Click Add, and then select the mia_Simple_Tone_Mapping shader (see Figure 5). Once that's applied, click Edit, and adjust the Gamma and Gain sliders in the dialog box that appears to achieve the look you desire.
Figure 5. Adjusting simple tone mapping
If you choose to alter the scene using the lights, first turn off final gather. Then, on the GI & Caustics tab, make sure the Rebuild Map check box is selected, and change the intensity values. Your scene will temporarily look a bit too bright, but be mindful that when you add final gather to the mix, global illumination will be smoothed out and darkened somewhat. So next, clear the Rebuild Map check box, return to the Final Gather tab, select the Enable check box, and then select Overwrite existing file from the Map File Settings drop-down menu. The two settings should blend nicely.
Additionally, when creating final gather animations, you must take steps to prevent flickering, which can occur during rendering if you don't adjust your settings properly. First, change the Map File setting to Append new FG points to file, and give the map a unique name. This way, the computer isn't recalculating all the final gather points from scratch on each frame. Next, on the Optimization tab, select Final Gathering Only to create a final gather pass that you can work with later.
When that's complete, open a Directory browser window and look for the FG_Animation folder inside the Render_Pictures folder of your scene: You'll find the final gather map file you just generated. Next, switch your map file settings back to Only use FG points from file; click the Optimization tab, and select Full Render. The software reads all the indirect illumination information you generated previously, minus the flickering, and you've created a rock-solid animation with all the indirect illumination extras.
In November 2009, Mental Images* released iray*, a rendering technology specifically designed for creating "push-button" renders of photorealistic images with correct global illumination calculations. It is more intuitive and easier to set up (if you want to simulate real-world lighting and physically correct materials), so it's especially good for architectural visualization, where mia_material shaders are most often used. This is not, however, a renderer you would use for non-photoreal scenes, nor does version 1 support motion blur, though I suspect that support will come. Still, this is an advance for photoreal rendering, as it more closely simulates the real-world behavior of light and its interaction with materials, whereas traditional ray-tracing methods could only approximate this through a variety of algorithms.
There is, of course, much more I could discuss about lighting and rendering, but I hope that this series of articles has given you a foundation of understanding upon which to build new ideas and create fantastic new imagery. In truth, the technological limits fall farther by the wayside with each passing day, and the only limits that remain are those of human imagination.
About the Author
Lucy Burton was raised in Europe and returned to the United States for college, graduating with honors from Seattle University with a degree in drama/political science and obtaining Film Certification at New York University. She interned in technical direction at Intiman Theatre. Having worked professionally both in theater and in production on several films, she then moved into the postproduction/visual effects realm, first working on Softimage 3D Extreme at Mesmer FX. Lucy has been working with the XSI platform since its inception nearly a decade ago. After founding her own digital design studio in 2001, she went on to produce documentary videos on the humanitarian crisis in Indonesia following the tsunami and helped a nongovernmental organization that assists victims in Darfur, Afghanistan, and Uganda, among other crisis areas. She is now freelancing in and around Hollywood, California.