An Embree-Based Viewport Plugin for Autodesk Maya* 2014 with Support for the Intel® Xeon Phi™ Coprocessor

Purpose

This code recipe describes how to obtain, build, and use the Embree-based Viewport Plugin for Autodesk Maya* 2014 on either Microsoft Windows* or Linux*. This plugin (actually a suite of plugins) runs under Autodesk Maya 2014 on the Intel® Xeon® processor (referred to as ‘host’ in this document), and can make use of either the host or the Intel® Xeon Phi™ coprocessor (referred to as ‘coprocessor’ in this document) attached to the host for rendering.

Introduction

Embree is a collection of high-performance ray tracing kernels, developed at Intel, that are optimized for photo-realistic rendering on the latest Intel® processors. As of publication date (early 2015), Embree contains support for vector instructions on all released generations of Intel® Xeon® Processors (up to the 8-wide Intel® Advanced Vector Extensions 2 (Intel® AVX2)), and the 16-wide vector instructions of the Intel® Xeon Phi™ coprocessor. The Embree high-performance ray-tracing kernels ship with a limited-functionality path tracer (referred to as the “Embree sample renderer”) that builds on these kernels and demonstrates how they might be used in a production renderer.

Autodesk Maya* is a highly extensible digital content creation tool for “3D animation, modeling, simulation, rendering, and compositing.” Among other things, the programmer can change the technique used to display content being edited in Maya’s user interface by creating their own “viewport” plugin. Common viewport plugins display content using the OpenGL* or Microsoft DirectX* real-time rendering APIs.

This project is a proof-of-concept that explores the feasibility of editing content in a Maya viewport plugin while rendering the scene being edited using the Embree high-performance ray-tracing kernels and a modified version of the Embree sample path-tracer. The source generates a suite of plugins that do this rendering for two different Embree-based renderers (the C++-based “single-ray” renderer and a renderer implemented using the Intel® SPMD Program Compiler (ISPC)), on the host platform and on any attached Intel® Xeon Phi™ coprocessors. The source can be built on both Microsoft Windows* and Linux*. The resulting plugins provide real-time previsualization of basic material, global illumination, and camera viewport changes that closely mirror the result produced by final renders in mental ray*. Simple editing in the Embree-based viewport renderer is done no differently than in the default or high-quality viewport renderers that ship with Autodesk Maya. The only appreciable difference is that the result is ray-traced.

Key features of this plugin include:

  • No special Maya* materials or controls required to get the ray-traced scene to show up in a viewport
    • Other than putting enough light in the scene
  • Progressive image refinement
  • Concurrent multi-camera ray-traced viewports
  • Supports multiple Embree-based renderers on Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors, as well as a hybrid mode that uses them all at once.
  • Interactive feedback from light, material, and camera changes

Code Access

The Embree-Based Viewport Plugin for Autodesk Maya is available from the mayarender_v2.3.2 branch of https://github.com/embree/embree-renderer under the Apache 2.0 license agreement (http://www.apache.org/licenses/LICENSE-2.0). We do not anticipate that the code will undergo much additional development or maintenance.

In addition to the source for the plugin, to build it you will need to download and install the components described in the Build Directions below, including:

  • The Embree 2.3.3 source (ray-tracing kernels and sample renderer)
  • The ISPC 1.7.0 binaries
  • ImageMagick
  • libjpeg
  • The Intel® C++ Compiler and CMake

Plugin Implementation

This plugin is implemented using the “old” pre-Viewport 2.0 Maya API, which means it operates within Maya as follows:

  • Maya passes control to the viewport plugin when it wishes to refresh the viewport
  • The plugin:
    • Determines whether any Maya meshes, lights, or materials have changed since the last render
      • If they have, it may re-import some or all of the scene into the Embree sample renderer
    • Determines whether the Maya camera changed
      • If not, the new render accumulates on top of the old frame buffer
    • Renders the scene using the Embree sample renderer
    • Copies the renderer’s frame buffer to the Maya viewport’s frame buffer
  • Control returns to Maya until the next viewport refresh

Since the author has been unable to find a Maya API to identify what changes in the scene have caused Maya to wish to refresh the viewport, a brute-force approach is used instead: we check the current values of the major properties associated with every mesh, light, material, and viewport in the scene against saved values. Changes in meshes (bounding box, transform, number of points, etc.) require a full re-import of all geometry in the scene and a full BVH rebuild (there might be a way to avoid this full rebuild in the ISPC renderer that hasn’t been implemented). Material changes require changing the material associated with the mesh without the mesh’s knowledge. Light changes require recreating some or all of the lights in the scene (but not a BVH rebuild). And finally, any camera changes require flushing the frame-buffer (otherwise, rendered results accumulate from one refresh to the next).
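The polling scheme can be sketched as follows; the snapshot fields and the sceneChanged() helper are illustrative inventions, not the plugin’s actual types:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Hypothetical snapshot of the properties polled for one scene node.
struct NodeSnapshot {
    std::string transform;    // serialized transform matrix
    std::size_t pointCount;   // number of mesh points
    std::string boundingBox;  // serialized bounding box

    bool operator==(const NodeSnapshot& o) const {
        return transform == o.transform && pointCount == o.pointCount &&
               boundingBox == o.boundingBox;
    }
};

// Snapshots saved at the previous refresh, keyed by DAG path.
using SnapshotTable = std::unordered_map<std::string, NodeSnapshot>;

// Returns true if any node changed, appeared, or disappeared -- which in
// the real plugin triggers a re-import and a full BVH rebuild.
bool sceneChanged(const SnapshotTable& saved, const SnapshotTable& current) {
    if (saved.size() != current.size()) return true;
    for (const auto& [path, snap] : current) {
        auto it = saved.find(path);
        if (it == saved.end() || !(it->second == snap)) return true;
    }
    return false;
}
```

The cost of this comparison grows with scene size, which is why reducing this work appears in the future-work list below.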

A number of issues had to be surmounted to import scene data from Maya into the Embree sample renderer:

  • All colors from Maya and textures needed R and B swapped
  • Textures needed to be flipped on one or two axes depending on format
  • Camera eye-point and look-at vectors could be imported without change, but the view-up elements needed negation
  • Transforms required negating the off-diagonal elements of the Maya transform matrix, and then extracting the translation from the last column of the Maya transform matrix.
  • The Maya scene graph needed to be collapsed into the flat scene graph expected by the Embree sample renderer by querying inclusiveMatrix() for each MDagPath object
  • All transforms and coordinates are single-precision
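A sketch of two of these conversions – the R/B channel swap and the transform fix-up – implemented literally as described above (the types and function names are invented for illustration):

```cpp
#include <array>

struct Color { float r, g, b; };

// Colors arrive from Maya with R and B swapped relative to what the
// Embree sample renderer expects, so swap them on import.
Color swapRB(Color c) { return Color{c.b, c.g, c.r}; }

using Mat4 = std::array<std::array<float, 4>, 4>;

// Literal reading of the transform conversion: negate the off-diagonal
// elements of the upper 3x3, and take the translation from the last
// column of the Maya matrix.
Mat4 convertTransform(const Mat4& maya) {
    Mat4 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = (i == j) ? maya[i][j] : -maya[i][j];
    for (int i = 0; i < 3; ++i)
        out[i][3] = maya[i][3];  // translation from the last column
    out[3][3] = 1.0f;
    return out;
}
```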

Some additional considerations also needed to be addressed. Because this viewport will render via ray-tracing, off-camera lights and objects should not be pruned from the scenegraph like they would using an OpenGL* or DirectX* renderer, since they affect the lighting of visible objects. In addition, Maya has far more functionality than is implemented in the Embree sample renderer. For example, this viewport plugin only supports objects made of triangular polygons, and only a subset of common material properties. The plugin supports directional, point, spot, and single-triangle area lights, but not ambient or volume lights. Finally, the Embree sample renderer implements only a subset of the features provided by the Embree kernels (for example, the renderer in this plugin does not support hair, instancing, and subdivision surfaces).

To allow artists to create content in Maya without changes to their standard workflow, a number of features were added to the Embree sample renderer for these plugins:

  • Non-physical lights with no or linear falloff
  • A new “general” material (called “Uber”) and BRDFs for sub-surface scattering
  • Improved BRDFs for refraction
  • Filters to smooth/disable caustics – a “bug zapper” for “fireflies”
  • New methods that allow lights or materials to be changed without regenerating the entire scene and causing a BVH rebuild.
  • Changes to how textures are imported to correct the displayed result
  • A new hybrid renderer was created that uses both the host and coprocessor(s) at once to share the cost of generating a frame. This can improve the overall frame rate at the expense of host memory and CPU cycles.
  • We also fire 4 rays per pixel rather than one per viewport update since Maya doesn’t call the plugin frequently enough to quickly converge to a high-quality result by firing only one ray per pixel with each viewport update.
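The accumulation behind the last point can be sketched as a running average per pixel (a generic progressive-rendering pattern, not the plugin’s actual frame-buffer code): each update folds in four new samples, and any camera change resets the buffer.

```cpp
#include <vector>

// One pixel of a progressive accumulation buffer: the displayed value is
// the mean of all samples gathered since the camera last moved.
struct AccumPixel {
    float sum = 0.0f;  // sum of all sample values so far
    int count = 0;     // number of samples accumulated

    void add(float sample) { sum += sample; ++count; }
    float value() const { return count ? sum / count : 0.0f; }
    void reset() { sum = 0.0f; count = 0; }  // called on any camera change
};

// One viewport update: fire several samples per pixel (4 in this plugin)
// and fold each result into the running average.
void updatePixel(AccumPixel& px, const std::vector<float>& samples) {
    for (float s : samples) px.add(s);
}
```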

The differences between the Windows and Linux implementations lie primarily in the build process, plus the functions used to determine the paths of shared libraries and the number of available threads. As far as the build process is concerned, the differences are:

  • Linux
    • All versions of the sample renderer for Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors are built by the make script generated by cmake for the Embree sample renderer
    • The Maya plugins are built using a manually-invoked script
  • Windows
    • The Maya plugins are built as part of the sample renderer’s solution file
    • The versions of the sample renderer that run on the Intel® Xeon Phi™ coprocessor are built by a manually-invoked script
      • A special “cross-compiler” version of the ISPC compiler, not generally available, is required to build these renderers. How/if this cross-compiler is released is under discussion as of early 2015.

Other Implementation Information

Embree has its own private thread pool and threading runtime that is used by the single-ray and ISPC renderers on both the host and coprocessor. This runtime is set up by the rtcInit() call (among other things) at the creation of each rendering device.

The single-ray device goes a step further by calling TaskScheduler::create(), which starts its own runtime depending on the architecture selected. Currently this causes the single-ray renderer to run very slowly on the coprocessor in the Linux implementation.

In all cases, of course, the number of threads used can be controlled at initialization time.

There are two other aspects that control the performance of the rendering, especially on the coprocessor. First, the BVH builder used can affect the overall rendering performance. For example, BVHs built with the 64-bit Morton builder are rendered more slowly than those built with the 32-bit Morton builder. We chose to use the 32-bit builder for our experiments since it handled most models we threw at it with higher performance (except for the San Miguel scene, which requires the 64-bit Morton builder to render at all). You can switch to the 64-bit Morton builder if you want by modifying EmbreeViewportRendererXeonPhiISPC::createRenderDevice().
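For background, a Morton (Z-order) builder quantizes each primitive’s centroid and interleaves the coordinate bits; a 32-bit code keeps roughly 10 bits per axis, while a 64-bit code keeps about 21, which is why very large scenes such as San Miguel need the latter. A sketch of the standard 32-bit interleaving trick (the generic technique, not Embree’s actual builder code):

```cpp
#include <cstdint>

// Spread the low 10 bits of x so there are two zero bits between each
// original bit (standard bit-interleaving trick for 3D Morton codes).
uint32_t spreadBits10(uint32_t x) {
    x &= 0x3FF;
    x = (x | (x << 16)) & 0xFF0000FF;
    x = (x | (x << 8))  & 0x0F00F00F;
    x = (x | (x << 4))  & 0xC30C30C3;
    x = (x | (x << 2))  & 0x49249249;
    return x;
}

// 30-bit Morton code from three 10-bit quantized coordinates.
uint32_t morton3D(uint32_t x, uint32_t y, uint32_t z) {
    return spreadBits10(x) | (spreadBits10(y) << 1) | (spreadBits10(z) << 2);
}
```

With only 10 bits per axis, distinct centroids in a huge scene can quantize to identical codes, which is one reason a 32-bit builder can fail where a 64-bit builder succeeds.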

Secondly, the choice of whether meshes and scenes are static (RTC_GEOMETRY_STATIC and RTC_SCENE_STATIC respectively) or dynamic (RTC_GEOMETRY_DYNAMIC and RTC_SCENE_DYNAMIC respectively) affects rendering speed. As you might guess, the BVH for static scenes can be queried faster than the one for dynamic scenes. This plugin is using dynamic geometry and scenes, since that potentially allows the replacement of changed geometry without a full BVH rebuild (not implemented).

As is the case in the unmodified Embree sample renderer, the modified Embree sample renderer used by this plugin is run on the coprocessor using the Intel® Coprocessor Offload Infrastructure (Intel® COI) APIs via device_coi, rather than using the compiler-based offload directives. The ISPC renderer, in particular, is also highly vectorized for maximum performance on both the host and coprocessor, at least in the case of coherent bundles of rays (such as primary rays). For less coherent rays, the “hybrid” intersector “downshifts” to intersecting a single triangle at a time.

Using the “general” material (“Uber”)

As noted above, the material used by the Embree sample renderer (“Uber”) to display content in Maya only uses a subset of the Maya material attributes, and sometimes uses them in creative ways.

The Embree “Uber” material can use texture maps for bump maps, diffuse reflection, and specular reflection (the latter may not be working properly yet), which are extracted from the Maya “Bump Mapping,” “Color,” and “Specular Color” attributes (if set). Be warned that, at present, bump mapping is only barely functional/buggy.

If not represented by texture maps, the “Color” and “Specular Color” Maya material attributes are used directly in the Embree plugin, as is the “Transparency” attribute. Note that the first two must default to white to make texture maps visible, while the latter defaults to black. The refraction index defaults to 1.5 (roughly that of glass), and is taken from the “Refractive Index” in the “Raytrace Options” panel.

Roughness is calculated as follows:

  • Maya’s Lambert materials are assumed to have a roughness of 1.0 (corresponding to full diffuse – 0.0 represents full specular)
  • In Maya’s Phong material, roughness = (100 - “Cosine Power”)/100
  • In Maya’s PhongE material, we directly take the “Roughness” value
  • In Maya’s Blinn material, we get it directly from the “Eccentricity” attribute
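Reading the Phong mapping as (100 − “Cosine Power”)/100, the rules above reduce to a small conversion function (a sketch; the shader-type enum and parameter names are invented):

```cpp
#include <algorithm>

enum class MayaShader { Lambert, Phong, PhongE, Blinn };

// Map each Maya material type to the single roughness value used by the
// Embree "Uber" material (0.0 = fully specular, 1.0 = fully diffuse).
// Parameters not relevant to the given shader type are ignored.
float uberRoughness(MayaShader s, float cosinePower,
                    float phongERoughness, float eccentricity) {
    switch (s) {
        case MayaShader::Lambert: return 1.0f;
        case MayaShader::Phong:
            return std::clamp((100.0f - cosinePower) / 100.0f, 0.0f, 1.0f);
        case MayaShader::PhongE:  return phongERoughness;
        case MayaShader::Blinn:   return eccentricity;
    }
    return 1.0f;
}
```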

Translucent surfaces with subsurface scattering are triggered by setting the “Common Material Attribute” called “Translucence” to a non-zero value (this parameter acts as a switch – the value isn’t used in any other way). The Maya “Translucence Depth” is used to set the depth of the translucence (a depth greater than 999999.0 will also disable translucence), while the “Translucence Focus” (which is limited to between 0.0 and 1.0) is multiplied by 100 and used as the diffusion exponent. The diffusion coefficients are currently hard-coded in EmbreeViewportRenderer::convertSurfaceMaterial – they are set to reproduce a “waxy”-looking surface. Commented out are diffusion coefficients for “carnival glass.” Allowing user control of these parameters will require either the creative use of some existing Maya material or the creation of a custom material (something we wanted to avoid at this point so that as many Maya scenes as possible could be used with the Embree viewport plugin without modification).

To model metals, Maya’s “Incandescence” attribute is “subverted”: the 0.0 to 1.0 value of each R, G, and B color component is multiplied by 5.0 to generate the three absorption coefficients (which default to 0). To model metals successfully, we would also need to collect three-value refraction indices. Unfortunately, this has not been done – only a single-value refraction index is collected, making convincing metals unlikely. Collecting three-value refraction indices will likely require subverting another Maya attribute (maybe the “Ambient Color”?), in which case a non-black ambient color would be required to switch from the single-value to the three-value refraction coefficients needed for metals.

The easiest way to understand the operation of the Embree “general” material is to look at devices/device_singleray/materials/uber.h or devices/device_ispc/materials/uber.ispc. In outline, the material works as follows:

  • If the transparency is 0, the material is treated as a “conductor,” else it is treated like a “dielectric”
  • For a “conductor”
    • If the roughness is zero we use the BRDF of a specular conductor
    • With a non-zero roughness we “stack” BRDFs: the diffuse color is used with a Lambertian BRDF, and the specular color is used with a Microfacet metal BRDF
  • For a “dielectric”
    • If the roughness is zero, we use a Dielectric reflection BRDF on entering and exiting the material (simulating some internal reflection)
    • For a non-zero roughness, we add dielectric transmission, Lambertian, and specular BRDFs on entering the material, and a dielectric transmission BRDF on exiting
    • Materials with a translucence depth less than 999999.0 will also have BRDFs for translucency added on material entry and exit
  • Note that at this time the transition from one “state” to another of the Embree “uber” material is not smooth – effects turn on and off abruptly, and no attempt has been made to create smooth transitions.
  • There may also still be some bugs in the material implementation or the underlying BRDFs.
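The branching above can be summarized as a small selection function (a sketch of the selection logic only – the stack names are illustrative labels, not Embree class names):

```cpp
#include <string>

// Which BRDF stack the "Uber" material selects, per the outline above.
// This models the selection logic only, not the BRDF math itself.
std::string uberBrdfStack(float transparency, float roughness,
                          float translucenceDepth) {
    if (transparency == 0.0f) {  // "conductor"
        return roughness == 0.0f ? "specular-conductor"
                                 : "lambertian+microfacet-metal";
    }
    // "dielectric"
    std::string stack = roughness == 0.0f
        ? "dielectric-reflection"
        : "dielectric-transmission+lambertian+specular";
    if (translucenceDepth < 999999.0f) stack += "+translucency";
    return stack;
}
```

The hard branches on exact zero values are also why the material’s state transitions are abrupt rather than smooth.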

Developer Challenges

Among the many challenges faced during the creation of this plugin, three stand out. First, the Embree ray-tracing kernels and the Embree sample renderer signal errors via exceptions (for performance reasons). This can make it quite challenging to trace an issue, especially if it happens on the coprocessor. Sometimes these exceptions cause Maya to crash; sometimes they simply show up as a failure of the plugin to load properly. Gdb was usually able to catch these exceptions on either the host or the coprocessor, but sometimes it was easier to narrow down the issue with diagnostic output. Once an exception was located, it was also helpful to add diagnostic text before the throw to make the issue easier to diagnose in the future.

Next, the Embree kernels and the Embree sample renderer share some globals (such as g_device), and occasionally use some other globals that are named the same in both software packages. You need to watch out for this when more than one frame buffer is being rendered at once using the Embree sample renderer, and store the necessary information for each viewport. Since Maya (fortunately) requests that only one viewport update its display at a time, only one render device was active at a time, avoiding machine oversubscription and other possible issues resulting from these globals. This may change in the future. Likewise, these globals can cause problems when you load more than one host plugin at a time – crashes can result when plugins are unloaded or Maya is shut down.

Finally, we discovered that the ISPC compiler for Microsoft Windows* doesn’t generate intermediate code that is fully Linux-compatible. In particular, it makes use of the _aligned_malloc() and _aligned_free() memory management functions, where Linux* needs to use the posix_memalign() and free() memory management functions. This is fine if the intermediate code is being built to run under a Microsoft Windows* environment, but here it is problematic because the code is being generated for Intel® Xeon Phi™ coprocessors, which internally run the Linux operating system (even when used on Microsoft Windows*). Getting around this issue required either constant search-and-replace, or (the option we took) the creation of an ISPC cross-compiler for Microsoft Windows*, which generates Linux-compatible (and Windows-incompatible) code on Windows. This version of ISPC is not publicly available, and whether it is considered for release will depend on customer demand.
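A common way to paper over exactly this incompatibility in hand-written C++ (not what ISPC’s generated code does, but the same idea) is a pair of portable wrappers:

```cpp
#include <cstddef>
#include <stdlib.h>
#ifdef _WIN32
#include <malloc.h>
#endif

// Portable wrappers around the platform-specific aligned-allocation calls
// mentioned above: _aligned_malloc/_aligned_free on Windows,
// posix_memalign/free elsewhere.
void* alignedAlloc(std::size_t size, std::size_t alignment) {
#ifdef _WIN32
    return _aligned_malloc(size, alignment);
#else
    void* ptr = nullptr;
    if (posix_memalign(&ptr, alignment, size) != 0) return nullptr;
    return ptr;
#endif
}

void alignedFree(void* ptr) {
#ifdef _WIN32
    _aligned_free(ptr);
#else
    free(ptr);
#endif
}
```

Note that posix_memalign requires the alignment to be a power of two and a multiple of sizeof(void*).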

Considerations when Authoring Ray-traced scenes

While working on this project, we discovered a number of things that should be obvious for anyone used to authoring content for final rendering in a ray tracer, but that may not be obvious to someone doing initial content creation in Maya (or in another digital content creation tool) using a ray-traced interface for the first time.

  • You need lights present in your scene to see anything. At least in Maya, the “defaultLightSet” doesn’t actually cast any light (it doesn’t contain any scenegraph lights). This is fine when you are authoring a scene with the default, high-quality, OpenGL, or DirectX viewport renderers, but doesn’t work so well when the viewport is being rendered by a ray tracer. Actual Maya lights need to be added to the scene for anything to show up in the viewport.
  • Off-camera lights and objects matter. Again, since the default, high-quality, OpenGL, or DirectX viewport renderers aren’t actually rendering the scene with light, it is perfectly fine if they render only the objects visible to the viewport camera (in fact, this is an important optimization). But this is not the case in a ray-traced viewport, since off-camera lights cast illumination that is seen by on-camera objects, and off-camera objects cast shadows that affect the illumination of on-camera objects.
  • Light does not go through walls. Another “obvious” thing that matters in a ray-traced viewport as well as final rendering. Exterior lights can only enter rooms by proper openings, so rooms will be dark if not internally lit. Likewise, ambient light is implemented in the Embree sample renderer as a uniform environment map/light source at infinite distance – “ambient lighting” does not pervade all space, and can be hidden by an external wall. It does not, as is common in many digital content creation tools, cause an intrinsic glow on any object with a non-black ambient color (at least as implemented in the Embree sample renderer). Likewise, directional lights in the Embree sample renderer come from infinity as in Maya, and so can be blocked. Once again, objects will need to be lighted properly to be seen.
  • You still need to cheat with light. In practice, it is very hard to author scenes that consist solely of physically accurate inverse-square light sources. Use of non-physical lights with linear falloff, or no falloff, can greatly speed the creation of a plausibly lit scene.
  • Reflection/refraction influences the overall look of lighting. In ray tracers, as in the real world, light bouncing off an illuminated wall spills onto other objects, adding the color of the wall to the color of the light received by these other objects. Likewise, partially transparent objects can act as gels, modifying the color of the incoming light. It should be noted that while this version of the Embree sample renderer properly handles refraction of rays from the camera (including tinting due to an intermediate material), it does not handle refraction of light sources properly. Thus, even transparent objects cast black shadows at this time.

Ideas for future work

As noted, these plugins are a proof-of-concept. A partial list of possible enhancements includes:

  • Creation of plugins for other digital content-authoring tools, such as Autodesk 3ds Max*.
  • Reduce the work necessary to sense object, light, and material changes in larger scenes
  • Progressive rendering without the need for user interaction (currently the user needs to manually trigger viewport updates to get progressively-improved render quality)
  • Retrofit the single-ray renderer to allow single-object changes without re-building the full BVH, then code the plugin to use this functionality (already present in ISPC)
  • Optimize data transfer to the Intel® Xeon Phi™ coprocessor
  • Correct the shadows cast by translucent objects (currently black)
  • Fix the issues that prevent the use of the high-speed threading Embree subsystem for the single-ray renderer on the coprocessor.
  • Make changes in Maya update not just the last-selected ray-traced viewport, but all ray-traced viewports at once.
  • Support the case where more than one material is present on an object
  • Support more complex materials and different Maya material types
  • Support more types of Embree and Maya geometry (hair, subdivision surfaces, instances)
  • Provide the user with feedback on the memory used by a scene
  • Remove all possible use of global variables

Build Directions

Note: These build instructions have only been tested on Red Hat Enterprise Linux 6.5 and Windows Server 2008 R2 Enterprise.

To build on Linux, do the following:

  • Prerequisites for both Embree and the Embree sample renderer:
    • Make sure the ISPC_DIR environment variable is defined to point to the top of the ISPC 1.7.0 directory and that the "ispc" binary is at that location
      • You may want to put ISPC_DIR in your PATH
      • Also, make sure the ISPC “example” directory is at this location and that the examples/intrinsics/ directory is populated with all the header files found in the downloaded distribution.
    • If using the Intel® C++ Compiler, “source” the appropriate compilervars.sh/csh script
      • Example: source /opt/intel/composer_xe_<version>/bin/compilervars.csh intel64
  • Build Embree
    • Download the Embree 2.3.3 source and the ISPC binaries
    • See README.txt in the top-level directory of the Embree source for general build instructions
    • Create a “build” subdirectory and go to that directory
    • Set your build options using ccmake ..
      • Set the “XEON_PHI_ISA” option to “ON”
      • Set the COMPILER option to use “ICC”
      • Change the CMAKE_BUILD_TYPE to “RelWithDebInfo”
      • If you need to set the COI paths, they are:
        • COI_DEV_LIBRARY_DIR is typically /opt/mpss/<version>/sysroots/k1om-mpss-linux/usr/lib64
        • COI_HOST_LIBRARY_DIR is typically /opt/intel/mic/coi/host-linux-release/lib
        • COI_INCLUDE_PATH is typically /usr/include/intel-coi
      • RTCORE_INTERSECTION_FILTER can improve performance on the Intel® Xeon Phi™ coprocessor if set to “OFF”
      • RTCORE_SPINLOCKS may make the most sense set to OFF
      • And conclude by generating the makefile
    • Now run “make”
    • For simplicity, copy the resulting libraries to ~/maya/plug-ins/
      • cp libembree_xeonphi.so libembree.so libembree.so.2 libembree_xeonphi.so.2 ~/maya/plug-ins/
  • Build the Embree sample renderer that includes the Embree-Based Viewport Plugin for Autodesk Maya
    • If it is not installed on your system, download and build ImageMagick
    • Once again, see the README.txt in the top-level directory of the Embree source for general build instructions
    • Create a “build” subdirectory and go to that directory
    • Set your build options using ccmake ..
      • Set all of the BUILD_* options to “ON” except for BUILD_NETWORK_DEVICE
      • Set all TARGET_* options to “ON”
      • Set USE_IMAGE_MAGICK and USE_LIBJPEG to “ON”
      • Set the COMPILER option to use “ICC”
      • Change the CMAKE_BUILD_TYPE to “RelWithDebInfo”
      • If <embree_path> is the location of the Embree build tree above, set the following:
        • EMBREE_INCLUDE_PATH to <embree_path>/include
        • EMBREE_LIBRARY to <embree_path>/build/libembree.so.<version>
        • EMBREE_LIBRARY_MIX to <embree_path>/build/libembree_xeonphi.so.<version>
      • If you need to set the COI paths, they are:
        • COI_DEV_LIBRARY_DIR is typically /opt/mpss/<version>/sysroots/k1om-mpss-linux/usr/lib64
        • COI_HOST_LIBRARY_DIR is typically /opt/intel/mic/coi/host-linux-release/lib
        • COI_INCLUDE_PATH is typically /usr/include/intel-coi
      • For ImageMagick, the paths are set to something like:
        • ImageMagick_EXECUTABLE_DIR to /usr/local/bin
        • ImageMagick_Magick++_INCLUDE_DIR to /usr/local/include/ImageMagick-6
        • ImageMagick_Magick++_LIBRARY to /usr/local/lib/libMagick++-6.Q16.so;/usr/local/lib/libMagickCore-6.Q16.so;/usr/local/lib/libMagickWand-6.Q16.so
      • And conclude by generating the makefile
    • Now run “make”
      • Ignore the ImageMagick warnings
    • Go to ../EmbreeViewportRenderer
    • Edit ./build_all.sh to have paths appropriate for your system
    • Execute ./make_clean.sh
    • Now execute ./build_all.sh to create the Maya plugins that interact with Embree
    • Return to ../build
    • For simplicity, copy the resulting libraries to ~/maya/plug-ins/
      • cp device_singleray_knc device_ispc_knc libdevice_coi.so libdevice_ispc.so libdevice_singleray.so EmbreeViewportRendererSX.so EmbreeViewportRendererIX.so EmbreeViewportRendererSXP.so EmbreeViewportRendererIXP.so EmbreeViewportRendererIH.so ~/maya/plug-ins/

To build on Windows, do the following:

  • Refer to the Windows build instructions in the README.txt file in the top level directories of the Embree kernels and the Embree sample renderer that includes the Embree-Based Viewport Plugin for Autodesk Maya.
  • Download the ImageMagick 6.8.9 source
    • Install to C:\ImageMagick-6.8.9
    • Select the x64 version with multi-threaded DLL
    • Build an x64 Release version
    • Make sure PATH includes C:\ImageMagick-6.8.9\VisualMagick\bin
  • Download libjpeg for Windows
  • Make sure PATH includes location of the Windows ISPC install
  • For both Embree and the Embree sample renderer that includes the Embree-Based Viewport Plugin for Autodesk Maya:
    • Select the Microsoft Visual Studio* 2010 solution and convert the projects to use the Intel Compiler.
    • Set the appropriate destination directory for all binaries built by the project (in particular, the EmbreeViewportRender* projects will want to have your \Documents\maya\plug-ins set in “Properties/General/Output Directory”).
  • Build the Embree solution first, selecting the “Release, x64” build target
  • Point the EMBREE_INSTALL_DIR environment variable to the main folder of the Embree directory tree.
  • Build the entire solution for the Embree sample renderer that includes the Embree-Based Viewport Plugin for Autodesk Maya next, again selecting the “Release, x64” build target:
    • Copy the EmbreeViewportRenderer*.mll files from EmbreeViewportRenderer\Release to your \Documents\maya\plug-ins directory if it did not do so automatically
  • Create “objs” and “ispcgen” directories in the root directories of the source to both Embree AND the Embree sample renderer
  • To build the renderers for the Intel® Xeon Phi™ coprocessor, if you have the ISPC cross-compiler for Microsoft Windows*, open an "Intel Composer XE 2013 SP1 Intel(R) 64 Visual Studio 2010" command window (if you do not have the cross-compiler, a workaround is to build the renderers on Linux and then copy the resulting device_*_knc binaries to your Windows* \Documents\maya\plug-ins directory):
    • In this command window, go to the root directory of the Embree sample renderer that includes the Embree-Based Viewport Plugin for Autodesk Maya
    • Edit the paths in BuildKNCRenderer.bat to be correct for your environment
    • Then run the BuildKNCRenderer.bat script
  • Copy the following files to your \Documents\maya\plug-ins directory
    • *.dll from Documents\<embree>\x64\Release
    • *.dll from Documents\<embree_renderer>\x64\Release
    • device_*_knc from Documents\<embree_renderer>\objs

Run Directions

Before trying to use the plugins with Maya, make sure that the environment variables LD_LIBRARY_PATH and SINK_LD_LIBRARY_PATH include ~/maya/plug-ins/ (Linux) or <user_path>\Documents\maya\plug-ins (Windows) in the shell you are using to start Maya. On Linux this looks like:

  • setenv LD_LIBRARY_PATH ~/maya/plug-ins/:$LD_LIBRARY_PATH
  • setenv SINK_LD_LIBRARY_PATH ~/maya/plug-ins/

Now start Maya, and open the Plug-in Manager found under Window/”Settings/Preferences”/Plug-in Manager. On Linux, load EmbreeViewportRendererIX.so, EmbreeViewportRendererIXP.so, EmbreeViewportRendererIH.so, or EmbreeViewportRendererSX.so. On Windows, load EmbreeViewportRendererIX.mll, EmbreeViewportRendererIXP.mll, EmbreeViewportRendererIH.mll, EmbreeViewportRendererSX.mll, or EmbreeViewportRendererSXP.mll.

The naming convention is that “I” stands for the ISPC renderer, “S” for the single-ray renderer, “X” for the Intel® Xeon® processor version of the renderer, and “XP” for the Intel® Xeon Phi™ coprocessor version of the renderer. “H” stands for the hybrid version of the renderer that uses both the host and coprocessor at once.

Now, load a scene, select a viewport (we recommend a perspective view), and under the viewport’s “Renderer” menu select one of “Single-Ray Embree Renderer for Intel(R) Xeon(R) host,” “ISPC Embree Renderer for Intel(R) Xeon(R) host,” “Hybrid ISPC Embree Renderer,” “Single-Ray Embree Renderer for Intel(R) Xeon Phi(TM) coprocessor” (recommended on Microsoft Windows* only), or “ISPC Embree Renderer for Intel(R) Xeon Phi(TM) coprocessor.”

Once you have selected a renderer and the screen goes black, spiral your mouse pointer over the ViewCube (without pressing any mouse buttons) to force the screen to update and allow progressive rendering of your scene. Note that if the scene stays dark you will likely need to add some light sources to the scene from the rendering shelf. Also, the larger the scene, the longer it will take to load on the Intel® Xeon Phi™ coprocessor before the first frame is rendered.

There are a number of things you should be aware of when running more than one of these plugins:

  • Loading and unloading multiple plugins with a host component (including hybrid) may crash Maya due to the use of globals in some parts of the software. This may also cause Maya to crash or hang when it is shut down by the user. For the best user experience, only load one host plugin at a time.
  • Both hybrid and coprocessor versions of the ISPC renderer create active thread pools on the coprocessor. As a result, if they are loaded at the same time performance will be roughly half of what is seen when only one is loaded. Load only one plugin that uses the coprocessor at a time for the best experience.
  • The number of coprocessors used in either coprocessor-only or hybrid rendering can be controlled by the environment variable EMBREE_NUM_COPROCESSORS on Linux. The default is to use them all.
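As a minimal sketch of the last point (the value 1 here is hypothetical; by default all installed coprocessors are used), the variable must be set in the shell that launches Maya so the plugin inherits it:

```shell
# Restrict coprocessor-only and hybrid rendering to a single coprocessor
# (hypothetical value; the default is to use all installed coprocessors).
export EMBREE_NUM_COPROCESSORS=1
echo "EMBREE_NUM_COPROCESSORS=$EMBREE_NUM_COPROCESSORS"
# maya &   # launch Maya from this same shell so it inherits the setting
```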

As the screen is updated, the command-line window that started Maya (or the output window on Windows) will print a number every 5 updates, representing the average updates/second occurring over that interval. Averages of these numbers are what we used to produce the reported benchmark results.
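If you capture that console output to a file, the per-interval figures can be averaged with a one-liner. This is a sketch: the file updates.log and the sample rates written into it are hypothetical.

```shell
# Hypothetical capture of the updates/second figures printed every 5 updates
printf '9.2\n9.1\n9.16\n' > updates.log
# Average the per-interval rates, as was done to produce the benchmark numbers
awk '{ sum += $1; n++ } END { printf "average updates/s: %.2f\n", sum / n }' updates.log
```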

Using 3D models (and their textures) downloaded from Morgan McGuire's Computer Graphics Archive (http://graphics.cs.williams.edu/data), then converted to Maya scenes and lit, we saw performance like the following on Linux (your results will vary based on the size of your perspective viewport and the camera view you use):

| Scene (single-perspective window panel layout) | Host ISPC renderer (updates/s) | Coprocessor ISPC renderer (updates/s) | Hybrid ISPC renderer (updates/s) | Speedup with coprocessor | Speedup with hybrid renderer |
| --- | --- | --- | --- | --- | --- |
| Chinese Dragon | 9.16 | 8.08 | 15.37 | 0.88x | 1.68x |
| Crytek Sponza | 2.28 | 2.35 | 3.81 | 1.03x | 1.67x |
| Hairball | 6.79 | 5.52 | 10.61 | 0.81x | 1.56x |
| Lost Empire | 5.78 | 6.71 | 10.47 | 1.16x | 1.81x |
| Power Plant | 4.58 | 3.80 | 7.26 | 0.83x | 1.59x |
| Rungholt | 4.55 | 7.44 | 8.04 | 1.64x | 1.77x |
| Sibenik | 7.76 | 6.19 | 11.66 | 0.80x | 1.50x |
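The speedup columns are simply the coprocessor or hybrid update rate divided by the host rate. As a quick check using the Chinese Dragon figures from the table:

```shell
# Hybrid speedup for Chinese Dragon: hybrid rate (15.37) / host rate (9.16)
awk 'BEGIN { printf "hybrid speedup: %.2fx\n", 15.37 / 9.16 }'
```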

Once again, remember that you need to spiral your mouse pointer over the ViewCube (without pressing any mouse buttons) to force the screen to update and allow progressive rendering of your scene.

Conclusions

This proof-of-concept shows that it is feasible to author scenes inside a CPU-based ray-traced viewport built on the Embree high-performance ray-tracing kernels and a modified version of the Embree sample path-tracer. Unless the scene contains a very large number of elements (in which case the cost of the brute-force change-detection process becomes prohibitive), materials, lights, and camera positions can be adjusted freely at relatively interactive rates, even for very large models. In fact, screen update speed is affected far more strongly by the number of objects in the scene (however minor) than by the complexity of the models. Editing models is also possible, though slow for all but small scenes, because the full BVH must be reconstructed after each change (a limitation that could be worked around).

This has an interesting implication: with a ray-traced viewport, it is possible to use the same shaders during authoring and previsualization as in the final render. This could yield large time savings for the artist and decreased maintenance costs for the studio. It could also motivate optimizations to your rendering pipeline that benefit both authoring and final rendering. As a result, we encourage you to examine closely the benefits the Embree kernels could offer your codebase.

Creating this plug-in did not require a great deal of work: it took a few months of part-time effort by a single person who was not initially familiar with either the Embree or Maya APIs. Most of the code in the plugin handles book-keeping for change detection, or conversion between Maya's and the Embree sample renderer's scene representations. There is no technical requirement to use the Embree sample renderer to create a ray-traced Maya viewport: you can instead use your own renderer based on the Embree ray-tracing kernels, or enhance this code as little or as much as you need to support the requirements of your artists.

Linux Platform Configurations

Plugin development, testing, and benchmarking were done on the following platforms:

  • Workstation host with:
    • Two sockets containing 12-core, 2.7 GHz Intel® Xeon® Processors E5-2697 v2
    • 64GB DDR3-1600 memory, 8.0 GT/s
    • Red Hat Enterprise Linux Server release 6.5
    • Intel® Turbo Boost Technology enabled, Intel® Hyper-Threading Technology (Intel® HT Technology) enabled
    • Intel® Parallel Studio XE 2015 Composer Edition, Version 15.0.0.090 Build 20140723
    • Embree version 2.3.3
    • ISPC 1.7.0
    • Autodesk Maya 2014
    • ImageMagick 6.8.9
    • libjpeg 9a
  • One Intel® Xeon Phi™ Coprocessor 7120A
    • 61 cores at 1.238 GHz
    • 16GB GDDR5-5500 memory, 5.5 GT/s
    • MPSS 3.4-1, Flash version 2.1.02.0390, uOS version 2.6.32-431.el6.x86_64
    • ECC enabled, Intel® Turbo Boost Technology disabled
  • Windows development was done on an equivalent system with Windows Server 2008 R2 Enterprise
For more complete information about compiler optimizations, see our Optimization Notice.