Voxel-Based Graphics on Intel Architectures

by Vittorio Accomazzi - Cedara Software Corporation


Voxel Graphics is a means of modeling 3D objects with sampled points (voxels) instead of surfaces. Benefits of this technology include the ability to handle highly detailed objects, as well as objects with ill-defined boundaries, such as clouds. The historical barriers to adopting Voxel Graphics were lack of processing power and lack of standards to share and transmit datasets. These barriers are vanishing: today's desktop Pentium® 4 processor-based system is powerful enough to interactively render voxel models, and compression technology, such as JPEG2000, will provide the framework to compress large 3D images. In the future, Web browsers will display 3D images as easily as they display 2D JPEGs today. We present an overview of the technology and guidelines for efficient implementation of voxel rendering engines on Pentium 4 processor-based systems.

Voxel-based Graphics, also called Volume Graphics, provides an alternative to polygonal models. The proliferation of this technology has been very limited compared to polygon graphics. One of the key enablers of this technology is the ability to render voxel models interactively on Pentium 4 processor-based systems. Another enabling factor is the acquisition technology, such as color laser 3D scanners, that can capture real-world objects.

Benefits of Voxel-based Graphics

Volume Graphics can describe objects without the explicitly defined boundaries required for a polygonal description. Voxels represent interior details, facilitating operations such as clipping and cutting. These and other properties can enhance immersive reality simulations, such as the fly-through inside the human body shown in Figure 1. Unlike polygon graphics, such as that embodied in OpenGL*, semi-transparency is handled directly.

Voxel data structures ease operations such as detecting collisions and modifying the environment.

Figure 1. Fly-through inside the human body. The animation demonstrates the ability to handle highly detailed objects and semi-transparency.

The ability to handle semi-transparent samples helps in modeling amorphous phenomena such as clouds, fog and fire [2].

Voxel Graphics deals well with highly detailed objects, such as the human body. The example model in Figure 1 visualizes the chest, including inner organs, to better than 0.5 mm precision. The resolution of the model is limited only by the acquisition device, not by the technology. The memory footprint and rendering speed depend on the dataset's size, not its complexity. This is a major difference from polygon models, in which the complexity of the model affects the number of polygons and textures, and heavily impacts the memory footprint. As noted in chapter 10 of Volume Graphics (Chen, Kaufman and Yagel), polygon representations often require more memory than voxel representations, forcing a reduction in the number of polygons and hence in the perceptual quality of the images.

Historical Barriers for Voxel Graphics

Four major limitations have prevented widespread use of voxel graphics:

  • Inadequate processing power and memory bandwidth. Costly high-end workstations, often multi-processor, were required to render large datasets.
  • Lack of standards to share datasets across applications. This limitation was overcome in the medical imaging market with the DICOM standard.
  • Lack of awareness of this technology.
  • Lack of inexpensive commercial acquisition hardware.


Developments removing these barriers include:

  • Pentium 4 processors that enable voxel model rendering at interactive frame rates.
  • Compression formats such as the new JPEG2000 standard, which will support 3D volume compression [11]. Using 3D wavelet compression, JPEG2000 can transmit volumetric data across the Internet. It also supports features such as progressive downloading, which allows a client to enhance interactivity and select the volume resolution appropriate for the system. Standardization is a strong driver for adoption, and we believe that a well-known standard like JPEG is very likely to be adopted soon.
  • Growing adoption of voxel graphics in diverse applications, such as:
    • Computer games, especially for static objects such as landscapes (such as terrains using height maps) and buildings. Demand for greater realism [12] is raising interest in voxel graphics.
    • Image-based rendering for generating 3D color datasets from 2D digital photographs [10].
    • Chemistry and biochemistry applications including modeling.
    • Geophysics for oil and gas exploration.
    • Non-destructive testing (NDT).
    • Security imaging applications.
  • Desktop micro-computed tomography (CT) scanners are already being sold by companies such as SkyScan (http://www.skyscan.be*); these can image the interior of objects at 2 µm resolution. Laser scanners such as those of Arius 3D (http://www.arius3d.com*) can capture the true color (independent of ambient lighting or illumination) of object surfaces at very high resolution.


Optimizing Voxel Graphics for Intel Architectures

A volumetric dataset is an extension of a 2D image. Similarly, a voxel, or volume element, is an extension of the pixel.

Like the pixel in a pseudo-color image, a voxel can contain a value used to index a color table. This value is called density and is obtained through the sampling process. For data acquired from a CT scanner, the density represents x-ray absorption. It is commonplace for voxels to encode other properties, such as a normal or the index of the material the voxel represents, to improve the realism of the visualization.
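As a minimal sketch of this classification step (the array shapes, densities and thresholds below are hypothetical, not taken from the paper), a voxel's density can be used to index a color table mapping densities to RGBA values:

```python
import numpy as np

# Hypothetical 8-bit volume: each voxel stores a density value.
volume = np.zeros((64, 64, 64), dtype=np.uint8)
volume[16:48, 16:48, 16:48] = 200  # a dense cube inside the volume

# Color table: maps each of the 256 possible densities to an RGBA color.
# Here, low densities are fully transparent and high densities opaque white.
color_table = np.zeros((256, 4), dtype=np.uint8)
color_table[100:, :3] = 255  # RGB
color_table[100:, 3] = 255   # alpha: opaque at and above density 100

def classify(x, y, z):
    """Look up the RGBA color of the voxel at (x, y, z)."""
    return color_table[volume[x, y, z]]

print(classify(32, 32, 32))  # opaque white
print(classify(0, 0, 0))     # fully transparent
```

Storing only the density per voxel and looking up color at render time keeps the volume small and lets the user change the color table interactively without touching the dataset.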

Figure 2. Taxonomy of Voxel-based Graphics techniques. The object is sampled on a regular grid where each element is called a voxel. The entire set of voxels is called a volumetric dataset, or simply volume. The volume can be rendered directly into a 2D image using Volume Rendering, or first transformed into a binary volume. The binary volume can be rendered as a 2D image (projection) using Surface Rendering.

Volume Rendering allows direct visualization without decomposition into any intermediate geometric representation. Surface Rendering is a special case of Volume Rendering in which each voxel is either fully visible or invisible; it deals with binary volumes such as height maps. This white paper discusses the Volume Rendering technique; however, the considerations apply equally to Surface Rendering. Note that our definition of Surface Rendering differs from that in Volume Graphics, where the term refers to polygon rendering. For an excellent introduction, refer to "Introduction to Volume Rendering" [1].

Volume Rendering is commonly used to selectively visualize a few structures inside the dataset. The voxels belonging to these structures are typically fully opaque with one or two layers of semi-transparent voxels surrounding them to avoid creating aliasing artifacts during the rendering. The semi-transparent voxels typically represent regions that are not fully occupied with the structure that we intend to visualize; this is known as "partial volume artifact" in the Medical Imaging literature. Figure 3 shows how semi-transparency improves the visualization.
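One common way to obtain that thin semi-transparent shell is a piecewise-linear opacity ramp: densities well inside the structure map to full opacity, densities well outside to zero, and the narrow band in between (the partially occupied voxels) to intermediate opacities. The sketch below illustrates the idea; the threshold values are hypothetical:

```python
def opacity(density, t_lo=90, t_hi=110):
    """Piecewise-linear opacity ramp: densities at or below t_lo are
    invisible, at or above t_hi fully opaque; the ramp in between renders
    voxels only partially occupied by the structure as semi-transparent,
    reducing aliasing at the structure's boundary."""
    if density <= t_lo:
        return 0.0
    if density >= t_hi:
        return 1.0
    return (density - t_lo) / (t_hi - t_lo)

print(opacity(50), opacity(100), opacity(200))  # 0.0 0.5 1.0
```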

Figure 3. The image on the left is generated without semi-transparent voxels, using a binary volume. The image on the right is generated using semi-transparency to properly render voxels partially occupied by bone.

When selected structures are visualized, the number of voxels that contribute to the final image is typically very small compared to the entire dataset. The main technical challenge in writing a high-performance Volume Renderer is to access and process only these voxels efficiently.

Consider the example in Figure 4.


Figure 4. On the left is a CT dataset showing the bone structures of the head. On the right is an MR dataset showing the location of a tumor inside the brain.

Figure 4 illustrates two very common situations in Medical Imaging. Table 1 shows the percentage of non-transparent voxels and the percentage of voxels contributing to the final images in figure 4.

                                        CT Dataset     MR Dataset
Dataset size                            230x365x422    417x460x217
Non-transparent voxels                  8%             33%
Voxels projected in the final image     0.6%           2%


Table 1. Percentage of non-transparent voxels, and voxels contributing to the two images in Figure 4. The dataset size is the smallest bounding box containing the anatomy shown.

Both datasets were acquired as stacks of 512x512 images. The voxel percentages are measured against the bounding box of the respective anatomy.

Two key observations based on Table 1 that apply in general are:

  • The proportion of non-transparent voxels is small compared to the entire dataset, and, most importantly:
  • The number of voxels contributing to the final image is very small, compared to the entire dataset.


Table 1 suggests an important optimization for volume rendering: an efficient method for identifying the voxels that contribute to the final image.

Several implementations have been proposed to accelerate volume rendering; for our purposes we categorize them in three classes:

  • Software Only: solution based only on the power of the CPU [9].
  • Texture Mapping: solutions that leverage the 2D or 3D texture mapping implemented by hardware [4][5].
  • Dedicated Hardware: boards designed to perform Volume Rendering.


We've found that a software-only implementation can achieve interactive frame rates. For example, the CT dataset shown in Figure 4 can be rotated with updates at greater than 5 frames/second on a 2.0-GHz Pentium 4 processor (with Intel NetBurst™ microarchitecture). The following crucial techniques take advantage of these observations for optimized performance:

  • Space Leaping: skips the completely transparent voxels. As shown in Table 1, this eliminates rendering a large number of voxels. Space leaping can be implemented in several ways, depending on how the dataset is stored in memory.
  • Early Ray Termination or Voxel Culling: voxels behind fully opaque voxels don't require projection. Early ray termination ensures that only the relatively small number of voxels contributing to the final image is projected.


To take advantage of early ray termination, voxels must be projected in a "front to back" fashion, from the near plane to the far plane of the frustum. Another common optimization is to cull all the voxels behind a voxel that exceeds a predetermined opacity threshold (for example, 95% opaque), since their contributions to the final image are minimal.
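The front-to-back traversal and the opacity-threshold cut-off can be sketched as follows (scalar intensities and a sample list stand in for real ray traversal; these names are illustrative, not from the paper):

```python
def composite_ray(samples, opacity_threshold=0.95):
    """Front-to-back compositing along one ray with early termination.
    `samples` is an ordered list of (color, alpha) pairs from the near
    plane to the far plane; color is a scalar intensity for brevity."""
    color_acc, alpha_acc = 0.0, 0.0
    visited = 0
    for color, alpha in samples:
        visited += 1
        # Front-to-back "over" operator: later samples are weighted by
        # the transparency (1 - alpha_acc) remaining in front of them.
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= opacity_threshold:  # early ray termination
            break
    return color_acc, alpha_acc, visited

# A fully opaque sample early in the ray hides everything behind it:
ray = [(0.2, 0.1), (1.0, 1.0), (0.5, 1.0), (0.5, 1.0)]
color, alpha, visited = composite_ray(ray)
print(visited)  # 2 -- the last two samples were never touched
```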

A good strategy, proposed by several authors [7][8], is to run-length encode the non-transparent voxels. This strategy achieves:

  • Minimal memory footprint
  • Space leaping.
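A one-scanline sketch of this encoding (the threshold and data are hypothetical): transparent runs are collapsed to a skip count, so the renderer jumps over them in constant time (space leaping) while storing almost nothing for them (small footprint).

```python
def rle_scanline(densities, threshold):
    """Run-length encode one voxel scanline: runs of transparent voxels
    (density below `threshold`) are stored only as a skip count; runs of
    non-transparent voxels keep their data."""
    runs, i, n = [], 0, len(densities)
    while i < n:
        j = i
        if densities[i] < threshold:      # transparent run: length only
            while j < n and densities[j] < threshold:
                j += 1
            runs.append(('skip', j - i))
        else:                             # non-transparent run: keep voxels
            while j < n and densities[j] >= threshold:
                j += 1
            runs.append(('data', densities[i:j]))
        i = j
    return runs

line = [0, 0, 0, 120, 130, 0, 0, 140, 0]
print(rle_scanline(line, threshold=100))
# [('skip', 3), ('data', [120, 130]), ('skip', 2), ('data', [140]), ('skip', 1)]
```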


Combining this strategy with a front-to-back projection method, as proposed in Philippe Lacroute's "Shear Warp" algorithm [7], permits early ray termination as well. This algorithm is particularly suited for the Pentium 4 architecture because:

  • The projection buffer and the voxel data are accessed in storage order, taking advantage of spatial locality (more cache hits) and hardware cache prefetching.
  • Sampling and alpha blending can easily be implemented in fixed-point arithmetic. The half-latency integer arithmetic implemented in the Pentium 4 processor speeds up these calculations as well as index calculations. SIMD instructions (MMX™ technology, SSE, SSE2) can aid alpha blending.
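The fixed-point blending mentioned above can be sketched as follows. This is an illustration of the general idea, not the paper's implementation: values are held in 16.16 fixed-point, so one blending step costs only integer multiplies, adds and shifts.

```python
# Alpha blending in 16.16 fixed-point: 16 integer bits, 16 fraction bits.
FIX_SHIFT = 16
FIX_ONE = 1 << FIX_SHIFT

def to_fix(x):
    """Convert a float in [0, 1] to 16.16 fixed-point."""
    return int(x * FIX_ONE)

def fix_mul(a, b):
    """Multiply two 16.16 fixed-point numbers."""
    return (a * b) >> FIX_SHIFT

def blend_fix(color_acc, alpha_acc, color, alpha):
    """One front-to-back blending step, all operands in 16.16 fixed-point."""
    weight = fix_mul(FIX_ONE - alpha_acc, alpha)
    return color_acc + fix_mul(weight, color), alpha_acc + weight

c, a = blend_fix(0, 0, to_fix(0.5), to_fix(0.5))
print(c / FIX_ONE, a / FIX_ONE)  # 0.25 0.5
```

In a SIMD implementation the same arithmetic maps naturally onto packed integer multiplies and shifts (e.g. SSE2), processing several color channels per instruction.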


The Shear Warp algorithm is well suited for multithreading because the image can be split into several non-intersecting regions, which can then be computed in parallel with no dependency.


Voxel-based Graphics is a mature technology whose historical barriers to adoption are vanishing. Pentium 4 processor-based systems are able to render large volumes at interactive rates, and JPEG2000 could provide the standard for sharing datasets. Several markets, including chemistry, games, micro-CT, nondestructive testing, security, laser scanning and medical imaging, can take advantage of this technology.

For questions and comments, send e-mail to Vittorio.Accomazzi@cedara.com.


About the Author

Vittorio Accomazzi is the Architect of the Visualization and Image Processing group at Cedara Software Corp. Vittorio joined Cedara in 1996 and became the Architect of the Visualization and Image Processing group in 1999. Before joining Cedara, Vittorio developed the volume rendering application X-Eva under a grant from the University of Milan. Vittorio's primary research focus is Volume Visualization and Segmentation on general-purpose hardware. Vittorio has authored several papers and served on the Panel on 3D Visualization at CARS 2000.


For details on compiler optimization, refer to our Optimization Notice.