Use Cyclone® V SoC FPGA to Create Real-time HDR Video

ID 660412
Updated 11/5/2018
Version Latest
Public


Every year, new technology introduces bigger, better, and faster video. Innovations of the late 90s popularized 720p high definition (HD) cameras and displays, with 1080p and 4K following shortly after. Despite these advances, cameras fall short of replicating the human eye’s capabilities. For example, the dynamic range of shadow and highlight perception of most cameras is considerably smaller than that of the human eye. High dynamic range (HDR) technology addresses these shortcomings and brings cameras closer to the goal of replicating human vision. However, delivering HDR in real time proves difficult. Intel® SoC FPGAs meet these challenges with the processing power to realize HDR video in real time.

Figure 1. Human Vision Perception Range

The creation of HDR content involves the capture of two or more images at different exposures in rapid succession to use in the creation of a single composite image. These operations work well for still photography but present challenges in videography, as subjects are in motion. Essential tasks like color conversion and gamma correction present further difficulties, especially in real time.

To capture video in a wider dynamic range, the Cyclone® V SoC FPGA processes the raw streams from two cameras, each streaming at different exposure levels, and combines them to produce a single HDR stream. Using two identical cameras with a Cyclone V SoC FPGA creates a solution that delivers HDR video in real time, without drawbacks such as motion blurring and decreased frame rates.
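The per-pixel idea behind combining two differently exposed streams can be sketched in software. The weighting below, which favors samples near mid-gray, is a common exposure-fusion heuristic; the article does not disclose the exact blending logic used on the FPGA, so `fuse_pixel` and `fuse_frames` are illustrative names and an assumed approach, not the actual implementation:

```python
import math

def fuse_pixel(under: int, over: int, sigma: float = 0.2) -> int:
    """Blend one 8-bit grayscale sample from an underexposed and an
    overexposed frame. Each sample is weighted by how close it is to
    mid-gray (0.5), so well-exposed detail dominates the result."""
    u, o = under / 255.0, over / 255.0
    # Well-exposedness weight: Gaussian centered at mid-gray.
    wu = math.exp(-((u - 0.5) ** 2) / (2 * sigma ** 2))
    wo = math.exp(-((o - 0.5) ** 2) / (2 * sigma ** 2))
    fused = (wu * u + wo * o) / (wu + wo)
    return round(fused * 255)

def fuse_frames(under, over):
    """Fuse two equally sized grayscale frames (lists of rows)."""
    return [[fuse_pixel(u, o) for u, o in zip(ru, ro)]
            for ru, ro in zip(under, over)]
```

On the FPGA this blend runs per pixel in hardware, so both streams are combined at line rate rather than frame by frame on a CPU.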

Figure 2. Data Stream Combining

Solution Components

Hardware:

  • Terasic DE10-Nano Kit (Cyclone® V SoC FPGA and ARM* Cortex-A9 MPCore processor)
  • Two identical cameras configured at a resolution of 1280x720
  • Monitor
  • HDMI* cable
  • Host PC
  • Ethernet cable

Software:

  • Intel® Quartus® Prime Software
  • Qt Creator* IDE

Functions of the FPGA in Enabling Real-Time HDR

Data Stream – The FPGA adjusts the exposure level of each data stream independently, then passes the processed data to the HDR algorithm for display. It synchronizes the two data streams to the system clock. If a stream exhibits gaps or delays, the frequency automatically adjusts to the faster data stream, keeping the two streams stably synchronized and ready for gamma correction.
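A software analogy for keeping two streams in step is to pair frames whose timestamps fall within a tolerance and drop stale frames from whichever stream lags. This is only an illustrative model of the synchronization described above (the FPGA does this with clocking logic, not timestamp matching), and `pair_frames` is a hypothetical helper:

```python
def pair_frames(stream_a, stream_b, tolerance: float):
    """Pair frames from two (timestamp, frame) streams whose timestamps
    fall within `tolerance`; frames with no close partner are dropped,
    so the output advances at the rate of the faster stream."""
    pairs = []
    a, b = list(stream_a), list(stream_b)
    i = j = 0
    while i < len(a) and j < len(b):
        ta, tb = a[i][0], b[j][0]
        if abs(ta - tb) <= tolerance:
            pairs.append((a[i][1], b[j][1]))
            i += 1
            j += 1
        elif ta < tb:
            i += 1  # drop the stale frame from stream A
        else:
            j += 1  # drop the stale frame from stream B
    return pairs
```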

Gamma correction – Gamma correction, a nonlinear operation, encodes and decodes luminance (brightness) or tristimulus (RGB) values in video or still-image systems. The Cyclone V SoC FPGA, with its ARM Cortex-A9 MPCore processor, performs the floating-point DSP computations for this function efficiently. Additionally, the GUI for this solution features a real-time histogram of the luminance and pixel counts of each color channel in the video stream. The GUI can display either the output stream or each camera's stream.
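Gamma correction itself is a simple power-law mapping. The sketch below assumes the common display gamma of 2.2 (the article does not state the value used) and shows the lookup-table form that hardware designs typically precompute, since an FPGA can apply a 256-entry LUT to every pixel without runtime exponentiation:

```python
def gamma_correct(value: int, gamma: float = 2.2) -> int:
    """Gamma-encode an 8-bit channel value with value**(1/gamma).
    gamma = 2.2 is an assumed standard display value."""
    return round(255 * (value / 255) ** (1.0 / gamma))

# Precomputed lookup table, as an FPGA or embedded design would store it.
GAMMA_LUT = [gamma_correct(v) for v in range(256)]

def correct_frame(frame):
    """Apply the LUT to every pixel of a grayscale frame."""
    return [[GAMMA_LUT[p] for p in row] for row in frame]
```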

Figure 3. Gamma Correction

Color Conversion – The cameras transmit frames to the FPGA as a RAW data stream. To output an image, the solution converts the RAW data to a standard RGB format.

Figure 4. Bayer Color Filter Mosaic

Color filter arrays (CFAs) are arranged over the photosites of an image sensor in a grid pattern, or mosaic, and enable the capture of color information. Most modern digital cameras employ a CFA known as a Bayer filter, in which each of millions of individual photosites collects the luminance value of a single color (red, green, or blue).

The FPGA performs debayering, the process of constructing a color image from incomplete color data, using bilinear interpolation, which calculates the RGB color output of a pixel by averaging the values of the surrounding pixels of that same color. With the Cyclone V SoC FPGA architecture, this process of RAW to RGB conversion can be performed for large data sets in parallel.
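The bilinear interpolation described above can be sketched as follows, assuming an RGGB Bayer pattern (the article does not specify the sensor's pattern). Each missing channel at a pixel is estimated by averaging the neighbors in a 3x3 window that sampled that color:

```python
def debayer_bilinear(raw):
    """Reconstruct RGB from an RGGB Bayer mosaic (2-D list of sensor
    values) by bilinear interpolation. For each pixel, missing color
    channels are averaged from same-color neighbors in a 3x3 window;
    border pixels simply clamp the window to the image edge."""
    h, w = len(raw), len(raw[0])

    def color_at(y, x):
        # RGGB pattern: even row/even col = R, odd row/odd col = B, else G.
        if y % 2 == 0 and x % 2 == 0:
            return "R"
        if y % 2 == 1 and x % 2 == 1:
            return "B"
        return "G"

    def avg(y, x, want):
        # Average all neighbors in the 3x3 window that sampled `want`.
        vals = [raw[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))
                if color_at(j, i) == want]
        return sum(vals) // len(vals)

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            c = color_at(y, x)
            pixel = {c: raw[y][x]}
            for want in "RGB":
                if want != c:
                    pixel[want] = avg(y, x, want)
            row.append((pixel["R"], pixel["G"], pixel["B"]))
        out.append(row)
    return out
```

Because each output pixel depends only on its local 3x3 window, the FPGA can evaluate many of these interpolations in parallel, which is what makes real-time RAW-to-RGB conversion feasible.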

Frame Blending – Although the two cameras are mounted in a close configuration, they still exhibit a slight offset in their fields of view. To resolve the resulting parallax between the two streams, the Intel SoC FPGA performs intensive DSP functions to properly align and trim the video. Using smaller cameras in a tighter configuration decreases the parallax effect, resulting in less trimming and fewer discarded pixels.
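In the simplest case, where the parallax reduces to a fixed horizontal disparity, aligning and trimming amounts to cropping both frames to their overlapping region. This sketch assumes that simplification (the FPGA's actual alignment is a more intensive DSP operation, and `align_and_trim` is a hypothetical helper):

```python
def align_and_trim(left, right, offset: int):
    """Crop two frames (lists of rows) to their overlapping region,
    given a fixed horizontal disparity in pixels. Assumes the right
    camera sees the scene shifted `offset` pixels to the left, so the
    first `offset` columns of `left` and the last `offset` columns of
    `right` have no counterpart and are discarded."""
    width = len(left[0])
    trimmed_left = [row[offset:] for row in left]
    trimmed_right = [row[:width - offset] for row in right]
    return trimmed_left, trimmed_right
```

This also illustrates why a tighter camera configuration helps: a smaller `offset` means fewer columns are trimmed away, preserving more of each frame.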

Figure 5. Parallax Resolution

The Intel® SoC FPGA Advantage

The Intel® SoC FPGA provides a high-performance solution, delivering HDR video in real time. Its outstanding qualities include:

Adaptability to Changes – HDR content creation usually involves multiple captures using a single camera with data combining performed on a CPU. Intel SoC FPGAs provide an efficient alternative. With pipelining and parallel architecture, the solution captures and combines two data streams simultaneously to realize HDR in real time.

I/O Expansion – More inputs yield better image quality: larger sample sets enable better video pixel correction. Intel SoC FPGAs support expansion to two, three, or even ten cameras operating simultaneously, providing higher-quality HDR.

Performance Boost – To create a balanced image, the raw data streams undergo the resource-intensive tasks of RGB conversion and color correction. Intel SoC FPGAs convert the RAW stream into RGB in real time, both decreasing the delay in output and increasing the maximum frame rate.

Conclusion

The solution demonstrates the remarkable potential for delivering powerful HDR video quality in real time. See a more in-depth breakdown of all blocks, modules, and functions available at the contest page Innovate FPGA.

To get started, see source code on GitHub*.