This article shows you how to use LibRealSense and OpenCV to stream RGB and depth data. By the end, you will have a solid starting point: a code base to build upon for your own LibRealSense / OpenCV applications.
This article and sample application show you how to use the Intel® RealSense™ camera (R200) and the Enhanced Photography functionality that is part of the Intel® RealSense™ SDK. The article separates the Intel RealSense SDK functionality from the GUI layer code to make it easier to focus on the R200 Enhanced Photography functionality.
This code sample allows the user to scan their face using a front-facing Intel® RealSense™ camera, project it onto a customizable head mesh, and apply post-processing effects to it. It extends a previous code sample, Applying Intel® RealSense™ SDK Face Scans to a 3D Mesh, adding features and techniques that improve the quality of the final head result and provide a series of post-mapping effects.
MSAA provides a neat way to reduce pixel shading work without sacrificing image quality. Recently, researchers at Intel developed a technique called Coarse Pixel Shading that works like MSAA. A few years ago, Andrew Lauritzen at Intel devised a clever way to enable MSAA with deferred shading; we extended his ideas to enable Coarse Pixel Shading in a deferred rendering setup. With our technique we saw roughly a 40-50% reduction in shading costs on Intel GPUs, with a slight increase in G-buffer generation time.
Imagine playing a game where avatars display player facial motion in real time. Learn how to accomplish this using the latest Intel® RealSense™ SDK and a consumer-grade RGB-D camera. A code sample is included.
The Intel® Software Guard Extensions (Intel® SGX) SDK provides three functions for detecting and enabling Intel SGX support on systems. The key question for software developers is: what is the proper way to detect Intel SGX support on a system so that their applications and their installers behave accordingly?
This article provides an introduction to autonomous navigation and its use in augmented reality applications, with a focus on agents that move and navigate. Autonomous agents are entities that act independently using artificial intelligence, which defines the operational parameters and rules by which the agent must abide. The agent responds dynamically in real time to its environment, so even a simple design can result in complex behavior. An example is developed that uses the Intel® RealSense™ camera (R200) and the Unity* 3D game engine.
This article introduces a new implementation of the effect called adaptive screen space ambient occlusion (ASSAO), which is specially designed to scale from low-power devices and scenarios up to high-end desktops at high resolutions, all under one implementation with a uniform look, settings, and quality that is equal to the industry standard.
This article discusses why using a texture rather than an image can improve OpenGL rendering performance. It is accompanied by a simple C++ application that alternates between using a texture and using an image. The purpose of this application is to show the effect on rendering performance (milliseconds per frame) when using the two techniques.