WebVR: Interacting with the Head-Mounted Display (HMD)

Overview

See how to use the WebVR API to build your immersive experience for the web.

 

Resources

See the WebVR 1.1 Specification

Follow Host Alexis Menard on Twitter*

Subscribe to the YouTube* Channel for Intel® Software

Transcript

In this episode, we talk more in depth on how to use the WebVR API to build your immersive experience. I'm your host, Alexis Menard, and today we talk about how to use the WebVR API to interact with the HMD. This is WebVR. 

One of the first things to do is to detect whether the browser supports WebVR. If not, you can use the WebVR polyfill, which on mobile will provide the best possible experience using the phone's sensors. The next step is to enumerate the available VR displays with getVRDisplays. 
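The detection and enumeration steps above can be sketched as follows. This is a minimal sketch of the WebVR 1.1 API: it assumes `navigator.getVRDisplays()` is available either natively or via the polyfill, and the selection rule (first display that can present) is just one reasonable choice.

```javascript
// Detect WebVR support and return the first HMD that can present,
// or null if none is available. Takes the navigator object as a
// parameter so the helper is easy to exercise in isolation.
function findVRDisplay(nav) {
  if (!nav.getVRDisplays) {
    // No native WebVR and no polyfill loaded.
    return Promise.resolve(null);
  }
  return nav.getVRDisplays().then(function (displays) {
    // Keep only displays that are actually able to present content.
    var usable = displays.filter(function (d) {
      return d.capabilities.canPresent;
    });
    return usable.length > 0 ? usable[0] : null;
  });
}
```

In a page you would call `findVRDisplay(navigator)` and fall back to regular rendering when it resolves to null.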

It will return a list of connected HMDs. When you find a suitable HMD, you can ask the browser for permission to present content with requestPresent. This means you take control of the HMD. However, this call may fail if another VR application is already using the HMD. 
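A hedged sketch of that step, assuming a `VRDisplay` object and a canvas used as the rendering source. Note that browsers require `requestPresent` to be triggered from a user gesture such as a click.

```javascript
// Ask the browser to start presenting to the HMD. Resolves to true on
// success and false if presentation was refused (for example because
// another VR application currently owns the display).
function enterVR(vrDisplay, canvas) {
  return vrDisplay.requestPresent([{ source: canvas }])
    .then(function () { return true; })
    .catch(function (err) {
      console.warn('Could not take control of the HMD:', err);
      return false;
    });
}
```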

When requestPresent succeeds, you will need to start a rendering loop. To start a rendering loop on the VR display, you call requestAnimationFrame with a callback. That callback will be called whenever the HMD is ready to render a new frame. 
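The loop looks like this in outline. The HMD drives the frame rate, so you use the display's own `requestAnimationFrame`, not the window's. `renderScene` here is a hypothetical app-supplied draw function, not part of the API.

```javascript
// Start the VR rendering loop. Each iteration queues the next frame,
// fetches the current pose, draws the scene, and submits the result.
function startRenderLoop(vrDisplay, renderScene) {
  var frameData = new VRFrameData(); // reused container for per-frame pose data
  function onFrame() {
    vrDisplay.requestAnimationFrame(onFrame); // queue the next frame first
    vrDisplay.getFrameData(frameData);        // where is the user looking?
    renderScene(frameData);                   // draw both eyes
    vrDisplay.submitFrame();                  // hand the frame to the HMD
  }
  vrDisplay.requestAnimationFrame(onFrame);
}
```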

When your callback is called, you can access the frame data, which will tell you where the user is looking. From this frame data, you will have all the information needed to render each eye. When you're done rendering, you can submit the frame to the display and request a new frame with requestAnimationFrame. Please note that the span between the moment you request the frame data and the moment you submit is called the "critical path." On a 90 Hz HMD, you have about 11 milliseconds to render both eyes, so you really want to make this code path as efficient as possible, postponing secondary tasks until after submitting the frame. 
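One way to keep that critical path tight is to structure a frame so that slower bookkeeping runs only after `submitFrame`. This is a sketch under that assumption; `renderScene` and `secondaryWork` are hypothetical app-supplied callbacks.

```javascript
// Run one frame: the critical path is getFrameData -> render ->
// submitFrame, and everything non-essential is deferred until the
// frame has already been handed to the HMD.
function runFrame(vrDisplay, frameData, renderScene, secondaryWork) {
  vrDisplay.getFrameData(frameData); // start of the critical path
  renderScene(frameData);            // must fit in ~11 ms on a 90 Hz HMD
  vrDisplay.submitFrame();           // frame is now on its way to the HMD
  secondaryWork();                   // safe place for slower bookkeeping
}
```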

Let's now look into what is included in the frame data for each frame. From the pose of the user, you will get two matrices per eye. The view matrix describes where the camera is positioned in space. Then there is the projection matrix, which can be seen as describing the camera type and how objects are rendered, including the field of view and depth. Each eye has different values, because the eyes don't see exactly the same thing. 
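In the WebVR 1.1 `VRFrameData` object these pairs are exposed as left/right view and projection matrices. A minimal sketch of using them; `drawEye` is a hypothetical function that would upload the matrices to the GPU and draw into one half of the canvas.

```javascript
// Render each eye with its own view and projection matrices. The eyes
// are a few centimetres apart, so each gets a slightly different view
// of the scene.
function renderBothEyes(frameData, drawEye) {
  drawEye('left', frameData.leftViewMatrix, frameData.leftProjectionMatrix);
  drawEye('right', frameData.rightViewMatrix, frameData.rightProjectionMatrix);
}
```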

A final note here is the rendering resolution of each eye, which you get using getEyeParameters. HMDs have different resolutions; therefore, the rendering size will differ between devices. Finally, if you want to exit VR mode, you can call exitPresent, which will terminate the rendering inside the HMD. 
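These last two steps can be sketched as below. The sizing rule (eyes rendered side by side, so the canvas is twice the width of one eye) is the common convention, assumed here rather than mandated by the spec.

```javascript
// Size the rendering canvas from the per-eye resolution reported by
// the HMD. Eyes are drawn side by side in one canvas.
function sizeCanvasForHMD(vrDisplay, canvas) {
  var leftEye = vrDisplay.getEyeParameters('left');
  var rightEye = vrDisplay.getEyeParameters('right');
  canvas.width = Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2;
  canvas.height = Math.max(leftEye.renderHeight, rightEye.renderHeight);
}

// Leave VR mode; this stops rendering inside the HMD.
function leaveVR(vrDisplay) {
  return vrDisplay.exitPresent();
}
```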

Thanks for watching and subscribing to the Intel Software channel. We will see you next week for another episode of WebVR.
