Power is Nothing Without "Controllers"

In the first post I spoke about the team and the project, and the second one was all about the developer kit we are using, where I also introduced a few key aspects of Mixed Reality such as the Space Type. As soon as we received the devkit we started working, and today I want to describe more key features of the Mixed Reality Toolkit and also suggest a few concepts on how to design your app.

Let’s start with a key element: motion controllers.

Motion controllers allow us to take action in mixed reality: by tracking the position of our hands in space, they give us fine-grained interaction with digital objects.

MR Controllers

There is a lot of power behind motion controllers. I can use a thumbstick to move forward, or to rotate, say, 30 degrees in any direction. There are also many buttons that I can configure to perform several different actions.

The only caveat is that, today, the controllers are not standard across all hardware. There is commonality: they all have a system button, a menu button, a trigger, and a grip. After that you get into a few differences. The Windows* Mixed Reality controllers cover both approaches, as they have the touchpad that the Vive* controllers have as well as the thumbstick that the Oculus Rift* controllers have.
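
To give an idea of what that looks like in a script, here is a minimal sketch of thumbstick snap rotation. The axis name "Controller Thumbstick X" is hypothetical; it has to match an axis you define in Unity's InputManager (or one of the axes the toolkit configures for you, as we will see later).

```csharp
using UnityEngine;

// Minimal sketch: snap-rotate the player rig by 30 degrees when the thumbstick is
// pushed to the side. "Controller Thumbstick X" is a hypothetical axis name and
// must match an axis configured in Edit -> Project Settings -> Input.
public class ThumbstickSnapRotate : MonoBehaviour
{
    public float snapAngle = 30f;        // degrees per snap
    public float pressThreshold = 0.7f;
    public float releaseThreshold = 0.2f;

    private bool readyToSnap = true;

    void Update()
    {
        float x = Input.GetAxis("Controller Thumbstick X");

        if (readyToSnap && Mathf.Abs(x) > pressThreshold)
        {
            // Rotate around the vertical axis in the direction the stick is pushed.
            transform.Rotate(0f, snapAngle * Mathf.Sign(x), 0f);
            readyToSnap = false;
        }
        else if (Mathf.Abs(x) < releaseThreshold)
        {
            // Require the stick to come back to center before the next snap.
            readyToSnap = true;
        }
    }
}
```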

XR Controllers

To handle motion controller input, you can use Unity*'s generic OpenVR input system or, when targeting Microsoft* Mixed Reality experiences, you can use the new XR.WSA APIs in the Unity* software.

These APIs are strongly typed and offer a deeper level of detail than the OpenVR APIs do, because the UWP platform exposes two different poses for each controller: Pointer and Grip.

There is a pose for “Pointer”, whose origin is where the pointing ray begins, and there is a pose for “Grip”, whose origin is where our hand grips the controller.
As you can imagine, the pointer pose is great for when you are pointing at an object, especially when you render the controller model and use that to point.

The grip pose is best for when you are holding or throwing an object and you want your origin to be the point where you hold the controller.

Pointer and grip system
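
To make this concrete, here is a minimal sketch, assuming Unity 2017.2+ and the UnityEngine.XR.WSA.Input namespace, that subscribes to the InteractionManager update event and reads both poses (plus the thumbstick) every time a controller reports new data. The class name is just for this example.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

// Minimal sketch: read the Pointer and Grip poses (and the thumbstick) of a
// Windows Mixed Reality motion controller through the strongly typed XR.WSA APIs.
public class ControllerPoseReader : MonoBehaviour
{
    void OnEnable()
    {
        InteractionManager.InteractionSourceUpdated += OnSourceUpdated;
    }

    void OnDisable()
    {
        InteractionManager.InteractionSourceUpdated -= OnSourceUpdated;
    }

    private void OnSourceUpdated(InteractionSourceUpdatedEventArgs args)
    {
        // Only motion controllers have the two poses and a thumbstick.
        if (args.state.source.kind != InteractionSourceKind.Controller)
        {
            return;
        }

        // Grip pose: origin where the hand holds the controller (holding/throwing).
        Vector3 gripPosition;
        if (args.state.sourcePose.TryGetPosition(out gripPosition, InteractionSourceNode.Grip))
        {
            Debug.Log("Grip position: " + gripPosition);
        }

        // Pointer pose: origin where the pointing ray starts (aiming at objects).
        Vector3 pointerPosition;
        Quaternion pointerRotation;
        if (args.state.sourcePose.TryGetPosition(out pointerPosition, InteractionSourceNode.Pointer) &&
            args.state.sourcePose.TryGetRotation(out pointerRotation, InteractionSourceNode.Pointer))
        {
            Debug.DrawRay(pointerPosition, pointerRotation * Vector3.forward, Color.green);
        }

        // The same state object also exposes buttons and axes, e.g. the thumbstick.
        Vector2 thumbstick = args.state.thumbstickPosition;
        if (thumbstick.sqrMagnitude > 0.01f)
        {
            Debug.Log("Thumbstick: " + thumbstick);
        }
    }
}
```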

So, how do we start using all this information? Let's go through the steps to create a first Unity app using the MixedRealityToolkit.

1. Download the latest version of the MixedRealityToolkit from this link: https://github.com/Microsoft/MixedRealityToolkit-Unity/releases

2. Open Unity and create a new 3D project.

3. To import the Unity packages, click on the Assets->Import Package->Custom Package menu item and navigate to the Holotoolkit.unitypackage file you saved earlier.

4. In the top menu navigate to Mixed Reality Toolkit->Configure and select Apply Mixed Reality Project Settings. This configures our Unity project to target Windows Mixed Reality.

5. Check the box labeled Use Toolkit-specific InputManager Axes (if it has not been checked already).

6. In the top menu navigate to Mixed Reality Toolkit->Configure and select Apply UWP Capability Settings.

At this point our environment setup is complete, and we can start creating a scene. In the top menu navigate to Mixed Reality Toolkit->Configure and select Apply Mixed Reality Scene Settings. Leave all the default settings checked and click Apply.

The default settings add these components to the scene:

  • A MixedRealityCameraParent prefab is added to the scene. This prefab adds a default (main) camera at the origin (0,0,0), and it also adds support for motion controllers and the boundary.
  • An InputManager prefab is added to the scene. This prefab adds support for input (via gaze, touch, gestures, and Xbox controller) to our scene.
  • A DefaultCursor prefab is added to the scene.

With that, the project and scene are now configured and primed for making a UWP MR application. Go ahead and save your scene and project.

The MixedRealityCameraParent game object that we added has a Boundary object that we need to configure to be able to navigate within our virtual room. Let's first set the floor.

  1. In the Hierarchy panel, expand MixedRealityCameraParent and click on the Boundary object to select it. Its properties will appear in the Inspector panel.
  2. With the Boundary object still selected, find the Floor object and drag it into the Floor Quad property of the Boundary Manager in the Inspector panel.

As I explained in the previous post (link), we are going for a “standing-scale” configuration, and we have changed the space type to stationary. With Boundary still selected, make sure the Opaque Tracking Space Type is set to Stationary.
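
For reference, this is roughly what that setting boils down to at the Unity API level; a minimal sketch, assuming Unity 2017.2+ and that the Boundary manager is applying the same tracking space type under the hood:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Minimal sketch: request a stationary (standing-scale) tracking space from code.
// The toolkit's Boundary manager exposes this choice through the
// Opaque Tracking Space Type field, so you normally don't call it yourself.
public class SetStationarySpace : MonoBehaviour
{
    void Start()
    {
        // Returns false if the device could not switch to the requested space type.
        if (!XRDevice.SetTrackingSpaceType(TrackingSpaceType.Stationary))
        {
            Debug.LogWarning("Stationary tracking space is not available on this device.");
        }
    }
}
```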

The only thing left would seem to be controller support, but we actually already have it.

When we applied the Mixed Reality Scene Settings to the scene, we also added motion controller support. The MixedRealityCameraParent element we added has a child object called MotionControllers, and this has a script called MotionControllerVisualizer. This script tracks our controllers (position, rotation, and input events) and renders the controller models.

The InputManager object that was added listens for input events across many input sources (mouse, touch, Xbox controller, and so on) and contains the GestureInput object, which has an InteractionInputSource that listens for motion controller events from InteractionManager. These are the motion controller events that will be used to manipulate elements and interact with the scene.
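
To react to those events on your own objects, the toolkit's input module lets a script implement handler interfaces. Here is a minimal sketch, assuming the HoloToolkit.Unity.InputModule namespace from the toolkit release we are using (the exact event data type has changed between releases), that changes the color of the object it is attached to whenever it is clicked with the controller's Select button or an air tap:

```csharp
using UnityEngine;
using HoloToolkit.Unity.InputModule;

// Minimal sketch: change the color of this object whenever it receives a click
// (controller Select press or air tap) while it has focus. Requires the
// InputManager prefab in the scene and a Collider on this object.
public class ColorOnClick : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        GetComponent<Renderer>().material.color = Random.ColorHSV();
    }
}
```

Attach it to any object with a Collider and, with the InputManager prefab in the scene, pointing at the object and pressing Select will trigger OnInputClicked.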

Now all you need to do is put on the headset and click the Play button in the Unity Editor. The system will start tracking your head movement, and you will see different parts of the virtual space as you move your head. Walk around the room and notice how the headset tracks your position; move the controllers, and they will appear in the virtual space, giving you better feedback from the device.

 
