VR UX: We Are Not Gorillas!

Gorilla Arm

In my previous posts I talked about every developer's first experience with the Mixed Reality world. Today, though, I would like to talk about one of the key topics of any application: User Experience and User Interface.

Generally speaking, an application's UI and UX are burning issues that are often underestimated.

Ten years ago, the transition from desktop to mobile made it clear that designing an application for a new device often involves a complete redesign of the interaction paradigm itself.

Today, this topic is still often underestimated in the design of desktop/mobile apps, and completely ignored in most VR/AR/MR applications.

Devices like HoloLens and the HP VR 1000, and their predecessors Oculus Rift and HTC Vive, made it obvious that user interfaces and interaction paradigms, from 0-D to 3-D, need a complete rethink.

As an example, think about "static interaction": before the advent of these devices the user did not move and the interaction space was limited (a phone or monitor screen), while now it is mandatory to design interfaces that accept and support the user's freedom of movement in both virtual and real space.

To tell the truth, not everything must be designed from scratch, and many interesting hints can be taken from past experiences such as the Wii and Kinect, or from related fields like voice assistants such as Alexa, Cortana, and Google Home (why Google did not give it a proper name is still a mystery!). The UI/UX subject is huge, so in order not to dwell on it, I will omit 0-D interfaces for now and try to get back to them, if there is enough time, in another post.

First of all, you must distinguish between a game and a UI. When you create a game, the challenge is to entertain the player. When you create a user interface for an application, the challenge is to make it easy to use and effective.

One of the most common problems of VR apps, which is often not considered, is the amount of physical stress a user can endure.

Remember that fatigue kills any gesture or UI: as fatigue increases, performance decreases, the user's frustration grows, and the UX suffers.

The first type of stress is referred to as "Gorilla Arm", that is, the fatigue accumulated in the shoulders and arms after interacting for long periods of time. As an example, try keeping your arm raised in front of your face for just five minutes: you will understand what I am talking about.

In a VR application this can happen quite often, for example when the user needs to interact a lot with vertical interfaces using controllers (which can be more or less heavy), or when scrolling long lists of data by repeating the same gesture many times. A possible solution for this is a "momentum without friction" effect.
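As a rough sketch of this idea in Unity C# (the component name and the drag-callback wiring below are illustrative, not from our project), a low-friction momentum scroll lets a single flick carry the list a long way, so the user repeats the gesture far less often:

using UnityEngine;

// Illustrative sketch: inertial ("momentum") scrolling with very low friction,
// so a single flick keeps a long list moving instead of requiring many gestures.
public class MomentumScroll : MonoBehaviour
{
    [SerializeField] float friction = 0.5f;    // low value = long glide after release
    float velocity;                            // scroll units per second
    bool draggedThisFrame;
    public float Offset { get; private set; }  // current scroll position

    // Call this every frame while the user is dragging, with the frame's drag delta.
    public void Drag(float delta)
    {
        velocity = delta / Time.deltaTime;     // remember the release speed
        Offset += delta;
        draggedThisFrame = true;
    }

    void Update()
    {
        if (draggedThisFrame) { draggedThisFrame = false; return; }
        if (Mathf.Approximately(velocity, 0f)) return;
        Offset += velocity * Time.deltaTime;   // keep gliding after release
        velocity = Mathf.MoveTowards(velocity, 0f, friction * Time.deltaTime);
    }
}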

It is recommended to avoid long interactions and to keep the interaction area limited to what is referred to as the "Comfort Zone", namely the area in which the hand can move between shoulder height and the belly button. This is a well-known topic in the field of usability and ergonomics in industrial applications.
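As a minimal sketch (assuming the main camera approximates the user's head, and with shoulder/waist offsets that are rough guesses rather than ergonomic data), an interactable panel can be clamped into this zone like so:

using UnityEngine;

// Illustrative sketch: keep an interactable panel inside the "Comfort Zone",
// the band between shoulder height and the belly button. The offsets are
// rough assumptions, not measured ergonomic values.
public class ComfortZoneClamp : MonoBehaviour
{
    [SerializeField] Transform head;               // usually the main camera
    [SerializeField] float shoulderOffset = 0.25f; // metres below eye level
    [SerializeField] float waistOffset = 0.75f;    // metres below eye level

    void LateUpdate()
    {
        float top = head.position.y - shoulderOffset;
        float bottom = head.position.y - waistOffset;
        Vector3 p = transform.position;
        p.y = Mathf.Clamp(p.y, bottom, top);       // never above the shoulder, never below the waist
        transform.position = p;
    }
}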

Physical Comfort Zone

Another important stress factor is the weight of the device on the user's neck. Since we cannot remove or modify the device itself, as developers we can work to make sure that the UI is always well positioned, so the user does not need any unnecessary head movement to interact with the application.
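A common way to achieve this is a "tag-along" behaviour that keeps the UI in front of the user at eye level. A minimal sketch, assuming the main camera stands in for the user's head (all names and values below are illustrative):

using UnityEngine;

// Illustrative sketch: a "tag-along" panel that stays in front of the user's
// eyes, so the head never has to tilt up or down to reach the UI.
public class TagAlongUI : MonoBehaviour
{
    [SerializeField] Transform head;        // usually the main camera
    [SerializeField] float distance = 1.5f; // metres in front of the user
    [SerializeField] float smoothing = 4f;  // higher = snappier follow

    void LateUpdate()
    {
        // Target point: straight ahead on the horizontal plane, at eye height.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 target = head.position + forward * distance;
        transform.position = Vector3.Lerp(transform.position, target,
                                          smoothing * Time.deltaTime);
        // Face the user so the panel is always readable.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}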

Moreover, we need to consider the user’s cognitive stress as an important factor in a VR/MR experience.

A problem common to many applications is the disorientation the user can feel, and the consequent confusion about how to interact, for which we can name three main causes:

  • Interactive objects are not visually different from non-interactive ones
  • Feedback on interactive objects is inadequate, or semantically unaligned with the performed action
  • Interactive objects are outside the field of view
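The first two causes can be mitigated with simple focus feedback that makes interactive objects visually distinct. A minimal sketch, assuming your gaze/pointer system exposes focus enter/exit events (for example the MR Toolkit's focus events) and that the object's material has a colour property:

using UnityEngine;

// Illustrative sketch: highlight an interactive object while it has focus,
// so it is visually distinct from non-interactive geometry.
[RequireComponent(typeof(Renderer))]
public class HoverHighlight : MonoBehaviour
{
    [SerializeField] Color highlight = Color.cyan;
    Renderer rend;
    Color original;

    void Awake()
    {
        rend = GetComponent<Renderer>();
        original = rend.material.color;  // assumes a material with a colour property
    }

    // Wire these to your gaze/pointer focus events.
    public void OnFocusEnter() { rend.material.color = highlight; }
    public void OnFocusExit()  { rend.material.color = original;  }
}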

Except for games, where these factors can be part of the gameplay itself, in any other application it always makes sense to provide visual or audio cues that help users find the right direction to interact with the VR/MR environment.
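For the third cause, a typical cue is an arrow that appears when the target is outside the field of view and points the user toward it. A minimal sketch, with a hard-coded 30-degree half-angle standing in for the real field of view:

using UnityEngine;

// Illustrative sketch: a screen-edge arrow that points toward an interactive
// object that is currently outside the user's field of view.
public class DirectionIndicator : MonoBehaviour
{
    [SerializeField] Transform head;    // usually the main camera
    [SerializeField] Transform target;  // the interactive object to find
    [SerializeField] Renderer arrow;    // arrow mesh, shown only when needed

    void LateUpdate()
    {
        Vector3 toTarget = target.position - head.position;
        bool inView = Vector3.Angle(head.forward, toTarget) < 30f; // rough FOV half-angle
        arrow.enabled = !inView;
        if (!inView)
            transform.rotation = Quaternion.LookRotation(toTarget); // point at the target
    }
}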

Another useful aid is a context menu that can be opened at any time during the experience.

Finally, as developers we always need to consider "Cultural Input Variability": different people perform the same task in different ways depending on their cultural origins. In the movie "Inglourious Basterds", a spy is discovered because of the way he gestures the number three with his hand.

Inglourious Basterds

Some Takeaways to remember:

Semantic intuitiveness

Gestures and interactions should have a clear cognitive association with the semantic functions they perform and the effects they achieve. Intuitiveness can be reinforced by an appropriate interface and feedback.

The semantics of gestural patterns that belong to everyday life or common tasks should be as consistent as possible with their "conventional" meaning, while also taking into account that intuitiveness is strongly tied to users' cultural background, general knowledge, and experience.

Minimize fatigue

3D interaction involves more muscles than keyboard interaction or speech. Gestural commands must therefore be concise and quick, and minimize the user's effort and physical stress.

Two types of muscular stress are known: static, the effort required to maintain a posture for a fixed amount of time; and dynamic, the effort required to move a part of the body along a trajectory.

Favor ease of learning (Learnability)

It must be easy for the user to learn and remember the interaction paradigm, minimizing the mental load of recalling movements, buttons, and associated actions.

The learning rate depends on the tasks, the user's experience and skills, and the size of the interaction language (more gestures and interface elements decrease learnability).

The gestures that are most natural, easiest to learn, and immediately assimilated by the user are those that belong to everyday life or involve the least physical effort. These gestures should be associated with the most frequent interactions.

Complex gestures can be more expressive and give more control, but carry a higher learning burden.

Hence there is clearly a tension between design requirements, among which a compromise must be found: naturalness of gestures, minimal size of the interaction language, and expressiveness and completeness of the interaction pattern.

Intentionality (Immersion Syndrome)

Users can perform unintended gestures, i.e., movements that are not meant to communicate with the system they are interacting with. The "immersion syndrome" occurs when every movement is interpreted by the system, whether or not it was intended, and may trigger interaction effects against the user's will.

The designer must identify well-defined means to detect the intention behind a gesture, as distinguishing useful movements from unintentional ones is not easy. Body tension and a non-relaxed posture can be used to make explicit the user's intention to start an interaction, issue a command, or confirm a choice.

The tense period should be short, so as not to generate fatigue.
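A simple way to encode this is a short hold-to-confirm: the gesture must be held deliberately for a fraction of a second before it fires, and releasing it re-arms the trigger. A minimal sketch, assuming some other component detects the raw gesture (e.g., a pinch) each frame:

using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: fire a command only after the gesture has been held
// deliberately for a short time, so casual movements are ignored.
public class HoldToConfirm : MonoBehaviour
{
    [SerializeField] float holdSeconds = 0.4f; // short, to avoid fatigue
    [SerializeField] UnityEvent onConfirmed;
    float heldFor;
    bool fired;

    // Call every frame with whether the raw gesture (e.g., a pinch) is detected.
    public void Tick(bool gestureActive)
    {
        if (!gestureActive) { heldFor = 0f; fired = false; return; }
        heldFor += Time.deltaTime;
        if (!fired && heldFor >= holdSeconds)
        {
            fired = true;                      // re-armed only after release
            onConfirmed.Invoke();
        }
    }
}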

Not self-revealing

Gestures, unlike buttons, do not reveal themselves: appropriate feedback indicating the effects and correctness of the performed gesture is necessary for successful interaction, and improves the user's confidence in the system.

Which solutions did we use in our project?

We decided to create a "Curved UI" for the static menu part, which positions itself at eye level, and a "Radial Menu" for the frequent actions the user can perform during the experience. The following video explains it.

Curved UI is a common solution in VR applications, and there are many great plugins that make it possible to develop these canvas-based effects, so I will not go deeper into the topic.

For the context menus we decided to create a new 3D component from scratch, rather than use a pre-built 2D canvas.

The Mixed Reality Toolkit introduced a brand-new "UX Controls" section in its latest release, which contains the "Bounding Box & App Bar" elements together with many others.

These components recreate the effect used at system level and in some sample apps such as Holograms, but they have some critical issues.

First, the two components are tightly coupled, and using one requires the presence of the other. Also, the App Bar can easily become very long and difficult to manage, besides not supporting second-level menus.

For these reasons, we decided to create a menu similar to the one Microsoft built for its Cirque du Soleil app.

Cirque Du Soleil

Andrea Bresser (https://twitter.com/datanonsense) had already started working on a similar component, but it requires manually configuring the number and size of the menu slices at compile time. So we decided to use it as a base to develop a fully dynamic version that is still compatible with the MR Toolkit and bound to its BoundingBox.
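The dynamic part essentially boils down to computing the slice layout at runtime from the current number of items, instead of from values fixed at compile time. A minimal sketch (the component name and layout details are illustrative, not our actual code):

using UnityEngine;

// Illustrative sketch: lay out radial-menu items dynamically at runtime,
// instead of fixing the slice count and sizes at compile time.
public class RadialMenuLayout : MonoBehaviour
{
    [SerializeField] float radius = 0.15f; // metres from the menu centre

    // Re-positions all current children evenly around a circle; call it
    // again whenever items are added or removed.
    public void Rebuild()
    {
        int count = transform.childCount;
        if (count == 0) return;
        float step = 360f / count;         // one slice per item
        for (int i = 0; i < count; i++)
        {
            float angle = i * step * Mathf.Deg2Rad;
            transform.GetChild(i).localPosition =
                new Vector3(Mathf.Sin(angle), Mathf.Cos(angle), 0f) * radius;
        }
    }
}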

For those interested in learning more about designing a radial menu, I suggest the following links:

The Radial Menu code will be released on GitHub at the end of the project, and subsequently integrated into the Mixed Reality Toolkit.
