Prototyping Zenva* Sky - Learn Coding in VR

First Steps Into Developing the World's First VR App to Teach Coding

How VR Can Help Teach Computer Science

Virtual reality is not strictly needed to teach computer science. We’ve already developed successful products and courses that teach these concepts on web and mobile platforms, such as Zenva* Academy and Codemurai (Android* and iOS*). In fact, there is currently not a single VR application out there that teaches computer science.

What we’ve observed, however, is that a lot of people struggle to understand abstract concepts when there is no direct application for them. Humans are wired to interact with their natural environment and learn from the feedback those interactions provide. Unless you are already writing code, computer science concepts don’t offer such a direct feedback loop, which makes it hard for many people to get past the initial learning curve.

What VR makes possible is the creation of a simulated environment where people can explore computer science concepts in the same way they interact with the real world. By making these concepts visual, tangible, and responsive, we can make it easier for learners to develop skills the same way they do when learning a new sport or a musical instrument.

How it Will Work

Zenva Sky will consist of different activities that introduce or reinforce previously taught computer science concepts. At the start of each activity, a visual challenge will be presented to the user; this could be, for instance, moving a block to a certain position so that a door opens up.

The user will interact with different objects in their environment using hand-tracked controllers with a “laser pointer” for selections.

When selecting an object that is “interactable”, different commands will pop up for the user to pick. For example, some objects will give the user the ability to move them or scale them up.

Upon command selection, the user will pick parameters for the command, if applicable. The commands selected by the user will be presented in a “program view”: a panel located in the world and visible to the user at all times.

The user will be able to “execute” this program, which will run every command they selected in sequence. The commands will move or otherwise affect the objects they target, and the goal of a program will be to solve the challenge.

In addition, some challenges will allow the user to define conditions and iterations, in this same intuitive manner.

Hardware and Software

We’d like to thank Intel and Microsoft for the generous support and hardware provided, which consisted of:

Zenva Sky is being developed on Windows 10 Pro with the Unity* engine.

Week 1 Progress

The goal of week 1 was to get a basic prototype off the ground: something simple that can be showcased and iterated upon. Since there are no similar products on the market, it was important to quickly develop something we could use to get a feel for the experience and how it will work.

Some activities included:

  • Environment setup
  • Data model definition
  • Creating commands in VR
  • Running lists of commands in VR

Environment setup - Hello World VR

Running a “hello world VR” project in Unity these days is definitely easier than it used to be. First of all, when installing Unity we made sure to include all the dependencies related to Windows Universal Apps. We also decided to use the Unity Hub tool (currently in beta), which makes it easier to manage multiple versions of Unity.

After installing Unity and creating a new project, we selected Windows Universal Apps as the build target, and enabled virtual reality under Build Settings - XR Settings:

The other two packages we incorporated from the start were:

  • TextMesh Pro: now free, it gives you much sharper text and is easier to work with than Unity’s default Text component.
  • VRTK: a library for various VR interactions that works out of the box with Windows MR headsets. We manually imported the latest code from the master branch (a small usage sketch follows this list).
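
To give a sense of how the laser-pointer selection plugs into our own scripts, below is a minimal sketch assuming VRTK 3.x, where a VRTK_Pointer on the controller raises a DestinationMarkerSet event when the user points at an object and confirms. TargetObject is the interactable component described in the data model section below, and the panel-opening logic is reduced to a log call; this is an illustration, not the final implementation.

```csharp
using UnityEngine;
using VRTK;

// Attach to the controller object that carries the VRTK_Pointer component.
// When the user points at something and confirms, VRTK raises the
// DestinationMarkerSet event; we then check whether the hit object is one
// of our interactable targets.
public class TargetSelector : MonoBehaviour
{
    private VRTK_Pointer pointer;

    private void OnEnable()
    {
        pointer = GetComponent<VRTK_Pointer>();
        pointer.DestinationMarkerSet += OnDestinationSet;
    }

    private void OnDisable()
    {
        pointer.DestinationMarkerSet -= OnDestinationSet;
    }

    private void OnDestinationSet(object sender, DestinationMarkerEventArgs e)
    {
        TargetObject target = (e.target != null) ? e.target.GetComponent<TargetObject>() : null;
        if (target != null)
        {
            // In the real app this is where the command panel for the
            // selected object opens up.
            Debug.Log("Selected target: " + target.name);
        }
    }
}
```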

Data model definition

Since we had a clear idea of where we were headed, it was important to spend some time mapping out the main entities our application will have. Of course, this model will be updated over time.

As a starting point, we have three main entities:

  • Target objects: interactable objects in the environment. For now these are just cubes, but they could be any model we want the user to interact with.
  • Commands: actions we want target objects to be able to carry out. For example “moving” or “scaling”.
  • Programs: a list of commands which can be executed.

This is a simplification of the actual implementation, as there are many nuances such as the UI that belongs to each command, or the difference between an abstract command and an instance of a command with actual parameter values and a related target object.
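
To make the split between an abstract command and a command instance a bit more concrete, here is a rough Unity C# sketch of the three entities. The class and field names are illustrative only, not the actual project code, and details such as the per-command UI are left out.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// An interactable object in the environment (for now, a cube).
public class TargetObject : MonoBehaviour
{
    // Abstract commands this object supports, assigned in the Inspector.
    public List<Command> availableCommands;
}

// Abstract command definition shared as an asset, e.g. "Move" or "Scale".
public abstract class Command : ScriptableObject
{
    public string displayName;

    // Concrete commands implement their behaviour as a coroutine so that
    // execution can span several frames (movement, scaling, and so on).
    public abstract IEnumerator Execute(TargetObject target, float[] parameters);
}

// A command instance: an abstract command plus the concrete parameter values
// and the target object it was created for.
[System.Serializable]
public class CommandInstance
{
    public Command command;
    public TargetObject target;
    public float[] parameters;

    public IEnumerator Execute()
    {
        return command.Execute(target, parameters);
    }
}

// A program is simply an ordered list of command instances.
[System.Serializable]
public class Program
{
    public List<CommandInstance> commands = new List<CommandInstance>();
}
```

Keeping parameter values on the command instance, rather than on the shared Command asset, is also one way to sidestep the shared-state issue mentioned under “Main challenges” below.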

Creating commands in VR

1. The user selects an object with the tracked controller. A panel opens up with a list of available commands:

2. The user selects a command. A panel with command parameters opens up:

3. The user enters parameter values (for now this can only be done with a physical keyboard, but we’ll add a VR keyboard):

4. The command is added to the program (a rough sketch of this step follows below).
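
The glue for step 4 could look roughly like the following, reusing the entity sketch above; refreshing the world-space program panel is left as a comment.

```csharp
using UnityEngine;

// Glue between the command/parameter panels and the program panel.
// The UI calls AddCommand when the user confirms a command (step 4 above).
public class ProgramBuilder : MonoBehaviour
{
    public Program program = new Program();

    // selectedCommand and selectedTarget come from steps 1-2, parameters from step 3.
    public void AddCommand(Command selectedCommand, TargetObject selectedTarget, float[] parameters)
    {
        CommandInstance instance = new CommandInstance
        {
            command = selectedCommand,
            target = selectedTarget,
            parameters = parameters
        };

        program.commands.Add(instance);

        // ...then refresh the world-space "program view" panel so the new
        // command shows up in the list.
    }
}
```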

Running lists of commands

The user can press a virtual button to execute the program, which runs all of its commands in sequence.

We’ve only created one command for testing: move(), which allows the user to move an object along an axis.

When a command is executing, a collision stops it and the next command in the program is executed. This will allow us to create challenges in an environment that feels real.
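
Putting execution together, a simplified program runner and move() command might look like the sketch below. The parameter convention (axis components plus a distance) and the kinematic Rigidbody sweep test used as the collision check are assumptions made for this sketch, not necessarily what the project ends up using.

```csharp
using System.Collections;
using UnityEngine;

// Runs the whole program when the virtual "execute" button is pressed,
// one command instance after another.
public class ProgramRunner : MonoBehaviour
{
    public ProgramBuilder builder;

    // Hooked up to the virtual "execute" button.
    public void OnExecutePressed()
    {
        StartCoroutine(RunProgram());
    }

    private IEnumerator RunProgram()
    {
        foreach (CommandInstance instance in builder.program.commands)
        {
            // Wait for the current command to finish (or be cut short by a
            // collision) before starting the next one.
            yield return StartCoroutine(instance.Execute());
        }
    }
}

// Example concrete command: move the target along one axis by a given distance.
// Parameter convention assumed here: parameters[0..2] = axis, parameters[3] = distance.
[CreateAssetMenu(menuName = "Zenva Sky/Move Command")]
public class MoveCommand : Command
{
    public float speed = 1f;   // metres per second

    public override IEnumerator Execute(TargetObject target, float[] parameters)
    {
        Vector3 axis = new Vector3(parameters[0], parameters[1], parameters[2]).normalized;
        float distance = parameters[3];
        Rigidbody body = target.GetComponent<Rigidbody>();   // assumed kinematic
        float moved = 0f;

        while (moved < distance)
        {
            float step = speed * Time.deltaTime;
            RaycastHit hit;

            // Stop this command early if the object would collide with something;
            // the runner then moves on to the next command.
            if (body.SweepTest(axis, out hit, step))
            {
                yield break;
            }

            target.transform.Translate(axis * step, Space.World);
            moved += step;
            yield return null;
        }
    }
}
```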

Main challenges

  • Connecting all the dots. Putting together the mix of scriptable objects and prefabs that get passed between the different parts of the application took a lot of iterations. We ran into issues such as the parameters of a command affecting all previously created commands, or commands being created endlessly, which could cause performance issues in production (one way to avoid the former is sketched after this list).
  • Design (ongoing). We are still exploring and trying to figure out what the app will actually look like. Having this basic functionality is a first step towards creating a demo activity that actually makes sense. We still have to define the art style, scale and UI look and feel.
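
For reference, the “parameters of a command affecting all past commands” symptom is typical of writing per-use values into a shared ScriptableObject asset. Besides keeping per-use state on the command instance (as in the data model sketch earlier), another option is to clone the asset whenever a command is created; a minimal illustration:

```csharp
using UnityEngine;

public static class CommandFactory
{
    // Cloning the shared Command asset gives each created command its own copy,
    // so editing one command's parameters can no longer affect earlier commands.
    public static Command CreateRuntimeCopy(Command sharedAsset)
    {
        return Object.Instantiate(sharedAsset);
    }
}
```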

Next Steps

  • Improve the design and create a demo activity
  • Give the user more control by letting them change / delete commands from a program
  • Challenge completion logic for both success and failure
  • Show a thumbnail of the target object in the command list, so users can see which object each command applies to

Of course, there is a lot left in terms of the commands and content we want to create:

  • More commands
  • Conditional logic, let the user define courses of action based on variables
  • Loops and recursion
  • Explore other computer science abstractions
  • Make elements interact with each other (explosive elements?), and hook that into the conditional logic