Bodie is a mining town in the Sierra Nevada mountain range, founded during the California Gold Rush. It thrived in the late 1800s and endured into the early twentieth century, but was abandoned when the ore ran out. California State Parks now maintains the town in what it calls a state of “arrested decay.”
This code sample demonstrates how VR can be used to reach audiences other than gamers. Bodie, California is both a tourist application and an educational resource.
My colleague – Steve Schroedl at Third Rock Studio – and I worked with California State Parks and the California Film Commission to obtain permits for photography and video. We also paid for access to a park guide.
Our first day was spent shooting photographs. We placed the camera at a location, took a 360-degree still image, then moved on to the next spot. Days two and three were spent with the guide. She gave us a list of places – both inside and outside buildings – where we could shoot 360-degree video while she spoke in front of the camera.
Not all of the media captured on this trip is used in the initial version of the Bodie application; the sheer volume of hard drive space required would be prohibitive. Another factor is the nature of this initial version, which is a developer-focused release and code sample.
This Unity* software document shows how to set up a scene for displaying 360-degree media. I designed the Bodie application around the concepts it describes; both the video and the still images rely on the same principles.
Figure 1. Unity project assets.
The app stores images and video in the Resources folder. This in turn uses the Unity Resources class to extract media for display. The XML folder does not contain media, but XML data that is used to identify files to be loaded at runtime.
The sole scene for the application is named “Bodie California.”
The main Scripts folders contain the following:
The ScreenManagers folder contains classes that control and manipulate the project’s CanvasScreens and load the proper image types. CanvasScreens are Unity Canvas objects with some extra functionality built around them; I appended the word “Screens” to give the name a little more meaning in this context. Screens here have a 2D image applied to them, and the user views them much as they would a screen in a movie theater.
This contains the main Bodie360Texture used by the skybox material for displaying 360-degree video.
This is the SteamVR Unity plugin, downloaded from the Unity* Asset Store.
Figure 2. Unity project hierarchy.
ApplicationManager is an empty game object that holds a reference to the ManagerMainApp.cs class, which is responsible for launching the intro screen, flat image screen, and home screen.
The root CanvasScreens is an empty game object used as a container for the other screens. All canvas objects in this group contain a Unity Canvas Group component that works with the FadeManager.cs class to fade the canvas screens in and out.
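To illustrate the mechanism, here is a minimal sketch of the kind of alpha fade a Canvas Group enables. The class name, coroutine, and timing are illustrative, not the actual FadeManager.cs source.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of a Canvas Group fade; not the actual Bodie source.
public class CanvasGroupFadeSketch : MonoBehaviour
{
    // Tween a Canvas Group's alpha toward a target over 'duration' seconds.
    public IEnumerator FadeCanvasGroup(CanvasGroup group, float targetAlpha, float duration)
    {
        float startAlpha = group.alpha;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            group.alpha = Mathf.Lerp(startAlpha, targetAlpha, elapsed / duration);
            yield return null; // wait one frame
        }

        group.alpha = targetAlpha;

        // A fully transparent screen should not intercept input.
        group.interactable = group.blocksRaycasts = targetAlpha > 0f;
    }
}
```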
The root ScreenManagers is an empty game object used for organization. It contains three child game objects, with instances of the C# screen manager classes attached.
This game object contains support for the VR system and interactions.
The Main App Manager variable is the same class instance that’s defined in the ApplicationManager empty game object in the project Hierarchy. This allows functions on the ManagerMainApp.cs class to be called when events occur.
The VideoManager game object maintains the video aspect. This includes how the 360-degree videos are displayed and traversed.
Figure 3. Microsoft Visual Studio* 2017 source code folders.
The Microsoft Visual Studio* 2017 solution groups classes by functionality. Below are brief descriptions of the code; for deeper explanations, refer to the heavily commented source, which explains each class thoroughly.
These XML files describe the media used by different aspects of the application. PhotoSphereList.xml contains the names of the photosphere files for display, and VideoList.xml contains the names of the videos for display.
When looking at these XML files, note that the filenames do not have an extension associated with them. This is because Unity’s Resources.Load() functionality requires that no filename extension be supplied.
The XML files provide flexibility. Of the many photosphere images in the resources, you may wish to work with only a few; the lists let you specify, at both design time and runtime, exactly which files to isolate and work with. Knowing which files you want also lets you load them one at a time rather than all at once.
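The sketch below shows the general loading pattern. It assumes the XML files live under a Resources folder and use a simple one-element-per-file schema; the actual PhotoSphereList.xml schema and paths in the project may differ.

```csharp
using System.Xml;
using UnityEngine;

// Illustrative sketch only; the real schema and folder layout may differ.
public class PhotoSphereLoaderSketch : MonoBehaviour
{
    public Texture2D LoadFirstPhotoSphere()
    {
        // XML files placed under Resources are imported as TextAssets.
        TextAsset xmlAsset = Resources.Load<TextAsset>("XML/PhotoSphereList");

        var doc = new XmlDocument();
        doc.LoadXml(xmlAsset.text);

        // Hypothetical node path; adjust to the actual schema.
        string fileName = doc.SelectSingleNode("//File").InnerText;

        // Note: no extension on the name, per the Resources.Load() requirement.
        return Resources.Load<Texture2D>(fileName);
    }
}
```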
Figure 4. Map of Bodie on the home screen.
Each of these markers represents a user interface (UI) button. MapButtonClickArgs is added to each button, allowing the application to know which photosphere or video to load at runtime. This information is passed back to the ManagerMainApp class in the ApplicationManager game object. The data is then used to determine what to load and display.
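A sketch of what such a component might hold follows; the actual fields of MapButtonClickArgs in the Bodie source may differ.

```csharp
using UnityEngine;

// Illustrative stand-in for MapButtonClickArgs; real field names may differ.
public enum MediaType { PhotoSphere, Video }

public class MapButtonClickArgsSketch : MonoBehaviour
{
    // Which kind of media this map marker opens.
    public MediaType mediaType;

    // Extension-less file name to load from Resources at runtime.
    public string mediaFileName;
}
```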
ManagerMainApp.cs: This is the main controlling class. It pulls together the screen managers, skybox changer, and video manager. The class monitors the state of the application and contains functions that are called by the event classes.
When the application starts, this class puts the app state into UNKNOWN by default, which prevents any 360-degree functionality from executing. The class then runs the intro splash screens and displays the main home screen.
For fade transitions between the canvas screens (excluding the intro screen), the SteamVR_Fade.Start() function is heavily used in the co-routines.
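The pattern looks roughly like the sketch below. SteamVR_Fade.Start() is the plugin’s static call for fading the VR camera to a color over a duration; the coroutine structure, names, and timings here are illustrative.

```csharp
using System.Collections;
using UnityEngine;
// On newer SteamVR plugin versions, SteamVR_Fade lives under Valve.VR.Extras.

// Illustrative transition coroutine; not the actual Bodie source.
public class ScreenTransitionSketch : MonoBehaviour
{
    public IEnumerator TransitionTo(GameObject currentScreen, GameObject nextScreen)
    {
        // Fade the VR camera to black over half a second.
        SteamVR_Fade.Start(Color.black, 0.5f);
        yield return new WaitForSeconds(0.5f);

        // Swap the canvas screens while the view is black.
        currentScreen.SetActive(false);
        nextScreen.SetActive(true);

        // Fade back to the scene.
        SteamVR_Fade.Start(Color.clear, 0.5f);
    }
}
```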
APP_STATES tracks what is happening in the application at any given moment. Monitoring the app state makes it possible to create smarter functions that behave differently in different states. For example, the MoveNext and MovePrevious functions, used by the event system, can navigate between 360-degree videos or photographs. This concept can be extended to support moving between next and previous photosphere images.
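As a sketch, the enumeration and a state-aware function might look like the following; the actual member names in the Bodie source are not listed in this article, so these are stand-ins.

```csharp
using UnityEngine;

// Illustrative stand-ins; the real APP_STATES members may differ.
public enum APP_STATES
{
    UNKNOWN,            // startup default; blocks 360-degree functionality
    HOME_SCREEN,
    VIDEO_TOUR,
    SINGLE_VIDEO,
    PHOTO_SPHERE,
    FLAT_IMAGE_GALLERY
}

public class StateAwareNavigationSketch : MonoBehaviour
{
    public APP_STATES currentState = APP_STATES.UNKNOWN;

    // The same "next" input means different things depending on state.
    public void MoveNext()
    {
        switch (currentState)
        {
            case APP_STATES.VIDEO_TOUR:
                Debug.Log("Advance to the next 360-degree video.");
                break;
            case APP_STATES.FLAT_IMAGE_GALLERY:
                Debug.Log("Advance to the next 2D image.");
                break;
            default:
                break; // ignore the input in all other states
        }
    }
}
```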
The manager area of the code was initially designed to contain functionality to fade the canvas screens in and out.
NOTE: In the initial design, I included functionality in ScreenManagerBase.cs, allowing each screen manager that inherited from the base class to fade in and out.
However, after design and coding were complete, I discovered the SteamVR Unity plugin can fade the VR camera. This eliminates the need to manually fade each canvas screen. Rather than redesign the application, I left the functionality in place, to be revisited.
Currently, the only canvas screen manager that manages its own fading is the IntroScreen. This provides a nice transition between the intro splash images.
Once I changed the fading from each screen controller to the SteamVR object methodology, I realized I could have removed the screen manager classes altogether, because almost all of the control happens in ManagerMainApp.cs.
Due to time constraints, I left these classes in place but would like to re-architect the overall class structure of the application.
This is the SteamVR Unity plugin source code. No modifications have been made to this code.
This application is compatible with Oculus Rift and HTC Vive. Only one controller for each—for right-hand control—has been enabled.
On the HTC Vive controller, four buttons are used (see figure 5).
Figure 5. HTC Vive controller buttons. Source: Unity Documentation for OpenVR Controllers
Four buttons are also used on the Oculus Rift controller:
Figure 6. Oculus controller buttons. Source: Unity Documentation for OpenVR Controllers
All buttons that must interact with the laser are tagged HIT_TARGET and assigned to the HIT_LAYER_MASK layer.
Excluding any normal object initialization by Unity and the SteamVR SDK, the first thing that executes is the ManagerMainApp’s Start() function. This is in the ApplicationManager game object in the hierarchy.
In the Start() method, a few things are set up, and then the opening intro screen sequence runs. The sequence is nothing more than the CanvasIntroScreen game object displaying two different images, one after another, with a transitional fade effect. After the images are displayed, the CanvasIntroScreen is hidden and the CanvasManAppHomeScreen is displayed.
At this point the application becomes idle, awaiting input from the VR controller. When the VRC_EventPublisher detects an event (a button press), a notification is dispatched and the VRC_EventListener takes action, calling back into the ManagerMainApp.cs class for the proper function.
All events start in the VRC_EventPublisher class. In the hierarchy, this class is attached to the “RightHand” game object, to which SteamVR’s “Player” prefab is assigned. The script listens to the hand (see figure 7).
Figure 7. VRC_EventPublisher components.
When the grip button (HTC Vive) or Axis1D.SecondaryHandTrigger (Oculus) is squeezed, the GripButtonClicked event is triggered in the VRC_EventPublisher class. The VRC_EventListener class calls the corresponding event handler, OnGripClicked, which toggles the ShowLaser property of the VRC_LaserPointer class. This simple toggle turns the laser on and off.
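Stripped of the SteamVR input plumbing, the publisher/listener relationship reduces to a plain C# event. The sketch below is illustrative; the real classes poll the controller rather than being called directly.

```csharp
using System;
using UnityEngine;

// Illustrative publisher: raises an event when the grip is squeezed.
public class GripPublisherSketch : MonoBehaviour
{
    public event Action GripButtonClicked;

    // The controller-polling code would call this on a grip squeeze.
    public void RaiseGripClicked() => GripButtonClicked?.Invoke();
}

// Illustrative listener: each grip squeeze toggles the laser pointer.
public class GripListenerSketch : MonoBehaviour
{
    public GripPublisherSketch publisher;
    private bool showLaser;

    private void OnEnable()  => publisher.GripButtonClicked += OnGripClicked;
    private void OnDisable() => publisher.GripButtonClicked -= OnGripClicked;

    private void OnGripClicked() => showLaser = !showLaser;
}
```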
The Oculus Rift “A” button and the HTC Vive “Menu” button return users to the home screen.
Pressing these buttons notifies the VRC_EventListener, which calls the ManagerMainApp’s ReturnToHomeScreen function. ReturnToHomeScreen is an event handler for other functionality in the application and is reused here: it determines which state the app is in, stops any video that might be playing, and starts the co-routine to show the main screen.
VRC_EventListener’s OnTriggerPressed calls the VRC_LaserPointer ClickButton() function. ClickButton() ensures the pointer is hovering over a button, then invokes the button’s onClick method (see figure 8).
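In essence, ClickButton() reduces to a null check followed by an onClick invocation. The sketch below uses an illustrative field name for the hovered button.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch of the laser click; not the actual VRC_LaserPointer source.
public class LaserClickSketch : MonoBehaviour
{
    public Button hoveredButton; // set by the pointer's raycast logic

    public void ClickButton()
    {
        // Only fire if the laser is currently over a button.
        if (hoveredButton != null)
        {
            hoveredButton.onClick.Invoke();
        }
    }
}
```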
Figure 8. Starting the tour.
This button is linked to ManagerMainApp.OnLaserButtonClick, which determines which button was clicked and calls the proper co-routine; in this case, StartVideoTour.
StartVideoTour changes the app state, sets up the VideoManager object, fades the VR camera, changes the skybox so it can display 360-degree video, and plays the video.
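An outline of those steps, reusing the APP_STATES stand-in from earlier and illustrative field names, might look like this:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Video;

// Illustrative outline of the video tour startup; not the Bodie source.
public class VideoTourSketch : MonoBehaviour
{
    public VideoPlayer videoPlayer; // renders into the 360-degree texture
    public Material videoSkybox;    // skybox material that displays the video
    public APP_STATES currentState;

    public IEnumerator StartVideoTour()
    {
        currentState = APP_STATES.VIDEO_TOUR;

        SteamVR_Fade.Start(Color.black, 0.5f); // fade the VR camera out
        yield return new WaitForSeconds(0.5f);

        RenderSettings.skybox = videoSkybox;   // switch to the video skybox
        videoPlayer.Play();

        SteamVR_Fade.Start(Color.clear, 0.5f); // fade back in
    }
}
```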
From this point, the user can use the pads on the controllers to navigate previous and next, or click the Oculus Rift A button/HTC Vive Menu button to return to the home screen.
The 2D image gallery button is wired up the same way as Start Video Tour: everything operates identically up to ManagerMainApp.OnLaserButtonClick. From that point, the StartFlatImageGallery co-routine is called. It sets the application state and fades out the VR camera, GoToFullBlack changes the skybox, the CanvasStandardImageScreen is activated, and the SteamVR camera is faded back in.
At this point, the user can use the same pad buttons to navigate previous and next, and to return to the home screen.
The map buttons are defined in the hierarchy (see figure 9). Each of these buttons is represented by a green ball on the Bodie home screen map.
Figure 9. Map buttons in the hierarchy.
These correspond to the green ball button indicators on the map screen (see figure 10).
Figure 10. Button indicators on the map screen.
Figure 11 shows how the map button image is defined:
Figure 11. Map button image.
As shown in the previous figure, the buttons are assigned a green sphere image. MapButtonClickArgs is added to each button, allowing the application to know which button was clicked and what to display.
The buttons are linked to the ManagerMainApp.OnMapItemClick function. OnMapItemClick determines what type of media to load (a photosphere or a video) and calls the appropriate co-routine. If a button has been configured to play a video, the PlaySingleVideo co-routine is called. This behaves differently from the video tour: it is designed to play only a single video.
PlaySingleVideo changes the application to the single video state, fades to full black, sets up the video play, starts the video, and fades back to visible. When the video has played, it automatically returns the user to the home screen. This is accomplished by assigning the video player’s loopPointReached event to the “SingleVideoDone” function.
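VideoPlayer.loopPointReached is the standard Unity event that fires when playback reaches the end of a non-looping clip. A minimal sketch of the hookup, with the handler body reduced to a placeholder, looks like this:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Illustrative sketch of the auto-return hookup.
public class SingleVideoSketch : MonoBehaviour
{
    public VideoPlayer videoPlayer;

    private void OnEnable()
    {
        videoPlayer.loopPointReached += SingleVideoDone;
    }

    private void OnDisable()
    {
        videoPlayer.loopPointReached -= SingleVideoDone;
    }

    private void SingleVideoDone(VideoPlayer source)
    {
        // Here the application would fade out and show the home screen.
        Debug.Log("Video finished; returning the user to the home screen.");
    }
}
```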
LoadPhotoSphere sets the app state, fades the VR camera, changes the skybox to the photosphere material, and fades the VR camera back in. The user manually returns to the home screen by pressing A on the Oculus Rift or Menu on the HTC Vive.
The skybox is controlled by the SkyboxChanger.cs script. Three materials are used to control what is displayed at any given time: one that renders the 360-degree video texture (Bodie360Texture), one that displays photosphere images, and one that is solid black, used by GoToFullBlack during transitions.
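At its core, changing the skybox is a matter of assigning RenderSettings.skybox. A sketch of the swap, with illustrative field names, follows:

```csharp
using UnityEngine;

// Illustrative sketch of the skybox swap; not the actual SkyboxChanger.cs source.
public class SkyboxChangerSketch : MonoBehaviour
{
    public Material videoMaterial;       // renders the 360-degree video texture
    public Material photoSphereMaterial; // displays a 360-degree still image
    public Material blackMaterial;       // solid black, used during transitions

    public void SetSkybox(Material material)
    {
        RenderSettings.skybox = material;
        // Refresh ambient lighting so it reflects the new sky.
        DynamicGI.UpdateEnvironment();
    }
}
```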
The Insta360 camera’s six lenses produce 6K resolution each, but their total resolution is 8K. We reduced this to 4K to save disk space. For instance, one video, 1:02 minutes in length, had a post-stitch file size of 1.97 GB at the native output. Stepped down to 4K, the same video was 496 MB. Adding H.264 compression reduced the final size to 107.6 MB.
Despite these drastic reductions, the videos remain high quality on today’s VR headsets.
We had the opportunity to compare two processors while creating twenty-nine 360-degree videos:
This suggests that the Intel® Core™ i9 processor is more than four times faster at stitching videos than the Intel® Xeon® E5 processor.
I am not a Unity 3D expert, but I have used it for other projects. Bodie, California taught me a lot along the way, although there are aspects of its architecture I’d like to change. I plan to rework the application to provide a more immersive home screen and to enable navigation from location to location within the immersive environment without returning to the home screen.
I hope this article is educational and provides insight into how to create 360-degree VR experiences in Unity.
Rick Blacker is a developer evangelist based in the United States. He helps people understand how to incorporate Intel® RealSense™ technology into Windows*-based applications and Unity games.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804