Diegetic VR UI Example

VR: The Future and Best Development Practices in Unity

I recently acquired a VIVE and, after a day of oohing and aahing over how cool it was, began creating some simulations for it in Unity. The first of these is on my main site, along with a demo video in case you don’t have a VR headset. You should definitely check it out.

I discovered a couple of things. First, I’m totally sold on VR being the future. I don’t get motion sick (at least while not moving from a fixed point in VR; more on this in a bit), so I’m fine to whip my head around in virtual reality all I want. The experience is really cool, and it tricked my brain into thinking I was somewhere else far more than I expected. The first time I picked up a jetpack I immediately got butterflies in my stomach, because I felt like I was actually flying upwards! Over the next couple of years VR tech will improve drastically, just like all new devices. A few areas that could improve are portability, the resolution of the eyepieces, and performance on lower-end devices. We will also see advancements in how sound is handled: currently you need to provide your own headphones, and it’s a bit of a clunky setup.

VR Development Best Practices

So, what’s actually different about VR development? Beyond the obvious need for different gameplay design, there are some key details that developers might overlook. I refer to Unity in these points, but they can be adapted to other engines, as the concepts are the same.

Performance

Performance matters much more in VR than in typical game development, because a lagging or stuttering display can induce physical discomfort and nausea in some users.

Rendering

Rendering is one of the most common bottlenecks in VR projects, and optimizing it is essential to building a comfortable, enjoyable experience. In Unity, setting the Stereo Rendering Method to Single Pass Instanced or Single Pass in the XR Settings section of Player Settings allows for performance gains on both the CPU and GPU.
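Single Pass Instanced renders both eyes in one pass instead of drawing the scene twice. The same setting can also be applied from an editor script; here’s a minimal sketch (the menu path is my own invention, and Instancing is only available on platforms that support it):

    #if UNITY_EDITOR
    using UnityEditor;

    // Editor-only sketch: equivalent to Player Settings > XR Settings >
    // Stereo Rendering Method. The menu path is an arbitrary choice.
    public static class StereoRenderingSetup
    {
        [MenuItem("Tools/VR/Use Single Pass Stereo")]
        static void UseSinglePass()
        {
            // StereoRenderingPath.Instancing corresponds to Single Pass Instanced;
            // SinglePass is the fallback for platforms without instancing support.
            PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;
        }
    }
    #endif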

Lighting

Every lighting strategy has its pros, cons, and implications. Don’t use full realtime lighting and realtime global illumination in your VR project; the rendering cost is too high. For most projects, favor non-directional lightmaps for static objects and light probes for dynamic objects instead.
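For the dynamic side of that advice, a minimal sketch might look like the following (it assumes the object has a MeshRenderer and that you’ve placed a Light Probe Group in the scene); static scenery would instead be marked static and baked in the Lighting window:

    using UnityEngine;
    using UnityEngine.Rendering;

    // Minimal sketch: have a dynamic object sample baked light probes
    // rather than rely on expensive realtime lights.
    public class DynamicProbeLighting : MonoBehaviour
    {
        void Awake()
        {
            var meshRenderer = GetComponent<MeshRenderer>();
            meshRenderer.lightProbeUsage = LightProbeUsage.BlendProbes;
        }
    }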

Post-Processing

In VR, image effects are expensive because the scene is rendered twice, once for each eye. Many post-process effects require full-screen draws, so reducing the number of post-processing passes helps overall rendering performance; full-frame effects in particular are very expensive and should be used sparingly.

Anti-aliasing, on the other hand, is a must in VR: it smooths the image, reduces jagged edges, and improves the overall look for the user. The performance hit is worth the increase in quality.
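MSAA can be enabled per quality level in Quality Settings, or from code as in this sketch (the 4x sample count is an assumption; 2x or 8x may suit your project better):

    using UnityEngine;

    // Minimal sketch: enable 4x MSAA at startup. Mirrors the Anti Aliasing
    // option in Quality Settings; valid values are 0, 2, 4, and 8.
    public class EnableMSAA : MonoBehaviour
    {
        void Awake()
        {
            QualitySettings.antiAliasing = 4;
        }
    }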

Cameras

  • Orientation and position (for platforms supporting 6 degrees of freedom) should always respond to the user’s motion, no matter which camera viewpoint is used.
  • Actions that affect camera movement without user input can lead to simulation sickness. Avoid camera effects like the “walking bob” common in first-person shooters, camera zooms, camera shake events, and cinematic cameras. Raw input from the user should always be respected.
  • Unity obtains the stereo projection matrices from the VR SDKs directly. Overriding the field of view manually is not allowed.
  • Depth-of-field and motion-blur post-process effects interfere with a user’s sight and often lead to simulation sickness. They attempt to replicate what your eyes do naturally, and doing so artificially in a VR environment is disorienting.
  • Moving or rotating the horizon line or other large components of the environment can affect the user’s sense of stability and should be avoided.
  • Set the near clip plane of the first-person camera(s) to the smallest value that still renders objects correctly, and test how it feels to bring an object right up to your face in VR. Set the far clip plane to a value that optimizes frustum culling (see the sketch after this list).
  • When using a Canvas, favor World Space render mode over the Screen Space render modes, as it is very difficult for a user to focus on Screen Space UI.
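Here’s a minimal sketch of the clip-plane advice (the 0.05 and 300 values are assumptions to tune per project):

    using UnityEngine;

    // Minimal sketch: set a VR camera's clip planes at startup.
    // The exact values are assumptions and should be tuned per project.
    [RequireComponent(typeof(Camera))]
    public class VRCameraSetup : MonoBehaviour
    {
        [SerializeField] float nearClip = 0.05f; // small enough to render near objects correctly
        [SerializeField] float farClip = 300f;   // small enough to benefit frustum culling

        void Awake()
        {
            var cam = GetComponent<Camera>();
            cam.nearClipPlane = nearClip;
            cam.farClipPlane = farClip;
        }
    }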

UI

More on that last bullet point above.

Something very interesting about VR is the need for a diegetic UI: a user interface that exists within the universe (in this case, a game) we are experiencing. A non-diegetic UI, by contrast, would be your health bar floating at the bottom left of the screen in a normal computer game.

Now here’s the problem: in VR, your eyes can’t focus on something that close. Pinning UI to the screen near the viewer’s face works really well for normal games, where you can focus on a specific part of the screen. VR goggles, however, work by projecting a separate image to each eye, and your brain combines the two to achieve depth perception. Something fixed that close to the camera makes the user’s eyes attempt to focus on it, the viewer goes cross-eyed, and the immersion is broken. The solution? Use diegetic UI elements, meaning attach the UI to objects IN the game world. This looks really cool, keeps immersion intact, and avoids looking terrible.
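As a rough sketch of how this might be wired up in Unity (the gun transform, label, and sizing values here are illustrative assumptions, not the setup from my demo):

    using UnityEngine;
    using UnityEngine.UI;

    // Rough sketch: parent a World Space canvas with a "time left" label
    // to an in-game object such as a gun.
    [RequireComponent(typeof(Canvas))]
    public class DiegeticTimerUI : MonoBehaviour
    {
        [SerializeField] Transform gunTransform; // the in-world object the UI rides on
        [SerializeField] Text timeLeftLabel;     // a Text element inside this canvas

        void Start()
        {
            var canvas = GetComponent<Canvas>();
            canvas.renderMode = RenderMode.WorldSpace; // live in the world, not on the screen

            // Attach the canvas to the gun and give it a sensible physical size.
            transform.SetParent(gunTransform, worldPositionStays: false);
            transform.localPosition = new Vector3(0f, 0.1f, 0f);
            transform.localScale = Vector3.one * 0.001f; // canvas units to meters
        }

        public void SetTimeLeft(float seconds)
        {
            timeLeftLabel.text = Mathf.CeilToInt(seconds).ToString();
        }
    }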

Notice the time left is stuck to the gun, so the user can glance at the UI themselves rather than having it fixed to the screen.

This type of UI isn’t limited to VR either; it just works really well there. We’ve seen examples of this kind of user interface all over.


VR will hit the mainstream within 20 years, and we will see long-term usage within 50.



Published 2018-11-08 15:53:28
