Depthkit volumetric video support for Apple Vision Pro in Unity

Hi all,

As I’m sure most of you have seen, Apple has finally unveiled their XR device, the Apple Vision Pro.

The question is how Depthkit captures will be supported on Vision Pro. There are some considerations here that we are beginning to look at. I’m opening up this thread as a community discussion point. Anyone should feel free to jump in to discuss, ask questions, or share their point of view on the Apple Vision Pro (h/t @Andrew @NikitaShokhov :wink:).

Our intention is to support Depthkit playback on Apple Vision Pro through Unity by or before the headset launches next year, but there is clearly some complexity in how the platform works that we need to work through.

Here’s what we see so far:

  1. Unity 2022 is required. Depthkit currently officially supports 2020.3, so we will look to upgrade to 2022 for an upcoming phase of Depthkit’s Unity plug-in releases.
  2. Vision Pro has two rendering modes in Unity: Immersive and Fully Immersive, which have different shader restrictions.

From a first impression of the developer videos and documentation, the Immersive mode will be a challenging fit for Depthkit because it does not support “hand-written shaders” - which I interpret to mean that nothing outside of the standard Shader Graph nodes will work. While Depthkit uses Shader Graph, we have custom graphs that may not work.

On the other hand, Fully Immersive mode seems more promising, as it more closely mirrors how Unity has done XR rendering on other platforms. It appears to support the full shader system in URP (and in the Built-in RP, which Unity says is supported but will not see updates).

We have applied to Unity’s Beta program and will begin testing once we have access to device simulators. Look out for updates here.

If anyone has any questions or perspectives to share, jump in!


@James and friends,

Thank you so much for taking the time to show us the current playing field regarding visionOS! This is such an exciting moment, and I’m overwhelmed with information (in a good way) from all of the WWDC lessons they have released. Below are a couple of thoughts and fun questions to keep our conversation going!

  • Fun question to start for James and the group here. We’ve all been here before with the launch of new headsets and technology. What feels different this time to you? What are you most excited about that is different from before? For me, as I mentioned above, having 40+ lessons and videos detailing all aspects of visionOS is really blowing me away. I’ve never gone into WWDC developer documentation before - is it always like this with product launches? Regardless, the amount of care that’s going into all these new and existing frameworks (RealityKit, ARKit, Reality Composer Pro, Unity, WebXR, SwiftUI, etc.) just gives me an overwhelming feeling that we are in the right place once again, and all the work and passion we’ve been pouring into volumetrics is finally being validated a bit :slight_smile: Also, it’s quite obvious that Apple really believes in this, just from the sheer number of tools they are providing so that the community can take advantage of it all.

  • WebXR: The first lesson in the WWDC sessions that piqued my interest was 3D immersive content through Safari and the web. In this lesson, they mentioned WebXR being based on WebGL and the ability to use many libraries such as three.js. I’ve done a pretty big interactive WebGL Depthkit project in the past, and imagining that this now has the ability to be actually immersive outside a 2D web browser screen brings me such joy! Also, when I hear three.js, I specifically remember many fun projects in the early Depthkit days, especially the project with four different clips of one guy playing different instruments and being able to rotate around all 4 of them in the web browser. Anyway, hooray for WebXR! Any thoughts on this James? Anyone? (I’ve dropped a rough three.js/WebXR sketch at the end of this post.)

  • WebXR: Because content living in Safari is not app-based, do you see this as a more immediate way to get existing workflows up and viewable on the device right away? For example, previously exported Depthkit assets and projects may already be ready to be viewed in the browser, as opposed to going through a new Unity 2022-based approach, etc. I am definitely not an expert in WebGL, WebXR, and three.js, so I could be way off here, but I’m definitely excited by the thought of being able to show off some of our content on the web, or having some sort of viewer that is easily accessible through web browsers.

  • WebXR: Was the early-days Vimeo/Depthkit integration based on three.js? Is this all worth revisiting now?

  • I’m off to begin studying the Immersive and Fully Immersive modes now. Thanks for these explanations, James. Is Shader Graph what all the Zero Days looks and the Unity plug-in are built on?

  • That’s it for now! Keep the conversations going all!
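For anyone who wants to poke at the WebXR idea above, here is roughly what the entry point for an immersive three.js scene looks like today. This is just a plain three.js sketch, not Depthkit’s player: the video path is a placeholder, and a real Depthkit clip would need the actual reconstruction shader rather than a flat video plane.

```ts
// Minimal three.js WebXR scene (TypeScript sketch).
// Assumes a bundler or import map that resolves the 'three' package.
import * as THREE from 'three';
import { VRButton } from 'three/addons/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.01, 50
);
camera.position.set(0, 1.6, 2); // roughly standing eye height

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // opt the renderer into WebXR sessions
document.body.appendChild(renderer.domElement);
// Adds an "Enter VR" button, shown only when an immersive session is available.
document.body.appendChild(VRButton.createButton(renderer));

// Stand-in for volumetric content: a video texture on a plane.
// A real Depthkit export would instead drive a reconstruction material.
const video = document.createElement('video');
video.src = 'capture-placeholder.mp4'; // hypothetical path
video.loop = true;
video.muted = true;
void video.play();

const quad = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshBasicMaterial({ map: new THREE.VideoTexture(video) })
);
quad.position.set(0, 1.5, -1.5);
scene.add(quad);

// setAnimationLoop (not requestAnimationFrame) lets the XR session
// drive the render loop while immersive.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

The key bits are `renderer.xr.enabled = true` and driving the loop with `setAnimationLoop`, which is what lets the same scene run flat in the page or fully immersive on a headset that exposes WebXR.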


I also talked with another Apple engineer, this one from a department related to games. He claims that it is actually possible to use Unity’s Shader Graph for Immersive mode on Vision Pro and avoid Reality Composer. So it seems like the knowledge of different engineers there doesn’t line up yet. We will see how it works in practice in July when their pilot program launches.

He also says that it is possible to mix lighting from real-world light sources with light sources in a virtual Unity scene. This works with PBR materials in Lit mode; Unlit materials will not be affected by real-world lighting conditions.

He also says that foveated rendering, an amazing feature, will work automatically in the Immersive mixed-reality mode and cannot be turned off there. It is only possible to disable it in the Fully Immersive VR mode.

That’s because all rendering for Immersive mode is done by RealityKit, not Unity; Unity’s own rendering is used in VR mode. He also says that it will be possible to switch between Immersive and Fully Immersive modes at runtime. I’m not sure how smoothly that switch will perform, though, since it seems to require handing rendering over from RealityKit to Unity.

Lastly, he says that Unity is developing a PolySpatial feature specifically for the Vision Pro platform, and it will be available in beta soon. After an initial deployment to the device through Xcode, it will let us play a Unity scene on the connected headset right away, without having to deploy an app through Xcode each time we want to test something in the scene, as we annoyingly have to do when developing for iOS. Obviously, the changes we make in the Unity scene will not affect the app installed on the device, so we will still have to deploy new builds quite frequently.
