The question is how Depthkit captures will be supported on Vision Pro. There are some considerations here that we are beginning to look at. I'm opening up this thread as a community discussion point. Anyone should feel free to jump in to discuss, ask questions, or share their point of view on the Apple Vision Pro (h/t @Andrew @NikitaShokhov).
Our intention is to support Depthkit playback on Apple Vision Pro through Unity by the time the headset launches next year, or sooner. But there is clearly some complexity in how the platform works that we need to work through.
Here’s what we see so far:
Unity 2022 is required. Depthkit currently officially supports 2020.3, so we will look to upgrade to 2022 for an upcoming phase of Depthkit's Unity Plug-in release.
Vision Pro has two rendering modes in Unity: Immersive and Fully Immersive, which have different shader restrictions.
From a first impression of the developer videos & documentation, the Immersive mode will be a challenging fit for Depthkit because it does not support "hand written shaders" - which I interpret to mean that nothing outside of the standard Shader Graph patches will work. While Depthkit uses Shader Graph, we have custom graphs that may not work.
On the other hand, Fully Immersive mode seems more promising, as it more closely mirrors how Unity has done XR rendering on other platforms. It appears to support the full shader system in URP (and the Built-in RP, which Unity says is supported but will not see updates).
We have applied to Unity’s Beta program and will begin testing once we have access to device simulators. Look out for updates here.
If anyone has any questions or perspectives to share, jump in!
Thank you so much for taking the time to show us the current playing field regarding visionOS! This is such an exciting moment, and I'm overwhelmed with information (in a good way) from all of the WWDC lessons they have released. A couple of thoughts and fun questions below to keep our conversations going!
Fun question to start for James and the group here. We've all been here before with the launch of new headsets and technology. What feels different this time to you? What are you most excited about that is different from before? For me, as I mentioned above, having 40+ lessons and videos detailing all aspects of visionOS is really blowing me away. I've never gone into WWDC developer documentation before - is it always like this with product launches? Regardless, the amount of care that's going into all these new and existing frameworks (RealityKit, ARKit, Reality Composer Pro, Unity, WebXR, SwiftUI, etc.) gives me an overwhelming feeling that we are in the right place once again, and that all the work and passion we've been pouring into Volumetrics is finally being validated a bit. Also, it's quite obvious that Apple really believes in this, just from the sheer amount of tools they are providing so the community can take advantage of it all.
WebXR: The first lesson in the WWDC sessions that piqued my interest was 3D immersive content through Safari and the web. In this lesson, they mentioned WebXR being based on WebGL and the ability to use many libraries such as three.js. I've done a pretty big interactive WebGL Depthkit project in the past, and imagining that this can now be actually immersive outside a 2D web browser screen brings me such joy! Also, when I hear three.js, I specifically remember many fun projects from the early Depthkit days, especially the one with four different clips of one guy playing different instruments, where you could rotate around all four of them in the web browser. Anyway, hooray for WebXR! Any thoughts on this, James? Anyone?
WebXR: Because content living in Safari is not app-based, do you see this as a more immediate way to get existing workflows onto the device for viewing? For example, previously exported Depthkit assets and projects may already be ready to view in the browser, as opposed to going through a new Unity 2022-based approach. I am definitely not an expert in WebGL, WebXR, or three.js, so I could be way off here, but I'm definitely excited by the thought of showing off some of our content on the web, or having some sort of viewer that is easily accessible through web browsers.
WebXR: Was the early-days Vimeo/Depthkit integration based on three.js? Is this all worth revisiting now?
I'm off to begin studying the Immersive and Fully Immersive modes now. Thanks for these explanations, James. Is Shader Graph what all the Zero Days looks and the Unity plugin are built on?
That’s it for now! Keep the conversations going all!
I also talked with another Apple engineer, from a department related to games. He claims that it is actually possible to use Unity's Shader Graph for Immersive mode on Vision Pro and avoid Reality Composer. So it seems the knowledge of different engineers there isn't consistent yet. We will see how it actually works in July when their pilot program launches.
He also says that it is possible to mix lighting from real-world light sources with light sources in the virtual Unity scene. This works with PBR materials in Lit mode; Unlit materials will not be affected by real-world conditions.
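If anyone wants to sanity-check that once we can build for the device, a simple comparison scene should do it. Here's a minimal sketch, assuming URP; these are the standard URP shader names, nothing Vision Pro-specific:

```csharp
using UnityEngine;

// Minimal sketch, assuming URP: a Lit and an Unlit sphere side by side.
// Per the above, the Lit (PBR) material should respond to real-world lighting
// in Immersive mode, while the Unlit one should look the same regardless.
public class LitVsUnlitTest : MonoBehaviour
{
    void Start()
    {
        var lit = new Material(Shader.Find("Universal Render Pipeline/Lit"));
        var unlit = new Material(Shader.Find("Universal Render Pipeline/Unlit"));

        CreateSphere(new Vector3(-0.25f, 1.0f, 1.0f), lit);   // should pick up environment lighting
        CreateSphere(new Vector3(0.25f, 1.0f, 1.0f), unlit);  // constant shading, ignores lighting
    }

    static void CreateSphere(Vector3 position, Material material)
    {
        var sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.position = position;
        sphere.transform.localScale = Vector3.one * 0.2f;
        sphere.GetComponent<Renderer>().material = material;
    }
}
```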
An amazing feature: foveated rendering works automatically in Immersive (mixed reality) mode and cannot be avoided, he says; it is only possible to turn it off in Fully Immersive (VR) mode.
That's because all rendering for Immersive mode is done by RealityKit, not Unity; Unity rendering applies in VR mode. He also says that it will be possible to switch between Immersive and Fully Immersive modes at runtime. I'm not sure how smoothly that switch will perform, though, since it seems to require handing rendering off from RealityKit to Unity.
Lastly, he says that Unity is developing a PolySpatial feature specifically for the Vision Pro platform; it will be available in beta soon. After an initial deployment to the device through Xcode, it will let you play a Unity scene on the connected headset right away, without having to deploy a new build through Xcode each time you want to test something in the scene - as we annoyingly have to do when developing for iOS. Obviously, the changes we make in the Unity scene will not persist in the app on the device, so we will still have to deploy new builds quite frequently.
Hi all-
It looks like it's been a while since Apple Vision Pro support was discussed in the forum, so I'm hoping to re-ignite this. My attempts to play a Depthkit clip on visionOS haven't worked out, most likely due to the lack of support for Unity Video Player on visionOS. The error I get in Xcode:
Operation GetVideoTrack not currently supported in visionOS.
There is a PolySpatialVideoComponent available to play back videos, but it lacks the functionality that Unity Video Player provides in Depthkit.UnityVideoPlayer.cs, so I think we are waiting on Unity Video Player to be available on visionOS to move forward. I also don't see compatibility for Apple Vision Pro through RenderHeads AVPro yet either. Are you seeing the same results?
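For reference, here's roughly the minimal VideoPlayer setup that hits that error for me on visionOS - just a sketch, with a placeholder clip path, and a render mode chosen for the repro rather than necessarily matching what Depthkit does internally:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal repro sketch: a bare-bones Unity VideoPlayer setup.
// On a visionOS build, this is where I see
// "Operation GetVideoTrack not currently supported in visionOS."
public class VideoPlayerRepro : MonoBehaviour
{
    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.source = VideoSource.Url;
        // Placeholder path - point this at any test clip in StreamingAssets.
        player.url = System.IO.Path.Combine(Application.streamingAssetsPath, "test_clip.mp4");
        player.renderMode = VideoRenderMode.APIOnly; // sample the decoded texture from script
        player.prepareCompleted += vp => vp.Play();
        player.errorReceived += (vp, message) => Debug.LogError(message);
        player.Prepare();
    }
}
```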
Thanks for looking into this and posting your findings! We have been waiting on production hardware to get started supporting Apple Vision Pro. Looks like you've found some initial challenges.
This video player compatibility does in fact look like a blocker for initial out-of-the-box Depthkit support with the default Unity Video Player.
We'll do some more research on our side to see if there is a way to get the PolySpatialVideoComponent working for Depthkit purposes, or if we can find another way to swap out the dependency.
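To illustrate what "swapping out the dependency" could look like, here is a very rough sketch. None of these interface or class names are real Depthkit API - it's just the shape of the idea: hide the video backend behind a small interface so the built-in Unity Video Player could later be replaced by a visionOS-capable player without touching the reconstruction side.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch only - these names are not part of the Depthkit plugin.
public interface IClipVideoSource
{
    Texture CurrentFrame { get; } // latest decoded frame (color + depth atlas)
    double CurrentTime { get; }   // playback position in seconds
    void Play();
    void Pause();
}

// Example backend wrapping the built-in VideoPlayer (works on most platforms today).
// A visionOS backend (PolySpatial- or AVPro-based) could implement the same interface.
public class UnityVideoSource : MonoBehaviour, IClipVideoSource
{
    [SerializeField] private VideoPlayer player;

    public Texture CurrentFrame => player.texture;
    public double CurrentTime => player.time;
    public void Play() => player.Play();
    public void Pause() => player.Pause();
}
```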
Hi James-
You're right, using the 1.0 visionOS package works great. I was able to get a Depthkit clip to play in the visionOS simulator in a Fully Immersive scene. There are some shader warnings, but nothing major. Looking forward to testing on actual hardware tomorrow.
We just received our device here, and the good news is we are seeing performant playback through WebXR! However, 8th Wall needs to update the way they handle Apple Vision Pro, because it opens the Persona camera! Lol
Embarrassingly, we don't have a recent M2 Mac here yet to test Unity. @ArvinTehrani, curious to hear once you give it a spin!
I was able to successfully play a Depthkit clip on the Apple Vision Pro. I did have to dial back the density to 50 to get a stable experience, and some more optimizations could go a long way. I've included screenshots of the settings I used in case anything jumps out at you to try adjusting. Here is a link to a video captured from the Apple Vision Pro: https://f.io/NZFq-vTL
One thing potentially contributing to the performance issues is that those template clips are exported for Windows desktop VR - they are 4096x4096 H.264.
iOS (and presumably Apple Vision Pro) performs best with 2048x2048 H.265. We'll be creating some tests on our side to balance performance and quality across the media encoding and reconstruction, and we'll share updated recommendations and templates.
Surprisingly, the resolution seems to have little effect on performance. I thought the resolution was the issue as well, so I tried lower than 2k x 2k and still had unacceptable performance. Funny note: when the frame rate drops too low while debugging, you get a message about the content being headlocked.
The only thing that made a significant difference was the volume settings, specifically the density. The clip can play back at 4K, but if the surface buffer grows larger than around 20k, I see performance issues and a significant frame rate drop.
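In case it helps frame that budget, here's a hypothetical sketch of what throttling density at runtime could look like. The density field is a placeholder, not the real Depthkit API - it would need to be wired to whatever exposes the volume density on the clip:

```csharp
using UnityEngine;

// Hypothetical sketch - the density field is a placeholder, not real Depthkit API.
// Idea: nudge reconstruction density down when the frame time goes over budget,
// so the surface buffer stays below the ~20k level that seems stable on device.
public class AdaptiveDensity : MonoBehaviour
{
    [SerializeField] private float targetFrameTime = 1f / 90f; // assumed frame budget
    [SerializeField] private float minDensity = 30f;
    [SerializeField] private float maxDensity = 50f;

    public float volumeDensity = 50f; // placeholder for the Depthkit volume density setting

    void Update()
    {
        // Over budget: step down quickly. Under budget: creep back up slowly.
        float step = (Time.unscaledDeltaTime > targetFrameTime) ? -1f : 0.25f;
        volumeDensity = Mathf.Clamp(volumeDensity + step, minDensity, maxDensity);
    }
}
```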
Can you confirm which plugin version we should be using for Unity 2022? Is the Phase 9 plugin working OK despite officially targeting only 2020.3? Thank you!
Hi @WhittSellers - we are about to release a new Unity plugin (Phase 10) with official support, but in the meantime the best starting point is the Unity URP example template you can download under Depthkit :: Downloads.