Hi, we are building a mixed reality experience in Unity for which we have volumetric recordings of two different characters. What would be the best settings, and is it possible to play back two exports simultaneously using the combined pixel workflow?
The plan is to have both characters present at the same time, but at the moment playback in the headset is seriously slow. A single recording seems to play back quite smoothly.
We have exported the captures from Depthkit using the recommended settings for the Meta Quest and the combined pixel workflow.
The captures are long - 2-3 minutes each - although I am not sure if that makes a difference?
Thanks for the reply.
We are using the recommended Unity version.
We are not currently using the built-in render pipeline; we will try that.
We are using Depthkit Studio (have been working with Terence Quinn).
We are using the recommended settings for the Quest. I am not at the computer currently, but from memory that is H.265 at around 4K x 4K resolution - I will check. I am certain about the codec, less certain about the resolution.
Regarding Depthkit Looks and Timeline integration, I will need to get back to you on those last two points tomorrow.
Thanks @BenTurnbull - I’ll see if I can reproduce the issue on my side. Please provide those remaining details, along with any other information about your Unity project that might be relevant to the issue, when you’re able.
My bad - we are using Depthkit Studio, not Core as previously advised.
Re Depthkit Looks: we have the built-in Depthkit Studio Photo Look material applied.
Switching to the Built-in Render Pipeline means we can have both Depthkit clips running simultaneously, so thank you for that.
The issue we’re having now is that it runs fine in Play Mode in the Editor, but when we build to the Meta Quest 3 it stops working - playback lags and gets very glitchy.
@BenTurnbull - Glad to hear you have both assets playing back simultaneously. It sounds like the hardware is struggling to render the multiple clips as they are currently configured. You can confirm this by adding a debugger / FPS counter to your project and turning things on and off to see where the bottlenecks are - we often use Graphy.
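If you’d rather not pull in a third-party asset for a quick test, a bare-bones logger along these lines also works (a minimal sketch - `FpsLogger` and its logging interval are just illustrative, not part of Graphy or the Depthkit plugin):

```csharp
using UnityEngine;

// Minimal FPS logger for profiling directly on the Quest.
// IMGUI overlays don't render in stereo, so this writes to the
// console instead - readable via 'adb logcat -s Unity' while you
// toggle each Depthkit clip on and off to isolate the bottleneck.
public class FpsLogger : MonoBehaviour
{
    const float Interval = 1f; // seconds between log entries
    int frames;
    float elapsed;

    void Update()
    {
        frames++;
        elapsed += Time.unscaledDeltaTime;
        if (elapsed >= Interval)
        {
            Debug.Log($"FPS: {frames / elapsed:0.0}");
            frames = 0;
            elapsed = 0f;
        }
    }
}
```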
I assume you are building an APK to run directly on the headset, and not using Link / AirLink - can you confirm? The standard Depthkit Studio renderer is technically compatible with on-device rendering, but it works much better on desktop-class graphics hardware, such as when using the Quest in Link / AirLink mode. To address performance issues caused by the hardware constraints of rendering on the Quest itself, we developed the Depthkit Studio Lite renderer, which sacrifices some visual fidelity for dramatically increased performance. Both the Studio and Studio Lite renderers can be used in the same project, so you can even mix and match per object based on your aesthetic and performance needs.
For any objects which keep the Depthkit Studio renderer, lower the Volume Density to a value of 60 or less - This setting requires exponentially more computing power as it increases, so it’s best to keep it low when rendering on mobile hardware.
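If it helps to compare density values on device without rebuilding between tweaks, a runtime toggle along these lines is one option (a hypothetical sketch - `DensityTester`, its preset fields, and the `ApplyDensity` stub are all my own stand-ins; wire the stub up to however the Depthkit component exposes Volume Density in your project):

```csharp
using UnityEngine;

// Hypothetical A/B tester: flips between a low and a high Volume
// Density preset at runtime so you can compare frame rates on device.
public class DensityTester : MonoBehaviour
{
    [SerializeField] float lowDensity = 50f;
    [SerializeField] float highDensity = 120f;
    bool useLow = true;

    void Update()
    {
        // Any input mapped to "Fire1" (e.g. a controller trigger)
        // toggles the preset.
        if (Input.GetButtonDown("Fire1"))
        {
            useLow = !useLow;
            ApplyDensity(useLow ? lowDensity : highDensity);
        }
    }

    void ApplyDensity(float density)
    {
        // Stand-in: replace with the actual Depthkit setting in your
        // project. Logging makes each toggle visible in adb logcat.
        Debug.Log($"Volume density preset: {density}");
    }
}
```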
Thanks Cory. We will try the Lite renderer - a bit worried about the trade-off in quality, but let’s see.
Because our clips are long, they are reasonably large at 300-500 MB - would that be an issue?
OK, so we have had success by lowering the Volume Density to 50 and exporting the captures from Depthkit Studio at a lower bitrate (10 Mbps).
This is playing back smoothly, but there is some loss in visual quality. If you can recommend any other settings to tweak, that would be much appreciated!