We have been working with the new livestreaming setup over the past month; everything is up and running and streaming to a Quest Pro headset. However, we haven’t been able to achieve the same visual quality as in your video, Depthkit Livestreaming Tutorial - Part 3: Peer-to-Peer Livestreaming with WebRTC - YouTube (the clip at the beginning with both of you, and also the segment towards the end where Cory steps into the volume during the walkthrough).
In our setup, the system is picking up walls, the floor, etc., but we noticed that your videos have none of these issues. Any advice on how to achieve this on our end? We are aware of the limitations of the Lite Renderer and would also love to hear about anything in the works that might help us out.
I’ll chime in with a little more detail from a product roadmap perspective on how we intend to address this in the future.
We are tackling this issue in two ways:
Livestreaming Bounding Box in Depthkit
The new Depthkit 0.7.0 Pre-release introduces a bounding box in the Editor context, which lets you remove floor geometry at that stage, before export. The next step is to bring the bounding box into the Studio Calibration & Streaming context, which will enable floor removal for livestreams right within Depthkit.
Full-Fusion Renderer Performance on Mobile XR
We are also doing R&D to improve the performance of our full-fusion renderer on mobile XR devices, specifically Quest 2 / Quest Pro, to reduce the need for Studio Lite. The full-fusion renderer in Unity supports the bounding-box-based floor removal.
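For anyone curious what the bounding-box floor removal amounts to conceptually: points outside a user-defined axis-aligned box (for example, everything at or below the floor plane) are discarded before rendering. The sketch below is purely illustrative; the function and parameter names are hypothetical and are not the Depthkit or Unity API.

```python
# Illustrative sketch only -- not the Depthkit implementation.
# An axis-aligned bounding box keeps the performer and rejects
# geometry outside it (walls, floor, etc.).

def inside_box(point, box_min, box_max):
    """Return True if a 3D point lies within the axis-aligned box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def clip_to_box(points, box_min, box_max):
    """Keep only the points that fall inside the bounding box."""
    return [p for p in points if inside_box(p, box_min, box_max)]

# Example: a box whose bottom sits 2 cm above the floor (y >= 0.02)
# drops floor points while keeping the subject.
performer = (0.0, 1.2, 0.5)   # a point on the subject
floor_pt = (0.3, 0.0, 0.4)    # a point on the floor plane
kept = clip_to_box(
    [performer, floor_pt],
    box_min=(-1.0, 0.02, -1.0),
    box_max=(1.0, 2.2, 1.0),
)
# kept contains only the performer point
```

In practice the same test runs per-vertex (or per-fragment) on the GPU, but the logic is this simple box containment check.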
Let us know if the TouchDesigner and/or material solutions are workable for you in the meantime. Thanks for your patience!