WebRTC Quality Settings

@INungChuang I see that in the OBS instance running on the Unity computer, you have set the resolution of the received stream to 3840x2160, but can you confirm that the same instance of OBS also has its base/canvas resolution set to 3840x2160 (Settings > Video > Base (Canvas) Resolution)? If the base/canvas resolution is lower than the output resolution, OBS will scale the stream down.
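As a quick sanity check on the Unity side, a small script like the sketch below can confirm what resolution is actually arriving. This is just a hypothetical helper (the `sourceTexture` field and the expected dimensions are placeholders); assign whichever texture your Spout receiver writes into.

```csharp
using UnityEngine;

// Hypothetical sanity check: warns once if the texture arriving from Spout is
// smaller than the resolution configured in OBS, which would indicate the
// stream is being scaled down somewhere upstream.
public class StreamResolutionCheck : MonoBehaviour
{
    public Texture sourceTexture;     // assign your Spout receiver's target texture
    public int expectedWidth = 3840;  // expected OBS base/canvas + output width
    public int expectedHeight = 2160; // expected OBS base/canvas + output height

    bool warned;

    void Update()
    {
        if (sourceTexture == null || warned) return;

        if (sourceTexture.width < expectedWidth || sourceTexture.height < expectedHeight)
        {
            warned = true;
            Debug.LogWarning(
                $"Incoming stream is {sourceTexture.width}x{sourceTexture.height}, " +
                $"expected {expectedWidth}x{expectedHeight}; check the OBS " +
                "Base (Canvas) and Output (Scaled) resolutions.");
        }
    }
}
```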

I also noticed in your combined-per-pixel stream that one of your sensors appears to be cutting off your mannequin at the far plane; this is indicated by the mannequin's leg and neck missing from the depth map (see below). Adjust that sensor's near and far planes so the subject falls within the yellow-green-blue range of the depth map, like the other sensors.
[Image: depth map showing the mannequin's leg and neck clipped by the far plane]
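For context on why the limbs drop out: a per-pixel depth map can only encode depth that falls between a sensor's near and far planes, normalized into that range. The snippet below is an illustrative sketch of that mapping (not Depthkit's actual code); anything outside [near, far] simply has no representable value, which is why the leg and neck vanish.

```csharp
// Illustrative only - not Depthkit's actual encoding. Shows why geometry
// beyond a sensor's far plane disappears from the depth map: depth is
// normalized into [0, 1] between the near and far planes, and samples
// outside that range cannot be encoded.
public static class DepthRangeExample
{
    // Returns the normalized depth, or -1 if the sample cannot be represented.
    public static float NormalizeDepth(float depthMeters, float near, float far)
    {
        float t = (depthMeters - near) / (far - near);
        if (t < 0f || t > 1f)
            return -1f; // outside the near/far range: this pixel is clipped

        return t; // mid-range values land in the yellow-green-blue band
    }
}
```

As a general rule for normalized depth encodings, keep the near-to-far span as tight as the subject allows: a narrower range spends the depth map's precision over less distance.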

Finally, how does the quality of the reconstructed livestream asset compare to an asset reconstructed from a recording? If you haven't already, establishing a recording as a baseline isolates any quality issues introduced by the WebRTC pipeline:

1. Record a capture in Depthkit with the same configuration as your livestream, and export it with the same near- and far-plane positions and resolution constraints (3840x2152) as the livestream.
2. In Unity, switch the Depthkit object's video player from “Livestream Player (Spout)” to “Video Player (Unity)” and load your recording.
3. Tune the reconstruction settings of your Depthkit asset against the recording until it looks its best.
4. Switch the player back to “Livestream Player (Spout)” to see how the livestream version compares.

(Note: you need to update the metadata with the appropriate file each time you switch player types.)
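If you find yourself flipping between the two player types a lot while tuning, one option (purely a hypothetical convenience, not part of the Depthkit plugin) is to keep two copies of the Depthkit object in the scene, one per player type, and toggle which one is active. The metadata note above still applies: each object needs the metadata file that matches its own source.

```csharp
using UnityEngine;

// Hypothetical A/B comparison helper. Assumes two copies of the Depthkit
// object in the scene: one configured with the Livestream Player (Spout),
// the other with the Video Player (Unity) and your recording loaded.
// Press Tab in Play mode to flip between them.
public class PlayerABSwitch : MonoBehaviour
{
    public GameObject livestreamObject; // Depthkit object using Livestream Player (Spout)
    public GameObject recordingObject;  // Depthkit object using Video Player (Unity)

    bool showLivestream = true;

    void Start() => Apply();

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Tab))
        {
            showLivestream = !showLivestream;
            Apply();
        }
    }

    void Apply()
    {
        livestreamObject.SetActive(showLivestream);
        recordingObject.SetActive(!showLivestream);
    }
}
```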

Once you have confirmed the base/canvas resolution in both instances of OBS, adjusted the near and far planes, and run the recording-vs-livestream comparison, let us know how the two compare.

I have answered your question about lighting a Depthkit asset in Unity in a separate thread.