Hi, as we mentioned in our email, we've succeeded in using 6 cameras at 3840x2160 in VDO.Ninja with Microsoft Edge. However, the quality in the Unity Editor is no better than with 6 cameras at 1920x1080. Our settings are below; could you please tell us how to adjust them to increase the quality?
In Depthkit, we set 3840x2152, because the preferences UI will not let me set the height to 2160.
In OBS on the Depthkit PC, we also set 3840x2160.
In VDO.Ninja on the Depthkit PC, we use Microsoft Edge with "width=3840&height=2160&fps=30&ovb=40000&ad=0&vd=OBS&autostart=1".
In the OBS Browser source properties on the Unity PC, we set 3840x2160 and append &vb=10000 to the URL.
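For reference, here is how we believe the full links are assembled from the parameters above. STREAMID is a placeholder, since we haven't shared our actual push/view IDs:

```
Sender (opened in Microsoft Edge on the Depthkit PC; STREAMID is a placeholder):
https://vdo.ninja/?push=STREAMID&width=3840&height=2160&fps=30&ovb=40000&ad=0&vd=OBS&autostart=1

Receiver (URL in the OBS Browser source on the Unity PC):
https://vdo.ninja/?view=STREAMID&vb=10000
```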
The display screen is as shown below.
Is there any setting we can adjust to increase the quality in the Unity Editor?
One more question: how can I make the Depthkit object be affected by lighting in Unity?
Thanks very much.
@INungChuang I see that in the OBS instance running on the Unity computer, you have set the resolution of the received stream to 3840x2160, but can you confirm that that instance of OBS also has its base/canvas resolution set to 3840x2160 (Settings > Video > Base/Canvas Resolution)? If the base/canvas resolution is lower, the stream will be scaled down.
I also noticed in your combined-per-pixel stream that one of your sensors appears to be cutting off your mannequin with the far plane; this is indicated by the mannequin’s leg and neck being missing in the depth map (see below). Adjust the near and far planes of that sensor so the subject falls in the yellow-green-blue range of the depth map, like the other sensors.
Finally, how does the quality of the reconstructed livestream asset compare to an asset reconstructed from a recording? One thing to try (if you haven’t already) is to record a capture in Depthkit with the same configuration as your livestream, and export it with the same near- and far-plane positions and resolution constraints (3840x2152) as the livestream. In Unity, switch the Depthkit object’s video player from “Livestream Player (Spout)” to “Video Player (Unity)” and load your recording. This provides a baseline for quality, and isolates any quality issues introduced by the WebRTC pipeline, so configure the reconstruction settings of your Depthkit asset based on the recording until it looks the best, then switch the player back to “Livestream Player (Spout)” to see how the livestream version compares. (Note: You need to update the metadata with the appropriate file each time you switch the player types.)
Once you have confirmed the OBS base/canvas resolution in both instances of OBS, adjusted the near- and far-planes, and compared the livestream to a recording, let us know how the quality compares between the recording and the livestream.
I have answered your question about lighting a Depthkit asset in Unity in a separate thread.