Hi,
We did a capture session in Depthkit Studio last week, capturing four singers, and we’re now implementing this in a Unity project. We’re trying to play back our four Depthkit videos using AVPro.
We see a large drop in frame rate between playing back 3 clips and 4 clips: with 3 clips the frame rate is 240 fps, but if we add the fourth clip it drops sharply to 6-7 fps. Do you know what is happening? Is there any way we can play 4 simultaneous clips at a frame rate above 30-75 fps?
@doubleA Based on what you shared, it’s tough to know for sure, but I suspect that the GPU’s decode capability is saturated when trying to decode 4x 4096x4096 videos simultaneously. You can monitor this in Task Manager’s Performance tab by selecting the GPU:
If Video Decode utilization hits 100% during playback, that’s likely what’s causing the low framerate. To address it:
Scale some or all of your videos down to a lower resolution (e.g. 2048x2048) until simultaneous playback comes in below 100% Video Decode utilization.
AVPro Ultra supports the HAP and HAP Q codecs, which generate larger video files but may perform better depending on the publishing platform and rendering hardware.
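To sketch what the downscaling step might look like in practice: assuming the exports are standard H.264 MP4s (the filenames, codec, and quality settings below are illustrative assumptions, not details from this thread), ffmpeg can batch-resize the clips to 2048x2048 before they're imported into Unity. Depthkit combined-per-pixel exports are square, so a plain scale filter preserves the atlas layout.

```python
import subprocess
from pathlib import Path

def downscale_cmd(src: str, dst: str, size: int = 2048) -> list[str]:
    """Build an ffmpeg command that rescales a clip to size x size."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-vf", f"scale={size}:{size}",  # square output matches the square atlas
        "-c:v", "libx264",              # assumption: re-encode to H.264
        "-crf", "18",                   # near-lossless quality; tune as needed
        "-c:a", "copy",                 # keep the singers' audio untouched
        dst,
    ]

if __name__ == "__main__":
    # Hypothetical filenames; adjust the glob to your actual exports.
    for clip in Path(".").glob("singer_*.mp4"):
        dst = clip.with_name(clip.stem + "_2048.mp4")
        subprocess.run(downscale_cmd(str(clip), str(dst)), check=True)
```

If you instead try the HAP route, ffmpeg's HAP encoder (`-c:v hap`, with `-format hap_q` for HAP Q) could replace the `libx264` settings above, at the cost of much larger files.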
Some other things I noticed based on the materials you shared:
The current version of the Depthkit Studio Expansion Packages for Unity is tested in Unity 2020.3 (LTS). I see you’re using 2021, so although you may not run into issues, we have not fully tested the packages in that environment.
Your volume density is set to 200. This provides high quality for non-realtime workflows, but can lead to low performance in realtime ones. We recommend bringing it down to 150 or lower for all of your clips.
Thanks Cory, we’ll do some tests today, and will try 2048x2048 resolutions.
I wonder if you’ve ever considered using a non-linear vertical scale in the Depthkit renders, so that half of the pixel height could be used for the face and the other half for the rest of the body? That would help enhance the definition of the face at lower resolutions.
@doubleA
I’ll jump in here to comment on your feature suggestion. We are planning to improve the way we lay out texture and depth in the combined per-pixel atlas so that texture quality is better preserved on export, but this is not likely to be a feature we tackle this year.
In lieu of that feature, we always recommend having a face-oriented or “hero” camera in your configurations. Having one camera close to the face ensures high resolution in both color and depth, and works well with our realtime blending system in Unity.
Thanks James, sounds like an exciting update. We did use a Hero Camera (Blackmagic 6k), but even with this the more subtle details of the face get a bit lost within the full export.
Update on our playback of 4 clips: we are using half-resolution clips (2048x2048) for now, and these play back fine. Maybe in the future, with better hardware (a 4090 Ti?), it will be possible to play back four simultaneous clips at higher resolution. Thanks for the quick support on this issue!
@doubleA - If you applied a high-resolution Cinema color source to your hero perspective, you can take advantage of that color resolution by enabling Refinement on that perspective and cropping it in closer to the subject’s face. There are a few things to consider when doing this:
This should improve texture detail, as the cropped color texture doesn’t need to be scaled down as much as the uncropped version to fit within the Combined per Pixel map.
The geometry detail within the cropped area will remain mostly unchanged.
Areas which had been covered by the uncropped sensor, but are now outside that perspective’s crop, will have to rely on other sensors for proper geometry reconstruction and texturing.
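To make the first point concrete, here’s a back-of-the-envelope sketch of why cropping helps. All the numbers are illustrative assumptions (a hypothetical 6K-wide hero frame and a hypothetical atlas slot width), not values from your project: the less the color source has to shrink to fit its slot in the Combined per Pixel map, the more effective face detail survives.

```python
def downscale_factor(src_px: int, slot_px: int) -> float:
    """How much the source is shrunk to fit its atlas slot (capped at 1.0)."""
    return min(1.0, slot_px / src_px)

# Illustrative numbers only (assumptions, not measured):
SRC_W = 6144    # hypothetical 6K hero color frame width
SLOT_W = 1024   # hypothetical width of this perspective's atlas slot

uncropped = downscale_factor(SRC_W, SLOT_W)       # whole frame squeezed in
cropped = downscale_factor(SRC_W // 3, SLOT_W)    # face crop: ~1/3 of frame width

print(f"uncropped scale: {uncropped:.2f}")  # 0.17 -> heavy downscaling
print(f"cropped scale:   {cropped:.2f}")    # 0.50 -> ~3x more face detail kept
```

The exact gain depends on how tight the crop is and how Depthkit sizes each perspective’s slot, but the direction is the same: a tighter crop on the hero perspective keeps more of that 6K color resolution on the face.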
For future shoots, you can achieve similar results by placing the hero sensor/camera closer to your subject for higher detail in both geometry and texture.
Thanks Cory, I’ll forward this info to my developers.
As for your last tip, the hero camera is paired with a Kinect, if we move it closer we don’t have a full shot of the body anymore. How could we solve that?
@doubleA - Perspectives with partial coverage of the subject can be blended with other perspectives that cover the full subject, so as long as other sensors are positioned to cover the parts of the body below the head, there will be no gap in coverage.