First Time Depthkit Studio User w/ Results! Questions and Best Practices?

Please see my troubleshooting video on Vimeo here:
Depthkit Studio Test Troubleshooting Video

Hello all! I’m super happy to say I’ve finally completed my first Depthkit Studio test. Naturally, I have a bunch of questions after seeing my first captures in Unity. Many of my questions probably relate directly back to my setup not being perfect yet, but I’d love to hear some feedback before I improve it. Please feel free to hit me with any comments or questions. Below are my burning questions and setup specifics:

SETUP

  • I did two setups: a 3-sensor setup and a 5-sensor setup
  • I have not set up a green screen yet, so these captures are without a green screen and the matte process
  • Currently both captures are 1080p. I need to try 4K again another time, because I had some dropped frames on my first test
  • My 5-sensor setup was placed in a rectangle to get more space to move. However, looking back, I feel like keeping all sensors equidistant from each other in a square is probably what I should have done?
  • All my exports are Combined Per Pixel Video

QUESTIONS

  • My main question is what might be causing my 5-sensor capture to be blurry in the face. Is it my calibration? Is it the lack of 4K? Is it the fact that I did not export an Image Sequence instead? Is it all of the above? One more than the other?

  • Can I put my sensors in a rectangle formation or am I correct in thinking that a square is better for calibration?

  • How does my calibration look in general?

That’s it for now! Thank you all!

@Andrew Thank you for sharing! Your video tour of your project is extremely helpful for troubleshooting!

  • The blurred textures in Unity may be caused by a few things.
    – First, make sure your calibration has a good overall Quality metric. (We are currently updating this in the upcoming release of 0.5.12, so stay tuned.) This one looks pretty good, but we can do a bit more to reveal how good it is - see below.
    – Make sure Refinement is enabled for all sensors of a Studio recording. Any sensor which doesn’t have Refinement enabled will be mapped into the Combined Per Pixel asset at the depth sensor’s resolution, rather than at the resolution of the higher-resolution color sensor, which results in higher detail.
    – In Unity, with the Depthkit asset selected, go to the Depthkit Studio Mesh Source component, and under Experimental Volume Settings, adjust the Depth Bias Compensation to account for the Kinect’s inability to accurately measure the distance of human skin. Sliding this back and forth will bring certain features of your subject into better “focus”, so it may improve the texturing of the face at the expense of the texturing of other areas of your subject. (See the first sketch after this list for a rough idea of what a depth bias does geometrically.)
    – Also in the Unity asset’s Depthkit Studio Mesh Source component, expand the View-Dependent Controls, and raise the View Dependent Color Blend Weight and some of the other sliders to clear up some of the texturing. (You can clearly see how this affects texturing by turning on the ‘Show Per View Color Debug’ option in the Look component. The second sketch after this list shows the blending idea.)
  • When recording, always keep an eye on the Recording Diagnostics panel to monitor for dropped frames. This is a more accurate way of monitoring system performance than the console. If your system is dropping frames, check its specs against our hardware requirements.
  • A rectangular volume does not necessarily introduce issues, but it does affect aspects of your capture. For example, if the rectangle is longer on the near-far axis toward the “front” of your volume, it may actually capture objects that sit next to each other on the left-right axis (legs next to each other, arms next to torso) better, because these things occlude each other less from the perspective of each sensor. The trade-off is that objects in front of and behind each other are more likely to occlude each other. Where a rectangular configuration may be slightly better for a subject standing square to the “front”, a squarer volume removes this bias and better accommodates different body positions and rotations. Another factor is how well the sensors on one side of the volume can link to the sensors on the opposite side during calibration: in a rectangular configuration, the two sensors on each long side of the volume may not get calibration samples as accurate as if they were closer together. (The third sketch after this list puts rough numbers on the sensor-to-sensor distances in each layout.)
  • Even without a greenscreen, you can use tools like After Effects’ Rotobrush to create mattes to help carve away some of the extra geometry.
  • A quick side note: The lighting during Studio Calibration isn’t critical to a successful calibration, as this process only makes use of the depth cameras, which provide their own infrared light.
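
A minimal conceptual sketch of what the Depth Bias Compensation slider is working against (my own Python illustration, not Depthkit’s actual implementation; the function name and numbers are hypothetical): a depth bias simply slides each reconstructed point along its camera ray, which is why sweeping the slider brings different surfaces in and out of “focus”.

```python
import numpy as np

def apply_depth_bias(origin, direction, measured_depth_m, bias_m):
    """Return the 3D point for one depth sample after adding a bias.

    origin: sensor position (3,); direction: ray through the pixel (3,);
    measured_depth_m: raw depth in meters; bias_m: compensation in meters.
    """
    direction = direction / np.linalg.norm(direction)
    return origin + direction * (measured_depth_m + bias_m)

# Skin tends to read slightly deeper than it really is under infrared,
# so a small negative bias pulls that surface back toward the sensor.
sensor = np.array([0.0, 1.2, 2.0])
ray = np.array([0.0, -0.1, -1.0])
print(apply_depth_bias(sensor, ray, measured_depth_m=2.00, bias_m=-0.004))
```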
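And a sketch of the view-dependent blending idea behind the View Dependent Color Blend Weight slider (an assumed model for illustration, not the plugin’s exact shader): each sensor’s color sample counts for more when that sensor faces the surface head-on, and a higher blend weight favors the best-aligned sensor more aggressively.

```python
import numpy as np

def blend_colors(normal, view_dirs, colors, blend_weight=4.0):
    """Blend per-sensor colors for one surface point.

    normal: unit surface normal (3,)
    view_dirs: unit vectors from the point toward each sensor (N, 3)
    colors: per-sensor RGB samples (N, 3)
    blend_weight: higher values favor the best-aligned sensor more strongly.
    """
    alignment = np.clip(view_dirs @ normal, 0.0, None)  # ignore back-facing views
    weights = alignment ** blend_weight
    weights = weights / weights.sum()
    return weights @ colors

normal = np.array([0.0, 0.0, 1.0])
views = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [-0.7, 0.0, 0.7]])
views = views / np.linalg.norm(views, axis=1, keepdims=True)
rgb = np.array([[0.9, 0.5, 0.4], [0.8, 0.5, 0.5], [0.7, 0.4, 0.4]])
print(blend_colors(normal, views, rgb, blend_weight=8.0))  # leans toward view 0
```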
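Finally, some rough numbers on the square-vs-rectangle question (a back-of-envelope geometry sketch of my own, treating only the four corner sensors; the footprint sizes are made up): the spread of sensor-to-sensor distances is what matters for how well sensors link to each other during calibration.

```python
import itertools, math

def pairwise_baselines(width_m, depth_m):
    """Distances between every pair of corner sensors on a given footprint."""
    corners = [(sx * width_m / 2, sy * depth_m / 2)
               for sx in (-1, 1) for sy in (-1, 1)]
    return sorted(math.dist(a, b)
                  for a, b in itertools.combinations(corners, 2))

for name, (w, d) in {"3 x 3 m square": (3, 3),
                     "2 x 4 m rectangle": (2, 4)}.items():
    print(name, [round(b, 2) for b in pairwise_baselines(w, d)])
# 3 x 3 m square:    [3.0, 3.0, 3.0, 3.0, 4.24, 4.24] -> uniform sides
# 2 x 4 m rectangle: [2.0, 2.0, 4.0, 4.0, 4.47, 4.47] -> the 4 m pairs
#   share less overlap, so their mutual calibration samples are weaker
```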

Thank you Cory. Going back to the Studio today to try all your suggestions. These are GREAT! Thank you!

By the way, using the Unity sliders DEFINITELY improved the results. The face and body are way clearer, or more “in focus”. Now I will just need to learn how to smooth out the edges around the scan, as these fixes introduced other artifacts, which makes sense. I’ll have to read through the documentation on what each slider does!

@Andrew The Refinement workflow will do a lot of work here to improve the assets. The better the masks, the better the renderer is at reproducing your subject (and the better it is at NOT reproducing anything that isn’t your subject 🙂).