In terms of quality on the face, we found that the three-sensor setup performed much better. With eight sensors, even after recalibrating several times, the face shows blurring and alignment problems. Can you give us some advice on which direction we should take with calibration? Thanks
@weibinliu The ghosting / texture misalignment visible in the face of your 8-sensor asset is likely due to a few factors:
- What Overall Quality is reported in the Multicam Calibration context for the 8-sensor configuration?
- If many sensors see your subject’s face, more sources are blended into that region of the asset’s texture, which can soften detail. If you are building your final product in Unity, you can use the Studio Look’s view-dependent controls, specifically ‘View Dependent Color Blend Weight’, to bias the texturing toward fewer sensors instead of many.
- Even with a good calibration, time-of-flight sensors like the Azure Kinect capture certain materials, including human skin, with some degree of depth error; this is likely why the pattern on the shirt is clearer than the subject’s face. You can compensate for this by adjusting the ‘Depth Bias Compensation’ slider in the Studio Mesh Source component’s Experimental Volume Settings.
- Can you confirm that you are using FFmpeg to encode your Depthkit Studio Multiperspective Combined Per Pixel Image Sequence? Encoding the video any other way may lead to quality loss (see the example command after this list).
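As a reference point, encoding a numbered image sequence with FFmpeg generally looks like the sketch below. This is a minimal example, not the exact Depthkit-recommended command: the input filename pattern, frame rate, and CRF value shown here are assumptions you should adjust to match your export.

```
# Minimal sketch: encode a zero-padded PNG sequence into an H.264 MP4.
# frame_%06d.png is a hypothetical filename pattern - match it to your export.
# -framerate should match your capture frame rate.
# A lower -crf value preserves more detail in the packed color + depth frames.
ffmpeg -framerate 30 -i frame_%06d.png -c:v libx264 -pix_fmt yuv420p -crf 18 output.mp4
```

Keeping `-pix_fmt yuv420p` is a common choice for broad decoder compatibility in playback engines like Unity.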
How far away are your sensors from the subject in the 8-sensor configuration, and how far away are they in the 3-sensor configuration? Let us know if adjusting the above settings helps clear up your subject’s face.