Texture quality with 3 camera setup

Hey team, I’m seeing low texture quality in my studio captures (super blurry face). I’d like advice on ways you think I can improve results. A couple of notes:

  • Capturing front-facing upper body (think Zoom framing)
  • Using 3 cameras mounted to a desk
  • Goal is to run live eventually, so I can’t use refinement or other recording-specific fixes

Here’s a sample of what I’m seeing. This is me rotating around the poster image but the video quality is very similar: https://vimeo.com/801392666

Here’s my camera positioning:


Note: The camera in the center is about twice as close to the face. I was hoping this would increase facial quality.

And here’s the calibration results:




This is one of the highest calibration qualities I’ve been able to attain. I had to set the spatial and temporal thresholds super low to get there.

Would a higher calibration quality metric fix this issue? It’s been very hard to get the quality even this high. I’ve been using the marker attached to cardboard mounted on a lamp, but maybe it’s time to get a real stand so I can vary the height of the marker?

@JacobPeddicord Thanks for sharing all of these materials. There are a few areas I suggest homing in on to better understand the nature of this issue and ways to improve it:

image
Either of two things may be contributing to the texture misalignment (seen here between Sensor 1 in red and Sensor 2 in green): an inaccurate multi-sensor calibration, or a faulty factory depth-to-color calibration in one of the sensors.

A quick way to identify which is to switch from RGB texturing to Infrared texturing while in the Calibration context and see whether the alignment changes. If the alignment is identical in both texturing modes, a good calibration will likely fix the misalignment. If the alignment shifts between the two, then one or more of your sensors has a faulty factory calibration and needs to be replaced.

One thing that I think is contributing to the low calibration score is that your calibration samples are all on the same 2D plane (red), at roughly the same distance to the sensors, which is likely why the Coverage metric is so low. To fill that metric up and improve the quality of the calibration overall, collect samples from various positions on that plane, then move the chart to a new plane further from the sensors and collect another set of samples. The more samples you collect from more positions and distances, the more useful data the calibration algorithm has to work with.
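
To make the Coverage idea concrete, here’s a minimal sketch (not Depthkit’s actual metric, just an illustration) that bins calibration sample positions into a coarse 3D grid over the capture volume. Samples confined to a single plane only ever touch one slab of bins, while samples spread across depths fill far more of the volume:

```python
# Toy illustration (not Depthkit's actual Coverage metric): bin calibration
# sample positions into a coarse 3D grid over the capture volume and report
# what fraction of bins contain at least one sample.
import numpy as np

def coverage_fraction(samples_m, volume_min, volume_max, bins=4):
    """samples_m: (N, 3) marker positions in meters; volume_min/max: (3,) bounds."""
    grid = np.zeros((bins, bins, bins), dtype=bool)
    norm = (np.asarray(samples_m) - volume_min) / (volume_max - volume_min)
    idx = np.clip((norm * bins).astype(int), 0, bins - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid.mean()

rng = np.random.default_rng(0)
lo, hi = np.array([-1.0, 0.0, 0.5]), np.array([1.0, 2.0, 2.5])  # example 2x2x2 m volume

# Samples confined to one plane ~1.5 m from the sensors vs. spread across depths
one_plane = np.column_stack([rng.uniform(-1, 1, 60), rng.uniform(0, 2, 60), np.full(60, 1.5)])
varied    = np.column_stack([rng.uniform(-1, 1, 60), rng.uniform(0, 2, 60), rng.uniform(0.5, 2.5, 60)])

print(coverage_fraction(one_plane, lo, hi))  # low: only one slab of bins is touched
print(coverage_fraction(varied, lo, hi))     # much higher: bins filled at many depths
```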

Once you have more samples, you should be able to open up the Spatial Error and Temporal Stability filters and get a better score. With the data currently available, the filter settings are removing all but 23 markers from your set, which may cause the calibration to align one point in 3D space well at the expense of alignment across the rest of your scene.
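
As a rough illustration of how those filters interact with the solve (using hypothetical error values, not Depthkit’s internals), tight Spatial Error and Temporal Stability thresholds can whittle a large marker set down to a handful of survivors, while looser thresholds leave the solver far more data:

```python
# Illustrative only: how tight Spatial Error / Temporal Stability thresholds
# can shrink a marker set to a few survivors, starving the solver of
# spatially diverse samples.
import numpy as np

rng = np.random.default_rng(1)
n = 400
spatial_error_mm   = rng.gamma(shape=2.0, scale=3.0, size=n)   # hypothetical per-marker error
temporal_jitter_mm = rng.gamma(shape=2.0, scale=1.5, size=n)   # hypothetical frame-to-frame jitter

def surviving(spatial_max_mm, temporal_max_mm):
    keep = (spatial_error_mm <= spatial_max_mm) & (temporal_jitter_mm <= temporal_max_mm)
    return int(keep.sum())

print(surviving(2.0, 1.0))    # very tight thresholds: only a few markers remain
print(surviving(10.0, 5.0))   # looser thresholds: far more markers inform the solve
```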

image
I haven’t varied the distances of the sensors as much as you have in your configuration, so in principle your instinct to get higher resolution from the center sensor makes sense, but there are some caveats:

  • To avoid harsh edges in the textures (where the edge of the center sensor’s coverage overlaps with the other sensors’), the textures will have to be blended using the Depthkit Studio Mesh Source component > Experimental Texture Settings > Edge Mask settings, such as Invalid Edge Width and Strength.
  • Depending on the View-dependent Texture settings, the benefits of the close sensor will fade as the viewer moves off-axis from the center sensor and approaches a perspective closer to one of the flanking sensors. You can bias this by lowering the Depthkit Studio Mesh Source component > Experimental Texture Settings > Advanced Per-Perspective Settings > Color Weight Contribution for the flanking sensors (see the sketch after this list).

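Here is a minimal sketch of the view-dependent weighting idea, assuming each sensor’s contribution is driven by how closely the viewing direction matches that sensor’s axis and is then scaled by a per-sensor bias (a stand-in for lowering Color Weight Contribution; the names and formula are illustrative, not Depthkit’s API):

```python
# Minimal sketch of view-dependent texture weighting: weight each sensor by
# how well its axis aligns with the viewing direction, scaled by a per-sensor
# bias, then normalize. Parameter names are illustrative, not Depthkit's API.
import numpy as np

def blend_weights(view_dir, sensor_dirs, biases, sharpness=4.0):
    """Return normalized per-sensor blend weights for a given viewing direction."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    weights = []
    for d, b in zip(sensor_dirs, biases):
        d = d / np.linalg.norm(d)
        alignment = max(0.0, float(np.dot(view_dir, d)))  # 1 when the viewer is on-axis
        weights.append(b * alignment ** sharpness)        # sharpness favors the best-aligned sensor
    w = np.array(weights)
    return w / w.sum() if w.sum() > 0 else w

# Center sensor on-axis, two flanking sensors at +/-40 degrees, flanking bias lowered to 0.5
deg = np.radians(40)
sensors = [np.array([0.0, 0.0, 1.0]),
           np.array([np.sin(deg), 0.0, np.cos(deg)]),
           np.array([-np.sin(deg), 0.0, np.cos(deg)])]
biases = [1.0, 0.5, 0.5]

print(blend_weights(np.array([0.0, 0.0, 1.0]), sensors, biases))                      # center dominates head-on
print(blend_weights(np.array([np.sin(deg / 2), 0.0, np.cos(deg / 2)]), sensors, biases))  # shifts toward a flank
```
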
The Per-Sensor Debug Color mode in the Depthkit Studio Look component can be very helpful in seeing how the texture settings are blending the textures from the different perspectives together.

Also, to see exactly how these settings respond to changing view angles, rotate the asset relative to the Game view camera rather than rotating the Scene view camera around the asset.

Depending on the spatial agency of your viewer, you may want to bring the flanking sensors further out to the sides, as viewing the subject from an angle beyond the physical sensor coverage will reveal the missing geometry in the back.


This doesn’t speak to the texturing of your asset, but I also noticed you are using very high volume density and surface smoothing values. Typically, even in our highest-quality workflows, we rarely go above a volume density of 150–200, as going higher reduces real-time performance significantly. Applying heavy surface smoothing also negates many of the details gained from a high density, so experiment with bringing either or both of those values down.
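
For a sense of scale, assuming the volume density value is voxels per meter along each axis (my assumption here), the voxel count for a fixed capture volume grows with the cube of that value:

```python
# Quick arithmetic on why high volume density gets expensive fast, assuming
# density is voxels per meter per axis (an assumption for illustration).
def voxel_count(density_vpm, size_m=(1.0, 1.0, 1.0)):
    w, h, d = size_m
    return int(density_vpm * w) * int(density_vpm * h) * int(density_vpm * d)

for density in (150, 200, 300, 400):
    print(density, f"{voxel_count(density):,}")  # 400 vpm is ~19x the voxels of 150 vpm
```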

The Surface Normal value is also very high and may be negating the center sensor’s texturing. Enable the Per-Sensor Debug Color view and adjust this setting to blend the textures more evenly.

Try some of these suggestions above and let me know how your results compare.

Thanks for the detailed response, Cory. I’ve ordered a stand that will help me capture samples in more positions. Will update next week when it comes in and try your suggestions.

OK, got some new equipment in that lets me collect better samples. Here are some updated results:


I was able to go looser on the error filters this time. Still a lot of samples excluded, but now I have a lot more valid ones.



I’m mostly curious whether I’m approaching the quality you would expect in this scenario.

@JacobPeddicord This looks better, but the texture resolution seems low.

  • Which RGB resolution are the sensors set to when capturing? It’s best to set the RGB resolution on all sensors as high as it will go before dropping frames, which for most computers is 1440p to 2160p.
  • Is the exported Depthkit Combined-per-Pixel video 4096x4096 (or close to it)? If it’s being scaled down at all, you can expect a proportional loss of detail (see the quick arithmetic sketch below).
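
As a quick arithmetic sketch of that proportionality (ignoring how the frame is tiled internally), here is roughly what downscaling the 4096x4096 export costs in detail:

```python
# Rough arithmetic on downscaling the Combined-per-Pixel export: shrinking the
# 4096x4096 frame reduces linear detail proportionally and total pixels
# quadratically. Tile layout inside the frame is ignored here.
def detail_loss(export_px, target_px=4096):
    linear = export_px / target_px
    return linear, linear ** 2

for export in (4096, 3072, 2048):
    lin, area = detail_loss(export)
    print(f"{export}px: {lin:.0%} of the linear detail, {area:.0%} of the pixels")
```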

Also, when setting up your asset in Unity:

  • Click the ‘Load Front Bias Defaults’ button in the Depthkit Studio Mesh Source component as a starting point for the geometry settings. This will bias the mesh generation to fill in some of the holes you are seeing on the sides.
  • Increase only the View Dependent Blend Weight slider to clarify the textures by weighting one sensor’s contribution over the others based on the viewer’s position, rather than blending all sensors’ textures together evenly.