[RESOLVED] Seam In Capture - Dialing in camera placement/light

Hello,

I have been experiencing problems with my captures. After completing an in-depth calibration and getting all the parameters to over 99%, I’m still getting significant separation in the final capture. I believe it might be camera placement, and I would love some feedback. Below are images of the rig, measurements of the camera placement in relation to one another, height from the floor, and downward angle. I’ve also included screenshots from the editor window.

Distance between cameras:

1-2 - 54"

1-3 - 53"

3-4 - 64"

4-5 - 60"

5-6 - 64"

6-7 - 72"

7-1 - 41"

Height from ground:

1 - 82"

2 - 28"

3 - 66"

4 - 29"

5 - 73"

6 - 28.5"

7 - 65.5"

Degrees facing down:

1: 30°
2: 10°
3: 16°
4: 1°
5: 21°
6: 9°
7: 16°
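
To put the heights and downward angles together, the short sketch below estimates where each camera's optical axis crosses the subject's position. The 66-inch camera-to-subject distance in it is only an assumption for illustration, not a measurement from this rig.

```python
# Rough sanity check: where does each camera's optical axis end up at the subject?
# SUBJECT_DISTANCE is an assumed value, not a measurement from this rig.
import math

SUBJECT_DISTANCE = 66.0  # inches from each camera to the subject (assumed)

heights = [82, 28, 66, 29, 73, 28.5, 65.5]  # inches above the floor, cameras 1-7
tilts = [30, 10, 16, 1, 21, 9, 16]          # degrees facing down, cameras 1-7

for cam, (h, t) in enumerate(zip(heights, tilts), start=1):
    # Height at which this camera's optical axis crosses the subject's position.
    aim = h - SUBJECT_DISTANCE * math.tan(math.radians(t))
    print(f"Camera {cam}: axis crosses the subject at ~{aim:.0f} in above the floor")
```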

Thank you!

@AlanWinslow Thanks for sharing the screenshots and photos. To help others who may be experiencing this same issue, I’ll reiterate what you’re seeing with some of your visuals:

When your subject is facing your hero-position sensor, there is a visual artifact resembling a seam running right down the front of your subject’s face. At first glance, this seems especially puzzling because the hero sensor (in this case the green Sensor 3 in the Depthkit 3D Viewport) is well positioned with a clear view of the subject’s face.

[Screenshot: the seam running down the center of the subject’s face]

Looking at a shaded view without texturing, we can see that this issue does not manifest in the geometry: the shape of the face doesn’t have any ridge or asymmetry that would indicate geometry misalignment.

The Texture Contribution view, which color-codes the texture composition by sensor, shows that the green Sensor 3 is only contributing to a small area of the forehead, and that the flanking sensors (6/Violet and 1/Red) are texturing the rest of the face with a hard edge at the boundary between the two, right down the center of the face. This can be addressed in the following ways (a simplified sketch of how the blending behaves follows this list):

  • The Texture Blending options can be adjusted to soften the boundary between the two textures, specifically by lowering the Fixed Texture Blend value. Be sure to do this while looking at both the Textured and Contribution views to see how this adjustment affects texturing.
  • For Combined per Pixel assets destined for Unity, increasing the Dynamic Texture Blending value will cause much more of the 3/Green sensor to be used for texturing when viewing from the front. Again, do this while looking at both the Textured and Contribution views to see how this adjustment affects texturing, especially when moving the virtual camera around to different positions. These non-destructive settings will be retained once the asset is exported from Depthkit and imported into Unity, but can be further adjusted in Unity in the Depthkit Studio Mesh Source component’s texture settings.
  • After adjusting both Dynamic and Fixed Texture Blending settings, adjust the texture spill settings below them to reduce and soften texture spill artifacts (e.g. the outline of an extended arm projected onto the chest).

  • Finally, one more change that can address these seams is to reposition the sensors to better cover the areas where the seams appear. The seam above looks like the result of none of the sensors having a clear, direct view of the affected area, so that area is textured by oblique sensors capturing that part of the body from more extreme angles. This, along with a known factory color-depth misalignment issue with some Azure Kinects, can result in small pieces of background RGB data (like a dark background or a bright light) ending up in the textures of your capture at the edges of a particular sensor’s perspective. In your 7-sensor configuration, I would consider moving sensors 1/Red and 6/Violet farther from the front and toward the sides, but adding additional sensors to cover those areas would also work.
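
To make the fixed vs. dynamic blending behavior above a little more concrete, here is a deliberately simplified, conceptual sketch. It is not Depthkit’s actual algorithm, and the weight formula, parameter names, and numbers are illustrative assumptions only; it just shows why a hard blend produces a visible edge where two sensors meet, and how a view-dependent (dynamic) term shifts weight toward the sensor that agrees with the virtual camera.

```python
# Conceptual illustration only -- NOT Depthkit's actual shader or settings.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blend_weights(surface_normal, view_dir, sensor_dirs,
                  fixed_sharpness=8.0, dynamic_strength=0.0):
    """Per-sensor texture weights at one surface point.

    fixed_sharpness:  higher -> the best-aligned sensor dominates (hard edge),
                      lower  -> neighboring sensors mix smoothly (soft edge).
    dynamic_strength: 0 ignores the viewer; > 0 favors sensors whose look
                      direction matches the virtual camera (view-dependent).
    """
    n, v = normalize(surface_normal), normalize(view_dir)
    raw = []
    for d in map(normalize, sensor_dirs):
        facing = max(0.0, -dot(n, d))   # how squarely the sensor sees this point
        agrees = max(0.0, dot(v, d))    # how closely the sensor matches the viewer
        raw.append((facing ** fixed_sharpness) * (1.0 + dynamic_strength * agrees))
    total = sum(raw) or 1.0
    return [round(w / total, 3) for w in raw]

# Subject faces +z; two flanking sensors look in from the front-left and
# front-right; the virtual camera views the face head-on.
flanks = [(-1, 0, -1), (1, 0, -1)]
view = (0, 0, -1)
left_of_midline, right_of_midline = (-0.15, 0, 1), (0.15, 0, 1)

for sharp in (16.0, 2.0):
    print(sharp,
          blend_weights(left_of_midline, view, flanks, fixed_sharpness=sharp),
          blend_weights(right_of_midline, view, flanks, fixed_sharpness=sharp))
# A sharp blend flips abruptly (~99/1 to ~1/99) across the midline -> hard seam.
# A softer blend stays near 65/35 on either side -> smooth gradient across the face.

# Adding a frontal sensor plus a view-dependent term pulls weight toward it
# when the asset is viewed from the front.
all_dirs = [(0, 0, -1)] + flanks
print(blend_weights(right_of_midline, view, all_dirs, fixed_sharpness=2.0))
print(blend_weights(right_of_midline, view, all_dirs, fixed_sharpness=2.0, dynamic_strength=4.0))
```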

Let me know if you have any questions about the above, and please share your results once the above recommendations are applied. Thanks!

A less detailed and less elaborate suggestion than Cory’s:

I would move the upper front camera even higher so that it only serves as a top-down sensor, and move the lower front camera to a hero position, frontal at around chest or face height, so it covers the full front.

This would be my first try after seeing your positions.

I can’t promise anything but this would be my try.

Greetings Martn

Hi Everyone,

Thank you so much for the feedback. I’ve moved the cameras several times from their original positions, lowering and raising them by about a foot. I also evened out the two side cameras, angling them 10 degrees up and down. Finally, I evened out the lighting further, making sure even light was falling on the subject. I’m still getting significant splitting of the subject.

If anyone has any other suggestions, I’d love to hear them. Thank you so much for your support and help.

@AlanWinslow Any updates from our last session? How are the captures looking with the new sensor configuration and lighting?