Auto Background Removal Holes in Masks

Hi, we recently made a capture using a 10-sensor Kinect setup and processed the stream from each sensor using the auto background removal script.

We applied the masks, adjusted the refinement filters, exported the footage as a Multi-Perspective CPP Image Sequence, encoded it with H.264 in ffmpeg (the encoding step is sketched below), imported it into Unity, and further refined the mesh using the plugin settings. Some screenshots of the end result can be seen here:



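For reference, the encoding step was essentially the following, expressed here as a Python sketch for clarity; the frame rate, file paths, and CRF value are placeholders rather than our exact settings:

```python
import subprocess

# Placeholder values -- substitute the real frame rate and paths.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",                 # capture frame rate
    "-i", "cpp_export/frame_%05d.png",  # exported CPP image sequence
    "-c:v", "libx264",                  # H.264 encode
    "-pix_fmt", "yuv420p",              # broad decoder compatibility
    "-crf", "18",                       # quality setting
    "combined_per_pixel.mp4",
], check=True)
```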
There are large holes in the mesh that we were not able to fill using the filters within Depthkit or the Unity plugin settings. We suspect the holes may be caused, at least in part, by holes that appeared in the refinement masks generated by the auto background removal tool. Here are a few screenshots of the masks:



Can anyone advise on the best approach to minimizing such holes in the refinement masks? That said, the end result seems to have far more holes than appear in the masks, so perhaps there is some other contributing cause. Any advice in this regard would be very much appreciated.

@BryanDunphy Thanks for sharing all of this material.

It looks like the primary cause of these holes in the asset is the holes in the mattes generated by the AutoMatteProcessor. Although this is a useful tool for quickly generating mattes, the results aren’t always perfect, and any parts of your subject that the algorithm has trouble distinguishing from the background will show up as black blotchy areas like the ones in your screenshots. I suspect what’s happening here is that the algorithm can’t differentiate between your subject’s dark clothing and a shadowy area of the background.

For future shoots, dressing the set with neutral, solid-colored backgrounds will help the AutoMatteProcessor produce higher-quality mattes.

For this material, which has already been recorded, there are two ways to recover the areas of your asset with holes in them.

Repair the Mattes: Depthkit’s Refinement process assumes that the mattes perfectly segment your subject from the background, so if there are holes in a matte, the refinement settings will not recover any data that is blacked out. Outside of Depthkit, in compositing software like Resolve/Fusion or After Effects, you can use difference/delta keyers and rotoscoping tools (e.g. After Effects’ Roto Brush) on the original footage, or even track solid white shapes onto the mattes themselves, then feed the repaired mattes back into Depthkit and re-export the asset.
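For small, isolated holes, a scripted pass can also help before round-tripping through a compositor. This is a minimal sketch rather than anything built into Depthkit: it assumes grayscale PNG mattes where white marks the subject, and the folder names and kernel size are illustrative assumptions:

```python
import cv2
from pathlib import Path

# Hypothetical folders -- adjust to match your matte export layout.
src = Path("mattes_in")
dst = Path("mattes_out")
dst.mkdir(exist_ok=True)

# A larger kernel closes bigger holes, but can also bloat the silhouette.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

for frame in sorted(src.glob("*.png")):
    matte = cv2.imread(str(frame), cv2.IMREAD_GRAYSCALE)
    # Binarize, then close small black blotches inside the white subject area.
    _, binary = cv2.threshold(matte, 127, 255, cv2.THRESH_BINARY)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    cv2.imwrite(str(dst / frame.name), closed)
```

A kernel large enough to close the blotches in your screenshots may also round off genuine concavities in the silhouette, so spot-check a few frames before feeding the results back into Depthkit.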

Mesh Reconstruction Settings: This is less effective than fixing the mattes, but in the Unity object’s inspector, under the Depthkit Studio Mesh Source component, you’ll find these settings:


The settings with the greatest effect on filling this kind of hole are ‘Weight Unknown’ and ‘Weight Unseen Falloff Power’. Reduce both, and then tune the ‘Adjust Surface Sensitivity’ slider to taste. Note, however, that doing this makes the renderer more likely to render extraneous geometry around your subject.

One side note: it looks like the Surface Smoothing on your asset is set very high. We recommend a value between 0.3 and 0.5, and then using the other mesh reconstruction settings listed above to get rid of loose geometry.

Let me know if you have any further questions.

@BryanDunphy I took a look at the materials you emailed to me, and spotted some other issues in your CPP as well:

  • In general, it looks like the sensors are too close to the subject, creating a volume that fits someone standing in the center with their hands by their sides but failing to capture them moving to the edges of the stage or reaching their hands out. It’s good practice to make sure that at least 3 sensors see any given part of the body from different sides so that that body part is accurately reconstructed. There are some moments in this CPP where the subject reaches or moves parts of their body outside the view of all but 1 or 2 sensors. Moving the sensors back from the subject will define a larger volume, at the expense of resolution.
  • When using the Azure Kinect’s Narrow Field of View mode, there are parts of the frame (namely the corners) where the color camera has coverage but the depth sensor doesn’t. You can see this in Depthkit’s Edit context when Refinement is disabled. When Refinement is enabled, however, the hole-filling algorithm fills any white areas in the mattes with values from neighboring pixels. At the edges of the frame where there is no valid depth data, those filled areas generate synthetic depth data that causes issues in the renderer. We recommend cropping out the left and right sides of each sensor to eliminate these synthetic corner artifacts; one scripted way to do this is sketched below.
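If you’d rather bake that crop into the mattes themselves, here is a minimal sketch. It is not part of Depthkit’s tooling, and it assumes per-sensor grayscale PNG matte sequences where black regions are excluded from reconstruction; the folder names and the 10% margin are illustrative:

```python
import cv2
from pathlib import Path

CROP_FRACTION = 0.10  # illustrative: black out the outer 10% on each side

src = Path("sensor_01_mattes")          # hypothetical per-sensor matte folder
dst = Path("sensor_01_mattes_cropped")
dst.mkdir(exist_ok=True)

for frame in sorted(src.glob("*.png")):
    matte = cv2.imread(str(frame), cv2.IMREAD_GRAYSCALE)
    margin = int(matte.shape[1] * CROP_FRACTION)
    # Zero out the left and right margins so the hole-filling step never
    # manufactures synthetic depth in the uncovered corners.
    matte[:, :margin] = 0
    matte[:, -margin:] = 0
    cv2.imwrite(str(dst / frame.name), matte)
```

Repeat per sensor, then feed the cropped mattes back through the same re-export path as the repaired mattes above.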