Pairing workflow feedback

Hi,
I’m trying to push the cinema/depth pairing workflow to get the best results.
I’d be happy to get some feedback on my current approach to make sure I’m on the right path. I also have a few related questions.

My setup is composed of an 8K (UHD2) cinema camera rig paired with an Azure Kinect. The cinema and depth lenses are rigged as close as possible and parallel to each other. The cinema lens is a Laowa 12mm Zero-D (low distortion). Both FOVs are quite similar for the depth resolution I’m aiming for (640x576).
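
For reference, here is the rough FOV comparison I mean (a minimal Python sketch; the sensor width is my assumption for a Super 35-style sensor, not a measured value, and the Kinect FOVs are the published spec figures):

```python
# Rough FOV comparison between the 12mm cinema lens and the Azure Kinect
# depth modes. The sensor width below is an assumption (Super 35-ish),
# not a value from this post; the Kinect FOVs are the published specs.
import math

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

cinema_hfov = horizontal_fov_deg(focal_mm=12.0, sensor_width_mm=24.9)
print(f"Cinema 12mm HFOV: {cinema_hfov:.1f} deg")

# Published Azure Kinect depth horizontal FOVs:
for mode, hfov in {"NFOV unbinned (640x576)": 75.0,
                   "WFOV unbinned (1024x1024)": 120.0}.items():
    print(f"{mode}: {hfov} deg (delta {abs(hfov - cinema_hfov):.1f} deg)")
```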

In my first pairing tests I managed to get an overall pairing accuracy of 66%; that said, I was still seeing a spatial offset between the cinema RGB and the depth data once linked and synced. (The temporal sync is frame accurate and verified.) I noticed that the lens I use on the cinema camera breathes significantly when the focus is moved, and I assume that affects the pairing workflow. Additionally, I was probably attempting too much coverage / too many samples.

I did another round of pairing tests with much better results:
Lens accuracy: 87.45%
Lens coverage: 69.94%
Sampled volume: 4.74 m³
Pairing accuracy: 70-80%, depending on the coverage vs. precision trade-off.

Here is the method I used:

  • I determined a zone/volume in my setup that will be the subject area.

  • Unfortunately this 12mm lens breathes when the focus is adjusted. So I framed my subject/zone to determine the depth/cinema position, then locked the focus and aperture to cover it properly.

  • Once focus and aperture were locked, I did the Adobe lens profiling operation. The idea was to keep the exact same lens/camera parameters from profiling to pairing. Out of the box the lens has low distortion, and the profile seems good when I check it in Photoshop on test frames (see the first sketch after this list for a programmatic alternative).

  • For the pairing process I started with the chart at 3 ft from the cameras, then at the front, middle, and slightly behind the established subject zone. For each of those z positions I captured samples by panning/tilting the camera rig to get at least 9 samples covering the different cells of the rule-of-thirds grid (the second sketch after this list enumerates this plan).
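
As a programmatic alternative to eyeballing the profile in Photoshop, one could measure the residual distortion with an OpenCV calibration pass. This is only a sketch under my own assumptions; the checkerboard size and the frame folder are hypothetical placeholders, and it is not part of the Adobe or Depthkit workflow:

```python
# Sanity-check lens distortion with OpenCV: calibrate from checkerboard
# frames and inspect the distortion coefficients. Board size and the
# "profile_frames" folder are hypothetical placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner-corner count of the checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("profile_frames/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
print("Distortion [k1 k2 p1 p2 k3]:", dist.ravel())
# Near-zero k1/k2 would be consistent with the Zero-D's claimed low distortion.
```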
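
And to make the sampling pattern concrete, here is a small enumeration of the plan (the plane distances are illustrative placeholders, not the actual measurements from this shoot):

```python
# Enumerate chart positions: a few depth planes crossed with a 3x3
# rule-of-thirds grid, giving at least 9 samples per plane.
from itertools import product

depth_planes_m = [0.9, 1.8, 2.4, 3.0]  # ~3 ft, zone front, middle, back
cells = [f"{row}-{col}" for row, col in product(
    ["top", "middle", "bottom"], ["left", "center", "right"])]

for i, (z, cell) in enumerate(product(depth_planes_m, cells), start=1):
    print(f"sample {i:02d}: chart at {z:.1f} m, framed {cell}")
```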

This process gives me good results for a given working volume/zone, focus position, and aperture.
I assume that any change of aperture or focus will have an impact.

Questions:

  • When capturing samples of the pairing chart, the pink dots are “dancing”; they don’t stay fixed as they seem to in the tutorial video. Is anything wrong here? (depthkit v0.6.2 - E__2023-03-17-io8k_dual_pov_v2 2023-03-17 11-58-18.mp4 - Google Drive)
  • Can you confirm that lens breathing has a direct impact on lens profile creation as well as on the pairing process?
  • Is there anything I should adjust in my current process ?
  • What is a reasonable target for high-level pairing accuracy? Is 90% realistic? :slight_smile:

Thanks,
Martin - FnP studios.

@Olivier, thanks for the information about your process. Overall, you seem to have a good approach. To your questions:

  • Can you confirm that when pairing, the sensor is set to 1024x1024 WFOV Raw mode? I see in your video that it’s set to a wide mode, but it is important that you select one of the raw/unbinned depth modes for the most precise sampling (see the first sketch after this list). If it is set to one of the binned modes, this could contribute to the jitter you’re seeing. There is always some inherent noise in the depth sensor, so other factors could include the distance from the sensor to the chart and environmental infrared noise.
  • The breathing you see when rolling the focus will affect the texture alignment, so the best practice is to light your subject enough to allow using a smaller aperture / wider depth of field, and to fix the focus setting to eliminate breathing (the second sketch after this list puts rough numbers on the aperture trade-off).
  • Gathering additional pairing samples even farther away than just beyond your subject’s distance will increase your Sampled Volume and will likely improve your overall calibration.
  • It should be possible to push the pairing accuracy up to 100%, though this may require sampling beyond the point of diminishing returns; I usually make this judgement by eye. In addition to the pairing chart in each sample, I like to include 3D objects with clearly visible geometry and texture features so that I can scrutinize the alignment once the pairing is generated. One easy version of this is cardboard boxes hung on top of stands throughout the volume, with a corner of each box facing directly toward the lens, and each side adjacent to that corner taped/painted a different color. When evaluating the pairing, looking at each box from different perspectives makes it easy to tell whether the geometry edges line up with the color boundaries (good pairing), or whether the color from one side is bleeding around the corner (bad pairing or faulty sensor).
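
For anyone scripting their own captures, the binned vs. raw distinction from the first point looks like this at the Sensor SDK level. This is only a sketch using the pyk4a bindings, not Depthkit’s own capture path (Depthkit exposes the mode in its UI):

```python
# Select an unbinned (raw) depth mode on the Azure Kinect via pyk4a.
# Note that WFOV unbinned (1024x1024) is capped at 15 fps by the sensor.
from pyk4a import Config, DepthMode, FPS, PyK4A

k4a = PyK4A(Config(depth_mode=DepthMode.WFOV_UNBINNED,
                   camera_fps=FPS.FPS_15))
k4a.start()
capture = k4a.get_capture()
if capture.depth is not None:
    print(capture.depth.shape, capture.depth.dtype)  # (1024, 1024) uint16, mm
k4a.stop()
```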
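
And to put rough numbers on the aperture point, here is a standard depth-of-field calculation. The circle-of-confusion value is my assumption for a Super 35-style sensor, not a value from this thread:

```python
# Depth-of-field limits from the standard hyperfocal formulas, showing
# how stopping down widens the zone of acceptable sharpness.
def dof_limits_m(f_mm, f_number, focus_m, coc_mm=0.025):
    """Near/far limits of acceptable sharpness, in metres."""
    h = f_mm ** 2 / (f_number * coc_mm) + f_mm  # hyperfocal distance (mm)
    s = focus_m * 1000.0
    near = h * s / (h + (s - f_mm))
    far = float("inf") if s >= h else h * s / (h - (s - f_mm))
    return near / 1000.0, far / 1000.0

for n in (2.8, 5.6, 8.0):
    near, far = dof_limits_m(f_mm=12.0, f_number=n, focus_m=2.0)
    far_txt = "inf" if far == float("inf") else f"{far:.2f} m"
    print(f"f/{n}: in focus from {near:.2f} m to {far_txt}")
```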

Hi Cory,
Thanks a lot for the useful information; it confirms our thoughts!
So far our latest setup with 6 POVs, including 2x 8K cameras and 4x 4K cameras, is giving us good results.

Our pairing for each depth/cinema pair is around 80%, and the calibration of the 6 POVs is at 84% precision and 81% coverage.

I can confirm that the settings for the calibration process are good, and that we are controlling the IR pollution.
My feeling is that we have sampled farther than the optimal distance for the depth cameras. Close to the camera, the tracking of the ArUco markers is super accurate, but at the back of the setup it becomes difficult. That said, I use the temporal filter to compensate (a generic illustration of that idea follows below).
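
For reference, a minimal illustration of the temporal-filtering idea; this is a generic exponential moving average over depth frames, not Depthkit’s actual filter:

```python
# Generic temporal smoothing of depth frames via an exponential moving
# average; invalid (zero) depth pixels are skipped so holes don't drag
# the running average toward zero.
import numpy as np

class DepthEMA:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha  # higher = more responsive but noisier
        self.state = None

    def update(self, depth_mm: np.ndarray) -> np.ndarray:
        frame = depth_mm.astype(np.float32)
        if self.state is None:
            self.state = frame.copy()
        else:
            valid = frame > 0
            self.state[valid] = (self.alpha * frame[valid]
                                 + (1.0 - self.alpha) * self.state[valid])
        return self.state

# Example with a synthetic noisy frame at ~2 m:
ema = DepthEMA(alpha=0.3)
noisy = (2000 + np.random.randn(576, 640) * 15).astype(np.uint16)
smoothed = ema.update(noisy)
```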

Best regards.
Martin, FnP studios.