Camera Pairing for processing Cinema captures not successful

Starting Point:
The capture was done a few months ago, and we have already processed and exported successfully.
Now we are on a different computer and would like to improve the alignment quality in the Edit section, as shown in this Depthkit video: https://youtu.be/HikWkQ35drk?si=g55Zvmyo2wvlogb-&t=888

Problem:
However, “CINEMA CAPTURE” is not active, and the question-mark tooltip says we should pair cameras, which I did, but nothing happens. Please see the screen capture here: depthkit-screencapture-camera-pairing-cinema-capture-2025-01-31 113602.mp4 - Google Drive
You can see that the depth files and cinema files are both there and synced.

In the log window, nothing happens either. (The red error messages come from missing sequences that are no longer wanted but are still in the project library.)

Solution wanted:
We want to be able to adjust the alignment of the takes.

@JanFiess - Can you clarify which alignment you are trying to adjust:

  • You captured with Orbbec Femto Bolt sensors AND one or more external cameras simultaneously as described in our Depthkit Studio + Cinema documentation, but the Depthkit Cinema pairings cannot be enabled in the Depthkit Studio Editor context, and you would like to enable them.

OR

  • You captured with only Orbbec Femto Bolt sensors, and are trying to manually adjust the color-to-depth alignment of each sensor’s data to better align the texture contributions from specific sensors as described in Sensor-only texture alignment documentation.

To better understand your issue, can you also please share screen captures of:

Additionally, sharing (here or via a private channel like email) an exported clip of the Depthkit capture in Combined-Per-Pixel (CPP) video format will allow us to inspect the asset for further causes of any issues or artifacts. Another option is to share a copy of the entire Depthkit project directory, which would allow us to open the project on our side and inspect all of the above ourselves.
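
If it helps while preparing that, here is a minimal sketch (my own, not Depthkit tooling) for pulling the first frame of a CPP export so you can eyeball the color/depth tiles before sending. It assumes Python with opencv-python installed; the file name is a placeholder:

```python
# Minimal sketch: grab the first frame of a CPP export for a quick visual
# check of the color/depth tiles before sharing.
# Assumes opencv-python is installed; the file name is hypothetical.
import cv2

cap = cv2.VideoCapture("depthkit_export_cpp.mp4")  # hypothetical path
ok, frame = cap.read()
cap.release()

if ok:
    h, w = frame.shape[:2]
    print(f"First frame read: {w}x{h}")
    cv2.imwrite("cpp_first_frame.png", frame)  # open in any image viewer
else:
    print("Could not read a frame - check the path and codec support.")
```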

@JanFiess - Checking in on this ticket. If you’re still experiencing this issue, please provide the additional information requested above. I’ll keep an eye out.

Hi Cory

We used 6 Femto sensors and 6 external cameras.

Screenshots and Export:
https://drive.google.com/drive/folders/15RweaDR5zZ7kIHvvMm83jkmV_ozEZpaU?usp=sharing

To be honest, the subject was too far away from the cameras/sensors, and therefore the quality suffered. Still, we are trying to get the most out of it.

Nick (co-worker of Jan)

@JanFiess Nick, thanks for sharing the additional visuals.

First, I am not sure why the Cinema Pairings are not available to apply to your captures. To be able to see what is going on, I would need to take a look at the complete Depthkit project directory which includes these captures and calibration, as well as any additional Depthkit project directories which contain the Cinema Pairings you were trying to import.

For the reasons listed below, it’s unclear if enabling Cinema texturing on this asset and others captured using the same configuration will improve the result.

Setting that issue aside, from what I see, it looks like the Studio calibration is generally good as indicated by the lack of any seams or misalignments in the shaded view of the subject. Additionally, the lack of background pixels appearing around the edge of the subject in the CPP video/image color tiles gives me confidence that the internal calibrations of each sensor are also good.

I agree that positioning the sensors closer to the subject would have resulted in higher quality, but you may be able to slightly improve what you have by:

  • Moving the walls, and especially the top, of the bounding box in closer to your subject using the handles in the 3D Viewport or the settings in the Edit > Isolate panel.
  • Adjusting the Surface Reconstruction > Depth Bias setting. It looks like this setting may be too low, causing the geometry to be “squished”, but I am not sure if that is intentional to reduce other artifacts. The default value of 8.0mm is usually best for reconstructing human faces.
  • The export you shared with me looks like it is constrained to 2048x2048 resolution (see the sketch below for a quick way to confirm this). Is this resolution required for the project you are working on? A higher-resolution export will preserve more detail.
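
On that third point, a quick way to confirm an export's resolution outside of Depthkit is a small script like this (my own sketch, assuming Python with opencv-python; the file name is a placeholder):

```python
# Minimal sketch, not official Depthkit tooling: read the pixel resolution
# of an exported clip to confirm whether it is constrained to 2048x2048.
# Assumes opencv-python is installed; the file name is hypothetical.
import cv2

cap = cv2.VideoCapture("depthkit_export.mp4")  # hypothetical path
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

print(f"Export resolution: {width}x{height}")  # e.g. 2048x2048
```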

It’s also worth noting that because none of the sensors are positioned directly in front of the subject’s face, the texturing for the face is split between two sensors off to either side (as seen in the cyan-green texturing contribution view in your screenshot). When viewing the asset from the side, this looks fine, but when viewing the subject from the front, subtle differences between these two sensors and the way that their color data is projected on the reconstructed asset might produce artifacts like asymmetries or the appearance that the subject is “cross-eyed”.

Let us know if you’re able to share the Depthkit project directories referenced above to further troubleshoot.

@JanFiess Nick, I received the project you sent. Thank you for this.

With the included calibration and take data, I am able to open the project and play the capture with just the textures from the Orbbec sensors, just the same as it appears you are able to on your end.

However, there is no data included to support a Depthkit Studio + Cinema workflow:

  • In the Camera Pairing view, there are no sensors listed, indicating that none of the sensors used for the multicamera takes have been paired with external cameras in this project. Additionally, within the project folder’s _calibration bin, there is only a world calibration, which registers the positions of the Orbbec sensors relative to each other, but no deviceID_############ bins, which would hold the registration data pairing each Orbbec sensor with a camera (see the sketch after this list for a quick way to check). Were the Cinema camera pairings generated in a different Depthkit project? If so, I will need that project folder as well.
  • In the RGB recordings, I don’t see any external cameras mounted to or near the sensors visible in frame (though not all sensors are visible). Can you confirm which sensors had corresponding external cameras?
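
For reference, here is a minimal sketch (not official Depthkit tooling) that scans a project's _calibration folder for deviceID_* pairing bins, under the layout assumptions described in the first point above; the project path is a placeholder:

```python
# Minimal sketch, assuming the _calibration layout described above: one world
# calibration for the sensor rig, plus one deviceID_############ bin per
# sensor/camera pairing. Point the path at your actual project directory.
from pathlib import Path

project = Path("MyDepthkitProject")  # hypothetical project root
calibration = project / "_calibration"

entries = sorted(calibration.iterdir()) if calibration.is_dir() else []
pairings = [e for e in entries if e.name.startswith("deviceID_")]

print(f"Entries in _calibration: {len(entries)}")
print(f"Cinema pairing bins (deviceID_*): {len(pairings)}")
for p in pairings:
    print("  ", p.name)
# No deviceID_* entries means no Cinema pairings in this project, which is
# consistent with the empty Camera Pairing view in the editor.
```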

Hey Cory
I checked back.

  • You were right: no additional/external cameras were used.
  • I can check with the production team on Wednesday to see if there are any deviceID_### bins on the backup discs (which I don’t have access to right now). Or would this be unnecessary, since no external cameras were used anyway?

So, if there were no external cameras to pair with the sensors, and therefore no camera pairing is possible, is the Cinema workflow impossible? Is there then no way to adjust the alignments or improve our results?

Kind regards
Nick

@JanFiess Nick - The deviceID_############ bins would only be present if external cameras were used, so I wouldn’t expect them to exist in your project now that you have confirmed none were used.

Manually adjusting the color-to-depth alignment of clips captured with only sensors is possible using the Sensor-only texture alignment workflow; however, this workflow is intended for specific Azure Kinects whose internal factory calibration misaligns the color and depth data to a much greater degree than what I can see in your project. In other words, it is a complex workaround which may not improve your results much, as you used the more precise Femto Bolt sensors.

I took a pass at adjusting the mesh reconstruction and texturing settings in your project, and got the results that looked best to my eye with the following: