Camera pairing failed

We tried Depthkit Cinema and failed to pair the camera. We got 0% lens accuracy, though the lens profile seems to be good. We tried the lens profile both in auto mode and in ‘Five Parameter Radial & Tangential Distortion’ mode; both gave zero accuracy when we imported the pairing videos. The upper metrics were high, around 70-100%, but the overall result was bad (0% every time).
We gave it a try with many different samples and went through the whole workflow a number of times, resulting in 0% every time. Is there anything fundamental we are missing? Our setup was a Sony a7R III with a 16 mm lens and an Azure Kinect.

Another issue we hit is that we couldn’t run Depthkit on a computer with an AMD processor - it resulted in a broken image from the Kinect and the software ran very laggy (photo attached). We used an Intel-based PC and it worked. Are you familiar with this issue? Can it be solved somehow?

Can you specify “AMD”? I have never had any issue with an AMD system and Depthkit, as I only own AMD systems - but “strong” ones :slight_smile: … and I used “strong” with the intention of not being exact :slight_smile: What does “strong” mean, and what does “AMD” mean here - do you have a Ryzen 9 5950X on an AM4 socket? Or even an AM5-socket system? Or an AM4-socket Ryzen 5 XXXX?
On your other issue: I thought that for Depthkit Cinema (camera pairing) a global-shutter camera should be used, as the Azure and another camera can only be truly synced when both use global shutter. Or does it work with non-global-shutter cameras? Maybe we could gather a whitelist here in the forum of cameras which are maybe even recommended for camera pairing?

Thanks. We used:
AMD Ryzen 7 5800X 8-core
RTX 3090
GPU driver version: NVIDIA 526.98
Azure SDK: 1.4.1
depthkit info

Attaching a Google Drive folder of the project, including the different lens profiles we made and the pairing videos: Yonatan_TestDay_1412 - Google Drive
In this version we had only 8 pairing samples. We did try before with a larger number of samples, resulting in a pairing accuracy of 0% each time.

*Creating a lens profile using the printed chart didn’t work; switching to a laptop display gave no errors in Lens Profile Creator. We calibrated with the lens profile set both to auto and to Five Parameter. Running the workflow in both the wide field of view and 4:3 modes didn’t work for us either.

If there’s any more info needed, let me know.
Thanks again

@Yonatan (including @user2 since it looks like you are working on the same thing), can you share some more details about your setup and workflow?

  • Which camera (A7Riii) and model of lens are you pairing to the sensor?
  • Which recording settings are you using to capture the pairing samples and clips? Codec, file container format, resolution, sensor crop, etc.
  • Are you transcoding or converting the files before ingesting them into Depthkit during the pairing process?
  • How is the camera mounted to the sensor? (Do you have pictures?)
  • What is your sampling strategy - Where are you positioning the chart relative to the sensor/camera, how far is it? Where is it in the frame? Which way is it facing?
  • Which depth resolution is the sensor set to when sampling? From the screenshot, it looks like a WFOV mode, but is it 512 Binned, or 1024 Raw (15 Hz)?
  • Are you getting any errors once you generate the intrinsic lens calibration with Adobe Lens Profile Creator - even when using the digital display rather than the printed chart?

The Pairing Accuracy metric generally accumulates as you add more and more samples. If it doesn’t increase with additional samples, it’s possible the underlying intrinsic calibration generated in the Adobe tool could be invalid. But if it does increase with more samples, I recommend adding up to a total of 25-35 pairing samples.

@Yonatan , @user2 , @MartnD - I am splitting the AMD issue out of this thread and addressing it in a new thread here. Let’s keep this thread to the Cinema issue, and troubleshoot the AMD one there. Thanks!

Thanks Cory,
We used the 16-35mm f/2.8 GM lens, shot at f/8.

We did not transcode the videos before pairing. I see now that the camera records at 29.97 fps, not 30. Should I transcode the videos next time?
For sampling, we moved around the checkerboard, trying to cover as many angles and areas of the lens as possible, at about 2 meters from the camera. We moved the rig carefully horizontally across the room, tilting and panning the rig as well.
For sampling we set the Kinect to 1024 Raw (15 Hz).
Unfortunately I don’t have a picture of the rig from the shoot, but we used the quick-release mount base (and the rest of the equipment from the Depthkit documentation) for the rig.
In Lens Profile Creator, the ‘auto’ setting gave no errors, while ‘five parameters’ did.
On the first test we used around 20 samples.

@Yonatan

  • We have tested the 16-35mm f/2.8 GM, and it works well - Just be sure to tape the zoom ring so the focal length is locked.
  • The A7Riii (along with almost all cameras from Sony, Canon, and Panasonic) only supports 29.97fps rather than the true 30.0fps used by the Azure Kinect. This discrepancy has no effect on calibration and is usually not much of an issue, but it - together with the fact that the Azure Kinect and the camera run off independent clocks - can lead to small amounts of drift over time. For short clips, temporal alignment can be adjusted during the edit stage of the Depthkit Cinema workflow. For long clips (more than 1-2 minutes), you may need to bring the clips into an editor, apply a small speed change (e.g. 99.73%), and render out the re-timed clip - preserving the resolution and aspect ratio - before ingesting it into Depthkit (see the sketch after this list).
  • When you moved the rig across the room, did you stop at various distances? Can you share a screenshot of the thumbnails of all of your pairing sample clips?
  • The calibration metrics are low, but they aren’t zero, leading me to believe the intrinsic lens calibration and pairing chart are working. The Pairing Accuracy metric is the result of multiplying the other metrics together, so the near-zero Sampled Volume metric is likely what’s bringing the overall accuracy down - More samples at various distances and frame positions should increase this.
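
If you do end up needing to conform a long clip outside of your editor, a minimal sketch of that re-timing step is below. It assumes ffmpeg is installed and on your PATH; the filenames are placeholders, and the 0.9973 speed ratio is just the example figure above - measure the actual drift for your own clips.

```python
# Minimal sketch: conform a 29.97 fps Cinema clip to the Azure Kinect's clock by
# applying a small speed change while preserving resolution and aspect ratio.
# Assumes ffmpeg is on the PATH; filenames and SPEED are placeholders.
import subprocess

SPEED = 0.9973              # speed ratio to apply (e.g. the 99.73% example above)
SRC = "cinema_clip.mp4"     # placeholder: original Cinema recording
DST = "cinema_clip_retimed.mp4"

subprocess.run([
    "ffmpeg", "-i", SRC,
    # Stretch video timestamps by 1/SPEED so the clip plays at SPEED of its
    # original rate (slightly slower for SPEED < 1)...
    "-filter:v", f"setpts=PTS/{SPEED}",
    # ...and re-time the audio by the same ratio so it stays in sync.
    "-filter:a", f"atempo={SPEED}",
    # Re-encode at the original resolution; adjust codec/quality as needed.
    "-c:v", "libx264", "-crf", "18", "-preset", "medium",
    "-c:a", "aac",
    DST,
], check=True)
```

Check the re-timed clip’s duration against the matching sensor take before ingesting it.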

Thanks,
I only have the footage from the last test, made with a small number of samples; we deleted the previous ones. Because we got 0% in all tests and thought it had no relation to the number of pairing samples, we started the workflow again, also creating a new lens profile.

Should we try a different pairing technique and focus on that? I think at one point we did try keeping the rig in place and moving the board around - is that more recommended?

@Yonatan In the thumbnails you shared, I am noticing that your rig positions are only changing the distance (good) and apparent angle of the chart (bad) from the camera’s perspective, and that the chart stays relatively centered in frame (bad). See this picture where I have superimposed all of the chart positions:

overlayed_samples

The red areas of the frame have no samples. To solve this, instead of moving the rig off to the sides, simply pan the camera so that the chart moves to the left and right sides of the camera’s frame, and capture your samples. Then tilt the camera down so that the chart is at the top of the camera’s frame, and repeat the sampling of the left, center, and right sides of the frame. Finally, repeat this whole process at a few different distances so that each part of the camera’s frame has samples at different distances. (Remember to keep the chart within the frame of both sensor and camera!)

Also, try to keep the chart as square to the sensor/camera rig as possible throughout this, meaning that the chart should never be tilted back or panned off to the side as seen in the C0174 thumbnail.
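
If it helps to plan that pass, here is a trivial, purely illustrative checklist generator for the pan/tilt/distance grid described above - the three distances are my own placeholders, not official Depthkit recommendations:

```python
# Purely illustrative: enumerate a pan/tilt/distance grid so that every region
# of the Cinema frame gets chart samples at several depths.
# The distances below are placeholder values, not official recommendations.
pans = ["chart at left of frame", "chart centered", "chart at right of frame"]
tilts = ["chart at top of frame", "chart at mid-frame", "chart at bottom of frame"]
distances_m = [1.5, 2.5, 3.5]  # hypothetical distances from the rig, in meters

for distance in distances_m:
    for tilt in tilts:
        for pan in pans:
            print(f"{distance:.1f} m - {tilt}, {pan} (chart kept square to the rig)")

print(f"Total samples: {len(distances_m) * len(tilts) * len(pans)}")
```

A 3 x 3 grid at three distances works out to 27 samples, which also lines up with the 25-35 pairing samples recommended earlier in this thread.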


@Yonatan Did you try adjusting your sampling strategy? Have you been able to get a better calibration? Let me know if you have any updates so I can close this ticket.

Thanks @CoryAllen for checking up!
I got a moderate score of 59 :confused:
This time we shot the Cinema footage using a 24mm lens, and recorded around 40 samples using the technique you suggested.

NEW PROJECT SCORE

It gives me hope that with more pairing samples we can get there.

I wanted to continue the workflow to check how it looks with a moderate score (in case that’s all we get on our shooting day), but unfortunately I’m now stuck on synchronizing the footage (should I open a new post on that?).
After pasting the synced timeline I get an error on the sensor video part - dropped frames - and can’t proceed.

error

I recorded at 1080p, 640 Narrow Raw. I did not get errors while recording. My computer should run it without issues, and the Kinect is plugged into the motherboard.

Still getting that dropped-frame error and can’t synchronize. I tried:

  • changing the powerline frequency to 50 Hz (this is what we’re using here)
  • shooting at a lower color resolution
  • changing the preference to max memory usage.

I synced in Premiere, only cutting the Cinema file so I wouldn’t need to change clips in Depthkit, tuned the alignment a bit, and it works. Not ideal, though.

@Yonatan

Even though you can pair a 24mm lens, that focal length only corresponds to a small area of the depth frame, so you’ll be effectively cropping away much of the depth sensor’s resolution. A 16-17mm lens is a better match for the depth sensor.
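
As a rough way to see why, you can compare a lens’s horizontal angle of view on a full-frame body against the Azure Kinect’s approximate depth FOV modes (about 120° wide / 75° narrow). This is only an approximation of my own - it ignores lens distortion and any sensor crop - and is not an official Depthkit calculation:

```python
# Rough, illustrative comparison of full-frame lens horizontal FOV against the
# Azure Kinect's approximate depth FOV modes. Ignores distortion and crop.
import math

SENSOR_WIDTH_MM = 36.0                           # full-frame sensor width
KINECT_HFOV_DEG = {"WFOV": 120.0, "NFOV": 75.0}  # approximate horizontal FOV

def horizontal_fov_deg(focal_length_mm: float) -> float:
    """Horizontal angle of view for an ideal rectilinear lens."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

for focal_length in (16, 17, 24):
    print(f"{focal_length} mm lens: ~{horizontal_fov_deg(focal_length):.0f} deg horizontal "
          f"(Kinect WFOV ~{KINECT_HFOV_DEG['WFOV']:.0f} deg, NFOV ~{KINECT_HFOV_DEG['NFOV']:.0f} deg)")
```

Under these assumptions, a 24mm lens sees only about 74° horizontally, so in the wide depth mode most of the depth frame falls outside the color image, whereas a 16-17mm lens (roughly 93-97°) covers far more of it.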

When recording, are you dropping any frames as reported by ‘Frames Dropped’ above the record button? I assume having “shot with a lower resolution” means you changed the sensor’s color mode (which is often helpful for increasing capture performance) - Changing your Cinema camera’s resolution requires you to create a new lens profile and Camera Pairing.

In the Editor, is “Dropped frames, can’t proceed” what you see when you rollover the alert (:warning:) icon? Is there anything else in that warning message, or any messages in the console when you link the Cinema clip to your Depthkit clip? Without linking the Cinema video, does the capture play back smoothly, or is it choppy? If you export the Depthkit clip as a Combined-per-Pixel video without having linked a Cinema video, does the resulting video play back smoothly, or is it choppy?
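
If it’s useful for the dropped-frame question above, one way to sanity-check a clip outside of Depthkit is to compare the number of frames ffprobe actually decodes against what the container’s duration and nominal frame rate imply - a large gap suggests dropped frames. This is just a sketch assuming ffprobe is installed; the filename is a placeholder:

```python
# Hypothetical diagnostic: compare a clip's decoded frame count against the count
# implied by its duration and nominal frame rate. A large gap can indicate
# dropped frames. Assumes ffprobe is installed and on the PATH.
import json
import subprocess

CLIP = "sensor_or_cinema_clip.mp4"  # placeholder filename

result = subprocess.run([
    "ffprobe", "-v", "error",
    "-count_frames",                 # actually decode the stream to count frames
    "-select_streams", "v:0",
    "-show_entries", "stream=nb_read_frames,r_frame_rate,duration",
    "-of", "json", CLIP,
], capture_output=True, text=True, check=True)

stream = json.loads(result.stdout)["streams"][0]
num, den = (int(x) for x in stream["r_frame_rate"].split("/"))
expected = float(stream["duration"]) * num / den  # stream duration may be absent for some containers
decoded = int(stream["nb_read_frames"])

print(f"Nominal rate: {num}/{den} fps over {float(stream['duration']):.2f} s")
print(f"Expected ~{expected:.0f} frames, decoded {decoded}")
```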

Thanks, Cory.
I tested the 24 mm lens (with the A7R3) because I had it available, even though it is not typically recommended. The results were unexpectedly good.
I didn’t change the camera resolution, only the sensor’s. I didn’t receive a low-performance warning in the editor, but I did get a message in the console:


The playback in Depthkit was slightly slow, but exporting to Unity worked well. When using the R3 camera, I didn’t experience this issue and was able to sync in Depthkit without needing to encode the footage. Working with the S3 (same settings except shooting 10-bit) froze the software and did not work without encoding.

@Yonatan, which specific codec are you recording with on the A7Siii? Have you tried the different options available (XAVC S, XAVC S-I, XAVC HS)? If using the h265-based XAVC HS, ensure you have installed the HEVC Video Extensions from the Microsoft Store to enable direct support for h265 codecs in Windows.
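
For anyone following along, a quick way to check which codec and bit depth a clip actually carries (and, if necessary, to transcode 10-bit HEVC footage to 8-bit H.264 before ingest) is sketched below. It assumes ffmpeg/ffprobe are installed; the filenames and encoding settings are my own placeholders, not Depthkit requirements:

```python
# Minimal sketch: report a clip's codec and pixel format, then optionally
# transcode 10-bit HEVC (e.g. XAVC HS) to 8-bit H.264 for easier playback.
# Assumes ffmpeg/ffprobe are installed; filenames and settings are placeholders.
import subprocess

SRC = "A7S3_clip.MP4"        # placeholder: original camera file
DST = "A7S3_clip_h264.mp4"   # placeholder: transcoded output

# Report codec (h264 vs hevc), pixel format (e.g. yuv420p vs yuv420p10le),
# resolution, and frame rate of the source clip.
subprocess.run([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=codec_name,pix_fmt,width,height,r_frame_rate",
    "-of", "default=noprint_wrappers=1", SRC,
], check=True)

# Optional: re-encode to 8-bit H.264, keeping the resolution and copying audio.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-crf", "18",
    "-c:a", "copy",
    DST,
], check=True)
```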