Depthkit Studio Best Practices

This thread has been broken out of a separate thread regarding the Femto Bolt sensor to address some of the other topics raised there.

@Terence - Before getting into the specific topics mentioned, I want to roll up some of the information from the other thread and recap what your success criteria are (please correct me if I am misrepresenting anything):

  • Reliable, sustained capture of 10 Azure Kinects, each set to 1440p color and 640x576 NFOV depth, with your ‘fixed’ studio PC, containing an Intel i9-10980XE CPU and RTX A5000 (Ampere-gen) GPU.
  • Reliable, sustained capture of 7-8 Azure Kinects, each set to 1440p color and 640x576 NFOV depth, with your Dell Precision 7780 mobile workstation, containing an Intel i9-13950HX CPU and RTX 5000 (Ada Lovelace-gen) GPU.
  • Reliable, sustained capture of 3-5 Azure Kinects, each set to 2160p color and 640x576 NFOV depth, with your backup mobile PC, containing an unknown CPU and RTX 3060 Ti (Ampere-gen) GPU.

Topics in order of when they come into play in the Depthkit Studio workflow:

Hardware for 10x1440p Capture: It's unclear which of your systems is experiencing dropped frames or other performance issues. The specs of your tower indicate that the Ampere-gen GPU is likely the bottleneck for 10x1440p capture - The new Ada Lovelace generation of GPUs has unlocked greater capture performance through the re-introduction of additional NVENC hardware resources, but Turing- and Ampere-gen GPUs are more limited. The specs of your Dell Precision 7780 mobile workstation look up to the task, but we have seen performance affected by thermal throttling before, particularly in laptop form factors. In general, we target zero dropped frames, and our Depthkit Studio Hardware systems are rated for sustained capture with none. How many dropped frames you can tolerate is up to you, based on how bothersome the stutter of a dropped frame is within your project's context.
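As a rough, purely illustrative sketch (my own arithmetic, not a Depthkit tool): the aggregate color pixel rate the encoder must sustain scales linearly with sensor count, resolution, and frame rate, which is why the configurations above stress the GPU differently. Real encoder ceilings depend on GPU generation, codec, and driver, so treat this only as a relative comparison:

```python
# Back-of-the-envelope NVENC load estimate for a multi-sensor rig.
# Illustrative only: real encoder ceilings depend on GPU generation,
# codec, and driver - this just compares relative pixel throughput.

FPS = 30  # capture frame rate discussed in this thread

# Color resolutions in play (pixels per frame).
RESOLUTIONS = {
    "1440p": 2560 * 1440,
    "2160p": 3840 * 2160,
}

def encode_load(num_sensors: int, res: str) -> float:
    """Total color pixels per second the GPU encoder must sustain."""
    return num_sensors * RESOLUTIONS[res] * FPS

for sensors, res in [(10, "1440p"), (8, "1440p"), (5, "2160p")]:
    mp = encode_load(sensors, res) / 1e6
    print(f"{sensors} sensors @ {res}: ~{mp:.0f} megapixels/second")
```

Note that 5 sensors at 2160p actually generate a higher aggregate pixel rate than 10 at 1440p (2160p frames carry 2.25x the pixels of 1440p frames), which is consistent with the lower sensor counts listed for 2160p above.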

Sensor Positioning: Though placing sensors closer to your subject yields higher resolution, 50cm (0.5m) is the minimum distance the sensor can detect, and placing sensors at exactly that distance risks parts of your subject coming too close to be captured (due to overexposing the depth camera).

As the distance between the sensors and the subject also defines the size of the capture volume, the sensors usually need to be placed at a distance that frames the subject fully in any position within the capture volume - In practice, this works out to about 1.5m from the nearest edge of the desired capture volume to the sensor for a subject of average height.
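As a minimal sketch of that rule of thumb (my own illustration, assuming the 1.5m standoff quoted above), the circle the sensors stand on follows directly from the capture footprint:

```python
# Sensor-ring sizing rule of thumb from this thread:
# ring radius = capture-footprint radius + standoff from the footprint's edge.

def rig_diameter(capture_diameter_m: float, standoff_m: float = 1.5) -> float:
    """Diameter (m) of the circle the sensors are placed on."""
    return 2 * (capture_diameter_m / 2 + standoff_m)

# A 2m-diameter capture footprint puts the sensors on a 5m-diameter circle.
print(rig_diameter(2.0))  # -> 5.0
```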

Our latest 10-sensor captures have been arranged to resemble the 5-sensor array in our documentation, but with 3 additional sensors mirroring the hero position on the remaining 3 sides of the volume, and 2 more overhead (see the photo above, with one Azure Kinect and one Femto Bolt in each position for side-by-side testing).

Lighting: In general, it is best to set proper lighting and sensor exposure at the time of capture, as "pushing" the exposure of the encoded RGB data in post is limited by the data coming off the sensors and how Depthkit compresses it.

Surface Detection with ToF Sensors: The sensors' inability or partial ability to capture certain hair textures is covered in our material/garment reflectivity documentation. We have found that the most effective solution is to keep dry shampoo (available in different tints for different hair colors) on hand and apply it to hair and other surfaces (perhaps even body paint) as needed. If using aerosols like dry shampoo, spray them well away from the stage, as the airborne particles may interfere with the sensors. The color of a surface doesn't explicitly interfere with any part of the capture, but dark materials may absorb infrared just as they absorb visible light, making them more challenging for the sensors to detect.

Capturing with Different Color Resolutions per Sensor: If your system is unable to maintain capture at a particular specification without dropping frames, you can reduce the color resolution of individual sensors to maintain the performance of the system overall. Detail captured by the higher-resolution sensors is retained, particularly in the combined per-pixel image sequence and textured mesh sequence export formats.

Object/Background Removal: As of Depthkit v0.7.0, released in July 2023, Depthkit Studio no longer uses mattes to remove extraneous depth data from Depthkit Studio captures. This speeds up the post-production process via a new fusion algorithm and a bounding box within the Edit stage. While experimental workflows exist to mask objects from the combined per-pixel video export, the current best practice is to remove from the capture volume any object you don't want embedded in the capture. We recognize there are scenarios where object removal is necessary, and are open to discussing your use cases further to prioritize this functionality in future releases of Depthkit.

Arcturus Integration: Depthkit Studio captures can be exported in Textured Mesh Sequence (OBJ) format, which varies in size depending on the Mesh Density specified in Depthkit's Editor > Mesh Reconstruction settings - Higher density results in higher quality at the expense of storage. These OBJs are immediately ready to import locally into the Arcturus HoloEdit desktop application, which then facilitates data transfer and cloud processing within the application. Support for different playback devices like iPhones, Android devices, and HoloLens is subject to Arcturus' HoloSuite platform support.

Thanks Cory. This is extremely helpful.

Firstly, the rig I want to focus on for our meeting on 24th Jan is the first in your list. This is what I want to use for capturing the Venice Biennale contemporary performers.

Sadly, there is not more I can do to upgrade it, as the existing CPU is locked to the existing motherboard, and moving from the RTX A5000 to an A5000 Ada is too expensive and disruptive to contemplate right now.

I want to get the best out of this rig even if it means dropping a sensor or two. However, it has run as-is at 10x 1440p 30fps for 2-3 minutes with no or extremely few dropped frames (if these introduce jitter, I would prefer to target zero).

I will expand the capture area as you suggest, with sensors arranged in a 3m-diameter circle around a 2m-diameter capture footprint. Space is limited in my studio, particularly if I have lighting on heavy stands, but I will get as close to this as I can. I will have to remove some green screens to free up space, but will try to keep the sides where this occurs as uncluttered as possible.

Prior to the 24th I will organise the rig as you have done in your illustration comparing the Femto Bolt side by side with the AK. 4 Femto Bolts have now shipped. If they (or more) arrive in time for me to upgrade the rig (with Depthkit Studio 0.8.0), would you recommend installing them, with AKs making up the shortfall to 10 sensors? Before or after our meeting? I am not sure how you sync a mixed-sensor rig given the differences in sync cables, etc. I will install in such a way that the first and last sensors in the series can easily be taken out of play if frame drops occur with 10 sensors. We can check prior to calibration and calibrate the number of sensors that works best. I have adapted your calibration marker stand to hold 3x the number of markers while maintaining their size, which in trials further reduces calibration time.



I will take your advice regarding lighting during capture rather than adjusting during post-processing.

That’s it for now. I will respond further later during the day tomorrow.

Thank you for taking the time to address these matters.


PS. The pics are of the second rig with the laptop in our green-screen area. But I cannot rely on this area being available for my use on (immovable) scheduled capture days. Performers are flying to London and staying overnight in hotels.

PPS. All 10 Femto Bolts have now shipped.

@Terence thanks for the update.

Sensor Compatibility: Glad to hear your 10 Femto Bolts are on the way. Depthkit 0.8.0 supports capturing with either the Femto Bolt or the Azure Kinect, but not a mixture of both simultaneously, so be sure that only one type of sensor is connected to your PC at a time. (Depthkit will prompt you to disconnect one type if it detects both.)

Capture Performance: If upgrading hardware isn't an option for achieving sustained capture performance, then lowering the resolution of individual sensors, or removing them from the system altogether until you have a configuration that works, is a workable approach - You'll just need to adjust your sensor positions to accommodate.

Sensor Positions: A quick clarification:

To capture a 2m x 2m area (or 2m-diameter circle), the sensors should be placed ~1.5 meters back from the edge of the capture area, which in this case puts them on a ~5m-diameter circle. Because you're working in limited space, you may only be able to get the corner sensors to this distance, so be as deliberate as possible about aiming each sensor to cover as much of your subject as possible.

Looking forward to further updates.

More great advice. Thank you Cory.

Hi Cory.

Just checking. If the capture circle is 2m in diameter (1m radius) and the sensors need to be 1.5m from its edge, shouldn't this be a 3.5m circle and not 5m?

Thanks.
Terry

@Terence My math is 1m from the center to the edge of the capture volume, plus 1.5m from the edge of the capture volume to the sensor (to fit the subject in frame even when they are at the edge of the volume) = 2.5m total radius from the center to each sensor = 5m diameter.

Sorry. Please ignore. You are right.

Would sensors around a 4.5m diameter circle work? For 2m diameter capture circle? Or is it too tight?


@Terence My recommendation is to look at the color and depth feed of each sensor (simply select it in the Depthkit Core Record tab) and see what each position can capture. If the movement of your subject fits fully within both the color and depth frames, then that distance will work - If it doesn't, you'll need to move that sensor further away, or rely on sensors at other angles to capture the parts of the subject that that sensor cannot frame.

Thanks Cory. I will try that. If you see my latest email today with a new capture studio layout, you will see that the sensors are on the circumference of a 5m-diameter circle, hopefully allowing a 2m-diameter capture footprint.

@Terence I looked over the diagram you sent, and my only adjustment would be to ensure that the overhead sensors align with the front and rear of your subject (as opposed to one sensor over each shoulder) so that they effectively cover areas like the crown of the head and the chest when occluded by hand gestures. Otherwise, it looks great!

Thanks Cory

I only managed a 4.5m diameter for the 4 sensors aligned with the capture stage diagonals around its perimeter. The other 4 sensors are closer in, at the centre of each side, on a 2.7m-diameter circle. Your comments about the two overhead sensors are noted. I have alerted the artists that they must stay within a 1.5m-diameter circle.

We have our online meeting with James tomorrow. All the new sensors and sync devices/cables have arrived. Tomorrow (at home) I receive 1m extension data cables with screw connections from Newnex, USA. So we will install on Thursday, and test next week (with an artist) so we can review outcomes with you on 24th Jan. The final captures of c. 10 artists are 7-29 Feb.

Thanks for your support. It is greatly appreciated.

Kind regards
Terry

@Terence Thanks for the update.

Regarding the Newnex cables, we haven't specifically tested them with the Femto Bolt, so if you run into any connectivity or performance issues, please start a new thread on that topic which includes the exact SKU of the cable you're using.

Hi Cory
I will of course. I have tested AK-approved Newnex cables with the Azure Kinect over distances up to 12m (with your recommended extension cables), and they work perfectly. So I am not expecting any issues, as the 5Gbps data rate is the same.
Kind regards
Terry


Hello Terence, I just read the conversation between you and Cory. How did it work out? How did the recording of the performers go?

Greetings, Martin