DepthKit Studio 7-8 sensor configuration options for best captures

Which configuration is likely to give the best capture of a moving, standing person: 6 or 7 sensors at 1440p 30fps, 8 sensors at 1080p 30fps, or 8 sensors at 1440p 20fps? Stands are movable, including a boom for overhead.
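As a rough way to compare these options, here is a back-of-the-envelope sketch in Python of the aggregate color-pixel throughput of each configuration (color streams only; depth streams, compression, and USB bandwidth are ignored, so treat the numbers as illustrative only):

```python
# Rough comparison of aggregate color-pixel throughput for the candidate
# sensor configurations (color streams only; depth streams, compression,
# and USB bandwidth are ignored).

CONFIGS = {
    "6 x 1440p @ 30 fps": (6, 2560 * 1440, 30),
    "7 x 1440p @ 30 fps": (7, 2560 * 1440, 30),
    "8 x 1080p @ 30 fps": (8, 1920 * 1080, 30),
    "8 x 1440p @ 20 fps": (8, 2560 * 1440, 20),
}

for name, (sensors, pixels, fps) in CONFIGS.items():
    throughput = sensors * pixels * fps  # color pixels per second
    print(f"{name}: {throughput / 1e6:,.0f} Mpixels/s")
```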

What sensor placement is recommended for 7 or 8 Azure Kinects without a green screen? A 6-sensor arrangement with 2 sensors overhead, or one overhead and two face cameras at opposite ends of the capture stage?

I am trying to tune my portable, laptop-based system: a Dell Mobile Precision Workstation 7780 CTO with an Intel Core i9-13950HX (36MB cache, 32 threads, 24 cores (8P+16E), up to 5.5GHz, 55W, vPro).

Hi @Terence - Unless the capture subject has props or materials that would be difficult to capture with fewer sensors, we recommend 6-7 sensors set to 1440p 30Hz.

For additional detail on the face, try setting only the hero sensor to 2160p, leaving all others at 1440p, and see if your computer can capture that combination of resolutions without dropping frames.
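For a rough sense of how much extra encoding load that hero-sensor bump adds, the same kind of back-of-the-envelope arithmetic (assuming the Azure Kinect 2560x1440 and 3840x2160 color modes at 30 fps, color streams only) looks like this:

```python
# Extra color-pixel load from raising only the hero sensor to 2160p,
# with six other sensors left at 1440p (30 fps, color streams only).

FPS = 30
PIX_1440P = 2560 * 1440   # Azure Kinect 1440p color mode
PIX_2160P = 3840 * 2160   # Azure Kinect 2160p color mode

baseline = 7 * PIX_1440P * FPS                 # all seven sensors at 1440p
hero_bump = (6 * PIX_1440P + PIX_2160P) * FPS  # hero at 2160p, rest at 1440p

print(f"All 1440p:     {baseline / 1e6:,.0f} Mpixels/s")
print(f"Hero at 2160p: {hero_bump / 1e6:,.0f} Mpixels/s")
print(f"Added load:    {(hero_bump / baseline - 1) * 100:.0f} %")
```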

For positioning a 7th sensor, see which areas of a 6-sensor capture you would like to have higher fidelity, and place the 7th sensor to target those areas. For example, if your 6-sensor captures have any “webbing” artifacts between the subject’s legs, you can place the 7th sensor below the hero sensor at around knee level to get better valid data in that area.
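Purely as an illustration of that layout (the radius and heights below are placeholder example values, not Depthkit placement specifications), a quick sketch of six sensors on a ring plus a seventh below the hero at knee level might look like:

```python
import math

# Illustrative 6-sensor ring around the subject plus a 7th sensor below the
# hero at knee height. Radius and heights are placeholder example values.

RADIUS_M = 1.8        # horizontal distance from subject (example value)
RING_HEIGHT_M = 1.5   # approximate chest/head height (example value)
KNEE_HEIGHT_M = 0.5   # knee-level height for the 7th sensor (example value)

sensors = []
for i in range(6):
    angle = math.radians(i * 60)  # six sensors spaced 60 degrees apart
    sensors.append({
        "name": f"sensor_{i}" if i else "hero",
        "x": RADIUS_M * math.cos(angle),
        "y": RADIUS_M * math.sin(angle),
        "z": RING_HEIGHT_M,
    })

# 7th sensor: directly below the hero, aimed at the area between the legs
# to reduce "webbing" artifacts.
sensors.append({"name": "knee_fill", "x": RADIUS_M, "y": 0.0, "z": KNEE_HEIGHT_M})

for s in sensors:
    print(f'{s["name"]:>9}: x={s["x"]:+.2f} m, y={s["y"]:+.2f} m, z={s["z"]:.2f} m')
```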

Thanks Cory. I will try that tomorrow.

The good measure studio and I are capturing many of the 25 individual contemporary performers for the Venice Biennale press days, 17-20 April 2024.

[vivaar.org](http://www.vivaar.org/)

We will not be using WebXR as on the promotional website, but an app (yet to be decided) that will allow visitors to experience the performers in mixed reality on a smartphone, in the same outdoor spaces where they performed, until the Biennale ends in November.

If you want more information, or would like to provide support or advice in any way, please let me know.

We will be testing my portable laptop rig at the University of Venice.

Kind regards
Terry

Hi Cory

Improving Video Quality and Performance with AV1 and NVIDIA Ada Lovelace Architecture | NVIDIA Technical Blog (developer.nvidia.com)

My Dell laptop has an RTX 5000 Ada. Should I use the associated drivers, described as giving a 40% improvement in processing multiple video encoding streams? Do you have any experience with these drivers? Any conflict with the latest release of Depthkit Studio?

Many thanks
Terry

@Terence Which drivers are you referring to? We generally recommend using the latest official drivers from Nvidia for any of their GPUs.
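If it helps, one quick way to confirm which driver version a machine is actually running (assuming nvidia-smi is on the PATH, as it is with a standard Nvidia driver install) is a small sketch like this:

```python
import subprocess

# Query the installed Nvidia driver version and GPU name via nvidia-smi.
# Assumes nvidia-smi is on the PATH (it ships with the Nvidia driver).
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "528.95, <GPU name>"
```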

We have tested Depthkit with an RTX 6000 Ada and driver version 528.95 (31.0.15.2895), which was not only successful, but, when paired with an i9-10980XE 20-core CPU, was able to capture 10 sensors set to 1440p color with very few frames dropped. This increase in performance is likely due to the additional NVENC encoding hardware found on the newest generation of professional GPUs, not to the AV1 codec you’re discussing here.

We have another round of testing coming up in which we will test the RTX 4000 Ada, which has the same encoder profile as the RTX 5000 Ada, with the latest drivers. We’ll let you know what our results are.

@CoryAllen when you directly compare the RTX A4000 and the RTX 4000 Ada, I would love to hear about it, as I have an RTX A4000.

BTW the prices of all these cards are a bit insane :slight_smile:

Funny to me as well: back in the days when EFEVE studio was around, I could record “blind” with 8 sensors at 3.8K on a GTX 1660 Super :slight_smile:

Question: … did Depthkit ever consider adding a “blind” recording mode? …

Possible workflow: you do the matching and a test recording in HD (for example), but if you are “only” on an A4000 you switch the visualisation off during recording; afterwards you can activate line skipping in a viewport (if needed) for a fast preview, and then render the 8 x 3.8K with the A4000 :slight_smile: … it should work …

I am aware that even the 2D preview during recording would make recording difficult, and the 3D visualisation is beautiful and fantastic to have. But if someone only occasionally needs this super-high 3.8K per view directly from Depthkit, the user could either upgrade to an RTX 6000 Ada or stay on the A4000 and switch to a blind mode ??? What about that ???

Would love to hear what you think Cory.

Greetings, Martin