Which configuration will likely give the best capture of a moving, standing person: 6 or 7 sensors at 1440p 30fps, 8 sensors at 1080p 30fps, or 8 sensors at 1440p 20fps? Stands are movable, including a boom for overhead.
What sensor placement is recommended for 7 or 8 Azure Kinects without a green screen? A 6-sensor arrangement with 2 sensors overhead? Or one overhead and two face cameras at opposite ends of the capture stage?
I am trying to tune my portable laptop-based system using a
Dell Mobile Precision Workstation 7780 CTO
Intel Core i9-13950HX, 36MB cache, 32 threads, 24 cores (8P+16E), up to 5.5GHz, 55W, vPro.
Hi @Terence - Unless the capture subject has props or materials which would be challenging to capture with fewer sensors, we recommend 6-7 sensors set to 1440p 30Hz.
For additional detail on the face, try setting only the hero sensor to 2160p, leaving all others at 1440p, and see if your computer can capture that combination of resolutions without dropping frames.
For positioning a 7th sensor, see which areas of a 6-sensor capture you would like higher fidelity, and place the 7th sensor to target those areas. For example, if your 6-sensor captures have any “webbing” artifacts between the subject’s legs, you can place the 7th sensor below the hero sensor at around knee-level to get better valid data in that area.
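To compare the configurations being weighed here, a rough proxy is the aggregate color-pixel throughput the capture PC must ingest and encode (sensors × resolution × framerate). The sketch below uses the Azure Kinect color modes mentioned in this thread; it ignores depth streams, the hero-sensor 2160p mix, and encoder overhead, so treat it as a back-of-envelope load estimate, not a quality metric.

```python
# Rough aggregate color throughput for each configuration discussed above.
# Resolutions are standard Azure Kinect color modes (16:9).
RES = {
    "1080p": 1920 * 1080,
    "1440p": 2560 * 1440,
    "2160p": 3840 * 2160,
}

def pixel_rate(sensors: int, mode: str, fps: int) -> int:
    """Total color pixels per second across all sensors."""
    return sensors * RES[mode] * fps

configs = [
    (6, "1440p", 30),
    (7, "1440p", 30),
    (8, "1080p", 30),
    (8, "1440p", 20),
]

for sensors, mode, fps in configs:
    rate = pixel_rate(sensors, mode, fps)
    print(f"{sensors} x {mode} @ {fps} fps -> {rate / 1e6:.0f} Mpx/s")
```

Running this shows that 7 × 1440p @ 30fps is the heaviest load of the options, while 8 × 1080p @ 30fps is the lightest; so if a 7-sensor 1440p setup captures without dropped frames, the other arrangements should also fit within the same budget.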
Thanks Cory. I will try that tomorrow.
The good measure studio and I are involved in capturing a large number of the 25 individual contemporary performers for the Venice Biennale press days, 17-20 April 2024.
We will not be using WebXR as on the promotional website, but an app (yet to be decided) that will allow visitors to experience the performers in mixed reality on a smartphone, in the same outdoor spaces where they performed, until the Biennale ends in November.
If you want more information or to provide support or advice in any way please let me know.
We will be testing my portable rig with laptop at the University of Venice.
Improving Video Quality and Performance with AV1 and NVIDIA Ada Lovelace Architecture | NVIDIA Technical Blog
My Dell laptop has an RTX 5000 Ada. Should I use the associated drivers, described as giving a 40% improvement when processing multiple video encoding streams? Do you have any experience with these drivers? Any conflict with the latest release of Depthkit Studio?
@Terence Which drivers are you referring to? We generally recommend using the latest official drivers from Nvidia for any of their GPUs.
We have tested Depthkit with an RTX 6000 ADA and driver version 528.95 (188.8.131.5295), which was not only successful, but when paired with an i9-10980XE 20-core CPU was able to capture 10 sensors set to 1440p color with very few dropped frames. This increase in performance is likely due to the additional NVENC encoding hardware found on the newest generation of professional GPUs, not the AV1 codec you're discussing here.
We have another round of testing coming up in which we will test the RTX 4000 ADA, which has the same encoder profile as the RTX 5000 ADA, with the latest drivers. We'll let you know what our results are.