10+ Azure Kinects

I’ve just connected 12 sensors to Depthkit Studio and it seems to work OK (a 2-minute capture at 1080p without dropping frames on an NVIDIA GP100; I also tried 720p with livestreaming enabled and it runs smoothly). Is there any software restriction later in the chain that I am missing (i.e. calibration, refinement, exporting, Unity expansion plugins), or can I do full tests with a 12-sensor setup?


Hi, @DAMIANTURKIEH. Amazing that you have successfully captured and livestreamed with 12 sensors! Apparently the most recent Azure Kinect SDK lifted the limit from 10 to 100.

As far as I know, the greatest risk in using more than 10 sensors is that the interleaved timing pattern of the ToF emitters is designed for up to 10 devices, so watch for any signs of sensor-to-sensor interference.

As is, our Studio Expansion packages are coded to support only 10 sensors, but you’re welcome to modify the shader to lift this limit. Let us know how it goes!


Thanks @CoryAllen, I’ll test it. Which line(s) of code should I change to lift the 10-sensor limit?

@CoryAllen I’ve just tried it, and the Studio Expansion packages seem to work OK with 12 sensors without changing any shader. However, the Studio Mesh Sequence workflow is giving me some problems: when I drag the image sequence into the timeline and play it, I get the error “RenderTexture.Create failed: requested size is too large.”
The textures are 16704x2880 pixels (12 sensors at 1440p, refined with some cropping, and exported as a multiperspective CPP PNG image sequence directly from Depthkit without reordering into rows).
What is the maximum allowed texture size?
I want to export a PLY sequence at the best quality for use in other apps (HoloSuite/Nira/Unreal).
Thanks
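For reference, here is a quick sketch of the arithmetic behind that atlas size. The per-sensor tile of 1392x2880 is inferred from the numbers above (16704 / 12 sensors wide, color and depth presumably stacked vertically) and is an assumption, not a documented Depthkit constant:

```python
# Sanity-check the single-row atlas dimensions.
# Assumption: each sensor contributes a cropped 1392x2880 tile,
# and all 12 tiles sit side by side in one row.
SENSORS = 12
TILE_W, TILE_H = 1392, 2880

atlas_w = SENSORS * TILE_W  # 12 tiles across
atlas_h = TILE_H            # one row tall

print(atlas_w, atlas_h)     # 16704 2880
```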

@DAMIANTURKIEH I believe the maximum texture size in Unity 2020 is 16384 pixels, which is just slightly smaller than your width. I’m going off of a forum post and haven’t verified this empirically.

If you were to use our multi-row stacking, it might work!
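To illustrate why multi-row stacking could help, here is a sketch that lays the same tiles out over several rows and checks each layout against the 16384 px limit. The 1392x2880 per-sensor tile size is assumed from the dimensions quoted earlier in the thread:

```python
# Compare row layouts of 12 sensor tiles against Unity's texture limit.
# Assumption: each tile is 1392x2880 (inferred from 16704x2880 / 12).
MAX_TEX = 16384
TILE_W, TILE_H = 1392, 2880
SENSORS = 12

for rows in (1, 2, 3):
    cols = -(-SENSORS // rows)  # ceiling division: tiles per row
    w, h = cols * TILE_W, rows * TILE_H
    fits = w <= MAX_TEX and h <= MAX_TEX
    print(f"{rows} row(s): {w}x{h} -> {'fits' if fits else 'too large'}")
# 1 row  -> 16704x2880, too large
# 2 rows -> 8352x5760, fits
# 3 rows -> 5568x8640, fits
```

So splitting the atlas into two rows of six sensors would already bring both dimensions under the limit.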