I just have some theoretical questions to help me understand the potential limits of Depthkit livestreaming quality.
Does Depthkit Studio support multi-GPU for its recording/livestreaming process?
Has anyone tested Depthkit Studio on a multi-GPU setup? If so, how big was the performance improvement, which CPU were you using, and was there a CPU bottleneck?
@BUTNV - Depthkit does not explicitly support multi-GPU, so any benefit from using multiple GPUs would only occur at the driver level. This is not something we have tested.
Is there a particular use case or performance target you have in mind for livestreaming?
Thanks for the response.
We are mainly focusing on a 10-sensor livestream use case. We can currently run it at 1080p, although the quality when streamed into Unity is lower than that of a recording.
I’m just planning ahead for potential future hardware upgrades. I saw your post about the 6000 Ada tests, which was very helpful in that regard.
@BUTNV - Thanks for the context.
Depthkit recordings currently use the updated meshing algorithm introduced in version 0.7.0 in July of last year. It generates meshes directly in Depthkit without the need for greenscreens, and those meshes can be exported straight out of Depthkit in widely-used formats like OBJ (with PLY and Draco coming soon!). This in-Depthkit meshing also cleans up the depth data found in our exported Combined-per-Pixel format.
Our roadmap includes plans to bring these newer meshing and cleanup stages to our livestream pipeline. If you’re interested in ways to accelerate this, please reach out to email@example.com and we’ll discuss the details there.