Another question from David:
Is there a way to improve the “stitching” and “sync” between the different perspectives when working in Unity? (We are working with 5 cameras.)
Thanks!
Olivier
Could you post some example videos of what you are seeing? There are lots of different things that can interfere with a good stitch, so seeing the type of issue you are encountering will let us advise on specific improvements.
If you are having sync issues, this is likely a hardware issue with sync during capture, and has nothing to do with the expansion package. Read about synchronization setup here.
Thanks @James. I will write back with more information.
It looks like you may be using the Depthkit Core renderer on Depthkit Studio Data.
Can you confirm you are using the Depthkit Studio Expansion or Depthkit Studio Lite renderer?
Hello @James, we are using Depthkit Core and Depthkit Studio Lite. Thanks!
Hi @OlivierAsselin Got it, thanks for the info.
The trade-off with the Studio Lite renderer is that it bypasses the Fusion step that the classic Studio renderer uses. This is a huge performance gain, but it also introduces gaps wherever there is no camera coverage.
However, based on the screenshots provided, you should be able to get much better results with the Studio Lite renderer. It has settings for refining edge quality that should remove the jagged edges you are seeing.
Take a look at the masking settings in the Configuring and Optimizing section of the Studio Lite documentation to see if you can get better quality.
The other approach to get Fusion working in mobile AR is to adopt a pre-compressed workflow through one of our third-party integrations: Microsoft Mixed Reality Capture or Arcturus HoloEdit. These workflows allow you to export fused geometry sequences from Depthkit using the Mesh Sequence Exporter, then compress them in those systems for high-performance playback that maintains quality.
To test this direction, I recommend seeing what your captures look like when rendered through the classic Studio Mesh source, without worrying about playback performance yet. If you can achieve acceptable quality with that renderer, these integrations will let you get performant playback of those captures on a mobile device.
Thanks @James for the detailed message.
Hi James, our team is unable to obtain a fused video sequence with the Mesh Sequence Exporter. What we obtain is a PLY sequence: in other words, 30 individual files per second that aren’t fused together. Are we missing a step in our export process? Thanks for your help.
This is the expected behavior of the exporter: it creates a sequence of geometry and texture files, each representing one individual frame. To clarify what I meant by fusion: each frame in the sequence is itself one fused piece of geometry.
If you want to create a single file out of the sequence, you can use Arcturus HoloEdit or Microsoft Mixed Reality Capture to create a playable sequence file. There are also plugins for tools like Blender that let you play back geometry sequences as well.
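If it helps to sanity-check an export before bringing it into one of those tools, you can inspect the per-frame files with a short script. This is just a sketch (not part of Depthkit, and the file naming is an assumption): it reads each PLY header, which is plain ASCII text even for binary PLY files, to report how many frames you have and how many vertices each frame contains, assuming one `.ply` file per frame at 30 fps.

```python
from pathlib import Path

def ply_vertex_count(path):
    """Return the vertex count declared in a PLY file's header.

    The header is always ASCII, so this works for both ASCII
    and binary PLY files; we stop reading at 'end_header'.
    """
    count = 0
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])
            if line == "end_header":
                break
    return count

def sequence_stats(folder, pattern="*.ply", fps=30):
    """Summarize a per-frame PLY export: frame count, duration, vertices.

    Assumes frames sort correctly by filename (e.g. zero-padded
    numbering like frame_0000.ply, frame_0001.ply, ...).
    """
    frames = sorted(Path(folder).glob(pattern))
    return {
        "frames": len(frames),
        "duration_s": len(frames) / fps,
        "vertices": [ply_vertex_count(p) for p in frames],
    }
```

A quick check that `stats["frames"]` is roughly 30 × your clip length, and that the per-frame vertex counts look consistent, confirms the exporter produced the full sequence before you hand it off for compression.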