@clydedesouza Happy to chat about this in depth, although I want to make sure we're on the same page about what you're seeing in the depth data. Am I correct in understanding that you do not see these edges in the Azure Kinect viewer, or are you referencing the color video from the sensor?
The depth data will have this edge because of the distance between the depth ranges in this clip. This is actually helpful when you bring the clip into Unity or a similar tool, because it keeps the separate depth ranges from being merged together. That said, if you'd rather fill in these edges, you can do so with the Refinement workflow. It somewhat depends on what your subject is and how you want it to appear in Unity or your preferred platform.
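To make the idea of those edges concrete: wherever the foreground subject ends and the background begins, neighboring depth samples jump by a large amount, and that jump reads as a hard edge in the depth data. This is a generic NumPy sketch of the concept (the depth values and 0.5 m threshold are made up for illustration), not anything from Depthkit's actual pipeline:

```python
import numpy as np

# Hypothetical 1-D slice of a depth map (in meters): a subject at
# ~1.2 m standing in front of a background wall at ~3.0 m.
depth = np.array([3.0, 3.0, 1.2, 1.2, 1.2, 3.0, 3.0])

# The jump between neighboring samples is large exactly where the
# foreground range meets the background range.
jump = np.abs(np.diff(depth))

# Flag discontinuities larger than an illustrative 0.5 m threshold --
# these are the "edges" between depth ranges.
edge_mask = jump > 0.5
print(edge_mask.tolist())  # True only at the subject/background boundaries
```

Filling in those edges (as the Refinement workflow can) amounts to smoothing or interpolating across exactly these flagged discontinuities.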
As for what Depthkit can capture, we optimize for high-quality human captures, but the options are wide open. You can very much treat your Azure Kinect like a video camera and capture environments as well as people. The main obstacle is shiny or reflective objects, which can introduce more depth noise, but it is totally possible! A really beautiful project, Vestige, is a great example of a piece that experiments with a similar idea of capturing a subject's environment, and they take it in a wonderful, abstract direction. Hope this is helpful! Let me know if you want to hop on a screen share or call to chat more in depth about your specific captures.