A better download link than AWS please

I’m a new Depthkit Pro subscriber, and my Azure Kinect has arrived.
Sadly, for the past 2 hours I’ve been trying to download the ~50 MB Depthkit installer, and it’s failing miserably every time.
Please let AWS know about this, or is it hosted on a very low tier at AWS?

An alternate mirror would be helpful for paid subscribers.
Kind Regards.

Thank you for letting us know, Clyde! This download should only take a few moments. Our apologies for the inconvenience.


I managed to download it later that evening.
I’m liking the whole UI of the software; it’s very intuitive to use. I’ve come across one problem, though I’ve not tried things out extensively.

  • The problem I’m getting is a black “outline” around the edges in my Combined Per Pixel video. See the still frame attached.

  • The main video (captured at 1080p, found in the Take___sensor_ folder) from the Azure Kinect does not have this border.
    Any ideas what could be causing this?

Regards,

Hey @clydedesouza! There was a large AWS outage recently that affected anyone using AWS in our region, so it was unfortunate that you happened to try downloading Depthkit at the same time. Everything should be working fine now, though; let us know if you have any other issues!


Thanks, Kyle,
Good to know it was just an anomaly. I did manage to download it later that evening.
Regards

Hey @clydedesouza, regarding your previous post on the exported data:

  1. Optimize your capture by adjusting the depth range to focus on the volume of your subject.
  2. Refine your depth data with the Refinement workflow to fill in any missing depth information (as well as to clean up remaining depth artifacts).

I hope this is helpful! Let me know if you have any other questions. This tutorial may be helpful as well.


Hi Jillian,
I’ll be doing a new set of tests this weekend. However, would it then be right to say that Depthkit is biased toward “human/subject” captures?

I ask because while people are one of the main subjects of interest to me, I also want to produce “slice of life” style volumetric pieces where a little more of the furniture or surrounding context is captured, so that an audience can “step into” the scene, albeit within the limits of a single sensor’s frustum.

This would be particularly interesting and important once Depthkit Studio comes out.

Is there a more in-depth explanation of why the black border/edge appears, even when the camera is head-on?
Regards,

@clydedesouza Happy to chat about this in depth, although I want to make sure I’m on the same page with you in terms of what you are looking at in the depth data. Am I correct in understanding that you do not see these edges in the Azure Kinect viewer, or are you referencing the color video from the sensor?

The depth data will have this edge due to the distance between the depth ranges in this clip. This is helpful when you bring the clip into Unity or a similar tool, so the depth ranges aren’t merged together. That said, this can be altered with the help of the Refinement workflow if you want to fill in these edges. It somewhat depends on what your subject is and how you want it to be seen in Unity or your preferred platform.

As for what Depthkit can capture, we optimize for high-quality human captures, but the options are limitless. You can very much treat your Azure Kinect like a video camera and capture environments as well as people. The main obstacles are shiny objects, which can introduce more depth noise, but it is totally possible! A really beautiful project, Vestige, is a great example of a piece that experiments with a similar idea of a subject’s environment, and they really take it in a wonderful, abstract direction. Hope this is helpful! Let me know if you want to hop on a screen share or call to chat more in depth about your specific captures. :nerd_face:

Hi Jillian,
I was a bit late in replying, but I’m now seriously evaluating the Kinect route versus depth-from-stereo (depth maps) for volumetric filmmaking with a pseudo-6DoF effect.

Both approaches have their pros and cons, but overall I feel I’m leaning more toward the depth-from-stereo approach.

I’ll possibly end up using the Azure Kinect workflow only for green-screen human composites, wherein the original depth map captured from the Azure Kinect (even via their own k4arecorder app) will be converted to grayscale.

Through trial and error, I’ve managed to massage a good grayscale depth map out of the original 16-bit depth maps from the Kinect.
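For anyone curious, the core of my conversion looks roughly like the sketch below (OpenCV/NumPy; the near/far clip distances are values I tuned for my own scene, so treat them as placeholders):

```python
import cv2
import numpy as np

# Raw 16-bit depth frame from the Kinect (values in millimetres).
depth16 = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)  # dtype: uint16

# Clip to the range of interest (placeholder values; tune per scene).
near_mm, far_mm = 500, 3500
clipped = np.clip(depth16, near_mm, far_mm).astype(np.float32)

# Normalize and invert so near = white (255) and far = black (0).
gray = 255.0 * (far_mm - clipped) / (far_mm - near_mm)

# The sensor reports invalidated pixels as depth 0; force those to black too.
gray[depth16 == 0] = 0

cv2.imwrite("depth_gray8.png", gray.astype(np.uint8))
```

Clipping before normalizing is what makes the most of the 8 bits; without it, the full 16-bit range gets squeezed and the subject ends up in a narrow band of grays.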

Suggestion:
I see a way to keep subscriptions flowing for Depthkit Pro by adding the following features:

  • Add the ability to export the depth map in a standard 8-bit grayscale format (white for near, black for far).

  • A script/shader for Unity under the Depthkit plugin to output a whole scene in the Plasma/color depth style, which could then be imported back into Depthkit Pro to be manipulated and/or converted to grayscale depth (roughly the mapping sketched below).
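To make that second suggestion concrete, this is roughly the mapping I have in mind, sketched in Python with matplotlib rather than as an actual Unity shader (the colormap name and 256-entry ramp are my assumptions; the shader would apply the same lookup per pixel):

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

# 8-bit grayscale depth (white = near), e.g. from the earlier sketch.
gray = cv2.imread("depth_gray8.png", cv2.IMREAD_GRAYSCALE)

# Build a 256-entry Plasma ramp and use each gray value as an index into it.
ramp = (plt.get_cmap("plasma")(np.linspace(0.0, 1.0, 256))[:, :3] * 255).astype(np.uint8)
color = ramp[gray]  # H x W x 3, RGB

# Converting back to grayscale depth would be the inverse lookup: match
# each pixel against the ramp and keep the index it came from.
cv2.imwrite("depth_plasma.png", cv2.cvtColor(color, cv2.COLOR_RGB2BGR))
```

My hope is that a color ramp like this would survive video compression a bit better than a single gray channel, though I haven’t verified that.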

Meanwhile, back to the original “border edges” issue: I’ve found out from Azure Kinect’s own documentation that the problem stems from invalidated pixels caused by the exposure interval in the raw capture.
What they don’t tell you is how to fix or mitigate this, which in essence severely cripples any environment capture with the Kinect (the resulting video will have ghastly edges at the wrong depth).

Image attached from the Azure Kinect documentation (invalidatedPixels).
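The mitigation I’ve been experimenting with is a band-aid rather than a fix: treat the zero-depth pixels as a mask and inpaint them from their neighbours. A rough sketch (OpenCV again; the dilation size and inpaint radius are just values that happened to work for me):

```python
import cv2
import numpy as np

# Raw 16-bit depth frame; the Kinect reports invalidated pixels as 0.
depth16 = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)

# Mask the invalidated pixels, grown slightly to also catch the noisy
# fringe right at the silhouette edge.
invalid = (depth16 == 0).astype(np.uint8) * 255
invalid = cv2.dilate(invalid, np.ones((3, 3), np.uint8), iterations=1)

# Fill the masked regions from the surrounding valid depth.
filled = cv2.inpaint(depth16, invalid, 3, cv2.INPAINT_TELEA)

cv2.imwrite("depth_filled.png", filled)
```

The inpainted values are guesses, so this helps flat backgrounds much more than fine silhouettes, but it does kill the worst of the black fringe.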

Thanks for this update, Clyde! I’d love to learn more about your suggestions. Would you be interested in chatting more in depth so we can better understand your needs? If you’re available, I’ll follow up via email to coordinate.