Video compression messing up depth data

Knowing that Depthkit uses the mp4 video data to generate the 3D meshes in Unity, I’ve been tinkering around with editing and processing the video files. What I’ve noticed is that the depth colour data (even when I’m exporting clips at max quality from Premiere) always seems to distort and develop shelves/ripples/weird edging when it’s back in Unity.

Anyways, my question is: is anyone else successfully editing the videos and managing to preserve the depth data so the shape is faithfully maintained?

Hi Ben,

Great question. The combined per pixel format is great because it lets you take advantage of video workflows, with benefits like good compression and audio sync. The risk, as you have discovered, is that it can be sensitive to compression, and even more so to color space conversions. If you are exporting from Premiere at max quality and still seeing shelves/ripples in Unity, it is most likely a color space conversion issue. We have not done a certified set of tests for Premiere encoding; we recommend that all professional workflows use FFMPEG, as it provides greater control and is free.

We’ve just updated our documentation and added an asset encoding guide focused on the best practices that we’ve developed using FFMPEG. Check it out here: Image Sequence encoding

While it’s designed with Depthkit Studio (multi-sensor) in mind, everything applies to Depthkit Cinema and Depthkit Core (single-perspective) captures as well.

Let us know if that helps. If you do find a workflow for Premiere that preserves the color space and compression, also please share!


Thanks for your response James, that’s really helpful to know.

As far as I’m aware, FFMPEG can certainly recompress video, but since it’s command-line controlled with no GUI or editing tools, it offers no way to actually edit the video beyond trimming or reprocessing it. So any cleanup would be almost impossible using it alone…

But it looks like it’s possible to export from Premiere using FFMPEG… so maybe I’ll give that a go!


Hey @Psicon_Lab,

Yes, command line interfaces are pretty clunky! One thing I want to make clear, since you mention ‘recompressing the video’: we recommend working from Combined Per Pixel image sequence exports, which are uncompressed. Depthkit’s native h.264 video exports apply compression that can’t be adjusted. They work great for fast exports, but for quality-sensitive workflows it’s always best to start from image sequences out of Depthkit.

A few hints for working with Premiere. First, make sure that the “color range” is set to PC, not TV. TV color range clips values at the top and bottom of the range (in 8-bit, limiting them to 16–235), which creates artifacts in the depth. See more here in a rant from the one and only John Carmack: John Carmack 7 hrs · Adventures with ffmpeg and color ranges. A video legacy iss... | Hacker News
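To make the TV-range issue concrete, here’s a toy sketch (plain shell/awk, not a Depthkit tool) that round-trips a full-range 0–255 ramp through TV range (16–235) and counts how many values no longer come back unchanged; those collapsed codes are exactly the kind of quantization that shows up as shelves in the depth:

```shell
# Round-trip a full-range (0-255) ramp through TV/limited range (16-235),
# then count how many values are altered by the trip. 256 input codes get
# squeezed into 220 output codes, so neighbouring depth values collapse.
lost=$(awk 'BEGIN {
  lost = 0
  for (v = 0; v < 256; v++) {
    tv   = int(16 + v * 219 / 255 + 0.5)     # full range -> limited range
    back = int((tv - 16) * 255 / 219 + 0.5)  # limited range -> full range
    if (back != v) lost++
  }
  print lost
}')
echo "values changed by TV-range round trip: $lost"
```

Any value that doesn’t survive the round trip is depth precision you’ve permanently lost, even before video compression touches the file.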

Second, the Color Space should be explicitly set to BT.709.

If you do both of these things, you should be able to push compression pretty hard without introducing too many artifacts. Using FFMPEG, we have been compressing 5- and 10-camera Depthkit Studio outputs as low as 5Mbps without significant quality loss!
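For reference, here’s a sketch of the kind of FFMPEG invocation those hints add up to. The paths, framerate, and bitrate below are placeholders for illustration; the color flags are the part that matters (full/PC range plus explicit BT.709 tagging):

```shell
# Color flags per the hints above: full (pc) range, explicit BT.709 tagging.
COLOR_ARGS="-color_range pc -colorspace bt709 -color_primaries bt709 -color_trc bt709"

# Example encode of a Combined Per Pixel image sequence (uncomment and point
# it at your own frames; the framerate and bitrate here are illustrative only):
# ffmpeg -framerate 30 -i frames/frame_%05d.png \
#   -c:v libx264 -pix_fmt yuv420p $COLOR_ARGS -b:v 5M output.mp4
echo "$COLOR_ARGS"
```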

Let us know how your tests go!

Hi @James ,
ok great thanks for the tips! I’ll try it out today hopefully.

Yeah, I don’t have an image sequence to work from, only the exported video, as (I think) the Depthkit project has gotten messed up: when I open the folder with the clips in it using DK, I can no longer access the EDIT tab… the clips are not listed.

Is it possible to edit DK clips when all you have is the folder containing the clip data (“TAKE_X_X_X/_sensor”, with the “depth” and “sensor01” folders inside)? I appreciate this is probably a question for another thread…!

Hi, @Psicon_Lab. In order to re-export a full quality image sequence, you do need an intact project, complete with the project JSON and the data bin for each clip you want to work on, all in the same directory structure in which they were originally recorded. Is it possible that the pieces simply got moved around?