Hi,
We are using a 4-Kinect setup, including one sensor paired with a cinema camera. We are encoding the sequence as described in the documentation here.
Are there any recommended settings for this particular setup, such as the number of rows? Same question for the FFmpeg settings.
@GlennWustlich, the formatting and encoding options for your asset are largely determined by the way it will be rendered/played back.
The multi-row formatting process is only necessary if you are trying to fit your asset within the constraints of video codecs and graphics processing pipelines. One common target resolution is 4096x4096, so if, for example, your 4-sensor asset is 6400x1400, you can reformat it into two rows to make it 3200x2800 (half the width, double the height). In general, you want to choose the number of rows that gives the resulting asset a similar width and height. However, if you are creating geometry sequences for use in 3rd-party pipelines/software, you can keep the image sequences full size in a single row.
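To make the arithmetic concrete, here is a small sketch (the helper names are just for illustration, not part of Depthkit) that computes the multi-row dimensions and picks the row count whose result is closest to square:

```python
import math

def multi_row_size(width, height, rows):
    """Splitting the strip into `rows` segments divides the width by
    `rows` and stacks the segments, multiplying the height by `rows`."""
    assert width % rows == 0, "width must divide evenly into rows"
    return width // rows, height * rows

def best_row_count(width, height, max_rows=8):
    """Pick the row count whose output aspect ratio is closest to 1:1."""
    candidates = [r for r in range(1, max_rows + 1) if width % r == 0]
    return min(candidates,
               key=lambda r: abs(math.log((width / r) / (height * r))))

# For a 6400x1400 4-sensor asset:
print(multi_row_size(6400, 1400, 2))  # → (3200, 2800)
print(best_row_count(6400, 1400))     # → 2
```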
Hi Cory, thanks for the reply.
Unfortunately, we did not manage to get good results with it. I have sent you an email with one of our captures. It would be great if you could have a look! We really need this problem solved, as we cannot continue our production in the current state.
@GlennWustlich It looks like the asset you emailed has Refinement enabled, but no Refinement masks applied. Refinement masks are the most effective way to clean up your asset and get rid of extra geometry (like the floor, but also random bits of geometry rendered around your subject).
Correct, I am aware of that. The example is just to showcase the difference between the out-of-the-box CPP video export from Depthkit and the image sequence export + FFmpeg conversion. Both use the same settings in Depthkit.
The example has no masks because, due to the resolution, Depthkit would otherwise not export the video, as noted in the documentation.
In the video you can see that the difference in quality between the two is huge.
Any idea what is causing this? Is it a setting in the Python script, or something in the FFmpeg conversion settings?
@GlennWustlich Ah, I see now what you’re comparing. If you run ffprobe on the resulting ffmpeg-encoded video, what does it report? Specifically, with regard to the color space and color matrix metadata (as explained here)?
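For example, a command along these lines (the input filename is just a placeholder) will print the relevant fields of the video stream:

```shell
# Inspect the color metadata ffmpeg wrote into the encoded video.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
  -of default=noprint_wrappers=1 \
  output.mp4
```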
Thanks for the additional info. I’ve taken a closer look at the command you’re running, and I was able to reproduce the issue on my end. The command you’ve provided does not specify the color space and color range settings within the video filter configuration. While the command sets the metadata appropriately, the underlying video data is not actually converted to match it. Please add the following option to your ffmpeg command to configure the video scaler appropriately:
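As a sketch (the target values here are assumptions — BT.709 limited range is a common playback target, but match them to your footage and the docs):

```shell
# Sketch only: out_color_matrix / out_range are options of FFmpeg's scale
# filter and configure the pixel conversion itself, not just metadata tags.
ffmpeg -framerate 30 -i frames_%05d.png \
  -vf "scale=out_color_matrix=bt709:out_range=tv" \
  -pix_fmt yuv420p -c:v libx264 \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv \
  output.mp4
```

The key point is that the `-colorspace`/`-color_range` output options only tag the stream; the `scale` filter options are what make the encoded pixel data agree with those tags.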