Question for the future roadmap: higher resolution for the multi-sensor PNG sequence?

Hello Depthkit Team,

Is there a chance to get a different export from a normal 6-Azure-sensor setup?

I recently made recordings with Depthkit Studio using 6 attached Azure sensors.

I observed that for the PNG export I first receive one raw PNG file (4512x1184), which I then transcode into the two-row raw PNG sequence, RGB + depth (2256x2366). But this PNG is somehow at half the quality, as each sensor recorded at 2560x1440; the RGB is placed next to the depth, so the resolution is halved.

The multi-sensor PNG workflow (each RGB at 592x752) definitely gives higher-quality results than the multi-sensor MP4 export (each RGB at 320x270), but the original resolution of each RGB is 2560x1440 (minus the depth octagon corner reduction, roughly 1184x1440), so in the PNG export the quality is at around 25%, and in the MP4 export at around 1/8.
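As a back-of-the-envelope check of these numbers (a sketch only; the 1184x1440 "usable" source area per sensor is the estimate quoted above, not an official figure):

```python
# Rough pixel-count comparison of the per-sensor export sizes quoted
# above against the ~1184x1440 area left after the depth octagon
# corner reduction.

def pixel_ratio(export_wh, source_wh=(1184, 1440)):
    """Fraction of source pixels retained per sensor in an export."""
    ew, eh = export_wh
    sw, sh = source_wh
    return (ew * eh) / (sw * sh)

png_ratio = pixel_ratio((592, 752))   # multi-sensor PNG export
mp4_ratio = pixel_ratio((320, 270))   # multi-sensor MP4 export

print(f"PNG export keeps ~{png_ratio:.0%} of the source pixels")
print(f"MP4 export keeps ~{mp4_ratio:.0%} of the source pixels")
```

The PNG ratio lands around 26%, which matches the "quality is on 25%" estimate above.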

I am aware that there is also the additional external-camera solution, replacing the paired RGB (which I haven't tried yet), but it would already be amazing to also get the original RGB stream of the Azure, with an option for manual exposure settings, plus the option not to halve the size of the original stream in the PNG export.

It would already be amazing to be able to decide inside Depthkit, where the program still has access to the original-sized sequences, to export a 2 (rows) x 3 (sensors) layout with a backdrop at original size, or shrunk by a scale factor of 0.9, 0.8, 0.7, 0.6, 0.5 and so on, and then later, during the FFmpeg process, use the matching individually chosen backdrop size, also noting the scale factor inside the meta file, so that the Unity plugin can easily know which scale factor was used for the MP4 file.
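Something like the following, purely as a hypothetical illustration of the proposal (the `scaleFactor` key and this layout are my invention, not part of Depthkit's actual metadata format):

```python
import json

# Hypothetical sketch of the proposed metadata extension: record the
# chosen shrink factor next to the layout info, so a player plugin
# could undo the scaling on import.
meta = {
    "layout": {"rows": 2, "columns": 3},        # 2x3 sensor packing
    "scaleFactor": 0.8,                          # backdrop shrunk to 80%
    "perSensor": {"width": 2560, "height": 1440},
}

print(json.dumps(meta, indent=2))
```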

Is this maybe on the near-future roadmap?

Greetings, Martn

Hi Martn,

Depthkit Studio does maintain the original resolution of the Azure Kinect footage, with one caveat which may be what you are encountering.

If you are using the Refinement workflow, the output resolution will match the color frame: the depth resolution will be enhanced to match the color stream. Keep in mind that any crop applied will reduce the output frame size, but this is a crop, not a downscale.

If you are not using Refinement, then the output resolution will use the Azure Kinect depth resolution.

The caveat I mention is that in Depthkit Studio exports, the first perspective exported sets the resolution of the subsequent perspectives, so there may be some light scaling applied to the perspectives through this process.

There is no difference in export resolution between the Depthkit Studio Multi-sensor Video and Image Sequence.

Hope this helps clarify the Depthkit Studio export resolutions.

Hello James,

I didn't use the Refinement workflow (only background removal).

I recorded with 6 sensors, and I didn't know whether I could export each of the 6 sensors individually and then reassemble them in Unity.

So I went for the Multi Sensor PNG sequence export.

And the results are as I described.

I exported a sequence with the standard settings for the multi-sensor PNG export,

and the resolution of this was 4512x1184 for all 6 sensors in one PNG file.

Then I followed the description of how to transform it into a two-row PNG with the downloadable example Python script, and the resolution changed to 2256x2366.

The uploaded picture has the background removed; that is the only change made in this example.
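For readers following along: I don't know the actual example script's internals, but a minimal sketch of such a rearrangement (assuming six equal-width sensor tiles side by side, restacked into two rows of three) could look like this:

```python
# Minimal sketch (not the actual Depthkit script) of rearranging a
# one-row strip of 6 sensor tiles into a 2-row x 3-column layout.
# A frame is modeled as a list of pixel rows; real code would use an
# image library such as Pillow on the exported PNGs.

def to_two_rows(frame, tiles=6, columns=3):
    """Split `frame` into `tiles` equal-width tiles and restack them."""
    height = len(frame)
    width = len(frame[0])
    tile_w = width // tiles
    rows = tiles // columns
    out = []
    for r in range(rows):
        for y in range(height):
            line = []
            for c in range(columns):
                t = r * columns + c  # tile index in the original strip
                line.extend(frame[y][t * tile_w:(t + 1) * tile_w])
            out.append(line)
    return out

# Tiny demo: a 1-pixel-high "strip" of 6 one-pixel tiles 0..5
strip = [[0, 1, 2, 3, 4, 5]]
print(to_two_rows(strip))  # → [[0, 1, 2], [3, 4, 5]]
```

This also shows why the two-row output is roughly half as wide and twice as tall as the single-row strip.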

… and then I used the example code for the FFmpeg command-line transcode into the MP4 file,

and the MP4 contains all 6 sensors in a resolution of:

[Screenshot: video resolution]
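I don't have the exact example command at hand, but the kind of FFmpeg invocation used for a PNG-sequence-to-MP4 transcode can be sketched like this (the input pattern, frame rate, and output name are placeholders, not Depthkit's actual example; the flags themselves are standard FFmpeg options):

```python
import shlex

# Sketch of an FFmpeg PNG-sequence-to-MP4 transcode command, built as
# an argv list. File names and frame rate are placeholders.
cmd = [
    "ffmpeg",
    "-framerate", "30",        # recording frame rate
    "-i", "frame_%05d.png",    # numbered PNG sequence
    "-c:v", "libx264",         # H.264 encoder
    "-pix_fmt", "yuv420p",     # widely compatible pixel format
    "output.mp4",
]

print(shlex.join(cmd))
# To actually run it (requires ffmpeg on PATH):
# subprocess.run(cmd, check=True)
```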

So each sensor was recorded at RGB 2560x1440 and depth resolution (narrow) 640x576.

[Screenshot: Azure depth resolution]

So it looks to me like there is a reduction of the pixels used?

Or do I misunderstand something in the process?

Maybe the RGB gets reduced to the depth settings?

I don't know …

My first thought was that it would be great to get the chance to export a monster PNG where the 6 RGB streams are at full resolution and the 6 depth maps sit in a separate, pre-set section …

and then out of this monster backdrop would come a PNG sequence which could then be transformed into an MP4.

James, as I mentioned, I haven't experimented with the Studio feature of pairing an extra external camera to each sensor, and I am planning to do that the moment it is possible.

But I don't know if this changes anything about the export of the PNG sequence: if I replace the cameras before export and then export from Depthkit, it would have better colours, but when it comes to multi-sensor export there would still be an intense size reduction, if I understand the workflow correctly.

Or is there a chance, later in Unity, to use for example the PNG-to-MP4 file and then additionally attach the 6 video streams plus the pairing information there, so that the full resolution of the video, whatever its size, is positioned on the mesh?

Looking forward to your reply; maybe I simply have to change some settings for my export and I would directly get what I hope for.

Greetings Martn.

Hi @MartnD

When not using Refinement, you are correct that the RGB color gets reduced to the resolution of the depth. Enabling Refinement on each perspective, with or without a linked Refinement Mask, will conform the depth to the color resolution, in your case 2560x1440.
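The two cases can be summarized in a small sketch (the resolutions are the ones quoted in this thread; the function is illustrative, not Depthkit's actual logic):

```python
# Illustrative summary of the per-sensor export-resolution behavior
# described above: without Refinement the color is conformed to the
# depth resolution; with Refinement the depth is conformed to the
# color resolution.

COLOR = (2560, 1440)   # Azure Kinect RGB stream
DEPTH = (640, 576)     # Azure Kinect depth, narrow FOV

def per_sensor_export_resolution(refinement_enabled):
    return COLOR if refinement_enabled else DEPTH

print(per_sensor_export_resolution(False))  # → (640, 576)
print(per_sensor_export_resolution(True))   # → (2560, 1440)
```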

Our roadmap does include plans to make the export formats and resolutions more configurable.


Thanks, James,

so I have to take a look at the export again.

I will double-check the documentation, which is generally amazing btw. :wink:

Refinement needs to be activated for each perspective (even without Refinement Mask changes).

That is an important statement, as I think I tested Refinement and cropped each perspective, but then I got error messages when I wanted the multi-sensor CPP image sequence export.

I thought "of course" I can't crop each clip differently when I want a group export into one backdrop with a repetitive pattern … so I switched Refinement OFF and tested only removing the background, and that worked.

So Refinement needs to be ON, no cropping, and it will work?
I hope I got it correctly that the PNG will be bigger even in the multi-sensor CPP PNG sequence export.

The workflow with these PNGs or MP4s is really amazing!!!
I have used EFEVE for years, and they don't offer this RGB+depth; they offer a PLY-sequence and OBJ-sequence workflow, which produces monstrously big asset folders that are much more difficult to handle.

(What they have added lately is that you can manually expose each sensor, do low-resolution tests, and then switch to blind recording (no pre-visualisation), and then you can record 6 x 3.8K on one machine.)

I would love to experiment with the OBJ sequence exporter to compare the OBJ-sequence power of Depthkit …

Also interesting for me: I was twice invited into the beta testing group of HoloEdit, and last October, at a film education workshop, Arcturus once again gave me the chance to show the power of volumetric post-production through HoloEdit. This workshop will happen again this October. I will again experiment with photogrammetry meets volumetric for documentary storytelling, and try to get more traditional 2D storytellers onto the road of volumetric storytelling.

So if there is any chance to experiment with
“depthkit.streamingimagesequenceplayer”
“depthkit.studio.meshsequence”

I would love to see how this workflow works.

I have three volumetric projects coming up; for two of them it will definitely be Depthkit Studio (for one of these two maybe even Depthkit Studio + HoloEdit), and for the third I am not yet sure about the needs and aims of the project, so maybe the PLY workflow in EFEVE, or Depthkit Studio a third time.

2022 is kind of an exciting volumetric year for me.

So I'm looking forward to many more workflows to learn.

Greetings Martn


Hey Martn,

Thanks for your thoughtful note; glad to hear Depthkit Studio can fit into your 2022 volumetric plans. Some comments on your post below:

Thank you, it's very nice to hear this! Documentation is always a work in progress, and we would love your feedback on where it can be improved too!

Getting an error message when exporting an Image Sequence could be because the texture is simply too big for your graphics hardware. Try cropping a little further on the different perspectives to get it in range. And make sure you are using the Image Sequence export option as opposed to Video, as the dimension limits on video are much more restrictive.

Glad you think so! We find working with video much easier than geometry sequences for most use cases, both from a file-size perspective as well as from a tooling perspective (i.e., we can use Adobe Premiere to lay back audio, for example). The PLY/OBJ format can be very portable, which has its advantages. This is why we have begun to introduce both formats to the Depthkit workflow.

Sure! These are available in Phase 8, so please email support@depthkit.tv if you didn’t receive the update and we can share it with you.

Exciting that you are interested in using Depthkit and HoloEdit integration. Please keep us posted on your progress!

So great that you have many projects upcoming. Keep us posted!