Very first feedback on version 0.7.0

Hey @CoryAllen and @James

Although I have only done a few first recordings so far, I can already say I am amazed by the performance and the speed of the workflow.

I am working with 8 Azure sensors on my second, slower system, into which I transferred my A4000, and my first impression is that it will work there too. :slight_smile: Which is amazing!

During the first hours I had some fresh thoughts about operating Depthkit Studio and about what might find its way into future versions of Depthkit.

Here are my workflow thoughts:

  1. In the Editor tab, for fine-tuning changes to texture blend and surface
    → an undo/redo button could be useful …

  2. A frame-drop history for each recording → shown after the recording is finished (just to double-check),
    for example to find a specific frame, or, if it was a longer recording and you see that at a certain frame
    there was a dropout on sensor 4 … so you can decide to split and record again from that particular moment
    instead of recording the full take one more time.

  3. In the Calibration tab: calibration sensor links → a button to switch a “show only linked”
    option on and off.

  4. Sensor names are not visible in the assembled stage when I am not in the sensor-link sub-function
    … but they would also be helpful for depth range changes in the Record tab, as I have the impression
    I gain performance when I reduce the full view to a limited depth range.

  5. As mentioned, I tested limiting the depth range during calibration (and before that in the Record tab), but after
    recording and tuning the bounding box I see the full recording area again?
    → This means the depth limitation doesn't apply to the recording itself, but it somehow helps performance
    → at least that is my feeling!
    → If I limit the depth range in addition to the bounding box, the recording process is smoother,
    but as I said, after recording the full range is back …? This is a bit confusing, as it does have an impact,
    or at least it feels that way.

  6. An option for matching the original Azure RGB colors would be nice! That means shooting a color card
    during the calibration process,

    with, for example, the Datacolor SpyderCHECKR 24,

    then, as in DaVinci Resolve for example, dragging the provided grid of squares over the chart
    in the footage and creating a .cube file for each sensor …
    so that the skin tones fit each other nicely.

    example 1: https://www.youtube.com/watch?v=2VwR9QubmtE

    At the moment, without this function, I would take the original recordings, do the job inside
    DaVinci Resolve, export the same format, same length, same clip, and replace the original one.
    (A rough sketch of this external route follows after this list.)

    example 2: https://www.youtube.com/watch?v=DTeowVGOcGQ

  7. For surface and texture fine-tuning, a button to reset changes back to zero … or back to the automatically suggested values.

  8. The grouping in the Editor tab is very nice, but it would also be nice to be able to open the group and
    maybe switch one sensor off, or apply depth range limitations after recording if they were not applied before.

  9. For the bounding box function, I have an inspiration from EFEVE … There it was possible to create several
    bounding boxes, so you could cut an object out from inside the main bounding box.
    (Detailed remark: in EFEVE you had to flip the removal function of the second or third
    bounding box for it to work inside the main bounding box or a sub-box.)
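
To make point 6 a bit more concrete, here is roughly the external route I mean, as a little sketch only (the folder layout, file names, and LUT name are just examples from my side, and it assumes ffmpeg is installed; the .cube file would come from Resolve and the chart shots):

```python
# Sketch only, not a Depthkit feature: apply one .cube LUT per sensor with
# ffmpeg's lut3d filter, keeping the clip compatible (H.264, same length) so it
# can later replace the original recording. Folder and file names are examples.
import subprocess
from pathlib import Path

TAKE = Path("takes/my_take")                      # example take folder

for sensor_dir in sorted(TAKE.glob("sensor_*")):  # example per-sensor folders
    # Run ffmpeg inside the sensor folder so the LUT path needs no escaping.
    subprocess.run([
        "ffmpeg", "-y",
        "-i", "sensor.mp4",                       # original RGB recording
        "-vf", "lut3d=spydercheckr.cube",         # per-sensor LUT from Resolve
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "copy",
        "sensor_graded.mp4",
    ], check=True, cwd=sensor_dir)
```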

OK, that's it for the moment.
I will keep testing …
I love the new version in general, and I look forward to starting to play with my test recordings inside Unity.

Greetings Martn


Hey @James

For the bounding box inspiration, I just quickly recorded a screen capture from the old EVE for you …

I didn’t work on matching or anything inside the old EVE; I just tried to quickly show two bounding boxes :slight_smile:

box inspiration (vimeo.com)

  1. Then, one more function/idea came to me when I recorded the box inspiration clip for you …
    In my old EVE, with some tricky settings, I could record all eight sensors at 3.8K UHD on the same machine and the same setup. While recording, I had to switch off the previsualization of seven of the eight sensors in use … meaning I kept only my front primary sensor active to see the action, and the rest were switched off → but still recording, since the previsualization consumes the performance of the PC and graphics card.

  2. While testing the new cleaning functions of Depthkit 0.7.0, I thought it was certainly easier without green-screen masking and creating the masks in After Effects … but my results in the past were even cleaner. So having the option to decide whether you go “without” green-screen masking and external mask creation, or “with” it, or even combine the two … could also be interesting. You have the function in the old build anyway, so reactivating it and leaving it in the software as a possible path → could be a nice thing … :slight_smile:

Greetings Martn

Hi, @MartnD - Thanks for the feedback! There’s a lot to dig into here.

  • We have discussed the idea of generating a log of dropped frames, as well as additional data pertaining to the recording. We’ll consider your suggestion a +1 to this idea.
  • Can you share more about the idea of a “Show Only Linked” filter in calibration? What is the purpose of this function? What are you looking to hide (or see more clearly)?
  • I haven’t seen any changes in performance based on different depth ranges in the depth preview, but I will check with our technical team to see if the depth ranges are a factor and follow up.
  • Regardless of where the depth ranges are set during preview, the entire range of each sensor is recorded during capture, then cropped using the bounding box introduced in Depthkit v0.7.0.
  • A color correction workaround already exists (as you mention): after capture, you can bring the individual RGB videos into color grading software like Resolve, make your corrections, export the corrected videos in a Windows Media Foundation-compliant codec (e.g. H.264 MP4) with matching resolution, framerate, and duration, and move them back into the original sensor folders with the name ‘sensor.mp4’ - Depthkit will then interpret the color-corrected versions as the original recording. (A rough sketch of a pre-swap check follows below.)
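
For illustration only, here is a minimal sketch of that pre-swap check (not an official Depthkit tool; the file names are examples and it assumes ffprobe is available on the system):

```python
# Minimal sketch (not an official tool): compare the graded clip against the
# original sensor recording before renaming it to sensor.mp4. Requires ffprobe.
import json
import subprocess

def probe(path):
    """Return width, height, framerate, and frame count of the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,r_frame_rate,nb_frames",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

original = probe("sensor_backup.mp4")   # untouched Depthkit recording (example name)
graded = probe("sensor_graded.mp4")     # clip exported from Resolve (example name)

for key in ("width", "height", "r_frame_rate", "nb_frames"):
    assert original.get(key) == graded.get(key), f"Mismatch on {key}"
print("Clips match - safe to rename the graded clip to sensor.mp4.")
```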

We’ll certainly consider these suggestions when updating our product roadmap. Thanks again!

Hey @CoryAllen,

… one more thing … → I know, I still haven't gone deeper into the “show only linked” topic; after working longer with Depthkit → I don't really need it that much anymore → I thought it would give a faster overview, but I would say forget that feature request :wink:

Now to the actual new topic :slight_smile:
I just worked with an old test recording I did around a year ago while testing for a client.

I successfully imported it and was able to do some comparative test exports. Still, I realized that Depthkit doesn't have a single-frame forward/backward function for precisely finding the moment of the slate (we used a traditional old-school hand slate). There is also no information about which frame is currently displayed … nor can you jump precisely to a particular frame … In my workflow with the old EVE program, I took the clip into DaVinci Resolve and found the moment when the slate claps. Then I wrote down the frame number, and back inside the EVE program I was able to export from that frame (number input / starting frame) to another frame (number input / ending frame).

This could also be useful if, for example, you export the OBJ sequence but fill up your hard drive … then you can see from the last file how far the export got, and you could start a new export from the exact next frame and later combine these different export sessions.

For example, I just tried to export this long test recording (6296 frames) → as a CPP image sequence → no problem → as OBJ it always breaks! :frowning:
Sometimes because the drive is too full, but now even with 100 GB still free → on the last try it stopped at 1440 frames → I wanted to restart the export from a later point, but then the file naming changes → because I would have to change the in-point → meaning the numbering of each frame is not fixed; it is always created from in-point to out-point → here it would be useful if the export started with the real frame number, say with 10 seconds of pre-roll → so the starting frame would be 301 … with that I could export in batches from frame to frame and would have no problems with the OBJ exporter breaking. (In the meantime I rename the exported files myself with a little script like the sketch below.)
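
Just to illustrate the batch idea, this is the kind of little renaming script I would run on an exported batch afterwards (only a sketch; the folder, the starting frame, and the file-name pattern are guesses on my side and would need adjusting to the real export):

```python
# Sketch only (not a Depthkit feature): shift the numbers of an exported OBJ batch
# by the batch's real starting frame, so files from several export sessions can be
# combined without clashing. The file-name pattern is a guess.
import re
from pathlib import Path

EXPORT_DIR = Path("exports/take01_obj_batch2")   # example folder
FIRST_REAL_FRAME = 1441                          # where this batch really starts
PATTERN = re.compile(r"(\d+)(\.obj)$", re.IGNORECASE)

# Rename from the highest local index down to avoid touching files twice.
for f in sorted(EXPORT_DIR.iterdir(), reverse=True):
    m = PATTERN.search(f.name)
    if not m:
        continue
    # Assumes the exporter restarts its numbering at 0 for each batch.
    real_frame = FIRST_REAL_FRAME + int(m.group(1))
    f.rename(f.with_name(f.name[:m.start(1)] + f"{real_frame:06d}" + m.group(2)))
```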

Or maybe I just didn't see the frame number, if it is displayed somewhere. Maybe you already have that. Can you tell me something about this case regarding hand slates and finding the exact recording frame?

Greetings Martn

@MartnD - Thanks for the additional feedback. I have captured this in our internal product feedback documents, and we will consider it for future Depthkit versions.

Currently there are no frame-by-frame transport controls or direct text entry to seek a particular frame in the Depthkit user interface.

There is, however, a workaround: close Depthkit, back up the project JSON, and modify the “inTime”, “lastPlayheadTime”, and “outTime” objects within "recordings" / TAKE NAME / "compositions" / "default" to reflect the specific frames you would like the clip to have set the next time you open it in Depthkit, where “ticks” represents the frame number. Also be sure to set “timebase” to 30 to represent 30 frames per second. A rough sketch of this edit follows below.
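
For illustration, here is a rough sketch of that edit in Python; the project file name, take name, and frame numbers are examples, and it should only be run on a backup copy while Depthkit is closed:

```python
# Rough sketch of the JSON workaround above -- edit a backup copy while Depthkit
# is closed. The project path, take name, and frame numbers are examples.
import json
from pathlib import Path

project = Path("MyProject/project.json")   # example path to the project JSON
take_name = "MY_TAKE"                      # example take name
in_frame, out_frame = 301, 1740            # frames you want as the new in/out points

data = json.loads(project.read_text())
composition = data["recordings"][take_name]["compositions"]["default"]

for key, frame in (("inTime", in_frame),
                   ("outTime", out_frame),
                   ("lastPlayheadTime", in_frame)):
    composition[key]["ticks"] = frame   # "ticks" is the frame number
    composition[key]["timebase"] = 30   # 30 frames per second

project.write_text(json.dumps(data, indent=2))
```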

@MartnD - Following up to let you know that one of the features you requested has been included in Depthkit 0.7.1, released today. From the release notes:

Depthkit now generates a report in each Depthkit Studio take folder which logs all sensor events, including dropped frames, organized per sensor, to more quickly and effectively trace sensor-related issues.
