Depthkit file gets distorted with camera in Unity

Wondering if there are any solutions to the distortion I am getting with a Depthkit clip. I have a Depthkit clip playing inside of a 360 video. It comes into my 4K video very small, so I am scaling the Depthkit clip in Unity, which seems to be working. I’m using the main camera in my Unity scene to move the holographic figure to a new position in the 360 video, and when I do, all the nice detail gets really messy and kind of folds in on itself.
I feel like I’ve seen this happen in other works but hoping there is a way around it…?

Hey Elizabeth, would you mind sharing a screenshot of this so we can get a better idea of what is happening? Feel free to post here or email it to me at support@depthkit.tv.

Yes - thank you - images coming later today.

Thanks!

Hi Jillian,
I have three JPGs attached. Once I brought the Depthkit file into Unity and started re-positioning, it got really distorted.

distort3.jpg shows the image closest to what I captured, but the edges got jagged. (I’m getting nice results with Pro prior to bringing it into Unity!)

You can see in distort2.jpg how the image loses its smooth edges completely and almost folds in on itself. In distort1.jpg I’m losing part of the figure, which I don’t want to happen.

I’m wondering if there is a better way to change how the figure gets positioned in Unity? I don’t even need to animate it - just change where it starts in the 360 space.

I was using the main camera because positioning the Depthkit clip itself didn’t seem to make any change.

Appreciate any suggestions!

Thanks

Elizabeth

Hey Elizabeth, thanks for sharing these images. This looks like it may be due to some color space conversions in Unity. Are you using the Unity video player with your Depthkit clip? If so, I recommend trying AVPro. You can download a trial of AVPro first to test, in case switching doesn’t solve the issue. If that doesn’t fix it, let me know!

See details in the troubleshooting section of our Unity plugin documentation.
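On the positioning question: rather than moving the Main Camera, you can usually move and scale the GameObject that holds the Depthkit clip directly. Here is a minimal sketch; the object name, position, rotation, and scale values are placeholders, not taken from your project:

using UnityEngine;

// Minimal sketch: reposition and scale the Depthkit clip's GameObject
// instead of moving the Main Camera. "DepthkitClip" and the numbers
// below are placeholders -- adjust them to match your own scene.
public class PlaceDepthkitClip : MonoBehaviour
{
    void Start()
    {
        // Find the GameObject that holds the Depthkit clip component.
        GameObject clip = GameObject.Find("DepthkitClip"); // placeholder name
        if (clip == null)
        {
            Debug.LogWarning("Depthkit clip GameObject not found.");
            return;
        }

        // Place the figure at its starting spot inside the 360 sphere,
        // face it toward the viewer, and enlarge it uniformly.
        clip.transform.position = new Vector3(0f, 0f, 2f);
        clip.transform.rotation = Quaternion.Euler(0f, 180f, 0f);
        clip.transform.localScale = Vector3.one * 2f;
    }
}

If moving the clip’s Transform appears to have no effect, check that you are moving the top-level object that contains the Depthkit component rather than one of its child renderers.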

Hey Jillian,

I think you may be right about the AVPro player - I just didn’t expect a $450 plugin!

Seems that this is the go-to process for video in Unity, though. Thank you for your reply.

You are always so prompt with assistance!

Totally understand! I definitely recommend testing with the trial version in case this doesn’t solve the issue. Also, you can purchase platform-specific versions of the plugin for a lower price. I believe the Windows-only version is $150, if that is helpful.

@jillianmorrow did it work for you with the AVPro player? Can you maybe share comparison screenshots, before AVPro and with AVPro, if you went for this plugin? Thanks in advance, Martn

Hi @MartnD! AVPro can be a great choice depending on your project needs. AVPro can be beneficial when it comes to performance, but it doesn’t typically provide a visual difference unless you are running into specific artifacts.

Are you trying to decide between the Unity Player and AVPro? Are you currently running into issues with the Unity Player?

I am doing a lot of testing at the moment, and I would like to avoid spending money where it isn’t needed. As you mentioned, the AVPro player is an extra point of investment. I have one sensor at the moment and am considering getting a second, but with Depthkit I can currently only use one. I am not a big studio, just a freelance artist experimenting with depth recording, so the prospect that only the Studio version will handle more than one sensor is not ideal. I want to test several sensors in a parallel setup next to each other and see how the overlapping points of view help cover the filmed object; it would also be exciting to have a larger array side by side. Still, it looks like in the end all of these would remain individual sensors. It would be much better if connecting them with the link cables created a new virtual device with a broader view, instead of the software still treating the linked sensors as several separate ones. Seeing them as one new device, however many there are, would be a much better solution. But I am just a film person with a bit of IT knowledge, not a programmer.

Thanks for clarifying and sharing your creative needs here @MartnD! We always appreciate the chance to better understand how we can improve Depthkit for the community. I’m very curious, what kind of content are you hoping to capture with this broader view? Are you thinking about environments or any kind of specific subject? What kind of range or distance are you hoping for?

It looks like a lot of users want a full body captured from all sides, but imagine as well filming a full theater stage, with the sensors (let’s say 6 or 9) placed 1.5 meters apart along one parallel line; who knows how well the several actors on the stage would be covered by this frontal line of sensors. Also, the Azure stands for AI, so why can’t the AI learn the background while observing, at the moments when the actors are not occluding it (meaning no depth shadow)? This could let the non-moving elements become a stable and reliable background, so the depth shadow no longer affects the background. If someone could program such a function, it would improve the work with these sensors and strengthen Depthkit’s position. And, as I described, it would be fantastic if linked sensors were not understood as several sensors; why not read them as one, and tell the algorithm that the frontal reading is spread over several meters by the sensors’ distance from each other.


Thanks so much for going into detail!