Head tracking in volumetric capture

Hi everyone,

This is my first post on this forum, so nice to meet you all! I am a creative dev working with musicians and theatre producers to create augmented stage performances. I am currently working on a VR piece in Unity in which we want to combine the VFX Graph and Shader Graph looks: I want to visualize the facial features with Shader Graph and the rest of the body with VFX Graph. For the position of my mask I need to track the head position and rotation, and I was wondering if anybody had already done this work? I was thinking of drawing a couple of bright green dots on the actor’s face and using their UV coordinates to work out head position + rotation. Is there something that already does this plug-and-play?

Welcome @Ruudop_den_Kelder1! That’s a great question. Depthkit doesn’t have an out-of-the-box solution for 3D face tracking, but I can propose a few ideas that should be achievable with some custom development:

You may be able to combine Depthkit with something like this: GitHub - keijiro/FaceLandmarkBarracuda: MediaPipe face landmark detection model for Unity Barracuda

You could run the face detection on the Combined Per Pixel (CPP) video, then translate the key points onto the depth map to look up their 3D positions. Alternatively, you can render Depthkit to a render texture, run the face tracking on that, and look up the 3D positions in the render texture’s Z buffer.
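If you go the landmark route, the core step is back-projecting a 2D landmark into 3D using the depth map. Here is a rough C# sketch of that lookup, assuming a CPU-readable depth texture and known depth range and camera intrinsics; the field names and depth encoding here are placeholders, not Depthkit’s actual API, so you would adapt them to your own capture metadata:

```csharp
using UnityEngine;

// Hypothetical sketch: given a face landmark's normalized UV (e.g. from
// FaceLandmarkBarracuda run on the color portion of the CPP video) and a
// CPU-readable depth texture plus camera intrinsics, reconstruct an
// approximate 3D point. Depthkit's actual depth encoding and intrinsics
// differ per capture; every field below is a placeholder.
public class LandmarkDepthLookup : MonoBehaviour
{
    public Texture2D depthTexture;      // placeholder: readable copy of the depth map
    public float nearMeters = 0.5f;     // placeholder: depth range used to decode values
    public float farMeters = 3.0f;
    public Vector2 focalLengthPixels = new Vector2(1000f, 1000f);    // placeholder intrinsics
    public Vector2 principalPointPixels = new Vector2(640f, 360f);

    // uv: landmark position in [0,1] across the depth map.
    public Vector3 LandmarkToLocalPoint(Vector2 uv)
    {
        int px = Mathf.Clamp(Mathf.RoundToInt(uv.x * depthTexture.width), 0, depthTexture.width - 1);
        int py = Mathf.Clamp(Mathf.RoundToInt(uv.y * depthTexture.height), 0, depthTexture.height - 1);

        // Assume depth is stored as a normalized value in the red channel.
        float normalized = depthTexture.GetPixel(px, py).r;
        float z = Mathf.Lerp(nearMeters, farMeters, normalized);

        // Standard pinhole back-projection: (pixel - principal point) / focal length * depth.
        float x = (px - principalPointPixels.x) / focalLengthPixels.x * z;
        float y = (py - principalPointPixels.y) / focalLengthPixels.y * z;
        return new Vector3(x, y, z);
    }
}
```

With three or more landmarks (e.g. both eyes and the nose tip) you can also build an approximate head rotation from the resulting 3D points.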
The simplest approach is to just animate a target.

A dead simple approach that doesn’t get you face features, but does give you a way to segment the head or face from the rest of the body, is to manually animate a 3D transform along with the timeline.
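As a sketch of that approach, assuming the head target is a Transform keyframed on the same Timeline as the clip, a small script can push its position and rotation into global shader properties every frame so your Shader Graph mask can read them. The property names here are made up for the example; use whatever your graph expects:

```csharp
using UnityEngine;

// Hedged sketch: a "head target" Transform keyframed alongside the clip,
// whose position/rotation are published as global shader properties so a
// Shader Graph mask can read them. _HeadCenter, _HeadForward and _HeadRadius
// are example property names, not anything Depthkit defines.
[ExecuteAlways]
public class HeadTargetToShader : MonoBehaviour
{
    public float headRadius = 0.15f; // approximate head size in meters

    void LateUpdate()
    {
        Shader.SetGlobalVector("_HeadCenter", transform.position);
        Shader.SetGlobalVector("_HeadForward", transform.forward);
        Shader.SetGlobalFloat("_HeadRadius", headRadius);
    }
}
```

In the graph you would then compare each fragment’s world position against _HeadCenter and _HeadRadius to decide whether it falls inside the head mask.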

Alternatively, you could attempt color segmentation based on skin tone to separate face pixels from clothing and the rest of the body, then add effects onto that layer only.
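For reference, here is a very rough CPU-side sketch of that idea using the classic YCbCr chroma thresholds for skin tones. In practice you would do this in a shader or compute shader on the color texture, and the thresholds would need tuning per performer and lighting setup, so treat this purely as an illustration:

```csharp
using UnityEngine;

// Rough sketch of skin-tone segmentation using the common YCbCr chroma
// heuristic (Cb roughly 77-127, Cr roughly 133-173). The source texture must
// be CPU-readable; thresholds are a starting point, not a tuned solution.
public static class SkinMask
{
    public static Texture2D Build(Texture2D source)
    {
        var mask = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
        Color32[] pixels = source.GetPixels32();
        Color32[] output = new Color32[pixels.Length];

        for (int i = 0; i < pixels.Length; i++)
        {
            float r = pixels[i].r, g = pixels[i].g, b = pixels[i].b;
            // RGB -> YCbCr chroma (BT.601), working in the 0-255 range.
            float cb = 128f - 0.168736f * r - 0.331264f * g + 0.5f * b;
            float cr = 128f + 0.5f * r - 0.418688f * g - 0.081312f * b;
            bool isSkin = cb >= 77f && cb <= 127f && cr >= 133f && cr <= 173f;
            byte v = isSkin ? (byte)255 : (byte)0;
            output[i] = new Color32(v, v, v, 255);
        }

        mask.SetPixels32(output);
        mask.Apply();
        return mask;
    }
}
```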

Let me know if any of these approaches make sense!


Hi James!

Thanks for your extensive reply, very helpful! They all make sense, I’ll probably start with the simple manual animation approach for now.

@James is there actually a way of animating the Depthkit clip’s frames on a Unity Timeline, so that I can sync them with the position of the mask?

@Ruudop_den_Kelder1, you can add Depthkit objects to Unity Timelines by following this guide, but this doesn’t guarantee that your Depthkit asset and mask will play back in perfect lockstep at runtime. Alternatively, you can play Depthkit Combined Per Pixel image sequences using Unity’s streaming image sequence player by following this guide. That solution likely won’t be performant enough for realtime playback, but it will keep your assets in sync for rendering, if that’s helpful.
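If you do keep everything on one Timeline, a tiny diagnostic script can help you see how far the clip drifts from where the mask animation expects it to be. This is just a sketch; the 30 fps value is an assumption, so substitute your capture’s actual frame rate:

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Hedged sketch: if the Depthkit clip and the animated head target live on the
// same Timeline/PlayableDirector they share one clock, so any remaining drift
// is in the video decode itself. This logs the director time alongside the
// frame index you'd expect at a given frame rate, to help spot that drift.
public class TimelineSyncProbe : MonoBehaviour
{
    public PlayableDirector director;
    public float clipFrameRate = 30f; // assumption: replace with your capture's frame rate

    void Update()
    {
        if (director == null) return;
        double t = director.time;
        int expectedFrame = Mathf.FloorToInt((float)(t * clipFrameRate));
        Debug.Log($"Timeline time {t:F3}s -> expected clip frame {expectedFrame}");
    }
}
```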