Attached is a video (from 1:20) of our Studio capture with 5 Kinects plus one Blackmagic camera. The mesh itself is fairly decent, but the textures on the face are not, because of the texture blending. At the end of the video I am playing with the View Dependent Blending slider. There the texture quality on the face gets better because the Cinema RGB texture from the front camera has a higher priority. Would it be possible to reflect this in the mesh export as well? Or to have a masking option where the face would only use the Cinema RGB texture? Would love to know!
Getting a high-quality face texture requires excellent calibration, and your video here seems to show some significant calibration issues. As you have seen, view-dependent texturing within Unity lets many calibration issues be hidden by dynamically shifting the texture balance toward the cameras that are most closely aligned with the player's view. This view-dependent trick only works in real time, so once you export a mesh, a fixed texture must be generated.
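To make the idea concrete, here is a minimal sketch of how view-dependent weighting is typically done. This is not the tool's actual implementation (its exact formula isn't published); it just illustrates the common approach of weighting each camera's texture by how well its direction lines up with the viewer's, with an assumed `sharpness` exponent controlling how strongly the best-aligned camera dominates:

```python
import math

def blend_weights(view_dir, cam_dirs, sharpness=8.0):
    """Weight each camera's texture by how closely its direction
    matches the viewer's direction (dot product), raised to a
    sharpness power so the best-aligned camera dominates.
    All direction vectors are assumed to be unit length."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    raw = [max(0.0, dot(view_dir, d)) ** sharpness for d in cam_dirs]
    total = sum(raw)
    if total == 0.0:
        # No camera faces the viewer: fall back to a uniform blend.
        n = len(cam_dirs)
        return [1.0 / n] * n
    return [w / total for w in raw]

s = math.sqrt(0.5)
view = (0.0, 0.0, -1.0)          # viewer looking straight at the subject
cams = [
    (0.0, 0.0, -1.0),            # front (e.g. cinema) camera, aligned with view
    (s, 0.0, -s),                # right-side camera, 45 degrees off
    (-s, 0.0, -s),               # left-side camera, 45 degrees off
]
weights = blend_weights(view, cams)
# The front camera gets most of the weight; the side cameras split the rest.
```

With the sharpness set high, the front camera's weight approaches 1 when you face the subject head-on, which is exactly why the face looks better as you drag the View Dependent Blending slider in the clip. An exported mesh has no viewer, so there is no `view_dir` to drive these weights, and a single fixed blend must be baked instead.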
Currently there is no way to change the texture bias in the exported mesh sequence. For the clip above, I recommend using the Calibration Adjustment to better align the cinema perspective with the other perspectives, which should reduce the texture ghosting and splitting you see in the clip.