Workflow for fullbody 3D Photoscan

What is the best workflow for capturing a full body with Depthkit Studio and processing a single frame to get a full-body 3D scan with the highest possible quality?
I’ll share my current workflow; please let me know how I can improve it.

I had to do it last week for a client and now I need to export the mesh.

  1. Stage: I’ve used a 10-sensor configuration with 3 additional cinema cameras.
  2. Stage: I’ve moved all the sensors very close to the subject to capture better detail, and captured a woman and then a man.
  3. Depthkit: Set up the Azure Kinects at 4K, 15 Hz (I don’t mind the 15 fps limit since I’ll use only one frame).
  4. Depthkit: I’ve captured about 30 seconds with the subjects holding still. It dropped a lot of frames, but luckily it seems to drop frames from all the cameras at the same time, so there are moments where I have valid frames from every sensor.
  5. Premiere: Then I’ve imported all the sensor .mp4 videos into Premiere, chosen the best moment, and exported a single frame from each sensor for masking in Photoshop.
  6. Photoshop: Masked the selected frame in Photoshop and exported a single luma matte frame.
  7. Premiere: Converted that frame into an .mp4 with the same duration as the take.
  8. Depthkit: Imported all the luma matte .mp4 files into the Depthkit Refinement workflow.
  9. Depthkit: Exported a multi-perspective CPP image sequence from Depthkit.
  10. Unity: Used the Depthkit Unity expansion packages to set up the clip and exported a .PLY of only the masked frame with the Studio Mesh Sequence export workflow. The resulting PLY was very good, but it didn’t have as much detail as the capture itself; I believe that’s because the Depthkit Unity packages are optimized for volumetric video rather than still photos.
  11. Depthkit: Tried an alternative workflow: instead of using Unity, I exported each perspective from Depthkit as an OBJ sequence (only the masked frame) without decimation.
  12. Meshlab: Imported each perspective OBJ into Meshlab
  13. Meshlab: Selected faces with edges longer than 0.002 (is that threshold OK?) and deleted them (for each perspective).
  14. Meshlab: Flattened visible layers.
  15. Meshlab: Exported as OBJ. The result was much richer in detail and higher in density (18 million vertices) than the Depthkit Unity package output, but the blending between perspectives was not as good.
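
As an alternative to step 7’s Premiere round-trip, the single matte frame can be looped into an .mp4 with ffmpeg. Here is a sketch that just assembles the command line (filenames, duration, and frame rate are placeholders, not from my actual project; it assumes ffmpeg is on your PATH):

```python
# Sketch: loop one still matte frame into a constant-duration H.264 clip,
# instead of round-tripping through Premiere (step 7 above).
import subprocess

def matte_to_mp4(matte_png, out_mp4, duration_s, fps):
    """Build an ffmpeg command that repeats a single image for duration_s seconds."""
    cmd = [
        "ffmpeg", "-y",
        "-loop", "1",            # repeat the single input frame
        "-framerate", str(fps),  # match the capture rate (15 fps here)
        "-i", matte_png,
        "-t", str(duration_s),   # same duration as the take
        "-pix_fmt", "yuv420p",   # broadly compatible pixel format
        "-c:v", "libx264",
        out_mp4,
    ]
    return cmd  # run with: subprocess.run(cmd, check=True)

# Example: a 30 s take captured at 15 fps (placeholder filenames)
matte_to_mp4("sensor1_matte.png", "sensor1_matte.mp4", 30, 15)
```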

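In case it helps anyone reproduce steps 12–15 without clicking through Meshlab once per perspective, the cleanup can also be scripted so the same edge-length threshold is applied to every perspective. A rough pure-Python sketch (triangles only, normals/UVs are dropped; the 0.002 threshold is in whatever units your OBJs use; the filenames in the usage comment are hypothetical):

```python
# Sketch of the Meshlab steps: drop long-edge faces per perspective,
# merge all perspectives, and write the result back out as OBJ.
import math

def load_obj(path):
    """Minimal OBJ reader: vertex positions and triangle indices only."""
    verts, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # "f v/vt/vn ..." -> keep only the vertex index, 0-based
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return verts, faces

def drop_long_edge_faces(verts, faces, threshold=0.002):
    """Delete faces with any edge longer than `threshold` (step 13)."""
    def edge(a, b):
        return math.dist(verts[a], verts[b])
    return [f for f in faces
            if max(edge(f[0], f[1]), edge(f[1], f[2]), edge(f[2], f[0])) <= threshold]

def merge_meshes(meshes):
    """Concatenate (verts, faces) pairs into one mesh (step 14's flatten)."""
    all_v, all_f = [], []
    for verts, faces in meshes:
        off = len(all_v)
        all_v.extend(verts)
        all_f.extend((a + off, b + off, c + off) for (a, b, c) in faces)
    return all_v, all_f

def write_obj(path, verts, faces):
    """Write the merged mesh back out as OBJ (step 15); indices are 1-based."""
    with open(path, "w") as fh:
        for v in verts:
            fh.write("v %.6f %.6f %.6f\n" % v)
        for a, b, c in faces:
            fh.write("f %d %d %d\n" % (a + 1, b + 1, c + 1))

# Usage (hypothetical filenames):
# meshes = [load_obj("perspective_%02d.obj" % i) for i in range(10)]
# cleaned = [(v, drop_long_edge_faces(v, f, 0.002)) for v, f in meshes]
# write_obj("merged.obj", *merge_meshes(cleaned))
```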
What do you think? Sorry, I cannot share the captures because I’ve signed an NDA with the client, but I’ve also captured myself and my team, so after delivering the final OBJ to the client I’ll apply the best workflow to our captures and share the results with the community.

@DAMIANTURKIEH This is an interesting use case. Based on your description, there are a couple of areas to focus on to guarantee you’re getting the best quality:

  • Make absolutely sure that the CPP image which generates your PLY is the exact frame that you generated the Refinement Mask for. This may be challenging to verify, since your subject was captured in a stationary position, but it’s important to ensure that the masks align with the depth data.
  • In Refinement settings, adjust the crop of each sensor to be a uniform width & height - e.g. If one sensor is cropped Top:50 and Bottom:150, then the total vertical crop is 200, and the other sensors should be cropped to the same total (Top:100 Bottom:100, or Top:75 Bottom:125, etc. as well as Left & Right). This will prevent aliasing artifacts when all of the sensors are conformed to each other during CPP export.
  • In Unity, set the Volume Density as high as you can without getting errors. This will give you the most detail in the geometry at the expense of processing time and file size.
  • OBJ exports from each sensor most accurately replicate the original depth data from the sensors; however, as you mentioned, without the fusion process found in our Unity renderer, the edges of each component mesh are likely to produce artifacts.
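
That crop bookkeeping is easy to get wrong across ten sensors, so a scripted sanity check may help. A minimal sketch (sensor names and crop values are invented for illustration):

```python
# Check the uniform-crop rule: every sensor's total vertical crop (Top + Bottom)
# and total horizontal crop (Left + Right) should match across all sensors.
crops = {
    "sensor_1": {"top": 50,  "bottom": 150, "left": 40, "right": 60},
    "sensor_2": {"top": 100, "bottom": 100, "left": 30, "right": 70},
    "sensor_3": {"top": 75,  "bottom": 125, "left": 50, "right": 50},
}

def crop_totals(c):
    """Return (total vertical crop, total horizontal crop) for one sensor."""
    return (c["top"] + c["bottom"], c["left"] + c["right"])

totals = {name: crop_totals(c) for name, c in crops.items()}
uniform = len(set(totals.values())) == 1
print(totals, "uniform" if uniform else "MISMATCH - fix crops before CPP export")
```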

Out of curiosity, have you run the same images through a photogrammetry pipeline to compare the results? I’d be interested to see the difference.