@NAL - Have you cloned one of our example 8th Wall projects? They include an input system with pinch/zoom/rotate gestures on the Depthkit asset, which lets the end user determine the size and orientation of the capture.
If instead you’re looking to forego the input gestures and change the initial scale of the capture, I believe you’ll have to do that in the 8th Wall JavaScript code by applying a transformation. I referenced something similar in a thread about asset rotation, but this type of 3D transformation in WebXR is outside the scope of the Depthkit workflow and instead falls within 8th Wall development. You may get a faster response by posting in the 8th Wall forums.
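That said, here’s roughly what setting the initial transform could look like in the A-Frame flavor of the examples (a minimal sketch; the entity id `depthkit-entity` is hypothetical, so substitute whatever id your scene actually uses):

```javascript
// Minimal sketch (A-Frame flavor of an 8th Wall project; 'depthkit-entity'
// is a placeholder id - use whatever your scene assigns).
const entity = document.getElementById('depthkit-entity')

// A-Frame world units are meters, like the Depthkit asset expects.
entity.setAttribute('scale', '0.5 0.5 0.5')   // half size
entity.setAttribute('rotation', '0 180 0')    // degrees; turn to face the camera
entity.setAttribute('position', '0 0 -2')     // 2 m in front of the origin
```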
@NAL Some further information on this: The Depthkit 8th Wall examples use relative scale to size the capture appropriately for the distance it’s placed from the AR device (e.g. smaller if placed on a nearby surface like a table), but you can alternatively reconfigure the 8th Wall project to use absolute scale.
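If you go that route, the switch happens in the camera pipeline configuration. A minimal sketch, assuming 8th Wall’s world tracking controller (double-check the option name against your SDK version’s docs):

```javascript
// Sketch: switch world tracking to absolute scale so 1 world unit equals
// 1 meter regardless of the surface the camera starts on. 'responsive' is
// 8th Wall's relative mode; verify against your SDK version's documentation.
XR8.XrController.configure({scale: 'absolute'})
```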
@NAL Following up to see if you were able to set the scale of your Depthkit assets in 8th Wall using absolute scale instead of relative scale. If this is still an issue, let me know; otherwise I will close the ticket.
@NAL Can you describe the issue you’re having in a bit more detail? As Cory mentioned, the 8th Wall examples we have published use relative scale, meaning that the actual scale of the object depends on what the camera is seeing. For example, if you are testing by pointing the camera at a tabletop, you’ll get a hologram that fits on top of the table. If you instead point the camera at an entire room or outdoors, the hologram will appear larger.
There is nothing special about how the Depthkit object works, except that it assumes that the world units are in meters. You should be able to set the scale, orientation, and position in the same way you would for any other 3D object in the scene.
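For example, in the three.js flavor, something along these lines should work (a sketch; `depthkitObject` stands in for however your project keeps a reference to the loaded Depthkit asset):

```javascript
// Sketch, assuming a three.js scene and that `depthkitObject` is the
// THREE.Object3D you get back when loading the Depthkit asset.
depthkitObject.position.set(0, 0, -1.5)   // meters
depthkitObject.rotation.y = Math.PI       // radians; rotate to face the camera
depthkitObject.scale.setScalar(0.75)      // uniform 75% scale
```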
As for image targets, the clip will not be able to start playing automatically unless it is muted; this is a browser limitation (autoplay policies require a user gesture for audio). You’ll still need some kind of user interaction, like a tap-to-place event, to start playback with audio.
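A minimal sketch of that interaction, assuming `video` is the HTMLVideoElement backing the Depthkit texture (how you get that reference depends on which example project you started from):

```javascript
// One-time tap handler that unmutes and starts playback with audio.
const startWithAudio = () => {
  video.muted = false
  // play() returns a promise; it should resolve here because the call runs
  // inside a user gesture, but log a rejection just in case.
  video.play().catch((err) => console.warn('Playback blocked:', err))
  window.removeEventListener('touchend', startWithAudio)
}
window.addEventListener('touchend', startWithAudio)
```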
I hope this helps, and I do encourage you to ask on the 8th Wall forums as well.
Hi @Tim, thank you, I managed to implement the image target with the Depthkit object and was able to scale the object relative to the image. Image detection was sufficient to trigger the audio.
Everything is working now, but I sometimes have an issue where the hologram seems to be “shifted” and we can see a grey area and some elements that should not appear with my mask applied. Most of the time the same hologram plays correctly, but sometimes, in the same session, after several loops, the hologram will start to play in a shifted position (see pictures attached). Do you know if there is a way to fix this? Thank you.
@NAL If it occurs after a number of loops, it might be the case that the mesh sequence and texture video file are different numbers of frames. Have you compared the frame counts of each? This ffprobe command is an easy way to get the number of frames in a video file:

```
ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 video.mp4
```
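If it helps, here’s a quick Node.js sketch for counting the mesh frames on disk to compare against that number (the folder path and file extension are placeholders for however your Depthkit export is laid out):

```javascript
// Sketch: count the files in the mesh sequence folder so the total can be
// compared against ffprobe's nb_frames for the texture video.
const fs = require('fs')
const count = fs.readdirSync('./mesh_sequence')  // placeholder path
  .filter((name) => name.endsWith('.drc'))       // placeholder extension
  .length
console.log(`mesh frames on disk: ${count}`)
```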
@NAL Following up to see if you’re still experiencing this issue. Were you able to confirm that the texture video and mesh sequence contain the same number of frames?
Hi @CoryAllen, thank you, I checked the number of frames in the video: there are 239. And there are 239 mesh files. Do you know if anything else could be causing this issue? Thank you.