There are some problems when I use Depthkit Studio, please help me solve them

After I finished recording, when I checked the video I had just recorded, there was only a cuboid frame, without the person or scene I had just recorded, and it showed: [Depthkit::IO::VideoPlayerWMFSourceReader::seekToTime @ 1358] Error seeking video.
Also, I've recently noticed that when I open Depthkit Studio, it often shows no sensor connection.

Hi, @JieLi - Sorry to hear you’re experiencing this issue.

To make sure I understand, is the issue that when you open a recorded Depthkit Studio (multi-sensor) clip in Depthkit’s Editor context, the capture is not displayed in the 3D viewport, but you do see the 3D bounding box (outlined in white)?

If this is the case:

If you share a screenshot of what you are seeing, that may help quickly determine what the issue is. Let me know the answers to the above, and I’ll guide you from there.

May I ask how to place the three cameras so that the captured figure comes together correctly in the picture? I followed the documentation: the hero camera was pointed at the person's eyes, and the other two cameras were slightly lower than the hero camera and pointed at the person's chin. At present, three separate figures appear in the picture.


Hi, @JieLi - Thank you for the photo and screenshot. It looks like we are trying to solve multiple issues, so I will address each issue individually:

  1. Empty cuboid frame - Can you confirm that the issue is that when you open a recorded Depthkit Studio (multi-sensor) clip in Depthkit's Editor context, the capture is not displayed in the 3D viewport, but you do see the 3D bounding box (outlined in white)? This can happen when making a recording without having calibrated first, and the screenshot you shared suggests the sensors have not yet been calibrated. Can you confirm that you have calibrated the sensors in the Studio Calibration context? Do you see anything when you click and hold any of the handles while adjusting the bounding box? Please share a screenshot of the Depthkit interface while you are moving the edges of the bounding box; it should look like this:

  2. Seeking video error - This error suggests that Depthkit is attempting to load your recording at an invalid time. Does the issue persist if you click any of the transport control buttons (Play, Jump to In Point, Jump to Out Point) or drag the playhead to a new position on the timeline? These actions update the clip's current frame, and should hopefully eliminate that error.

  3. No sensor connection - There are many reasons a sensor might not appear in Depthkit, which are covered in our sensor connection troubleshooting documentation, but from your most recent post it looks like your sensors are now connected. Are you still having this issue?

  4. Sensor position - From your photo, it looks like your sensor positions follow our 3-sensor recommendation very well, but are placed/aimed in such a way that you’ll only capture the upper half of a standing subject. If your goal is to capture just the bust of your subject, then this should work well, but if your goal is to capture the full body of your subject, I recommend moving the sensors further apart.

  5. Generating a Depthkit Studio calibration - From your screenshot, it looks like you have not yet calibrated your sensors, as shown by the meshes of the floor and walls not aligning with each other or the Depthkit floor grid. With three sensors, you should be able to calibrate fairly quickly, especially if you follow the recommendations in our fast calibration tutorial. As mentioned above, this may also solve your empty cuboid issue.

Please respond with updates for each issue above, and I will guide you accordingly.

Thank you very much for your help and advice. Now I have a new problem. I am currently using six cameras, and during the recent recording process I found that the streams from the front three cameras and the back three cameras show double shadows while the arms swing. What should I do?


@JieLi - This usually indicates an issue with the sync cables. For example, one sensor may not have the sync cables plugged into the correct ports. Please follow the guidance in our sync documentation to address it.

This does affect the recording, so be sure to fix this before proceeding with any further captures.

As for the connection of the sync cables, I checked it again according to the instructions and found no error. After my examination, I think there may be a problem with the positioning of the sensors, and I am readjusting them to move them closer to more reasonable locations.

@JieLi Moving a sensor closer will provide greater detail for the parts of the body that the sensor detects; however, sensor position does not have any effect on the issue you are seeing.

To further troubleshoot:

  • Create a new project and a quick calibration - It doesn’t need to be 100%, just enough to approximately align all of the sensors.
  • Stop calibrating, and unplug all of the USB cables of the sensors from the computer.
  • Plug only the controller sensor (the sensor at the beginning of the sync daisy chain) and the next sensor in the sync chain back into the computer - these two sensors should be connected with one sync cable.
  • Start streaming, and confirm that the sensors start streaming immediately and that the sync status at the top of the viewport shows the chain-link icon, not the alert :warning: icon.
  • Step into the volume, and move around to see if the stream from one of the sensors lags behind.
  • If the streams are in sync, stop streaming and plug in one more sensor - the next one in the sync chain. It's important to ensure that you plug them in following the same order as the daisy chain. Resume streaming and perform the movement test to look for lag. Repeat this process, adding one sensor at a time and testing, until you find the sensor with the lag issue, indicated either by a delayed stream or by the Sync :warning: status at the top of the viewport.
  • Once you identify a sensor with lag, check that the sync cable connecting it to the previous sensor in the sync chain is fully plugged into the Sync In port of that sensor, and fully plugged into the Sync Out port of the previous sensor. If it is, you may need to replace that cable altogether.
  • If you replace the cable and the sensor is still exhibiting the lag issue, please provide any other information which might point to why this sensor is lagging - For example, is it plugged into a different kind of USB port on the computer?

Try these troubleshooting steps, and let us know what you learn.

As a quick follow-up to my last post: If you’re seeing that 5 sensors capture without the issue but 6 sensors produce the issue, it could be because your computer is not up to spec for 6-sensor capture, and is dropping frames. This could be the case if you’re using an Nvidia consumer GPU (e.g. 4060, 4070, 4080, 4090) rather than a professional GPU.

Can you please share the specs of your PC, as well as a video showing the issue in Depthkit while recording with 6 sensors? Filming the screen with a phone is fine, as long as both the pointcloud view in the 3D viewport and the Dropped Frame & Record Buffer diagnostics are visible. Please also share a video of the same clip playing back in the Depthkit editor. I'll keep an eye out for updates.

Our equipment:
Six Azure Kinect cameras.
The specs of our PC:
System: Windows 10
CPU: 13th Gen Intel(R) Core™ i9-13900K
Mainboard: ASUS Gigabyte Z790 DDR5
RAM: Corsair 32 GB × 4 DDR5
Hard drive: Samsung 1 TB M.2 NVMe 2280
Graphics card: NVIDIA GeForce RTX 4080 16 GB
USB ports: The computer has five USB 3.2 ports and one USB 3.0 port.
USB cables: Three UGREEN 5 m USB 3.0 active extension cables and three Logitech 5 m USB 3.0 active extension cables.
Sync cables: A 3.5 mm audio cable for each sensor.
Related screenshots:
(screenshot: system)
(screenshot: graphics card)
Links to our videos:

@JieLi Thanks for providing the specs and videos. A couple of things I noticed:

  • The issue seen in Videos 1 & 2, where the stream from one of the sensors is temporally ahead of the others (visible as the subject's right hand quickly lowers), seems to be resolved in Videos 3-6. Did you make any changes to the system between capturing these recordings?
  • Are you using a StarTech USB 3.0 PCIe expansion card in the computer to connect the sensors? Plugging all of the sensors into the motherboard USB ports might result in USB bandwidth issues, and this model of PCIe card provides 4 compatible ports, each with its own USB controller, to prevent bandwidth issues.
  • Your GPU - the Nvidia RTX 4080 - only supports capturing up to 5 sensors due to hardware limitations present in that series of GPU. To successfully capture 6 sensors or more, you’ll need a professional GPU such as the Nvidia RTX A4000. See our GPU documentation for more details.

Thank you very much for your help and advice.

Now we’re using five sensors and setting up another layout, and the recordded videos are better now.

However, in the point cloud video we shot, there is a double shadow when I zoom in, and the more we zoom in, the more obvious it becomes.
Could you please help me find out what the problem in the video is?

Link to the video: https://youtube.com/shorts/xLR7VbQzVzM?feature=share
The attached picture shows the integrated layout.


@JieLi Glad to hear you’re making progress. In the video you shared, it looks like both the calibration issue and temporal sync issue have been resolved. In general, this looks like the quality we expect from a 5-sensor configuration, though it’s difficult to know based only on the pointcloud view shown in the video. Can you share the combined-per-pixel video export + metadata, or record your screen with Mesh preview enabled?

I am not sure what you mean by “double shadow” - Can you screenshot and circle the artifact you are referring to?

Please also be aware that in the test you recorded, you can expect to see occlusion artifacts when two people are in the volume at the same time, as they occlude each other from the view of some of the sensors. You can expect better results with just one person inside the capture volume.

We shot some videos. When exporting, we found only mesh and RGB pictures and videos, as shown in the pictures below; no point cloud data was found. Where can we get the raw point cloud data?


@JieLi Can you share more context about your workflow, such as the environment that you will be bringing the pointclouds into?

In some environments, it’s possible to remove the meshing from textured mesh sequence (OBJ) files and interpret them as points.
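
For illustration only, here is a minimal sketch of that idea in Python - it is not a Depthkit tool, and the export/ folder and file naming below are assumptions you would adjust to your project:

```python
# Sketch: strip the faces from an exported OBJ and keep only its vertices,
# writing them out as an ASCII PLY point cloud. Not a Depthkit tool; the
# "export/" folder and naming pattern below are assumptions for illustration.
import glob
import os

def obj_to_ply_points(obj_path: str, ply_path: str) -> None:
    # Collect every vertex record ("v x y z"); face ("f") and UV ("vt")
    # records are skipped, which is what turns the mesh into points.
    points = []
    with open(obj_path, "r") as f:
        for line in f:
            if line.startswith("v "):
                x, y, z = line.split()[1:4]
                points.append((float(x), float(y), float(z)))

    with open(ply_path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Convert a whole exported sequence, frame by frame.
for obj_file in sorted(glob.glob("export/*.obj")):
    obj_to_ply_points(obj_file, os.path.splitext(obj_file)[0] + ".ply")
```

Note that this keeps only the geometry; the color lives in the exported texture image, so per-point color would require sampling the texture through the UV coordinates.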

If you specifically need the resulting meshes in PLY format, you can use tools like the command-line meshlabserver to batch-convert the OBJ sequence to PLYs, as in this thread.
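
As a rough sketch of that batch conversion (assuming meshlabserver is installed and on your PATH, and again that the frames sit in an export/ folder):

```python
# Sketch: batch-convert an OBJ sequence to PLY with meshlabserver.
# Assumes meshlabserver is on the PATH; adjust the folder to your export.
import glob
import os
import subprocess

for obj_file in sorted(glob.glob("export/*.obj")):
    ply_file = os.path.splitext(obj_file)[0] + ".ply"
    # meshlabserver reads the input mesh (-i) and writes the converted output (-o).
    subprocess.run(["meshlabserver", "-i", obj_file, "-o", ply_file], check=True)
```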

Let us know how you intend to use this data, and we’ll guide you accordingly.

@CoryAllen didn't you mention in another thread, a while ago, that if you edit one of the Depthkit scripts and change some lines of code, the Unity plugin can visualize the input .mp4 or PNG sequences as point clouds? But if the client doesn't want to use the Unity workflow and only wants a folder of PLY assets, there was also your old mesh-exporter Arcturus workflow. That is no longer an option now that the in-app OBJ export is the official road, but you had that Unity package in the past, so maybe it could help the user? If they exported their 5-sensor recording as a high-res PNG sequence plus the metadata file, your old script together with the timeline workflow could create a point cloud asset folder with the texture file attached, which looked very good to me in the past. Maybe the PLY + texture coordinates could even find their way into a future version of Depthkit, the same way the OBJ workflow made it into the main app :wink:

Greetings Martn