I created a Unity project for iOS and augmented reality with ARKit. The app works fine on both my iPad Pro (2nd generation) and my iPhone 7 Plus, but I get a crash with the following error as soon as my Depthkit clip (multi-camera) is instantiated in my scene:
IOGPUMetalCommandBufferStorageAllocResourceAtIndex: failed to allocate pooled resource at dbClass: 5 dyld4 config: DYLD_INSERT_LIBRARIES=/Developer/usr/lib/libMainThreadChecker.dylib:/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib (lldb)
Here are the plugins and versions I used:
- Unity 2020.3.2f1
- ARKit 4.2.2
- ARFoundation 4.2.2
- ARSubsystems 4.2.2
- XR Plugin Management 4.0.1
- Depthkit Core latest
- Depthkit Studio latest
When I look at the profiler in Xcode while running the app, the GPU memory seems occupied but not maxed out, so that doesn't seem to be the issue. Does anyone know what could be causing this?
The error does look related to GPU memory, as you say. The biggest impact on internal buffer sizes comes from the volume bounds and the volume density; together, these determine the total number of voxels the clip uses for surface reconstruction. With a high density and/or large volume bounds, the voxel buffer may be hitting an internal limit on how large that buffer can be.
We expose the computed total voxel count within the Depthkit Studio Mesh Source component, near the Volume Density slider.
In our tests, we’ve successfully used total voxel counts of up to 2.5M at a density of 130 voxels per meter on an iPhone 12. However, performance scales with how recent the device’s CPU/GPU is. If you’re targeting older hardware, keeping the total somewhere in the range of 100,000 - 500,000 voxels is ideal; in our tests this translated to a density of 50-80 voxels per meter.
With mobile devices, it is important to get the volume bounds as tight as possible without clipping your subject; this maximizes your total voxel budget and may let you turn the volume density up.
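To make the voxel budget above concrete, here is a rough back-of-the-envelope calculation. This is a sketch only: it assumes the total voxel count scales as bounds size times density along each axis, which may differ from the exact formula Depthkit uses internally (the Depthkit Studio Mesh Source component reports the real computed count).

```python
def total_voxels(bounds_m, voxels_per_meter):
    """Estimate the total voxel count for a reconstruction volume.

    bounds_m: (width, height, depth) of the volume bounds in meters.
    voxels_per_meter: the volume density setting.
    """
    w, h, d = bounds_m
    return (int(w * voxels_per_meter)
            * int(h * voxels_per_meter)
            * int(d * voxels_per_meter))

# A tight 1 x 2 x 1 m volume around a standing subject:
print(total_voxels((1.0, 2.0, 1.0), 65))   # 549,250 -> near the top of the mobile budget
print(total_voxels((1.0, 2.0, 1.0), 130))  # 4,394,000 -> well past the ~2.5M tested on iPhone 12
```

This illustrates why tightening the bounds matters: halving each bounds dimension cuts the voxel count by a factor of eight at the same density.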
The other thing I can think of that could be going on is that the video resolution may be too high for the device to play back.
What resolution is the video asset you are using on iOS? I have not found any official documentation for the maximum supported video resolution for each iDevice, but there may be differences between models.
We’ve successfully tested a 4096x1440 clip on an iPhone 12, but I can’t guarantee this resolution will work on an iPhone 7; you may need to try a lower maximum resolution such as 2048.
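If you need to produce a lower-resolution test clip, a small helper like the one below can compute target dimensions capped at a maximum side length while keeping both dimensions even (a requirement of most H.264/HEVC encoders). This is an illustrative sketch, not part of the Depthkit tooling.

```python
def capped_resolution(width, height, max_dim=2048):
    """Scale (width, height) so the longest side is at most max_dim,
    rounding down to even numbers as most video encoders require."""
    scale = min(1.0, max_dim / max(width, height))
    even = lambda v: int(v * scale) // 2 * 2
    return even(width), even(height)

print(capped_resolution(4096, 1440))  # (2048, 720)
print(capped_resolution(1920, 1080))  # unchanged: (1920, 1080)
```

You can then transcode the source clip to the computed resolution with your encoder of choice before re-importing it into Unity.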
Hi @Tim ,
I got my hands on a more recent device: an iPad Pro (5th generation). I tested the same build, and the app still crashed once most of the voxels appear. The messages seem different this time, though; I get the two errors below. The first one appears 3 times in my Xcode log, and the second one appears every frame afterwards:
2022-03-07 10:55:17.607366-0500 OPERAMTL[1090:272644] Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
2022-03-07 10:55:23.836929-0500 OPERAMTL[1090:272645] Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
I have not been able to replicate these errors.
Can you be more specific about the crash? Does the crash occur immediately, or after some time? Do you ever see the Depthkit clips rendered at all?
What else do you have in the scene? Can you try a minimal test where you just have the Depthkit clip in the scene and nothing else?
Can you also please provide the version numbers for the Depthkit packages? These can be found in the Package Manager in Unity.
Yes, I see the Depthkit clip running at first, because the capture is relatively empty when it starts. It crashes when our character enters the scene. The scene is already empty. I can test without the ARKit plugin; is that what you meant, @Tim?
I will also test with a single-camera capture as well as with new multi-camera captures. I use Depthkit Core version 0.11.1 with Depthkit Studio version 0.7.1.
@DavidDuguay Are you setting the surface buffer capacity while the subject is in the volume? If you set it to an empty volume with no triangles, it may not be apportioning resources according to what is needed to render the full asset.
@Tim, yes, I set the surface buffer capacity while the subject is in the volume, because the subject is already in the volume at the start of the capture. How can I empty the volume and set the surface buffer capacity without the subject?
@DavidDuguay a test without ARKit may be worthwhile to narrow down the issue. I am also curious to know if you have updated to the Phase 8 packages, and whether or not that fixes this issue.
If not, we will need you to provide a minimal reproducible Unity project so our engineering team can diagnose the issue and ultimately fix it.
@Tim I have not tested without ARKit yet, but I’ll try that if it still doesn’t work after the Phase 8 update. I’ll let you know the results.