Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction, by animating the view's opacity. What I'm observing is that depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect appears (presumably because of draw-order issues), as shown in the right column in the screenshots below. YouTube video of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds)

My question: How does visionOS determine the view attachment draw order relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments are always drawn after the scene, fixing the unwanted cutout effect. I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
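For reference, a minimal sketch of the ModelSortGroupComponent approach the post mentions, which applies to entities inside the RealityView scene (entity names here are hypothetical); per the post, this does not extend to view attachments:

```swift
import RealityKit

// Entities in the same sort group draw in ascending `order`.
// `dioramaEntity` and `overlayEntity` are illustrative placeholders.
let sortGroup = ModelSortGroup(depthPass: nil)
dioramaEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 0)) // drawn first
overlayEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 1)) // drawn after the diorama
```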
0
0
121
1w
Depth map accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I'm currently working with the sample code "CapturingDepthUsingTheLiDARCamera" and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters.

I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example, an object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud, around 60° instead of 90°.

My question is: Is this due to the general accuracy limitations of the LiDAR sensor, or could it be related to the sample code? To obtain the depth map, I'm using:

AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)

Thanks in advance for your help!
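As context for the question, a minimal sketch of pinhole unprojection under the usual assumptions (the depth value is z-distance in meters, intrinsics come from the calibrated camera; the function name is illustrative). If the pipeline diverges from these assumptions, for example if the map stores radial distance rather than z-depth, that could produce exactly this kind of angular distortion:

```swift
import simd

// Unproject a pixel (u, v) with depth d into camera space using the pinhole model.
// `intrinsics` is the 3x3 camera matrix (column-major), as delivered by AVFoundation/ARKit.
func unproject(u: Int, v: Int, depth: Float, intrinsics: simd_float3x3) -> SIMD3<Float> {
    let fx = intrinsics.columns.0.x   // focal length x
    let fy = intrinsics.columns.1.y   // focal length y
    let cx = intrinsics.columns.2.x   // principal point x
    let cy = intrinsics.columns.2.y   // principal point y
    let x = (Float(u) - cx) * depth / fx
    let y = (Float(v) - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}
```

One thing worth verifying is that the intrinsics are scaled to the depth map's resolution; LiDAR depth maps are much smaller than the photo, so fx, fy, cx, and cy must be divided by the photo-to-depth scale factor before unprojecting.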
0
0
133
Apr ’25
Low resolution when streaming the Vision Pro display to a Mac
After transmission, the live-stream resolution drops significantly: picture detail is lost and clarity suffers, so key information such as the texture and dimensions of 3D furniture products cannot be displayed accurately, which affects the user's judgment of the products. Request: optimize the resolution-compression strategy in the streaming pipeline, reduce quality loss during transmission, and improve the clarity of the live stream received on the Mac to match the high-fidelity needs of 3D product display.
0
0
218
Dec ’25
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near me, let me flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
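Not a full answer, but a minimal sketch of one way to approach the pick-up-and-return part with a RealityView drag gesture, assuming the target entity has InputTargetComponent and CollisionComponent; the flick physics is left out, and names and the return-animation duration are illustrative:

```swift
import SwiftUI
import RealityKit

struct SummonView: View {
    @State private var homeTransform: Transform?

    var body: some View {
        RealityView { content in
            // Add pickable entities here; each needs an InputTargetComponent
            // and a CollisionComponent to receive gestures.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    // Remember where the entity started.
                    if homeTransform == nil { homeTransform = entity.transform }
                    guard let parent = entity.parent else { return }
                    // Follow the gesture in the entity's parent space.
                    entity.position = value.convert(value.location3D, from: .local, to: parent)
                }
                .onEnded { value in
                    // Animate back to the stored original transform on release.
                    if let home = homeTransform {
                        value.entity.move(to: home, relativeTo: value.entity.parent,
                                          duration: 0.4, timingFunction: .easeInOut)
                    }
                    homeTransform = nil
                }
        )
    }
}
```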
1
0
71
Apr ’25
visionOS SharePlay nearby codes expire
It looks like the nearby pairing expires one week after another AVP device is accepted as nearby. Since we provide our clients with an always-available app for walking through architecture, it's a shame that non-technical staff have to reconnect five devices every week to work together. Is there any workaround for this issue, or should it go straight to the wishlist? Thanks for the support!
0
0
76
Sep ’25
RoomPlan crash on iOS 16 devices even when using available checks
I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.

Xcode error:
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ

I can repro using the Apple sample code: https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience

Modify RoomCaptureViewController.swift as follows. Remove:

try finalResults?.export(to: destinationURL, exportOptions: .parametric)

Add:

if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}

I would have expected this code to at least compile and run on older devices. When the app targeted iOS 15, the availability checks worked as expected and the app launched properly.
0
0
171
Apr ’25
Implementing Foveated Streaming with Apple Vision Pro
Hello, I want to understand the current state of developing for Apple Vision Pro. I want to stream a video from a remote server in real time; it is a live stream and can't be downloaded. I want to serve a low-quality stream plus a high-resolution stream, where the server only sends the high-resolution "box" around where the user is looking. Are there any APIs to track where the user is looking within the experience? Thanks.
1
0
687
Dec ’25
ARKit Eye Tracking Calibration Issues - Word-Level Reading Tracking Feasibility
Hi Apple Developer Community, I'm developing an eye-tracking application using ARKit's ARFaceTrackingConfiguration and ARFaceAnchor.blendShapes for gaze detection in Xcode. I'm experiencing several calibration and accuracy issues and would appreciate insights from the community.

Current Implementation
- Using ARFaceAnchor.blendShapes (.eyeLookUpLeft, .eyeLookDownLeft, .eyeLookInLeft, .eyeLookOutLeft, etc.)
- Implementing custom sensitivity curves and smoothing algorithms
- Applying baseline correction and coordinate mapping
- Using quadratic regression for calibration point mapping

Issues I'm Facing
1. Calibration Mismatch: the red dot doesn't align with where I'm actually looking; there is a significant offset between the intended gaze point and the actual cursor position; calibration seems to drift or become inaccurate over time.
2. Extreme Eye Movement Requirements: I need to make exaggerated eye movements to reach screen edges and corners; natural eye movements don't translate to proportional cursor movement; certain screen regions are difficult to reach even after calibration.
3. Sensitivity and Stability Issues: the cursor jitters or jumps around when looking at the center; there is too much sensitivity to micro-movements; behavior is inconsistent between calibration and normal operation.
4. I also noticed that tracking on both the calibration screen and the reading screen works better when head movement is involved, but I do not want much head movement. I want tracking driven by normal eye movement while reading an ebook.

Primary Question: Word-Level Eye Tracking Feasibility
Is word-level eye tracking (tracking gaze as users read through individual words in an ebook) technically feasible with current iPhone/iPad hardware? I understand that Apple's built-in eye tracking is primarily an accessibility feature for UI navigation. However, I'm wondering if the TrueDepth camera and ARKit's eye-tracking capabilities are sufficient for:
- Tracking natural reading patterns (left-to-right, line-by-line progression)
- Detecting which specific words a user is looking at
- Maintaining accuracy for sustained reading sessions (15-30 minutes)
- Working reliably across different users and lighting conditions

Questions for the Community
- Hardware Limitations: Are iPhone/iPad TrueDepth cameras capable of the precision needed for word-level tracking, or is this beyond current hardware capabilities?
- Calibration Best Practices: What calibration strategies have worked best for accurate gaze mapping? How many calibration points are typically needed?
- Reading-Specific Challenges: Are there particular challenges when tracking reading behavior vs. general gaze tracking?
- Alternative Approaches: Are there better approaches than ARKit blend shapes for this use case?

Current Setup
- Device: iPhone 14 Pro
- iOS Version: iOS 18.3
- ARKit Version: latest available

Any insights, experiences, or technical guidance would be greatly appreciated. I'm particularly interested in hearing from developers who have worked on similar eye-tracking applications or have experience with the limitations and capabilities of ARKit's eye tracking features. Thank you for your time and expertise!
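As a reference point for discussion, a minimal sketch of reading the gaze-related blend shapes named above from an ARFaceAnchor; the signed-mapping convention is an assumption, and the per-user scaling and smoothing the post describes are the hard part:

```swift
import ARKit

// Combine the left eye's four look-direction blend shapes into rough
// signed horizontal/vertical gaze estimates in approximately [-1, 1].
func gazeOffsets(from anchor: ARFaceAnchor) -> (horizontal: Float, vertical: Float) {
    let shapes = anchor.blendShapes
    let up      = shapes[.eyeLookUpLeft]?.floatValue ?? 0
    let down    = shapes[.eyeLookDownLeft]?.floatValue ?? 0
    let inward  = shapes[.eyeLookInLeft]?.floatValue ?? 0
    let outward = shapes[.eyeLookOutLeft]?.floatValue ?? 0
    return (horizontal: inward - outward, vertical: up - down)
}
```

ARFaceAnchor also exposes lookAtPoint, a fused estimate of the point the eyes converge on in face-anchor space, which may be a steadier input than combining raw blend shapes by hand.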
0
0
731
Oct ’25
How to transition between spatial3d and spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top right corner to transition from the window presenting the image as spatial3d to an immersive photo scene with spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
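For context, a hedged sketch of what the mode switch might look like with the visionOS 26 ImagePresentationComponent, assuming that is the API in play (property names follow the WWDC25 material on immersive image formats; not verified against this exact scenario, and `entity` is an illustrative placeholder):

```swift
import RealityKit

// Switch an existing ImagePresentationComponent to the immersive viewing
// mode instead of recreating the entity, then write the component back.
if var component = entity.components[ImagePresentationComponent.self],
   component.availableViewingModes.contains(.spatial3DImmersive) {
    component.desiredViewingMode = .spatial3DImmersive
    entity.components.set(component)
}
```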
1
0
871
Nov ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these types of custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (like in the current sample project), but we didn't find a way to access and control, for example, the arbiter, seats, or player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
297
Apr ’25
A Summary of the WWDC25 Group Lab - visionOS
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for visionOS.

Q: I saw that there is a new way to add SwiftUI view attachments in my RealityView; what advantages does this have over the old way?
A: Attachments can now be added directly to your entities with ViewAttachmentComponent. This removes the need to declare your attachments upfront in your RealityView initializer and then add those attachments as child entities. The new approach provides greater flexibility. Canyon Crosser and Petite Asteroids both utilize the new approach.

Q: ManipulationComponent looks really cool! Right now my app has a series of complicated custom gestures. What gestures does it handle for me exactly, and are there any situations where I should prefer my own custom gestures?
A: ManipulationComponent provides natural interaction with virtual objects. It seamlessly handles translation and rotation. You can easily add manipulation to a SwiftUI view like Model3D with the manipulable view modifier. The new Object Manipulation API is great for most apps and is a breeze to implement, but sometimes you might want a more custom feel, and that's ok! Custom gestures are still fully supported for that scenario.

Q: I saw that there is a new API to also access the right main camera. What can I do with this?
A: Correct; in visionOS 26, you can access the left and right main cameras. You can even access them simultaneously as a stereo pair. Camera access still requires a managed entitlement and an enterprise license; see Accessing the main camera for more details about those requirements. More computer vision and machine learning use cases are unlocked with access to both cameras; we are excited to see what you will do!

Q: What do I need to do to add spatial accessory input for my app?
A: First, use the GameController framework to establish a connection with the spatial accessory, and then listen for events from the controller. Then you can use either RealityKit, ARKit, or a combination of both to track the accessory, anchor virtual content to it, and fine-tune the accessory's interaction with the content in your app. For more details, check out Discovering and tracking spatial game controllers and styli.

Q: By far, the most difficulty with implementing visionOS apps is SwiftUI window management: placing, opening, closing, etc. Are there any improvements to window management in visionOS 26?
A: Yes! We recommend watching Set the scene with SwiftUI in visionOS. You can use defaultLaunchBehavior to choose whether a particular window is presented (or suppressed) at launch. You can also prevent a window like a secondary toolbar from launching as the initial window using .restorationBehavior(.disabled). Adopting best practices for persistent UI provides a great overview of SwiftUI window management on visionOS. As for placing windows, there is still no API for an app to specify the placement of its windows other than relative placement. If that is a feature you are interested in, please file an enhancement request for it using Feedback Assistant!

Q: How do I get access to the Enterprise API?
A: First, request the entitlement and license through your Apple Developer or enterprise account. Once these have been granted, include the license and entitlement in your project. Then you can build, test, and distribute as an in-house app.
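As a pointer for the window-management answer above, a minimal sketch of the two scene modifiers it mentions (scene IDs and view contents are illustrative):

```swift
import SwiftUI

@main
struct MyVisionApp: App {
    var body: some Scene {
        // The primary window presents normally at launch.
        WindowGroup(id: "main") {
            Text("Main window")
        }

        // Keep a secondary toolbar from restoring or launching as the first window.
        WindowGroup(id: "toolbar") {
            Text("Toolbar")
        }
        .restorationBehavior(.disabled)
        .defaultLaunchBehavior(.suppressed)
    }
}
```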
4
0
333
Jul ’25
ImmersiveSpace orphaned when WindowGroup closes
Environment: visionOS 26.1, Xcode 26.1.1

Problem: When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear gets cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.

Steps:
1. WindowGroup opens ImmersiveSpace in .onAppear
2. User clicks X to close the window
3. .onDisappear fires but the async cleanup is cancelled
4. ImmersiveSpace remains active; the user is trapped

Expected: ImmersiveSpace dismissed when the window closes
Actual: ImmersiveSpace remains active

Code:

.onAppear {
    Task { await openImmersiveSpace(id: "VideoCallMainCamera") }
}
.onDisappear {
    Task { await dismissImmersiveSpace() } // Gets cancelled
}

What I've Tried:
- Task in .onDisappear ❌
- scenePhase monitoring ❌
- High-priority Task ❌
- .restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix immediate cleanup)

Question: What's the recommended pattern for ensuring ImmersiveSpace cleanup when a WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
1
0
226
Dec ’25
Metal Compositor Service & Persona (visionOS)
Hello, I'm currently trying to make a collaborative app, but it only works in RealityView; when I tried to use CompositorLayer as below, the Personas disappeared.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
763
Sep ’25
Possible Bug - Hover Effects/Spatial Event Compatibility with PSVR2 Controllers?
Hi, I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures triggered through the PSVR2 controllers. In the sample application, I found that this was not working: pulling the trigger on the controller while looking at the 3D object did not activate the hover effect's spatial event (the object does show the highlight, though); only pinch-clicking with my fingers seems to register/trigger the spatial event. I am using visionOS 26.3. This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UIView elements, where the trigger press does count as a button click. The sample I used was this one: https://developer.apple.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
2
0
200
1d
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI

What I'm Trying to Do
I have a view-model class PlacementManager that holds two AR providers:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's Happening
If I declare them as:

private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

self.worldTracking = newWorldTracking
// Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

then in my init() I get:

self used in property access 'worldTracking' before all stored properties are initialized

Code snippet:

@Observable final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(worldTracking: worldTracking, rootEntity: root)
        // …
    }

    @MainActor func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run([newWorldTracking, newPlaneDetection])
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}

What I've Tried
- Giving them default values at declaration (= WorldTrackingProvider())
- Initializing them at the top of init() before any use
- Passing the new providers into arkitSession.run(...)

My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads/docs) would be greatly appreciated!
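Not an authoritative answer, but a minimal sketch of one pattern that satisfies both constraints: give the vars initial values (so init() sees all stored properties initialized) and reassign them later. Session handling is elided and the alignments are illustrative; names otherwise follow the post:

```swift
import ARKit
import Observation

@Observable
final class PlacementManager {
    // Initial values mean init() never touches an uninitialized property,
    // while `var` keeps both providers reassignable later.
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    @MainActor
    func setEnvironment() async throws {
        // Fresh providers for the restarted session.
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
        // Run the new providers before swapping them in; `arkitSession`
        // lives in the surrounding app state in the original post.
        // try await appState.arkitSession.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}
```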
0
0
137
May ’25
360° video playback issue
When rendering an equirectangular video on a sphere using VideoMaterial and MeshResource.generateSphere(), there is a visible black seam line running vertically on the sphere. This appears to be at the UV seam where the texture coordinates wrap from 1.0 back to 0.0. The same video file plays without any visible seam in other 360° video players on Vision Pro, so the issue is not with the video content itself.

Here is the relevant code:

private func createVideoSphere(content: RealityViewContent, player: AVPlayer) {
    let sphere = MeshResource.generateSphere(radius: 1000)
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(mesh: sphere, materials: [material])
    entity.scale *= .init(x: -1, y: 1, z: 1) // Flip to render on inside
    content.add(entity)
    player.play()
}

The setup is straightforward:
- MeshResource.generateSphere(radius: 1000) generates the sphere mesh
- VideoMaterial(avPlayer:) provides the video texture
- The X scale is flipped to -1 so the texture renders on the inside of the sphere
- The video is a standard equirectangular 360° MP4 file

What I've tried: I attempted to create a custom sphere mesh using MeshDescriptor with duplicate vertices at the UV seam (longitude 0°/360°) to ensure proper UV continuity. However, VideoMaterial did not render any video on the custom mesh (only audio played), and the app eventually crashed. It seems VideoMaterial may have specific mesh requirements.

Questions:
1. Is the black seam a known limitation of MeshResource.generateSphere() when used with VideoMaterial for 360° video?
2. Is there a recommended way to eliminate this UV seam, for example a texture addressing mode or a specific mesh configuration that works with VideoMaterial?
3. Is there an official sample project or code example for playing 360° equirectangular video in a fully immersive space on visionOS? That would be extremely helpful as a reference.

Any guidance would be greatly appreciated. Thank you!
0
0
132
1w
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to visionOS 2.4.1 (from 1.0) my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store didn't support my case since the device was sold in the USA and the product isn't yet available in the Italian market. I don't have the developer strap; how can I resolve the issue? Thank you.
0
0
114
May ’25
I built Apple.PHASE with Unity and targeted visionOS, but Reverb does not sound.
Environment versions:
- macOS 15.6.1
- visionOS 26.0.1
- Xcode 16.1 or 26.0.1
- Unity 6000.2.9f1
- Apple.Core 3.2.0
- Apple.PHASE 1.2.7
- PolySpatial 2.4.2

With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio is available and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when enabled and their parameters are adjusted. What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?

Actions taken:
- Created a SoundEvent.
- In the composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
- Attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
- Attached a PHASE Listener to the main camera and set the ReverbPreset to a value other than None.
- In Project Settings > Audio, set the Spatializer plugin to PHASE Spatializer.
- From there, built for visionOS.
0
0
793
Nov ’25
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widgets HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS. https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. that I would need, particularly for the new visionOS-specific extra-large widget size.
0
0
582
Jul ’25