Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic

ImagePresentationComponent .spatialStereoImmersive mode not rendering in WindowGroup context
Platform: visionOS 26
Framework: RealityKit, SwiftUI
Component: ImagePresentationComponent

I'm working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to the .spatialStereoImmersive viewing mode within a WindowGroup context.

What works:
- Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace.
- Mode-switching API: all mode transitions work correctly (logs confirm the component updates).
- Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.

Where it breaks for me:
- Window context: when the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive shows no visual change. The API calls succeed and state updates correctly, but the immersive content doesn't render.

Apple's Spatial Gallery demonstrates exactly what I'm trying to achieve: spatial photos displayed in a window with what feels like a horizontal scroll view, using the system window control bar, etc. Tapping a spatial photo smoothly transitions it to immersive mode in place; the immersive content appears to "grow" from the original window position just by changing the ImagePresentationComponent viewing mode. This proves the functionality should be possible, but I can't determine the correct configuration.

So, my questions are:
- Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts?
- Are there bounds/clipping settings that need to be configured to allow immersive content to "break out" of window constraints?
- Does .spatialStereoImmersive require a specific rendering context that isn't available in windowed RealityView instances?
- How do you think Apple's Spatial Gallery app achieves this?

For a little more context:
- All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive].
- The spatial photos are valid and work correctly in a pure immersive space.
- A mixed immersive space is active when testing the window context.
- There are no errors or warnings in the console beyond the successful mode-switching logs.

Any insights into the proper configuration for window-hosted immersive content would be appreciated.
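A stripped-down sketch of the setup being described, for reference. The ImagePresentationComponent(contentsOf:) initializer and the desiredViewingMode property are assumptions based on the visionOS 26 API names mentioned above; the photo URL, view, and scene names are placeholders.

import SwiftUI
import RealityKit

struct SpatialPhotoView: View {
    let url: URL
    @State private var photoEntity = Entity()

    var body: some View {
        VStack {
            RealityView { content in
                // Load the spatial photo into the component (assumed initializer).
                if var component = try? await ImagePresentationComponent(contentsOf: url) {
                    component.desiredViewingMode = .spatialStereo   // renders in both contexts
                    photoEntity.components.set(component)
                }
                content.add(photoEntity)
            }
            Button("View immersively") {
                // Succeeds (state updates, logs fire) but shows no visual change
                // when this view is hosted in a WindowGroup.
                if var component = photoEntity.components[ImagePresentationComponent.self] {
                    component.desiredViewingMode = .spatialStereoImmersive
                    photoEntity.components.set(component)
                }
            }
        }
    }
}

// The same view hosted in the two contexts being compared.
struct GalleryScenes: Scene {
    private let photoURL = URL(fileURLWithPath: "/path/to/spatial-photo.heic") // placeholder

    var body: some Scene {
        WindowGroup(id: "gallery") {
            SpatialPhotoView(url: photoURL)   // .spatialStereoImmersive does not render here
        }
        ImmersiveSpace(id: "viewer") {
            SpatialPhotoView(url: photoURL)   // .spatialStereoImmersive renders as expected
        }
    }
}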
1 reply · 1 boost · 201 views · last activity Aug ’25
attenuation map covers over object
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers over the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate the vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
1 reply · 0 boosts · 111 views · last activity Apr ’25
Possible contradiction between ARKit's definition of UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight
It seems there is a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.

The ARKit documentation for ARCamera.transform says the following:

"This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side."

Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:

"The device is in landscape mode, with the device held upright and the front-facing camera on the right side."

There seems to be a conflict between the two definitions, which has already been asked about and visualized in this StackOverflow thread. The accepted answer there says that ARKit's landscapeRight, unlike what is given in UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation. It says that more details are given in this other StackOverflow thread, but that thread discusses the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, not anything related to ARKit.

So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation despite explicitly referencing it? Is it just a mistake by Apple developers that hasn't been resolved even after so long?
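For anyone who wants to check the two conventions on a device, here is a small sketch (the class name is a placeholder) that logs the camera's x-axis in world space next to UIKit's reported device orientation; per the ARCamera.transform documentation, that x-axis should run along the device's long axis toward the Home button side.

import ARKit
import UIKit

final class OrientationChecker: NSObject, ARSessionDelegate {
    override init() {
        super.init()
        // Needed for UIDevice.current.orientation to report real values.
        UIDevice.current.beginGeneratingDeviceOrientationNotifications()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let xAxis = frame.camera.transform.columns.0   // camera x-axis in world coordinates
        print("camera x-axis: \(xAxis), UIDeviceOrientation: \(UIDevice.current.orientation.rawValue)")
    }
}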
1 reply · 0 boosts · 107 views · last activity Jun ’25
I am developing an immersive video app for visionOS but have an issue with the app window and the video player window
In a visionOS app, we have two types of windows:

- Main app window – the default window that launches when the app starts. It displays the video listings and other primary content.
- Immersive space window – opens only when a user starts streaming or playing a video.

Issue: When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.

Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.

Attempts and challenges:
- Tried managing opacity, visibility, and state preservation, but none worked as expected.
- Couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.

I'm looking for a solution that keeps the main window's state intact while transitioning between immersive and normal modes.
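One possible direction, sketched below under the assumption that the "restart" comes from view-local state being lost when the window is dismissed: keep the browsing state in an app-level observable model, so dismissing the main window on entry to the immersive space and reopening it on exit restores from that model instead of starting over. The type names, window identifiers, and properties here are placeholders, not a confirmed solution.

import SwiftUI
import Observation

@Observable
final class AppModel {
    var selectedVideoID: String?
    var lastVisibleVideoID: String?   // e.g. a scroll position to restore
}

struct VideoListView: View {
    @Environment(AppModel.self) private var model
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Play") {
            Task {
                let result = await openImmersiveSpace(id: "player")
                if case .opened = result {
                    dismissWindow(id: "main")   // list state survives in `model`
                }
            }
        }
    }
}

struct PlayerView: View {
    @Environment(AppModel.self) private var model
    var body: some View {
        Text("Playing \(model.selectedVideoID ?? "nothing")")
    }
}

struct VideoAppScenes: Scene {
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "main") {
            VideoListView()
                .environment(model)   // state lives in the model, not the view
        }
        ImmersiveSpace(id: "player") {
            PlayerView()
                .environment(model)
        }
    }
}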
1 reply · 0 boosts · 179 views · last activity Mar ’25
ARKit camera transform orientation vector doesn't match physical device heading (despite `.gravityAndHeading`)
Hi all, I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing to localize objects in the real world. I'm using:

let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading

Thus, I would expect the world alignment to behave as described on the gravityAndHeading page. The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated. However, I'm seeing a major inconsistency.

When the rear camera is physically pointed toward true north, I would expect:

cameraTransform.columns.2.z ≈ -1 // (i.e. ARKit's -Z pointing north)

But instead, I'm consistently seeing:

cameraTransform.columns.2.z ≈ +0.97 // implies the camera is facing south

Meanwhile, the translation vector behaves as expected: as I physically move north, cameraTransform.columns.3.z becomes more negative, matching the world's +Z = south assumption.

For example, say the device is in landscapeRight (landscapeLeft in UIDeviceOrientation terms), the rear camera is pointing toward true north, and I start moving toward true north. I get something like this:

Camera Transform = simd_float4x4([
    [0.98446155, -0.030119859, 0.172998, 0.0],
    [0.023979114, 0.9990097, 0.037477385, 0.0],
    [-0.17395553, -0.032746706, 0.98420894, 0.0],
    [0.024039675, -0.037087332, -0.22780673, 0.99999994]
])

As you can see, cameraTransform.columns.2.z is positive despite the rear camera pointing toward true north, while cameraTransform.columns.3.z is correctly negative as the device moves toward true north.

So here is my question: why is cameraTransform.columns.2.z positive when the rear camera is physically facing north? I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch. Any clarity would be deeply appreciated. Thanks in advance!
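A note on the convention that may explain this, with a small sketch: in ARKit the camera looks along its local -Z axis, so columns.2 of the camera transform is the camera's backward axis expressed in world coordinates, not its viewing direction. If that is what's happening here, the forward vector computed below should come out near (0, 0, -1) when the rear camera faces true north under .gravityAndHeading. The function names are placeholders.

import ARKit
import simd

// Forward (viewing) direction of the camera in world space.
func cameraForward(_ frame: ARFrame) -> simd_float3 {
    let back = frame.camera.transform.columns.2   // camera +Z (backward) in world space
    return -simd_float3(back.x, back.y, back.z)
}

// Heading in degrees clockwise from true north, projected onto the ground plane
// (+X = east, -Z = north under .gravityAndHeading).
func headingDegrees(_ frame: ARFrame) -> Float {
    let f = cameraForward(frame)
    return atan2(f.x, -f.z) * 180 / .pi
}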
1 reply · 0 boosts · 140 views · last activity Jun ’25
CapturedRoom.Section is missing a lot of information
The Section struct only publicly exposes the center property, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives their position and rotation. When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not exposed? The transform in particular seems necessary to make any real use of the Sections.
1 reply · 0 boosts · 386 views · last activity Sep ’25
Is `ParticleEmitterComponent` implemented for `RealityKit` on iOS?
Hi there, I was looking to add a particle emitter to the augmented reality app I'm developing using RealityKit, targeting iOS. The documentation for ParticleEmitterComponent indicates that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available on iOS. Would it be possible to get clarification on this?
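For reference, a minimal sketch of the usage in question, gated on the documented availability. Whether this compiles presumably depends on building against the iOS 18 SDK (Xcode 16 or later); the default configuration and birth rate below are placeholders.

import RealityKit

func addEmitter(to entity: Entity) {
    if #available(iOS 18.0, *) {
        var emitter = ParticleEmitterComponent()
        emitter.mainEmitter.birthRate = 100   // arbitrary placeholder value
        entity.components.set(emitter)
    } else {
        print("ParticleEmitterComponent requires iOS 18 or later")
    }
}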
1 reply · 0 boosts · 150 views · last activity May ’25
Apple Vision Pro Developer Strap: Video Out and Recording Time Limit
Is there any way to extend the video recording time in Reality Composer Pro from 3:00 to a longer value, such as by editing preferences in Terminal or some other workaround? Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to QuickTime or some other video capture tool? I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
1 reply · 0 boosts · 176 views · last activity Jun ’25
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier.

WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))

Starting in visionOS 26, using this has a side effect. visionOS 26 will restore windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.

Manual resize respected:
- Leaving a room and returning later
- Taking the headset off and putting it back on later

Manual resize NOT respected:
- Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost.

This is counter to how all other windows and widgets work. I reported this last month (FB18429638), but haven't heard back whether this is a bug or intended behavior.

Questions:
- What is the best way to provide a default window size that will only be used when opening new windows, and not during scene restoration?
- Should we try to keep track of window sizes after users adjust them and save that somewhere?
- If this is intended behavior, can someone please update the docs accordingly?
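Regarding the second question, here is a workaround sketch, assuming it is acceptable to persist the last size ourselves: observe the content size, save it, and feed the saved value back into defaultSize when the scene is built. The storage keys and the stub SomeView are placeholders, and this is not a confirmed fix for the restoration behavior described above.

import SwiftUI

struct ResizableWindowScene: Scene {
    @AppStorage("mainWindowWidth") private var width: Double = 600
    @AppStorage("mainWindowHeight") private var height: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            GeometryReader { proxy in
                SomeView()
                    .onChange(of: proxy.size) { _, newSize in
                        // Remember the user's manual resize.
                        width = newSize.width
                        height = newSize.height
                    }
            }
        }
        .defaultSize(CGSize(width: width, height: height))
    }
}

struct SomeView: View {
    var body: some View { Text("Content") }
}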
1 reply · 0 boosts · 457 views · last activity Jul ’25
Using AVAsynchronousKeyValueLoading.load() on an AVAssetTrack gives an error
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:

let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

This now gives a compiler error: "Type of expression is ambiguous without a type annotation". I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually seems to work fine, i.e.:

let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
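One workaround sketch, under the assumption that the ambiguity is just type inference on the tuple result: annotate the tuple's element types explicitly so the compiler doesn't have to infer the result of the variadic load call. The element types below (CGSize, [CMFormatDescription], [AVMediaCharacteristic]) are what those properties return on AVAssetTrack.

import AVFoundation

func loadTrackProperties(from videoTrack: AVAssetTrack) async {
    let properties: (CGSize, [CMFormatDescription], [AVMediaCharacteristic])? =
        try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

    if let (naturalSize, formatDescriptions, mediaCharacteristics) = properties {
        print(naturalSize, formatDescriptions.count, mediaCharacteristics.count)
    }
}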
1 reply · 0 boosts · 117 views · last activity Jun ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) event subscriptions? For example, the ManipulationEvents docs from Apple use _ to discard the result. In theory the subscription should keep working while the content is alive.

_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}

We could instead store these subscriptions in state. I've seen this in a few samples and apps.

@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

The main advantage I see is that we can be more explicit about when we remove the subscription. Are there other reasons to keep a reference to these events?
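One concrete use of the stored reference, sketched below: cancelling the subscription yourself (EventSubscription exposes cancel()) when the view disappears or when you no longer want the handler to fire, rather than relying on the content's lifetime. The view name is a placeholder.

import SwiftUI
import RealityKit

struct ManipulationColorView: View {
    @State private var beginSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
                event.entity.components[ModelComponent.self]?.materials[0] =
                    SimpleMaterial(color: .blue, isMetallic: false)
            }
        }
        .onDisappear {
            // Stop receiving events as soon as the view goes away.
            beginSubscription?.cancel()
            beginSubscription = nil
        }
    }
}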
1 reply · 0 boosts · 591 views · last activity Sep ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality of visionOS from Unity. The framework is written in Swift, and I create an Objective-C wrapper with the following declarations:

...
void _initTTS(int);
...

I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...
    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated project from Xcode, I can see the app on the AVP, but I keep getting a loading error:

DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
1 reply · 0 boosts · 299 views · last activity Sep ’25
WebXR and PSVR2
We got very excited when we saw support for the PSVR2 at WWDC! WebXR is particularly interesting to us, so we got the controllers to give it a try. Unfortunately, they only register as gamepads in the navigator, but not as XR input devices. We went through the experimental flags and didn't find anything that is directly related. Is there a flag we missed? If not, when do you have PSVR2 support planned for WebXR?
1 reply · 5 boosts · 381 views · last activity Jun ’25