I’m using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (USDZ via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.
Goal: When the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.
What works so far:
Detect image → add node → track updates
When the image disappears → keep showing the node at its last pose
Problem: After the image is no longer tracked, the node drifts as I move the device/camera. It looks like it’s still influenced by the (now unreliable) image anchor or accumulating small world-tracking errors.
Question: What’s the correct way in ARKit to “freeze” the node at its last known world transform once ARImageAnchor stops tracking, so it doesn’t drift?
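One commonly suggested pattern, sketched here under the assumption that sceneView is your ARSCNView and placedNode is the node you parented under the image anchor's node: once isTracked flips to false, capture the node's world transform and reparent it to the scene root, so world tracking owns it and the stale image anchor no longer influences it.
```swift
// Sketch: freeze the node at its last known world pose once the image
// anchor stops tracking. `sceneView` and `placedNode` are assumed names.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    if !imageAnchor.isTracked, placedNode.parent !== sceneView.scene.rootNode {
        // Capture the last world transform *before* detaching the node.
        let frozenTransform = placedNode.simdWorldTransform
        placedNode.removeFromParentNode()
        sceneView.scene.rootNode.addChildNode(placedNode)
        placedNode.simdWorldTransform = frozenTransform
    }
}
```
Small residual drift from world-tracking error itself can't be eliminated, but this removes the influence of the unreliable image anchor.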
Vision Pro development, Xcode: I want to capture a picture of a model displayed in a window. The model can change its material, so it does not have a single fixed appearance; it keeps changing. How can I convert this model into an image?
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes.
For testing, we used this 8K video:
https://deovr.com/nwfnq1
The video was played using the following code:
@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}
When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s
We've tried different attribute settings for AVPlayerItemVideoOutput. We also removed all of the logic that reads frame data, but playback remained choppy.
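For context, the frame-reading side (removed during the test above) followed the usual pull model; a simplified sketch of that loop, where output is the AVPlayerItemVideoOutput created earlier and uploadToUnity is a placeholder for our texture upload:
```swift
// Simplified sketch of the display-link-driven frame-pull loop.
@objc private func displayLinkFired(_ link: CADisplayLink) {
    guard let output = output else { return }

    // Ask for the item time that corresponds to the upcoming display time.
    let hostTime = link.timestamp + link.duration
    let itemTime = output.itemTime(forHostTime: hostTime)

    // Only copy when a new buffer is actually available for that time;
    // copying unconditionally adds needless pressure on the pipeline.
    if output.hasNewPixelBuffer(forItemTime: itemTime),
       let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
        uploadToUnity(pixelBuffer)  // placeholder for our texture upload
    }
}
```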
Can you advise whether this is a player issue, or whether we're doing something wrong?
Hi, I'm working with CameraFrameProvider from Enterprise API. Is it always capped at 30fps, or is there something I can switch to get more?
I assume it is capped at 30, so let me cram in an additional question here :). If I get a Developer Strap and attach an external camera capable of more than 30fps, will I get the full stream, or will some other limitation kick in?
Hi Apple Developer Community,
I'm developing an eye-tracking application using ARKit's ARFaceTrackingConfiguration and ARFaceAnchor.blendShapes for gaze detection using Xcode. I'm experiencing several calibration and accuracy issues and would appreciate insights from the community.
Current Implementation
Using ARFaceAnchor.blendShapes (.eyeLookUpLeft, .eyeLookDownLeft, .eyeLookInLeft, .eyeLookOutLeft, etc.)
Implementing custom sensitivity curves and smoothing algorithms
Applying baseline correction and coordinate mapping
Using quadratic regression for calibration point mapping
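For reference, the smoothing stage described above is roughly this shape; a minimal sketch where the alpha value and the left-eye-only mapping are illustrative simplifications:
```swift
import ARKit

/// Minimal exponential smoothing of a gaze estimate derived from blend shapes.
/// Mapping the result to screen coordinates (calibration) happens elsewhere.
struct GazeSmoother {
    private var smoothed = CGPoint.zero
    let alpha: CGFloat = 0.15   // lower = steadier cursor, higher = lower latency

    mutating func update(with anchor: ARFaceAnchor) -> CGPoint {
        let shapes = anchor.blendShapes
        func value(_ key: ARFaceAnchor.BlendShapeLocation) -> CGFloat {
            CGFloat(shapes[key]?.floatValue ?? 0)
        }
        // Horizontal: out minus in (left eye only here; the right eye is mirrored).
        let x = value(.eyeLookOutLeft) - value(.eyeLookInLeft)
        // Vertical: up minus down.
        let y = value(.eyeLookUpLeft) - value(.eyeLookDownLeft)

        // Exponential moving average to suppress micro-movement jitter.
        smoothed.x += alpha * (x - smoothed.x)
        smoothed.y += alpha * (y - smoothed.y)
        return smoothed
    }
}
```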
Issues I'm Facing
1. Calibration Mismatch
Red dot position doesn't align with where I'm actually looking
Significant offset between intended gaze point and actual cursor position
Calibration seems to drift or become inaccurate over time
2. Extreme Eye Movement Requirements
Need to make exaggerated eye movements to reach screen edges/corners
Natural eye movements don't translate to proportional cursor movement
Difficulty reaching certain screen regions even with calibration
3. Sensitivity and Stability Issues
Cursor jitters or jumps around when looking at center
Too much sensitivity to micro-movements
Inconsistent behavior between calibration and normal operation
4. I also noticed that tracking on the calibration screen and on the reading screen works noticeably better when head movement is involved, but I don't want much head movement. I want tracking to work with normal eye movement while reading an ebook.
Primary Question: Word-Level Eye Tracking Feasibility
Is word-level eye tracking (tracking gaze as users read through individual words in an ebook) technically feasible with current iPhone/iPad hardware?
I understand that Apple's built-in eye tracking is primarily an accessibility feature for UI navigation. However, I'm wondering if the TrueDepth camera and ARKit's eye tracking capabilities are sufficient for:
Tracking natural reading patterns (left-to-right, line-by-line progression)
Detecting which specific words a user is looking at
Maintaining accuracy for sustained reading sessions (15-30 minutes)
Working reliably across different users and lighting conditions
Questions for the Community
Hardware Limitations: Are iPhone/iPad TrueDepth cameras capable of the precision needed for word-level tracking, or is this beyond current hardware capabilities?
Calibration Best Practices: What calibration strategies have worked best for accurate gaze mapping? How many calibration points are typically needed?
Reading-Specific Challenges: Are there particular challenges when tracking reading behavior vs. general gaze tracking?
Alternative Approaches: Are there better approaches than ARKit blend shapes for this use case?
Current Setup
Devices: iPhone 14 Pro
iOS Version: iOS 18.3
ARKit Version: Latest available
Any insights, experiences, or technical guidance would be greatly appreciated. I'm particularly interested in hearing from developers who have worked on similar eye tracking applications or have experience with the limitations and capabilities of ARKit's eye tracking features.
Thank you for your time and expertise!
There seems to be a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.
In the ARKit documentation for ARCamera.transform, it says the following:
This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side.
Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:
The device is in landscape mode, with the device held upright and the front-facing camera on the right side.
The two definitions conflict, and this has already been asked about and visualized in this StackOverflow thread.
The accepted answer there says that ARKit's landscapeRight, unlike UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation.
It says that more details are given in this other StackOverflow thread, but that thread discusses the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, and nothing related to ARKit.
So I am wondering: why does ARKit's definition of landscapeRight contradict UIDeviceOrientation's despite explicitly referencing it? Is it just a mistake by Apple's developers that has remained unresolved for so long?
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience in Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
My ARViewContainer code is not working. I don't know how to debug the issue, and I can't see where my output is going. I need help resolving this issue. Please help me debug. See code below:
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
I work on a game where I use timeline animations in Reality Composer Pro.
The game runs in an immersive space but can be paused, at which point I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do exactly the reverse and move the level root from the window group back to my immersive-space RealityView.
It seems that all animations are automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it left off; it restarts from its paused position as if that were the origin, and therefore ends up with a wrong y offset, as visible in the picture.
For example in the picture, the yellow sphere loops the following animation:
0 to 100
100 to -100
-100 to 0
If I now pause the game (and basically switch scenes), the previous animation gets stopped and restarted at position y = 100. So now it loops:
100 to 200
200 to 0
0 to 100
I already tried all kinds of setups, like:
Setting the animations relative to root, parent, local
Using behaviors (on Added to Scene, on Notification)
And finally even by accessing the availableAnimations directly and saving the playback controller of the animation
There I saw, if I manually trigger the following code before switching the scene, everything works as expected:
Button("Reset") {
animationPlaybackController.time = 0
animationPlaybackController.pause()
animationPlaybackController.stop(blendOutDuration: 0.00001)
}
But if I set time = 0 together with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, with the animation stopping at a wrong y offset; hence my assumption that animations are stopped and invalidated once the entity changes scenes.
I tried calling the code manually in ImmersiveSpace.onDisappear, WindowGroup.onAppear, and various SceneEvents subscriptions, but unfortunately nothing worked.
So am I doing something wrong in general or is there a way to fix this?
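To make the intended workaround concrete, here is a sketch of the save-and-restore I have been attempting around the scene switch (savedTime and currentController are my own bookkeeping names; the looping animation is assumed to be the first in availableAnimations):
```swift
import RealityKit

// Sketch: save timeline progress before the scene move, restore after it.
var savedTime: TimeInterval = 0
var currentController: AnimationPlaybackController?

func beforeSceneSwitch() {
    if let controller = currentController {
        savedTime = controller.time            // remember progress
        controller.stop(blendOutDuration: 0)
    }
}

func afterSceneSwitch(levelRoot: Entity) {
    guard let animation = levelRoot.availableAnimations.first else { return }
    // Restart the animation paused, jump back to the saved time, then resume.
    let controller = levelRoot.playAnimation(animation, transitionDuration: 0, startsPaused: true)
    controller.time = savedTime
    controller.resume()
    currentController = controller
}
```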
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light-falloff effect just covers the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the first scan with “captureSession.stop(pauseARSession: false)”; I believe the ARSession keeps running at that point.
Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan did not detect).
But at this point the second ARView (RoomPlan contains an ARView internally, I think) always shows a black screen and doesn't work normally. This is the problem I want to resolve. Please help me get the second ARView working.
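One direction that might help, sketched below under the assumption of iOS 17 or later, where RoomCaptureSession can be created with your own ARSession: share a single session between RoomPlan and the follow-up view instead of letting each create its own. ARSCNView is used here only because its session property is assignable; the idea, not the exact view class, is the point.
```swift
import ARKit
import RoomPlan
import SceneKit

// Sketch (iOS 17+): one ARSession shared by RoomPlan and the second view.
let sharedSession = ARSession()

// Drive the room scan with the shared session.
let captureSession = RoomCaptureSession(arSession: sharedSession)

// ... after scanning room one, stop without pausing the underlying session:
captureSession.stop(pauseARSession: false)

// Attach the same, still-running session to the second view instead of
// letting that view spin up its own (which can be left with no camera feed).
let secondView = ARSCNView(frame: .zero)
secondView.session = sharedSession
```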
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro.
I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
Is there any way to convert a TextureResource to an Image?
I am running the Spatial Rendering App template demo, and it shows “No People Found” and “There is no one nearby to share with”.
How can I stream videos rendered by my Mac to my Vision Pro?
I am using macOS 26.0, visionOS 26, and Xcode 26.
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
Hello,
There are three issues I am running into with a default template project + additional minimal code changes:
the Sphere_Left entity always overlaps the Sphere_Right entity.
when I release the Sphere_Left entity, it does not remain sticking to the Sphere_Right entity
when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity
When I manipulate the Sphere_Right entity, the three issues above do not occur; I get the correct, expected behavior.
These issues are simple to replicate:
Create a new project in Xcode
Choose visionOS -> App, then click Next
Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next
Save your project anywhere...
Replace the entire ImmersiveView.swift file with the below code.
Run.
Try to manipulate the left sphere, you should get the same issues I mentioned above
If you restart the project, and manipulate only the right sphere, you should get the correct expected behaviors, and no issues.
I am running this on macOS 26, Xcode 26, and visionOS 26, all recently released.
ImmersiveView Code:
//
//  ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }
        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }
        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro?
Nearly everything is set up in Reality Composer Pro, with my Immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this -
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene, or whether this is actually the right way to do it.
Apologies for my lack of knowledge.
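To make the goal concrete, here is a rough sketch of the behavior I'm after, written with ARKit's ObjectTrackingProvider rather than my RCP scene (contentEntity stands in for my anchored content; I have not gotten this working yet):
```swift
import ARKit
import RealityKit

// Sketch: keep content at the last tracked pose and fire an animation
// when tracking drops.
func followReferenceObject(_ referenceObject: ReferenceObject, contentEntity: Entity) async throws {
    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    var wasTracked = false
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        if anchor.isTracked {
            // While tracked, follow the anchor in world space.
            contentEntity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
            wasTracked = true
        } else if wasTracked {
            // Tracking just dropped: the entity keeps its last world transform,
            // so simply start the USDZ animation in place.
            if let animation = contentEntity.availableAnimations.first {
                contentEntity.playAnimation(animation)
            }
            wasTracked = false
        }
    }
}
```
For this to freeze in place, the entity has to live under a world-fixed parent (for example the RealityView root) rather than under the RCP anchor, otherwise it vanishes together with the anchor.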
I need help to wrap my head around this...
If I import the Reality Composer Pro package and load it into an ARView, I will see 1.3gb of memory usage and about 180-220% cpu usage. The frames will start at around 60fps, and then eventually drop to around 30fps.
If I export the usdz from Reality Composer Pro and load that into the same ARView, I will see about 1gb of memory usage and around 150% cpu usage; fps holds longer at 60 but eventually drops.
If I load that same usdz into a QuickLook view, I will see about 55mb of memory usage, 9-11% cpu, and the frames stay locked at 116fps. The only thing I notice is the button I have is slightly less responsive, but it all still works fine.
I don't understand. How can I make the ARView work as efficiently as QuickLook?
Environment
Xcode: 16.2
visionOS SDK 2.4
Swift 6.1
Targets: Apple Vision Pro (immersive space)
Frameworks: ARKit, RealityKit, SwiftUI
What I’m Trying to Do
I have a view-model class PlacementManager that holds two AR providers:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).
What’s Happening
If I declare them as:
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
I get compile-errors when I later do:
self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant
If I change them to un-initialized vars:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
then in my init() I get:
self used in property access 'worldTracking' before all stored properties are initialized
Code snippet:
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()

        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )

        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
What I’ve Tried
Giving them default values at declaration (= WorldTrackingProvider())
Initializing them at the top of init() before any use
Passing the new providers into arkitSession.run(...)
My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
They’re fully initialized before use in init(), and
I can swap them out later in setEnvironment(...) without compiler errors?
Any pointers (or links to forum threads / docs) would be greatly appreciated!
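A sketch of one standard pattern that satisfies both constraints (hedged, and simplified from the snippet above; dependent objects like PersistenceManager would take the locals inside init()): keep the stored properties as vars so they can be swapped later, and initialize them through local constants so the same instances can be used freely before self is fully initialized.
```swift
import ARKit
import SwiftUI

@Observable
@MainActor
final class PlacementManager {
    // vars, so setEnvironment() can swap them later.
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    private let arkitSession = ARKitSession()

    init() {
        // Initialize through locals: locals are freely usable, whereas
        // `self.worldTracking` cannot be touched before every stored
        // property has a value.
        let worldTracking = WorldTrackingProvider()
        let planeDetection = PlaneDetectionProvider()
        self.worldTracking = worldTracking
        self.planeDetection = planeDetection
        // Dependent objects (PersistenceManager, etc.) can take the locals here.
    }

    func setEnvironment() async throws {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await arkitSession.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking   // compiles: stored as var
        planeDetection = newPlaneDetection
    }
}
```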