After switching, the two devices differ noticeably in brightness, color saturation, contrast, and other image-quality parameters, producing a jarring visual discontinuity that breaks the live stream's continuity and undermines viewers' sense of immersion.
Expectation
· Use the image quality of a standard live-streaming DSLR as the benchmark, and improve the Vision Pro's picture brightness and color reproduction;
· Provide image-quality adjustment controls on the device or in companion software (brightness, contrast, color temperature, etc.), supporting manual calibration before going live so the picture stays consistent with the DSLR's look.
The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and when a shared coordinate space is established with nearby participants, the event connectedParticipantIdentifiers(participants: [UUID]) is received.
At that point, however, Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF). When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participant identifier?
If this is a bug, please fix it in a later beta release. Thank you.
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access), but now we'd like to investigate and try this option.
I have set up the BroadcastExtension and added the picker. When I tap my button, I can see my broadcast extension in the options list in Control Center, but once I tap Start, it stops after roughly one second.
I'm not able to get anything in the console from my SampleHandler (no prints, logs, or anything).
I can, however, see some contradictory information in Console.app (one line right after the other):
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I added the capability to the extension target as well.
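For reference, this is the kind of minimal SampleHandler I would expect to produce at least some output (a simplified sketch, not my actual handler; the subsystem string is a placeholder to filter on in Console.app):

import ReplayKit
import CoreMedia
import OSLog

// Minimal sketch: log lifecycle events only, to confirm whether the
// extension process launches at all. The subsystem string is a placeholder.
class SampleHandler: RPBroadcastSampleHandler {
    private let logger = Logger(subsystem: "com.example.broadcast-extension", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        logger.info("broadcastStarted, setupInfo: \(String(describing: setupInfo))")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
        logger.debug("sample buffer of type \(sampleBufferType.rawValue)")
    }

    override func broadcastFinished() {
        logger.info("broadcastFinished")
    }
}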
At a recent community meeting we were wondering how Apple creates this soft-edge effect around the occlusion cutouts. We see this effect on keyboard cutouts, iPhone cutouts, and in progressive spaces.
An example: notice the soft edges around the occlusion cutout for the keyboard.
One of our members created some Shader Graph materials to explore soft edges. These work by sending data into the opacity channel of the PreviewSurface node.
Unfortunately, the Occlusion Surface nodes lack any sort of input. If you know how to blend these concepts with RealityKit Occlusion, please let us know!
When rendering an equirectangular video on a sphere using VideoMaterial and MeshResource.generateSphere(), there is a visible black seam line running vertically on the sphere. This appears to be at the UV seam where the texture coordinates wrap from 1.0 back to 0.0.
The same video file plays without any visible seam in other 360° video players on Vision Pro, so the issue is not with the video content itself.
Here is the relevant code:
private func createVideoSphere(content: RealityViewContent, player: AVPlayer) {
    let sphere = MeshResource.generateSphere(radius: 1000)
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(mesh: sphere, materials: [material])
    entity.scale *= .init(x: -1, y: 1, z: 1) // Flip to render on the inside
    content.add(entity)
    player.play()
}
The setup is straightforward:
· MeshResource.generateSphere(radius: 1000) generates the sphere mesh
· VideoMaterial(avPlayer:) provides the video texture
· The X scale is flipped to -1 so the texture renders on the inside of the sphere
· The video is a standard equirectangular 360° MP4 file
What I've tried:
I attempted to create a custom sphere mesh using MeshDescriptor with duplicate vertices at the UV seam (longitude 0°/360°) to ensure proper UV continuity. However, VideoMaterial did not render any video on the custom mesh (only audio played), and the app eventually crashed. It seems VideoMaterial may have specific mesh requirements.
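For illustration, here is a minimal sketch of the seam-duplication approach I attempted (the exact vertex layout, UV orientation, and winding here are assumptions and may need flipping; as noted above, VideoMaterial did not render on my real custom mesh):

import Foundation
import RealityKit

// Sketch of a UV sphere with duplicated seam vertices: each ring has
// (slices + 1) vertices, so longitude 0° and 360° are distinct vertices that
// share a position but carry u = 0 and u = 1 respectively, which avoids
// interpolating across the wrap.
func makeSeamDuplicatedSphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * .pi // 0 at the top pole, pi at the bottom
        for slice in 0...slices { // inclusive: duplicates the seam column
            let u = Float(slice) / Float(slices)
            let theta = u * 2 * .pi
            let normal = SIMD3<Float>(sin(phi) * cos(theta), cos(phi), sin(phi) * sin(theta))
            positions.append(normal * radius)
            normals.append(normal)
            uvs.append(SIMD2<Float>(u, 1 - v))
        }
    }

    let ringStride = UInt32(slices + 1)
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack) * ringStride + UInt32(slice)
            let b = a + ringStride
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    var descriptor = MeshDescriptor(name: "seamDuplicatedSphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}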
Questions:
Is the black seam a known limitation of MeshResource.generateSphere() when used with VideoMaterial for 360° video?
Is there a recommended way to eliminate this UV seam — for example, a texture addressing mode or a specific mesh configuration that works with VideoMaterial?
Is there an official sample project or code example for playing 360° equirectangular video in a fully immersive space on visionOS? That would be extremely helpful as a reference.
Any guidance would be greatly appreciated. Thank you!
I want to select a sub-model within a large model in a mixed space, and when I select this sub-model, add a stroke to it, similar to the selection outline in Reality Composer Pro. How can I create an entity stroke with that effect?
I'm capturing a room via the RoomPlan API and would like to access the depth map (sceneDepth) or smoothed depth map (smoothedSceneDepth) from the ARSession I provide to RoomCaptureSession.
But both depth maps are empty when handling the delegates, and I have not found a solution yet. So is it even possible? I have not found any documentation of what RoomCaptureSession overwrites in the ARSession when I provide my own instance.
Here is an example code snippet of what I'm trying to do:
private let arSession = ARSession()
private lazy var roomPlanCaptureSession = RoomCaptureSession(arSession: arSession)

let arConfig = ARWorldTrackingConfiguration()

// Create frame semantics for the configuration used by the ARSession.
var semantics: ARWorldTrackingConfiguration.FrameSemantics = []
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    semantics.insert(.sceneDepth)
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
    semantics.insert(.smoothedSceneDepth)
}
arConfig.frameSemantics = semantics

// Set delegates.
roomPlanCaptureSession.delegate = self
arSession.delegate = self

// Check whether the device supports the depth map.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    arSession.run(arConfig)
} else {
    print(".sceneDepth is unsupported.")
}

// Run the RoomPlan scan configuration.
let captureConfig = RoomCaptureSession.Configuration()
roomPlanCaptureSession.run(configuration: captureConfig)

// Trying to get sceneDepth:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    print("session delegate capture: sceneDepth: \(String(describing: frame.sceneDepth))")
    // Prints: session delegate capture: sceneDepth: nil
}
Also, in this video from 2023 it is said that I can pass a custom ARSession to my RoomPlan:
Explore enhancements to RoomPlan - Video
Quote (3:00): "Here is the init and stop function in previous RoomPlan. And here is how you pass over a custom ARSession to init function. Any custom ARSession with ARWorldTrackingConfiguration will be honored inside RoomCaptureSession."
Anyway, I welcome any input. Maybe I'm doing something wrong. :)
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity.
Expected behavior: the Entity is not removed from the Scene (even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
FB20872220
I'm placing a sphere at the fingertip and updating its position as the hand moves.
Finger joint tracking functions correctly, but I’ve observed noticeable latency in hand tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5–10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately.
When I open the immersive space for the first time, the profiler shows a large performance spike, up to 328%. After that, it stabilizes and runs smoothly.
Note: I don't observe any lag when CPU usage spikes to 300% (on immersive view load), yet the lag still occurs even when CPU usage remains below 100%.
I’m using the following code for hand tracking:
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
And here’s the function I’m using to update the joint positions:
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check that the little-finger tip and intermediate base are both tracked.
    // (Note: fetch .littleFingerIntermediateBase to match the variable name;
    // the original code fetched .littleFingerIntermediateTip here.)
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateBase),
       tipJoint.isTracked,
       intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x,
                                       tipTransform.columns.3.y,
                                       tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x,
                                                    intermediateBaseTransform.columns.3.y,
                                                    intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand-joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName),
              joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x,
                                                    jointTransform.columns.3.y,
                                                    jointTransform.columns.3.z)
    }
}
I’ve attached both a profiler trace and a video recording from Vision Pro that clearly demonstrate the issue.
Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro Recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0
Has anyone else experienced this behavior? My thought is that there might be some background calculations happening at the OS level causing this latency. Any guidance would be greatly appreciated.
Thanks!
I'd like to compose an APNs message (using FCM).
What should I do?
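For context, the message shape I'm aiming at is roughly an FCM HTTP v1 request with an apns override block, sent to https://fcm.googleapis.com/v1/projects/PROJECT_ID/messages:send (a sketch; the token and values are placeholders):

// Sketch of an FCM HTTP v1 message with an apns override block;
// all values here are placeholders. The dictionary is serialized to
// JSON and posted to the messages:send endpoint.
let message: [String: Any] = [
    "message": [
        "token": "DEVICE_REGISTRATION_TOKEN",
        "notification": ["title": "Hello", "body": "Composed via FCM"],
        "apns": [
            "headers": ["apns-priority": "10"],
            "payload": ["aps": ["sound": "default", "badge": 1]]
        ]
    ]
]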
I am allowing users to capture different rooms and add a custom label to each room. Is there a way to store this data in the captured room so that it persists into the final merge? As it is now, my users mark all their scans with custom labels, but after merging there is no way to tell which room is which, so they have to go through and manually re-add the labels. For larger floor plans this is not ideal.
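One workaround I'm considering is keeping the labels outside the scan data entirely, keyed by room identifier (a sketch; it assumes CapturedRoom.identifier, added with the iOS 17 multi-room APIs, stays stable through the StructureBuilder merge, which I haven't verified):

import Foundation
import RoomPlan

// Sketch: store user labels outside the scan, keyed by the captured room's
// identifier, and re-apply them after merging. Assumes CapturedRoom.identifier
// is stable across the merge; unverified.
final class RoomLabelStore {
    private var labels: [UUID: String] = [:]

    func setLabel(_ label: String, for room: CapturedRoom) {
        labels[room.identifier] = label
    }

    func label(for room: CapturedRoom) -> String? {
        labels[room.identifier]
    }
}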
I want to implement the functionality shown in this video. How should I set up the window?
I want an AR character to be able to look at a position while still playing the character's animation.
So far, I managed to manually adjust a single bone rotation using
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
which I run at every rendering update, while a full-body animation is running.
But of course, hardcoding a single joint (in my case the head) to point in a direction does not look as nice as running some inverse kinematics that involves the hips + neck + head joints.
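For reference, the lookAtRotation in the snippet above is derived roughly like this (a sketch; the +Z forward axis is an assumption that depends on the rig, and both positions must be in the same coordinate space):

import simd

// Rotate the bone's assumed forward axis toward the direction from the joint
// to the target.
func lookAtRotation(from jointPosition: SIMD3<Float>,
                    to targetPosition: SIMD3<Float>,
                    forwardAxis: SIMD3<Float> = [0, 0, 1]) -> simd_quatf {
    let direction = simd_normalize(targetPosition - jointPosition)
    return simd_quatf(from: simd_normalize(forwardAxis), to: direction)
}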
I found some good IKRig code in Composing interactive 3D content with RealityKit and Reality Composer Pro.
But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig changes to the mesh.
I want to step into the portal world. I know PortalCrossingComponent can make an entity cross a portal, but how do I make the device itself cross into the portal world?
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description
I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden.
However, the SpatialEventGesture implementation is not working as expected. The menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.
Current Implementation
Here's the relevant gesture detection code in my ImmersiveView:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through the event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long-press gesture as a backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}
What I've Tried
Multiple SpatialTapGesture approaches: Originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
SpatialEventGesture implementation: Switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
Added debugging: Console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
Backup LongPressGesture: Added a simultaneous long press gesture as backup, which also doesn't work consistently.
Expected Behavior
When the panorama menu is hidden (after 5-second auto-hide), users should be able to:
Perform a pinch gesture (indirect pinch) to show the menu
Tap in space to show the menu
Use other spatial gestures to show the menu
Questions
Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
Should I be using a different gesture approach for visionOS immersive spaces?
Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?
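Related to questions 2 and 4, one thing I still plan to test is the general RealityKit rule that content participates in gesture hit-testing only when it has both an InputTargetComponent and a CollisionComponent (a sketch; I'm not sure how this interacts with SpatialEventGesture in an immersive space, and the collision-shape choice for a large inward-facing sphere is an open question):

import RealityKit

// Sketch: give the panoramic sphere input and collision components so view
// gestures have content to land on. Whether a convex collision shape works
// from inside the sphere is unverified.
func makeSphereHitTestable(_ sphereEntity: ModelEntity) {
    sphereEntity.components.set(InputTargetComponent())
    sphereEntity.generateCollisionShapes(recursive: false)
}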
Environment
Xcode 16.1
visionOS 2.5
Testing on Vision Pro device
App uses SwiftUI + RealityKit
Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!
Additional Context
The app manages multiple windows and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden.
Thank you for any help or suggestions!
If I trigger the Apple rating modal in an immersive space, it appears on the ground at (0,0,0). I need it to appear in front of the user, like the push notification permission prompt and other permission requests do.
Greetings. I am having this issue with a Unity PolySpatial visionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't.
If we go back to the Home Screen and reopen the app, our main Bounded Volume doesn't always appear; only the Native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations.
If I don't run testAnimation, nothing happens.
If I run testAnimation, I see the same animation as if I had called
entity.playAnimation(entity.availableAnimations[0],..)
Here's the full code I use to animate a single bone:
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
PS: I can confirm that I can set this bone to a specific position if I use
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
This works for manually setting the bone position, so the jawBoneName and the joint transformation can't be that wrong.
I see no way to scale an entity with a hover effect.
The closest I can find is using HoverEffectComponent with a shader hover effect. Maybe I can change the scale with a Shader Graph, but I cannot figure out how.
I am wondering: is it possible to configure a 3D object to respond to the gaze of a person, e.g. change the colors of some parts of the 3D model where the person is looking, i.e. where the person's gaze lands on the surface of the model?
For example, if there is a 3D model of a Cool Dragon 🐉 in the physical space of a person, seen through the mixed-reality view of a Vision Pro, it would be really cool to change only the color, or make some specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro?
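If direct access to gaze turns out to be unavailable, a fallback I've read about is per-part system highlights (a sketch; it assumes the dragon model can be split into sub-entities, and HoverEffectComponent only produces a system-rendered highlight, the app never receives gaze coordinates):

import RealityKit

// Sketch: visionOS keeps gaze data private, but HoverEffectComponent lets the
// system highlight whichever entity the user is looking at. Assumes the model
// is split into one entity per body part.
func addGazeHighlight(to part: ModelEntity) {
    part.components.set(InputTargetComponent())
    part.generateCollisionShapes(recursive: false)
    part.components.set(HoverEffectComponent())
}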
Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.