Hello there,
I'm having trouble matching what I see in the SceneKit editor with the output of the resulting scene in an SCNView.
For a glitter effect I have set a high value on the diffuse intensity, which looks fine in the editor, but when running the game the colors are much darker. To check whether the intensity value is merely capped, I set the same multiplier on the hat below, but it is blown out, which looks to me like there is some grading going on.
I have tried switching on HDR rendering, but that didn't make a difference.
I tried disabling linear rendering, and that simply made everything darker still, which I expected.
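For reference, here is roughly how those two settings are being toggled (a minimal sketch; scnView is assumed to be the SCNView in question, and linear rendering is controlled by an Info.plist key rather than code):

// Enable HDR tone mapping on the camera that drives the SCNView.
if let camera = scnView.pointOfView?.camera {
    camera.wantsHDR = true
    camera.wantsExposureAdaptation = false  // keep exposure fixed while comparing
}

// Linear (vs. gamma) rendering is toggled in Info.plist, not code:
// set SCNDisableLinearSpaceRendering to YES to disable it.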
Does anyone have an idea what else this could be? What rendering pipeline is the SceneKit editor using, and how can I match it?
Interestingly, when I take a screenshot of the editor window for this post, the image is also blown out... what is going on? :)
Thanks so much for any pointers,
Seb
I'm new here, so I don't know which topic this question belongs to... sorry about that!
I watched the WWDC stream and I am really interested in this feature; I'm wondering whether it could be used in my apps.
I looked up the documentation, but as far as I can tell it only supports visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. But I have had no luck making it work so far and need some guidance to move on.
The image file is stored in the assets as shown below:
And here is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    // Anchor that tracks the reference image from the "AR Resources" group.
    @State var imageEntity: Entity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content to the image anchor.
                // Note: `try await` (not `try?`) so the catch block is reachable.
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurred when adding RealityView content: \(error)")
            }
        }
    }
}
Hi everyone,
I’m developing an iPad app that will be running continuously with the screen always on — similar to a restaurant ordering system.
I understand that some of the newer iPad Pro models are equipped with OLED displays. I'm concerned about the potential risk of screen burn-in due to static UI elements being displayed for extended periods.
Does burn-in occur on the OLED iPad Pro models under such usage?
Would it be advisable to implement a screen saver or periodically animate/change parts of the UI to prevent this?
Any insights or best practices would be greatly appreciated. Thank you!
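In case the mitigation route helps, below is a minimal sketch of a periodic one-point pixel shift, a common OLED-friendly technique (the container view, interval, and offsets are illustrative assumptions, not Apple-documented values):

import UIKit

/// Nudges a static UI container by one point on a timer so no pixel
/// displays the exact same content indefinitely.
final class PixelShifter {
    private var timer: Timer?
    private let offsets: [CGPoint] = [.zero, CGPoint(x: 1, y: 0),
                                      CGPoint(x: 1, y: 1), CGPoint(x: 0, y: 1)]
    private var index = 0

    func start(shifting view: UIView, every interval: TimeInterval = 60) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self, weak view] _ in
            guard let self, let view else { return }
            self.index = (self.index + 1) % self.offsets.count
            let offset = self.offsets[self.index]
            // Animate slowly so the shift is imperceptible to users.
            UIView.animate(withDuration: 0.5) {
                view.transform = CGAffineTransform(translationX: offset.x, y: offset.y)
            }
        }
    }

    func stop() { timer?.invalidate() }
}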
Topic: Graphics & Games
SubTopic: General
My app is being rejected and all I'm being told is that it is spam.
I've tried improving various aspects of the game, but I just receive the same copy and paste rejection message each time.
I have no idea whether I'm moving in the right direction or what part of my game needs to be changed or improved. Is there a game quality benchmark document or some other resource I can use to better understand why my game is being rejected and how to bring it to a level that meets Apple's standards?
I'm implementing an optimized matmul on Metal: https://github.com/crynux-ai/metal-matmul/blob/main/metal/1_shared_mem.metal
I notice that performance is significantly different with different threadgroup memory set in
[computeEncoder setThreadgroupMemoryLength]
All other lines are exactly same, the only difference is this parameter.
Matmul performance is roughly 250 GFLOPS if I set 32768 (the max bytes allowed on this M1 Max),
but 400 GFLOPS if I set 8192.
Why does this happen? How can I optimize it?
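For anyone comparing numbers: a plausible explanation is occupancy. The threadgroup memory requested per threadgroup limits how many threadgroups can be resident on a GPU core at once, so asking for the 32 KB maximum can leave only one threadgroup in flight and little latency hiding. A small diagnostic sketch, assuming device and pipeline already exist:

// Inspect the occupancy-related limits involved.
print("Max threadgroup memory (device):", device.maxThreadgroupMemoryLength)
print("Static threadgroup memory (pipeline):", pipeline.staticThreadgroupMemoryLength)
print("Max threads per threadgroup:", pipeline.maxTotalThreadsPerThreadgroup)
print("SIMD width:", pipeline.threadExecutionWidth)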
Topic: Graphics & Games
SubTopic: Metal
Has anyone experienced convexCast causing a crash, and what might be behind it?
Here's the call stack:
My app is running Compute Shaders that use non-uniform thread groups.
When I run the app in the debugger with a simulator target, the app crashes on encoder.dispatchThreads and the error message is:
Dispatch Threads with Non-Uniform Threadgroup Size is not supported on this device.
Previously the log output states that:
Metal Shader Validation is unsupported for Simulator.
However:
When I stop the debugger and just run the app in the simulator without the debugger attached, the app just runs fine and does not crash.
The SwiftUI Preview that also triggers the Compute Shader when preparing data also just runs fine without a crash.
I can run and debug on a real device no problem - I just don't have all sizes available.
Is there anything I need to check in my lldb/simulator configuration? It obviously does work, just the debugger cannot really deal with it?
Any input would be nice, as this really slows me down: I have to be extremely careful when debugging on the simulator.
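In case it helps, a guarded dispatch is one way to keep Simulator runs alive (a sketch; the fallback assumes the kernel bounds-checks its thread position, since uniform dispatch can over-dispatch):

// Non-uniform threadgroup sizes require GPU family Apple4 or later;
// the Simulator's Metal device may not report support.
if device.supportsFamily(.apple4) {
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadgroupSize)
} else {
    // Round the grid up to whole threadgroups and dispatch uniformly.
    let groups = MTLSize(
        width: (gridSize.width + threadgroupSize.width - 1) / threadgroupSize.width,
        height: (gridSize.height + threadgroupSize.height - 1) / threadgroupSize.height,
        depth: (gridSize.depth + threadgroupSize.depth - 1) / threadgroupSize.depth)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadgroupSize)
}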
Hello!
We are working on a real-time 2-player online game targeting multiple Apple devices.
The following issue only occurs on tvOS:
When selecting matchmaking to connect with another random player, the native Game Center interface opens and begins the matchmaking process.
Almost immediately after clicking "start", the following log appears in the console, and the matchmaking screen remains indefinitely without completing:
Timeout while starting matching with request: <GKMatchRequestInternal 0x30d62f690> {
defaultNumberOfPlayers : 0
isLateJoin : 0
localPlayerID : U:bea182d69b85f0839e3958742fbc4609
matchType : 0
maxPlayers : 2
minPlayers : 2
playerAttributes : 4294967295
playerGroup : 1
preloadedMatch : 0
recipientPlayerIDs : <__NSArrayM 0x3034ed5c0> {}
recipients : <__NSArrayM 0x3034ee280> {}
restrictToAutomatch : 0
version : 1
archivedSharePlayInviteeTokensFromProgrammaticInvite, inviteMessage, localizableInviteMessage, messagesBasedRecipients, properties, queueName, recipientProperties, rid, sessionToken : (null)
} . Error: (null)
However, as shown in the code snippet below, the task does not complete when the log appears. But when we manually cancel the matchmaking process, the "User cancel" log is correctly triggered.
var gkMatchRequest = GKMatchRequest.Init();
gkMatchRequest.MinPlayers = 2;
gkMatchRequest.MaxPlayers = 2;
var matchRequestTask = GKMatchmakerViewController.Request(gkMatchRequest);
matchRequestTask.ContinueWith(t => { Debug.LogException(t.Exception); }, TaskContinuationOptions.OnlyOnFaulted);
matchRequestTask.ContinueWith(t => { Debug.Log("User cancel"); }, TaskContinuationOptions.OnlyOnCanceled);
matchRequestTask.ContinueWith(t => { Debug.Log("Success"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
We have tested this on multiple Apple TV devices and network types (Wi-Fi, 5G, Ethernet), but we consistently encounter this bug along with the same log message.
Could you please help us understand or resolve this issue?
Thank you.
I’m trying to play an Apple Immersive Video in the .aivu format using VideoPlayerComponent, following the official documentation found here:
https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent
Here is a simplified version of the code I'm running in another application:
import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()

            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo

            player.play()

            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}
Full code is here:
https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample
But the video does not play in my project even though the file is correct (it can be played in Apple Immersive Video Utility), and I’m getting this error when the app crashes:
App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
The video I’m using is the official sample, which can be found here, but I have tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but something on the RealityKit side:
https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video
Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run it. Look at the logs.
- All code can be found in ImmersiveView.swift.
- The sample file is included in the project.
Expected results:
If I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.
Am I doing something wrong in the code? I'm basically following the documentation here.
Feedback ticket: FB19971306
Hello, I am quite new to using the Metal API, and I was wondering about something: if you know that, once a pipeline has been created, you will never need to create another one with the same shaders, is it safe to release the library that was used to reference those shaders? I'm only asking because this is possible in other APIs, but Apple never mentions (as far as I have found) whether this is safe to do.
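For concreteness, this is the pattern in question, sketched in Swift where ARC releases the library once it goes out of scope (the kernel name is hypothetical, and the comment reflects the question being asked, not an official guarantee):

// Scope the library so ARC releases it after the pipeline is built.
let pipeline: MTLComputePipelineState = try {
    let library = device.makeDefaultLibrary()!
    let function = library.makeFunction(name: "myKernel")!  // hypothetical kernel name
    return try device.makeComputePipelineState(function: function)
}()
// `library` and `function` are out of scope here; only `pipeline` survives.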
Topic: Graphics & Games
SubTopic: Metal
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text.
However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.
//
// TVView.swift
// Mistery
//
// Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {
    // text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // static-video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // the TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // the text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale

        // set the parameters
        filter.setDefaults()
        // center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)

        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH A POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
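A note for anyone hitting the same wall: Apple's documentation for CALayer.filters states that the property is not supported on layers in iOS, so attaching a CIFilter to storyLabel.layer will silently do nothing, which matches the behavior described above. A workaround sketch, rendering the label offscreen and filtering the bitmap (the function name and the idea of showing the result in a UIImageView are assumptions, not part of the original code):

import CoreImage.CIFilterBuiltins
import UIKit

// Snapshot the label, distort the bitmap with Core Image, return the result.
func makeDistortedImage(from label: UILabel) -> UIImage? {
    // 1. Render the label into a UIImage.
    let renderer = UIGraphicsImageRenderer(bounds: label.bounds)
    let snapshot = renderer.image { context in
        label.layer.render(in: context.cgContext)
    }
    guard let input = CIImage(image: snapshot) else { return nil }

    // 2. Apply the bump distortion.
    let filter = CIFilter.bumpDistortion()
    filter.inputImage = input
    filter.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    filter.radius = Float(input.extent.width)
    filter.scale = 0.7
    guard let output = filter.outputImage else { return nil }

    // 3. Render back to a UIImage (reuse one CIContext in production code).
    let ciContext = CIContext()
    guard let cgImage = ciContext.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

Displaying the returned image in a UIImageView layered over (or in place of) the live label should reproduce the intended magnifying effect.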
I have several games on the App Store which are set up as "For Kids", which means these games cannot have any way to access the outside world, including the App Store. No problem. These games have worked fine for years, until iOS 26.
Now, all my updates are being rejected because the new version of Game Center running on iPadOS 26 has a way for people to exit the game and go to the App Store. I have no control over this since it's built into Game Center, and the app review folks want me to put a "parental gate" on it, but there's no way to do that because... well... it's in Game Center, not my code.
So, I'm unable to update my apps because of this. Presumably, the existing versions on the App Store still do this exact same thing, so my update isn't going to make any difference. Does anyone know of a way to make that crap at the top go away so this isn't an issue?
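If the element in question is the floating Game Center access point, one thing worth trying is deactivating it (a sketch; I can't confirm this covers every Game Center entry point or that it satisfies App Review):

import GameKit

// Hide the floating Game Center widget that can deep-link out of the app.
GKAccessPoint.shared.isActive = false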
Topic: Graphics & Games
SubTopic: GameKit
Hello,
We are working on a real-time 2-player online game targeting multiple Apple devices. The following issue only occurs on tvOS:
When selecting matchmaking to connect with another player, the native Game Center interface opens and begins the matchmaking process.
Almost immediately, the following log appears in the console, and the matchmaking screen remains indefinitely without completing:
Timeout while starting matching with request: <GKMatchRequestInternal 0x30d62f690> {
defaultNumberOfPlayers : 0
isLateJoin : 0
localPlayerID : U:bea182d69b85f0839e3958742fbc4609
matchType : 0
maxPlayers : 2
minPlayers : 2
playerAttributes : 4294967295
playerGroup : 1
preloadedMatch : 0
recipientPlayerIDs : <__NSArrayM 0x3034ed5c0> {}
recipients : <__NSArrayM 0x3034ee280> {}
restrictToAutomatch : 0
version : 1
archivedSharePlayInviteeTokensFromProgrammaticInvite, inviteMessage, localizableInviteMessage, messagesBasedRecipients, properties, queueName, recipientProperties, rid, sessionToken : (null)
} . Error: (null)
However, the task does not complete when the log appears (our Debug.Log callbacks are never called).
But if we manually cancel the matchmaking process, the "User cancel" log is correctly triggered.
Here is a code snippet for the request:
var gkMatchRequest = GKMatchRequest.Init();
gkMatchRequest.MinPlayers = 2;
gkMatchRequest.MaxPlayers = 2;
var matchRequestTask = GKMatchmakerViewController.Request(gkMatchRequest);
matchRequestTask.ContinueWith(t => { Debug.LogException(t.Exception); }, TaskContinuationOptions.OnlyOnFaulted);
matchRequestTask.ContinueWith(t => { Debug.LogInfo("User cancel"); }, TaskContinuationOptions.OnlyOnCanceled);
matchRequestTask.ContinueWith(t => { Debug.LogInfo("Success"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
We have tested this on multiple devices and network types (Wi-Fi, 5G, Ethernet), but we consistently encounter this bug along with the same log message.
Could you please help us understand or resolve this issue?
Thank you.
I'm using a class with tangents to render with RealityKit on visionOS, but on visionOS 26 it causes a crash in the app, and there is no documentation on how to implement cp_drawable_compute_projection. I have tried a few options, but without success. Could you help me implement it?
The relevant part of the code is:
return drawable.views.map { view in
    let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
    let projectionMatrix = ProjectiveTransform3D(
        leftTangent: Double(view.tangents[0]),
        rightTangent: Double(view.tangents[1]),
        topTangent: Double(view.tangents[2]),
        bottomTangent: Double(view.tangents[3]),
        nearZ: Double(drawable.depthRange.y),
        farZ: Double(drawable.depthRange.x),
        reverseZ: true
    )
    let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                           y: Int(view.textureMap.viewport.height))
    return ModelRendererViewportDescriptor(
        viewport: view.textureMap.viewport,
        projectionMatrix: .init(projectionMatrix),
        viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix * scalingMatrix * commonUpCalibration,
        screenSize: screenSize)
}
I can't seem to get the Metal HUD to display value ranges (pre-macOS 26 Tahoe). The documented environment variable MTL_HUD_SHOW_VALUE_RANGE doesn't seem to work.
https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance#Display-the-value-range-of-metrics
Anyone having any luck?
Topic: Graphics & Games
SubTopic: Metal
I need a MeshResource from ModelEntity to generate a box collider, but ModelEntity fails to load USDZ files from the Reality Composer Pro (RCP) bundle.
This code works for loading an Entity:
// Successfully loads as generic Entity
previewEntity = try await Entity(named: fileName, in: realityKitContentBundle)
But this fails when trying to load as ModelEntity:
// Fails to load as ModelEntity
modelEntity = try await ModelEntity(named: fileName, in: realityKitContentBundle)
I found this thread mentioning:
"You'll likely go from USDZ to Entity which contains a MeshResource when you load/init the USDZ file."
But it doesn't explain how to actually extract the MeshResource. Could anyone advise:
How to properly load USDZ files as ModelEntity from RCP bundles?
How to extract MeshResource from a successfully loaded Entity?
Alternative approaches to generate box colliders if direct extraction isn't possible?
Sample code for extraction or workarounds would be greatly appreciated!
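A sketch of the extraction route, in case it is useful (it assumes the RCP scene has a descendant carrying a ModelComponent; firstModelComponent is a hypothetical helper, not an API):

import RealityKit
import RealityKitContent

// Find the first ModelComponent anywhere in an entity hierarchy.
func firstModelComponent(in entity: Entity) -> ModelComponent? {
    if let model = entity.components[ModelComponent.self] { return model }
    for child in entity.children {
        if let model = firstModelComponent(in: child) { return model }
    }
    return nil
}

// Load as a generic Entity, then pull the MeshResource out of it.
let entity = try await Entity(named: fileName, in: realityKitContentBundle)
if let mesh = firstModelComponent(in: entity)?.mesh {
    print("Extracted mesh with bounds:", mesh.bounds)
}

// Or skip the mesh entirely and derive a box collider from the visual bounds:
let bounds = entity.visualBounds(relativeTo: nil)
entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds.extents)]))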
Hi there,
I've discovered an issue with gesture handling in RealityKit when setting the camera’s fieldOfViewOrientation to horizontal. For instance, if I render a simple box at the center of the view with a collision shape that exactly matches its dimensions, the actual hit area behaves as if it's smaller than the box. Additionally, when attempting to drag the box away from the center, the hit area appears misaligned—offset slightly towards the center.
Since the default fieldOfViewOrientation is vertical and everything works as expected in that mode, it seems that the gesture system might be assuming a vertical FOV. Given that the API allows setting it to horizontal, perhaps gestures should function correctly regardless of the orientation?
Thank you!
Topic: Graphics & Games
SubTopic: RealityKit
I want to display the screen refresh rate on my iPhone 14 Pro Max, but the App Store apps "Floating Clock", "Floating Clock Premium", and "Screen Renewal Rate - RefreshRate Realtime" can each only show two values, 60 Hz and 120 Hz. I'm wondering whether the rate can be displayed at finer granularity, such as 1 Hz or 10 Hz steps, rather than just two or three levels. Does anyone know how to do it?
Topic: Graphics & Games
SubTopic: General
Dear Apple Developer Support,
I hope this message finds you well. I am a game developer looking to integrate Game Center into our game. Before proceeding, I would like to understand the analytical capabilities available post-integration. Specifically, I am interested in tracking detailed metrics such as:
The number of new player downloads driven by Game Center features (e.g., friend challenges, leaderboards, or achievements).
Data on user engagement and conversions originating from Game Center interactions.
I have explored App Store Connect’s App Analytics but would appreciate clarification on whether Game Center-specific data (e.g., referrals from challenges or social features) is available to measure growth and optimize our strategies.
If such data is accessible, could you guide me on how to view it? If not, are there alternative methods or plans to incorporate this in the future?
Thank you for your time and assistance. I look forward to your response.
Best regards,
Bella