Hi there,
Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad?
I'd like to see things like Compiled Shaders for my apps on tvOS.
I'm developing a game that supports GameKit turn-based matches. What I don't understand is this:
Is tapping the Game Center push notification the only way for the GKTurnBasedEventListener to trigger? What if someone misses the push message (swiping it away by accident, say) but still wants to join? Is there an inbox somewhere from which the pending invitations can be seen or fetched?
Also, a very old WWDC video (from 2013, which I think is the latest with information about turn-based matches) mentioned that the notification also includes a badge for the icon. However, I don't understand how to implement that. Is there any documentation for it?
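For reference, here's the kind of direct fetch I have in mind as a fallback; a minimal sketch, assuming the local player is already authenticated, and the filter is my guess at what counts as "pending":
import GameKit

// Sketch: fetch all of the local player's turn-based matches directly,
// without relying on a tapped push notification.
func loadPendingMatches() {
    GKTurnBasedMatch.loadMatches { matches, error in
        if let error {
            print("Failed to load matches: \(error)")
            return
        }
        // Matches where it's the local player's turn act as the "inbox".
        let pending = (matches ?? []).filter {
            $0.currentParticipant?.player == GKLocalPlayer.local
        }
        print("Pending matches: \(pending.map(\.matchID))")
    }
}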
View Layout
Add the following views in a view controller:
Label
View A, with a subview of the same size: MTKView A
View B, with a subview of the same size: MTKView B
Refresh Rates of Each View
The label view refreshes at 60fps (driven by CADisplayLink).
MTKView A and B refresh at 15fps.
MTKView Implementation Details
The corresponding CAMetalLayer's maximumDrawableCount is set to 2, switching it to double buffering.
The scheduling mechanism is modified; drawing is not driven by the internal loop but is done manually. The draw call is triggered immediately upon receiving a frame.
self.metalView.enableSetNeedsDisplay = NO;
self.metalView.paused = YES;
A new high-priority queue is created for drawing, instead of handling it on the main queue.
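Concretely, the manual scheduling looks like this (a sketch; the frame source and the queue label are placeholders):
import MetalKit

// Sketch of the manual draw scheduling described above.
let drawQueue = DispatchQueue(label: "render.draw", qos: .userInteractive)

func configure(_ metalView: MTKView) {
    metalView.enableSetNeedsDisplay = false
    metalView.isPaused = true  // disable the internal CADisplayLink-driven loop
    (metalView.layer as? CAMetalLayer)?.maximumDrawableCount = 2  // double buffering
}

// Called whenever a new frame arrives; triggers the draw immediately,
// off the main queue.
func onFrame(_ metalView: MTKView) {
    drawQueue.async {
        metalView.draw()
    }
}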
MTKView Latency Tracking
The GPU completion time T1 is observed through the addCompletedHandler callback of the CommandBuffer.
The presentation time T2 of the frame is observed through the addPresentedHandler callback of the currentDrawable in MTKView.
Testing shows that T2 - T1 > 16.6 ms (the Vsync period at 60 Hz). This means that after the GPU rendering in the MTKView finishes, the frame is not displayed at the next Vsync but only at the one after that.
I believe there is an extra 16.6ms of latency here, which I want to eliminate by adjusting the rendering mechanism.
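The measurement itself looks like this (a sketch; commandBuffer and drawable come from the per-frame draw code):
import Metal
import QuartzCore

// Sketch of the T1/T2 measurement described above.
func measure(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    var t1: CFTimeInterval = 0
    commandBuffer.addCompletedHandler { cb in
        t1 = cb.gpuEndTime  // T1: GPU finished rendering
    }
    drawable.addPresentedHandler { presented in
        let t2 = presented.presentedTime  // T2: frame actually on screen
        print("present latency: \((t2 - t1) * 1000) ms")  // consistently > 16.6 ms
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}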
Observation from Instruments
The Surface presentation in Instruments aligns with the above test results: after the Metal encoder finishes, the Surface in Display switches only at the second Vsync after it. See the image in the link for details.
Questions
My beginner's understanding is that once the MTKView's GPU rendering is finished, the frame should become visible at the next Vsync. That is not what I observe. Does a subview MTKView need to wait an extra Vsync cycle before it is drawn to the actual display buffer?
The label updates its text at 60fps, so the interface as a whole is displayed at 60fps. Is the content of the MTKView not synchronized with that same display pass?
Explanation of the Reasoning Behind Some MTKView Code Details
Changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering.
I trigger the draw method manually instead of using MTKView's own scheduling because the built-in scheduling is driven by CADisplayLink: a frame that arrives within a Vsync window has to wait for the next Vsync to trigger the draw operation, which introduces waiting latency.
I want to fade objects in and out, and while setting an entity's OpacityComponent works, animating it doesn't seem to do anything.
In the following code the second sphere should fade out, but it keeps its initial opacity. On the other hand, the animation that changes its transform works. What am I doing wrong?
class ViewController: NSViewController {
override func loadView() {
let arView = ARView(frame: NSScreen.main!.frame)
let anchor = AnchorEntity(.world(transform: matrix_identity_float4x4))
arView.scene.addAnchor(anchor)
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.5))
anchor.addChild(sphere)
sphere.components.set(OpacityComponent(opacity: 0.1))
let sphere2 = ModelEntity(mesh: .generateSphere(radius: 0.5))
sphere2.position = .init(x: 0.2, y: 0, z: 0)
anchor.addChild(sphere2)
sphere2.components.set(OpacityComponent(opacity: 0.1))
sphere2.playAnimation(try! AnimationResource.makeActionAnimation(for: FromToByAction(to: 0, timing: .linear), duration: 1, bindTarget: .opacity))
sphere2.playAnimation(try! AnimationResource.makeActionAnimation(for: FromToByAction(to: Transform(translation: SIMD3(x: 0.1, y: 0, z: 0)), timing: .linear), duration: 1, bindTarget: .transform))
view = arView
}
}
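For completeness, I also tried the non-action animation API; a sketch, assuming FromToByAnimation accepts the .opacity bind target the same way:
// Sketch: animating opacity with FromToByAnimation instead of FromToByAction.
let fade = FromToByAnimation<Float>(
    from: 0.1,  // matches the initial OpacityComponent value
    to: 0.0,
    duration: 1,
    timing: .linear,
    bindTarget: .opacity
)
if let fadeResource = try? AnimationResource.generate(with: fade) {
    sphere2.playAnimation(fadeResource)
}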
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I've had no luck making it work so far and need some guidance.
The image file is stored in the assets as shown below:
And here is the source code:
import SwiftUI
import RealityKit
import RealityKitContent
struct AnchorView: View {
@State var imageEntity: Entity = {
let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
return anchorEntity
}()
var body: some View {
RealityView { content in
do {
// Add the initial RealityKit content
let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
imageEntity.addChild(scene)
content.add(imageEntity)
} catch {
print("Error occurred while adding RealityView content: \(error)")
}
}
}
}
I am rewriting an unfinished SceneKit project in RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality?
Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
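For reference, this is the SceneKit fog setup I'm trying to replicate; four properties on SCNScene and fog is done:
import SceneKit
import AppKit

// The SceneKit fog configuration referred to above.
let scene = SCNScene()
scene.fogStartDistance = 5      // fog begins 5 units from the camera
scene.fogEndDistance = 30       // fully fogged at 30 units
scene.fogDensityExponent = 2    // quadratic falloff between start and end
scene.fogColor = NSColor.gray   // NSColor on macOS, UIColor on iOS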
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away.
The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors.
Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource uses MTLHazardTrackingModeTracked; on the other hand, MTLFence is meant to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
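For concreteness, this is the fence workaround that makes the artifact go away (a sketch; device, commandBuffer, renderPassDescriptor, and texture come from the surrounding renderer code):
// Signal after the render pass that writes the texture,
// wait in the compute pass that reads it.
let fence = device.makeFence()!

let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
// ... draw into texture ...
renderEncoder.updateFence(fence, after: .fragment)
renderEncoder.endEncoding()

let computeEncoder = commandBuffer.makeComputeCommandEncoder()!
computeEncoder.waitForFence(fence)
computeEncoder.useResource(texture, usage: .read)  // tracked resource; useResource alone showed artifacts
// ... dispatch threadgroups ...
computeEncoder.endEncoding()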
I am trying to learn the new Metal Performance Primitives APIs. I have added the MetalPerformancePrimitives framework and included the header in my shader code as per the documentation:
#include <MetalPerformancePrimitives/MetalPerformancePrimitives.h>
Unfortunately, Xcode complains that the header cannot be found. How do I include it properly?
I am using Xcode 26 on Tahoe. The MetalPerformancePrimitives framework is present on my machine, and I can inspect its headers in the filesystem.
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text.
However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.
//
// TVView.swift
// Mistery
//
// Created by Joje on 31/07/25.
//
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation
final class TVView: UIView {
// text animation properties
private var textAnimationTimer: Timer?
private var fullTextToAnimate: String = ""
private var currentCharIndex: Int = 0
// static video properties
private var player: AVQueuePlayer?
private var playerLayer: AVPlayerLayer?
private var playerLooper: AVPlayerLooper?
var onNextButtonTap: () -> Void = {}
// MARK: - Subviews
// TV image
private(set) lazy var tvImageView: UIImageView = {
let imageView = UIImageView()
imageView.translatesAutoresizingMaskIntoConstraints = false
imageView.image = UIImage(named: "tvFinal")
imageView.contentMode = .scaleAspectFit
return imageView
}()
// text that scrolls inside the TV
private(set) lazy var storyLabel: UILabel = {
let label = UILabel()
label.translatesAutoresizingMaskIntoConstraints = false
//label.backgroundColor = .gray
label.textColor = .red
label.font = UIFont(name: "MeltedMonster", size: 30)
label.textAlignment = .left
label.numberOfLines = 0
label.text = ""
return label
}()
private(set) lazy var nextButton: UIButton = {
let button = UIButton(type: .system)
button.translatesAutoresizingMaskIntoConstraints = false
//button.backgroundColor = .darkGray
button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
return button
}()
// MARK: - Lifecycle
override init(frame: CGRect) {
super.init(frame: frame)
backgroundColor = .black
setupVideoPlayer()
addSubviews()
setupConstraints()
}
override func layoutSubviews() {
super.layoutSubviews()
playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
setupFisheyeEffect()
}
private func setupFisheyeEffect() {
// create the filter
guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
storyLabel.layer.shouldRasterize = true
storyLabel.layer.rasterizationScale = UIScreen.main.scale
// set the parameters
filter.setDefaults()
// center of the effect
let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
filter.setValue(center, forKey: kCIInputCenterKey)
// distortion radius
filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
// distortion intensity
filter.setValue(7, forKey: kCIInputScaleKey)
storyLabel.layer.filters = [filter]
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
// MARK: - Button actions
@objc private func didPressNextButton() {
onNextButtonTap()
}
@objc private func animateNextCharacter() {
guard currentCharIndex < fullTextToAnimate.count else {
textAnimationTimer?.invalidate()
return
}
let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
let partialText = String(fullTextToAnimate[...currentTextIndex])
storyLabel.text = partialText
currentCharIndex += 1
}
public func updateStoryText(with text: String) {
textAnimationTimer?.invalidate()
storyLabel.text = ""
fullTextToAnimate = text
currentCharIndex = 0
textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
}
// MARK: - Setup methods
private func setupVideoPlayer() {
guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
print("Erro: Não foi possível encontrar o arquivo de vídeo static-video.mov")
return
}
let playerItem = AVPlayerItem(url: videoURL)
player = AVQueuePlayer(playerItem: playerItem)
// LINE WITH A POSSIBLE ERROR
playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
playerLayer = AVPlayerLayer(player: player)
playerLayer?.videoGravity = .resizeAspectFill
if let layer = playerLayer {
self.layer.addSublayer(layer)
}
player?.play()
}
private func addSubviews() {
self.addSubview(storyLabel)
self.addSubview(tvImageView)
self.addSubview(nextButton)
}
private func setupConstraints() {
NSLayoutConstraint.activate([
// TV Image
tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
// TV Text
storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
//TV Button
nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
nextButton.widthAnchor.constraint(equalToConstant: 100),
nextButton.heightAnchor.constraint(equalToConstant: 160)
])
}
}
#Preview {
ViewController()
}
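One thing I noticed while debugging: as far as I can tell, CALayer.filters has no effect on iOS (it is honored on macOS), so my current plan is to rasterize the label and filter the bitmap instead. A sketch of that fallback, where the CIContext and the image-view plumbing are my assumptions:
// Fallback sketch: rasterize the label, apply CIBumpDistortion with a CIContext,
// and display the result in a UIImageView instead of setting layer.filters.
private let ciContext = CIContext()

func distortedImage(of label: UILabel) -> UIImage? {
    // 1. Render the label into a UIImage.
    let renderer = UIGraphicsImageRenderer(bounds: label.bounds)
    let rendered = renderer.image { context in
        label.layer.render(in: context.cgContext)
    }
    guard let input = CIImage(image: rendered) else { return nil }

    // 2. Apply the same bump distortion parameters as above.
    let filter = CIFilter.bumpDistortion()
    filter.inputImage = input
    filter.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    filter.radius = Float(input.extent.width)
    filter.scale = 7

    // 3. Render back to a UIImage for display.
    guard let output = filter.outputImage,
          let cgImage = ciContext.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}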
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not.
I already checked this post, which points to this seemingly outdated sample code
I assume that example is deprecated in favour of the now built-in GestureComponent.
Nonetheless, there are no compiler warnings or errors, it just fails silently.
TapGesture, LongPressGesture, MagnifyGesture, RotateGesture all work, so this feels like an oversight.
RealityView { content in
let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
testEntity.position = SIMD3<Float>(0,0,-1)
testEntity.components.set(InputTargetComponent())
testEntity.components.set(CollisionComponent(
shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
))
let testGesture = TapGesture()
.onEnded { value in
print("Tapped")
}
testEntity.components.set(GestureComponent(testGesture))
let dragGesture = DragGesture()
.onEnded { value in
print("Dragged")
}
testEntity.components.set(GestureComponent(dragGesture))
content.add(testEntity)
}
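For what it's worth, the drag does fire when the gesture is attached at the SwiftUI level instead, which is my current workaround (sketch):
// Workaround sketch: attach the drag gesture to the RealityView itself,
// targeted at entities, instead of using GestureComponent.
RealityView { content in
    // ... same entity setup as above ...
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()  // still requires InputTargetComponent + CollisionComponent
        .onEnded { value in
            print("Dragged \(value.entity.name)")
        }
)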
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it.
On iOS 16, the following code works fine:
import ImageIO
import UIKit
func getUIImage(from imageData: [UInt8]) -> UIImage? {
let data = Data(imageData)
guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
print("Failed to decode JP2 image!")
return nil
}
return UIImage(cgImage: cgImage)
}
However, on iOS 18, this fails with errors like:
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?
Additional Info:
The UInt8 array has a valid JPEG2000 signature (0x00 0x00 0x00 0x0C 0x6A 0x50 ...).
The image works on iOS 16 but fails on iOS 18.
Tested on iPhone running iOS 18.0 beta.
Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
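In case it helps others reproduce this: you can check at runtime whether ImageIO still advertises JPEG2000 as a decodable type (a sketch):
import Foundation
import ImageIO

// Sketch: list the UTIs CGImageSource can decode and look for JPEG2000.
let identifiers = CGImageSourceCopyTypeIdentifiers() as NSArray as? [String] ?? []
print("JP2 supported: \(identifiers.contains("public.jpeg-2000"))")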
I didn't find a suggestion box on Swift's website so I'll post it here.
Swift Charts is great but limited. I need more data on a single chart; candlestick and OHLC chart types would be an excellent addition. Hopefully, influencers from Apple can make that happen.
Thanks.
I work on a Qt/QML app that uses the Esri Maps SDK for Qt and is deployed to both Windows and iPads. With a recent iPadOS upgrade to 26.1, many iPad users are reporting that the application freezes after panning and/or identifying features in the map. It runs fine for our Windows users.
I was able to reproduce this and grabbed the following error messages when the freeze happens:
IOGPUMetalError: Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
IOGPUMetalError: Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)
Environment:
Qt 6.5.4 (Qt for iOS)
Esri Maps SDK for Qt 200.3
iPadOS 26.1
Because it appears to be a Metal error, I tried using OpenGL (Qt offers a way to easily set the target graphics API):
QQuickWindow::setGraphicsApi(QSGRendererInterface::GraphicsApi::OpenGL)
Which worked! No more freezing. But I've seen many posts saying that Apple has deprecated OpenGL ES. It still seems to be available in iPadOS 26.1; if so, will this fix just cause problems with a future iPadOS update?
Any other suggestions to address this issue? Upgrading our versions of Qt and the Esri SDK is not an option for us right now. We are in the process of upgrading the full application, but that is a year or two out, so we just need a fix to buy us some time.
Appreciate any thoughts/insights....
Hello there,
I'm having trouble matching what I see in the scenekit editor and the output of the resulting scene in a scnview.
For a glitter effect I have set a high diffuse intensity value, which looks fine in the editor, but when running the game the colors are much darker. To see if the intensity value is merely capped, I set the same multiplier on the hat below, but there it is blown out, which looks to me like some grading is going on.
I have tried switching on HDR rendering, but that didn't make a difference.
I tried disabling linear rendering, and that simply made everything darker still, which I expected.
Does someone have an idea what else this could be? What rendering is the SceneKit editor using, and how can I match it?
Interestingly when I take a screenshot of the editor window for this post, the image is also blown out... what is going on? :)
Thanks so much for any pointers,
Seb
Hi everyone,
I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).
The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that the weights are being updated, but no visual change is visible.
The exact same blend shape animation works perfectly when no skeletal animation is playing, but as soon as one starts, the shape keys stop animating. In SceneKit the same model works as expected: shape keys animate during animation playback, but not in RealityKit.
Here’s the test project on GitHub that demonstrates the issue clearly:
https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing.
Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates?
Thanks in advance for any insights.
Hello,
When testing GameKit "Manage Game Progress" in Xcode 26:
On iOS devices, achievements, leaderboards, and party code data display and work correctly.
On macOS devices, none of these data appear in "Manage Game Progress."
Is this a known issue with macOS GameKit, or is there a limitation compared to iOS?
If it is not a bug, is there any additional configuration needed to make achievements and leaderboards visible on macOS?
I also included the GameKit bundle in my macOS app and enabled Enable Debug Mode in GameKit Configuration in the scheme options.
Thank you.
Hi Apple team,
Game Mode was introduced in iOS 18. To activate Game Mode, an app must include specific key-value pairs in its *.plist and be categorized as a "Game" on the App Store.
My app (https://apps.apple.com/us/app/voidlink/id6747717070) works primarily as a self-hosted game streaming (PC->iPhone/iPad) client. Game Mode provides clear benefits in terms of latency and frame rate stability, but it can currently only be activated when running via Xcode or TestFlight.
I am an individual iOS developer based in China, where an additional government license is required for apps to be listed under the "Games" category on the App Store. Obtaining such a license is very difficult for independent developers, so my app has been categorized under "Utilities" instead. (If I moved the app to the Games category, it would disappear from the Chinese App Store immediately.)
Expectation / Suggestion:
Please consider making Game Mode available as a local, user-controllable option on iOS 18/26+, for example through a system "App Pool" where users choose which apps Game Mode is enabled for, regardless of App Store category.
This would greatly benefit use cases like streaming clients, benchmarking tools, and remote play utilities, without requiring developers to reclassify their apps as "Games" on the App Store.
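For reference, the plist opt-in I'm referring to is, as far as I understand it, the Boolean GCSupportsGameMode key, which only takes effect when the app's App Store category is Games:
<!-- Info.plist fragment (my understanding of the Game Mode opt-in) -->
<key>GCSupportsGameMode</key>
<true/>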
During editing in Pages (or Word) I am getting these glitches (see attachment).
It started after the last update, to macOS 26.3 (beta).
I also removed two recent installs (the BlackHole audio driver and kDrive/Infomaniak), but the trouble is still there.
27" iMac 2020 (Intel)
i7 3.8 GHz
AMD Radeon Pro 5500 XT 8 GB
24 GB RAM
macOS Tahoe 26.3 (beta)
I tried restarting in safe mode and checked fonts. I asked the assistant for a solution, but no luck.
Thanks for any advice, Pieter (not a developer, so please keep it simple 🙏🏻)
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive 'file:///System/Library/PrivateFrameworks/IconRendering.framework/Resources/binary.metallib'
available slices: applegpu_g13g, applegpu_g13s, applegpu_g13d, applegpu_g14g, applegpu_g14s, applegpu_g14d, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16g, applegpu_g16s, applegpu_g17g, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16s
Is this related to the performance of applications on Intel Macs in macOS 26.2?
I can't create any breakpoint in Xcode after I upgraded to macOS 15.4.
macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)
My app works well without any breakpoints.
But if I create any breakpoint it shows me this:
Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.