Hi!
I hope everyone reading is doing well. I am developing a reinforcement learning agent that sends key presses to a window, which I've been doing by posting virtual key codes with CGEventCreateKeyboardEvent per the docs. I pass no event source when I create the keyboard events.
However, when many keyboard events happen in quick succession (<250 ms, with the keys 'q', 'w', 'e', 'r', 'f', 'd', 's', space, and the arrow keys), the enable-dictation popup or the Fn-key emoji popup appears for seemingly no reason. I have verified that I am using the correct key codes for these presses, so I am wondering what else could cause this to happen. It is as if I were choosing to press F5 or Fn. It does not happen when 'a' is the only key being pressed in quick succession.
One thing that I have not been able to find easily is the key code input for dictation or for the Fn button. Do those codes somehow overlap with the ones I'm sending?
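For reference, here is roughly what I'm doing (a minimal sketch; the key code is the standard ANSI value for 'q', and I currently pass nil as the event source, as described above):
import CoreGraphics

// Sketch of how I post a single key press. 12 is kVK_ANSI_Q from
// Carbon's HIToolbox; I use the equivalent codes for the other keys.
func tap(keyCode: CGKeyCode) {
    let source: CGEventSource? = nil // no event source
    let down = CGEvent(keyboardEventSource: source, virtualKey: keyCode, keyDown: true)
    let up = CGEvent(keyboardEventSource: source, virtualKey: keyCode, keyDown: false)
    down?.post(tap: .cghidEventTap)
    up?.post(tap: .cghidEventTap)
}

tap(keyCode: 12) // 'q'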
Thank you all for the help!
Hunter
In my turn-based game, I receive the GKLocalPlayerListener event receivedTurnEventForMatch and decode the match.matchData. On occasion, the matchData is clearly stale, left over from the previous turn. If I bring up the matchmaker view controller and select that same match, the data is not stale, so it's not a matter of failing to call endTurn.
I have tried both loadMatchWithID and loadMatchesWithCompletionHandler after receiving receivedTurnEventForMatch, but the data is still stale.
Advice?
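For reference, roughly what I have (a sketch; the class name is a stand-in for my own listener type):
import GameKit

final class MatchListener: NSObject, GKLocalPlayerListener {
    func player(_ player: GKPlayer, receivedTurnEventFor match: GKTurnBasedMatch, didBecomeActive: Bool) {
        // match.matchData here is occasionally the previous turn's payload,
        // so I re-fetch the match by ID:
        GKTurnBasedMatch.load(withID: match.matchID) { freshMatch, error in
            guard let data = freshMatch?.matchData else { return }
            // ...decode data; it is still stale.
        }
    }
}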
Topic:
Graphics & Games
SubTopic:
GameKit
Can't seem to get the Metal HUD to display value ranges (pre macOS 26 Tahoe). The documented environment variable MTL_HUD_SHOW_VALUE_RANGE doesn't seem to work.
https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance#Display-the-value-range-of-metrics
Anyone having any luck?
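For context, I'm launching from Terminal roughly like this (the app path is a placeholder; MTL_HUD_ENABLED=1 does bring up the HUD itself):
MTL_HUD_ENABLED=1 MTL_HUD_SHOW_VALUE_RANGE=1 ./MyApp.app/Contents/MacOS/MyApp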
Topic:
Graphics & Games
SubTopic:
Metal
I have two planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes.
I've done this before in Unity and Godot in a fairly straightforward manner.
How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)?
Do I need to do it manually with a shader? And how can I stop the z-fighting?
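For reference, the setup is roughly this (a sketch; transforms are illustrative and UnlitMaterial stands in for my textured materials):
import RealityKit

// Two planes crossing at 90 degrees, sharing the same position: [ -|- ]
let planeMesh = MeshResource.generatePlane(width: 1, height: 1)
let planeA = ModelEntity(mesh: planeMesh, materials: [UnlitMaterial(color: .red)])
let planeB = ModelEntity(mesh: planeMesh, materials: [UnlitMaterial(color: .blue)])
planeB.orientation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) // rotate so they intersect

let anchor = AnchorEntity()
anchor.addChild(planeA)
anchor.addChild(planeB)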
Our app has an integrated 3D feature. To reduce the app's memory footprint in the background, we tear down the 3D scene after the app enters the background. However, the teardown itself causes a problem: while it executes in the background, the app briefly triggers a background GPU rendering error with the following warning:
OGPUMetalError: Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006: kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
Will this warning cause the system to tighten the app's background permissions, for example by limiting or shortening how long the app survives in the background? And could Background App Refresh stop working as a result, causing Bluetooth iBeacon activation to fail?
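For reference, the direction we're exploring to avoid triggering the warning is to stop the draw loop before the background teardown runs (a sketch; metalView stands in for our MTKView, and we haven't confirmed this removes the error entirely):
import UIKit
import MetalKit

func installBackgroundGuards(for metalView: MTKView) {
    let center = NotificationCenter.default
    center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                       object: nil, queue: .main) { _ in
        metalView.isPaused = true   // stop encoding/committing GPU work
    }
    center.addObserver(forName: UIApplication.willEnterForegroundNotification,
                       object: nil, queue: .main) { _ in
        metalView.isPaused = false  // resume rendering in the foreground
    }
}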
Topic:
Graphics & Games
SubTopic:
Metal
When using the simd_inverse functions, the same input yields different results on different devices.
On the Mac mini with M4 Pro and on the M3 Ultra, the result is:
But on the Mac mini with M2 Ultra, the result is:
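For reference, the call is the plain simd API (a minimal sketch; the matrix values here are placeholders rather than our real input):
import simd

let m = simd_float4x4(rows: [
    SIMD4<Float>( 0.98,  0.02, 0.17, 0.0),
    SIMD4<Float>( 0.01,  1.00, 0.05, 0.0),
    SIMD4<Float>(-0.17, -0.04, 0.98, 0.0),
    SIMD4<Float>( 0.00,  0.00, 0.00, 1.0)
])
let inv = m.inverse // equivalent to simd_inverse(m)
print(inv)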
I have something like the following drawing code in an MTKView (see the snippet at the bottom).
I am finding it difficult to figure out when the Swift-land resources used in making the MTLBuffer(s) can be released. Below, for example, is it OK if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created?
I have been reading through various articles such as
Setting resource storage modes
Choosing a resource storage mode for Apple GPUs
Copying data to a private resource
but it's a lot to absorb, and I haven't really been able to find an authoritative description of the required lifetime of the resources on the CPU side.
I should mention that this is Metal 4 code. In previous versions of Metal, the MTLCommandBuffer had the ability to add a completion handler to be called by the GPU after it has finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for the purpose I am interested in).
Any advice and/or pointers to the definitive literature will be appreciated.
guard let argsBuffer = device.makeBuffer(bytes: &args,...
argumentTable.setAddress(argsBuffer.gpuAddress, ...
encoder.setArgumentTable(argumentTable, stages: .vertex)
// encode drawing
encoder.draw...
...
encoder.endEncoding() // 1
commandBuffer.endCommandBuffer() // 2
commandQueue.waitForDrawable(drawable)
commandQueue.commit([commandBuffer]) // 3
commandQueue.signalDrawable(drawable)
drawable.present()
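One direction I'm considering is signaling an MTLSharedEvent from the queue and listening for it on the CPU. Note that this sketch assumes MTL4CommandQueue exposes signalEvent(_:value:) alongside signalDrawable(_:); I haven't confirmed that in the docs, so treat the queue call as an assumption:
// ASSUMPTION: signalEvent(_:value:) on the Metal 4 queue; name unverified.
let event = device.makeSharedEvent()!
let listener = MTLSharedEventListener(dispatchQueue: .main)
let fenceValue: UInt64 = 1

commandQueue.commit([commandBuffer])
commandQueue.signalEvent(event, value: fenceValue)
event.notify(listener, atValue: fenceValue) { _, _ in
    // The GPU has reached this point; presumably args and friends
    // are safe to release here, if they weren't already.
}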
Topic:
Graphics & Games
SubTopic:
Metal
Issue
When an Entity with a ViewAttachmentComponent is:
disabled using isEnabled = false
removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working.
Specifically:
Button actions inside the attached view do not fire
TapGesture closures on child views do not respond
Expected Behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.
Actual Behavior
After being disabled or removed once, all tap interactions stop responding.
Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur.
Removing and re-displaying the attachment still allows taps to work correctly.
Reproduction
Attached sample code reproduces the issue:
A RealityView with an Entity that has a ViewAttachmentComponent
The attached SwiftUI view contains a Toggle
The toggle updates isEnabled on the Entity
After toggling off and on, tap interactions stop responding
Environment
Xcode 26
visionOS 26
Question
Is this expected behavior of ViewAttachmentComponent, or a bug?
Is there a recommended way to temporarily hide or disable an Entity with ViewAttachmentComponent without breaking tap interactions?
import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // After deleting and re-displaying it, taps no longer respond.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // Executed successfully
            //let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // run update closure
            print(sampleEnabled)
            // update sample entity enable
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
We are trying to implement saving and fetching data to and from iCloud, but we have run into some problems.
macOS: 15.3
Here is what I do:
Enable the Game Center and iCloud capabilities in Signing & Capabilities, pick iCloud Documents, create and select a container.
Sample code:
void SaveDataToCloud( const void* buffer, unsigned int datasize, const char* name )
{
    if ( !GKLocalPlayer.localPlayer.authenticated ) return;
    NSData* data = [NSData dataWithBytes:buffer length:datasize];
    NSString* filename = [NSString stringWithUTF8String:name];
    [[GKLocalPlayer localPlayer] saveGameData:data withName:filename completionHandler:^(GKSavedGame * _Nullable savedGame, NSError * _Nullable error) {
        if ( error != nil )
        {
            NSLog( @"SaveDataToCloud error:%@", [error localizedDescription] );
        }
    }];
}

void FetchCloudSavedGameData()
{
    if ( !GKLocalPlayer.localPlayer.authenticated ) return;
    [[GKLocalPlayer localPlayer] fetchSavedGamesWithCompletionHandler:^(NSArray<GKSavedGame *> * _Nullable savedGames, NSError * _Nullable error) {
        if ( error == nil )
        {
            for ( GKSavedGame *item in savedGames )
            {
                [item loadDataWithCompletionHandler:^(NSData * _Nullable data, NSError * _Nullable error) {
                    if ( error == nil )
                    {
                        // handle data
                    }
                    else
                    {
                        NSLog( @"FetchCloudSavedGameData failed to load iCloud file: %@, error:%@", item.name, [error localizedDescription] );
                    }
                }];
            }
        }
        else
        {
            NSLog( @"FetchCloudSavedGameData error:%@", [error localizedDescription] );
        }
    }];
}
Neither saveGameData nor fetchSavedGamesWithCompletionHandler reports any error. When debugging, the saveGameData completion handler gets a nil error and a valid savedGame, but when we reboot the game and use fetchSavedGamesWithCompletionHandler to fetch the data, we get nothing: no error is reported and savedGames has length 0.
Following this page https://developer.apple.com/forums/thread/718541?answerId=825596022#825596022
we tried waiting 30 seconds after authentication before calling fetchSavedGamesWithCompletionHandler, and still got the same result.
Checked:
Game Center and iCloud are enabled and signed in with the same account.
iCloud has enough space for the save.
So what's wrong?
In the SceneKit framework, I want to render a point cloud, so I need to set the minimumPointScreenSpaceRadius property of the SCNGeometryElement class, but it doesn't work. I searched for related issues online, but didn't find a solution.
Here is my code:
- (SCNGeometry *)createGeometryWithVector3Data:(NSData *)vData colorData:(NSData * _Nullable)cData {
    int stride = sizeof(SCNVector3);
    long count = vData.length / stride;

    SCNGeometrySource *dataSource = [SCNGeometrySource geometrySourceWithData:vData
                                                                     semantic:SCNGeometrySourceSemanticVertex
                                                                  vectorCount:count
                                                              floatComponents:YES
                                                          componentsPerVector:3
                                                            bytesPerComponent:sizeof(float)
                                                                   dataOffset:0
                                                                   dataStride:stride];

    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:nil
                                                                primitiveType:SCNGeometryPrimitiveTypePoint
                                                               primitiveCount:count
                                                                bytesPerIndex:sizeof(int)];
    element.minimumPointScreenSpaceRadius = 1.0; // not working
    element.maximumPointScreenSpaceRadius = 20.0;

    SCNGeometry *pointCloudGeometry;
    SCNGeometrySource *colorSource;
    if (cData && cData.length != 0) {
        colorSource = [SCNGeometrySource geometrySourceWithData:cData
                                                       semantic:SCNGeometrySourceSemanticColor
                                                    vectorCount:count
                                                floatComponents:YES
                                            componentsPerVector:3
                                              bytesPerComponent:sizeof(float)
                                                     dataOffset:0
                                                     dataStride:stride];
        pointCloudGeometry = [SCNGeometry geometryWithSources:@[dataSource, colorSource] elements:@[element]];
    } else {
        pointCloudGeometry = [SCNGeometry geometryWithSources:@[dataSource] elements:@[element]];
    }
    return pointCloudGeometry;
}
Hi,
How to enable multitouch on ARView?
Touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
Use SwiftUI .simultaneousGesture on top of an ARView representable
Position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView
Expected behavior:
ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled.
Here is what I tried, on iOS 26.1 and macOS 26.1:
ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multitouch doesn't work with this setup.
Note that multitouch wouldn't work either if the ARView is presented with a UIViewController instead of SwiftUI.
import RealityKit
import SwiftUI

struct ARViewMultiTouchView: View {
    var body: some View {
        ZStack {
            ARViewMultiTouchRepresentable()
                .ignoresSafeArea()
        }
    }
}

#Preview {
    ARViewMultiTouchView()
}

// MARK: Representable ARView

struct ARViewMultiTouchRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARViewMultiTouch(frame: .zero)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

// MARK: ARView

class ARViewMultiTouch: ARView {
    required init(frame: CGRect) {
        super.init(frame: frame)

        /// Enable multi-touch
        isMultipleTouchEnabled = true

        cameraMode = .nonAR
        automaticallyConfigureSession = false
        environment.background = .color(.gray)

        /// Disable gesture recognizers so they don't conflict with touch events,
        /// but it doesn't fix the issue
        gestureRecognizers?.forEach { $0.isEnabled = false }
    }

    required dynamic init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            /// # Problem
            /// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch),
            /// but it only fires for one touch at a time (single-touch)
            print("Touch began at: \(touch.location(in: self))")
        }
    }
}
Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a way to make ARView handle multitouch directly, right?
import SwiftUI
import RealityKit

struct MultiTouchOverlayView: View {
    var body: some View {
        ZStack {
            MultiTouchOverlayRepresentable()
                .ignoresSafeArea()
            Text("Multi touch with overlay view")
                .font(.system(size: 24, weight: .medium))
                .foregroundStyle(.white)
                .offset(CGSize(width: 0, height: -150))
        }
    }
}

#Preview {
    MultiTouchOverlayView()
}

// MARK: Representable Container

struct MultiTouchOverlayRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        /// The view that SwiftUI will present
        let container = UIView()

        /// ARView
        let arView = ARView(frame: container.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        arView.cameraMode = .nonAR
        arView.automaticallyConfigureSession = false
        arView.environment.background = .color(.gray)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        /// The view that will capture touches
        let touchOverlay = TouchOverlayView(frame: container.bounds)
        touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        touchOverlay.backgroundColor = .clear

        /// Pass an arView reference to the overlay for hit testing
        touchOverlay.arView = arView

        /// Add views to the container.
        /// ARView goes in first, at the bottom.
        container.addSubview(arView)
        /// TouchOverlay goes in last, on top.
        container.addSubview(touchOverlay)

        return container
    }

    func updateUIView(_ uiView: UIView, context: Context) { }
}

// MARK: Touch Overlay View

/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
    weak var arView: ARView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")

        for touch in touches {
            let location = touch.location(in: self)
            /// Hit testing.
            /// ARView and the touch view must be the same size.
            if let arView = arView {
                if let entity = arView.entity(at: location) {
                    print("Touched entity: \(entity.name)")
                } else {
                    print("Touched: none")
                }
            }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
    }
}
Hi, I am using Xcode 26.x, but my Metal 4 classes are not compiling. I downloaded the sample code from Apple's website - https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function - and I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'".
I have hit a deadline, so any recommendations are very welcome.
I have downloaded the Metal Toolchain. When I run the following commands in Terminal - xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version -
I get the following response -
Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData
Build Version: 17A321
Status: installed
Toolchain Identifier: com.apple.dt.toolchain.Metal.32023
Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded
/Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal
Apple metal version 32023.830 (metalfe-32023.830.2)
Target: air64-apple-darwin24.6.0
Thread model: posix
InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
How can I paste a string to the findNavigator of a TextEditor ?
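For context, presenting the find navigator itself works with the standard modifier (a minimal sketch); what I can't find is an API to programmatically put a string into its search field:
import SwiftUI

struct EditorView: View {
    @State private var text = ""
    @State private var isFindPresented = false

    var body: some View {
        TextEditor(text: $text)
            .findNavigator(isPresented: $isFindPresented) // shows the find UI
    }
}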
Topic:
Graphics & Games
SubTopic:
General
Hi,
Since iOS 26 introduced the new Games app, I’ve noticed a problem when using a Nintendo Switch Pro Controller in wired USB-C mode, and also with third-party controllers that emulate it (like the GameSir X5 Lite).
In the Games app interface, only the L/R buttons respond, but the D-Pad and analog sticks don’t work at all. Once inside actual games, the controller works fine — the issue only affects the Games app UI.
What I’ve tested so far:
Xbox / PlayStation controllers → work fine in both wired and Bluetooth, including inside the Games app.
Switch Pro Controller (Bluetooth) → works fine, including in the Games app.
Switch Pro Controller (wired) → same issue as the X5 Lite, D-Pad and sticks don’t work in the Games app.
This makes it hard to use the new Games app launcher with these controllers, even though they work perfectly once a game is launched.
My question: is this an iOS bug (Apple needs to add proper support for wired Switch Pro controllers in the Games app), or something that Nintendo / GameSir would need to address?
Thanks in advance to anyone who can confirm this or provide more info.
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance.
It seems Apple quickly transitions from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull windows around again like a superhero. Am I missing something obvious about how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking head movement and, once it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
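For reference, my current drag implementation is the standard targeted gesture (a sketch), which is exactly the behavior that breaks down while walking:
import SwiftUI
import RealityKit

struct DraggableSceneView: View {
    var body: some View {
        RealityView { content in
            // ...entities with CollisionComponent and InputTargetComponent...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the entity to the (amplified) gesture location.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}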
I'm new here, so I don't know which topic this feature belongs to... sorry about that!
I watched the WWDC stream and I am really interested in this feature; I'm wondering whether it could be used in my apps.
I looked it up in the documentation, but I found it only supports visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
I have tried every combination of suggestions to get a skybox to appear, using SwiftUI, RealityKit, and iOS, in a non-immersive environment. Does anyone have code that works to display a skybox?
When I use a do/catch block, I get "environmentResource not found". I have checked the syntax, ensured the folder is referenced by the target, used the same name for the folder as the file, and the file is a .hdr (I assume this is supported); I have also moved the file folder to the top level. No change.
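In case it helps, this is the shape of my loading code (a sketch; "sky" stands in for my .hdr resource name, using the ARView background route):
import RealityKit

func applySkybox(to arView: ARView) {
    do {
        let resource = try EnvironmentResource.load(named: "sky")
        arView.environment.background = .skybox(resource)
    } catch {
        // This is where I hit "environmentResource not found".
        print("Skybox load failed: \(error)")
    }
}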
I have a model that uses a video material as the surface shader and I need to also use a geometry modifier on the material.
This seemed like it would be promising (adapted from https://developer.apple.com/wwdc21/10075, around 5m50s).
// Did the setup for the video and AVPlayer, eventually leading me to
let videoMaterial = VideoMaterial(avPlayer: avPlayer)

// Assign the material to the entity
entity.model!.materials = [videoMaterial]

// The part shown in WWDC: the library and geometry modifier were set up earlier,
// so now try to map the video material to the new custom material
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}
But I get the following error:
Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: RealityFoundation.CustomMaterialError.defaultSurfaceShaderForMaterialNotFound
How can I apply a geometry modifier to a VideoMaterial? Or, if I can't do that, is there an easy way to route the AVPlayer video data into the baseColor of CustomMaterial?
The simplest RealityView (content, attachments in ...)
causes "Contextual closure type expects 1 argument, but 2 were used in closure body". I have checked every example and I cannot understand why I get this error regardless of the content. Note: I have added Attachment(id: "test") to the attachments closure and get "Attachment not in scope".
I imported both RealityKit and SwiftUI.
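For reference, this is the canonical shape I'm trying to compile (a sketch following the documented visionOS form):
import SwiftUI
import RealityKit

struct AttachmentTestView: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "test") {
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "test") {
                Text("test")
            }
        }
    }
}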
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the launcher app might use the Swift Process API or a similar method to run the java command-line tool (let's assume the user has installed Java themselves) to run the game.
I have seen How to Enable Game Mode. If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
then the launcher itself can cause Game Mode to activate when the launcher is fullscreened. However, if the launcher opens a Java process that opens a window, and that Java window is fullscreened, Game Mode doesn't seem to activate. In this case activating Game Mode for the launcher itself is unnecessary; you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
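For reference, the keys above in Info.plist source form (a sketch):
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>
<key>LSSupportsGameMode</key>
<true/>
<key>GCSupportsGameMode</key>
<true/>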
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow this? Changing a sample app's bundle identifier to match the official Minecraft launcher's gets the behavior I want, but obviously that is not a practical solution.)