Hi,
Our app has a section where we show users how to activate "Silence Unknown Callers", because it is a crucial feature for our app. However, we see that 30% of users drop off at this step, because we can't open that setting directly in the Phone settings.
We are using this URL scheme to open the Phone settings on iOS 18:
if let url = URL(string: "App-prefs:com.apple.mobilephone") {
    UIApplication.shared.open(url)
}
However, we haven't found any way to open the "silence" path directly, as we could on iOS 17 with this URL scheme: prefs:root=Phone&path=SILENCE_CALLS
So, do you know if it is possible to open that option directly? We want to improve our app's accessibility.
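For reference, here is the defensive version of our current call (a sketch only; the "App-prefs" scheme is undocumented, so whether it, or any deeper path, keeps working can change between iOS releases, and openSilenceUnknownCallersSettings is just an illustrative name):

import UIKit

// Hedged sketch: try the undocumented Phone settings URL first and fall back to the
// app's own settings page if the deep link cannot be opened.
func openSilenceUnknownCallersSettings() {
    let phoneSettings = URL(string: "App-prefs:com.apple.mobilephone")
    let appSettings = URL(string: UIApplication.openSettingsURLString)

    if let url = phoneSettings {
        UIApplication.shared.open(url) { success in
            if !success, let fallback = appSettings {
                UIApplication.shared.open(fallback)
            }
        }
    } else if let fallback = appSettings {
        UIApplication.shared.open(fallback)
    }
}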
Thank you!
I have users who need to be able to hear the content of SwiftUI Text views. I have specified the .textSelection(.enabled) modifier for the text views. Adding this modifier causes a "copy" option to appear on long press, but it doesn't enable the visible selection of text, nor does it provide the "Speak" menu item that UIKit allows on text selection.
Is the "Speak Selection" accessibility feature broken for SwiftUI Text views? I've found that there's another accessibility feature that does work (enabling the Speech Controller button for "Speak Screen"). Do I need to tell my users that Apple is deprecating the "Speak Selection" accessibility feature, and that they need to use the Speech Controller instead? Or is there something else I can do to my SwiftUI to get that feature to work?
I have more than 1,000 notes organized in parent/child folders up to 5 levels deep. From the 5th level of folders, I can no longer share a note. The note itself is not shared; what gets shared instead is the one from the parent folder.
Thank you very much
Best regards,
Christophe
Topic: Accessibility & Inclusion
SubTopic: General
Hey folks, I would like to ask for help on this topic:
I think this is exactly the same problem as "Combobox not working with VoiceOver after…" on Apple Community.
VoiceOver also breaks the combobox from the official ARIA W3C website https://www.w3.org/WAI/ARIA/apg/patterns/combobox/examples/combobox-autocomplete-list/. When VO is turned off, I can use the up/down arrow to go through the menu items from the dropdown, but when VO is turned on, the up/down arrows cannot access the dropdown menu items.
Is there an official tutorial on how to control it using VoiceOver?
Kind regards,
Jakub
Topic: Accessibility & Inclusion
SubTopic: General
I have a view dynamically overlaid on a UITableView with proper padding (added when certain conditions are met). When VoiceOver focuses on a cell beneath this overlay, the focused element does not scroll into view. I’ve noticed similar behavior in Apple’s first-party Podcasts app.
Please find the attached image for reference. How can I resolve this issue and ensure VoiceOver scrolls the focused cell into view?
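For reference, this is the direction I'm experimenting with (a sketch with hypothetical outlets, not a confirmed fix): reserve space for the overlay via contentInset, and nudge the focused cell into view when VoiceOver focus changes:

import UIKit

final class OverlayTableViewController: UIViewController {
    @IBOutlet var tableView: UITableView!
    @IBOutlet var overlayView: UIView!       // hypothetical outlets for illustration
    private var focusObserver: NSObjectProtocol?

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Let content scroll above the overlay instead of being covered by it.
        tableView.contentInset.bottom = overlayView.bounds.height
        tableView.verticalScrollIndicatorInsets.bottom = overlayView.bounds.height
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        focusObserver = NotificationCenter.default.addObserver(
            forName: UIAccessibility.elementFocusedNotification,
            object: nil,
            queue: .main
        ) { [weak self] note in
            guard let self = self,
                  let cell = note.userInfo?[UIAccessibility.focusedElementUserInfoKey] as? UITableViewCell,
                  let indexPath = self.tableView.indexPath(for: cell) else { return }
            // Scroll the VoiceOver-focused cell clear of the overlay.
            self.tableView.scrollToRow(at: indexPath, at: .none, animated: false)
        }
    }
}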
Why did the screen recorder button disappear? It cannot be found anywhere.
Topic: Accessibility & Inclusion
SubTopic: General
Please excuse me if this is obvious. I'm new to Apple development.
Is there a SwiftUI Accessibility Inspector? I run the standard one, in Xcode 26b3, and it shows me warnings for things that I didn't create in SwiftUI. I presume that "SwiftUI" is primarily implemented using macros and that these things are either generated or boilerplate lower-level things. But if so, then why would they trip Accessibility Inspector warnings? Is there something I can do from SwiftUI to clear them?
Or... is there a demangler somewhere that will translate from these names into something this human might recognize?
I'm targeting macOS, by the way, if that makes any difference.
Topic: Accessibility & Inclusion
SubTopic: General
Hello,
I am working on a Braille keyboard using the HID approach.
Currently, the device works with the iPhone 11 and SE 3.
However, when tested on an iPhone 6s with iOS 15, although the device can be connected and is recognized as a Braille device on the VoiceOver screen, the phone shows no response to the key-press report.
Is there any additional requirement, such as in the HID descriptor, for iPhone 6s support of Braille devices? If the iPhone 6s does not support such devices, what are the minimum system requirements?
Thank you!
Hi,
I'm trying to fix a tvOS view for the VoiceOver accessibility feature:
TabView { // 5 tabs
    Text(title)
    Button(play)
    ScrollView { // Live
        LazyHStack { 200 items }
    }
    ScrollView { // Continue watching
        LazyHStack { 500 items }
    }
}
When the view shows up, VoiceOver reads:
"Home tab 1 of 5, Item 2" - I'm not sure why it reads "Item 2" of the first cell in the scroll view, maybe because it just got loaded by the LazyHStack.
VoiceOver should only read "Home tab 1 of 5"
When moving focus to scroll view it reads:
"Live, Item 1" and after slight delay "Item 1, Item 2, Item 3, Item 4"
When moving focus to second item it reads:
"Item 2" and after slight delay "Item 1, Item 2, Item 3, Item 4"
When moving focus to third item it reads:
"Item 3" and after slight delay "Item 1, Item 2, Item 3, Item 4"
It should just read what is focused, ideally just
"Live, Item 1, 1 of 200"
then after moving focus on item 2
"Item 2, 2 of 200"
this time without the word "Live" because we are on the same scroll view (the same horizontal list)
Currently the app is unusable. We have visually impaired testers, and this rotor reading everything on the screen is totally confusing, because users don't know where they are and what is actually focused.
This is a video streaming app and we are streaming all the time, even on the home page in the background; it binge-plays one item after another, and usually there is a never-ending live stream playing. The user can switch the TV channel, but we continue to play. VoiceOver should only read what's focused after user interaction.
Original Apple TV app does not do that, so it cannot be caused by some verbose accessibility settings. It reads correctly only focused item in scrolling lists.
How do I disable reading content that is not focused?
I tried:
.accessibilityLabel(isFocused ? title : "")
.accessibilityHidden(!isFocused)
.accessibilityHidden(true) - tried on various levels in the view hierarchy
.accessibilityElement(children: .ignore) - even the focused item is not read back by VoiceOver
.accessibilityElement(children: .contain) - tried on various levels in the view hierarchy
.accessibilityElement(children: .combine) - tried on various levels in the view hierarchy
.accessibilityAddTraits(.isHeader) - tried on various levels in the view hierarchy
.accessibilityRemoveTraits(.isHeader) - tried on various levels in the view hierarchy
// the last 2 were basically an attempt to hack it
.accessibilityRotor("", ranges: []) - another hack that I tried on the ScrollView, the LazyHStack, and also on the top-level view.
50+ other attempts at configuring accessibility modifiers attached to views.
I have watched all the accessibility videos and tried all the sample code projects. I haven't found a solution anywhere; internet searches turned up nothing, and AI didn't help, as it can only provide code that someone else wrote before.
Any idea how to fix this?
Thanks.
I added a view controller in the storyboard, added a table view to that view, and added a cell to the table. When I run the app and navigate to that page with VoiceOver running, a three-finger swipe up or down announces a sentence in English. Without changing the cell's accessibility, how can I get the announcement for the three-finger swipe up or down to be spoken in Chinese?
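If the goal is to have that announcement spoken in Chinese, a minimal sketch (this is an assumption about the intent, not a confirmed fix) is to post the announcement yourself with a speech-language attribute; alternatively, setting accessibilityLanguage = "zh-CN" on the relevant element tells VoiceOver which language to use for its text:

import UIKit

// Sketch: post a VoiceOver announcement tagged so it is spoken in Chinese.
func announceInChinese(_ text: String) {
    let announcement = NSAttributedString(
        string: text,
        attributes: [.accessibilitySpeechLanguage: "zh-CN"]
    )
    UIAccessibility.post(notification: .announcement, argument: announcement)
}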
I have an app that needs Input Monitoring permission to get keyboard access in the background. I've attempted to use both IOHIDCheckAccess(kIOHIDRequestTypeListenEvent) and IOHIDRequestAccess(kIOHIDRequestTypeListenEvent), but they always return denied, even though I have granted the app Input Monitoring permission in System Settings.
Is there something I need to put in my Info.plist to enable this permission to work?
Topic: Accessibility & Inclusion
SubTopic: General
Request: Name Recognition → Shortcut for SOS Flashlight + Vibration
Right now, iOS Name Recognition works, but all I can do is flash the tiny notification light. It would be much more useful if Name Recognition could trigger a Shortcut. That way, I could set it to flash the flashlight in an SOS pattern and vibrate, making the alert impossible to miss.
I tried using Custom Alarm, but it won’t let me record my spoken name, so it doesn’t really solve the problem. If Apple allowed Name Recognition to trigger Shortcuts — or expanded “Custom” to support names/words — this would open up far more practical, real-world alerts.
Topic: Accessibility & Inclusion
SubTopic: General
I am developing a visionOS app for controlling an underwater ROV. I have ornaments with telemetry and buttons around a central video feed view. I have custom button mappings, such as "A" for locking the depth of the drone. However, when I look at buttons or certain ornaments, my custom gamepad logic is kept from running. This means that when a SwiftUI Button gains focus on visionOS, pressing the controller's A button triggers the system's default "click" on that Button rather than my custom buttonA handler. Essentially, focus interception by the system is stealing my A-press events and preventing my custom gamepad logic from running.
Is there a way to disable the built in gamepad interaction and only allow my custom gamepad mappings?
Topic: Accessibility & Inclusion
SubTopic: General
Tags: Game Controller, Accessibility, Focus, visionOS
When VoiceOver reads decimal numbers with six or more digits after the decimal, it stops announcing the decimal separator and also adds pauses between each digit.
Text("0.12345") // VoiceOver: "zero **point** one two three four five"
Text("0.123456") // VoiceOver: "zero one, two, three, four, five, six"
How can I force VoiceOver to announce the decimal separator ("point") and not insert pauses regardless of the number of decimal digits?
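A hedged workaround sketch (not a documented VoiceOver setting; DecimalText is just an illustrative name): give the Text an explicit accessibility label that spells out the separator, so the spoken form no longer depends on how VoiceOver chunks long decimals:

import SwiftUI

struct DecimalText: View {
    let value: String   // e.g. "0.123456"

    var body: some View {
        Text(value)
            // Spell out the separator; localize "point" for non-English users as needed.
            .accessibilityLabel(value.replacingOccurrences(of: ".", with: " point "))
    }
}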
When I try to get the frames of an AXUIElementRef using AXUIElementCopyAttributeValue(element, (CFStringRef)attribute, &result), the frames are shifted and rotated on the iOS simulator.
I get the same frames when using the Accessibility Inspector when the Mac is selected as the host.
When I switch the host to the iOS simulator the frames are correct.
How is the Accessibility Inspector getting the correct frames? And how can I do the same in my app?
Topic: Accessibility & Inclusion
SubTopic: General
VoiceOver reads out all visible content on the screen, which is essential for visually challenged users. However, this raises a privacy concern—what if a user accidentally focuses on sensitive information, like a bank account password, and it gets read aloud?
How can developers prevent VoiceOver from exposing confidential data while still maintaining accessibility? Are there best practices or recommended approaches to handle such scenarios effectively?
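As a starting point, here is a minimal sketch of one common practice (names are illustrative, and it assumes the value is genuinely sensitive): use SecureField for entry, and give any displayed sensitive text a generic accessibility label so the raw value is never spoken:

import SwiftUI

struct AccountSummaryRow: View {
    @State private var password = ""
    let accountNumber: String

    var body: some View {
        VStack(alignment: .leading) {
            // SecureField announces activity, not the typed characters.
            SecureField("Password", text: $password)
            // VoiceOver speaks this label instead of the digits on screen.
            Text(accountNumber)
                .accessibilityLabel("Account number hidden")
        }
    }
}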
Please update Accessibility OS Settings for VoiceOver in iPhone iOS and iPadOS to include frames on the Rotor, and to make web navigation and component gestures easier to find and assign. Please add content to the iPhone and iPad Apple User Guide to use VoiceOver in web navigation with touch gestures.
Specifically... iframes.
There is no clear guidance in Apple documentation for VoiceOver users in iPhone or iPadOS to access iframes with touch gestures. A common belief as written on AppleVis, other blogs, and internet searches is that iframes in Safari or a webView in an app are only available with explore by touch.
If explore by touch is the only option for some interactions, that needs to be included in Apple User Guides. If not, details on equivalent touch gestures for VO that have keyboard interactions in Mac need to be clear for users.
VoiceOver for Mac includes a default keyboard interaction of VO-Command-F in its extensive User Guide (https://support.apple.com/guide/voiceover/by-images-or-frames-mchlp2740/mac). A user can include a rotor option for web navigation for iframes.
VoiceOver for iPhone and iPad does not include a default swipe gesture assigned to frames. An option is not available for the Rotor.
While there is iPhone User Guide guidance that gestures can be customized (https://support.apple.com/guide/iphone/customize-gestures-and-keyboard-shortcuts-iph59a8e6fd2/18.0/ios/18.0), it is not clear that, to add this gesture, you must find "Move to the next frame" tucked into the advanced navigation commands for VoiceOver in Accessibility Settings. At least on my phone, the word "frame" was not searchable, despite the All Commands screen having a search bar.
Hi everyone,
I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?").
Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), which are critical in Sign Language grammar.
I'd like to propose/discuss an architecture leveraging the current LiDAR + Neural Engine capabilities found in iPhone devices to solve this.
The Concept: Skeleton-based Normalization
Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's Body Tracking to abstract the input.
Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
Data Normalization: Extract only the vector coordinates (X, Y, Z of joints). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
Comparison: Feed these vectors into a CoreML model trained on "Reference Skeletons" (recorded by native signers).
Feedback Loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific correction (e.g., "Raise your elbow 10 degrees").
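To make the comparison step concrete, here is a minimal sketch (names and joint layout are hypothetical, and it assumes both poses are already normalized to the same joint order and reference frame):

import simd

struct Pose {
    var joints: [SIMD3<Float>]   // one 3D position per tracked joint
}

// Mean per-joint distance between the learner's pose and a reference pose;
// smaller values mean the captured pose is closer to the native signer's.
func poseDistance(_ user: Pose, _ reference: Pose) -> Float {
    precondition(user.joints.count == reference.joints.count, "poses must share a joint layout")
    let total = zip(user.joints, reference.joints)
        .map { simd_distance($0.0, $0.1) }
        .reduce(0, +)
    return total / Float(user.joints.count)
}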
Why this approach?
Solves Occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
Privacy: We are processing coordinates, not video streams.
Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.
Has anyone experimented with using ARKit Body Anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "Skeleton First" approach is the key to scalable Sign Language education apps.
Looking forward to hearing your thoughts.
Yesterday I installed iOS 26 on my iPhone as a beta tester. At first there was no problem, but during the afternoon I noticed that neither FaceTime nor iMessage worked... I tried to go through the settings as described by Apple Support, but my phone number would not activate. Sometimes I was even asked to activate iCloud. I always get a REG-RESP message.
Does anyone have any ideas what the problem could be?
Topic: Accessibility & Inclusion
SubTopic: General
Hello!
I was doing some accessibility testing for my app and found out that when the user switches the text size, all of the data in the text fields is reset, which causes major disruption.
I've tried looking for documentation, but all I've found is information on how to dynamically scale the UI for different text sizes, which I've already implemented.
My guess is that every time Dynamic Type registers a change, it redraws my UI instead of just updating it.
How can I make sure the data is not reset when the text size changes?
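For context, this is roughly the shape I assume should survive the change (a SwiftUI sketch with illustrative names): keep the fields' source of truth in a model object, so that a re-render triggered by a Dynamic Type change cannot wipe what the user typed:

import SwiftUI

final class FormModel: ObservableObject {
    @Published var name = ""
    @Published var notes = ""
}

struct FormView: View {
    @StateObject private var model = FormModel()   // survives view redraws

    var body: some View {
        Form {
            TextField("Name", text: $model.name)
            TextField("Notes", text: $model.notes)
        }
        .font(.body)   // scales with Dynamic Type without touching the model
    }
}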