Environment: iPad (10th generation), iOS 18.3.2
I'm using AVAudioSession to record sound in my application. I recently noticed that when the app starts a recording session on the iPad, the OS automatically sets the device volume to 50%, and when the recording ends, the volume is not restored to its previous level. I would like to know whether this is default OS behavior or a bug.
If it's default behavior, I'd much appreciate a link to the relevant documentation.
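One way to document the behavior for a bug report (a hedged sketch, not from the original post) is to log AVAudioSession's read-only outputVolume before activating the recording session and after deactivating it:
import AVFoundation
// Sketch: log the system output volume around a recording session to capture
// the reported drop to 50%. outputVolume is read-only on AVAudioSession, so the
// app itself cannot restore the previous level this way.
func logVolumeAroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    print("Volume before activation: \(session.outputVolume)")
    try session.setActive(true)
    // ... perform the recording ...
    try session.setActive(false)
    print("Volume after deactivation: \(session.outputVolume)")
}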
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hi there,
I recently launched a DJ app on the Mac App Store, and was wondering how I could access songs for mixing purposes via Apple Music, just like Serato, rekordbox, djay, and other DJ apps do?
Thanks,
Gunek
I am developing an app that uses MusicKit to play music, and I then need spoken words played to the user while ducking the audio coming from MusicKit (ApplicationMusicPlayer).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play it back using AVAudioSession.
Sample code is below.
The problem I am having is that .duckOthers is not ducking the ApplicationMusicPlayer output.
Is this a bug, or am I doing this wrong?
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
// Request a small I/O buffer for low-latency playback (note: this does not set a ducking level)
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()
// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
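Not from the original post, but one thing worth checking with this setup: ducking is only released when the session is deactivated, so the teardown after the speech finishes matters. A hedged sketch, using the standard AVAudioPlayerDelegate callback:
import AVFoundation
// Sketch: end ducking once speech playback finishes by deactivating the session
// with .notifyOthersOnDeactivation so other audio can return to full volume.
func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
    do {
        try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    } catch {
        print("Failed to deactivate audio session: \(error)")
    }
}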
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.
Steps to Reproduce:
Add the same song to a playlist multiple times.
Observe .id.rawValue of entries (e.g. i.SONGID_0, i.SONGID_1).
Remove one entry.
Fetch playlist again — note the other IDs have shifted.
FB18879062
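For context, a minimal sketch of the kind of fetch used in the steps above (assuming MusicKit's MusicLibraryRequest; the function name is illustrative):
import MusicKit
// Sketch: fetch a library playlist with its entries and print the entry IDs,
// which is where the suffix-based identifiers (songID_0, songID_1, ...) show up.
func dumpEntryIDs() async throws {
    let request = MusicLibraryRequest<Playlist>()
    let response = try await request.response()
    guard let playlist = response.items.first else { return }
    let detailed = try await playlist.with(.entries)
    for entry in detailed.entries ?? [] {
        print(entry.id.rawValue)
    }
}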
Hi,
I’m an iOS developer building an app with a use case that needs advanced playback on Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback — i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.
From what I can tell, this capability seems to exist only for approved partners and isn’t available through public MusicKit.
Question: What’s the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?
For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ—all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content. I’d prefer not to share product details publicly for the moment and can provide specifics privately if needed.
Thanks in advance!
Since macOS 26, Apple Music has had inconsistent quality drops on some tracks, seemingly at random. I don't know if others have experienced it. It doesn't happen on the speakers or when connected via Bluetooth, but the AUX I/O has it quite often. It is more noticeable on headphones with 48 kHz and higher frequency bandwidth.
Here is the feedback: FB18062589
Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU.
Is there a Logic Pro (LPX) resource available for developers anywhere?
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;
    if (playHead->getCurrentPosition(posInfo))
    {
        bool isCurrentlyPlaying = posInfo.isPlaying;
        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            if (isCurrentlyPlaying)
            {
                wasTransportPlaying = isCurrentlyPlaying;
                startAllTimers();
            }
            else
            {
                wasTransportPlaying = isCurrentlyPlaying;
                stopAllTimers();
            }
        }
    }
}
thanks :)
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:
do {
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
try AVAudioSession.sharedInstance().setActive(true)
} catch {
print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}
That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers.
Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone).
I'm not sure why it would behave this way and want to note that I do have the audio background mode capability enabled.
Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
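Not part of the original post, but for comparison, a hedged sketch of one mitigation sometimes tried for this error: defer activation until playback is actually requested, and treat the cannotInterruptOthers case as retryable once the app or CarPlay scene becomes active (the function name is illustrative):
import AVFoundation
// Sketch: activate the session lazily when playback is requested, and treat
// cannotInterruptOthers as a retryable condition rather than a hard failure.
func activateSessionForPlayback() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .spokenAudio)
        try session.setActive(true)
    } catch let error as NSError
        where error.code == AVAudioSession.ErrorCode.cannotInterruptOthers.rawValue {
        // Activation can fail while another app holds audio focus and this app is
        // in the background; schedule a retry for when the scene becomes active.
        print("Session activation deferred: \(error)")
    } catch {
        print("Audio session error: \(error)")
    }
}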
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If I do so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version or throw an error (for example when offline).
Using .with(.tracks) on the library album fetches the tracks added to the library.
The trackCount property is referring to the tracks that can be fetched from the library.
The isComplete property is always nil when fetching from the library.
One possible way is checking the trackNumber and discCount properties. However, this only detects an incomplete album when a missing track comes before one that has been added; it can't catch the case where only the last tracks are missing. I'd like to be able to handle this edge case as well.
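A rough sketch of that heuristic for reference (hedged: it assumes Track exposes trackNumber and covers only single-disc albums):
import MusicKit
// Sketch: treat a library album as possibly incomplete when the track numbers
// of its library tracks are not a contiguous 1...N run. This misses the case
// where only trailing tracks are absent, which is the edge case described above.
func looksContiguous(_ album: Album) async throws -> Bool {
    let detailed = try await album.with(.tracks)
    let numbers = (detailed.tracks ?? []).compactMap { $0.trackNumber }.sorted()
    guard !numbers.isEmpty else { return false }
    return numbers == Array(1...numbers.count)
}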
Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog for this, since it is supposed to work offline as well. Fetching and storing all track IDs when connected and later comparing against them would work, but that would potentially mean storing tens of thousands of track IDs.
Thank you
hi all,
as soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display.
Can I somehow stop the insertion of the display-sleep assertion?
pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep"
Created for PID: 4145.
where PID 4145 is spotify.
But it doesn't matter which app is playing the audio.
any help would be appreciated
thanks
I'm working on adding Spatial Audio support to a game on the Mac. I'm looking at the SpatialAudioRenderer sample but having some issues.
It's unclear to me when a device is compatible with Spatial Audio and when I should attempt to render Spatial Audio. There is no property that I can find on the Mac that advertises Spatial Audio compatibility on a device.
The sample crashes when the output device is a USB device. This includes the Apple Studio Display.
The Apple Studio Display is supposed to be capable of rendering Spatial Audio. The device doesn't work with the sample - do I still need to render down the 7.1.4 source on this device? The sample always renders down to Stereo, but the Apple Studio Display is not a Stereo device.
I'm a bit confused by the sample and when/how I should configure the mixing unit.
Hello,
I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (monolith/concatenated samples approach).
Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.
The Problem
Setup:
Single audio file (monolith) containing multiple concatenated samples
Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file
All zones load successfully without errors
Expected Behavior:
All zones should play their respective audio regions immediately from the first sample.
Actual Behavior:
Last zone in the zone list: Works perfectly - plays audio immediately
All other zones: Output [0, 0, 0, 0, ..., _audio_data] instead of [real_audio_data]
The number of zeros varies from event to event for each zone. It can be a couple of samples (<30) up to several buffers.
After the initial zeros, the correct audio plays normally, so there is no shift in audio playback, just missing samples at the beginning.
Minimal Reproduction
1. Create Test Monolith Audio File
Create a single WAV file with 3 concatenated 1-second samples (44.1 kHz):
Sample 1: frames 0-44099 (constant amplitude 0.3)
Sample 2: frames 44100-88199 (constant amplitude 0.6)
Sample 3: frames 88200-132299 (constant amplitude 0.9)
2. Create Test Preset
Create an .aupreset with 3 zones all referencing the same file:
Pseudo code
<ZoneArray>
  <zone 1> start sample: 0,     end sample: 44099,  note: 60, waveform: ref_to_monolith.wav
  <zone 2> start sample: 44100, end sample: 88199,  note: 62, waveform: ref_to_monolith.wav
  <zone 3> start sample: 88200, end sample: 132299, note: 64, waveform: ref_to_monolith.wav
</ZoneArray>
3. Load and Test
// Load the .aupreset into AVAudioUnitSampler (the sampler must be attached to a
// running AVAudioEngine for the notes below to produce output)
let sampler = AVAudioUnitSampler()
try sampler.loadInstrument(at: presetURL)
// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3
4. Observed Result
Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ Zeros at beginning
Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ Zeros at beginning
Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ Works correctly (last zone)
What I've Extensively Tested
What DOES Work
Separate files per zone:
Each zone references its own individual audio file
All zones play correctly without zeros
Problem: Not viable for iOS apps with 500+ sample libraries due to file handle limitations
What DOESN'T Work (All Tested)
1. Different Audio Formats:
CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved)
M4A (AAC compressed)
WAV (uncompressed)
SF2 (SoundFont2)
Bug persists across all formats
2. CAF Region Chunks:
Created CAF files with embedded region chunks defining zone boundaries
Set zones with no sampleStart/sampleEnd in preset (nil values)
AVAudioUnitSampler completely ignores CAF region metadata
Bug persists
3. Unique Waveform IDs:
Gave each zone a unique waveform ID (268435456, 268435457, 268435458)
Each ID has its own file reference entry (all pointing to same physical file)
Hypothesized this might trigger separate buffer initialization
Bug persists - no improvement
4. Different Sample Rates:
Tested: 44.1kHz, 48kHz, 96kHz
Bug occurs at all sample rates
5. Mono vs Stereo:
Bug occurs with both mono and stereo files
Environment
macOS: Sonoma 14.x (tested across multiple minor versions)
iOS: Tested on iOS 17.x with same results
Xcode: 16.x
Frameworks: AVFoundation, AudioToolbox
Reproducibility: 100% reproducible with setup described above
Impact & Use Case
This bug severely impacts professional music applications that need:
Small file sizes: Monolith files allow sharing compressed audio data (AAC/M4A)
iOS file handle limits: Opening 400+ individual sample files is not viable on iOS
Performance: Single file loading is much faster than hundreds of individual files
Standard industry practice: Monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers
Current Impact:
Cannot use monolith files with AVAudioUnitSampler on iOS
Forced to choose between: unusable audio (zeros at start) OR hitting iOS file limits
No viable workaround exists
Root Cause Hypothesis
The bug appears to be in AVAudioUnitSampler's internal buffer initialization when:
Multiple zones share the same source audio file
Each zone specifies different sampleStart/sampleEnd offsets
Key observation: The last zone in the zone array always works correctly.
This is NOT related to:
File permissions or security-scoped resources (separate files work fine)
Audio codec issues (happens with uncompressed PCM too)
Preset parsing (preset loads correctly, all zones are valid)
Questions
Is this a known issue? I couldn't find any documentation, bug reports, or discussions about this.
Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler? (A speculative sketch follows below.)
Alternative APIs? Is there a different API or approach for iOS that properly supports monolith sample files?
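Regarding the workaround question: a purely speculative sketch (an untested assumption on my part, not from Apple documentation) would be to "warm up" every zone once at minimum gain right after loading the preset, so the sampler touches each region's buffer before real playback:
import AVFoundation
// Speculative warm-up workaround: trigger every zone once while the sampler is
// effectively silent. masterGain is in dB (-90...12). Function name and note list
// are illustrative; whether this actually avoids the leading zeros is unverified.
func warmUpZones(_ sampler: AVAudioUnitSampler, notes: [UInt8]) {
    let savedGain = sampler.masterGain
    sampler.masterGain = -90.0          // effectively silent
    for note in notes {
        sampler.startNote(note, withVelocity: 1, onChannel: 0)
    }
    // Give the render thread a few buffers to pull data for each zone.
    Thread.sleep(forTimeInterval: 0.1)
    for note in notes {
        sampler.stopNote(note, onChannel: 0)
    }
    sampler.masterGain = savedGain
}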
Hi,
when a CoreMIDI driver controls physical hardware, it is probably quite common to have to throttle the amount of MIDI data received from the system.
What comes to mind is simply delaying the return of the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI does stall until the callback returns, this seems to be only a side effect of a generally stalled CoreMIDI server. Between callbacks the application can send as much MIDI data as it wants to CoreMIDI; its buffering appears to be endless. However, the hardware might not be able to play out all of that data.
It seems there is no way to indicate an overflow/full-buffer condition back to the application/CoreMIDI. How is this supposed to work?
Thanks, any hints or pointers are highly appreciated!
Hagen.
Hi guys,
I am having an issue live-streaming audio from a Bluetooth headset and playing it live on the iPhone speaker.
I am able to redirect the audio back to the headset, but this is not what I want.
The issue happens when I try to override the output: the iPhone switches to the speaker but also switches the microphone.
This is an example of the code:
import AVFoundation

class AudioRecorder {
    let player: AVAudioPlayerNode
    let engine: AVAudioEngine
    let audioSession: AVAudioSession
    let audioSessionOutput: AVAudioSession

    init() {
        self.player = AVAudioPlayerNode()
        self.engine = AVAudioEngine()
        self.audioSession = AVAudioSession.sharedInstance()
        self.audioSessionOutput = AVAudioSession()

        do {
            try self.audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker])
            try self.audioSessionOutput.setCategory(AVAudioSession.Category.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
            try self.audioSession.setMode(AVAudioSession.Mode.default)
            try self.audioSession.setActive(true)
            // try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
        } catch {
            print(error)
        }

        let input = self.engine.inputNode
        self.engine.attach(self.player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.player.scheduleBuffer(buffer)
            print(buffer)
        }
    }

    public func start() {
        try! self.engine.start()
        self.player.play()
    }

    public func stop() {
        self.player.stop()
        self.engine.stop()
    }
}
I am not sure if this is a bug or not.
Can somebody point me in the right direction?
Is there a way to design custom audio routing?
I would also appreciate some good documentation besides the AVFoundation docs.
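Not from the original post: one thing that stands out in the code is that two AVAudioSession objects are configured. There is only one audio session per app, so the usual pattern is to configure the shared instance once with all options combined. A hedged sketch (this alone may not solve the routing question, since Bluetooth HFP generally ties the headset's input and output together):
import AVFoundation
// Sketch: configure the single shared session with both options at once.
// With HFP the headset mic and speaker are usually selected as a pair, so
// speaker output plus headset input may still not be achievable this way.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth, .defaultToSpeaker])
try session.setActive(true)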
On Apple TV 4K (3rd generation) with tvOS 26 beta 2, when two HomePod 2 speakers are paired to the device, music and movie sources with Dolby Atmos can only be listened to in stereo; Dolby Atmos is reported as not supported.
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
}
- (IBAction)myButtonAction:(id)sender {
NSLog(@"Test");
soundCreate();
}
@end
// media.m
#import <AVFoundation/AVFoundation.h>

static AVAudioEngine *audioEngine = nil;
void soundCreate(void)
{
if (audioEngine != nil)
return;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
audioEngine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
[audioEngine attachNode:playerNode];
[audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
[audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
Hi all,
I’ve implemented the new Core Audio Tap API (AudioHardwareCreateProcessTap with CATapDescription) and I’m seeing consistent level attenuation that scales with the number of stereo output pairs exposed by the target device.
What I observe
Device with 4 stereo pairs (8 outs) → tap shows −12.04 dB relative to source.
True 2-ch devices (built-in speakers, AirPods) → ~0 dB attenuation.
The attenuation appears regardless of whether I:
Create a global (default-output) tap via initStereoGlobalTapButExcludeProcesses:
Or create a per-process/per-device tap via initWithProcesses:andDeviceUID:withStream:
Additionally, the routing choice inside the sending app matters:
App output to “System/Default Output” → I often see no attenuation.
App output directly to a multi-out interface (e.g., RME Fireface) → I see the pair-count-scaled attenuation.
I can query Core Audio for the number of output channels/pairs and gain-compensate (+20·log10(N_pairs) dB) and that matches my measurements for many cases. However, this compensation is not universally correct because it seems to depend on where each process routes its audio (Default Output vs. direct device), even when those processes are included in the same tap aggregate.
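For reference, a sketch of how I compute that compensation (hedged: it assumes the attenuation really scales with the stereo-pair count, which is only my observation, and it does not account for where each process routes its audio):
import CoreAudio
import Foundation
// Sketch: count a device's output channels via kAudioDevicePropertyStreamConfiguration
// and derive make-up gain from the implied number of stereo pairs.
func outputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return 0 }
    let raw = UnsafeMutableRawPointer.allocate(byteCount: Int(dataSize),
                                               alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, raw) == noErr else { return 0 }
    let bufferList = UnsafeMutableAudioBufferListPointer(raw.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}
// 8 output channels -> 4 stereo pairs -> +12.04 dB of make-up gain, matching the measurement.
func compensationDB(forOutputChannels channels: Int) -> Double {
    let pairs = max(1, channels / 2)
    return 20.0 * log10(Double(pairs))
}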
Question
Is there a supported way to obtain the raw, unattenuated streams for all processes through the Tap API—i.e., to bypass this automatic headroom/attenuation behavior entirely? If this attenuation is expected by design:
Is there a documented rule for when it applies (global vs. device taps, per-process taps, stream selection, etc.)?
Is there a property/flag to disable it, or a reliable, official method to compute the exact compensation (beyond counting stereo pairs)?
Any guidance on ensuring consistent levels when multiple processes route differently (Default Output vs. direct device) but are captured by the same tap?
Environment
API: AudioHardwareCreateProcessTap + CATapDescription
Devices: built-in output (2-ch), RME Fireface (8+ outs / 4+ stereo pairs)
Behavior reproducible with both global and per-process/per-device tap descriptions.
Attenuation example: 4 stereo pairs → −12.04 dB observed.
Happy to provide a minimal sample, measurements, and device logs. Thanks!
— David
Hello!
I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.
Desired behavior:
Play audio through Bluetooth headset (AirPods)
Record unprocessed environmental audio from the iPhone's built-in microphone
Actual behavior:
When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
However, the actual audio data received is clearly still coming from the AirPods microphone
The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds
Environment Details
Device: iPhone 12 Pro Max
iOS Version: 18.4.1
Hardware: AirPods
Audio Framework: AVAudioEngine (also tried AudioQueue)
Code Attempted
I've tried multiple approaches to force the correct routing:
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use built-in mic
    try? session.setCategory(.playAndRecord,
                             options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure audio engine with native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer
        // Despite showing "Built-in Microphone" in the route, the audio appears to be
        // coming from the AirPods with voice isolation applied - welp!
    }

    try? audioEngine.start()
}
I've also tried various combinations of:
Different audio session modes (.default, .measurement, .voiceChat)
Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
Setting session.setPreferredInput() both before and after activation
Diagnostic Observations
When AirPods are connected:
AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
The actual audio data received shows clear signs of AirPods' voice isolation processing
Background/environmental sounds are actively filtered out...
When recording test audio played near the phone (not through the app), the recording is nearly silent. Only the headset voice comes through.
Questions
Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
Are there any lower-level configurations that might resolve this issue?
Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on macOS 26 beta where AVFoundation fails to detect any input devices, and AVAudioEngine.start() throws the familiar error 10877.
FB#: FB19024508
Strange Behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.
However, CoreAudio HAL does detect all input/output devices:
Using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole.
This suggests the lower-level audio stack is functional.
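For reference, a sketch of that HAL-level check (the property and calls are standard Core Audio; the helper name is mine):
import CoreAudio
// Sketch: enumerate devices directly via kAudioHardwarePropertyDevices to confirm
// the lower-level stack still sees them even when AVFoundation reports none.
func allAudioDeviceIDs() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    let systemObject = AudioObjectID(kAudioObjectSystemObject)
    guard AudioObjectGetPropertyDataSize(systemObject, &address, 0, nil, &dataSize) == noErr else { return [] }
    var deviceIDs = [AudioObjectID](repeating: 0, count: Int(dataSize) / MemoryLayout<AudioObjectID>.size)
    guard AudioObjectGetPropertyData(systemObject, &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs
}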
I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant
I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508.
Hi,
I've had a new deck installed in my car for about 1.5 weeks.
I'm having compatibility issues with my iPhone 15 Pro Max.
It happens both wired and wirelessly; I get the error "Accessory not supported by this device". It used to happen all the time, now it's 50/50. Sometimes it works.
I've removed and re-added Bluetooth multiple times on both the phone and the deck. I bought a Belkin USB-C to USB-A cable today and it seemed to fix it, but the problem comes back.
I've changed the setting Face ID & Passcode > Allow Access When Locked > Accessories.
The car stereo installer reckons it's definitely an issue with the phone, not the deck; I'm inclined to believe him since the error says "by this device".
Any advice appreciated.