Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

ShazamKit for Android and 16 KB native library alignment
Hello, I'm working on a Flutter app targeting both Android and iOS, where I implemented ShazamKit. To achieve that, I first tried the flutter_shazam_kit package, but since it's not maintained anymore, I forked it and tried to update it to meet the Google Play Store requirements, as you can see here: https://github.com/mregnauld/flutter_shazam_kit/tree/fix-16k Unfortunately, after trying everything, my app still doesn't meet the (not so) new 16 KB native library alignment requirement. I'm 100% sure it comes from that package, because the error message disappears if I remove it from my app. After investigating, it seems that the problem comes from ShazamKit for Android (which you can find here: https://developer.apple.com/download/all/?q=Android%20ShazamKit), and especially from the .so files in the .aar file. Is there anything I can do to fix that, or should I wait until the ShazamKit team fixes it? I'm totally stuck, so any help is highly appreciated. Thanks.
3
0
613
Oct ’25
macOS Tahoe 26.0 (25A354) sound glitches when opening the Simulator app
Hey there, I just upgraded to macOS Tahoe on an Apple MacBook Pro 2019 16-inch. I am using IntelliJ IDEA and Flutter to develop a mobile app, which I test on the Simulator app running iOS 18.4. The issue: when I start the Simulator app (both during the loading phase and while it is running), the audio from an already open YouTube tab in Safari glitches and becomes noise (this happens in the Chrome browser as well). A fix I found online is to kill the audio daemon on macOS, which works using the command "sudo killall coreaudiod". This kills the audio process (while the Simulator is running); macOS then restarts the audio daemon, and the audio works fine alongside the open Simulator. I just want to ask: is there a permanent fix for this? Is Apple working on a fix in an upcoming update?
3
5
1.3k
Oct ’25
Significant Uptick in AVCaptureSessionWasInterrupted (Reason 4) Leading to Camera Black Screen and AVError Code -11803
In the latest production release of our iOS app (deployed via the App Store), we've observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting and reinstalling the app. This interruption causes the camera preview to remain black, and any attempt to capture an image fails. An employee ran into this and was able to get a recording; we see the following error when attempting to take photos: "Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}" Some questions from our team: What common system conditions or foreground app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and PiP, but neither of these is true in the logs we see. Is manual recovery of the session required? Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session? Are there any instruments in Xcode you would recommend for evaluating the increase in reason 4? Best, Ben
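A minimal sketch of observing the interruption and its end, in case it helps with reproduction or recovery. The notification names, the userInfo key, and the InterruptionReason enum are public AVFoundation API; the recovery step (restarting the session once the interruption ends) is just one possible strategy, not a confirmed fix for the persistent reason-4 case described above.

import AVFoundation

final class InterruptionObserver {
    private let session: AVCaptureSession
    private var tokens = [NSObjectProtocol]()

    init(session: AVCaptureSession) {
        self.session = session

        let interrupted = NotificationCenter.default.addObserver(
            forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main
        ) { note in
            // rawValue 4 == .videoDeviceNotAvailableWithMultipleForegroundApps
            if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
                print("Capture session interrupted, reason: \(reason)")
            }
        }

        let ended = NotificationCenter.default.addObserver(
            forName: .AVCaptureSessionInterruptionEnded, object: session, queue: .main
        ) { [weak self] _ in
            // If the session does not resume on its own, restarting it here is one option.
            guard let self, !self.session.isRunning else { return }
            DispatchQueue.global(qos: .userInitiated).async { self.session.startRunning() }
        }

        tokens = [interrupted, ended]
    }
}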
3
17
597
Jun ’25
iOS 17 camera capture assertions and issues
Hello, Starting in iOS 17, our application started having some issues publishing to our video session. More specifically, the video capture seems to be broken in some, but not all, sessions. What's troubling is that we're seeing it fail consistently every fourth session. It also fails silently, without reporting any problems to the app. We only notice that there are no frames being rendered or sent to the remote devices. Here's what shows up in the console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below. Thanks
3
1
1.4k
Oct ’25
Handling AVAudioEngine Configuration Change
Hi all, I have been quite stumped on this behavior for a little bit now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in. Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine. Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played. Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset? I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output; all input goes through a separate engine for simplicity. Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler checking whether the player isPlaying or not. If it is playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them. Thanks for any feedback!
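A minimal sketch of one way to approach this, assuming a single output engine with one player node. The .dataPlayedBack completion callback type (available since iOS 11) is documented to fire after the data has actually been played out, which may help distinguish "played" from "purged on reset"; the pending-buffer queue and the reconnect logic in the configuration-change observer are illustrative, not a confirmed pattern.

import AVFoundation

final class PlaybackQueue {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    private var pending: [AVAudioPCMBuffer] = []   // hypothetical resubmit queue
    private var observer: NSObjectProtocol?

    init(format: AVAudioFormat) {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try? engine.start()
        player.play()

        observer = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange, object: engine, queue: .main
        ) { [weak self] _ in
            guard let self else { return }
            // The engine is stopped at this point: reconnect, restart, then
            // reschedule anything that never reached the .dataPlayedBack callback.
            self.engine.connect(self.player, to: self.engine.mainMixerNode, format: format)
            try? self.engine.start()
            let toReplay = self.pending
            self.pending.removeAll()
            toReplay.forEach { self.schedule($0) }
            self.player.play()
        }
    }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        pending.append(buffer)
        player.scheduleBuffer(buffer, completionCallbackType: .dataPlayedBack) { [weak self] _ in
            // Per the docs this fires once the data has been played out, so it
            // should be safe to drop the buffer from the resubmit queue here.
            DispatchQueue.main.async {
                self?.pending.removeAll { $0 === buffer }
            }
        }
    }
}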
3
0
898
Oct ’25
Configuring CaptureVideoDelegate to avoid gamma/transfer function
I'm working on an application that uses the iPhone camera for scientific purposes and, as a result, would like to receive video in as unprocessed a format as possible. In particular, I'm interested in getting pixel buffers that contain pretty much the Bayer data as the sensor sees it, with the minimum processing of color possible. Currently we configure the AVCaptureDevice to fix the focus and exposure, use a low ISO with no gain, and set the white balance gains to 1. AVCaptureVideoDataOutput is using 32BGRA. What I'd like to do is remove any additional color and brightness processing such that the data is effectively processed with a linear transfer function (i.e. a gamma of 1). I thought that this might come down to the AVCaptureDevice activeColorSpace; we currently use P3_D65 for this. But there only seem to be a few choices (e.g. sRGB, HLG_BT2020), all of which I think affect the gamma. So: is it possible to control or specify the gamma / transfer function when using CaptureVideoDelegate? If not, does one of the color space settings have a defined gamma function that I can effectively reverse from the pixel data without losing too much information? Or is there a better way to capture video-rate images (15-30 fps) from the camera sensor that skips processing like this? Many thanks for any suggestions.
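As a starting point, the supported color spaces can at least be enumerated per format; activeColorSpace can only take one of those values, and whether any of them is truly linear is exactly the open question here (Apple Log, where a format offers it, is the closest documented option). A small sketch, assuming the session's automatic wide-color configuration is disabled so a manual choice is not overridden:

import AVFoundation

func dumpColorSpaces(for device: AVCaptureDevice) {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        print("\(dims.width)x\(dims.height):", format.supportedColorSpaces)
    }
}

func setColorSpace(_ space: AVCaptureColorSpace, on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // Only valid if the active format actually lists this color space.
    if device.activeFormat.supportedColorSpaces.contains(space) {
        device.activeColorSpace = space
    }
}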
3
0
148
Mar ’25
AVAudioSessionCategoryPlayback is not allowed while CallKit call is active
We require assistance in resolving a critical audio design conflict within our Push-to-Talk (PTT) application. Our current volume amplification strategy—which relies on applying a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback—is working successfully when PTT is used independently. However, upon integrating and reporting the same PTT call through the CallKit framework, this amplification effect is lost. The CallKit integration appears to be forcing a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
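For reference, a minimal sketch of where audio-session configuration usually lives when the call is reported through CallKit: configure the session up front and treat provider(_:didActivate:) as the point where it actually becomes active. Whether a plain .playback category can survive CallKit's own configuration is the open question in this post; .playAndRecord with .defaultToSpeaker is the commonly used fallback, and the gain-based amplification is assumed to happen elsewhere in the PCM path.

import AVFoundation
import CallKit

final class CallAudioController: NSObject, CXProviderDelegate {
    func configureSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord,
                                    mode: .voiceChat,
                                    options: [.defaultToSpeaker, .allowBluetooth])
        } catch {
            print("Audio session configuration failed: \(error)")
        }
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // The session CallKit hands over is the one that is actually live;
        // start or resume the PTT playback engine here.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio tied to the call.
    }

    func providerDidReset(_ provider: CXProvider) {
        // Tear down any in-flight calls and audio.
    }
}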
3
0
374
Nov ’25
ALAssetsLibrary compile error on iOS 26
macOS version: 26.0 (25A354). Xcode version: Version 26.0 (17A324). The project fails to compile with the following errors:
SwiftExplicitDependencyCompileModuleFromInterface arm64 /Users/zhz/Library/Developer/Xcode/DerivedData/ModuleCache.noindex/AssetsLibrary-HTIJ05N58KN3.swiftmodule
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/usr/lib/swift/AssetsLibrary.swiftmodule/arm64e-apple-ios.swiftinterface:10:25: error: 'ALAssetsLibrary' is unavailable in iOS: Use PHPhotoLibrary from the Photos framework instead 8 | public import _StringProcessing 9 | public import _SwiftConcurrencyShims 10 | extension AssetsLibrary.ALAssetsLibrary { | `- error: 'ALAssetsLibrary' is unavailable in iOS: Use PHPhotoLibrary from the Photos framework instead 11 | #if compiler(>=5.3) && $NonescapableTypes 12 | @available(iOS, introduced: 9.0, deprecated: 9.0, obsoleted: 26.0)
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/System/Library/Frameworks/AssetsLibrary.framework/Headers/ALAssetsLibrary.h:80:12: note: 'ALAssetsLibrary' was obsoleted in iOS 26.0 78 | 79 | OS_EXPORT AL_DEPRECATED(4, "Use PHPhotoLibrary from the Photos framework instead") 80 | @interface ALAssetsLibrary : NSObject { | `- note: 'ALAssetsLibrary' was obsoleted in iOS 26.0 81 | @package 82 | id _internal;
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/usr/lib/swift/AssetsLibrary.swiftmodule/arm64e-apple-ios.swiftinterface:1:1: error: failed to build module 'AssetsLibrary'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.2 effective-5.10 (swiftlang-6.2.0.17.14 clang-1700.3.17.1)', while this compiler is 'Apple Swift version 6.2 effective-5.10 (swiftlang-6.2.0.19.9 clang-1700.3.19.1)'). Please select a toolchain which matches the SDK.
3
4
1.8k
Oct ’25
AVCaptureSession startRunning is slow
AVCaptureSession's startRunning method is thread blocking and seems to be slow. What is this method doing behind the scenes? For context: I'm working on Simulator Camera support and I have a 'fake' AVCaptureDevice that might be causing this. My hypothesis is that AVCaptureSession tries to connect to the device and waits for a notification to be posted back. I'd love to find a way to let my fake device message AVCaptureSession that it's connected.
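For what it's worth, startRunning() is documented as a blocking call, which is why Apple's sample code dispatches it to a dedicated serial queue; a minimal sketch of that pattern (configureSession() is a placeholder for the app's own input/output setup):

import AVFoundation

let sessionQueue = DispatchQueue(label: "camera.session.queue")
let session = AVCaptureSession()

func startCamera() {
    sessionQueue.async {
        // configureSession()  // add inputs/outputs between begin/commitConfiguration
        session.startRunning() // blocks until the session is running or has failed
    }
}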
3
0
294
Jul ’25
Question about PT Framework channel tone behaviour
I've been wondering if there is a way to modify or even disable the tones that indicate channel states. The behaviour regarding tones seems like a black box with little documentation. During migration to Apple's PT Framework we've noticed that there are a few scenarios where a tone is played that doesn't match certain certifications. For example, moving from one channel to another produces a tone that would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but it doesn't mirror the behaviour of TETRA, which is what would be wanted in this case. I'm also wondering whether there is any way to communicate feedback directly regarding the PT Framework?
3
0
408
Oct ’25
Format of 14-bit RAW bayer data from lower bit camera sensor?
I'm working on an application that uses the iPhone camera for scientific purposes and, as a result, would like to receive sensor data in as unprocessed a format as possible. I'm using AVCapturePhotoOutput to take Bayer RAW stills and receiving data in kCVPixelFormatType_14Bayer_RGGB format. However, I'm puzzled as to the content of the bits. I simply demosaic the image by taking each 2x2 square: RG GB and using R, (G+G)/2, B to get 16-bit RGB values, and this indeed works. However, I am puzzled by the values we are getting, as they seem to be approximately in the range 2048 - 16383. The top value is understandable: it is the maximum that you can fit in 14 bits (as implied by the pixel format type). However, we don't seem to be able to get lower than ~2048 no matter how black/dark we make the sensor. I'm aware that the sensor is probably not 14-bit (we're using the iPhone 16e camera) and that maybe this has to do with the way the sensor data is packaged. The Advances in iOS Photography video (https://developer.apple.com/videos/play/wwdc2016/501/) describes it as "10-bit sensor RAW packaged in 14 bits per pixel instead of eight." Is there any documentation describing what is going on here? It's vital for our use that we get as close to the raw camera sensor light readings as possible, so any pointers as to the mapping (e.g. decompanding?) being used would be extremely useful. Many thanks in advance for your help.
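A small sketch of reading the buffer with an assumed black-level pedestal subtracted before demosaicing. Treating the ~2048 floor as a pedestal is an assumption, not documented behavior, and the layout (one 16-bit little-endian sample per pixel with the value in the low 14 bits) is likewise assumed rather than confirmed:

import CoreVideo

func averagedRGB(from pixelBuffer: CVPixelBuffer,
                 blackLevel: UInt16 = 2048) -> (r: Double, g: Double, b: Double)? {
    guard CVPixelBufferGetPixelFormatType(pixelBuffer) == kCVPixelFormatType_14Bayer_RGGB else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)

    // Assumed pedestal subtraction; clamp at zero rather than wrapping.
    func level(_ v: UInt16) -> Double { Double(v > blackLevel ? v - blackLevel : 0) }

    var r = 0.0, g = 0.0, b = 0.0, count = 0.0
    for y in stride(from: 0, to: height - 1, by: 2) {
        let row0 = base.advanced(by: y * rowBytes).assumingMemoryBound(to: UInt16.self)
        let row1 = base.advanced(by: (y + 1) * rowBytes).assumingMemoryBound(to: UInt16.self)
        for x in stride(from: 0, to: width - 1, by: 2) {
            r += level(row0[x])                              // R
            g += (level(row0[x + 1]) + level(row1[x])) / 2   // two greens
            b += level(row1[x + 1])                          // B
            count += 1
        }
    }
    guard count > 0 else { return nil }
    return (r / count, g / count, b / count)
}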
3
0
189
May ’25
What changes were made to the VideoToolbox HEVC encoder in iOS 26?
Because I want to control the grid size and number of HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously, I used VTCompressionSession to accomplish this task, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have decoding issues. However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator — it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct. After my troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect. I created a VTCompressionSession using the following configuration.

var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)

let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
    VTCompressionSessionInvalidate(newSession)
}

Then use the following code to encode each Grid of the image.

let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)

If I try to display this abnormal image in the App, my console outputs the following error, so it can be inferred that the issue probably occurred during decoding.

createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray

It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I’m not sure whether some of these new properties must be set in the new system, and there’s no information about this in the official documentation.
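One way to narrow down where iOS 26 differs would be to dump what the encoder reports on both OS versions: the session's supported properties and the format description (hvcC parameter sets) attached to the first encoded sample. This is only a diagnostic sketch, not a known fix; nothing here is a documented iOS 26 change.

import VideoToolbox
import CoreMedia

func dumpEncoderInfo(session: VTCompressionSession, sampleBuffer: CMSampleBuffer) {
    var supported: CFDictionary?
    if VTSessionCopySupportedPropertyDictionary(session, supportedPropertyDictionaryOut: &supported) == noErr {
        print("Supported properties:", supported as Any)
    }
    if let desc = CMSampleBufferGetFormatDescription(sampleBuffer) {
        // For HEVC this includes the hvcC parameter sets (VPS/SPS/PPS) that the
        // decoderConfig is built from; comparing them across OS versions may show the difference.
        print("Format description extensions:", CMFormatDescriptionGetExtensions(desc) as Any)
    }
}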
3
0
582
Sep ’25
TypeScript definitions for MusicKit JS
I take it that MusicKit JS is built with TypeScript, based on the attributions in the script: https://js-cdn.music.apple.com/musickit/v3/musickit.js The script points to https://js-cdn.music.apple.com/musickit/v1/acknowledgements.txt – I assume this should be the v3 URL for the v3 version? It returns the same content nonetheless. This contains attributions for TypeScript. Currently there's a third-party effort with DefinitelyTyped, which publishes the NPM package @types/musickit-js. The latest supported version available is v1; there is no version compatible with v3. This makes it hard to use MusicKit JS v3 in a TypeScript project. Please publish the types, ideally on the CDN along with the musickit.js file. Also consider publishing an officially Apple-supported DefinitelyTyped package, or help maintain the existing @types/musickit-js to make consuming this even easier.
3
1
186
Jun ’25
AVAudioUnit host - PCM buffer output silent
Hi, I just started to develop audio unit hosting support in my application. Offline rendering seems to work, except that I hear no output, but why? I suspect something goes wrong with the player. I connect to CoreAudio in a different location in the code. Here are some error messages I have faced so far:

2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852

I have implemented an AVAudioEngine based AudioUnit host. Here I instantiate the player and the effect:

/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;

/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];

/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;

/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;

/* output node */
[[AVAudioOutputNode alloc] init];

/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];

[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];

ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline format:av_format maximumFrameCount:buffer_size error:&ns_error];

if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %d", [ns_error code]);
}

ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];

if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}

Then I render in a dedicated thread.

ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];

if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("error during audio engine start - %d", [ns_error code]);
}

[av_audio_sequencer prepareToPlay];

ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];

if(ns_error != NULL && [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %d", [ns_error code]);
}

[av_audio_player_node play];

while(is_running){
  /* pre sync */

  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;

  /* fill input buffer */

  /* schedule av input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;

  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;

  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];

  [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];

  /* render */
  ns_error = NULL;
  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];

  if(ns_error != NULL && [ns_error code] != noErr){
    g_warning("render offline error - %d", [ns_error code]);
  }
}

regards, Joël
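For comparison, a minimal Swift sketch of offline (manual rendering) playback that does produce output when the rendered buffers are written out; note it relies on the engine's own outputNode / mainMixerNode rather than a separately allocated AVAudioOutputNode. The chunk handler is a placeholder for whatever the host does with the rendered PCM data.

import AVFoundation

func renderOffline(file: AVAudioFile,
                   frames: AVAudioFrameCount = 4096,
                   handleChunk: (AVAudioPCMBuffer) -> Void) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.enableManualRenderingMode(.offline,
                                         format: file.processingFormat,
                                         maximumFrameCount: frames)
    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

    guard let chunk = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                       frameCapacity: frames) else { return }

    while engine.manualRenderingSampleTime < file.length {
        let remaining = AVAudioFrameCount(file.length - engine.manualRenderingSampleTime)
        let status = try engine.renderOffline(min(frames, remaining), to: chunk)
        guard status == .success else { break }
        handleChunk(chunk)  // e.g. write to an output AVAudioFile
    }

    player.stop()
    engine.stop()
}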
3
0
493
Aug ’25
Is Photo Library access mandatory for 24MP Deferred Photo Capture?
Hello everyone, I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in user's main Photos app so I thought I could store the photos in app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy. The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled" /** @property maxPhotoDimensions @abstract Indicates the maximum resolution of the requested photo. @discussion Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning]. Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled. */ @available(iOS 16.0, *) open var maxPhotoDimensions: CMVideoDimensions (btw. this note is not present in the docs https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions) Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data. According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library. To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way. https://developer.apple.com/videos/play/wwdc2023/10105/?time=799 This seems to create a hard dependency on the Photo Library for accessing 24MP images. My question is: Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary? For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library? Our goal is to follow Apple's privacy-first principles by avoiding requesting a PHPhotoLibrary authorization when our app's core function doesn't require access to the user's photo collection. Thank you for your time and any clarification you can provide.
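For completeness, a sketch of the flow described in the WWDC23 session: the deferred proxy is handed to the Photos library, which finishes processing in the background. As far as the public API documents, this library round-trip is the path to the final 24 MP asset; nothing below bypasses it, which is exactly the limitation being asked about.

import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    // iOS 17+: called instead of the regular processed-photo callback when the
    // system decides to defer final processing.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }
        PHPhotoLibrary.shared().performChanges {
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: proxyData, options: nil) // iOS 17+ resource type
        } completionHandler: { success, error in
            // The fully processed 24 MP image is later read back via PHImageManager.
            print("Saved proxy: \(success), error: \(String(describing: error))")
        }
    }
}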
3
3
600
Sep ’25
Audio Unit v3 host v2 third party plugins
Hi, I have just implemented an Audio Unit v3 host.

AgsAudioUnitPlugin *audio_unit_plugin;

AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;

AudioComponentDescription description;

guint i, i_stop;

if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}

audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];

/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

But this doesn't show me Audio Unit v2 plugins, why? regards, Joël
3
0
659
Aug ’25
Converting CVPixelBuffer 2vuy to a Metal Texture
I am working on a project for macOS where I am taking an AVCaptureSession's CVPixelBuffer and I need to convert it into a MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be a clear format conversion when converting to a Metal texture. I have been able to convert it to a texture, but the color space seems to be off, as it renders distorted colors with a double image. I believe 2vuy is a single-plane format and I have tried to account for that, but I am unaware of what is off. I have attached the CVPixelBuffer and the distorted MTLTexture, along with a laundry list of errors. On iOS my conversions are fine; it is only the macOS 2vuy pixel format that seems to have issues. My code for the conversion is also attached. If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture, I would greatly appreciate it. Many Thanks Conversion_Logs.txt ConversionCode.swift
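Two hedged options, in case they help. The first sidesteps manual 2vuy handling entirely by asking the data output for BGRA (assuming 32BGRA appears in availableVideoPixelFormatTypes on the Mac in question). The second keeps the 2vuy buffer and creates the texture with one of Metal's 4:2:2 pixel formats; .bgrg422 is assumed here to match '2vuy' (UYVY byte order), and a YCbCr-to-RGB conversion is still needed in the shader, which would explain the doubled, off-color image if the bytes are being read as BGRA.

import AVFoundation
import CoreVideo
import Metal

func configureBGRAOutput(_ output: AVCaptureVideoDataOutput) {
    // Let AVFoundation do the YCbCr-to-RGB conversion, assuming BGRA is offered.
    output.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
}

func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                              .bgrg422,  // assumed match for '2vuy' (UYVY order)
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0, &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}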
3
0
256
Apr ’25
Email "Updates to FairPlay Streaming Server SDK" makes no sense
Hi, while I don't normally use FairPlay, I got this email that is so strangely worded that I am wondering whether it makes sense to people who do know the SDK, or whether it contains a typo. You can only generate certificates for SDK 4, yet SDK 4 is no longer supported? (Also, "will not expire" is imprecise phrasing; certificates presumably will expire. I think it meant to say they are still valid / are not invalidated.)
3
0
368
Oct ’25
Apple watch for child
Hi, my daughter was given an Apple Watch due to her grandfather's passing. It is not GPS/cellular, and we cannot connect it to her Apple ID because of this. It doesn't seem right to leave her logged in to his Apple ID, but we are currently out of options. Is there a workaround to this? When I try to set her up from my phone, it tells me that the watch must have GPS/cellular to set it up. Why? Am I missing something?
3
0
160
Jun ’25
Why does CADisplayLink of an external UIScreen drift in time?
I am using Apple's original Lightning Digital AV-adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable. I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:). The callback is being called regularly, but with increasing delay relative to CADisplayLink.timestamp, so each time the callback is called, I have less and less time to draw the next frame (see the snippet below). Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely. Changing the external display's resolution (UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same). When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable, it does not drift, and it is very close to 0.0166667, as expected. Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle, or is the problem somewhere else? Can the drifting be stopped?

func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
3
0
437
Aug ’25