Hi,
I’m trying to implement the new PhotoKit PHBackgroundResourceUploadExtension. I created the extension, enabled full photo library access in the host app, and registered the extension point using the string: com.apple.photos.background-upload.
However, when I attempted to enable the extension with:
try library.setUploadJobExtensionEnabled(true)
I received the following error:
Error Domain=PHPhotosErrorDomain Code=-1 "(null)"
This happens when running the app on Xcode 26.1 and 26.2 Beta, using the iPhone 17 Pro Max simulator (iOS 26.1 and 26.2).
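For completeness, the call is wrapped roughly like this (a minimal sketch using the setUploadJobExtensionEnabled call exactly as in my snippet above, just to surface the NSError details):

import Photos

let library = PHPhotoLibrary.shared()
do {
    try library.setUploadJobExtensionEnabled(true)
} catch {
    let nsError = error as NSError
    print("domain: \(nsError.domain), code: \(nsError.code)") // prints PHPhotosErrorDomain, -1
}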
My question is: Is this extension supported on the simulator?
I’m asking because at the moment it’s difficult for me to test this on a physical device.
Also, what does the error mean?
Thanks.
Hi Apple Developer Support Team,
We are developing an iOS application using a camera package within a hybrid (cross-platform) framework, and we would like to confirm whether it is possible to disable the camera shutter sound programmatically.
As per our understanding, the shutter sound on iOS is system-controlled and depends on the device’s silent/ring mode, and there is no App Store–approved API available to force-disable this sound. Kindly confirm whether this understanding is correct or if any supported alternative approach exists for hybrid or native implementations.
Thank you for your clarification.
Best regards,
ParkhyaSolutions
Hi everyone,
I’m seeing recurring internal AVFoundation camera logs on iOS 26.2 and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline.
These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.
<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)
To rule out issues caused by my own code, I had GPT create a minimal SwiftUI example from scratch. Even in this clean, minimal setup, the same logs appear on iOS 26.2.
The exact same logic did not produce these logs on iOS 18.x.
My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview.
Thanks in advance for any insight.
Example Code
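A minimal sketch of the pipeline described above (an AVCaptureSession delivering frames to an AVCaptureVideoDataOutput; the FrameReceiver type name is illustrative, and the preview layer is omitted):

import AVFoundation

final class FrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.frames")

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    // Called for every frame; real-time processing happens here.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
    }
}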
At which point in the image processing pipeline does iOS apply the white balance gains which can be set via AVCaptureDevice.setWhiteBalanceModeLocked(with:completionHandler:)?
Are those gains applied in the analog part of the camera pipeline, before the pixel voltage gets converted via the ADC to digital values? Or does the camera first convert the pixel voltages to digital values and then the gains are applied to the digital values?
Is this consistent across devices or can the behavior vary from device to device?
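For reference, this is how the gains in question are set (a minimal sketch; the specific gain values are arbitrary examples):

import AVFoundation

func lockWhiteBalance(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    var gains = device.deviceWhiteBalanceGains       // current gains as a starting point
    gains.redGain = min(2.0, device.maxWhiteBalanceGain)
    gains.greenGain = 1.0                            // gains must be >= 1.0
    gains.blueGain = min(1.5, device.maxWhiteBalanceGain)
    device.setWhiteBalanceModeLocked(with: gains) { _ in }
}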
While implementing the new background backup feature introduced in iOS 26.1, I create a PHAssetResourceUploadJob in an Extension. On iOS 26.1, the system successfully triggers the upload. However, on iOS 26.2, although the job is created successfully and all related configurations are correctly set, the system does not trigger the upload.
Could you please help confirm the cause of this issue? Thank you.
Hello Apple Developer Support,
I’m developing a virtual camera using the CMIOExtensionDevice / CMIOExtensionStreamSource APIs on macOS. While the virtual camera appears in system settings and apps like Zoom and Google Meet, the video output exhibits the following issues:
Jittering frames: The first frame sometimes appears correctly, but subsequent frames flicker or jitter.
Solid color fill: Eventually, the camera feed fills entirely with a solid accent color (e.g., blue), rather than the intended video content.
Console logs: Repeated messages appear in Console.app:
Invalid display 0x00000000
Setup details:
The virtual camera is created using CMIOExtensionDevice and CMIOExtensionStream.
Video frames are rendered from NSImage/CGImage using CGContext and copied into CVPixelBuffers.
Frame delivery is controlled by a DispatchSourceTimer at 60 FPS.
macOS version: 26.2
Xcode version: 26.1
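For reference, the frame rendering is roughly like this (a minimal sketch; the pixel format and buffer attributes are my assumptions):

import CoreGraphics
import CoreVideo

func makePixelBuffer(from image: CGImage, width: Int, height: Int) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGBitmapContextCompatibilityKey: true,
        kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
    ]
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs as CFDictionary, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // A CGContext backed directly by the buffer's memory; no display is involved.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                                              CGBitmapInfo.byteOrder32Little.rawValue) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}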
Observations:
The Invalid display 0x00000000 logs suggest that CGContext drawing or NSImage operations are failing in headless mode (i.e., there is no real display attached to the virtual camera).
Using CIContext with .useSoftwareRenderer = true appears to mitigate some flicker, but not entirely.
Questions / Requests:
Is it expected that CoreMediaIO virtual cameras cannot reliably render CGImage / NSImage frames offscreen?
Are there recommended APIs or approaches to render virtual camera frames fully headless to avoid display-dependent jitter?
Is there any documentation or sample code from Apple showing stable video output from a virtual camera extension that does not rely on a physical display?
Any guidance or examples would be greatly appreciated. This issue prevents the virtual camera from being used reliably in standard video apps.
Thank you,
Savvy
I am following the Apple sample code and trying to add a manual focus lens position slider:
@available(iOS 18.0, *)
private func addCameraControls() {
    // Remove any controls previously added to the session.
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}
So there are these AVCaptureSessionControlsDelegate methods:
final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}
So when self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user through the app UI. Is there a way to see if self.cameraControlFocusSlider is active or being used?
Please note that I will have more than one AVCaptureSlider in the final code.
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds).
Two permission keys have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error)
            completion(success, error)
        }
    }
}
guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "cant find Live Photo ")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        xxxxx
    } else {
        xxxxx
    }
}
Really need help, thanks
Hello!
In iOS 17.5, photogrammetry sessions cannot be performed on iPhones without LiDAR, but I don't think there is much difference in GPU performance between models with and without LiDAR. For example, the iPhone 14 Pro and the iPhone 15 use the same A16 Bionic chip, so I think the GPU performance is also the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this?
In fact, we have confirmed that if you transfer images taken with an iPhone 16 without LiDAR to an iPhone 16 Pro and run a photogrammetry session using those images, a 3D model can be generated.
Also, will photogrammetry be able to be performed on high-performance iPhones without LiDAR in the future?
My app is a camera app that supports Picture-in-Picture (PiP) mode.
Normally, when the device rotates, I get the device orientation from iOS and use it to rotate the camera feed so that the preview stays correctly aligned.
However, when the app enters PiP mode, it is considered to be in the background, and I can no longer receive orientation updates from the system.
As a result, I can’t apply rotation corrections to the camera video in PiP mode.
Is there any way to retrieve device orientation while the app is in the background (specifically during PiP mode)?
Any guidance would be greatly appreciated.
Thank you!
I'm developing a video capture app using AVFoundation, designed specifically for use on a boat pylon to record slalom water skiing. This setup involves considerable vibration.
As you may know, the OIS that Apple has been adding to lenses since the iPhone 7 is actually very problematic in high-vibration circumstances, ironically creating very shaky video, whereas lenses without OIS produce perfectly stable video. Because of this, up until the iPhone 14, the solution for my app was simply to use the Selfie lens, which did not have OIS.
Starting with iPhone 14 through iPhone 16 (non-Pro models), technical specs suggest the selfie lens still does not include OIS. However, I’m still seeing the same kind of shaky video behavior I see on OIS-equipped lenses. The one hardware change I see in this camera module is the addition of PDAF (Phase Detection Autofocus), so that is my best guess as to what is causing the unstable video.
1- Does that make any sense - that in high vibration settings, PDAF could create unstable video in the same way that OIS does? Or could it be something else that was changed between the iPhone 13 and 14 Selfie lens?
Thinking that the issue was PDAF, I figured that if I enabled my app to set a Manual Focus level, that ought to circumvent PDAF (expecting that if a lens is manually focusing, it can’t also be autofocusing via PDAF).
However, even with manual focus locked via AVCaptureDevice in my app, on the Selfie lens of an iPhone 16, the video still comes out very shaky, basically unusable. I also tested with the built-in Apple Camera app (using the press-and-hold to lock focus and exposure) and another 3rd party camera app to lock focus, all with the same results, so it's not that my app just isn't correctly doing manual focus.
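For reference, the manual focus lock I'm doing looks roughly like this (a minimal sketch; the lens position value is an arbitrary example):

import AVFoundation

func lockFocus(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // Lock the lens at a fixed position (0.0 = nearest, 1.0 = farthest).
    device.setFocusModeLocked(lensPosition: 0.5) { _ in }
}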
So I'm stuck with these questions:
2- Does the selfie camera on iPhones 14–16 use PDAF even when focus is set to locked/manual mode?
3- Is there any way in AVFoundation to disable or suppress PDAF during video recording (e.g., a flag, device format setting, or private API)?
4- Is PDAF behavior or suppression documented or controllable via AVCaptureDevice or any related class?
5- If no control of PDAF is available, are there any best practices for stabilizing or smoothing this effect programmatically?
Note that I also have set my app to use the most aggressive form of stabilization available, so it defaults to .cinematicExtendedEnhanced, if that’s not available, then .cinematicExtended, etc. On the 16 Selfie lens, it is using .cinematicExtended. As an additional question:
6- Would those be the most appropriate stabilization settings for a high vibration environment, and if not, what would be best?
Hi,
I'm developing a SwiftUI app using RealityKit and ARKit for an AR measuring feature. I’ve noticed that after navigating away from my AR view and performing extensive cleanup (including removing all anchors/entities, pausing the ARSession, and nil-ing out all references), memory usage remains elevated and sometimes grows with repeated AR sessions.
Each time I enter and exit the AR view, memory increases
The memory does not return to the baseline after cleanup, even though all custom objects are deallocated.
Are there best practices beyond what I’ve described to ensure all ARKit/RealityKit resources are released after an AR session?
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: IOReturn -536870210.
How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera?
My environment:
Model Name: MacBook Pro
ProductVersion: 15.5
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1
#include <iostream>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/usb/IOUSBLib.h>

// Disable/enable a USB device
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
    std::cout << (enable ? "Enabling" : "Disabling") << " USB device with VID: 0x"
              << std::hex << vendorID << ", PID: 0x" << productID << std::endl;

    // Create a matching dictionary to find USB devices with the given VID/PID
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        std::cerr << "Failed to create USB device matching dictionary." << std::endl;
        return false;
    }

    // Set the VID/PID matching criteria
    CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
    CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
    CFRelease(vendorIDRef);
    CFRelease(productIDRef);

    // Get an iterator over the matching devices.
    // (IOServiceGetMatchingServices consumes the matchingDict reference,
    // so it must not be released here, even on failure.)
    io_iterator_t deviceIterator;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
        std::cerr << "Failed to get USB device iterator." << std::endl;
        return false;
    }

    io_service_t usbDevice;
    bool result = false;
    int deviceCount = 0;

    // Iterate over all matching devices
    while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
        deviceCount++;

        // Get the device's registry path
        char path[1024];
        if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
            std::cout << "Found device at path: " << path << std::endl;
        }

        // Open the device
        IOCFPlugInInterface **plugInInterface = NULL;
        IOUSBDeviceInterface **deviceInterface = NULL;
        SInt32 score;
        IOReturn ret = IOCreatePlugInInterfaceForService(usbDevice,
                                                         kIOUSBDeviceUserClientTypeID,
                                                         kIOCFPlugInInterfaceID,
                                                         &plugInInterface,
                                                         &score);
        if (ret == kIOReturnSuccess && plugInInterface) {
            ret = (*plugInInterface)->QueryInterface(plugInInterface,
                                                     CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                                     (LPVOID *)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);
        }
        if (ret != kIOReturnSuccess) {
            std::cerr << "Failed to open USB device interface. Error: " << ret << std::endl;
            IOObjectRelease(usbDevice);
            continue;
        }

        // Disable/enable the device
        if (enable) {
            // Enable the device by forcing it to re-enumerate
            ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device enabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to enable device. Error: " << ret << std::endl;
            }
        } else {
            // Disable the device by closing the connection
            ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device disabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to disable device. Error: " << ret << std::endl;
            }
        }

        // Release the device interface
        (*deviceInterface)->Release(deviceInterface);
        IOObjectRelease(usbDevice);
    }
    IOObjectRelease(deviceIterator);

    if (deviceCount == 0) {
        std::cerr << "No device found with specified VID/PID." << std::endl;
        return false;
    }
    return result;
}
Following the documentation, I built a simple demo to verify.
My env:
ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz Quad-Core Intel Core i5
Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IOKitPersonalities</key>
    <dict>
        <key>UVCamera</key>
        <dict>
            <key>CFBundleIdentifierKernel</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOUserService</string>
            <key>IOMatchCategory</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProviderClass</key>
            <string>IOUserResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClass</key>
            <string>UVCamera</string>
            <key>IOUserServerName</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
        </dict>
    </dict>
    <key>OSBundleUsageDescription</key>
    <string></string>
</dict>
</plist>
UVCamera.cpp
//
// UVCamera.cpp
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"
kern_return_t
IMPL(UVCamera, Start)
{
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}
UVCamera.iig
//
// UVCamera.iig
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#ifndef UVCamera_h
#define UVCamera_h
#include <Availability.h>
#include <DriverKit/IOService.iig>
class UVCamera: public IOService
{
public:
    virtual kern_return_t
    Start(IOService * provider) override;
};
#endif /* UVCamera_h */
Then I build it with Xcode and move it to /Library/DriverExtensions:
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
However, the dext can't be loaded:
kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
What's the problem? Can anyone help me?
We are facing a strange issue where a small portion of our large userbase cannot start the capture session in our app, as it gets interrupted with the following reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
Our users are all from iPhones, no one is using an iPad. Just to be sure we have set
session.isMultitaskingCameraAccessEnabled = true
but it does not seem to make any difference.
Another weird scenario we are seeing on an even smaller number of users is that the following call:
AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
returns nil. A quick look at our error reports shows this happening on iPhone XR, 13, and 14 models. They should all support this device type.
Any help on investigating these issue would be greatly appreciated!
Hey,
Quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen:
Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app.
I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild the UI. At least this way the user experience would be improved from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible.
Thanks,
Alex
Hi, I’ve developed a photo app that includes a photo deletion feature.
Some users have reported encountering PHPhotosError.operationInterrupted (3301) when attempting to delete photos.
Initially, I suspected that some of the assets might have a sourceType of typeiTunesSynced, since the documentation notes that iTunes-synced assets cannot be edited or deleted.
However, after checking the logs, all of the assets involved are of typeUserLibrary.
Additionally, the user mentioned that some photos in the iPhone Photos app do not show a delete button.
I’m unsure whether the absence of the delete button is related to the 3301 error.
I’d like to confirm the following:
Under what conditions does PHPhotosError.operationInterrupted (3301) occur, and how should it be handled?
Why do some photos in the iPhone Photos app not show a delete button?
The code for deleting photos is as follows:
PHPhotoLibrary *library = [PHPhotoLibrary sharedPhotoLibrary];
[library performChanges:^{
    PHFetchResult *assetsToBeDeleted = [PHAsset fetchAssetsWithLocalIdentifiers:delUrls options:nil];
    if (assetsToBeDeleted) {
        [PHAssetChangeRequest deleteAssets:assetsToBeDeleted];
    }
} completionHandler:^(BOOL success, NSError *error) {
    // ... (handle success / error)
}];
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput, with the type set via [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]], to scan for faces.
After updating to OS 26 Beta 2 and iOS 26 Beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (9th Gen)
iPad air (4th Gen)
iPhone 15
This issue does not occur on any other devices I have.
I tried running the AVFoundation sample code on the Apple Developer site on the above device. The same problem still occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after OS 26 beta and iOS 26 beta? Or is there some problem on the OS side?
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus)(scene monitoring)(simulated aperture)(dog/cat heads/groupIDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
min focus distance
newer phones have multiple lenses
automatic switching behavior
Use a virtual device like the builtInDualWideCamera or builtInTripleCamera, rather than just the builtInWideAngleCamera.
Set the videoZoomFactor to 2. You're done. (See the sketch below.)
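A minimal sketch of that advice (assuming an already-configured capture session):

import AVFoundation

// Prefer a virtual device so the system can switch lenses for close focus,
// then start at 2x so the field of view matches the single wide camera.
if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = 2.0
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device: \(error)")
    }
}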
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead tries to be as close as possible to the real color of the scene, regardless of the illumination.
Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRIs, skin problems, X-rays, and so on.
It's a writable and also readable format on all OS 26 platforms, supported through the AVCapturePhotoOutput APIs and through the Core Graphics API.
For Core Graphics there is a new DICOM entry in the property dictionary, which includes all the available and defined DICOM properties in a file. Finder will also display all those in the Info panel.
(Why would a developer want to use it?) Not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you can also share with health providers or your doctors.
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution)
Lidar vs Dual, etc:
Lidar: Best for absolute depth, real world scale and computer vision
Dual, etc: relative, disparity-based, less power, photo effects
Also see the 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking"
Question 2
Can the TrueDepth and LiDAR cameras run at 60 fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps.
Question 3
What’s the first class way to use PhotoKit to reimplement a high performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells)
Use the PHCachingImageManager to get media content delivered before you need to display it.
Specify the size you need for grid-sized display.
Set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat, and PHImageRequestOptionsResizeModeFast, as in the sketch below.
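A rough sketch of those suggestions together (the grid cell size and the assets array are placeholders):

import Photos

let cachingManager = PHCachingImageManager()
let options = PHImageRequestOptions()
options.deliveryMode = .fastFormat   // PHImageRequestOptionsDeliveryModeFastFormat
options.resizeMode = .fast           // PHImageRequestOptionsResizeModeFast
let cellSize = CGSize(width: 200, height: 200) // your grid cell size in pixels
cachingManager.startCachingImages(for: assets, targetSize: cellSize,
                                  contentMode: .aspectFill, options: options)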
Question 4
For rendering a live preview of the video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path.
Use a video data output + AVSampleBufferDisplayLayer if you need to modify the image data.
SwiftUI Image is optimized for static image content.
Question 5
Is there a way to configure the AVFoundation builtInLiDARDepthCamera to provide a depth map as accurate as ARKit at close range?
The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps
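In code, enabling that filtering looks roughly like this (a sketch; the surrounding session setup is omitted):

import AVFoundation

let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = true // smooths noise and fills invalid depth values
if session.canAddOutput(depthOutput) {
    session.addOutput(depthOutput)
}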
Question 6
Pyramid-based photo editing in core image (such as adobe camera raw highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust.
Also, the noise reduction in the CIRawFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image:
downsample it by a factor of two multiple times using imageByApplyingAffineTransform
apply additional CIKernels to each downsampled image as needed
use a custom CIKernel to combine the results (see the sketch below)
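A small sketch of the downsampling step, using the Swift spelling transformed(by:) of imageByApplyingAffineTransform (the level count is arbitrary):

import CoreImage

// Build a simple image pyramid by repeatedly halving the image.
func pyramid(of image: CIImage, levels: Int) -> [CIImage] {
    var result = [image]
    var current = image
    for _ in 1..<levels {
        current = current.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        result.append(current)
    }
    return result
}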
Question 7
Is UIImagePickerController the best way to integrate an in-app camera for a “non-camera” app?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.
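A minimal sketch of presenting it (delegate handling omitted):

import UIKit

func presentCamera(from host: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = .camera   // system-provided capture UI
    picker.delegate = host
    host.present(picker, animated: true)
}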
Question 8
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point.
The photo will have to be re-inserted into the Photo library as an adjustment.
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha.
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)