FrameOutputOptions
interface FrameOutputOptions

Configuration options for a CameraFrameOutput.
Properties
allowDeferredStart
allowDeferredStart: boolean

Allow this output to start later in the capture pipeline startup process.
Enabling this lets the camera prioritize outputs needed for preview first,
then start the CameraFrameOutput shortly afterwards.
This can improve startup behavior when preview responsiveness is more important than receiving frame-processor frames immediately.
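For illustration, a minimal sketch of enabling deferred start, assuming options are passed as a plain object. The interface here is a local stand-in mirroring only the property names documented on this page, and the pixel format union is assumed from the values discussed under pixelFormat below; the real types ship with the Camera library.

```typescript
// Local stand-in for the documented FrameOutputOptions shape (assumption:
// the real interface may have more members and a wider pixel format union).
interface FrameOutputOptionsSketch {
  allowDeferredStart?: boolean;
  dropFramesWhileBusy?: boolean;
  enableCameraMatrixDelivery?: boolean;
  enablePhysicalBufferRotation?: boolean;
  enablePreviewSizedOutputBuffers?: boolean;
  pixelFormat?: 'native' | 'yuv' | 'rgb';
}

// Prioritize preview startup; the frame output then starts shortly afterwards.
const options: FrameOutputOptionsSketch = {
  allowDeferredStart: true,
  pixelFormat: 'native',
};
```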
dropFramesWhileBusy
dropFramesWhileBusy: boolean

Whether to drop new Frames that arrive while the Frame Processor is still executing.
- If set to true, the CameraFrameOutput will automatically drop any Frames that arrive while your Frame Processor is still executing to avoid exhausting resources, at the risk of losing information since Frames may be dropped.
- If set to false, the CameraFrameOutput will queue up any Frames that arrive while your Frame Processor is still executing and immediately call it once it is free again, at the risk of exhausting resources and growing RAM.
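The effect of the two settings can be sketched independently of the Camera API. In this hedged illustration a "frame" is just a number and the processor is an async function that simulates work:

```typescript
// Simulated frame processor: records the frame, then "works" for 10 ms.
async function runProcessor(frame: number, log: number[]): Promise<void> {
  log.push(frame);
  await new Promise((resolve) => setTimeout(resolve, 10));
}

// dropFramesWhileBusy === true: frames arriving while busy are discarded.
function makeDroppingDelivery(log: number[]) {
  let busy = false;
  return async (frame: number) => {
    if (busy) return; // frame is dropped
    busy = true;
    try {
      await runProcessor(frame, log);
    } finally {
      busy = false;
    }
  };
}

// dropFramesWhileBusy === false: frames are queued and drained in order,
// so RAM grows if frames arrive faster than the processor can drain them.
function makeQueueingDelivery(log: number[]) {
  const queue: number[] = [];
  let draining = false;
  return async (frame: number) => {
    queue.push(frame);
    if (draining) return;
    draining = true;
    while (queue.length > 0) {
      await runProcessor(queue.shift()!, log);
    }
    draining = false;
  };
}
```

With three frames arriving back-to-back, the dropping variant processes only the first, while the queueing variant eventually processes all three.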
Default
true

enableCameraMatrixDelivery
enableCameraMatrixDelivery: boolean

Gets or sets whether the CameraFrameOutput attaches
a Camera Intrinsic Matrix to the Frames it produces.
Intrinsic Matrices are only supported if video stabilization is 'off'.
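A delivered intrinsic matrix lets consumers relate 3D camera-space points to 2D pixel coordinates. A minimal sketch of the standard pinhole projection (this is not this library's API; the matrix layout is assumed to follow the common [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] convention):

```typescript
// Assumed layout of a 3x3 camera intrinsic matrix:
// [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
type IntrinsicMatrix = [
  [number, number, number],
  [number, number, number],
  [number, number, number],
];

// Project a 3D point in camera space onto the 2D image plane:
// scale by focal length, divide by depth, shift by the principal point.
function projectPoint(
  K: IntrinsicMatrix,
  point: { x: number; y: number; z: number },
): { u: number; v: number } {
  const [[fx, , cx], [, fy, cy]] = K;
  return {
    u: (fx * point.x) / point.z + cx,
    v: (fy * point.y) / point.z + cy,
  };
}
```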
Default
false

enablePhysicalBufferRotation
enablePhysicalBufferRotation: boolean

Enable (or disable) physical buffer rotation.
- When enablePhysicalBufferRotation is set to true, and the CameraFrameOutput's outputOrientation is set to any value different from the Camera sensor's native orientation, the Camera pipeline will physically rotate the buffers to apply the orientation. The resulting Frame's orientation will then always be 'up', meaning it no longer needs to be rotated by the consumer.
- When enablePhysicalBufferRotation is set to false, the Camera pipeline will not physically rotate buffers, but will instead only provide the Frame's orientation relative to the CameraFrameOutput's target outputOrientation as metadata (see Frame.orientation), meaning consumers have to handle orientation themselves - e.g. by reading pixels in a different order, or by applying the orientation in a GPU rendering pass, depending on the use-case.
Setting enablePhysicalBufferRotation to true introduces
processing overhead.
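When physical rotation is disabled, a consumer typically translates the orientation metadata into a rotation it applies itself, e.g. in a GPU rendering pass. A hedged sketch: the 'up' value comes from the documentation above, but the other value names and the degree mapping here are illustrative assumptions, not this library's actual Orientation type.

```typescript
// Hypothetical orientation names; only 'up' is confirmed by the docs.
type Orientation = 'up' | 'right' | 'down' | 'left';

// Clockwise rotation (degrees) the consumer would apply per orientation.
const rotationDegrees: Record<Orientation, number> = {
  up: 0, // buffer already matches the target orientation
  right: 90,
  down: 180,
  left: 270,
};

function rotationFor(orientation: Orientation): number {
  return rotationDegrees[orientation];
}
```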
Default
false

enablePreviewSizedOutputBuffers
enablePreviewSizedOutputBuffers: boolean

Deliver smaller, preview-sized output buffers for Frame Processing.
This is useful for ML and computer vision workloads where full-resolution buffers are unnecessary and would only increase memory bandwidth and processing costs.
Other camera outputs (for example CameraVideoOutput) keep using
the selected full-resolution CameraFormat.
Default
false

pixelFormat
pixelFormat: TargetVideoPixelFormat

Sets the TargetVideoPixelFormat of the
CameraFrameOutput.
- The most efficient format is 'native', which internally just uses the currently selected CameraFormat's nativePixelFormat.
- Some CameraFormats may support natively streaming in a YUV format (e.g. if CameraFormat.nativePixelFormat == 'yuv-420-8-bit-video'), in which case 'yuv' can also be zero-overhead.
- If your Frame Processor absolutely requires RGB, you may set pixelFormat to 'rgb', which comes with additional processing overhead as the Camera pipeline will convert native frames to RGB (e.g. to 'rgb-bgra-8-bit').
Discussion
It is recommended to use 'native'
if possible, as this will use a zero-copy GPU-only path.
Other formats almost always require conversion at
some point, especially on Android.
If you need CPU-access to pixels, use
'yuv' instead of
'rgb' as a next best alternative,
as 'rgb' uses ~2.6x more bandwidth
than 'yuv' and requires additional
conversions as it is not a Camera-native format.
Only use 'rgb' if you really need
to stream Frames in an RGB format.
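The ~2.6x bandwidth figure follows directly from bytes-per-pixel arithmetic: 8-bit 4:2:0 YUV stores one full-resolution luma plane plus two quarter-resolution chroma planes (1.5 bytes per pixel), while 8-bit BGRA stores 4 bytes per pixel.

```typescript
// Per-frame size of an 8-bit 4:2:0 YUV buffer: Y plane (w*h bytes) plus
// two chroma planes at quarter resolution (w*h/4 bytes each) = 1.5 B/px.
function yuv420Bytes(width: number, height: number): number {
  return width * height * 1.5;
}

// Per-frame size of an 8-bit BGRA buffer: 4 bytes per pixel.
function bgraBytes(width: number, height: number): number {
  return width * height * 4;
}

// 4 / 1.5 = 2.67x, regardless of resolution.
const ratio = bgraBytes(1920, 1080) / yuv420Bytes(1920, 1080);
```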
Discussion
It is recommended to use 'native' and
design your Frame Processing pipeline to be fully GPU-based, such as
performing ML model processing on the GPU/NPU and rendering via Metal/Vulkan/OpenGL
by importing the Frame as an external sampler/texture (or via
Skia/WebGPU which use NativeBuffer zero-copy APIs), as the
Frame's data will already be on the GPU then.
If you use a non-'native' pixelFormat
in a GPU pipeline, your pipeline will be noticeably slower as CPU <-> GPU
downloads/uploads will be performed on every frame.