# The Frame Output

Streaming Frames using the Frame Output.

The `CameraFrameOutput` allows streaming Frames in realtime, making them accessible via a JS worklet function.

## Creating a Frame Output
```tsx
function App() {
  const device = useCameraDevice('back')
  const frameOutput = useFrameOutput({
    // ...options
    onFrame(frame) {
      'worklet'
      console.log(`Received ${frame.width}x${frame.height} Frame!`)
      frame.dispose()
    }
  })
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      isActive={true}
      device={device}
      outputs={[frameOutput]}
    />
  )
}
```

Alternatively, you can use the `useCamera` hook instead of the `<Camera>` component:

```tsx
function App() {
  const device = useCameraDevice('back')
  const frameOutput = useFrameOutput({
    // ...options
    onFrame(frame) {
      'worklet'
      console.log(`Received ${frame.width}x${frame.height} Frame!`)
      frame.dispose()
    }
  })
  const camera = useCamera({
    isActive: true,
    device: device,
    outputs: [frameOutput],
  })
}
```

Or, without hooks, create the Frame Output and Camera session directly via `HybridCameraFactory`:

```ts
const session = await HybridCameraFactory.createCameraSession(false)
const device = await getDefaultCameraDevice('back')
const frameOutput = HybridCameraFactory.createFrameOutput({ /* options */ })

const workletRuntime = createWorkletRuntimeForThread(frameOutput.thread)
scheduleOnRuntime(workletRuntime, () => {
  'worklet'
  frameOutput.setOnFrameCallback((frame) => {
    console.log(`Received ${frame.width}x${frame.height} Frame!`)
    frame.dispose()
  })
})

await session.configure([
  {
    input: device,
    outputs: [
      { output: frameOutput, mirrorMode: 'auto' }
    ],
    config: {}
  }
], {})
await session.start()
```

See `FrameOutputOptions` for a full list of configuration options for the Frame Output.
> **Dependency Required:** The `CameraFrameOutput` requires `react-native-worklets` to be installed in order to synchronously run the `onFrame(...)` function on a parallel JS Worklet Runtime.
## Disposing a Frame

A Frame is a GPU-backed buffer, streamed at full resolution and frame rate.
The Camera pipeline keeps a small pool of buffers to re-use; if that pool is full, the pipeline stalls and subsequent Frames are dropped.
To prevent frame drops, dispose of a Frame via `dispose()` once you are done using it:
```tsx
const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet'
    try {
      // processing...
    } finally {
      frame.dispose()
    }
  }
})
```

## Choosing a Pixel Format
Choosing an appropriate `pixelFormat` depends on your Frame Processor's usage.
While `'rgb'` is the most commonly used format in visual recognition models, it is far from the most efficient format for a Camera pipeline, as it requires an additional conversion and uses ~2.6x more bandwidth than `'yuv'`.
If you render to native Surfaces (e.g. via GPU pipelines or Media Encoders), you may also be able to use `'native'`, which uses whatever the currently selected `CameraFormat`'s `nativePixelFormat` is and requires zero conversions.
Use `'native'` with caution, as your `CameraFormat`'s `nativePixelFormat` might also be a RAW format like `'raw-bayer-packed96-12-bit'`, or a vendor-specific private format (`'private'`).
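The ~2.6x figure follows from the bytes-per-pixel of each format: 8-bit 4:2:0 YUV stores 1.5 bytes per pixel, while RGB Frames are typically delivered as 4-byte RGBA/BGRA. A quick back-of-the-envelope sketch (the 4K@60 numbers are just an illustrative assumption):

```typescript
// Bytes per pixel (assumptions: 'yuv' = 8-bit 4:2:0, 'rgb' = 4-byte RGBA/BGRA)
const YUV_420_BYTES_PER_PIXEL = 1.5 // Y: 1 byte/pixel, U+V: 0.5 bytes/pixel (shared per 2x2 block)
const RGBA_BYTES_PER_PIXEL = 4

function bandwidthGBps(width: number, height: number, fps: number, bytesPerPixel: number): number {
  return (width * height * bytesPerPixel * fps) / 1e9
}

// Example: a 4K (3840x2160) stream at 60 FPS
const yuv = bandwidthGBps(3840, 2160, 60, YUV_420_BYTES_PER_PIXEL)
const rgba = bandwidthGBps(3840, 2160, 60, RGBA_BYTES_PER_PIXEL)
console.log(yuv.toFixed(2))          // → 0.75 GB/s
console.log(rgba.toFixed(2))         // → 1.99 GB/s
console.log((rgba / yuv).toFixed(2)) // → 2.67x
```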
Examples:
- OpenCV natively supports YUV, so streaming in `'yuv'` is most efficient.
- LiteRT supports YUV, but converts to RGB internally - so streaming in `'rgb'` directly is more efficient, as the conversion is handled in the Camera pipeline.
- react-native-skia supports importing external textures (PRIVATE), as well as YUV and RGB Frames, so streaming in `'native'` is most efficient, with `'yuv'` as a next-best alternative.
After selecting a `pixelFormat`, inspect your Frame for the actual pixel format you receive via `Frame.pixelFormat`:
```tsx
const frameOutput = useFrameOutput({
  pixelFormat: 'yuv',
  onFrame(frame) {
    'worklet'
    console.log(frame.pixelFormat) // 'yuv-420-8-bit-full'
    frame.dispose()
  }
})
```
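Because the requested `pixelFormat` is only a hint, the concrete format you receive (e.g. `'yuv-420-8-bit-full'`) can vary, so it can be useful to branch on `Frame.pixelFormat` before running format-specific processing. A minimal sketch (the `describeFrame` helper and its string-prefix matching are hypothetical, not part of the Camera API):

```typescript
// Hypothetical helper: branch on the concrete pixel format string
// (e.g. 'yuv-420-8-bit-full') before running format-specific processing.
function describeFrame(pixelFormat: string, width: number, height: number): string {
  if (pixelFormat.startsWith('yuv-420-8-bit')) {
    // 8-bit 4:2:0: 1.5 bytes per pixel across the Y, U and V planes
    return `YUV 4:2:0 Frame (${width}x${height}, ${width * height * 1.5} bytes)`
  }
  if (pixelFormat.startsWith('rgb')) {
    return `RGB Frame (${width}x${height})`
  }
  // RAW or vendor-private formats need dedicated handling
  return `Unhandled pixel format: ${pixelFormat}`
}

console.log(describeFrame('yuv-420-8-bit-full', 1920, 1080))
// → YUV 4:2:0 Frame (1920x1080, 3110400 bytes)
```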