How do I ingest video using the frameReady API?
Phenix supports ingesting video into the Phenix platform from a source other than a traditional media device (camera, microphone, etc.) using the frameReady API, which essentially allows an application to write raw video into a frame buffer for ingest.
You might want to do this, for example, if your application acquires the camera video, performs some video processing or manipulation, and then sends the resulting video to Phenix.
This guide is intended for developer use. Please contact your Phenix account representative for additional assistance and guidance for your specific use case.
This article discusses how to access video and audio from your video device. It assumes that a PhenixChannelExpress or PhenixRoomExpress instance is available in your app.
The example code uses RoomExpress, but ChannelExpress can be used instead with the same results.
The examples in this article are for iOS devices. Similar code can be written for Android; however, a different approach, not covered in this article, is needed when using the Web SDK.
You cannot access the camera or mic if you are publishing via the Phenix SDK with camera and/or mic enabled. Use the frame-ready API as described below if you need to control the source (audio or video) yourself and want to feed the raw data to the Phenix SDK.
You do not need to use the frame-ready API if you just want Phenix to render the preview to a CALayer; that can be done directly. The frame-ready API is an option if you need to extract the raw frames (not only inject them), which may be useful if custom processing is needed before frames are rendered.
Do not open your own capture sessions when using the Phenix SDK. Doing so will cause issues; likely, the last capture session started will win, while the other will stop getting frames.
To provide audio and/or video data from your own source, use the approach described below, which does not cause Phenix to attempt to access the camera (and/or mic) and eliminates possible issues with resource contention. If you invoke publish (on any of the API objects), it will open the camera and select the properties according to the media constraints you passed in. An example of this is shown in the SDK documentation.
If you also want to display what is being captured locally, you can pass in a CALayer via withPreviewRenderer when you assemble the publisher options (the example code shows both versions: with and without preview).
If the Preview Renderer approach does not work, an alternative is described at the end of this article.
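For reference, a rough sketch of publisher options that include a local preview might look like the following; previewLayer and mediaConstraints are placeholders, and the exact withPreviewRenderer signature should be verified against the SDK documentation:
// Sketch: publisher options with a local preview rendered into your own CALayer.
// previewLayer (a CALayer) and mediaConstraints (PhenixUserMediaOptions) are placeholders.
let publishOptions = PhenixPCastExpressFactory.createPublishOptionsBuilder()
    .withCapabilities(Configuration.kPublisherCapabilities)
    .withMediaConstraints(mediaConstraints)
    .withPreviewRenderer(previewLayer)
    .buildPublishOptions()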
Instructions
Step 1: Null Video Device
Find the null video device using the following code.
This example has been simplified by using a dispatch group, which allows your app to wait for an asynchronous operation. This is not recommended for a production app; instead, consider using something like RxSwift.
var nullVideoDevice : PhenixSourceDeviceInfo? = nil
let dispatchGroup = DispatchGroup()
// NOTE: Using a dispatch group here for simplicity! `enumerateSourceDevices` works asynchronously.
dispatchGroup.enter()
self.phenixRoomExpress?.pcastExpress.pcast.enumerateSourceDevices({ (pcast:PhenixPCast?, devices:[PhenixSourceDeviceInfo]?) in
    guard let devices = devices else {
        // Always release the group, even if enumeration returns no devices.
        dispatchGroup.leave()
        return
    }
    for device in devices {
        if device.deviceType == PhenixSourceDeviceType.null {
            nullVideoDevice = device
            break
        }
    }
    dispatchGroup.leave()
}, PhenixMediaType.video)
dispatchGroup.wait()
Step 2: User Media Options
Assemble the user media options.
let gumOptions = PhenixUserMediaOptions()
gumOptions.video.capabilityConstraints[PhenixDeviceCapability.deviceId.rawValue] =
    [PhenixDeviceConstraint.initWith(nullVideoDevice!.id)]
gumOptions.video.capabilityConstraints[PhenixDeviceCapability.frameRate.rawValue] =
    [PhenixDeviceConstraint.initWith(15)]
Audio is not included in this example.
Be sure to specify a frame rate. The frame-ready API works via a callback that the SDK invokes at that rate, so frames are pulled, not pushed.
Step 3: User Media Stream
Obtain the local user media stream.
This example uses the dispatch group again for simplicity.
var userMediaStream : PhenixUserMediaStream? = nil
dispatchGroup.enter()
self.phenixRoomExpress?.pcastExpress.getUserMedia(gumOptions, { (status:PhenixRequestStatus, localUserMediaStream:PhenixUserMediaStream?) in
    // NOTE: Check status!
    userMediaStream = localUserMediaStream
    dispatchGroup.leave()
})
dispatchGroup.wait()
Step 4: Set Up Callback
Hook up the frame ready callback.
// Register the frame-ready callback on the local video track.
// NOTE: The registration call below follows the Phenix iOS SDK frame-ready API;
// consult the SDK documentation for the exact signature.
userMediaStream?.setFrameReadyCallback(userMediaStream!.mediaStream.getVideoTracks()[0], { (frameReady:PhenixFrameNotification?) in
    // Pull sample buffer from somewhere:
    let nextSampleBuffer : CMSampleBuffer? = nil // placeholder: supply your own frame here
    frameReady?.write(nextSampleBuffer!)
})
This will call your app 15 times per second. Your app is expected to provide a valid sample buffer (which you simply wrap around a CVPixelBuffer). The supported pixel formats are NV12, I420, and BGRA.
Set the presentation timestamp and duration if you have them. Apple's CMSampleBufferCreateForImageBuffer function is used for this; it allows you to pass in a CMSampleTimingInfo when you wrap your CVPixelBuffer.
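As a minimal sketch, assuming an existing CVPixelBuffer named pixelBuffer and a presentationTime you track yourself, the wrapping could look like this:
import CoreMedia

// Wrap an existing CVPixelBuffer (pixelBuffer) in a CMSampleBuffer.
// pixelBuffer and presentationTime are assumptions; the 15 fps duration matches the constraint above.
var formatDescription: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescriptionOut: &formatDescription)

var timingInfo = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: 15),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: CMTime.invalid)

var sampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                   imageBuffer: pixelBuffer,
                                   dataReady: true,
                                   makeDataReadyCallback: nil,
                                   refcon: nil,
                                   formatDescription: formatDescription!,
                                   sampleTiming: &timingInfo,
                                   sampleBufferOut: &sampleBuffer)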
Step 5: Publish
Invoke the publish API with the user media stream you obtained instead of with media constraints.
let publishOptions = PhenixPCastExpressFactory.createPublishOptionsBuilder()
    .withCapabilities(Configuration.kPublisherCapabilities)
    .withUserMedia(userMediaStream!)
    .buildPublishOptions()
This example only shows the building of the publish options. The only difference here is using withUserMedia instead of withMediaConstraints. You can do the same for audio.
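For context, publishing with the options built above via RoomExpress could look roughly like the sketch below; the publishToRoom options builder, roomOptions, and the callback signature are assumptions, so consult the SDK documentation for the exact calls:
// Sketch: publish the user media stream to a room via RoomExpress.
// Builder and callback names below are assumptions; check the SDK docs.
let publishToRoomOptions = PhenixRoomExpressFactory.createPublishToRoomOptionsBuilder()
    .withRoomOptions(roomOptions)        // roomOptions assembled per the SDK docs
    .withPublishOptions(publishOptions)  // the publish options built above
    .buildPublishToRoomOptions()

self.phenixRoomExpress?.publishToRoom(publishToRoomOptions, { (status:PhenixRequestStatus, roomService:PhenixRoomService?, publisher:PhenixExpressPublisher?) in
    // NOTE: Check status before using the publisher.
})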
Reading Frames
Alternatively, you can have the Phenix SDK publish and capture from the camera, but also receive the raw camera frames within your app at the same time for further processing. To do this, use the approach outlined above (frame-ready), but instead of writing frames, read them.
An example in Objective-C is shown in our iOS documentation. In the example, you invoke read on the frame notification and get a CMSampleBufferRef for the video frames. Do not call write unless you want to modify the frames. The example shows both audio and video.
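As a rough Swift sketch (the iOS documentation shows the Objective-C version; the exact Swift read signature is an assumption), reading inside the frame-ready callback might look like this:
// Inside the frame-ready callback: read the captured frame instead of writing one.
frameReady?.read({ (sampleBuffer:CMSampleBuffer?) in
    guard let sampleBuffer = sampleBuffer,
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Process pixelBuffer as needed (e.g., run your own filters over it).
    // Only call frameReady?.write(...) if you want to replace the outgoing frame.
})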
The other difference is that when you call getUserMedia, you pass in regular constraints to select resolution, facing mode, etc., instead of using a null source like above:
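A minimal sketch of such constraints follows; the facingMode capability and the PhenixFacingMode value are assumptions, so check the SDK documentation for the exact names:
// Regular constraints: let the SDK select the camera instead of the null device.
let gumOptions = PhenixUserMediaOptions()
gumOptions.video.capabilityConstraints[PhenixDeviceCapability.facingMode.rawValue] =
    [PhenixDeviceConstraint.initWith(PhenixFacingMode.user)]  // assumption: front-facing camera
gumOptions.video.capabilityConstraints[PhenixDeviceCapability.height.rawValue] =
    [PhenixDeviceConstraint.initWith(720)]
gumOptions.video.capabilityConstraints[PhenixDeviceCapability.frameRate.rawValue] =
    [PhenixDeviceConstraint.initWith(30)]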
Troubleshooting
If using withPreviewRenderer does not work, e.g., it is causing freezing, you may be using some other component to access the camera within your app. Anything that attempts to grab frames from the camera will likely cause either the Phenix SDK or that other component to fail.
If that is the case, try the following:
- Call getUserMedia with the media constraints you would otherwise pass into publish, per this documentation. Access PCastExpress via PhenixChannelExpress as a property called pcastExpress. For an example of media constraints, see mediaConstraints here.
- Create a renderer using PhenixUserMediaStream.mediaStream.createRenderer, which returns a PhenixRenderer.
- You can start and stop that renderer at will by calling start(CALayer) to start it and stop to stop it again. You can now run the preview independent of the publisher.
- When you publish, e.g., using ChannelExpress, replace withMediaConstraints in the publish options with withUserMedia, passing in the PhenixUserMediaStream obtained above (see the sketch after this list).
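Put together, this workaround could look roughly like the sketch below. createRenderer, start, and stop are as described above; mediaConstraints and previewLayer are placeholders, and error handling is omitted:
// 1. Get user media with the constraints you would otherwise pass into publish.
self.phenixRoomExpress?.pcastExpress.getUserMedia(mediaConstraints, { (status:PhenixRequestStatus, userMediaStream:PhenixUserMediaStream?) in
    guard let userMediaStream = userMediaStream else {
        return // NOTE: Check status!
    }

    // 2. Create a renderer for the local preview and start it on your CALayer.
    let renderer = userMediaStream.mediaStream.createRenderer()
    renderer.start(previewLayer)
    // ... call renderer.stop() whenever you want to hide the preview.

    // 3. Publish using the user media stream instead of media constraints.
    let publishOptions = PhenixPCastExpressFactory.createPublishOptionsBuilder()
        .withCapabilities(Configuration.kPublisherCapabilities)
        .withUserMedia(userMediaStream)
        .buildPublishOptions()
    // Pass publishOptions to your publish call (RoomExpress or ChannelExpress) as usual.
})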