This article describes the methods that your video data source object needs to implement to use pixel buffers for iChat Theater. Before reading this article, read “Setting the Video Data Source” to learn how to set a video data source object. If your video data source uses OpenGL, read “Using OpenGL Buffers.”
Getting the Video Format
Your video data source object needs to implement the getPixelBufferPixelFormat: method of the IMVideoDataSource protocol to return the pixel format of the video content. The IMAVManager object needs this information to properly display and transmit the video. This getPixelBufferPixelFormat: implementation returns the kCVPixelFormatType_32ARGB pixel format, which is appropriate for Core Video pixel buffers from which graphics contexts are derived, as shown in “Rendering Video Frames”:
- (void)getPixelBufferPixelFormat:(OSType *)pixelFormatOut {
    *pixelFormatOut = kCVPixelFormatType_32ARGB;
}
The pixel format returned by this method is the format of the CVPixelBufferRef object that is passed to the renderIntoPixelBuffer:forTime: method of the IMVideoDataSource protocol.
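If you want to confirm this relationship at run time, Core Video provides the CVPixelBufferGetPixelFormatType function. The following check is not part of the original sample; it is only a sketch of how a data source might guard against an unexpected format at the top of its rendering method:
// Illustrative check (not in the original sample): confirm the buffer
// arrived in the format returned by getPixelBufferPixelFormat:.
if (CVPixelBufferGetPixelFormatType(buffer) != kCVPixelFormatType_32ARGB) {
    NSLog(@"Unexpected pixel format in %s", __func__);
    return NO;
}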
Rendering Video Frames
Your video data source needs to implement the renderIntoPixelBuffer:forTime: method of the IMVideoDataSource protocol to provide the next frame in the video content. The sample code in this article uses Core Video.
If the video frame has not changed since the last frame (for example, a slideshow displays the same frame for several seconds), the renderIntoPixelBuffer:forTime: method should return NO so that frames can be transmitted more efficiently.
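To put the fragments that follow in context, here is a minimal skeleton of the method; the _frameNeedsRedraw instance variable is a hypothetical stand-in for whatever dirty-state tracking your data source keeps:
- (BOOL)renderIntoPixelBuffer:(CVPixelBufferRef)buffer forTime:(CVTimeStamp *)timeStamp {
    // Hypothetical dirty flag set elsewhere by this data source. Returning NO
    // tells the framework to reuse the previously transmitted frame.
    if (!_frameNeedsRedraw) {
        return NO;
    }
    _frameNeedsRedraw = NO;

    // Lock the buffer, draw the frame, and unlock it, as shown in the
    // code fragments that follow.
    return YES;
}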
This code fragment locks the pixel buffer using the CVPixelBufferLockBaseAddress function:
// Lock the pixel buffer's base address so that we can draw into it.
CVReturn err;
if ((err = CVPixelBufferLockBaseAddress(buffer, 0)) != kCVReturnSuccess) {
    // Rarely is a lock refused. Return NO if this happens.
    NSLog(@"Warning: could not lock pixel buffer base address in %s - error %ld", __func__, (long)err);
    return NO;
}
The pixel buffer dimensions can change from one frame to the next, so always obtain the dimensions from the pixel buffer argument rather than reusing the previous dimensions. This code fragment creates a graphics context from the pixel buffer:
size_t width = CVPixelBufferGetWidth(buffer);
size_t height = CVPixelBufferGetHeight(buffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                               width, height,
                                               8,
                                               CVPixelBufferGetBytesPerRow(buffer),
                                               colorSpace,
                                               kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
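Note that CGBitmapContextCreate returns NULL if it cannot create the context. The original fragment does not check for this, but a defensive version might unlock the buffer before bailing out:
// Defensive check (an addition, not part of the original fragment):
// if the context could not be created, unlock the buffer and give up.
if (cgContext == NULL) {
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return NO;
}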
If you are creating the video content using Cocoa drawing methods, you can create an NSGraphicsContext, make it current, and invoke the drawing methods (for example, the drawInRect:fromRect:operation:fraction: method of NSImage or the drawAtPoint: method of NSAttributedString) to render the next frame in the pixel buffer, as shown here:
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
[NSGraphicsContext setCurrentContext:context];
// Insert drawing methods here
[context flushGraphics];
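For example, a slideshow data source that keeps its current slide in a hypothetical _currentImage instance variable might replace the placeholder comment with the NSImage drawing method mentioned above:
// Hypothetical example: scale the current slide to fill the frame.
// Passing NSZeroRect as fromRect draws the entire source image.
[_currentImage drawInRect:NSMakeRect(0, 0, width, height)
                 fromRect:NSZeroRect
                operation:NSCompositeCopy
                 fraction:1.0];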
Finally, you need to release all objects and unlock the pixel buffer as shown here:
CGContextRelease(cgContext);
CVPixelBufferUnlockBaseAddress(buffer, 0);
Warning: You should never retain a pixel buffer or leave it in a locked state in the callback method.
See Core Video Programming Guide for more information about using Core Video.
© 2007 Apple Inc. All Rights Reserved. (Last updated: 2007-10-31)