This chapter describes the extensions to the Image Compression Manager introduced in various releases of QuickTime, including support for ColorSync, asynchronous decompression, timecodes, and compressed fields of video data.
ColorSync Support
Asynchronous Decompression
Timecode Support
Data Source Support
Working with Alpha Channels
Working With Video Fields
Packetization Information
ColorSync is a system extension that provides a platform for consistent color reproduction across widely varying output devices. ColorSync color-matching capability was added to the Image Compression Manager picture-drawing functions in QuickTime. You can accurately reproduce color images (not movies) with the DrawPicture functions by setting the useColorMatching flag in the flags parameter to these functions.
enum {
    useColorMatching = 4
};
QuickTime introduced the concept of scheduled asynchronous decompression operations. Decompressor components can allow applications to queue decompression operations and specify when those operations should take place. The Image Compression Manager provides a function, DecompressSequenceFrameWhen, that allows applications to schedule an asynchronous decompression operation.
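The sketch below shows one way such a call might be made, assuming the sequence was already set up with DecompressSequenceBegin and that frameData holds one compressed frame. The ICMFrameTimeRecord field usage and the completion routine are illustrative assumptions; verify them against the ImageCompression.h header.

#include <string.h>
#include <QuickTime/QuickTime.h>

/* Called when the scheduled decompression completes (possibly at interrupt time). */
static pascal void MyFrameDone(OSErr result, short flags, long refCon)
{
    /* notify the application that the frame has been drawn */
}

static OSErr ScheduleFrame(ImageSequence seqID, TimeBase timeBase,
                           Ptr frameData, long frameDataSize)
{
    ICMCompletionProcRecord completion;
    ICMFrameTimeRecord      frameTime;
    CodecFlags              outFlags;

    completion.completionProc   = NewICMCompletionUPP(MyFrameDone);
    completion.completionRefCon = 0;

    memset(&frameTime, 0, sizeof(frameTime));          /* value.hi stays zero */
    frameTime.recordSize = sizeof(ICMFrameTimeRecord);
    frameTime.base       = timeBase;                    /* time base that drives the schedule */
    frameTime.scale      = 600;                         /* units per second */
    frameTime.value.lo   = GetTimeBaseTime(timeBase, 600, NULL) + 600;  /* one second from now */
    frameTime.rate       = fixed1;
    frameTime.duration   = 600;                         /* display for one second */

    return DecompressSequenceFrameWhen(seqID, frameData, frameDataSize,
                                       0, &outFlags, &completion, &frameTime);
}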
The Image Compression Manager and compressor components have been enhanced to support timecode information. The Image Compression Manager function SetDSequenceTimeCode allows you to set the timecode value for a frame that is to be decompressed.
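For illustration, a minimal sketch of setting a 30 fps, non-drop-frame timecode of 01:00:00:00 on an existing decompression sequence follows. The TimeCodeDef and TimeCodeRecord field values shown are assumptions to check against the QuickTime headers.

#include <QuickTime/QuickTime.h>

/* Sketch: attach a timecode of 01:00:00:00 (30 fps, non-drop-frame) to the
   frame about to be decompressed in an existing sequence. */
static OSErr TagFrameWithTimeCode(ImageSequence seqID)
{
    TimeCodeDef    tcDef;
    TimeCodeRecord tcTime;

    tcDef.flags         = 0;       /* non-drop-frame */
    tcDef.fTimeScale    = 3000;    /* time scale for frameDuration */
    tcDef.frameDuration = 100;     /* 3000 / 100 = 30 frames per second */
    tcDef.numFrames     = 30;      /* whole frames per second */

    tcTime.t.hours   = 1;
    tcTime.t.minutes = 0;
    tcTime.t.seconds = 0;
    tcTime.t.frames  = 0;

    return SetDSequenceTimeCode(seqID, &tcDef, &tcTime);
}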
QuickTime introduced support for an arbitrary number of sources of data for an image sequence. This functionality forms the basis for dynamically modifying parameters to a decompressor. It also allows codecs to act as special-effects components, providing filtering and transition-style effects. A client can attach an arbitrary number of additional inputs to the codec; it is up to the particular codec to determine whether to use each input and how to interpret it. For example, an 8-bit gray image could be interpreted as a blend mask or as a replacement for one of the RGB data planes.
To create a new data source, use the CDSequenceNewDataSource function.
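A hedged sketch of attaching an extra input to an existing decompression sequence follows. The 'mask' input type, the use of an ImageDescriptionHandle as the data description, and the follow-up call to CDSequenceSetSourceData are illustrative assumptions; the codec in use defines which inputs it accepts and how it interprets them.

#include <QuickTime/QuickTime.h>

/* Sketch: attach an 8-bit grayscale buffer as an additional input to an
   existing decompression sequence. */
static OSErr AttachMaskSource(ImageSequence seqID,
                              ImageDescriptionHandle maskDesc,   /* describes the 8-bit data */
                              Ptr maskData, long maskDataSize,
                              ImageSequenceDataSource *outSource)
{
    OSErr err;

    err = CDSequenceNewDataSource(seqID, outSource,
                                  FOUR_CHAR_CODE('mask'),   /* illustrative input type */
                                  1,                        /* first input of this type */
                                  (Handle)maskDesc,
                                  NULL,                     /* no data-conversion proc */
                                  0);
    if (err == noErr)
        err = CDSequenceSetSourceData(*outSource, maskData, maskDataSize);
    return err;
}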
QuickTime supports compressing and storing images with alpha channels, and supports use of the alpha channel when displaying images. Display of alpha channels is supported only for images with an uncompressed bit depth of 32 bits or greater. For 32-bit ARGB images, for example, the high byte of each pixel contains the alpha channel. For 64-bit ARGB (k64ARGBCodecType), there is a 16-bit alpha field for each pixel, in addition to 16-bit fields for red, green, and blue. The alpha channel can be interpreted in one of three ways:
straight alpha
pre-multiplied with white
pre-multiplied with black
QuickTime uses the alpha channel to define how an image is to be combined with the image that is already present at the location to which it will be drawn. This is similar to how QuickDraw’s blend mode works. To combine an image containing an alpha channel with another image, you specify how the alpha channel should be interpreted by specifying one of the new alpha channel graphics modes defined by QuickTime.
Straight alpha means that the color components of each pixel should be combined with the corresponding background pixel based on the value contained in the alpha channel. For example, if the alpha value is 0, only the background pixel will appear. If the alpha value is 255, only the foreground pixel will appear. If the alpha value is 127, then (127/255) of the foreground pixel will be blended with (128/255) of the background pixel to create the resulting pixel, and so on.
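The arithmetic is the standard straight-alpha blend. The following stand-alone snippet (not a QuickTime call) shows the per-channel computation behind the 127/255 and 128/255 split described above.

/* Illustration only: combine one foreground pixel with one background pixel
   using a straight (non-premultiplied) alpha value. */
typedef struct { unsigned char a, r, g, b; } ARGBPixel;

static unsigned char BlendChannel(unsigned char fg, unsigned char bg, unsigned char alpha)
{
    /* result = (alpha/255)*fg + ((255 - alpha)/255)*bg, rounded */
    return (unsigned char)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}

static ARGBPixel BlendStraightAlpha(ARGBPixel fg, ARGBPixel bg)
{
    ARGBPixel out;
    out.r = BlendChannel(fg.r, bg.r, fg.a);
    out.g = BlendChannel(fg.g, bg.g, fg.a);
    out.b = BlendChannel(fg.b, bg.b, fg.a);
    out.a = 255;   /* the composited result is fully opaque */
    return out;
}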
Pre-multiplied with white means that the color components of each pixel have already been blended with a white pixel, based on their alpha channel value. Effectively, this means that the image has already been combined with a white background. To combine the image with a different background color, QuickTime must first remove the white from each pixel and then blend the image with the actual background pixels. Images are often pre-multiplied with white because this reduces the appearance of jagged edges around objects.
Pre-multiplied with black is the same as pre-multiplied with white, except that the background color the image has been blended with is black instead of white.
Note: Although you pass these new alpha channel graphics modes to QuickTime in the same way as you would traditional QuickDraw transfer modes, these modes are not supported by QuickDraw and will cause unpredictable results if passed to QuickDraw routines.
The Image Compression Manager defines the following constants for specifying alpha channel graphics modes:
enum {
    graphicsModeStraightAlpha      = 256,
    graphicsModePreWhiteAlpha      = 257,
    graphicsModePreBlackAlpha      = 258,
    graphicsModeStraightAlphaBlend = 260
};
The graphicsModeStraightAlpha, graphicsModePreWhiteAlpha, and graphicsModePreBlackAlpha graphics modes cause QuickTime to draw the image, interpreting the alpha channel as specified. The graphicsModeStraightAlphaBlend graphics mode causes QuickTime to interpret the alpha channel as a straight alpha channel, but when it draws, it also applies the opColor supplied with the graphics mode to the alpha channel as it combines the pixels. This provides an easy way to combine images using both an alpha channel and a blend level, which can be useful when compositing 3D-rendered images over video.
To draw a compressed image containing an alpha channel, the image must be compressed using an image-compression format that is capable of storing the alpha channel information. The Animation, Planar RGB, and None compressors store alpha channel data in "Millions of Colors+" (32-bit) mode.
You use the MediaSetGraphicsMode function to set a movie track to use an alpha channel graphics mode. You use the SetDSequenceTransferMode function to set an image sequence to use an alpha channel graphics mode.
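A minimal sketch of both calls follows, assuming a movie track and an active decompression sequence already exist. The opColor value matters only to the blend modes and is included here for completeness; check the signatures against the Movie Toolbox and Image Compression Manager headers.

#include <QuickTime/QuickTime.h>

/* Sketch: select straight-alpha compositing for a track and for an image sequence. */
static OSErr UseStraightAlpha(Track theTrack, ImageSequence seqID)
{
    RGBColor     opColor = { 0x8000, 0x8000, 0x8000 };   /* used only by the blend modes */
    MediaHandler mh      = GetMediaHandler(GetTrackMedia(theTrack));

    /* Track: composite using the alpha channel stored in the media. */
    MediaSetGraphicsMode(mh, graphicsModeStraightAlpha, &opColor);

    /* Image sequence: same interpretation for frames drawn through the sequence. */
    return SetDSequenceTransferMode(seqID, graphicsModeStraightAlpha, &opColor);
}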
QuickTime introduced support for working directly with fields of interlaced video, such as those created by some motion JPEG compressors.
Because video processing applications sometimes need to perform operations on individual fields (for example, reversing them or combining one field of a frame with a field from another frame), QuickTime now provides a method for accessing the individual fields without having to decompress them first. Previously, such operations required decompressing each frame, copying the appropriate fields, and then recompressing. This was a time-consuming process that could result in a loss of image quality due to the decompression and recompression of the video data.
Three functions (ImageFieldSequenceBegin, ImageFieldSequenceExtractCombine, and ImageFieldSequenceEnd) allow an application to request that field operations be performed directly on the compressed data. These functions accept one or two compressed images as input and create a single compressed image as output.
The Apple Component Video and Motion JPEG compressors support the image field functions in QuickTime. See the descriptions of the ImageFieldSequenceBegin, ImageFieldSequenceExtractCombine, and ImageFieldSequenceEnd functions in the QuickTime API Reference for information on how to process image fields in your application.
QuickTime video compressors are increasingly being used for videoconferencing applications. Image data from a compressor is typically split into network-packet-sized pieces, transmitted through a packet-based protocol (such as UDP or DDP), and reassembled into a frame by the receiver(s). Typically, a lost packet causes an entire frame to be dropped; without all the data for a given frame, the decompressor cannot decode the image. When the loss of one packet forces others to be unusable, the loss rate is effectively multiplied by a large factor.
Some compression methods, however, such as H.261, can divide a compressed image into pieces that can be decoded independently. Some videoconferencing protocols, such as the Internet's Real-time Transport Protocol (RTP, RFC 1889), specify that data compressed using H.261 must be packetized into independently decodable chunks. While RTP demands this packetization information from the compressor, other protocols, such as QuickTime Conferencing's MovieTalk protocol, can optionally use this information to effectively reduce loss rates.
QuickTime added four functions to support packetization: SetCSequencePreferredPacketSize, SGSetPreferredPacketSize, SGGetPreferredPacketSize, and VDSetPreferredPacketSize. In addition, the CodecCompressParams structure includes a preferredPacketSizeInBytes field.
For application developers, the important function is SGSetPreferredPacketSize.
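A minimal sketch follows, assuming a sequence grabber video channel (SGChannel) has already been created. The 1400-byte figure is simply a common choice that leaves room for RTP/UDP/IP headers within a typical Ethernet MTU, not a value mandated by QuickTime.

#include <QuickTime/QuickTime.h>

/* Ask the compressor behind a sequence grabber video channel to produce
   compressed frames divided into pieces no larger than about 1400 bytes. */
static ComponentResult ConfigurePacketization(SGChannel videoChannel)
{
    long            grantedSize = 0;
    ComponentResult err;

    err = SGSetPreferredPacketSize(videoChannel, 1400);              /* bytes */
    if (err == noErr)
        err = SGGetPreferredPacketSize(videoChannel, &grantedSize);  /* what was actually set */
    return err;
}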