Important: The information in this document is obsolete and should not be used for new development.
This chapter discusses the Audio Converter, Audio File, Audio Format and AUGraph APIs, which are part of the Audio Toolbox for Mac OS X, and the services provided by the Audio Toolbox framework that applications may use for audio processing. The section “Audio Toolbox Reference” describes the constants, data types, and functions of the Audio Toolbox framework.
Overview of the Audio Toolbox
Using the Audio Toolbox
Audio Toolbox Reference
The Audio Toolbox framework provides a set of services that applications can use for audio processing:
AudioConverter.h
AudioFormat.h
AudioFile.h
AUGraph.h
In Java, these services are provided in the com.apple.audio.toolbox
package.
Audio Converter provides format conversion services. Use Audio Converter when encoding or decoding audio data: it performs sample rate conversion, interleaving and deinterleaving of audio streams, floating-point-to-integer and integer-to-floating-point conversion, and bit rate conversion. The API also handles channel reordering, as well as conversion between PCM and compressed formats. When encoding or decoding an audio stream, use of Audio Converter is strongly recommended over direct use of an audio codec, since optimizations are in place that provide for optimal conversions.
The Audio Format API is provided to help handle information
about different audio formats. It is able to inspect AudioStreamBasicDescription
instances
and provide more information about a particular format’s parameters.
This API also can derive information from AudioChannelLayout
instances,
including a description of the channels present in the instance,
and the ordering of the channels. Finally, Audio Format can provide
information about the encoders and decoders available on the system.
Audio File is a system with which audio files may be created, opened, modified, and saved. Besides these operations, it also allows for discovery of global properties, including:
File types that can be read.
File types that can be written.
A name for a file type.
Stream formats that can be read.
All file extensions that can be read.
File extensions for a file type.
The AUGraph is a high-level representation of a set of Audio Units, along with the connections between them. These APIs may be used to construct arbitrary signal paths through which audio may be processed, that is, a modular routing system. The APIs deal with large numbers of Audio Units and their relationships to one another.
AUGraphs provide the following services:
Real-time routing changes that allow for connections to be created and broken while audio is being processed.
Maintaining representation even when Audio Units are not instantiated.
The head of a graph is always an output unit, which may save the processed audio stream to disk, write it into memory, or send it to sound output. Starting a graph entails “pulling” on the head unit (provided for by the API), which will, in turn, pull on the next unit in the graph. The contents of a graph may be saved for later use.
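The pull model described above can be sketched as a small, platform-independent C++ model. The types and names below are illustrative only, not the AUGraph API: starting the head unit asks its upstream unit for data, which recurses through the chain.

```cpp
#include <cassert>

// Toy model of "pull" rendering: each unit asks the unit feeding its
// input for a sample, so pulling on the head unit drives the chain.
struct Unit {
    Unit* source = nullptr;   // upstream unit, if any
    float gain = 1.0f;
    float render() {
        // A leaf unit produces input; others pull from upstream first.
        float in = source ? source->render() : 0.5f;
        return in * gain;
    }
};

// "Starting" the graph pulls on the head (output) unit.
float startGraph(Unit& head) {
    return head.render();
}
```

Here a two-unit chain (source feeding an effect with gain 2) renders by a single pull on the effect, mirroring how an output unit drives its upstream nodes.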
The Music Player and Music Sequence APIs are used in tandem to sequence various events. Events can range from changing an audio unit's parameters to sending a MIDI endpoint a message. Standard MIDI files are played back using the Music Player API, particularly using the provided functions (see the “Reading in an SMF” section). Similarly, you can save incoming MIDI data to a music sequence and then save the sequence as a standard MIDI file.

There are three pieces in the Music Player API: players, sequences, and tracks. Players are assigned to a sequence, and trigger the sequence to start and stop. The relation between a player and a sequence is one-to-one, meaning that each player may only have one sequence assigned to it, and vice versa. Players also keep track of the current playback time in the sequence, and allow the playback time to be set. Finally, a scalar can be applied to a player, which will alter the tempo of the assigned sequence by that scalar.

A sequence is a collection of tracks. A track is a collection of events targeted at either a MIDI endpoint, an audio unit, or a callback. A sequence may contain an arbitrary number of tracks, created as needed. Each sequence also contains one special track, the tempo track. This track measures out the playback rate, in beats-per-minute (bpm). Adding tempo events to a tempo track will change the rate at which events occur from that point on, until the next tempo event occurs.
When recording MIDI events for saving to disk, the incoming MIDI data needs to be parsed and placed into the sequence, which then can be saved as a standard MIDI file, for later use.
This usage section describes how to utilize the APIs that comprise the Audio Toolbox framework available for Mac OS X.
The Audio Converter API allows for the conversion between various audio formats. These examples are provided to give the developer a feel for using the Audio Converter tool.
AudioStreamBasicDescription in, out;
/* ... Fill out stream descriptions ... */
AudioConverterRef converter;
OSStatus err = AudioConverterNew(&in, &out, &converter);
These steps should be followed when creating a new converter:
Declare two AudioStreamBasicDescription
instances,
one for the input, and one for the output.
Populate the two descriptions with the appropriate stream information.
Declare a new converter instance.
Invoke the AudioConverterNew()
function,
providing the input, output, and converter as parameters. Note that
the parameters are passed by reference.
AudioConverterRef converter;
/* ... Set up the converter ... */
const UInt32 kRequestPackets = 8192;
AudioBufferList bufferList;
/* ... Allocate the output buffer ... */
while ( /* ... While there is data left to be converted ... */ )
{
    UInt32 ioOutputDataPacketSize = kRequestPackets;
    OSStatus err = AudioConverterFillComplexBuffer(converter, inputProcPtr,
        userData, &ioOutputDataPacketSize, &bufferList, NULL);
}
These steps should be followed when pulling data from a converter:
Allocate and set up a converter instance.
Optional: Set up a constant for the amount of data to be pulled.
Set up an AudioBufferList
instance
to hold the converted data. If the data is interleaved, then only
one index is needed for the instance’s mBuffers
array;
if the data consists of multiple mono channels, then allocate one
index in the mBuffers
array
for each channel.
Enter into a loop which pulls data until the *AudioConverterComplexInputProc
signals
that no more data is left to be pulled, or until the desired amount
of data is pulled.
Inside of the loop, use AudioConverterFillComplexBuffer()
to
pull the data. The parameters passed in this example are:
converter - The converter to be used.
inputProcPtr - A callback which provides the input data for conversion.
userData - Any parameters or
constants needed by the inputProcPtr
callback.
ioOutputDataPacketSize - Upon input, the requested amount of converted data; on output, the actual amount of data converted.
bufferList - The buffer for the converted audio data.
NULL - An AudioStreamPacketDescription
instance
used to describe the size of the resulting packet; only needed when
receiving variable bit rate (VBR) data.
OSStatus FromFloatInputProc (
    AudioConverterRef             inAudioConverter,
    UInt32                        *ioNumberDataPackets,
    AudioBufferList               *ioData,
    AudioStreamPacketDescription  **outDataPacketDescription,
    void                          *inUserData )
{
    MyUserData *data = static_cast<MyUserData*>(inUserData);
    AudioBufferList *bufferList = data->bufferList;
    for (UInt32 i = 0; i < bufferList->mNumberBuffers; ++i)
    {
        ioData->mBuffers[i].mNumberChannels =
            bufferList->mBuffers[i].mNumberChannels;
        ioData->mBuffers[i].mData = bufferList->mBuffers[i].mData;
        ioData->mBuffers[i].mDataByteSize =
            bufferList->mBuffers[i].mDataByteSize;
    }
    *ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize /
        data->mInputASBD.mBytesPerPacket;
    return noErr;
}
This example looks at creating an *AudioConverterComplexInputDataProc
for
use by AudioConverterFillComplexBuffer()
:
An *AudioConverterComplexInputDataProc
takes
in the following arguments:
inAudioConverter - The converter in use.
ioNumberPackets - The number of packets requested.
ioData - The data to be returned
to AudioConverterFillComplexBuffer()
for
conversion.
outPacketDescription - Provided
to give the details of the packet format passed back when decoding.
This should be an AudioStreamPacketDescription
array,
with each packet corresponding to a description.
inUserData - Data needed by the callback, for any purpose; in this case, it holds the location of the input data.
In a loop, fill each buffer in ioData
with
the number of channels, requested amount of data, and data byte
size.
Calculate the number of provided packets, and place the value in ioNumberDataPackets.
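The final step is simple arithmetic: for constant-bit-rate data, the number of packets supplied is the buffer's byte size divided by the stream's bytes per packet. A minimal sketch with illustrative names (not the CoreAudio types):

```cpp
#include <cassert>
#include <cstdint>

// Packet-count calculation an input proc performs for CBR data:
// packets supplied = buffer bytes / bytes per packet.
uint32_t packetsSupplied(uint32_t dataByteSize, uint32_t bytesPerPacket) {
    return dataByteSize / bytesPerPacket;
}
```

For example, a 32768-byte buffer of 4-byte packets supplies 8192 packets, matching the kRequestPackets figure used in the fill loop above.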
The Audio Format API is provided to acquire information about formats and channel layouts. This example is provided to give the developer a feel for using the Audio Format API.
UInt32 size;
OSStatus err;
OSType *formatIDs;
err = AudioFormatGetPropertyInfo(
    kAudioFormatProperty_EncodeFormatIDs, 0, NULL, &size);
if (err) return err;
formatIDs = (OSType*)malloc(size);
UInt32 numFormats = size / sizeof(OSType);
err = AudioFormatGetProperty(
    kAudioFormatProperty_EncodeFormatIDs, 0, NULL, &size, formatIDs);
if (err) return err;
for (UInt32 i = 0; i < numFormats; ++i)
{
    AudioStreamBasicDescription asbd;
    memset(&asbd, 0, sizeof(asbd));
    asbd.mFormatID = formatIDs[i];
    CFStringRef name;
    size = sizeof(CFStringRef);
    err = AudioFormatGetProperty(
        kAudioFormatProperty_FormatName, sizeof(asbd), &asbd, &size, &name);
    if (err) return err;
    CFShow(name);
}
This example shows how to use the property management system available in Audio Format to acquire information about the encoder formats, and output their names:
Use the AudioFormatGetPropertyInfo()
function
to get the size of the data that will be returned by calling AudioFormatGetProperty()
for
that property. The arguments for AudioFormatGetPropertyInfo()
are:
kAudioFormatProperty_EncodeFormatIDs - The property we are querying for.
0 - The size of the specifier;
in this case, 0
, since
this property does not require a specifier.
NULL - The specifier; in this
case, NULL
, since the
property does not require a specifier.
&size - The size of the
data that will be returned when AudioFormatGetProperty()
is
called for this property.
Once the size is obtained, AudioFormatGetProperty()
may
be called. The same parameters are passed in as with AudioFormatGetPropertyInfo()
,
with the addition of a void
pointer, in this
case formatIDs, to hold the returned
data in.
Now that an array of the format IDs have been attained, it may be iterated over by following these steps:
Create
an AudioStreamBasicDescription
instance and
clear its contents.
Set the format ID of the AudioStreamBasicDescription
instance
to one of the format IDs returned when AudioFormatGetProperty()
was
called with the kAudioFormatProperty_EncodeFormatIDs
property.
Create a CFStringRef
to hold the name
of the format, and obtain its size.
To obtain the format’s name, call AudioFormatGetProperty()
with kAudioFormatProperty_FormatName
as
the property, the AudioStreamBasicDescription
instance
as the specifier, and the CFString
as the outPropertyData value.
Print out the name of the format, as stored in the CFString
.
The Audio File API is used to discover global file format information and to provide an interface for creating, opening, modifying, and saving audio files. This example is provided to give the developer a feel for using the Audio File API.
OSStatus err;
UInt32 propertySize;
err = AudioFileGetGlobalInfoSize(
    kAudioFileGlobalInfo_WritableTypes, 0, NULL, &propertySize);
if (err) return err;
OSType *types = (OSType*)malloc(propertySize);
err = AudioFileGetGlobalInfo(
    kAudioFileGlobalInfo_WritableTypes, 0, NULL, &propertySize, types);
if (err) return err;
UInt32 numTypes = propertySize / sizeof(OSType);
for (UInt32 i = 0; i < numTypes; ++i)
{
    CFStringRef name;
    UInt32 outSize = sizeof(name);
    err = AudioFileGetGlobalInfo(
        kAudioFileGlobalInfo_FileTypeName, sizeof(OSType), types+i, &outSize, &name);
    if (err) return err;
    CFShow(name);
}
This example shows how to obtain an array of the writable file types for the system and output their names:
Use the AudioFileGetGlobalInfoSize()
function
to get the size of the data that will be returned by calling AudioFileGetGlobalInfo()
for
that property. The arguments for AudioFileGetGlobalInfoSize()
are:
kAudioFileGlobalInfo_WritableTypes - The property we are querying for.
0
- The specifier’s
size; in this case, 0
,
since there is no need for a specifier for this property.
NULL
- The specifier;
in this case, NULL
, since
this property does not require a specifier.
&propertySize
-
The size of the data that will be returned when AudioFileGetGlobalInfo()
is called
for this property.
Once the size is obtained, it will be used by the AudioFileGetGlobalInfo()
function.
In addition to the four parameters used for AudioFileGetGlobalInfoSize()
,
a fifth is used to point to the actual property data returned by
this method. Note that the space for holding this returned data
is allocated beforehand, using the propertySize obtained
previously.
Once the types have been returned, a loop is used to cycle through the types and query Audio File for the name of the type, which is then printed:
Create a CFStringRef
to
hold the name of the type, and obtain its size.
To obtain the type’s name, call AudioFileGetGlobalInfo()
with kAudioFileGlobalInfo_FileTypeName
as
the property, the type as the specifier, and the CFString
as
the outPropertyData value.
Print out the name of the format, as stored in the CFString
.
An audio unit graph maintains its representation using the AUNode
type,
even when the Audio Unit components themselves are not instantiated.
The AUGraph states are defined as open, initialized, running, and closed. These correspond directly with the Audio Unit states.
The AUGraph APIs are responsible for representing the description
of a set of Audio Unit components, as well as the audio connections
between their inputs and outputs. This representation may be saved
and restored persistently and instantiated by opening all of the
Audio Units (AUGraphOpen()
), and making
the physical connections between them stored in the representation
(AUGraphInitialize()
). Thus, the graph
is a description of the various Audio Units and their connections,
but it also manages the actual instantiated Audio Units.
The AUGraph is a complete description of an audio signal processing network.
The AUGraph may be introspected in order to get complete information
about all of the Audio Units in the graph. The various nodes (AUNode
)
in the graph representing Audio Units may be added or removed, and
the connections between them modified.
An AUNode representing an Audio Unit component is created
by specifying a ComponentDescription
record
(from the Component Manager), as well as optional “class” data, which
is passed to the Audio Unit when it is opened.
This class data is in an arbitrary format, and may differ depending on the particular Audio Unit. In general, the data is used by the Audio Unit to configure itself when it is opened (in object-oriented terms, it corresponds to constructor arguments). In addition, certain AudioUnits may provide their own class data when they are closed, allowing their current state to be saved for the next time they are instantiated. This provides a general mechanism for persistence.
An AUGraph's state can be manipulated in both the rendering
thread and in other threads. Consequently, any activities that affect
the state of the graph are guarded with locks. To avoid blocking
the render thread, many of the calls to AUGraph may return kAUGraphErr_CannotDoInCurrentContext
.
This result is generated only when a graph modification is called
from within a render callback. It means that the lock the modification required
was held at that time by another thread. If this result code is
returned, the action may be retried, typically on the next render
cycle (by which time the lock may have been released), or the action may
be delegated to another thread. As a general rule, the render thread
should not be allowed to spin.
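This fail-rather-than-block discipline can be modeled with a non-blocking lock attempt. The error value, names, and flag-based lock below are illustrative only, not the actual AUGraph internals:

```cpp
#include <atomic>
#include <cassert>

// Illustrative error codes (not the real CoreAudio values).
enum { kNoErr = 0, kCannotDoInCurrentContext = -1 };

// A graph edit attempts to take the lock without blocking: if another
// thread holds it, the edit reports failure so the caller can retry
// later (e.g. on the next render cycle) instead of spinning.
int tryModifyGraph(std::atomic_flag& graphLock, int& state) {
    if (graphLock.test_and_set())       // held elsewhere: fail, don't wait
        return kCannotDoInCurrentContext;
    state += 1;                          // perform the modification
    graphLock.clear();
    return kNoErr;
}
```

The key property is that the modification either completes or returns immediately; it never blocks the calling thread.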
When using AUGraph, certain steps need to be followed in order for the graph to function properly. These steps must be followed in this order:
Create the graph. Call NewAUGraph
on an AUGraph
instance.
Populate the graph. Use AUGraphNewNode
and AUGraphNewNodeSubGraph
to populate the
graph.
Make connections between nodes. AUGraphConnectNodeInput
sets
up connections between the nodes. Nodes may have multiple inputs
and outputs, though sharing an input or output is not allowed. This
is otherwise known as “fan in” and “fan out,” and is not allowed
in AUGraph.
Open the graph. Up until this point,
each node was being used in the abstract AUNode
representation.
Upon calling AUGraphOpen
,
each node is instantiated. This allows for properties to be set
for each Audio Unit inside of the graph.
Set output sample rates and channel layouts. A common error is for sample rates and channel layouts to be mismatched between nodes. If any format changes occur, it is imperative that the sample rate and channel count be set for all Audio Unit outputs prior to initialization.
Initialize the graph. Once setup has
occurred, AUGraphInitialize
may
be called. This makes all the connections between the nodes, and
initializes all of the Audio Units that are part of a connection.
At the least, the output unit is initialized, even if no connections
lead to it.
Start the graph. Calling AUGraphStart
begins
rendering, starting with the output unit and traversing through
the graph.
Once a graph has been created, its contents may be modified by adding and removing connections and nodes. These functions are provided for performing these actions:
After the graph is initialized, any of these functions may be called, and the changes will occur immediately, meaning that the node will be initialized right away, and connections will be made immediately as well.
When a graph is running, however, changes do not immediately
take effect. Calling any of these functions is allowed, but the
actions they perform are queued. To apply the actions to a running
graph, AUGraphUpdate
must
be called.
Calling an update signals to the render thread that an update is ready to occur. When the render thread gets to a point in its cycle where updates are allowed (usually before and after a render), the update is actually performed. Before calling an update, check the format, sample rates, and channel layouts of the connections to avoid errors. If an error does occur, all updates are halted.
When audio data rendering is no longer needed, the graph may
be stopped by calling AUGraphStop
.
This does not alter the graph in any way; it simply halts the pull
on the output node. However, AUGraph uses a reference counting scheme
to ensure that one process does not stop the graph while another
may still be accessing it. Each AUGraphStart()
invocation
adds one to the reference, while each AUGraphStop()
subtracts
one from the reference. When the reference becomes zero, it then
stops rendering. To determine if a graph is still running, use AUGraphIsRunning
.
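The start/stop reference counting described here can be sketched as a small model (a stand-in, not the real API):

```cpp
#include <cassert>

// Model of AUGraphStart/AUGraphStop counting: each start increments a
// reference, each stop decrements it, and rendering continues until
// the count reaches zero.
struct GraphModel {
    int startCount = 0;
    void start() { ++startCount; }
    void stop()  { if (startCount > 0) --startCount; }
    bool isRunning() const { return startCount > 0; }
};
```

This scheme is why one process cannot silently stop a graph another process is still using: its stop merely decrements the count.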
Uninitializing the graph is done by calling AUGraphUninitialize
. First, doing so
will stop audio rendering, no matter what the reference count for the
graph is. Beyond that, it also calls the Uninitialize()
function
for each Audio Unit and subgraph.
If the graph is no longer needed, calling AUGraphClose
will close all the Audio
Unit components in the graph, leaving only a nodal representation
of the graph. As with uninitialization, if the graph is rendering
audio data, calling this function halts the render.
It is worth noting that the graph’s structure may be serialized
using the AUGraphGetNodeInfo
and AUGraphGetNodeConnections
functions
at any time the graph exists.
When the AUGraph is no longer needed, use DisposeAUGraph
to deallocate it.
A music sequence is designed to hold various music tracks, which are intended to be logical groupings of events. To use a music sequence, these functions need to be called:
NewMusicSequence
-
This creates a new music sequence. The sequence, as is, contains
only a tempo track.
MusicSequenceNewTrack
-
Call this function for each new track that you want in the sequence.
Also of note is the fact that you can reverse all the events
in all of the tracks of a sequence by calling MusicSequenceReverse
.
Events trigger changes to the destination of a track. There are eight different types of events:
MIDIChannelMessage
-
For use with MusicTrackNewMIDIChannelEvent
.
These events will pass the assigned data on to the channel when
triggered.
MIDINoteMessage
- For
use with MusicTrackNewMIDINoteEvent
.
These events will pass the information for a note to the targeted
endpoint when triggered.
MIDIRawData
- For use
with MusicTrackNewMIDIRawDataEvent
.
These events will pass raw MIDI data on to an endpoint when triggered.
MIDIMetaEvent
- For
use with MusicTrackNewMetaEvent
.
These events should be used when MIDI meta data needs to be passed
on to an endpoint.
MusicEventUserData
-
For use with MusicTrackNewUserEvent
.
This event will call a MusicSequenceUserCallback
callback,
passing in the structure’s user data to the callback. The callback
is registered via MusicSequenceSetUserCallback
,
with each sequence allowing for one callback to be registered to
it.
ExtendedNoteOnEvent
-
For use with MusicTrackNewExtendedNoteEvent
.
These events will send a note message to a music device audio unit
when triggered.
ExtendedControlEvent
-
For use with MusicTrackNewExtendedControlEvent
.
These events will send a control message, changing the parameters
of a music device audio unit when triggered.
ParameterEvent
- For
use with MusicTrackNewParameterEvent
.
These events will issue a parameter change in an audio unit when
triggered.
A sequence or track must address either a MIDI endpoint or an audio unit (when used inside of an audio unit graph). All of the events belonging to a sequence or track will then be sent to its assigned destination.
An entire sequence can be assigned to an endpoint or a graph via:
When targeting a sequence to a specific graph, its tracks
need to be assigned to units within the graph; this is done via MusicTrackSetDestNode
.
Within this context, endpoints can still be addressed by a track
using MusicTrackSetDestMIDIEndpoint
.
Each sequence has a tempo track assigned to it, which can not be removed. This track is designed to control the rate of playback across the sequence’s tracks. The units of measurement used here are beats-per-minute (bpm), which can be any floating point value. An event in the tempo track will change the playback rate to the new event’s specified rate.
All of the events for this track must be of type ExtendedTempoEvent
;
no others are allowed in the tempo track, and this event type is
not allowed in event tracks. Each tempo track starts out with one
of these, specifying the initial playback rate. By default, this
rate is 120 bpm, but it can be modified to be any floating point
value.
To access the tempo track, call MusicSequenceGetTempoTrack
. Once you
acquire the tempo track, a tempo event can be added to it by calling MusicTrackNewExtendedTempoEvent
and
passing in a new event. The event should have the intended rate
in it. This means that, once the event is reached, the new rate
will be used from that point on, until playback stops, or the next tempo
event is reached. During that time, all events in all tracks within
the sequence will occur at the new rate. For example, if the initial
tempo was 120 bpm, and the next tempo event was set to occur at
the 90th beat, it will occur after 45 seconds. If that event changes the
tempo to 60 bpm, all of the events from that point on will happen
at half the rate of the previous tempo. So if the next tempo event
is set to occur at the 120th beat, it will happen 75 seconds after
playback has begun.
When a sequence is no longer needed, it may be disposed of.
This is done by calling DisposeMusicSequence
.
To delete a track and its accompanying events, call MusicSequenceDisposeTrack
.
A number of functions are provided with the Music Sequence API to get information about a sequence and its tracks:
MusicSequenceGetTrackCount
-
Returns the number of tracks in the current sequence.
MusicSequenceGetIndTrack
-
Returns a pointer towards a track for a particular index.
MusicSequenceGetTrackIndex
-
Returns the index of a track.
MusicSequenceGetAUGraph
-
Returns a pointer for the audio unit graph currently assigned to the
sequence. Returns NULL if there is no graph assigned.
MusicTrackGetSequence
-
Returns a pointer to the sequence which contains the current track.
MusicTrackGetDestNode
-
Returns a pointer towards the node used by the selected track.
MusicTrackGetDestMIDIEndpoint
-
Returns a pointer towards the MIDI endpoint used by the selected
track.
Properties are used to change the status of various tracks,
as described in “Music Track Properties.” Use MusicTrackGetProperty
to retrieve the
current value of any of the properties, and MusicTrackSetProperty
to change the
value of the property.
To gain access to the events within a track, the track needs to be iterated over. Iterating involves creating a new iterator, and then moving forward or backward between the events within the track. These functions are used when setting up an iterator and iterating on a track:
NewMusicEventIterator
-
Creates a new iterator.
DisposeMusicEventIterator
-
Disposes of the iterator.
MusicEventIteratorNextEvent
-
Moves the iterator to the next event.
MusicEventIteratorPreviousEvent
-
Moves the iterator to the previous event
MusicEventIteratorGetEventInfo
-
Retrieves information about the current event.
MusicEventIteratorSetEventInfo
-
Sets information about the current event.
MusicEventIteratorDeleteEvent
-
Removes the event from the track.
MusicEventIteratorSetEventTime
-
Moves the event to the new time.
MusicEventIteratorHasPreviousEvent
-
Returns a boolean signifying if there is an event before the iterator.
MusicEventIteratorHasNextEvent
-
Returns a boolean signifying if there is an event after the iterator.
MusicEventIteratorHasCurrentEvent
-
Returns a boolean signifying if the iterator currently points towards
an event, or is at the end of the track.
Events within a track can be modified based on their placement within a track. The idea is to be able to grab the events within a certain period of time and then to move them elsewhere, or delete them.
To move events within a track, use MusicTrackMoveEvents
. All you need to
do is to specify the range of events to move, and where to move
them to.
The NewMusicTrackFrom
function
will take the specified range of events, and will create a new track
with the range in it.
MusicTrackClear
will
remove the events in the given range, while MusicTrackCut
will remove the given
range, and move the events after the range up to fill the space
left by the cut.
Use MusicTrackCopyInsert
to
copy a series of events from one track to another. Doing so with
this function moves the events behind the insertion point back to
the end of the range in the destination track.
Finally, MusicTrackMerge
will
take the source range and merge it with the events following the insertion
point in the destination.
A music player is associated with a music sequence, in a one-to-one relationship. The player keeps track of the playhead for the sequence, and allows for movement within the sequence. Activating and stopping the sequence is done via a player.
NewMusicPlayer
-
Creates a new music player.
DisposeMusicPlayer
-
Disposes of the music player. Note that this does not dispose of
the sequence attached to the player.
MusicPlayerSetSequence
-
Sets the sequence controlled by the player.
MusicPlayerStart
-
Begins playback of the sequence.
MusicPlayerStop
- Halts
sequence playback.
MusicPlayerIsPlaying
-
Returns a boolean signifying if the player is in use.
Of particular note is moving the playhead within the player. This
is done using MusicPlayerSetTime
,
which also prerolls the sequence for you. Prerolling is when the
sequence is prepared to begin in mid-sequence, with all parameters
and endpoints being adjusted to the points they should be at the
playhead. To determine what time the player is currently at, use MusicPlayerGetTime
.
If a new event is added to a sequence, it will need to be
manually prerolled using MusicPlayerPreroll
.
The Music Player API is used to read in MIDI files. To do
so, simply call MusicSequenceLoadSMF
,
specifying the file from which the data is to be read in, or MusicSequenceLoadSMFData
when
the MIDI is to be read in from memory. When these functions are
used, the MIDI data is parsed and placed, as events, in a track
inside of a sequence.
Calling MusicSequenceLoadSMFWithFlags
and MusicSequenceLoadSMFDataWithFlags
,
with kMusicSequenceLoadSMF_ChannelsToTracks passed in as the flag
will result in each channel in the MIDI data being parsed into its
own track in the sequence. Beyond that, any meta data that is found
in the MIDI sequence is placed in the last track of the sequence.
When you want to save incoming MIDI data to disk, first you
need to capture the incoming MIDI data. The data then needs to be
parsed and placed into a sequence, specifically into a track. You
will need to use the MusicPlayerGetBeatsForHostTime
function
to determine how far into the sequence the new MIDI event will need
to be. This can only be done while the sequence is running; calling
this function returns the current beat in the sequence at the moment
it is invoked.
Once all of the incoming MIDI data has been captured and placed
into a sequence, it needs to be saved to disk. To do this, call
the MusicSequenceSaveSMF
function,
if saving the data to disk, or, if saving it to memory, call MusicSequenceSaveSMFData
.
This reference section describes the constants, data types and functions that comprise the Audio Toolbox framework available for Mac OS X.
Audio converters are designed to meet a developer’s encoding and decoding needs. They allow for conversions between most conceivable combinations of input and output formats, assuming the proper codecs are available on the system.
Typedefs are used to simplify the declaration of converters and the use of properties in the context of an audio converter.
typedef struct OpaqueAudioConverter*
AudioConverterRef
typedef UInt32 AudioConverterPropertyID
Stores information regarding the number of frames used in priming input for the current codec.
typedef struct AudioConverterPrimeInfo {
    UInt32 leadingFrames;
    UInt32 trailingFrames;
} AudioConverterPrimeInfo;
An instance of this structure is input via the kAudioConverterPrimeInfo
property.
The instance works in conjunction with the kAudioConverterPrimeMethod
property,
which specifies the priming method used by the codec. When a priming
method is in use, the members of this structure are used to specify
the number of leading and trailing frames (when using kConverterPrimeMethod_Pre
),
or just the number of trailing frames (when using kConverterPrimeMethod_Normal
),
for all input packets.
AudioConverter.h
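To make the leading and trailing counts concrete, here is a sketch of the arithmetic they imply for an input packet. This is illustrative only (hypothetical names; not how the converter is actually driven): with the Pre method the codec is fed extra leading and trailing frames around the payload, while the Normal method supplies only trailing frames.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for AudioConverterPrimeInfo's two counts.
struct PrimeInfo { uint32_t leadingFrames, trailingFrames; };

// Frames a caller would supply for a packet of payloadFrames:
// Pre method adds leading and trailing frames; Normal adds trailing only.
uint32_t framesToSupply(const PrimeInfo& p, uint32_t payloadFrames,
                        bool preMethod) {
    uint32_t extra = p.trailingFrames + (preMethod ? p.leadingFrames : 0);
    return payloadFrames + extra;
}
```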
Constants are provided for the developer’s convenience. They provide a consistent set of values for various aspects of a converter’s operations, and may be appended by the developer at any time.
Used by kAudioConverterSampleRateConverterQuality
to
set the relative quality of the conversion.
kAudioConverterQuality_Max    = 0x7F
kAudioConverterQuality_High   = 0x60
kAudioConverterQuality_Medium = 0x40
kAudioConverterQuality_Low    = 0x20
kAudioConverterQuality_Min    = 0
Note: The relative quality of a conversion is an arbitrary aspect of the codec used, and may or may not alter the quality of the resulting conversion.
Specifies the priming method currently in use, as referenced
in the kAudioConverterPrimeMethod
property.
kConverterPrimeMethod_Pre
= 0
When kAudioConverterPrimeMethod
is
set to this value, the converter will expect that the packet be
primed with both leading and trailing frames.
kConverterPrimeMethod_Normal
= 1
Set kAudioConverterPrimeMethod
to
this when the trailing frames are needed for the conversion; leading
frames are assumed to be silent.
kConverterPrimeMethod_None
= 2
Used when the leading and trailing frames are assumed to be silent and priming is not needed.
The properties are used to query a converter for its settings and, for writable properties, to modify them.
kAudioConverterPropertyMinimumInputBufferSize
= 'mibs'
Returns a UInt32
containing
the smallest input buffer size, in bytes, that can be
supplied into the AudioConverterConvertBuffer()
function
or the AudioConverterInputDataProc
callback.
kAudioConverterPropertyMinimumOutputBufferSize
= 'mobs'
Returns a UInt32
containing
the size of the smallest buffer that will be returned as a result
of AudioConverterConvertBuffer()
or AudioConverterFillBuffer()
.
kAudioConverterPropertyMaximumInputBufferSize
= 'xibs'
Returns a UInt32
containing
the largest buffer size that will be requested by AudioConverterInputDataProc
;
returns 0xFFFFFFFF
if
the value depends on the size of the input.
kAudioConverterPropertyMaximumInputPacketSize
= 'xips'
Returns a UInt32
containing
the size, in bytes, of the largest packet of data that may be input.
kAudioConverterPropertyMaximumOutputPacketSize
= 'xops'
Returns a UInt32
containing
the size, in bytes, of the largest packet of data that will be output.
kAudioConverterPropertyCalculateInputBufferSize
= 'cibs'
On input, takes a UInt32
with
the desired output size, in bytes; returns the number of bytes needed
as input to generate the requested output.
kAudioConverterPropertyCalculateOutputBufferSize
= 'cobs'
On input, takes a UInt32
with
the desired input size, in bytes; returns the number of bytes returned
as output for the requested input.
kAudioConverterPropertyInputCodecParameters
= 'icdp'
Takes in a buffer of untyped data for private, format-specific use.
kAudioConverterPropertyOutputCodecParameters
= 'ocdp'
Takes in a buffer of untyped data for private, format-specific use.
kAudioConverterSampleRateConverterAlgorithm
= 'srci'
Deprecated. Use kAudioConverterSampleRateConverterQuality
instead.
kAudioConverterSampleRateConverterQuality
= 'srcq'
Specifies the quality of the sample rate conversion, using the “Converter Quality Settings.”
kAudioConverterPrimeMethod
= 'prmm'
Specifies the priming method, using the “Priming Method Selectors.”
kAudioConverterPrimeInfo
= 'prim'
Returns a pointer to an AudioConverterPrimeInfo
instance.
kAudioConverterChannelMap
= 'chmp'
Takes an array of SInt32
values
where the index represents an output channel and the value stored
at the index in the array is the connecting input channel; the size
of the array is the number of output channels.
kAudioConverterDecompressionMagicCookie
= 'dmgc'
Takes a void
pointer
to the magic cookie that may be required to decompress the
data.
kAudioConverterCompressionMagicCookie
= 'cmgc'
Returns a void
pointer
to the magic cookie used to compress the output data; may be
passed back via kAudioConverterDecompressionMagicCookie
for decompressing
the data.
Creates a new audio converter.
extern OSStatus AudioConverterNew( const AudioStreamBasicDescription* inSourceFormat, const AudioStreamBasicDescription* inDestinationFormat, AudioConverterRef* outAudioConverter );
This function takes in two AudioStreamBasicDescription
instances,
one for the source, and one for the destination, sets up all of
the internal links needed for the conversion, and returns a pointer
for the new converter. If the setup fails, the returned
OSStatus identifies the error that was encountered.
AudioConverter.h
Destroys an audio converter.
extern OSStatus AudioConverterDispose(AudioConverterRef inAudioConverter);
This function deallocates the memory used by inAudioConverter.
AudioConverter.h
Resets the audio converter to its post-initialization state.
extern OSStatus AudioConverterReset(AudioConverterRef inAudioConverter);
AudioConverter.h
Retrieves the size and writable state of the data belonging to the queried property.
extern OSStatus AudioConverterGetPropertyInfo( AudioConverterRef inAudioConverter, AudioConverterPropertyID inPropertyID, UInt32* outSize, Boolean* outWritable );
The outSize
value
returned reflects the size, in bytes, of the data returned by calling AudioConverterGetProperty()
with
the respective property.
AudioConverter.h
Returns the requested property data.
extern OSStatus AudioConverterGetProperty( AudioConverterRef inAudioConverter, AudioConverterPropertyID inPropertyID, UInt32* ioPropertyDataSize, void* outPropertyData );
The ioPropertyDataSize parameter
should be the value obtained from calling AudioConverterGetPropertyInfo()
;
the output value of ioPropertyDataSize
will
be the actual size of the returned data.
AudioConverter.h
Sets the property data to inPropertyData
.
extern OSStatus AudioConverterSetProperty( AudioConverterRef inAudioConverter, AudioConverterPropertyID inPropertyID, UInt32 inPropertyDataSize, const void* inPropertyData );
The inPropertyDataSize should be the size of data being input, and inPropertyData should point to the data to be set for inPropertyID.
AudioConverter.h
Should provide data for AudioConverterFillBuffer()
.
typedef OSStatus (*AudioConverterInputDataProc) ( AudioConverterRef inAudioConverter, UInt32* ioDataSize, void** outData, void* inUserData );
Deprecated. On input, ioDataSize
will
be the amount of data the converter needs to fill its buffer; on
output, this value should reflect the amount of the data provided
(if there is no more input data available, 0
should
be returned).
Fills the provided buffer with converted data.
extern OSStatus AudioConverterFillBuffer( AudioConverterRef inAudioConverter, AudioConverterInputDataProc inInputDataProc, void* inInputDataProcUserData, UInt32* ioOutputDataSize, void* outOutputData );
Deprecated. Uses the provided inInputDataProc callback to acquire data, converts it, and places the converted data in outOutputData. This function is deprecated because it works with only a single buffer; use AudioConverterFillComplexBuffer() instead.
AudioConverter.h
Should provide AudioConverterFillComplexBuffer()
with
data for conversion.
typedef OSStatus (*AudioConverterComplexInputDataProc) ( AudioConverterRef inAudioConverter, UInt32* ioNumberDataPackets, AudioBufferList* ioData, AudioStreamPacketDescription** outDataPacketDescription, void* inUserData );
AudioConverterFillComplexBuffer()
will
use this callback to acquire data to convert. The returned data
will be an AudioBufferList
,
meaning that the data should be in separate buffers, one for each
channel. Use inUserData
for
any data the callback may need passed to it. The caller will pass
the number of packets requested in ioNumberDataPackets
,
and upon completion, the callback should return the number of packets
actually provided, or 0
if
there is no data left to provide. The resulting packet format is
specified in outDataPacketDescription
.
Fills the AudioBufferList
with
converted data.
extern OSStatus AudioConverterFillComplexBuffer( AudioConverterRef inAudioConverter, AudioConverterComplexInputDataProc inInputDataProc, void* inInputDataProcUserData, UInt32* ioOutputDataPacketSize, AudioBufferList* outOutputData, AudioStreamPacketDescription* outPacketDescription );
Using the callback provided in inInputDataProc
,
this function will convert input data using inAudioConverter
and
will place the resulting converted data in outOutputData
.
Any relevant data for the callback should be passed in via inInputDataProcUserData
,
while outPacketDescription
will contain
the format of the returned data. On input, ioOutputDataPacketSize should
contain the number of packets requested, and as output, will contain
the number of packets returned.
AudioConverter.h
These values are returned when errors occur.
kAudioConverterErr_FormatNotSupported = 'fmt?'
kAudioConverterErr_OperationNotSupported = 0x6F703F3F
kAudioConverterErr_PropertyNotSupported = 'prop'
kAudioConverterErr_InvalidInputSize = 'insz'
kAudioConverterErr_InvalidOutputSize = 'otsz'
kAudioConverterErr_UnspecifiedError = 'what'
kAudioConverterErr_BadPropertySizeError = '!siz'
kAudioConverterErr_RequiresPacketDescriptionsError = '!pkd'
The audio format system is provided to allow the developer
to get more information about certain aspects of AudioStreamBasicDescription
and AudioChannelLayout
instances,
and other important pieces of information.
The AudioFormatPropertyID
typedef is
used to hold the property ID being queried using the audio format
functions.
typedef UInt32 AudioFormatPropertyID
Stores information about the position of sound sources.
typedef struct AudioPanningInfo {
    UInt32 mPanningMode;
    UInt32 mCoordinateFlags;
    Float32 mCoordinates[3];
    AudioChannelLayout* mOutputChannelMap;
} AudioPanningInfo;
The mPanningMode value is based
on the panning mode constants. The value of mCoordinateFlags will
be based on the Coordinate Flag constants. The precise coordinates
of the source are stored in mCoordinates,
and the mOutputChannelMap points
to an instance of an AudioChannelLayout
(specified
in CoreAudioTypes.h
),
which tracks channel layouts in hardware and in files.
AudioFormat.h
Constants are provided for the developer’s convenience. They provide a consistent set of values for various aspects of a converter’s operations.
These constants define various panning algorithms that can
be specified in an AudioPanningInfo
instance.
kPanningMode_SoundField
= 3
An Ambisonic format.
kPanningMode_VectorBasedPanning
= 4
A format for panning between two speakers.
Used by the mCoordinateFlags
value
in the AudioPanningInfo
structure; found in CoreAudioTypes.h
.
kAudioChannelFlags_RectangularCoordinates
= (1L<<0)
Use if Cartesian coordinates are used for speaker positioning; either this or spherical coordinates must be chosen.
kAudioChannelFlags_SphericalCoordinates
= (1L<<1)
Use if spherical coordinates are used for speaker positioning; either this or Cartesian coordinates must be chosen.
kAudioChannelFlags_Meters
= (1L<<2)
Use when units are in meters; if not set, then the units are relative to the coordinate system chosen.
The Audio Format API uses the property system to get various pieces of information about structures used in Core Audio.
When the specifier parameter for AudioFormatGetPropertyInfo()
and AudioFormatGetProperty()
is
an AudioStreamBasicDescription
instance,
these properties may be queried.
kAudioFormatProperty_FormatInfo
= 'fmti'
Returns an AudioStreamBasicDescription
whose
values contain information about the specifier’s format.
kAudioFormatProperty_FormatIsVBR
= 'fvbr'
Returns a UInt32
where
a non-zero value means that the format has a variable bit rate (VBR).
kAudioFormatProperty_FormatIsExternallyFramed
= 'fexf'
Returns a UInt32
,
where a non-zero value indicates that the format is externally framed.
kAudioFormatProperty_FormatName
= 'fnam'
Returns a CFStringRef
containing
the name of the specified format.
kAudioFormatProperty_AvailableEncodeChannelLayouts
= 'aecl'
Takes in an AudioStreamBasicDescription
and
returns an AudioChannelLayoutTag
array containing
Audio Channel Layout constants.
kAudioFormatProperty_ChannelLayoutForTag
= 'cmpl'
Takes a “Channel Layout Tags” value
(as specified in CoreAudioTypes.h
)
as the specifier and returns an AudioChannelLayout
with all
of its members filled with their respective versions of the input data.
kAudioFormatProperty_TagForChannelLayout
= 'cmpt'
Takes an AudioChannelLayout
as
the specifier and returns the AudioChannelLayoutTag corresponding to that layout.
kAudioFormatProperty_ChannelLayoutForBitmap
= 'cmpb'
Takes in a UInt32
that
contains a layout bitmap and returns an AudioChannelLayout
with
all of its members filled with their respective versions of the
input data.
kAudioFormatProperty_BitmapForLayoutTag
= 'bmtg'
Takes in a “Channel Layout Tags” value
and returns a UInt32
with the
bitmap of the channel layout.
kAudioFormatProperty_ChannelLayoutName
= 'lonm'
Takes in an AudioChannelLayout
and
returns a CFStringRef
with
the name of the channel layout.
kAudioFormatProperty_ChannelName
= 'cnam'
Takes in an AudioChannelDescription
with
a populated mChannelLabel
value,
and returns a CFStringRef
with
the name of the channel.
kAudioFormatProperty_MatrixMixMap
= 'mmap'
Takes in an array of two AudioChannelLayout
pointers,
the first to the input and the second to the output, and returns
a two dimensional Float32
array,
with the input being the rows and the output being the columns,
where the value at a coordinate is the gain that needs to be applied
to the input to achieve the output at that channel.
kAudioFormatProperty_NumberOfChannelsForLayout
= 'nchm'
Takes in an AudioChannelLayout
as
the specifier and returns a UInt32
with
the number of channels represented in the layout.
kAudioFormatProperty_PanningMatrix
= 'panm'
Takes in an AudioPanningInfo
instance
and returns a Float32
array
where each channel receives a volume level for each channel in the AudioPanningInfo
’s AudioChannelLayout
array.
These are other properties that involve discovering encoding and decoding formats and available sample and bit rates.
kAudioFormatProperty_EncodeFormatIDs
= 'acif'
Does not take a specifier (set to NULL
),
and returns a UInt32
array
containing “Format IDs” (specified
in CoreAudioTypes.h
)
for valid input formats to a converter.
kAudioFormatProperty_DecodeFormatIDs
= 'acof'
Does not take a specifier (set to NULL
),
and returns a UInt32
array
containing Format ID constants for valid output formats from a converter.
kAudioFormatProperty_AvailableEncodeSampleRates
= 'aesr'
Takes in a Format ID constant and returns an array of AudioValueRange values describing the available sample rates.
kAudioFormatProperty_AvailableEncodeBitRates
= 'aebr'
Takes in a Format ID constant and returns an array of AudioValueRange values describing the available bit rates.
These functions comprise the Audio Format property management system. Each call takes a property ID, which selects the operation to perform, and a specifier, which is the data on which the operation is performed.
Retrieves the size of the data to be returned by the property.
extern OSStatus AudioFormatGetPropertyInfo( AudioFormatPropertyID inPropertyID, UInt32 inSpecifierSize, void* inSpecifier, UInt32* outPropertyDataSize );
AudioFormat.h
Retrieves the property information for the given property ID and selected specifier.
extern OSStatus AudioFormatGetProperty( AudioFormatPropertyID inPropertyID, UInt32 inSpecifierSize, void* inSpecifier, UInt32* ioPropertyDataSize, void* outPropertyData );
AudioFormat.h
These values are returned when errors occur.
kAudioFormatUnspecifiedError = 'what'
kAudioFormatUnsupportedPropertyError = 'prop'
kAudioFormatBadPropertySizeError = '!siz'
kAudioFormatBadSpecifierSizeError = '!spc'
kAudioFormatUnsupportedDataFormatError = 'fmt?'
kAudioFormatUnknownFormatError = '!fmt'
The Audio File API allows for opening and saving audio files in various formats, for later use.
Typedefs are used to simplify the declaration of converters and the use of properties in the context of an audio file.
typedef struct OpaqueAudioFileID *AudioFileID
typedef UInt32 AudioFilePropertyID
Used by the kAudioGlobalInfo_AvailableStreamDescriptionForFormat
property
to query for AudioStreamBasicDescriptions
based
on format and file type.
typedef struct AudioFileTypeAndFormatID {
    UInt32 mFileType;
    UInt32 mFormatID;
} AudioFileTypeAndFormatID;
The value of mFileType
is
a “File Types” value,
while mFormatID
is from
the “Format IDs” in CoreAudioTypes.h
.
Constants are provided for the developer’s convenience. They provide a consistent set of values for various aspects of an audio file.
These constants are used to specify file types when using functions and structures related to audio files.
kAudioFileAIFFType = 'AIFF'
kAudioFileAIFCType = 'AIFC'
kAudioFileWAVEType = 'WAVE'
kAudioFileSoundDesigner2Type = 'Sd2f'
kAudioFileNextType = 'NeXT'
kAudioFileMP3Type = 'MPG3'
kAudioFileAC3Type = 'ac-3'
kAudioFileAAC_ADTSType = 'adts'
The Audio File API uses the property system to get and set information about files and global settings.
These properties are to be used when getting and setting information about a particular audio file.
kAudioFilePropertyFileFormat
= 'ffmt'
Passes a UInt32
that
identifies the file’s type, using the “File Types” constants.
kAudioFilePropertyDataFormat
= 'dfmt'
Passes an AudioStreamBasicDescription
that
describes the file’s format.
kAudioFilePropertyIsOptimized
= 'optm'
Returns a UInt32
with
either a value of 0
,
meaning that the file is not optimized, and therefore, not ready
to be written to, or a value of 1
,
meaning that the file is currently optimized.
kAudioFilePropertyMagicCookieData
= 'mgic'
Passes a void
pointer
to memory set up for use as a magic cookie.
kAudioFilePropertyAudioDataByteCount
= 'bcnt'
Passes a UInt64
that
contains the size of the audio data in the file, in bytes.
kAudioFilePropertyAudioDataPacketCount
= 'pcnt'
Passes a UInt64
that
contains the size of the audio data in the file, in packets.
kAudioFilePropertyMaximumPacketSize
= 'psze'
Passes a UInt32
that
contains the maximum packet size in the file.
kAudioFilePropertyDataOffset
= 'doff'
Passes an SInt64
that
contains the offset at which the audio data begins within the file.
kAudioFilePropertyChannelLayout
= 'cmap'
Passes an AudioChannelLayout
,
specified in CoreAudioTypes.h
,
used in the file.
kAudioFilePropertyDeferSizeUpdates
= 'dszu'
Passes a UInt32
where
a value of 1
means that
the file size information in the file header is updated only when
the file is read, optimized, or closed; a value of 0
denotes
that the header is updated with every write.
kAudioFilePropertyDataFormatName
= 'fnme'
Deprecated in favor of the kAudioFormatProperty_FormatName
property,
available from Audio Format “Audio Format Properties.”
The Global Info properties are used to retrieve general information about the environment in use. Many of these properties require a specifier: in addition to the property ID, a piece of data that scopes the query is passed in.
kAudioFileGlobalInfo_ReadableTypes
= 'afrf'
Takes NULL
as
its specifier, and returns a UInt32
array
containing the File Type constants which are readable.
kAudioFileGlobalInfo_WritableTypes
= 'afwf'
Takes NULL
as
its specifier, and returns a UInt32
array
containing the File Type constants which are writable.
kAudioFileGlobalInfo_FileTypeName
= 'ftnm'
Takes a UInt32
containing
a File Type constant as its specifier and returns a CFString
containing
the name of the file type.
kAudioFileGlobalInfo_ExtensionsForType
= 'fext'
Takes a UInt32
containing
a File Type constant as its specifier and returns a CFArray
of CFString
values
containing the file extensions recognized for this file type.
kAudioFileGlobalInfo_AllExtensions
= 'alxt'
Takes NULL
as
its specifier and returns a CFArray
of CFString
values
containing all of the recognizable file extensions.
kAudioFileGlobalInfo_AvailableFormatIDs
= 'fmid'
Takes a UInt32
containing
a File Type constant as its specifier and returns a UInt32
array
containing Format ID constants for formats readable by Audio File.
kAudioFileGlobalInfo_AvailableStreamDescriptionsForFormat
= 'sdid'
Takes an AudioFileTypeAndFormatID
instance
as its specifier and returns an AudioStreamBasicDescription
array
whose elements correspond with the elements in the specifier.
These functions are provided to access the functionality of the Audio File API.
Creates a new file using the descriptions provided.
extern OSStatus AudioFileCreate( const FSRef *inParentRef, CFStringRef inFileName, UInt32 inFileType, const AudioStreamBasicDescription *inFormat, UInt32 inFlags, FSRef *outNewFileRef, AudioFileID *outAudioFile );
The directory into which the file is to be placed is provided with inParentRef
,
the name of the file is contained within inFileName
,
a File Type constant must be provided with inFileType
,
the format must be specified using inFormat
, inFlags
contains
flags for opening and creating the file (currently undefined; should
be set to 0
), and outNewFileRef
is
provided for file system use, while outAudioFile is
for use with other audio file functions.
AudioFile.h
Wipes clean an existing file to prepare it for writing.
extern OSStatus AudioFileInitialize( const FSRef *inFileRef, UInt32 inFileType, const AudioStreamBasicDescription *inFormat, UInt32 inFlags, AudioFileID *outAudioFile );
The inFileRef
is
the file to be initialized, with the inFileType
being
a File Type constant value, inFormat
being
an AudioStreamBasicDescription
specifying
the format for the file, inFlags
being relevant
creation and opening flags (currently undefined; should be set to 0
),
and outAudioFile
being
an AudioFileID
for use
with other audio file functions.
AudioFile.h
Opens a file while preserving its contents.
extern OSStatus AudioFileOpen ( const FSRef *inFileRef, SInt8 inPermissions, UInt32 inFlags, AudioFileID *outAudioFile );
The inFileRef
should
be a reference to an existing file, inPermissions
being
the permissions for the file, as used by FSOpenFork()
,
and inFlags
, currently
undefined, should be set to 0
; outAudioFile
is
a file instance that will be returned for use in other audio file
functions.
AudioFile.h
Should read the contents of a file.
typedef OSStatus (*AudioFile_ReadProc)( void * inRefCon, SInt64 inPosition, ByteCount requestCount, void *buffer, ByteCount* actualCount );
This callback needs to be provided by the developer for the
purpose of reading the audio data for use with AudioFileInitializeWithCallbacks()
and AudioFileOpenWithCallbacks()
.
Constants for use by the callback are passed in via inRefCon
,
the position to be read from will be passed in via inPosition
,
the number of bytes requested is passed in via requestCount
,
the read data should be placed in buffer, and actualCount returns the number of bytes actually read.
Should write the given buffer to the file.
typedef OSStatus (*AudioFile_WriteProc)( void * inRefCon, SInt64 inPosition, ByteCount requestCount, const void *buffer, ByteCount* actualCount );
This callback needs to be provided by the developer for the
purpose of writing to a file. Constants for use by the callback
are passed in via inRefCon
,
the position to be written to will be passed in via inPosition
,
the number of bytes requested is passed in via requestCount
,
the data to be written is passed in via buffer
,
and actualCount
returns
the number of bytes written.
Should provide the size of the file to the caller.
typedef SInt64 (*AudioFile_GetSizeProc)(void * inRefCon);
This callback should return an SInt64
with
the audio stream data size to the caller. If any constants need
to be passed to the callback, their values should be pointed to
by inRefCon
.
Should set the file size to the passed value.
typedef OSStatus (*AudioFile_SetSizeProc)( void * inRefCon, SInt64 inSize );
This callback should set the size of the file to inSize
,
while inRefCon
is provided
to pass any needed arguments to the callback.
Initializes an audio file using the provided callbacks.
extern OSStatus AudioFileInitializeWithCallbacks( void * inRefCon, AudioFile_ReadProc inReadFunc, AudioFile_WriteProc inWriteFunc, AudioFile_GetSizeProc inGetSizeFunc, AudioFile_SetSizeProc inSetSizeFunc, UInt32 inFileType, const AudioStreamBasicDescription *inFormat, UInt32 inFlags, AudioFileID *outAudioFile );
This function will wipe the data target clean and set the various attributes using inFileType, inFormat, and inFlags. The callbacks need to be provided by the developer, according to the callback specifications elsewhere in this reference. Upon completion, outAudioFile will contain a reference to a file instance, for use with other audio file functions.
AudioFile.h
Opens the file and prepares it for use.
extern OSStatus AudioFileOpenWithCallbacks( void * inRefCon, AudioFile_ReadProc inReadFunc, AudioFile_WriteProc inWriteFunc, AudioFile_GetSizeProc inGetSizeFunc, AudioFile_SetSizeProc inSetSizeFunc, UInt32 inFlags, AudioFileID *outAudioFile );
Using this function will prepare the target data, while the callbacks specified here will be used when reading, writing, and modifying the data. This function is provided to allow for the use of Audio File’s APIs with sources other than files.
Closes the file.
extern OSStatus AudioFileClose(AudioFileID inAudioFile);
AudioFile.h
Optimizes the file.
extern OSStatus AudioFileOptimize(AudioFileID inAudioFile);
Optimizing a file will prepare it for any data which may be
appended to the end of it. This is a costly operation and should
not be performed during a process-intensive routine. The kAudioFilePropertyIsOptimized
property
is available to determine whether or not the file is optimized.
AudioFile.h
Reads in a certain number of bytes from the file.
extern OSStatus AudioFileReadBytes( AudioFileID inAudioFile, Boolean inUseCache, SInt64 inStartingByte, UInt32 *ioNumBytes, void *outBuffer );
Here, inAudioFile
is
the file being read from, inStartingByte
is
the point from which to read, ioNumBytes
being
the amount to read, and outBuffer
is
where the read data is stored. To cache the read, set inUseCache
to
true.
AudioFile.h
Writes the contents of the buffer to the file.
extern OSStatus AudioFileWriteBytes( AudioFileID inAudioFile, Boolean inUseCache, SInt64 inStartingByte, UInt32 *ioNumBytes, void *inBuffer );
Specify the file to be written to with inAudioFile,
where to write within the file by specifying inStartingByte
,
how much is to be written using ioNumBytes
(and
verifying how much was written at output, as well), and the data
to be written should be pointed to by inBuffer
.
To cache the written data, set inUseCache
to
true.
AudioFile.h
Reads in a certain number of packets from the input file.
extern OSStatus AudioFileReadPackets( AudioFileID inAudioFile, Boolean inUseCache, UInt32 *outNumBytes, AudioStreamPacketDescription *outPacketDescriptions, SInt64 inStartingPacket, UInt32 *ioNumPackets, void *outBuffer );
This function reads in the contents of the file by packet,
starting at inStartingPacket
.
The packets that have been read are described in outPacketDescriptions
,
while the number of packets is specified in ioNumPackets
(with
the actual number of packets read returned there on output), and the
size, in bytes, of the read in packets returned in outNumBytes
.
If the read should be cached, set inUseCache
to
true.
AudioFile.h
Writes the buffer to the file, by packets.
extern OSStatus AudioFileWritePackets( AudioFileID inAudioFile, Boolean inUseCache, UInt32 inNumBytes, AudioStreamPacketDescription *inPacketDescriptions, SInt64 inStartingPacket, UInt32 *ioNumPackets, void *inBuffer );
When writing to inAudioFile
,
specify the starting index as inStartingPacket
,
the format of the packets as defined in inPacketDescriptions
,
the size of the write as inNumBytes
,
and the number of packets to be written in ioNumPackets
.
If the write should be cached, set inUseCache
to
true.
AudioFile.h
Returns the size of the data that will be returned for the property.
extern OSStatus AudioFileGetPropertyInfo( AudioFileID inAudioFile, AudioFilePropertyID inPropertyID, UInt32 *outDataSize, UInt32 *isWritable );
The file being queried should be passed in as inAudioFile
,
while the property being queried is passed in as inPropertyID
.
The size of the resulting data is returned in outDataSize
,
and isWritable
will reflect
if the data is modifiable.
AudioFile.h
Returns the data for the specified property.
extern OSStatus AudioFileGetProperty( AudioFileID inAudioFile, AudioFilePropertyID inPropertyID, UInt32 *ioDataSize, void *outPropertyData );
The file and property being queried should be specified in inAudioFile and inPropertyID, respectively,
with the size retrieved with AudioFileGetPropertyInfo()
passed
into ioDataSize, and the resulting
data being placed in outPropertyData.
AudioFile.h
Sets the data for the respective property.
extern OSStatus AudioFileSetProperty( AudioFileID inAudioFile, AudioFilePropertyID inPropertyID, UInt32 inDataSize, const void *inPropertyData );
The file and property being set should be specified in inAudioFile
and inPropertyID
,
respectively, with the size of the data being written passed into inDataSize
,
and the data being written coming from inPropertyData
.
AudioFile.h
Calculates the size of the data that will be returned for the property.
extern OSStatus AudioFileGetGlobalInfoSize( AudioFilePropertyID inPropertyID, UInt32 inSpecifierSize, void *inSpecifier, UInt32 *outDataSize );
Gets the size of the inPropertyID
for
the inSpecifier
and places
it in outDataSize
.
AudioFile.h
Retrieves the data for the queried property and specifier.
extern OSStatus AudioFileGetGlobalInfo( AudioFilePropertyID inPropertyID, UInt32 inSpecifierSize, void *inSpecifier, UInt32 *ioDataSize, void *outPropertyData );
This function takes inPropertyID
and
returns outPropertyData
based
on inSpecifier
.
AudioFile.h
These values are returned when errors occur.
kAudioFileUnspecifiedError = 'wht?'
kAudioFileUnsupportedFileTypeError = 'typ?'
kAudioFileUnsupportedDataFormatError = 'fmt?'
kAudioFileUnsupportedPropertyError = 'pty?'
kAudioFileBadPropertySizeError = '!siz'
kAudioFilePermissionsError = 'prm?'
kAudioFileNotOptimizedError = 'optm'
kAudioFileFormatNameUnavailableError = 'nme?'
kAudioFileInvalidChunkError = 'chk?'
kAudioFileDoesNotAllow64BitDataSizeError = 'off?'
kAudioFileInvalidPacketOffsetError = 'pck?'
kAudioFileInvalidFileError = 'dta?'
kAudioFileOperationNotSupportedError = 0x6F703F3F
The AUGraph API allows for creating graphs of Audio Units for processing audio data.
Typedefs are used to simplify the declaration of converters and the use of properties in the context of a graph.
typedef SInt32 AUNode
typedef struct OpaqueAUGraph *AUGraph
Used to symbolize the connection between two nodes.
typedef struct AudioUnitNodeConnection {
    AUNode sourceNode;
    UInt32 sourceOutputNumber;
    AUNode destNode;
    UInt32 destInputNumber;
} AudioUnitNodeConnection;
AUGraph.h
These functions are provided to access the functionality of the AUGraph API.
Creates a new AUGraph
instance.
extern OSStatus NewAUGraph(AUGraph *outGraph);
AUGraph.h
Destroys an AUGraph
instance.
extern OSStatus DisposeAUGraph(AUGraph inGraph);
AUGraph.h
Creates a new node inside of the specified graph.
extern OSStatus AUGraphNewNode( AUGraph inGraph, const ComponentDescription *inDescription, UInt32 inClassDataSize, const void *inClassData, AUNode *outNode );
The graph to which the new node is to be added is set in inGraph,
while the node to be added is specified using a ComponentDescription
,
obtained from the Component Manager. The value of inClassData is
a CFPropertyList
containing the serialized
data of a saved state. The function returns outNode for
future reference to the newly created node.
AUGraph.h
Adds a new subgraph within the graph.
extern OSStatus AUGraphNewNodeSubGraph( AUGraph inGraph, AUNode *outNode );
The subgraph node pointed to by outNode may be populated as if it were a graph in itself. The entire graph becomes active when the subgraph node is connected to the rest of the graph, and it is deactivated when it is disconnected.
AUGraph.h
Removes the specified node from the graph.
extern OSStatus AUGraphRemoveNode( AUGraph inGraph, AUNode inNode );
AUGraph.h
Returns the number of nodes in the current graph.
extern OSStatus AUGraphGetNodeCount( AUGraph inGraph, UInt32 *outNumberOfNodes );
AUGraph.h
Returns a pointer to the node at the specified index.
extern OSStatus AUGraphGetIndNode( AUGraph inGraph, UInt32 inIndex, AUNode *outNode );
The index for the node is arbitrarily assigned when the node is added to the graph.
AUGraph.h
Returns information about a node.
extern OSStatus AUGraphGetNodeInfo( AUGraph inGraph, AUNode inNode, ComponentDescription *outDescription, UInt32 *outClassDataSize, void **outClassData, AudioUnit *outAudioUnit );
This function retrieves various pieces of information about a graph’s nodes, which may be saved and used to rebuild the graph later using AUGraphNewNode(). The graph and the node in question are passed as inGraph and inNode, respectively. Upon output, outDescription points to a ComponentDescription provided by the Component Manager, and outClassData points to a CFPropertyListRef, which may be saved and used to rebuild the graph later on. The node’s Audio Unit is pointed to by outAudioUnit. The outClassDataSize parameter is currently not used, and will return 0.
AUGraph.h
Returns a pointer to a subgraph.
extern OSStatus AUGraphGetNodeInfoSubGraph( const AUGraph inGraph, const AUNode inNode, AUGraph *outSubGraph );
AUGraph.h
Indicates if a node is a subgraph.
extern OSStatus AUGraphIsNodeSubGraph( const AUGraph inGraph, const AUNode inNode, Boolean* outFlag );
AUGraph.h
Connects the output of one graph node to the input of another.
extern OSStatus AUGraphConnectNodeInput( AUGraph inGraph, AUNode inSourceNode, UInt32 inSourceOutputNumber, AUNode inDestNode, UInt32 inDestInputNumber );
When connecting nodes together, the developer must specify how the output of one node maps to the input of another. To prevent fan-out, all output-to-input connections are one-to-one, though each node may have multiple inputs and outputs (indexed starting with 0).
AUGraph.h
Disconnects the input from the graph.
extern OSStatus AUGraphDisconnectNodeInput( AUGraph inGraph, AUNode inDestNode, UInt32 inDestInputNumber );
AUGraph.h
Clears all of the connections between all inputs and outputs.
extern OSStatus AUGraphClearConnections(AUGraph inGraph);
AUGraph.h
Returns the number of connections present in the graph.
extern OSStatus AUGraphGetNumberOfConnections( AUGraph inGraph, UInt32 *outNumberOfConnections );
AUGraph.h
Returns the number of connections that involve the specified node.
extern OSStatus AUGraphCountNodeConnections( AUGraph inGraph, AUNode inNode, UInt32 *outNumConnections );
AUGraph.h
Returns an array of the connections that involve the specified node.
extern OSStatus AUGraphGetNodeConnections( AUGraph inGraph, AUNode inNode, AudioUnitNodeConnection *outConnections, UInt32 *ioNumConnections );
This function returns an AudioUnitNodeConnection array containing information about all of the connections that involve inNode. On input, ioNumConnections should contain the value returned by AUGraphCountNodeConnections(); on output, it reflects the size of the returned array.
AUGraph.h
Returns information about a particular connection.
extern OSStatus AUGraphGetConnectionInfo( AUGraph inGraph, UInt32 inConnectionIndex, AUNode *outSourceNode, UInt32 *outSourceOutputNumber, AUNode *outDestNode, UInt32 *outDestInputNumber );
Passing a connection index returns information about that connection. The indices are arbitrarily assigned when the connections are made, and follow the ordering of the outConnections array returned by AUGraphGetNodeConnections().
AUGraph.h
Updates all changes made to the graph while it is running.
extern OSStatus AUGraphUpdate( AUGraph inGraph, Boolean *outIsUpdated );
When a graph is running, no changes actually occur to the graph until AUGraphUpdate() is called. All node connect and disconnect requests are queued until this function is called. When the graph is not running, all connect and disconnect requests are processed immediately, so AUGraphUpdate() is not necessary. If the value of outIsUpdated is NULL, the update will block all rendering until it is finished; a non-NULL value allows AUGraphUpdate() to return immediately. A true value in outIsUpdated indicates that all changes have been applied to the graph, whereas a false value means that some changes have not yet occurred.
AUGraph.h
Instantiates every Audio Unit in the graph.
extern OSStatus AUGraphOpen(AUGraph inGraph);
This function should be called after the initial set of nodes is added to the graph and connections have been made between them. This will instantiate the nodes, meaning that their properties will be ready for modification. Each node’s sample rate may also be set after the graph is opened.
AUGraph.h
Closes the graph and deallocates its Audio Unit nodes.
extern OSStatus AUGraphClose(AUGraph inGraph);
AUGraph.h
Initializes the graph and the connected Audio Units.
extern OSStatus AUGraphInitialize(AUGraph inGraph);
Invoking this function will activate the connections between nodes and will initialize all nodes that are part of a connection. It is important to note that if format changes occur, sample rates for output nodes must be set before this function is called.
AUGraph.h
Uninitializes the graph and all of the Audio Units.
extern OSStatus AUGraphUninitialize(AUGraph inGraph);
AUGraph.h
Begins audio rendering through the graph.
extern OSStatus AUGraphStart(AUGraph inGraph);
This function starts with the head node, always an output unit, and works through the graph to get to the inputs, pulls the data, and renders it through all of the Audio Units in the path leading to the head.
AUGraph.h
Stops all rendering through the graph.
extern OSStatus AUGraphStop(AUGraph inGraph);
AUGraph.h
Returns a boolean value indicating whether or not the graph is open.
extern OSStatus AUGraphIsOpen( AUGraph inGraph, Boolean *outIsOpen );
AUGraph.h
Returns a boolean value indicating whether or not the graph is initialized.
extern OSStatus AUGraphIsInitialized( AUGraph inGraph, Boolean *outIsInitialized );
AUGraph.h
Returns a boolean value indicating whether or not the graph is running.
extern OSStatus AUGraphIsRunning( AUGraph inGraph, Boolean *outIsRunning );
AUGraph.h
Returns the amount of load on the CPU.
extern OSStatus AUGraphGetCPULoad( AUGraph inGraph, Float32 *outCPULoad );
AUGraph.h
Specifies a callback for the render process.
extern OSStatus AUGraphSetRenderNotification( AUGraph inGraph, AudioUnitRenderCallback inCallback, void *inRefCon );
This function is intended for use when the graph contains Audio Units of type ‘aunt’. The callback is specified in inCallback, and is called before and after each audio render. Passing NULL to inCallback removes all callbacks from the notification. Multiple notifications are allowed.
AUGraph.h
Removes the specified callback from the notification.
extern OSStatus AUGraphRemoveRenderNotification( AUGraph inGraph, AudioUnitRenderCallback inCallback, void *inRefCon );
This function is intended for use when the graph contains Audio Units of type ‘aunt’.
AUGraph.h
Specifies a callback for the render process.
extern OSStatus AUGraphAddRenderNotify( AUGraph inGraph, AURenderCallback inCallback, void *inRefCon );
This function is intended for use when the graph contains Audio Units of type ‘auXX’, where XX is one of the various version 2 Audio Unit types specified in AudioUnit/AUComponent.h. The callback is specified in inCallback, and is called before and after each audio render. Passing NULL to inCallback removes all callbacks from the notification. Multiple notifications are allowed.
AUGraph.h
Removes the specified callback from the notification.
extern OSStatus AUGraphRemoveRenderNotify( AUGraph inGraph, AURenderCallback inCallback, void *inRefCon );
This function is intended for use when the graph contains Audio Units of type ‘auXX’, where XX is one of the various version 2 Audio Unit types specified in AudioUnit/AUComponent.h.
AUGraph.h
These values are returned when errors occur.
kAUGraphErr_NodeNotFound = -10860 |
kAUGraphErr_InvalidConnection = -10861 |
kAUGraphErr_OutputNodeErr = -10862 |
kAUGraphErr_CannotDoInCurrentContext = -10863 |
kAUGraphErr_InvalidAudioUnit = -10864 |
The Music Player and Music Sequence components allow for the sequencing of MIDI endpoints and audio units.
These typedefs are provided to support the different structures and functions in Music Player.
typedef UInt32 MusicSequenceLoadFlags
typedef UInt32 MusicEventType
typedef Float64 MusicTimeStamp
typedef struct OpaqueMusicPlayer *MusicPlayer
typedef struct OpaqueMusicSequence *MusicSequence
typedef struct OpaqueMusicTrack *MusicTrack
typedef struct OpaqueMusicEventIterator *MusicEventIterator
Stores information about a MIDI note event.
typedef struct MIDINoteMessage { UInt8 channel; UInt8 note; UInt8 velocity; UInt8 reserved; Float32 duration; } MIDINoteMessage;
channel
The channel number to which the note is assigned.
note
The value of the note to be played.
velocity
The volume at which the note is to be played.
reserved
Reserved for future use.
duration
The length of time that the note should be played, in beats.
This structure encapsulates the information needed to relay the properties of a note. An instance of this structure is used by the MusicTrackNewMIDINoteEvent function.
MusicPlayer.h
Stores the data for a MIDI channel event.
typedef struct MIDIChannelMessage { UInt8 status; UInt8 data1; UInt8 data2; UInt8 reserved; } MIDIChannelMessage;
status
The message and the channel it is to be relayed to.
data1
Data specific to the message.
data2
Data specific to the message.
reserved
Reserved for future use.
This structure encapsulates the information needed for a channel event to be used in a music track. An instance of this structure is used by the MusicTrackNewMIDIChannelEvent function.
MusicPlayer.h
Stores the information for any MIDI event.
typedef struct MIDIRawData { UInt32 length; UInt8 data[1]; } MIDIRawData;
length
The size of the space allocated for data.
data
The raw MIDI data to be stored; allocate as much space as needed for the data.
This structure encapsulates the data for an event where raw MIDI data is sent to an endpoint. An instance of this structure is used by the MusicTrackNewMIDIRawDataEvent function.
MusicPlayer.h
Stores the data for a MIDI meta event.
typedef struct MIDIMetaEvent { UInt8 metaEventType; UInt8 unused1; UInt8 unused2; UInt8 unused3; UInt32 dataLength; UInt8 data[1]; } MIDIMetaEvent;
metaEventType
Specifies the type of meta event this structure encapsulates.
unused1
An unused value.
unused2
An unused value.
unused3
An unused value.
dataLength
The size of the space allocated for data.
data
The meta data for this event.
This structure encapsulates the information needed to pass MIDI meta data, as found in standard MIDI files, to MIDI endpoints. An instance of this structure is used by the MusicTrackNewMetaEvent function.
MusicPlayer.h
Stores data for a user event.
typedef struct MusicEventUserData { UInt32 length; UInt8 data[1]; } MusicEventUserData;
length
The size, in bytes, of the value stored in data.
data
The data stored for this event.
This structure encapsulates the information used in a user event. An instance of this structure is used by the MusicTrackNewUserEvent function.
MusicPlayer.h
Stores the data for a playback note.
typedef struct ExtendedNoteOnEvent { MusicDeviceInstrumentID instrumentID; MusicDeviceGroupID groupID; Float32 duration; MusicDeviceNoteParams extendedParams; } ExtendedNoteOnEvent;
instrumentID
The instrument to be used by the Music Device.
groupID
The channel of the Music Device.
duration
The length of the note.
extendedParams
Any additional parameters that need to be sent to the Music Device.
This structure encapsulates the information needed to play back a note using a Music Device. An instance of this structure is used by the MusicTrackNewExtendedNoteEvent function.
MusicPlayer.h
Stores information regarding an event using a Music Device.
typedef struct ExtendedControlEvent { MusicDeviceGroupID groupID; AudioUnitParameterID controlID; Float32 value; } ExtendedControlEvent;
groupID
The channel of the Music Device to be controlled.
controlID
The Music Device parameter to be controlled.
value
The value to which the parameter should be set.
This structure encapsulates the information needed to control a Music Device. An instance of this structure is used by the MusicTrackNewExtendedControlEvent function.
MusicPlayer.h
Stores information for an event using Audio Units.
typedef struct ParameterEvent { AudioUnitParameterID parameterID; AudioUnitScope scope; AudioUnitElement element; Float32 value; } ParameterEvent;
parameterID
The parameter to be adjusted.
scope
The scope for this event.
element
The element to be controlled.
value
The value to be passed into the parameter.
This structure encapsulates the information needed to adjust an Audio Unit parameter. An instance of this structure is used by the MusicTrackNewParameterEvent function.
MusicPlayer.h
Specifies the tempo to be applied when this event occurs.
typedef struct ExtendedTempoEvent { Float64 bpm; } ExtendedTempoEvent;
bpm
The beats-per-minute to be used for the sequence from this event forward.
This structure encapsulates the information needed to change the tempo of the sequence at a certain point. An instance of this structure is used by the MusicTrackNewExtendedTempoEvent function.
MusicPlayer.h
Constants are provided for the developer’s convenience. They provide a consistent set of values for various aspects of the Music Player API.
These constants are used by the MusicEventIteratorGetEventInfo and MusicEventIteratorSetEventInfo functions to determine the type of event currently being pointed to by the iterator. Based on these values, the outEventData and inEventData pointers in these functions should point to instances of these types:
Constant | Data Type |
---|---|
kMusicEventType_ExtendedNote | ExtendedNoteOnEvent |
kMusicEventType_ExtendedControl | ExtendedControlEvent |
kMusicEventType_ExtendedTempo | ExtendedTempoEvent |
kMusicEventType_User | MusicEventUserData |
kMusicEventType_Meta | MIDIMetaEvent |
kMusicEventType_MIDINoteMessage | MIDINoteMessage |
kMusicEventType_MIDIChannelMessage | MIDIChannelMessage |
kMusicEventType_MIDIRawData | MIDIRawData |
kMusicEventType_Parameter | ParameterEvent |
In addition to these types, two other values may be returned:
kMusicEventType_NULL - Returned when a Music Track is empty.
kMusicEventType_Last - Returned when all of the events in a track have been iterated through.
These additional constants have been provided as flags or definitions of values for consistency.
kMusicSequenceLoadSMF_ChannelsToTracks - Used by MusicSequenceLoadSMFWithFlags and MusicSequenceLoadSMFDataWithFlags to indicate that when a standard MIDI file is read in with this flag set, the resulting sequence will have a track created for each channel in the MIDI file, as well as another track for all of the meta information.
kMusicTimeStamp_EndOfTrack - Used to provide an upper limit for the length of tracks; currently set to 1,000,000,000.0 beats.
These properties are used by MusicTrackSetProperty and MusicTrackGetProperty to define the status and certain values unique to the track.
kSequenceTrackProperty_LoopInfo = 0
Uses a structure containing a MusicTimeStamp indicating the length of the track and a long indicating the number of times the track is to be looped. If the track is to be looped infinitely, set this value to zero.
kSequenceTrackProperty_OffsetTime = 1
Uses a MusicTimeStamp to determine how many beats into the sequence the track should begin playing.
kSequenceTrackProperty_MuteStatus = 2
Uses a Boolean to mute the track.
kSequenceTrackProperty_SoloStatus = 3
Uses a Boolean to determine if the track is the only one to control an endpoint or node.
kSequenceTrackProperty_AutomatedParameters = 4
Uses a UInt32 to signify whether the track modifies a node’s parameters.
kSequenceTrackProperty_TrackLength = 5
Uses a MusicTimeStamp to reflect the length of the track, in beats.
These functions make up the functionality of the Music Player API.
These functions work with Music Player instances, which are used to play back a Music Sequence instance. Each Music Player may be associated with only one Music Sequence, and vice versa.
Creates a new Music Player instance.
extern OSStatus NewMusicPlayer(MusicPlayer *outPlayer);
MusicPlayer.h
Disposes of a Music Player.
extern OSStatus DisposeMusicPlayer(MusicPlayer inPlayer);
MusicPlayer.h
Associates a Music Player with a Music Sequence.
extern OSStatus MusicPlayerSetSequence( MusicPlayer inPlayer, MusicSequence inSequence );
MusicPlayer.h
Moves the Music Player’s playhead to the desired time, in beats.
extern OSStatus MusicPlayerSetTime( MusicPlayer inPlayer, MusicTimeStamp inTime );
In addition to moving the playhead, MusicPlayerSetTime() also prerolls the track up to the playhead, setting Audio Unit parameters and MIDI endpoints to where they should be at the playhead, based on the sequence prior to the playhead.
MusicPlayer.h
Returns the current placement of the playhead, in beats.
extern OSStatus MusicPlayerGetTime( MusicPlayer inPlayer, MusicTimeStamp *outTime );
MusicPlayer.h
Returns the host time equivalent to the number of beats provided.
extern OSStatus MusicPlayerGetHostTimeForBeats( MusicPlayer inPlayer, MusicTimeStamp inBeats, UInt64* outHostTime );
This function determines what value to return by analyzing the tempo track in the sequence and determining the amount of time that has passed when inBeats beats have occurred. It is only valid when called while a Music Player is playing.
MusicPlayer.h
Returns the beat for a given time.
extern OSStatus MusicPlayerGetBeatsForHostTime( MusicPlayer inPlayer, UInt64 inHostTime, MusicTimeStamp *outBeats);
This function determines what value to return by analyzing the tempo track in the sequence and determining the number of beats that have passed at inHostTime. It is only valid when called while a Music Player is playing.
MusicPlayer.h
Prepares a sequence to be played.
extern OSStatus MusicPlayerPreroll(MusicPlayer inPlayer);
Prerolling a player prepares the player’s sequence to be played. Calling this function will synchronize all of the tracks within the sequence, bringing MIDI endpoints to their correct state with respect to the playhead, while adjusting Audio Unit parameters as well. Adding an event prior to a playhead invalidates a preroll, and so this function should only be called after all events have been added, since this operation is rather costly.
MusicPlayer.h
Begins playback of a sequence.
extern OSStatus MusicPlayerStart(MusicPlayer inPlayer);
MusicPlayer.h
Halts the playback of a sequence.
extern OSStatus MusicPlayerStop(MusicPlayer inPlayer);
MusicPlayer.h
Returns a Boolean reflecting the current state of a player.
extern OSStatus MusicPlayerIsPlaying( MusicPlayer inPlayer, Boolean* outIsPlaying );
MusicPlayer.h
Sets a tempo multiplier for a sequence.
extern OSStatus MusicPlayerSetPlayRateScalar( MusicPlayer inPlayer, Float64 inScaleRate );
The value of inScaleRate is applied to the tempo track of the sequence, adjusting the playback tempo uniformly. For instance, if a tempo track is set up to be entirely 60 bpm, and a value of two is set as the inScaleRate, playback will occur at 120 bpm. The scale rate may not be negative (reverse playback is not allowed), and must be greater than zero (use MusicPlayerStop to stop playback instead).
MusicPlayer.h
Returns the current tempo multiplier.
extern OSStatus MusicPlayerGetPlayRateScalar( MusicPlayer inPlayer, Float64 *outScaleRate );
The play rate scalar is a multiplier applied to every tempo event in the tempo track of a sequence. At playback, every tempo in the tempo track is multiplied by this value, and the sequence then plays at the resulting tempo.
MusicPlayer.h
These functions work with Music Sequence instances, and are used to create, modify, and dispose of sequences.
Creates a new Music Sequence.
extern OSStatus NewMusicSequence(MusicSequence *outSequence);
After creation, a Music Sequence contains one track: the Tempo Track. This track determines the playback rate, in beats-per-minute (bpm), as determined by events placed along the track. Only events of type ExtendedTempoEvent are allowed in the tempo track. To get a pointer to the tempo track (to add and remove tempo events), use MusicSequenceGetTempoTrack.
MusicPlayer.h
Disposes of a Music Sequence.
extern OSStatus DisposeMusicSequence(MusicSequence inSequence);
MusicPlayer.h
Creates a new event track within a sequence.
extern OSStatus MusicSequenceNewTrack( MusicSequence inSequence, MusicTrack *outTrack );
MusicPlayer.h
Removes and disposes of a track from within a sequence.
extern OSStatus MusicSequenceDisposeTrack( MusicSequence inSequence, MusicTrack inTrack );
MusicPlayer.h
Returns the number of tracks within a sequence.
extern OSStatus MusicSequenceGetTrackCount( MusicSequence inSequence, UInt32 *outNumberOfTracks );
MusicPlayer.h
Returns a pointer to the track at an index.
extern OSStatus MusicSequenceGetIndTrack( MusicSequence inSequence, UInt32 inTrackIndex, MusicTrack *outTrack );
MusicPlayer.h
Returns the index within the sequence for the given track.
extern OSStatus MusicSequenceGetTrackIndex( MusicSequence inSequence, MusicTrack inTrack, UInt32 *outTrackIndex );
MusicPlayer.h
Returns a pointer to a sequence’s tempo track.
extern OSStatus MusicSequenceGetTempoTrack( MusicSequence inSequence, MusicTrack *outTrack );
MusicPlayer.h
Sets the sequence to work with a particular AUGraph instance.
extern OSStatus MusicSequenceSetAUGraph( MusicSequence inSequence, AUGraph inGraph );
MusicPlayer.h
Returns a pointer to the AUGraph which is being used by the given sequence.
extern OSStatus MusicSequenceGetAUGraph( MusicSequence inSequence, AUGraph *outGraph );
MusicPlayer.h
Sets the MIDI device being used by the given sequence.
extern OSStatus MusicSequenceSetMIDIEndpoint( MusicSequence inSequence, MIDIEndpointRef inEndpoint );
MusicPlayer.h
Parses a standard MIDI file and places its contents into a track within a sequence.
extern OSStatus MusicSequenceLoadSMF( MusicSequence inSequence, const FSSpec *inFileSpec );
MusicPlayer.h
Parses MIDI data out of memory and uses it to populate a sequence.
extern OSStatus MusicSequenceLoadSMFData( MusicSequence inSequence, CFDataRef inData );
MusicPlayer.h
Parses a standard MIDI file and places its contents into a sequence.
extern OSStatus MusicSequenceLoadSMFWithFlags( MusicSequence inSequence, FSRef *inFileRef, MusicSequenceLoadFlags inFlags );
Use MusicSequenceLoadSMFWithFlags to load data from a standard MIDI file into a sequence. The use of flags allows the resulting data to be customized. Passing 0 to inFlags makes this function act like MusicSequenceLoadSMF. The only other flag currently available is kMusicSequenceLoadSMF_ChannelsToTracks, described in “Other Constants.”
MusicPlayer.h
Parses MIDI data out of memory and uses it to populate a sequence.
extern OSStatus MusicSequenceLoadSMFDataWithFlags( MusicSequence inSequence, CFDataRef inData, MusicSequenceLoadFlags inFlags );
As with MusicSequenceLoadSMFWithFlags, passing 0 to inFlags causes this function to behave like MusicSequenceLoadSMFData. Passing kMusicSequenceLoadSMF_ChannelsToTracks causes the data to be formatted as described in “Other Constants.”
MusicPlayer.h
Saves the contents of a sequence to file, in standard MIDI format.
extern OSStatus MusicSequenceSaveSMF( MusicSequence inSequence, const FSSpec *inFileSpec, UInt16 inResolution );
MusicPlayer.h
Saves the contents of a sequence to memory, in standard MIDI format.
extern OSStatus MusicSequenceSaveSMFData( MusicSequence inSequence, CFDataRef *outData, UInt16 inResolution );
MusicPlayer.h
Reverses all of the events in a sequence.
extern OSStatus MusicSequenceReverse(MusicSequence inSequence);
MusicPlayer.h
Returns the number of seconds elapsed for the given number of beats in the sequence.
extern OSStatus MusicSequenceGetSecondsForBeats( MusicSequence inSequence, MusicTimeStamp inBeats, Float64* outSeconds );
This function uses the tempo track of the sequence to determine how much time passes until the provided number of beats has occurred.
MusicPlayer.h
Returns the number of beats that have elapsed after the provided number of seconds.
extern OSStatus MusicSequenceGetBeatsForSeconds( MusicSequence inSequence, Float64 inSeconds, MusicTimeStamp* outBeats );
This function uses the sequence’s tempo track to determine the number of beats that occur after the provided number of seconds.
MusicPlayer.h
Sets the callback that is used whenever an event of type MusicEventUserData occurs.
extern OSStatus MusicSequenceSetUserCallback( MusicSequence inSequence, MusicSequenceUserCallback inCallback, void* inClientData );
The callback registered using this function is of type MusicSequenceUserCallback. The pointer passed in via inClientData is passed on to the callback when it is called. Passing NULL to inCallback removes any callback previously registered. Note that if MusicPlayerSetTime is called, this callback will be called for any events between the previous playhead and the new playhead. See MusicSequenceUserCallback for more information.
MusicPlayer.h
These functions are provided to create and set up a music track.
Returns the sequence to which the track belongs.
extern OSStatus MusicTrackGetSequence( MusicTrack inTrack, MusicSequence *outSequence );
MusicPlayer.h
Sets the AUGraph node which the track controls.
extern OSStatus MusicTrackSetDestNode( MusicTrack inTrack, AUNode inNode );
MusicPlayer.h
Sets the track’s destination endpoint.
extern OSStatus MusicTrackSetDestMIDIEndpoint( MusicTrack inTrack, MIDIEndpointRef inEndpoint );
MusicPlayer.h
Returns a pointer towards the node that the track points to.
extern OSStatus MusicTrackGetDestNode( MusicTrack inTrack, AUNode *outNode );
MusicPlayer.h
Returns a pointer towards the endpoint that the track points to.
extern OSStatus MusicTrackGetDestMIDIEndpoint( MusicTrack inTrack, MIDIEndpointRef *outEndpoint );
MusicPlayer.h
These functions work with a Music Track’s properties.
Sets the track’s value for the given property.
extern OSStatus MusicTrackSetProperty( MusicTrack inTrack, UInt32 inPropertyID, void *inData, UInt32 inLength );
MusicPlayer.h
Returns the track’s value for a given property.
extern OSStatus MusicTrackGetProperty( MusicTrack inTrack, UInt32 inPropertyID, void *outData, UInt32 *ioLength );
MusicPlayer.h
These function place events on the various tracks within a sequence.
Creates a new MIDI note event.
extern OSStatus MusicTrackNewMIDINoteEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const MIDINoteMessage *inMessage );
This places a MIDINoteMessage instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new MIDI channel event.
extern OSStatus MusicTrackNewMIDIChannelEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const MIDIChannelMessage *inMessage );
This places a MIDIChannelMessage instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new MIDI raw data event.
extern OSStatus MusicTrackNewMIDIRawDataEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const MIDIRawData *inRawData );
This places a MIDIRawData instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new MIDI meta event.
extern OSStatus MusicTrackNewMetaEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const MIDIMetaEvent *inMetaEvent );
This places a MIDIMetaEvent instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new extended note event.
extern OSStatus MusicTrackNewExtendedNoteEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const ExtendedNoteOnEvent *inInfo);
This places an ExtendedNoteOnEvent instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new extended control event.
extern OSStatus MusicTrackNewExtendedControlEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const ExtendedControlEvent *inInfo);
This places an ExtendedControlEvent instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new parameter event.
extern OSStatus MusicTrackNewParameterEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const ParameterEvent *inInfo );
This places a ParameterEvent instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
Creates a new extended tempo event.
extern OSStatus MusicTrackNewExtendedTempoEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, Float64 inBPM );
This places an ExtendedTempoEvent instance on inTrack, to be played at inTimeStamp.
Creates a new user event.
extern OSStatus MusicTrackNewUserEvent( MusicTrack inTrack, MusicTimeStamp inTimeStamp, const MusicEventUserData* inUserData );
This places a MusicEventUserData instance on inTrack, to be played at inTimeStamp.
MusicPlayer.h
These functions allow you to arrange events within a track, create new tracks from events, and edit groups of events.
Moves the events from the given range to a new place.
extern OSStatus MusicTrackMoveEvents( MusicTrack inTrack, MusicTimeStamp inStartTime, MusicTimeStamp inEndTime, MusicTimeStamp inMoveTime );
MusicPlayer.h
Creates a new music track from the events in the given range.
extern OSStatus NewMusicTrackFrom( MusicTrack inSourceTrack, MusicTimeStamp inSourceStartTime, MusicTimeStamp inSourceEndTime, MusicTrack *outNewTrack );
MusicPlayer.h
Removes the events in the given range.
extern OSStatus MusicTrackClear( MusicTrack inTrack, MusicTimeStamp inStartTime, MusicTimeStamp inEndTime );
MusicPlayer.h
Removes the events in the given range, and moves those behind the range up.
extern OSStatus MusicTrackCut( MusicTrack inTrack, MusicTimeStamp inStartTime, MusicTimeStamp inEndTime );
MusicPlayer.h
Inserts the selected range at the destination insertion time, moving all events after the insertion point back to make room for the inserted range.
extern OSStatus MusicTrackCopyInsert( MusicTrack inSourceTrack, MusicTimeStamp inSourceStartTime, MusicTimeStamp inSourceEndTime, MusicTrack inDestTrack, MusicTimeStamp inDestInsertTime );
MusicPlayer.h
Merges the selected range of events with the existing events in the track.
extern OSStatus MusicTrackMerge( MusicTrack inSourceTrack, MusicTimeStamp inSourceStartTime, MusicTimeStamp inSourceEndTime, MusicTrack inDestTrack, MusicTimeStamp inDestInsertTime );
MusicPlayer.h
These functions work with Music Event Iterator instances.
Creates a new iterator for a track.
extern OSStatus NewMusicEventIterator( MusicTrack inTrack, MusicEventIterator *outIterator );
MusicPlayer.h
Destroys an iterator.
extern OSStatus DisposeMusicEventIterator(MusicEventIterator inIterator);
MusicPlayer.h
Moves the iterator to the event closest to the supplied time stamp.
extern OSStatus MusicEventIteratorSeek( MusicEventIterator inIterator, MusicTimeStamp inTimeStamp );
MusicPlayer.h
Moves the iterator to the next event in the track.
extern OSStatus MusicEventIteratorNextEvent( MusicEventIterator inIterator );
MusicPlayer.h
Moves the iterator to the previous event in the track.
extern OSStatus MusicEventIteratorPreviousEvent( MusicEventIterator inIterator );
MusicPlayer.h
Returns information about the event currently iterated upon.
extern OSStatus MusicEventIteratorGetEventInfo( MusicEventIterator inIterator, MusicTimeStamp *outTimeStamp, MusicEventType *outEventType, const void* *outEventData, UInt32 *outEventDataSize );
MusicPlayer.h
Sets the type and data for an event.
extern OSStatus MusicEventIteratorSetEventInfo( MusicEventIterator inIterator, MusicEventType inEventType, const void *inEventData );
MusicPlayer.h
Removes the event from the track.
extern OSStatus MusicEventIteratorDeleteEvent( MusicEventIterator inIterator );
MusicPlayer.h
Sets the time at which an event should occur.
extern OSStatus MusicEventIteratorSetEventTime( MusicEventIterator inIterator, MusicTimeStamp inTimeStamp );
MusicPlayer.h
Returns a boolean signifying if there is an event before the currently iterated event.
extern OSStatus MusicEventIteratorHasPreviousEvent( MusicEventIterator inIterator, Boolean *outHasPreviousEvent );
MusicPlayer.h
Returns a boolean signifying if there is an event after the currently iterated event.
extern OSStatus MusicEventIteratorHasNextEvent( MusicEventIterator inIterator, Boolean *outHasNextEvent );
MusicPlayer.h
Returns a boolean indicating whether the iterator currently points to an event.
extern OSStatus MusicEventIteratorHasCurrentEvent( MusicEventIterator inIterator, Boolean *outHasCurrentEvent );
MusicPlayer.h
The callback used whenever a user event occurs in an event track.
typedef CALLBACK_API_C(void,MusicSequenceUserCallback)( void *inClientData, MusicSequence inSequence, MusicTrack inTrack, MusicTimeStamp inEventTime, const MusicEventUserData *inEventData, MusicTimeStamp inStartSliceBeat, MusicTimeStamp inEndSliceBeat );
These values are returned when errors occur.
kAudioToolboxErr_TrackIndexError = -10859 |
kAudioToolboxErr_TrackNotFound = -10858 |
kAudioToolboxErr_EndOfTrack = -10857 |
kAudioToolboxErr_StartOfTrack = -10856 |
kAudioToolboxErr_IllegalTrackDestination = -10855 |
kAudioToolboxErr_NoSequence = -10854 |
kAudioToolboxErr_InvalidEventType = -10853 |
kAudioToolboxErr_InvalidPlayerState = -10852 |
© 2008 Apple Inc. All Rights Reserved. (Last updated: 2008-10-15)