iOS Audio and Video (1) AVFoundation core classes

iOS Audio and Video (2) AVFoundation video capture

iOS Audio and Video (3) AVFoundation playback and recording

iOS Audio and Video (43) AVFoundation Audio Session

iOS Audio Queue Services for AVFoundation

iOS Audio and Video (45) HTTPS self-signed certificates implement side play

iOS Audio and Video (46) Offline and online speech recognition solutions

1. Audio Queue Services

Audio Queue Services is a C programming interface in the Audio Toolbox framework of Core Audio.

  • So what is Audio Queue Services? Audio Queue Services provides a straightforward, low-overhead way to record and play audio in iOS and Mac OS X. It lets you record and play audio in any of the following formats:
  1. Linear PCM.
  2. Any compressed format natively supported on the Apple platform you are developing for.
  3. Any other format for which the user has installed a codec.

Audio Queue Services is a high-level interface. It allows your application to use hardware recording and playback devices, such as microphones and speakers, without knowledge of the hardware interface. It also lets you use sophisticated codecs without understanding how they work. At the same time, Audio Queue Services supports some advanced features. It provides fine-grained timing control to support scheduled playback and synchronization: you can use it to synchronize playback across multiple audio queues and to synchronize audio with video.

Note: Audio Queue Services provides features similar to those previously provided by the Sound Manager in Mac OS X. The Sound Manager is not supported in Mac OS X v10.5 and cannot be used in 64-bit applications. Apple recommends Audio Queue Services for all new development and as a replacement for the Sound Manager in existing Mac OS X applications.

Audio Queue Services is a pure C interface that can be used in Cocoa applications as well as in Mac OS X command-line tools. To keep the focus on Audio Queue Services, the code examples in this article are sometimes simplified by using C++ classes from the Core Audio SDK. However, neither the SDK nor the C++ language is required to use Audio Queue Services.

1.1 Audio Queue

  • What is an audio queue?

An audio queue is a software object used for recording or playing audio in iOS or Mac OS X. It is represented by the AudioQueueRef opaque data type, declared in the AudioQueue.h header file. You can use audio queues together with other Core Audio interfaces, and a relatively small amount of custom code, to create a complete digital audio recording or playback solution in your application. An audio queue does the following work:

  1. Connecting to audio hardware
  2. Managing memory
  3. Employing codecs, as needed, for compressed audio formats
  4. Mediating recording or playback

1.1.1 Audio queue Architecture

All audio queues have the same general structure, consisting of the following parts:

  • A set of audio queue buffers, each of which is a temporary repository for some audio data
  • A buffer queue, an ordered list of audio queue buffers
  • An audio queue callback function that you write

The architecture varies depending on whether the audio queue is used for recording or playback. The difference is in how the audio queue connects its inputs and outputs, and the role of the callback function.

1.1.2 Audio queue for recording

Figure 1-1 shows the structure of a recording audio queue, created with the AudioQueueNewInput function.

AudioQueueNewInput creates a new recording audio queue object. It is declared as follows:

func AudioQueueNewInput(_ inFormat: UnsafePointer<AudioStreamBasicDescription>,
                        _ inCallbackProc: AudioQueueInputCallback,
                        _ inUserData: UnsafeMutableRawPointer?,
                        _ inCallbackRunLoop: CFRunLoop?,
                        _ inCallbackRunLoopMode: CFString?,
                        _ inFlags: UInt32,
                        _ outAQ: UnsafeMutablePointer<AudioQueueRef?>) -> OSStatus

The parameters of AudioQueueNewInput are:

  1. inFormat: The compressed or uncompressed audio data format to record. When recording to linear PCM, only interleaved formats are supported.
  2. inCallbackProc: The callback function for the recording audio queue. The audio queue calls this function when it has finished filling a buffer. See AudioQueueInputCallback.
  3. inUserData: A custom data structure for use with the callback function.
  4. inCallbackRunLoop: The run loop on which to invoke the callback function pointed to by the inCallbackProc parameter. If you specify NULL, the callback is invoked on one of the audio queue's internal threads.
  5. inCallbackRunLoopMode: The run loop mode in which to invoke the callback function specified in the inCallbackProc parameter. Typically, you pass kCFRunLoopCommonModes or use NULL, which is equivalent. You can choose to create your own thread with its own run loop. For more about run loops, see Run Loops and CFRunLoop.
  6. inFlags: Reserved for future use. Must be 0.
  7. outAQ: On output, the newly created recording audio queue.

There is also the AudioQueueInputCallback type: a callback invoked by the system when a recording audio queue has finished filling an audio queue buffer.

You specify a recording audio queue callback when calling the AudioQueueNewInput(_:_:_:_:_:_:_:) function. The callback is invoked each time its recording audio queue fills an audio queue buffer with fresh audio data. Typically, the callback writes the data to a file or other buffer and then re-enqueues the audio queue buffer to receive more data.

typealias AudioQueueInputCallback = (UnsafeMutableRawPointer?, AudioQueueRef, AudioQueueBufferRef, UnsafePointer<AudioTimeStamp>, UInt32, UnsafePointer<AudioStreamPacketDescription>?) -> Void

Parameters are described as follows:

  1. inUserData: The custom data you specified in the inUserData argument of the AudioQueueNewInput(_:_:_:_:_:_:_:) function. Typically, this includes format and state information for the audio queue.
  2. inAQ: The recording audio queue that invoked the callback.
  3. inBuffer: An audio queue buffer, freshly filled by the recording audio queue, containing the new audio data the callback is to write.
  4. inStartTime: The sample time at which the audio queue buffer starts. This parameter is not needed for basic recording.
  5. inNumberPacketDescriptions: The number of packets of audio data in the inBuffer parameter. The audio queue sets this parameter to NULL when recording in a constant bit rate (CBR) format.
  6. inPacketDescs: For compressed formats that require packet descriptions, the set of packet descriptions the encoder produced for the audio data in the inBuffer parameter. The audio queue sets this parameter to NULL when recording in a CBR format.
  • The input side of a recording audio queue is typically connected to external audio hardware, such as a microphone. In iOS, for example, the audio comes from the device the user has connected, such as the built-in microphone or a headset microphone. By default on Mac OS X, the audio comes from the system's default audio input device, set by the user in System Preferences.
  • The output side of a recording audio queue uses a callback function that you write. When recording to disk, the callback writes each buffer of new audio data it receives from its audio queue to an audio file. Recording audio queues can be used in other ways, though. A callback could, for example, provide audio data directly to an application rather than writing it to disk.
  • To learn more about this callback, see The Recording Audio Queue Callback Function.
  • Every audio queue, whether for recording or playback, has one or more audio queue buffers. These buffers are arranged in a specific order called the buffer queue. In the figure, the audio queue buffers are numbered according to the order in which they are filled, which is also the order in which they are handed to the callback.

1.1.3 Audio queue for playback

Figure 1-2 shows the structure of the playback audio queue (created using the AudioQueueNewOutput function).

In a playback audio queue, the callback is on the input side. The callback is responsible for obtaining audio data from disk (or some other source) and handing it off to the audio queue. Playback callbacks also tell their audio queues to stop when there is no more data to play.

The output side of a playback audio queue is usually connected to external audio hardware, such as speakers. In iOS, the audio goes to the device of the user's choice, such as the receiver or a headset. By default on Mac OS X, the audio goes to the system's default audio output device, as set by the user in System Preferences.

Next, take a look at the AudioQueueNewOutput function, which creates a new playback audio queue object. The function is defined as follows:

func AudioQueueNewOutput(_ inFormat: UnsafePointer<AudioStreamBasicDescription>,
                         _ inCallbackProc: AudioQueueOutputCallback,
                         _ inUserData: UnsafeMutableRawPointer?,
                         _ inCallbackRunLoop: CFRunLoop?,
                         _ inCallbackRunLoopMode: CFString?,
                         _ inFlags: UInt32,
                         _ outAQ: UnsafeMutablePointer<AudioQueueRef?>) -> OSStatus

Its parameters are explained as follows:

  1. inFormat: The compressed or uncompressed audio data format to play. When playing linear PCM, only interleaved formats are supported.
  2. inCallbackProc: The callback function for the playback audio queue. The audio queue invokes the callback once it has finished consuming a buffer. See AudioQueueOutputCallback.
  3. inUserData: A custom data structure for use with the callback function.
  4. inCallbackRunLoop: The run loop on which to invoke the callback function pointed to by the inCallbackProc parameter. If you specify NULL, the callback is invoked on one of the audio queue's internal threads.
  5. inCallbackRunLoopMode: The run loop mode in which to invoke the callback function specified in the inCallbackProc parameter. Typically, you pass kCFRunLoopCommonModes or use NULL, which is equivalent. You can choose to create your own thread with its own run loop. For more about run loops, see Run Loops and CFRunLoop.
  6. inFlags: Reserved for future use. Must be 0.
  7. outAQ: On output, the newly created playback audio queue object.
  • The callback for a playback audio queue. The system invokes it when an audio queue buffer becomes reusable, that is, once the audio queue has finished consuming its contents. AudioQueueOutputCallback is defined as follows:

typealias AudioQueueOutputCallback = (UnsafeMutableRawPointer?, AudioQueueRef, AudioQueueBufferRef) -> Void

Parameters are described as follows:

  1. inUserData: The custom data you specified in the inUserData argument of the AudioQueueNewOutput(_:_:_:_:_:_:_:) function. Typically, this includes data format and state information for the audio queue.
  2. inAQ: The playback audio queue that invoked the callback.
  3. inBuffer: An audio queue buffer, newly available for filling because the playback audio queue has consumed its contents.
  • Suppose you name your callback function MyAudioQueueOutputCallback. The playback audio queue invokes it when it has retrieved the data from an audio queue buffer and the buffer is available for reuse. The newly available buffer is passed to the callback in the inBuffer argument. Typically, you write this callback to:
  1. Fill the newly available buffer with the next set of audio data from a file or other buffer.
  2. Re-enqueue the buffer for playback. To re-enqueue a buffer, use the AudioQueueEnqueueBuffer(_:_:_:_:) or AudioQueueEnqueueBufferWithParameters(_:_:_:_:_:_:_:_:_:_:) function.

To associate this callback with the playback audio queue, provide a reference to the callback when you create the audio queue: see the inCallbackProc parameter of the AudioQueueNewOutput(_:_:_:_:_:_:_:) function. When the system invokes this callback, you cannot assume that the audio data from the newly available buffer has already been played. For a discussion of how to determine when playback has finished, see the description of the AudioQueuePropertyListenerProc callback function.

1.1.4 Audio queue buffer

An audio queue buffer is a data structure of type AudioQueueBuffer, declared in the AudioQueue.h header file:

typedef struct AudioQueueBuffer {
    const UInt32   mAudioDataBytesCapacity;
    void *const    mAudioData;
    UInt32         mAudioDataByteSize;
    void           *mUserData;
} AudioQueueBuffer;
typedef AudioQueueBuffer *AudioQueueBufferRef;

The mAudioData field highlighted in the code above points to the buffer itself: a block of memory that serves as a container for transient chunks of audio data being played or recorded. The information in the other fields helps the audio queue manage the buffer.

An audio queue can use any number of buffers; your application specifies how many. A typical number is three. This allows one buffer to be busy being written to disk, say, while another is being filled with fresh audio data. The third buffer is then available to compensate for delays such as disk I/O latency. Figure 1-3 illustrates this.

The audio queue performs memory management for its buffers:

  • When you call the AudioQueueAllocateBuffer function, the audio queue allocates a buffer.
  • When you release the audio queue by calling the AudioQueueDispose function, the queue releases its buffer.

Once allocated by the AudioQueueAllocateBuffer function, an audio queue buffer's pointer and capacity cannot be changed. The buffer's mAudioDataByteSize field, which indicates the amount of valid data, is initially set to 0. The AudioQueueAllocateBuffer function is defined as follows:

func AudioQueueAllocateBuffer(_ inAQ: AudioQueueRef,
                              _ inBufferByteSize: UInt32,
                              _ outBuffer: UnsafeMutablePointer<AudioQueueBufferRef?>) -> OSStatus

Its parameters are:

  1. inAQ: The audio queue in which to allocate the buffer.
  2. inBufferByteSize: The desired capacity of the new buffer, in bytes. The appropriate capacity depends on how you will process the data and on the audio data format.
  3. outBuffer: On output, a pointer to the newly allocated audio queue buffer.

The AudioQueueDispose function disposes of an audio queue along with all of its resources, including its buffers. After calling this function, you can no longer interact with the audio queue, and the audio queue no longer invokes any callbacks. The AudioQueueDispose function is defined as follows:

func AudioQueueDispose(_ inAQ: AudioQueueRef, 
                     _ inImmediate: Bool) -> OSStatus

Its parameters are:

  1. inAQ: The audio queue to dispose of.
  2. inImmediate: If you pass true, the audio queue is disposed of immediately (that is, synchronously). If you pass false, disposal does not occur until all enqueued buffers have been processed (that is, asynchronously).

Using audio queue buffers improves the robustness of the recording and playback capabilities you add to your application. It also helps you optimize resource usage.

For a complete description of the AudioQueueBuffer data structure, see the AudioQueue Services Resources.

1.1.5 Audio Queue Services

  • The audio queue service allows you to record and play audio in linear PCM, compressed formats such as Apple Lossless and AAC, and other user-installed codecs. The audio queue service also supports scheduled playback and synchronization of multiple audio queues as well as audio and video synchronization.
  • An audio queue is a software object used to record or play audio. An audio queue does the following:
  1. Connecting to audio hardware
  2. Managing memory
  3. Employing codecs, as needed, for compressed audio formats
  4. Mediating recording or playback

Note: Audio Queue Services provides functionality similar to that previously provided by the Sound Manager on macOS, and adds features such as synchronization. The Sound Manager is not supported in OS X v10.5 and cannot be used in 64-bit applications. Audio Queue Services is recommended for all new development and as a replacement for the Sound Manager in existing Mac applications.

  • The following interfaces are provided to control the audio queue:
// Start playing or recording audio.
func AudioQueueStart(AudioQueueRef, UnsafePointer<AudioTimeStamp>?) -> OSStatus

// Decodes the queued buffer in preparation for playback.
func AudioQueuePrime(AudioQueueRef, UInt32, UnsafeMutablePointer<UInt32>?) -> OSStatus

// Resets the decoder state of the audio queue.
func AudioQueueFlush(AudioQueueRef) -> OSStatus

// Stop playing or recording audio.
func AudioQueueStop(AudioQueueRef, Bool) -> OSStatus

// Pause audio playback or recording.
func AudioQueuePause(AudioQueueRef) -> OSStatus

// Reset the audio queue.
func AudioQueueReset(AudioQueueRef) -> OSStatus

  • The following interfaces are provided for creating and disposing of audio queues:
// Create a new playback audio queue object.
func AudioQueueNewOutput(UnsafePointer<AudioStreamBasicDescription>, AudioQueueOutputCallback, UnsafeMutableRawPointer?, CFRunLoop?, CFString?, UInt32, UnsafeMutablePointer<AudioQueueRef?>) -> OSStatus

// Create a new recording audio queue object.
func AudioQueueNewInput(UnsafePointer<AudioStreamBasicDescription>, AudioQueueInputCallback, UnsafeMutableRawPointer?, CFRunLoop?, CFString?, UInt32, UnsafeMutablePointer<AudioQueueRef?>) -> OSStatus

// Dispose of an audio queue.
func AudioQueueDispose(AudioQueueRef, Bool) -> OSStatus
  • Interfaces for working with audio queue buffers are provided as follows:
// Ask an audio queue object to allocate an audio queue buffer.
func AudioQueueAllocateBuffer(AudioQueueRef, UInt32, UnsafeMutablePointer<AudioQueueBufferRef?>) -> OSStatus

// Ask an audio queue object to allocate an audio queue buffer with space for packet descriptions.
func AudioQueueAllocateBufferWithPacketDescriptions(AudioQueueRef, UInt32, UInt32, UnsafeMutablePointer<AudioQueueBufferRef?>) -> OSStatus

// Ask an audio queue to dispose of an audio queue buffer.
func AudioQueueFreeBuffer(AudioQueueRef, AudioQueueBufferRef) -> OSStatus

// Add a buffer to the buffer queue of a recording or playback audio queue.
func AudioQueueEnqueueBuffer(AudioQueueRef, AudioQueueBufferRef, UInt32, UnsafePointer<AudioStreamPacketDescription>?) -> OSStatus

// Add a buffer to the buffer queue of a playback audio queue object, specifying a start time and other settings.
func AudioQueueEnqueueBufferWithParameters(AudioQueueRef, AudioQueueBufferRef, UInt32, UnsafePointer<AudioStreamPacketDescription>?, UInt32, UInt32, UInt32, UnsafePointer<AudioQueueParameterEvent>?, UnsafePointer<AudioTimeStamp>?, UnsafeMutablePointer<AudioTimeStamp>?) -> OSStatus
  • Interfaces for manipulating audio queue parameters are provided as follows:
// Get the audio queue parameter value.
func AudioQueueGetParameter(AudioQueueRef, AudioQueueParameterID, UnsafeMutablePointer<AudioQueueParameterValue>) -> OSStatus

// Set the audio queue parameter values.
func AudioQueueSetParameter(AudioQueueRef, AudioQueueParameterID, AudioQueueParameterValue) -> OSStatus

  • The following interfaces are provided to manipulate audio queue properties:
// Get an audio queue property value.
func AudioQueueGetProperty(AudioQueueRef, AudioQueuePropertyID, UnsafeMutableRawPointer, UnsafeMutablePointer<UInt32>) -> OSStatus

// Sets an audio queue property value.
func AudioQueueSetProperty(AudioQueueRef, AudioQueuePropertyID, UnsafeRawPointer, UInt32) -> OSStatus

// Get the size of an audio queue property value.
func AudioQueueGetPropertySize(AudioQueueRef, AudioQueuePropertyID, UnsafeMutablePointer<UInt32>) -> OSStatus

// Add the property listener callback to the audio queue.
func AudioQueueAddPropertyListener(AudioQueueRef, AudioQueuePropertyID, AudioQueuePropertyListenerProc, UnsafeMutableRawPointer?) -> OSStatus

// Remove a property listener callback from the audio queue.
func AudioQueueRemovePropertyListener(AudioQueueRef, AudioQueuePropertyID, AudioQueuePropertyListenerProc, UnsafeMutableRawPointer?) -> OSStatus

  • The following interfaces deal with timing:
// Create a timeline object for an audio queue.
func AudioQueueCreateTimeline(AudioQueueRef, UnsafeMutablePointer<AudioQueueTimelineRef?>) -> OSStatus

// Dispose of an audio queue's timeline object.
func AudioQueueDisposeTimeline(AudioQueueRef, AudioQueueTimelineRef) -> OSStatus

// Get the current time of the audio hardware device associated with an audio queue.
func AudioQueueDeviceGetCurrentTime(AudioQueueRef, UnsafeMutablePointer<AudioTimeStamp>) -> OSStatus

// Get the start time of the audio hardware device closest to the requested start time.
func AudioQueueDeviceGetNearestStartTime(AudioQueueRef, UnsafeMutablePointer<AudioTimeStamp>, UInt32) -> OSStatus

// Convert the time of an audio queue's associated audio hardware device from one time base representation to another.
func AudioQueueDeviceTranslateTime(AudioQueueRef, UnsafePointer<AudioTimeStamp>, UnsafeMutablePointer<AudioTimeStamp>) -> OSStatus

// Get the current audio queue time.
func AudioQueueGetCurrentTime(AudioQueueRef, AudioQueueTimelineRef?, UnsafeMutablePointer<AudioTimeStamp>?, UnsafeMutablePointer<DarwinBoolean>?) -> OSStatus
  • Provides the following interfaces for performing offline rendering:
// Set the rendering mode and audio format of a playback audio queue.
func AudioQueueSetOfflineRenderFormat(AudioQueueRef, UnsafePointer<AudioStreamBasicDescription>?, UnsafePointer<AudioChannelLayout>?) -> OSStatus

// Use a playback audio queue to export audio to a buffer instead of to a device.
func AudioQueueOfflineRender(AudioQueueRef, UnsafePointer<AudioTimeStamp>, AudioQueueBufferRef, UInt32) -> OSStatus
  • Callback functions are provided as follows:
// Called by the system when a recording audio queue has finished filling an audio queue buffer.
typealias AudioQueueInputCallback

// Called by the system when an audio queue buffer is available for reuse.
typealias AudioQueueOutputCallback

// Called by the system when a specified audio queue property changes value.
typealias AudioQueuePropertyListenerProc

1.1.6 The Buffer Queue and Enqueuing

The buffer queue gives the audio queue (and Audio Queue Services itself) its name. You met the buffer queue, an ordered list of buffers, in the audio queue architecture section. Here you'll learn how the audio queue object and your callback function manage the buffer queue during recording or playback. In particular, you'll learn about enqueuing: adding an audio queue buffer to a buffer queue. Whether you implement recording or playback, enqueuing is a task performed by your callback.

1.1.6.1 Recording Process

During recording, one audio queue buffer is being filled with audio data acquired from an input device, such as a microphone. The remaining buffers in the buffer queue line up behind the current buffer, waiting to be filled with audio data in turn.

The audio queue passes the filled buffer to the callback in the order in which it gets the audio data. Figure 1-3 illustrates recording when using an audio queue.

  • In Step 1 of Figure 1-3, start recording. The audio queue populates the buffer with the data it gets.

  • In Step 2, the first buffer is filled. The audio queue invokes the callback, handing it the full buffer (buffer 1). The callback (Step 3) writes the contents of the buffer to the audio file. At the same time, the audio queue fills another buffer (buffer 2) with newly acquired data.

  • In step 4, the callback queues the buffer it just wrote to disk (buffer 1) so that it fills up again. The audio queue invokes the callback again (step 5) and hands it the next full buffer (buffer 2). This cyclic steady state continues until the user stops recording.

1.1.6.2 Playback Process

When playing, an audio queue buffer is sent to an output device, such as a speaker. The remaining buffers in the buffer queue are queued after the current buffers, waiting to be played in turn.

The audio queue hands played buffers back to the callback in the order in which they were played. The callback reads new audio data into each buffer and re-enqueues it. Figure 1-4 illustrates how playback works when using an audio queue.

In Step 1 of Figure 1-4, the application primes the playback audio queue: it invokes the callback once for each audio queue buffer, filling the buffers and adding them to the buffer queue. Priming ensures that playback can start immediately when the application calls the AudioQueueStart function (Step 2).

In step 3, the audio queue sends the first buffer (buffer 1) to the output.

Once the first buffer has been played, the playback audio queue enters a looping steady state. The audio queue starts playing the next buffer (buffer 2, Step 4) and invokes the callback (Step 5), handing it the buffer that was just played (buffer 1).

1.1.6.3 Controlling the Playback process

Audio queue buffers are always played in the order in which they were enqueued. However, Audio Queue Services gives you some control over the playback process through the AudioQueueEnqueueBufferWithParameters function. This function lets you:

  • Set a precise playback time for a buffer. This lets you support synchronization.
  • Trim frames at the start or end of an audio queue buffer. This lets you remove leading or trailing silence.
  • Set playback gain at the granularity of a buffer.

You can use this function to exert some control over the buffer queue: you assign audio queue settings that are, in effect, carried by an audio queue buffer, so a setting takes effect when that buffer starts playing. This function works only for playback; recording audio queues accept no parameters and do not support variable bit rate (VBR) formats. The complete definition of the AudioQueueEnqueueBufferWithParameters function is as follows:

func AudioQueueEnqueueBufferWithParameters(_ inAQ: AudioQueueRef,
                                           _ inBuffer: AudioQueueBufferRef,
                                           _ inNumPacketDescs: UInt32,
                                           _ inPacketDescs: UnsafePointer<AudioStreamPacketDescription>?,
                                           _ inTrimFramesAtStart: UInt32,
                                           _ inTrimFramesAtEnd: UInt32,
                                           _ inNumParamValues: UInt32,
                                           _ inParamValues: UnsafePointer<AudioQueueParameterEvent>?,
                                           _ inStartTime: UnsafePointer<AudioTimeStamp>?,
                                           _ outActualStartTime: UnsafeMutablePointer<AudioTimeStamp>?) -> OSStatus

Function parameters are explained as follows:

  1. inAQ: The audio queue that owns the audio queue buffer.
  2. inBuffer: The audio queue buffer to add to the buffer queue. Before calling this function, the buffer must contain the audio data to play.
  3. inNumPacketDescs: The number of packets of audio data in the inBuffer parameter. Use a value of 0 in either of these situations: (1) when playing a constant bit rate (CBR) format; (2) when the buffer being re-enqueued was allocated with the AudioQueueAllocateBufferWithPacketDescriptions(_:_:_:_:) function. In the latter case, your callback should describe the buffer's packets in the buffer's mPacketDescriptions and mPacketDescriptionCount fields.
  4. inPacketDescs: An array of packet descriptions. Use NULL in either of these situations: (1) when playing a constant bit rate (CBR) format; (2) when the buffer being re-enqueued was allocated with the AudioQueueAllocateBufferWithPacketDescriptions(_:_:_:_:) function. In the latter case, your callback should describe the buffer's packets in the buffer's mPacketDescriptions and mPacketDescriptionCount fields.
  5. inTrimFramesAtStart: The number of frames to skip at the start of the buffer.
  6. inTrimFramesAtEnd: The number of frames to skip at the end of the buffer.
  7. inNumParamValues: The number of audio queue parameter values pointed to by the inParamValues parameter. Use 0 if you are not setting any parameters.
  8. inParamValues: An array of parameters to apply to the audio queue buffer. (In OS X v10.5 there is only one audio queue parameter, kAudioQueueParam_Volume.) Use NULL if you are not setting any buffer parameters. Assign parameter values before playback; you cannot change a parameter value while a buffer is playing. Changes to audio queue buffer parameters take effect when the buffer starts playing.
  9. inStartTime: The desired start time for playing the buffer. To specify a time relative to when the audio queue started, use the mSampleTime field of the AudioTimeStamp structure. Use NULL to indicate that the buffer should play as soon as possible, which may be after previously enqueued buffers finish playing. Buffers play in the order they are enqueued (first in, first out). If multiple buffers are enqueued, their start times must be in ascending order or NULL; otherwise, an error occurs. This parameter specifies when the audio data begins playing, disregarding any trim frames specified in the inTrimFramesAtStart parameter.
  10. outActualStartTime: On output, the time at which the buffer will actually start playing.

1.1.7 Using codecs and audio data formats

Audio Queue Services employs codecs (audio data encoding/decoding components) as needed to convert between audio formats. Your recording or playback application can use any audio format for which a codec is installed. You don't need to write custom code to handle different audio formats; in particular, your callback does not need to know about the data format.

Here's how it works. Each audio queue has an audio data format, represented by an AudioStreamBasicDescription structure. When you specify the format, in the structure's mFormatID field, the audio queue uses the appropriate codec. You then specify the sample rate and channel count, and that's it. You'll see examples of setting the audio data format in the sections on recording and playing audio.

The recording audio queue uses an installed codec, as shown in Figure 1-5.

In Step 1 of Figure 1-5, the application tells the audio queue to start recording and tells it the data format to use. In Step 2, the audio queue takes the new audio data and converts it using the codec according to the specified format. The audio queue then invokes the callback, handing it a buffer containing the appropriately formatted audio data. In step 3, the callback writes the formatted audio data to disk. Again, your callback does not need to know the data format.

The playback audio queue uses an installed codec, as shown in Figure 1-6.

In Step 1 of Figure 1-6, the application tells the audio queue to start playing and tells it the data format contained in the audio file to play. In Step 2, the audio queue invokes your callback, which reads data from the audio file. The callback function passes the data in its original format to the audio queue. In Step 3, the audio queue uses the appropriate codec and then sends the audio to the destination.

Audio queues can use any installed codec, whether native to Mac OS X or provided by a third party. To specify a codec, supply its four-character code ID to the audio queue in an AudioStreamBasicDescription structure. You'll see an example of this in the recording section.

Mac OS X includes a wide range of audio codecs, as listed in the format IDs enumeration in the CoreAudioTypes.h header file and described in the Core Audio Data Types Reference. You can determine which codecs are available on a system using the interfaces in the AudioFormat.h header file of the Audio Toolbox framework. You can also use the Fiendishthngs application to display the codecs on a system; its sample code is available at http://developer.apple.com/samplecode/Fiendishthngs/.

1.1.8 Audio queue control and status

Audio queues have a life cycle between creation and disposal. Your application manages this life cycle, and controls the audio queue's state, using six functions declared in the AudioQueue.h header file:

  • Start (AudioQueueStart). Call to initiate recording or playback.
  • Prime (AudioQueuePrime). For playback, call before AudioQueueStart to ensure that the audio queue has decoded data available to play immediately. This function has no relevance for recording.
  • Stop (AudioQueueStop). Call to reset the audio queue (see the AudioQueueReset description below) and then stop recording or playback. A playback audio queue callback calls this function when there is no more data to play.
  • Pause (AudioQueuePause). Call to pause recording or playback without affecting the buffers or resetting the audio queue. To resume, call the AudioQueueStart function.
  • Flush (AudioQueueFlush). Call after enqueuing the last audio queue buffer to ensure that all buffered data, and any audio data still being processed, gets recorded or played.
  • Reset (AudioQueueReset). Call to immediately silence an audio queue, remove all buffers from previously scheduled use, and reset all decoder and DSP state.

You can use the AudioQueueStop function in synchronous or asynchronous mode:

  1. A synchronous stop happens immediately, disregarding previously buffered audio data.
  2. An asynchronous stop happens after all enqueued buffers have been played or recorded.

For a complete description of these functions, see the Audio Queue Services resource, which includes more information on synchronous and asynchronous stopping Audio queues.

1.1.9 Audio Queue Parameters

Audio queues have adjustable settings called parameters. Each parameter has an enumerated constant as its key and a floating-point number as its value. Parameters are typically used in playback, not recording.

In Mac OS X v10.5, the only audio queue parameter available is gain. You set or retrieve its value using the kAudioQueueParam_Volume constant; the value ranges from 0.0 (silence) to 1.0 (unity gain).

Your application can set audio queue parameters in two ways:

  • Per audio queue, using the AudioQueueSetParameter function. This lets you change a setting on the audio queue directly. Such changes take effect immediately.
  • Per audio queue buffer, using the AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are, in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the audio queue buffer begins playing.

In both cases, parameter settings for an audio queue remain in effect until you change them.

You can access the current parameter value of an audio queue at any time with the AudioQueueGetParameter function. For complete descriptions of the functions that get and set parameter values, see the Audio Queue Services reference.

Reference: the official Apple documentation at developer.apple.com/documentati…