Prerequisites for reading:

  • C Language Basics
  • Fundamentals of Audio and Video
  • Core Audio basic data structures
  • Audio Session

Audio Queue Services is Apple's recommended way to record and play audio on iOS and Mac OS X in a straightforward, low-overhead way. Unlike the higher-level APIs, it delivers audio frame data to a callback, allowing more detailed manipulation.

Usage Scenarios:

Compared with the higher-level APIs, Audio Queue Services hands you each frame of audio data directly, so you can process the frames however you need. However, it cannot perform more delicate signal processing such as echo cancellation, mixing, or noise reduction; for lower-level operations like those you need Audio Unit.

Overview

Audio Queue Services is a set of C-based interfaces in the Audio Toolbox framework of Core Audio.

Audio Queue Services is a set of high-level APIs. Not only does it let programs interact with audio hardware (microphones, speakers, etc.) without any knowledge of the hardware, it also lets us use complex codecs without understanding how they work.

Audio Queue Services also provides more fine-grained timing control to support scheduled playback and synchronization tasks. It can be used to synchronize multiple audio playback queues or between audio and video.

The following formats are supported

  • Linear PCM
  • Any natively supported compression format provided by Apple
  • Any format generated by the user using the codec

Note: Audio Queue Services is a set of pure C interfaces, so a basic understanding of C/C++ is required.

1. Overview of Audio Queues

On iOS and Mac OS X, an audio queue is a software object used for recording or playing audio, represented by the AudioQueueRef data type.

Roles

  • Connecting to audio hardware
  • Managing memory for the related components
  • Employing codecs as needed
  • Mediating recording and playback

1.1. Audio Queue architecture

  • A set of audio queue buffers, each node a temporary repository for audio data
  • The data in the queue is in strict order
  • The callback function

1.2. Recording

To record with an audio queue, create a recording (input) queue with AudioQueueNewInput.

The input side of a recording audio queue is usually an audio input device connected to the current device, such as the built-in microphone or an external device with a microphone. The output side is the callback function we define. If you are recording audio to a file, the callback writes the audio data it receives from the audio queue to the file. The recorded audio data can also be handed directly to the app, for example to implement simultaneous record-and-play.

Every audio queue, whether for recording or playback, has one or more buffers. All audio data is stored in a data structure called an audio queue buffer, which can be thought of as a node in the queue. As shown in the figure above, a specified number of buffers are placed into the audio queue in sequence, and they are eventually retrieved in sequence by the callback function.

1.3. Playback

To play with an audio queue, create a playback (output) queue with AudioQueueNewOutput.

For playback, the callback function sits on the input side of the audio queue. The callback fills buffers with data read from a local file or another audio source and enqueues them. The playback callback also tells the audio queue to stop when there is no more data to load.

The output end of the audio queue for playback is connected to audio output hardware, such as speakers or external audio devices with speaker function (e.g., headphones, stereos, etc.).

1.4. Audio queue buffers

AudioQueueBuffer is the structure used to store audio data in a queue.

typedef struct AudioQueueBuffer {
    const UInt32   mAudioDataBytesCapacity;
    void *const    mAudioData;
    UInt32         mAudioDataByteSize;
    void           *mUserData;
} AudioQueueBuffer;
typedef AudioQueueBuffer *AudioQueueBufferRef;
  • mAudioData: Pointer to the memory that holds the buffer's audio data
  • mAudioDataBytesCapacity: Maximum number of bytes the buffer can hold
  • mAudioDataByteSize: Number of bytes of audio data currently in the buffer
  • mUserData: Custom data the developer may attach

An audio queue can use any number of buffers, but three is generally recommended: too few means buffers are recycled too frequently, while too many increases the application's memory consumption. The third buffer acts as slack to compensate for occasional delays.

Because the Audio Queue interface is pure C, buffer memory must be managed manually.

  • When initializing the audio queue, allocate buffer memory with AudioQueueAllocateBuffer
  • When finished, reclaim all of the queue's memory with AudioQueueDispose

Careful memory management makes recording and playback more stable and keeps the app's resource usage in check.

1.5. Audio queue and queue operation

Audio queue: the queue object provided by Audio Queue Services

Audio queue buffer: a node in the audio queue that holds audio data

  • The recording process

When recording, one audio queue buffer at a time is filled with audio data captured from an input device such as the microphone. The remaining buffers wait in line behind the buffer currently being filled. On the output side, the callback receives the filled buffers from the audio queue in order, at roughly fixed intervals. It works as follows:

Figure 1: When recording begins, the audio queue is filled with the required audio data.

Figure 2: The first buffer is filled; the callback function takes buffer 1 and writes it to a file while buffer 2 is being filled.

Figure 3: In step 4, the callback function puts the exhausted buffer 1 back into the audio queue; in step 5 it retrieves the next filled buffer (buffer 2), writes it to the file, and re-enqueues it. The cycle repeats until recording stops.

  • Playback process

For playback, one audio queue buffer at a time is handed to the output device (such as a speaker). The remaining buffers line up behind it, waiting to be played. The callback refills exhausted buffers with data from the audio source, in sequence, and puts them back into the audio queue.

Figure 1: The application primes playback by filling the audio queue buffers one by one and enqueueing them. Playback begins as soon as the application calls AudioQueueStart.

Figure 2: The audio queue outputs the first audio data

Figure 3: The exhausted audio queue buffer is put back into the audio queue. Once the first buffer has played, the audio queue enters a steady-state loop: it starts playing buffer 2 (step 4), calls the callback to refill buffer 1 (step 5), and finally (step 6) the refilled buffer 1 is enqueued again. The loop continues until the audio queue stops.

  • Control the playback process

Audio queue buffers are always played in the order they are enqueued. However, AudioQueueEnqueueBufferWithParameters gives you some additional control when enqueueing:

A. Set the exact playback time of a buffer, for synchronization.

B. Trim frames at the beginning or end of a buffer, which lets you mute the start or end.

C. Apply playback parameters such as gain to the buffer.

This is explained in detail in the playback section below.

1.6. Callback functions

Once registered, the callback function is invoked repeatedly during recording or playback; how often depends on our setup. One important responsibility of the callback is to return used buffers to the audio queue by calling AudioQueueEnqueueBuffer.

1.6.1. Recorded callback functions
AudioQueueInputCallback (
    void                               *inUserData,
    AudioQueueRef                      inAQ,
    AudioQueueBufferRef                inBuffer,
    const AudioTimeStamp               *inStartTime,
    UInt32                             inNumberPacketDescriptions,
    const AudioStreamPacketDescription *inPacketDescs
);

The audio queue invokes this callback whenever a buffer of captured audio data is ready on the input side; the buffer can be retrieved inside the callback.

  • inUserData: Custom data the developer passes in for the callback to use. Note: we usually pass the current Objective-C instance, because a pure C callback cannot access Objective-C properties and methods directly; through the instance pointer we can interact with them.
  • inAQ: The audio queue that invoked the callback
  • inBuffer: The audio queue buffer containing the newly captured audio data
  • inStartTime: Timestamp of the buffer's audio data, mainly used for synchronization
  • inNumberPacketDescriptions: Number of packet descriptions. For VBR formats the audio queue supplies this value, and when recording to a file you pass it on to AudioFileWritePackets; CBR formats do not use this parameter
  • inPacketDescs: Descriptions of the packets in the buffer. For VBR data, pass this to AudioFileWritePackets when writing to a file; for CBR data it is NULL
1.6.2. Playback callback functions
AudioQueueOutputCallback (
    void                  *inUserData,
    AudioQueueRef         inAQ,
    AudioQueueBufferRef   inBuffer
);

For playback, the callback is where audio data is read and loaded into the buffer.

  • inUserData: Custom data the developer passes in for the callback to use. Note: we usually pass the current Objective-C instance, because a pure C callback cannot access Objective-C properties and methods directly.
  • inAQ: The audio queue that invoked the callback
  • inBuffer: The audio queue buffer the callback must fill.

If the application plays data in a VBR format, the callback needs to obtain packet descriptions via AudioFileReadPackets. The callback then stores the packet descriptions in a custom data structure so they can be handed to the audio queue when the buffer is enqueued.

1.7. Use codecs

Audio Queue Services uses audio codecs to convert between audio data formats, so you can record or play in any format for which a codec is available.

Each audio queue has its own audio data format, encapsulated in an AudioStreamBasicDescription. By specifying the format via mFormatID, the audio queue automatically chooses the appropriate codec. Developers can further customize the data by specifying the sample rate, channel count, and so on.

As shown in the figure above, the application tells the audio queue to start recording in the specified format (AAC). The audio queue uses an encoder to convert the raw PCM data to AAC, then notifies the callback function and hands it the converted data in an audio queue buffer. Finally, the callback takes the AAC data and uses it (for example, writes it to a file).

As shown in the figure above, the application tells the audio queue to play a file in the specified format (AAC). The audio queue calls the callback function to read audio data from the file; the callback passes the data, still in its original format, to the audio queue. Finally, the audio queue uses the appropriate decoder and delivers the decoded audio (PCM) to the speaker.

An audio queue can take advantage of any codec, whether native to the system or installed by a third party (Mac OS X only).

1.8. Life cycle

The span of an audio queue's activity between creation and disposal is called its life cycle.

  • Start (AudioQueueStart): starts recording or playback
  • Prime (AudioQueuePrime): playback only; called alongside AudioQueueStart to decode buffers in advance so audio plays as soon as possible
  • Stop (AudioQueueStop): resets the audio queue and stops recording or playback
  • Pause (AudioQueuePause): pauses recording or playback without affecting the data already in the queue; call AudioQueueStart to resume
  • Flush (AudioQueueFlush): call after enqueueing the last buffer to ensure all buffered data has been processed
  • Reset (AudioQueueReset): removes all buffered data from the queue and resets the codec and DSP state

AudioQueueStop can stop synchronously or asynchronously.

  • Synchronous: stops immediately, ignoring any data still in the queue
  • Asynchronous: stops after all queued data has been played or recorded

1.9. Parameter settings

An audio queue has adjustable settings called parameters. Each parameter has an enumerated constant as its key and a floating-point number as its value. Parameters apply only to playback.

There are two ways to set parameters:

  • Per audio queue, with AudioQueueSetParameter: the change takes effect immediately
  • Per audio queue buffer, with AudioQueueEnqueueBufferWithParameters: the value is set at enqueue time and takes effect when that buffer plays

Use kAudioQueueParam_Volume to adjust the playback volume (0.0 to 1.0).

2. Recording

When recording with Audio Queue Services, the output can be a file, a network stream, an in-memory object, and so on. Only recording to a file is described here.

Process

  • Define a custom structure to manage the audio format, state, file path, etc.
  • Write the recording callback function
  • Determine the required buffer size, and generate a magic cookie (metadata) if needed
  • Set the custom audio data format and specify the file path
  • Create the audio queue, allocate the audio queue buffers, and enqueue them
  • Tell the audio queue to start recording
  • When done, stop the audio queue and reclaim the buffer memory

2.1. Use a custom structure to manage state information

The first step is to customize a structure to manage the audio format and state information.

static const int kNumberBuffers = 3;                            // 1
struct AQRecorderState {
    AudioStreamBasicDescription  mDataFormat;                   // 2
    AudioQueueRef                mQueue;                        // 3
    AudioQueueBufferRef          mBuffers[kNumberBuffers];      // 4
    AudioFileID                  mAudioFile;                    // 5
    UInt32                       bufferByteSize;                // 6
    SInt64                       mCurrentPacket;                // 7
    bool                         mIsRunning;                    // 8
};
  • kNumberBuffers: Number of audio queue buffers to use
  • mDataFormat: The audio data format
  • mQueue: The recording audio queue created by the application
  • mBuffers: Array of pointers to the audio queue buffers
  • mAudioFile: The file being recorded into
  • bufferByteSize: Size, in bytes, of each audio queue buffer
  • mCurrentPacket: Packet index at which the next audio data is written to the file
  • mIsRunning: Whether the audio queue is currently running

2.2. Callback functions

static void HandleInputBuffer (
    void                                *aqData,             // 1
    AudioQueueRef                       inAQ,                // 2
    AudioQueueBufferRef                 inBuffer,            // 3
    const AudioTimeStamp                *inStartTime,        // 4
    UInt32                              inNumPackets,        // 5
    const AudioStreamPacketDescription  *inPacketDesc        // 6
)
  • aqData: Custom data the developer passes in for the callback to use. Note: we usually pass the current Objective-C instance (or a state structure), because a pure C callback cannot access Objective-C properties and methods directly.
  • inAQ: The audio queue that invoked the callback
  • inBuffer: The audio queue buffer containing the newly captured audio data
  • inStartTime: Timestamp of the buffer's audio data, mainly used for synchronization
  • inNumPackets: Number of packet descriptions. For VBR formats the audio queue supplies this value, and when recording to a file you pass it on to AudioFileWritePackets; CBR formats do not use this parameter (its value is 0)
  • inPacketDesc: Descriptions of the packets in the buffer. For VBR data, pass this to AudioFileWritePackets when writing to a file

2.3. Write data to a local file

Write data to an audio file using AudioFileWritePackets.

AudioFileWritePackets (                     // 1
    pAqData->mAudioFile,                    // 2
    false,                                  // 3
    inBuffer->mAudioDataByteSize,           // 4
    inPacketDesc,                           // 5
    pAqData->mCurrentPacket,                // 6
    &inNumPackets,                          // 7
    inBuffer->mAudioData                    // 8
);

  • 1. Write audio data to the audio file
  • 2. The audio file to write to
  • 3. false means the data should not be cached when writing
  • 4. Number of bytes of audio data to write
  • 5. Descriptions of the packets being written (see 2.2). NULL for CBR data; for VBR data, pass the callback's inPacketDesc parameter
  • 6. Packet index at which to start writing
  • 7. On input, the number of packets to write; on output, the number actually written
  • 8. The audio data to write

2.4. Enqueue the buffer

When the callback has finished with a buffer's audio data, the buffer must be put back into the audio queue so it can hold new audio data.

AudioQueueEnqueueBuffer (                    // 1
    pAqData->mQueue,                         // 2
    inBuffer,                                // 3
    0,                                       // 4
    NULL                                     // 5
);
  • 1. Add a buffer to an audio queue
  • 2. The recording audio queue
  • 3. The buffer to enqueue
  • 4. Number of packet descriptions; 0 because this parameter is unused for recording
  • 5. Array of packet descriptions; NULL because this parameter is unused for recording

2.5. The complete recording callback

static void HandleInputBuffer (
    void                                 *aqData,
    AudioQueueRef                        inAQ,
    AudioQueueBufferRef                  inBuffer,
    const AudioTimeStamp                 *inStartTime,
    UInt32                               inNumPackets,
    const AudioStreamPacketDescription   *inPacketDesc
) {
    AQRecorderState *pAqData = (AQRecorderState *) aqData;               // 1
 
    if (inNumPackets == 0 &&                                             // 2
        pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets =
            inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;
 
    if (AudioFileWritePackets (                                          // 3
            pAqData->mAudioFile,
            false,
            inBuffer->mAudioDataByteSize,
            inPacketDesc,
            pAqData->mCurrentPacket,
            &inNumPackets,
            inBuffer->mAudioData
        ) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;                         // 4
    }
    if (pAqData->mIsRunning == 0)                                        // 5
        return;
 
    AudioQueueEnqueueBuffer (                                            // 6
        pAqData->mQueue,
        inBuffer,
        0,
        NULL
    );
}
  • 1. Structure used to record some information of the audio queue, which contains the information of the current recorded file, status and other parameters.
  • 2. If the audio data is CBR, compute how many packets the buffer contains (byte size ÷ bytes per packet). For VBR data, the count is taken directly from the callback's inNumPackets parameter.
  • 3. Write audio data to an audio file
  • 4. If the recording succeeds, add the audio packet index so that the recording can continue next time
  • 5. Return if the audio queue has stopped.
  • 6. Re-load the used audio queue data into the audio queue.

2.6. Obtaining the size of the Audio Queue Buffer

void DeriveBufferSize (
    AudioQueueRef                audioQueue,                  // 1
    AudioStreamBasicDescription  &ASBDescription,             // 2
    Float64                      seconds,                     // 3
    UInt32                       *outBufferSize               // 4
) {
    static const int maxBufferSize = 0x50000;                 // 5
 
    int maxPacketSize = ASBDescription.mBytesPerPacket;       // 6
    if (maxPacketSize == 0) {                                 // 7
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty (
            audioQueue,
            kAudioQueueProperty_MaximumOutputPacketSize,
            // in Mac OS X v10.5, instead use
            //   kAudioConverterPropertyMaximumOutputPacketSize
            &maxPacketSize,
            &maxVBRPacketSize
        );
    }
 
    Float64 numBytesForTime =
        ASBDescription.mSampleRate * maxPacketSize * seconds; // 8
    *outBufferSize =
        UInt32 (numBytesForTime < maxBufferSize ?
            numBytesForTime : maxBufferSize);                 // 9
}
  • 1. The audio queue whose buffer size is being derived
  • 2. The audio data format (ASBD) of the queue
  • 3. How many seconds of audio each buffer should hold (the buffer size follows from sample rate × duration)
  • 4. On output, the derived buffer size in bytes
  • 5. Upper limit for the buffer size, in bytes. Here it is 320 KB, which corresponds to roughly half a second of stereo 24-bit audio at a 96 kHz sample rate
  • 6. For CBR data the packet size can be read straight from the ASBD; for VBR data the ASBD reports 0
  • 7. For VBR data, query the audio queue for an estimated maximum packet size instead
  • 8. Compute the buffer size in bytes
  • 9. Clamp the size to the upper limit if necessary

2.7. Set the magic cookie for the audio file

Some compressed audio formats, such as MPEG-4 AAC, require the audio file to carry extra metadata. The data structure containing this metadata is called a magic cookie. When you record to a compressed format, you must fetch the magic cookie from the audio queue and set it on the audio file.

Note: Set the magic cookie before recording starts and again after recording stops, because some encoders update the magic cookie when recording stops.

OSStatus SetMagicCookieForFile (
    AudioQueueRef inQueue,                                      // 1
    AudioFileID   inFile                                        // 2
) {
    OSStatus result = noErr;                                    // 3
    UInt32 cookieSize;                                          // 4
 
    if (
        AudioQueueGetPropertySize (                             // 5
            inQueue,
            kAudioQueueProperty_MagicCookie,
            &cookieSize
        ) == noErr
    ) {
        char *magicCookie = (char *) malloc (cookieSize);       // 6
        if (
            AudioQueueGetProperty (                             // 7
                inQueue,
                kAudioQueueProperty_MagicCookie,
                magicCookie,
                &cookieSize
            ) == noErr
        )
            result = AudioFileSetProperty (                     // 8
                inFile,
                kAudioFilePropertyMagicCookieData,
                cookieSize,
                magicCookie
            );
        free (magicCookie);                                     // 9
    }
    return result;                                              // 10
}
  • 1. The recording audio queue
  • 2. The file being recorded into
  • 3. A variable to hold the result status
  • 4. A variable to hold the magic cookie size
  • 5. Get the size of the magic cookie from the audio queue
  • 6. Allocate memory to hold the magic cookie contents
  • 7. Retrieve the magic cookie contents from the audio queue
  • 8. Write the magic cookie to the file
  • 9. Free the temporary magic cookie buffer
  • 10. Return the result status

2.8. Set the audio format.

Focus on the following parameters

  • Audio format (PCM, AAC, …)
  • Sample rate (44.1 kHz, 48 kHz, …)
  • Number of channels (mono, stereo)
  • Bit depth (16 bits)
  • Frames per audio packet (1 for linear PCM; usually more for compressed data)
  • Audio file type (CAF, AIFF, …)
AQRecorderState aqData;                                       // 1
aqData.mDataFormat.mFormatID         = kAudioFormatLinearPCM; // 2
aqData.mDataFormat.mSampleRate       = 44100.0;               // 3
aqData.mDataFormat.mChannelsPerFrame = 2;                     // 4
aqData.mDataFormat.mBitsPerChannel   = 16;                    // 5
aqData.mDataFormat.mBytesPerPacket   =                        // 6
    aqData.mDataFormat.mBytesPerFrame =
        aqData.mDataFormat.mChannelsPerFrame * sizeof (SInt16);
aqData.mDataFormat.mFramesPerPacket  = 1;                     // 7
 
AudioFileTypeID fileType             = kAudioFileAIFFType;    // 8
aqData.mDataFormat.mFormatFlags      =                        // 9
    kLinearPCMFormatFlagIsBigEndian
    | kLinearPCMFormatFlagIsSignedInteger
    | kLinearPCMFormatFlagIsPacked;
  • 1. Create a structure to store audio status information.
  • 2. Specify the audio format
  • 3. Specify the sampling rate
  • 4. Specify the number of channels
  • 5. Specify the bit depth
  • 6. Specify the number of bytes in each packet
  • 7. Specify the number of frames in each packet
  • 8. Specify the file type
  • 9. Specify the flags required by the file type

2.9. Create the recorded Audio Queue

AudioQueueNewInput (                              // 1
    &aqData.mDataFormat,                          // 2
    HandleInputBuffer,                            // 3
    &aqData,                                      // 4
    NULL,                                         // 5
    kCFRunLoopCommonModes,                        // 6
    0,                                            // 7
    &aqData.mQueue                                // 8
);
  • 1. Create a recording audio queue
  • 2. The audio format to record in
  • 3. The callback function
  • 4. Custom data passed to the callback; this can be an instance of the current class or a structure holding recording state
  • 5. The run loop on which the callback is invoked. NULL (the usual choice) means the audio queue uses one of its own internal threads
  • 6. The run loop modes in which the callback may run; kCFRunLoopCommonModes is typical
  • 7. Reserved; must be 0
  • 8. On output, the newly created audio queue

2.10. Get the full audio format.

Once the audio queue is created it may fill in more details of the audio format than we supplied at initialization, so we fetch the complete format back from the queue and use that.

UInt32 dataFormatSize = sizeof (aqData.mDataFormat);       // 1
 
AudioQueueGetProperty (                                    // 2
    aqData.mQueue,                                         // 3
    kAudioQueueProperty_StreamDescription,                 // 4
    // in Mac OS X, instead use
    //    kAudioConverterCurrentInputStreamDescription
    &aqData.mDataFormat,                                   // 5
    &dataFormatSize                                        // 6
);
  • 1. The size of the AudioStreamBasicDescription structure
  • 2. Get an audio queue property
  • 3. The audio queue to query
  • 4. The property ID for the queue's data format
  • 5. On output, the complete audio data format
  • 6. On input, the expected size of the AudioStreamBasicDescription structure; on output, the actual size. The recording application does not need to use this value

2.11. Create an audio file

CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation (            // 1
        NULL,                                            // 2
        (const UInt8 *) filePath,                        // 3
        strlen (filePath),                               // 4
        false                                            // 5
    );
 
AudioFileCreateWithURL (                                 // 6
    audioFileURL,                                        // 7
    fileType,                                            // 8
    &aqData.mDataFormat,                                 // 9
    kAudioFileFlags_EraseFile,                           // 10
    &aqData.mAudioFile                                   // 11
);
  • 1. Create a CFURL object representing the recording file path
  • 2. NULL (kCFAllocatorDefault) means use the current default memory allocator
  • 3. The file path
  • 4. The length of the file path
  • 5. false indicates a file, not a directory
  • 6. Create a new audio file, or initialize an existing one
  • 7. The path of the audio file (the CFURL created above)
  • 8. The audio file type (CAF, AIFF, …)
  • 9. The ASBD of the data to be written
  • 10. This flag makes the call overwrite the file if it already exists
  • 11. On output, the audio file object to record into

2.12. Set the audio queue buffer size

Use the DeriveBufferSize function from section 2.6 to compute the buffer size for later use.

DeriveBufferSize (              // 1
    aqData.mQueue,              // 2
    aqData.mDataFormat,         // 3
    0.5,                        // 4
    &aqData.bufferByteSize      // 5
);

2.13. Prepare the specified number of buffers for the Audio Queue

for (int i = 0; i < kNumberBuffers; ++i) {           // 1
    AudioQueueAllocateBuffer (                       // 2
        aqData.mQueue,                               // 3
        aqData.bufferByteSize,                       // 4
        &aqData.mBuffers[i]                          // 5
    );
 
    AudioQueueEnqueueBuffer (                        // 6
        aqData.mQueue,                               // 7
        aqData.mBuffers[i],                          // 8
        0,                                           // 9
        NULL                                         // 10
    );
}
  • 1. Usually 3 buffers are used. This loop allocates memory for the specified number of buffers and enqueues them
  • 2. Allocate memory for each buffer
  • 3. The audio queue to allocate the buffer for
  • 4. The buffer size to allocate (derived in 2.12)
  • 5. On output, the newly allocated buffer
  • 6. Enqueue the buffer
  • 7. The audio queue to enqueue on
  • 8. The buffer to enqueue
  • 9. Unused for recording
  • 10. Unused for recording

2.14. Record audio

aqData.mCurrentPacket = 0;                           // 1
aqData.mIsRunning = true;                            // 2
 
AudioQueueStart (                                    // 3
    aqData.mQueue,                                   // 4
    NULL                                             // 5
);
// Wait, on user interface thread, until user stops the recording
AudioQueueStop (                                     // 6
    aqData.mQueue,                                   // 7
    true                                             // 8
);
 
aqData.mIsRunning = false;                           // 9
  • 1. Initialize the packet index of the recording file to 0
  • 2. Mark the audio queue as running
  • 3. Start the audio queue
  • 4. The audio queue to start
  • 5. NULL means start capturing audio immediately
  • 6. Stop and reset the audio queue
  • 7. The audio queue to stop
  • 8. true stops synchronously; false stops asynchronously
  • 9. Update the running flag

2.15. Clear the memory after recording

After recording, recycle the audio queue data and close the audio file.

AudioQueueDispose (                                 // 1
    aqData.mQueue,                                  // 2
    true                                            // 3
);
 
AudioFileClose (aqData.mAudioFile);                 // 4
  • 1. Reclaim all resources in the audio queue
  • 2. Specify the audio queue to recycle
  • 3. True: synchronous, false: asynchronous
  • 4. Close the recording file.

3. Playback

When playing audio with Audio Queue Services, the source can be a local file, an in-memory object, or another audio store. This chapter covers playing local files only.

  • Define a structure to manage state such as the audio format
  • Implement a playback callback function
  • Determine a suitable audio queue buffer size
  • Open an audio file and determine its data format
  • Create and configure an audio queue for playback
  • Allocate and enqueue the audio queue buffers, tell the audio queue to start playing, and tell it to stop when playback finishes
  • Reclaim memory and release resources

3.1. Define a structure to manage audio state

static const int kNumberBuffers = 3;                              // 1
struct AQPlayerState {
    AudioStreamBasicDescription   mDataFormat;                    // 2
    AudioQueueRef                 mQueue;                         // 3
    AudioQueueBufferRef           mBuffers[kNumberBuffers];       // 4
    AudioFileID                   mAudioFile;                     // 5
    UInt32                        bufferByteSize;                 // 6
    SInt64                        mCurrentPacket;                 // 7
    UInt32                        mNumPacketsToRead;              // 8
    AudioStreamPacketDescription  *mPacketDescs;                  // 9
    bool                          mIsRunning;                     // 10
};

The data in this structure is essentially the same as when recording.

  • 1. The number of reusable buffers in the audio queue; 3 is typical
  • 2. The ASBD describing the audio data format
  • 3. The playback audio queue
  • 4. An array holding the audio queue's buffers
  • 5. The audio file to play
  • 6. The size, in bytes, of each buffer
  • 7. The index of the next packet to play
  • 8. The number of packets to read on each invocation of the callback
  • 9. For VBR data, the packet description array for the packets being played; for CBR data, NULL
  • 10. Whether the audio queue is running

3.2. Callback functions

role

  • Read the specified amount of audio data from the file into an audio queue buffer
  • Enqueue the buffer on the audio queue
  • When the end of the file is reached, stop the audio queue
3.2.1. Define the callback function
static void HandleOutputBuffer (
    void                 *aqData,                 // 1
    AudioQueueRef        inAQ,                    // 2
    AudioQueueBufferRef  inBuffer                 // 3
)

  • 1. As with recording, a custom structure (or class object) passed through to the callback; it is the bridge between your Objective-C code and the callback function
  • 2. The audio queue invoking the callback
  • 3. The buffer to fill with audio data read from the file
3.2.2. Read audio files
AudioFileReadPackets (                        // 1
    pAqData->mAudioFile,                      // 2
    false,                                    // 3
    &numBytesReadFromFile,                    // 4
    pAqData->mPacketDescs,                    // 5
    pAqData->mCurrentPacket,                  // 6
    &numPackets,                              // 7
    inBuffer->mAudioData                      // 8
);
  • 1. The function that reads packets from the file
  • 2. The audio file to read
  • 3. False: do not cache the data while reading
  • 4. On output: the number of bytes actually read from the file
  • 5. On output: for VBR, the packet description array read from the file; for CBR, NULL
  • 6. The packet index at which to start reading
  • 7. On input: the number of packets to read; on output: the number actually read
  • 8. On output: the audio data read from the file
3.2.3. Enqueue the buffer

After the audio data has been read, enqueue the buffer on the audio queue.

AudioQueueEnqueueBuffer (                      // 1
    pAqData->mQueue,                           // 2
    inBuffer,                                  // 3
    (pAqData->mPacketDescs ? numPackets : 0),  // 4
    pAqData->mPacketDescs                      // 5
);

  • 4. The number of packet descriptions; use 0 for CBR data
  • 5. The packet description array; required for compressed (VBR) data
3.2.4. Stop the audio queue

If you detect that the current audio file has been read, stop the audio queue.

if (numPackets == 0) {                          // 1
    AudioQueueStop (                            // 2
        pAqData->mQueue,                        // 3
        false                                   // 4
    );
    pAqData->mIsRunning = false;                // 5
}
  • 1. Check whether the number of packets returned by AudioFileReadPackets is 0
  • 4. True: stop synchronously; false: asynchronously
3.2.5. Full callback
static void HandleOutputBuffer (
    void                *aqData,
    AudioQueueRef       inAQ,
    AudioQueueBufferRef inBuffer
) {
    AQPlayerState *pAqData = (AQPlayerState *) aqData;        // 1
    if (pAqData->mIsRunning == 0) return;                     // 2
    UInt32 numBytesReadFromFile;                              // 3
    UInt32 numPackets = pAqData->mNumPacketsToRead;           // 4
    AudioFileReadPackets (
        pAqData->mAudioFile,
        false,
        &numBytesReadFromFile,
        pAqData->mPacketDescs, 
        pAqData->mCurrentPacket,
        &numPackets,
        inBuffer->mAudioData 
    );
    if (numPackets > 0) {                                     // 5
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;  // 6
       AudioQueueEnqueueBuffer ( 
            pAqData->mQueue,
            inBuffer,
            (pAqData->mPacketDescs ? numPackets : 0),
            pAqData->mPacketDescs
        );
        pAqData->mCurrentPacket += numPackets;                // 7 
    } else {
        AudioQueueStop (
            pAqData->mQueue,
            false
        );
        pAqData->mIsRunning = false;
    }
}
  • 3. Holds the number of bytes read
  • 4. On input, the number of packets to read; AudioFileReadPackets overwrites it with the number actually read
  • 7. Advance the packet index so that the next callback continues where this one left off

3.3. Calculate the audio queue buffer size

We need to specify the size of each audio queue buffer, then allocate memory for the buffers based on the calculated size.

  • Derive the number of packets that AudioFileReadPackets should read on each callback invocation
  • Set a lower bound on the buffer size to avoid overly frequent disk access
void DeriveBufferSize (
    AudioStreamBasicDescription &ASBDesc,                            // 1
    UInt32                      maxPacketSize,                       // 2
    Float64                     seconds,                             // 3
    UInt32                      *outBufferSize,                      // 4
    UInt32                      *outNumPacketsToRead                 // 5
) {
    static const int maxBufferSize = 0x50000;                        // 6
    static const int minBufferSize = 0x4000;                         // 7
 
    if (ASBDesc.mFramesPerPacket != 0) {                             // 8
        Float64 numPacketsForTime =
            ASBDesc.mSampleRate / ASBDesc.mFramesPerPacket * seconds;
        *outBufferSize = numPacketsForTime * maxPacketSize;
    } else {                                                         // 9
        *outBufferSize =
            maxBufferSize > maxPacketSize ?
                maxBufferSize : maxPacketSize;
    }
 
    if (                                                             // 10
        *outBufferSize > maxBufferSize &&
        *outBufferSize > maxPacketSize
    )
        *outBufferSize = maxBufferSize;
    else {                                                           // 11
        if (*outBufferSize < minBufferSize)
            *outBufferSize = minBufferSize;
    }
 
    *outNumPacketsToRead = *outBufferSize / maxPacketSize;           // 12
}
  • 2. The estimated maximum packet size; obtain it with AudioFileGetProperty by querying the kAudioFilePropertyPacketSizeUpperBound property
  • 3. The duration in seconds; together with the sample rate, this determines how much audio each buffer holds
  • 4. On output: the size of each buffer
  • 5. On output: the number of packets to read on each playback callback
  • 6. Upper limit for the buffer size
  • 7. Lower limit for the buffer size
  • 8. If the format has a fixed number of frames per packet, compute the buffer size from the sample rate and duration
  • 9. Otherwise, fall back to the larger of the maximum buffer size and the maximum packet size
  • 10. Clamp to the upper limit
  • 11. Clamp to the lower limit
  • 12. Compute the number of packets that fit in the buffer

3.4. Open the audio file

  • Get a CFURL object representing the path to the audio file
  • Open the audio file
  • Get the file's data format
3.4.1. Get a CFURL object for the audio file path
CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation (           // 1
        NULL,                                           // 2
        (const UInt8 *) filePath,                       // 3
        strlen (filePath),                              // 4
        false                                           // 5
    );
  • 1. Create a CFURL object representing the audio file path
  • 2. Pass NULL (kCFAllocatorDefault) to use the default memory allocator
  • 3. The file path
  • 4. The length of the file path
  • 5. False: the path is a file, not a directory
3.4.2. Open the audio file
AQPlayerState aqData;                                   // 1
 
OSStatus result =
    AudioFileOpenURL (                                  // 2
        audioFileURL,                                   // 3
        fsRdPerm,                                       // 4
        0,                                              // 5
        &aqData.mAudioFile                              // 6
    );
 
CFRelease (audioFileURL);                               // 7
  • 2. Open the audio file you want to play
  • 3. The audio file path
  • 4. File permissions; here read-only (fsRdPerm)
  • 5. An optional file-type hint; 0 means unused
  • 6. On output: a reference to the opened file object
  • 7. Release the CFURL object; it is no longer needed
3.4.3. Obtaining the file format
UInt32 dataFormatSize = sizeof (aqData.mDataFormat);    // 1
 
AudioFileGetProperty (                                  // 2
    aqData.mAudioFile,                                  // 3
    kAudioFilePropertyDataFormat,                       // 4
    &dataFormatSize,                                    // 5
    &aqData.mDataFormat                                 // 6
);
  • 5. On input: the size of the AudioStreamBasicDescription structure that describes the file's data format; on output: the actual size. Playback does not need this value.
  • 6. On output: the file's ASBD data format is written into this variable

3.5. Create a queue for playing audio

AudioQueueNewOutput (                                // 1
    &aqData.mDataFormat,                             // 2
    HandleOutputBuffer,                              // 3
    &aqData,                                         // 4
    CFRunLoopGetCurrent (),                          // 5
    kCFRunLoopCommonModes,                           // 6
    0,                                               // 7
    &aqData.mQueue                                   // 8
);
  • 3. The playback callback function
  • 4. The custom data structure passed through to the callback
  • 5. The run loop on which the playback callback is invoked
  • 6. The run-loop modes in which the callback may be invoked

3.6. Set the audio queue buffer size

3.6.1. Compute the buffer size and the number of packets to read
UInt32 maxPacketSize;
UInt32 propertySize = sizeof (maxPacketSize);
AudioFileGetProperty (                               // 1
    aqData.mAudioFile,                               // 2
    kAudioFilePropertyPacketSizeUpperBound,          // 3
    &propertySize,                                   // 4
    &maxPacketSize                                   // 5
);
 
DeriveBufferSize (                                   // 6
    aqData.mDataFormat,                              // 7
    maxPacketSize,                                   // 8
    0.5,                                             // 9
    &aqData.bufferByteSize,                          // 10
    &aqData.mNumPacketsToRead                        // 11
);
3.6.2. Allocate memory for packet description array
bool isFormatVBR = (                                       // 1
    aqData.mDataFormat.mBytesPerPacket == 0 ||
    aqData.mDataFormat.mFramesPerPacket == 0
);
 
if (isFormatVBR) {                                         // 2
    aqData.mPacketDescs =
      (AudioStreamPacketDescription*) malloc (
        aqData.mNumPacketsToRead * sizeof (AudioStreamPacketDescription)
      );
} else {                                                   // 3
    aqData.mPacketDescs = NULL;
}
  • 1. Determine whether the file's data is VBR or CBR. For VBR data, the bytes per packet (or frames per packet) vary, so the corresponding ASBD field is 0.
  • 2. For VBR data, allocate memory for the packet description array
  • 3. For CBR data no descriptions are needed, so set the pointer to NULL

3.7. Set magic Cookie

For compressed audio formats such as AAC, the magic cookie (the codec's metadata) must be set on the audio queue before playback.

UInt32 cookieSize = sizeof (UInt32);                   // 1
bool couldNotGetProperty =                             // 2
    AudioFileGetPropertyInfo (                         // 3
        aqData.mAudioFile,                             // 4
        kAudioFilePropertyMagicCookieData,             // 5
        &cookieSize,                                   // 6
        NULL                                           // 7
    );
 
if (!couldNotGetProperty && cookieSize) {              // 8
    char* magicCookie =
        (char *) malloc (cookieSize);
 
    AudioFileGetProperty (                             // 9
        aqData.mAudioFile,                             // 10
        kAudioFilePropertyMagicCookieData,             // 11
        &cookieSize,                                   // 12
        magicCookie                                    // 13
    );
 
    AudioQueueSetProperty (                            // 14
        aqData.mQueue,                                 // 15
        kAudioQueueProperty_MagicCookie,               // 16
        magicCookie,                                   // 17
        cookieSize                                     // 18
    );
 
    free (magicCookie);                                // 19
}
  • 1. Start with an estimated magic cookie size of sizeof(UInt32)
  • 2. Records whether the magic cookie property could be obtained
  • 3. Query the size of the magic cookie stored in the file
  • 4. The file to play
  • 5. The property key, kAudioFilePropertyMagicCookieData
  • 6. On input: the estimated size of the magic cookie; on output: the actual size
  • 7. Pass NULL when the property's read/write access flags are not needed
  • 8. If the file contains a magic cookie, allocate memory to hold it
  • 9. Read the magic cookie from the file
  • 12. On input: the size of the cookie buffer
  • 13. On output: the magic cookie data read from the file
  • 14. Set the magic cookie on the audio queue with AudioQueueSetProperty, then free the temporary buffer

3.8. Allocate and prime the audio queue buffers

aqData.mCurrentPacket = 0;                                // 1
 
for (int i = 0; i < kNumberBuffers; ++i) {                // 2
    AudioQueueAllocateBuffer (                            // 3
        aqData.mQueue,                                    // 4
        aqData.bufferByteSize,                            // 5
        &aqData.mBuffers[i]                               // 6
    );
 
    HandleOutputBuffer (                                  // 7
        &aqData,                                          // 8
        aqData.mQueue,                                    // 9
        aqData.mBuffers[i]                                // 10
    );
}
  • 1. Start reading audio packets at index 0
  • 7. Call the playback callback function directly to fill ("prime") each buffer before the queue starts

3.9. Set the volume

Before playback, you can optionally set the gain (0.0 to 1.0).

Float32 gain = 1.0;                                       // 1
    // Optionally, allow user to override gain setting here
AudioQueueSetParameter (                                  // 2
    aqData.mQueue,                                        // 3
    kAudioQueueParam_Volume,                              // 4
    gain                                                  // 5
);

3.10. Start Audio Queue

aqData.mIsRunning = true;                          // 1
 
AudioQueueStart (                                  // 2
    aqData.mQueue,                                 // 3
    NULL                                           // 4
);
 
do {                                               // 5
    CFRunLoopRunInMode (                           // 6
        kCFRunLoopDefaultMode,                     // 7
        0.25,                                      // 8
        false                                      // 9
    );
} while (aqData.mIsRunning);
 
CFRunLoopRunInMode (                               // 10
    kCFRunLoopDefaultMode,
    1,
    false
);

  • 1. Set the running flag before starting the queue
  • 4. Pass NULL to start playing immediately
  • 8. Run the run loop for 0.25 seconds at a time
  • 9. False: run the loop for the full time specified rather than returning after one source is handled
  • 10. After the audio queue stops, run the run loop a little longer so the buffer currently playing has time to finish

3.11. Clean up

After playback finishes, dispose of the audio queue, close the audio file, and release all related resources.

AudioQueueDispose (                            // 1
    aqData.mQueue,                             // 2
    true                                       // 3
);
 
AudioFileClose (aqData.mAudioFile);            // 4
 
free (aqData.mPacketDescs);                    // 5
  • 1. Dispose of the audio queue and all of its resources
  • 2. The audio queue to dispose of
  • 3. True: synchronous; false: asynchronous
  • 4. Close the audio file
  • 5. Free the packet description array

Apple official documentation: Audio Queue Services Programming Guide