AudioQueue theory learning

AudioQueue is an API for audio playback and recording provided by iOS. It is part of AudioToolbox.framework.

In the documentation, Apple recommends that developers use AudioQueue to implement playback and recording in their apps. This article covers the playback side.

It supports PCM data, the compressed formats supported by the iOS/macOS platforms (MP3, AAC, etc.), and other formats for which the user supplies a decoder.

How an AudioQueue works

To play audio data, you first memcpy it into the mAudioData field of an AudioQueueBufferRef (the memory mAudioData points to was already allocated by AudioQueueAllocateBuffer) and set mAudioDataByteSize to the size of the copied data. Then AudioQueueEnqueueBuffer inserts the filled buffer into the AudioQueue's internal buffer queue. Once there are buffers in the queue, calling AudioQueueStart makes the AudioQueue play them one by one, in enqueue order. Each time a buffer has been used, it is removed from the queue and a callback fires on the user-specified RunLoop, telling the user that an AudioQueueBufferRef has been consumed and can now be reused to hold future audio data. This cycle repeats until the audio data has all been played.

First, take a look at Apple's official workflow diagram for the AudioQueue.

The following is the process for playing an audio file from a local path:

  1. Read the audio file and fill a buffer in the audio file's read callback
  2. Enqueue the filled buffer to the AudioQueue
  3. After the AudioQueue finishes playing a buffer, it hands the buffer back in its callback to be refilled
  4. Repeat steps 2 and 3 until the audio ends
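The rotation described above can be sketched as a small, platform-independent simulation. Everything here (the names, the three-buffer count, the sizes) is illustrative, not part of the real AudioToolbox API, where the work is done by AudioQueueAllocateBuffer, AudioQueueEnqueueBuffer, and AudioQueueStart:

```c
#include <stddef.h>
#include <string.h>

#define NUM_BUFFERS 3
#define BUFFER_SIZE 4

typedef struct {
    char   data[BUFFER_SIZE]; /* stands in for mAudioData */
    size_t byteSize;          /* stands in for mAudioDataByteSize */
} FakeBuffer;

/* Fill one buffer from the source; returns bytes copied (0 at EOF). */
static size_t fill_buffer(FakeBuffer *buf, const char *src,
                          size_t srcLen, size_t *cursor) {
    size_t n = srcLen - *cursor;
    if (n > BUFFER_SIZE) n = BUFFER_SIZE;
    memcpy(buf->data, src + *cursor, n);  /* like memcpy into mAudioData */
    buf->byteSize = n;                    /* like setting mAudioDataByteSize */
    *cursor += n;
    return n;
}

/* "Play" the source: buffers are consumed in enqueue order, and each
   consumed buffer is immediately refilled, as the real callback would do.
   Returns the total number of bytes played. */
static size_t play_all(const char *src, size_t srcLen) {
    FakeBuffer buffers[NUM_BUFFERS];
    size_t cursor = 0, played = 0;
    int inUse = 0, head = 0;

    /* Prime the queue (steps 1-2). */
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (fill_buffer(&buffers[i], src, srcLen, &cursor) > 0) inUse++;

    /* Steps 3-4: consume in order, refill in the "callback" until EOF. */
    while (inUse > 0) {
        FakeBuffer *b = &buffers[head];
        played += b->byteSize;                                  /* buffer played */
        if (fill_buffer(b, src, srcLen, &cursor) == 0) inUse--; /* EOF reached */
        head = (head + 1) % NUM_BUFFERS;
    }
    return played;
}
```

The key property the simulation shows is that a small, fixed set of buffers can play a source of any length, because each buffer is reused as soon as it has been played.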

Create AudioQueue

OSStatus AudioQueueNewOutput(const AudioStreamBasicDescription *inFormat, AudioQueueOutputCallback inCallbackProc, void *inUserData, CFRunLoopRef inCallbackRunLoop, CFStringRef inCallbackRunLoopMode, UInt32 inFlags, AudioQueueRef  _Nullable *outAQ);

This method creates an AudioQueue for audio output.

Parameter descriptions:

  • inFormat: the data format of the audio to be played
  • inCallbackProc: the callback used to notify the user that the AudioQueue has finished with a buffer, so the user can refill it with audio data
  • inUserData: a pointer to user data passed into the callback function
  • inCallbackRunLoop: the RunLoop on which the callback fires. Passing NULL means the callback runs on one of the AudioQueue's own internal threads.
  • inCallbackRunLoopMode: the RunLoop mode in which the callback fires. Passing NULL is equivalent to kCFRunLoopCommonModes; in general, NULL will do.
  • outAQ: the returned AudioQueue reference
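The inFormat parameter is an AudioStreamBasicDescription. For linear PCM, several of its fields are derived from the sample rate, bit depth, and channel count. The sketch below uses a local stand-in struct mirroring the relevant CoreAudio fields (on Apple platforms you would use the real type from CoreAudio):

```c
#include <stdint.h>

/* Stand-in mirroring the AudioStreamBasicDescription fields relevant
   to interleaved linear PCM (illustrative, not the real CoreAudio type). */
typedef struct {
    double   mSampleRate;
    uint32_t mBitsPerChannel;
    uint32_t mChannelsPerFrame;
    uint32_t mBytesPerFrame;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerPacket;
} PCMFormat;

/* Derive the dependent fields for interleaved linear PCM. */
static PCMFormat make_pcm_format(double sampleRate,
                                 uint32_t bitsPerChannel,
                                 uint32_t channels) {
    PCMFormat f;
    f.mSampleRate       = sampleRate;
    f.mBitsPerChannel   = bitsPerChannel;
    f.mChannelsPerFrame = channels;
    f.mBytesPerFrame    = (bitsPerChannel / 8) * channels;
    f.mFramesPerPacket  = 1;                 /* PCM: one frame per packet */
    f.mBytesPerPacket   = f.mBytesPerFrame;  /* PCM: packet == frame */
    return f;
}
```

For 16-bit stereo at 44.1 kHz this yields 4 bytes per frame and 4 bytes per packet.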
void AudioQueueOutput_Callback(void *inClientData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)

This is the AudioQueue callback; it hands back a buffer that has finished playing.

Buffer-related APIs

1. Create a Buffer

OSStatus AudioQueueAllocateBuffer(AudioQueueRef inAQ, UInt32 inBufferByteSize, AudioQueueBufferRef  _Nullable *outBuffer);

This method allocates memory for a buffer that will hold audio data.

Parameter descriptions:

  • inAQ: the AudioQueue reference
  • inBufferByteSize: the size of the buffer to allocate
  • outBuffer: the returned buffer reference
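A common way to choose inBufferByteSize is to size each buffer to hold a fixed duration of audio. A minimal sketch (the function name and the 0.2-second figure below are illustrative choices, not anything mandated by the API):

```c
#include <stdint.h>

/* Compute a buffer size holding `seconds` of linear PCM audio.
   bytesPerFrame = (bits per channel / 8) * channel count. */
static uint32_t buffer_size_for_duration(double sampleRate,
                                         uint32_t bytesPerFrame,
                                         double seconds) {
    return (uint32_t)(sampleRate * bytesPerFrame * seconds);
}
```

For example, 0.2 seconds of 16-bit stereo PCM at 44.1 kHz needs 44100 × 4 × 0.2 = 35280 bytes per buffer.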
2. Destroy a Buffer
OSStatus AudioQueueFreeBuffer(AudioQueueRef inAQ,AudioQueueBufferRef inBuffer);

Note that this method is usually used only when a specific buffer needs to be destroyed (the Dispose method automatically frees all buffers), and only when the AudioQueue is not processing data, so it is rarely needed.

3. Enqueue a Buffer
OSStatus AudioQueueEnqueueBuffer(AudioQueueRef inAQ,
                                 AudioQueueBufferRef inBuffer,
                                 UInt32 inNumPacketDescs,
                                 const AudioStreamPacketDescription * inPacketDescs);

This method enqueues an AudioQueueBuffer that has been filled with data onto the AudioQueue.

Parameter descriptions:

  • inAQ: the AudioQueue reference
  • inBuffer: the buffer to enqueue
  • inNumPacketDescs: the number of packets of audio data in the buffer
  • inPacketDescs: a description of each packet in the buffer; the user must specify each packet's offset within the buffer via the mStartOffset field
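For variable-size packets (e.g. compressed formats such as AAC), packets are laid out back to back in the buffer and each mStartOffset is the running sum of the preceding packet sizes. The sketch below uses a local stand-in for CoreAudio's AudioStreamPacketDescription, and the packet sizes in the usage example are made up:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in mirroring CoreAudio's AudioStreamPacketDescription
   (illustrative; use the real struct on Apple platforms). */
typedef struct {
    int64_t  mStartOffset;
    uint32_t mVariableFramesInPacket;
    uint32_t mDataByteSize;
} PacketDesc;

/* Pack variable-size packets back to back, recording each packet's
   start offset and byte size. Returns the total bytes occupied. */
static size_t pack_packets(const uint32_t *packetSizes, uint32_t count,
                           PacketDesc *descs) {
    size_t offset = 0;
    for (uint32_t i = 0; i < count; i++) {
        descs[i].mStartOffset            = (int64_t)offset;
        descs[i].mVariableFramesInPacket = 0; /* constant frames per packet */
        descs[i].mDataByteSize           = packetSizes[i];
        offset += packetSizes[i];
    }
    return offset;
}
```

For three packets of 23, 87, and 54 bytes, the offsets come out as 0, 23, and 110, and the buffer's mAudioDataByteSize would be the returned total, 164.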

Control-related APIs

1. Start playing
OSStatus AudioQueueStart(AudioQueueRef inAQ,const AudioTimeStamp * inStartTime);

The second argument can be used to control the start time of playback; normally, just pass NULL to start playing immediately.

2. Decode data
OSStatus AudioQueuePrime(AudioQueueRef inAQ,
                          UInt32 inNumberOfFramesToPrepare,
                          UInt32 * outNumberOfFramesPrepared);  

This method is not often used, because calling AudioQueueStart directly will start decoding automatically (if needed). The parameters specify the number of frames to prepare (decode) and return the number of frames actually prepared.

3. Pause playback
OSStatus AudioQueuePause(AudioQueueRef inAQ);

Note that playback pauses as soon as this method is called, which means the AudioQueueOutputCallback also stops firing; pay special attention to thread scheduling so that no thread ends up stuck in an infinite wait.

4. Stop playing
OSStatus AudioQueueStop(AudioQueueRef inAQ, Boolean inImmediate);

If the second argument is true, playback stops immediately (synchronously); if false, the AudioQueue stops asynchronously, after it has played all buffers already enqueued. Pass the value appropriate to your needs.

5. Flush
OSStatus AudioQueueFlush(AudioQueueRef inAQ);

This call resets the decoder state after all enqueued buffers have been played, to prevent the current decoder state from affecting the decoding of the next audio stream (for example, when switching songs). It is unnecessary in combination with AudioQueueStop(AQ, false), because passing false to Stop does the same thing.

6. Reset
OSStatus AudioQueueReset(AudioQueueRef inAQ);

Resetting the AudioQueue removes all enqueued buffers and triggers the AudioQueueOutputCallback for them; calling AudioQueueStop triggers the callback in the same way. Calling this method directly is usually done during a seek, to clear out residual buffers (an alternative is to call AudioQueueStop, wait for the seek to complete, and then restart).

7. Get the playback time
OSStatus AudioQueueGetCurrentTime(AudioQueueRef inAQ,
                                  AudioQueueTimelineRef inTimeline,
                                  AudioTimeStamp * outTimeStamp,
                                  Boolean * outTimelineDiscontinuity);

The second and fourth parameters are AudioQueueTimeline-related; we do not use them here, so pass NULL for both. After the call, the playback time can be calculated from the returned AudioTimeStamp structure as follows:

AudioTimeStamp time = ...; // obtained from AudioQueueGetCurrentTime
NSTimeInterval playedTime = time.mSampleTime / _format.mSampleRate;

Two things must be noted when using this method to obtain the playback time:

1. The playing time here refers to the actual playing time, which differs from the playback progress as commonly understood. For example: playback starts, the user listens for 8 seconds, drags the slider to seek to the 20-second mark, and then listens for 3 more seconds. The playback progress at that point is 23 seconds, but GetCurrentTime reports the actual playing time: 11 seconds. Therefore, the timing offset must be saved on every seek:

AudioTimeStamp time = ...; // obtained from AudioQueueGetCurrentTime
NSTimeInterval playedTime = time.mSampleTime / _format.mSampleRate;
NSTimeInterval seekTime = ...; // the position being sought to
NSTimeInterval timingOffset = seekTime - playedTime;

The playback progress after a seek should then be calculated from timingOffset and playedTime:

NSTimeInterval progress = timingOffset + playedTime;

2. The GetCurrentTime method sometimes fails, so it is best to save the last valid playback time and return it when the call fails.
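These two notes can be combined into one small helper. This is a plain-C sketch with hypothetical names (ProgressState, playback_progress); in a real player the sample rate would come from the stored format and the offset would be updated by the seek code:

```c
/* Compute playback progress from the raw mSampleTime, applying the saved
   seek offset and falling back to the last good value when the
   GetCurrentTime call failed. All names here are illustrative. */
typedef struct {
    double timingOffset;   /* updated on every seek: seekTime - playedTime */
    double lastProgress;   /* last successfully computed progress */
} ProgressState;

static double playback_progress(ProgressState *st, int callSucceeded,
                                double mSampleTime, double mSampleRate) {
    if (!callSucceeded)
        return st->lastProgress;            /* GetCurrentTime failed */
    double playedTime = mSampleTime / mSampleRate;
    st->lastProgress = st->timingOffset + playedTime;
    return st->lastProgress;
}
```

Using the example above: after seeking to 20 seconds at 8 seconds of actual play, timingOffset is 12; when the actual playing time reaches 11 seconds, the reported progress is 23 seconds.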

Destroy the AudioQueue

AudioQueueDispose(AudioQueueRef inAQ,  Boolean inImmediate);

The second argument has the same meaning and usage as the AudioQueueStop method.

When using this method, be aware that the AudioQueue does not actually start immediately after AudioQueueStart is called; there is a short gap. Calling AudioQueueDispose after AudioQueueStart but before the AudioQueue is actually running will hang the program. I ran into this issue when using AudioStreamer, on iOS 6 (I did not test iOS 7; this was before iOS 7 was released). The cause is that AudioStreamer runs its cleanup procedure, which calls Dispose, at audio EOF; when the audio file contains very little data, EOF (and thus cleanup) can occur while AudioQueueStart is still being called.

There are two ways to avoid this problem. The first is thread scheduling: ensure that Dispose is called only after at least one RunLoop pass has completed (that is, at least one buffer has been played successfully). The second is to listen to the kAudioQueueProperty_IsRunning property, which becomes 1 when the AudioQueue is actually running and 0 when it stops; make sure Dispose is called only after IsRunning has become 1 following the call to Start.

In practice

Here are just a few of the more important details; for the rest, refer to the code in the demo.

  • AudioQueueNewOutput takes two RunLoop-related parameters; passing NULL for both works (the callback then runs on the AudioQueue's internal thread)
  • In AudioQueueOutput_Callback, marking a buffer as available must be done under a lock, otherwise the audio will not play properly
  • Remember to set the category of the AVAudioSession
  • Using a while loop to read audio data is more elegant than using a timer
  • The kAudioFileStreamProperty_DataFormat property must be obtained and passed in when creating the AudioQueue
  • When filling data, check whether the current buffer has enough free space left; if not, stop filling it
  • After AudioQueueEnqueueBuffer hands a filled buffer to the AudioQueue, check whether the next buffer is available; if it is not, wait until a buffer becomes available
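The last two points (locking around the buffer-availability state, and waiting when no buffer is free) can be sketched with a pthread mutex and condition variable. The structure and names (BufferPool, pool_take, pool_give_back) are illustrative, not from the demo:

```c
#include <pthread.h>

#define NUM_BUFFERS 3

/* State shared between the filler loop and the AudioQueue callback. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             freeBuffers; /* how many buffers may be filled now */
} BufferPool;

static void pool_init(BufferPool *p) {
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->cond, NULL);
    p->freeBuffers = NUM_BUFFERS;
}

/* Called by the filler loop before filling/enqueuing a buffer:
   blocks until a buffer is free, as the last bullet above requires. */
static void pool_take(BufferPool *p) {
    pthread_mutex_lock(&p->lock);
    while (p->freeBuffers == 0)
        pthread_cond_wait(&p->cond, &p->lock);
    p->freeBuffers--;
    pthread_mutex_unlock(&p->lock);
}

/* Called from the AudioQueue output callback: marks a buffer reusable.
   Done under the lock, per the locking bullet above. */
static void pool_give_back(BufferPool *p) {
    pthread_mutex_lock(&p->lock);
    p->freeBuffers++;
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
}
```

The while loop around pthread_cond_wait guards against spurious wakeups; note that if the callback stops firing (for example, after AudioQueuePause), pool_take will block forever, which is exactly the infinite-wait hazard mentioned in the pause section.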

The code is available here (github.com/Nicholas86/…).