When recording using Audio Queue Services, the destination can be anything – a file on disk, a network connection, an object in memory, and so on. This chapter describes the most common scenario: basic recording to a file on disk.
To implement recording, an application typically performs the following steps (the overall call sequence is sketched after this list):
- Define a custom structure to manage state, format, and path information
- Write an audio queue callback function to perform the actual recording
- Optionally, write code to determine a suitable size for the audio queue buffers; if you record in a format that uses magic cookies, write code to handle them
- Fill the fields of the custom structure. This includes specifying the audio data format that the audio queue sends to the file it records into, as well as the path to that file
- Create a recording audio queue and ask it to create a set of audio queue buffers. Also create the file to record into
- Tell the audio queue to start recording
- When done, tell the audio queue to stop and then dispose of it. The audio queue disposes of its buffers
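Taken together, these steps map onto a short sequence of Core Audio calls. The sketch below is only an orientation aid built from names defined later in this chapter (AQRecorderState, HandleInputBuffer, and so on); it omits error handling, file creation, and buffer allocation, all of which are covered in their own sections.

// Orientation sketch only; every call shown here is explained step by step
// in the sections that follow. Error handling is omitted for brevity.
AQRecorderState aqData;
// ... fill in aqData.mDataFormat (see Set Up an Audio Format for Recording) ...
AudioQueueNewInput (&aqData.mDataFormat, HandleInputBuffer, &aqData,
                    NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
// ... create the file, derive the buffer size, allocate and enqueue buffers ...
AudioQueueStart (aqData.mQueue, NULL);
// ... wait until the user stops the recording ...
AudioQueueStop (aqData.mQueue, true);
AudioQueueDispose (aqData.mQueue, true);
AudioFileClose (aqData.mAudioFile);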
Define a Custom Structure to Manage State
To implement recording with Audio Queue Services, you first define a custom structure that manages the audio format and audio queue state information:
static const int kNumberBuffers = 3;                            // 1
struct AQRecorderState {
    AudioStreamBasicDescription  mDataFormat;                   // 2
    AudioQueueRef                mQueue;                        // 3
    AudioQueueBufferRef          mBuffers[kNumberBuffers];      // 4
    AudioFileID                  mAudioFile;                    // 5
    UInt32                       bufferByteSize;                // 6
    SInt64                       mCurrentPacket;                // 7
    bool                         mIsRunning;                    // 8
};
- Sets the number of audio queue buffers to use
- An AudioStreamBasicDescription structure (from CoreAudioBaseTypes.h) representing the audio data format to write to disk. This format is used by the audio queue specified in the mQueue field. The mDataFormat field is initially set by code in the program, as described in Set Up an Audio Format for Recording. It is good practice to then update this field by querying the audio queue's kAudioQueueProperty_StreamDescription property, as described in Getting the Full Audio Format from an Audio Queue. For details on the AudioStreamBasicDescription structure, see Core Audio Data Types Reference
- The recording audio queue created by the application
- An array holding pointers to the audio queue buffers managed by the audio queue
- An audio file object representing the file into which the program records audio data
- The size, in bytes, of each audio queue buffer. This value is calculated in the DeriveBufferSize function, after the audio queue is created and before it is started. See Write a Function to Derive Recording Audio Queue Buffer Size
- The packet index of the first packet to be written from the current audio queue buffer
- A flag indicating whether the audio queue is running
Write a Recording Audio Queue Callback
Next, write a callback function for the recording audio queue. This callback does two things:
- Writes the contents of the newly filled audio queue buffer to the audio file being recorded
- Re-enqueues the audio queue buffer, whose contents have just been written to disk, into the buffer queue so it can receive new incoming audio data
The Recording Audio Queue Callback Declaration
The following shows a sample declaration of a recording audio queue callback function, of the type declared as AudioQueueInputCallback in the AudioQueue.h header file:
static void HandleInputBuffer (
    void                                *aqData,              // 1
    AudioQueueRef                       inAQ,                 // 2
    AudioQueueBufferRef                 inBuffer,             // 3
    const AudioTimeStamp                *inStartTime,         // 4
    UInt32                              inNumPackets,         // 5
    const AudioStreamPacketDescription  *inPacketDesc         // 6
)
- In general, aqData is a custom structure that contains audio queue status data, as described in the previous section
- The audio queue that owns this callback function
- An audio queue buffer containing incoming audio data to record
- The sampling time of the first sample in the audio queue buffer (not required for simple recordings)
- The number of packet descriptions in the inPacketDesc parameter. A value of 0 indicates CBR data
- For compressed audio data formats that require packet descriptions, the packet descriptions produced by the encoder for the packets in the buffer
Writing an Audio Queue Buffer to Disk
The first task of the recording audio queue callback is to write the audio queue buffer to disk. This buffer is the one that the callback's audio queue has just filled with fresh audio data from the input device. The callback writes to the file using the AudioFileWritePackets function, declared in the AudioFile.h header file:
AudioFileWritePackets (                     // 1
    pAqData->mAudioFile,                    // 2
    false,                                  // 3
    inBuffer->mAudioDataByteSize,           // 4
    inPacketDesc,                           // 5
    pAqData->mCurrentPacket,                // 6
    &inNumPackets,                          // 7
    inBuffer->mAudioData                    // 8
);
- The AudioFileWritePackets function, declared in the AudioFile.h header file, writes the contents of a buffer to an audio data file
- The audio file object (of type AudioFileID) representing the audio file to write to
- Use false to indicate that the function should not cache the data when writing
- The number of bytes of audio data to write. The inBuffer variable represents the audio queue buffer that the audio queue handed to the callback
- An array of packet descriptions for the audio data. A value of NULL indicates that no packet descriptions are required (such as for CBR audio data)
- The packet index of the first packet to be written
- On input, the number of packets to write. On output, the number of packets actually written
- The new audio data to write to the audio file
Enqueuing an Audio Queue Buffer
Now that the audio data from the audio queue buffer has been written to the audio file, the callback re-enqueues the buffer. Once back in the buffer queue, the buffer is ready to receive new incoming audio data.
AudioQueueEnqueueBuffer (    // 1
    pAqData->mQueue,         // 2
    inBuffer,                // 3
    0,                       // 4
    NULL                     // 5
);
- The AudioQueueEnqueueBuffer function adds the audio queue buffer to the audio queue buffer queue
- The audio queue to which you want to add an audio buffer
- An audio queue buffer for queuing
- Number of packet descriptions in audio queue buffer data. Set to 0 because this parameter is not used for recording
- An array of packet descriptions describing audio queue buffer data. Set to NULL because this parameter is not used for recording
A Full Recording Audio Queue Callback
static void HandleInputBuffer (
    void                                *aqData,
    AudioQueueRef                       inAQ,
    AudioQueueBufferRef                 inBuffer,
    const AudioTimeStamp                *inStartTime,
    UInt32                              inNumPackets,
    const AudioStreamPacketDescription  *inPacketDesc
) {
    AQRecorderState *pAqData = (AQRecorderState *) aqData;      // 1

    if (inNumPackets == 0 &&                                    // 2
        pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets =
            inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;

    if (AudioFileWritePackets (                                 // 3
            pAqData->mAudioFile,
            false,
            inBuffer->mAudioDataByteSize,
            inPacketDesc,
            pAqData->mCurrentPacket,
            &inNumPackets,
            inBuffer->mAudioData
        ) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;                // 4
    }

    if (pAqData->mIsRunning == 0)                               // 5
        return;

    AudioQueueEnqueueBuffer (                                   // 6
        pAqData->mQueue,
        inBuffer,
        0, NULL
    );
}
- The custom structure supplied to the audio queue when it was created
- If the audio queue buffer contains CBR data, calculate the number of packets in the buffer. This number equals the total bytes of data in the buffer divided by the (constant) number of bytes per packet. For VBR data, the audio queue supplies the number of packets in the buffer when it invokes the callback
- Writes the contents of the buffer to the audio data file
- If the audio data was written successfully, increment the audio data file's packet index, in preparation for writing the next buffer's worth of audio data
- If the audio queue has stopped, return
- Re-enqueue the audio queue buffer whose contents have just been written to the file
Write a Function to Derive Recording Audio Queue Buffer Size
Audio Queue Services expects your application to specify a size for the audio queue buffers it uses.
The calculation here takes into account the audio data format you are recording. The format includes all the factors that can affect buffer size, such as the number of audio channels.
void DeriveBufferSize (
    AudioQueueRef                audioQueue,                   // 1
    AudioStreamBasicDescription  &ASBDescription,              // 2
    Float64                      seconds,                      // 3
    UInt32                       *outBufferSize                // 4
) {
    static const int maxBufferSize = 0x50000;                  // 5

    int maxPacketSize = ASBDescription.mBytesPerPacket;        // 6
    if (maxPacketSize == 0) {                                   // 7
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty (
            audioQueue,
            kAudioQueueProperty_MaximumOutputPacketSize,
            // in Mac OS X v10.5, instead use
            //   kAudioConverterPropertyMaximumOutputPacketSize
            &maxPacketSize,
            &maxVBRPacketSize
        );
    }

    Float64 numBytesForTime =
        ASBDescription.mSampleRate * maxPacketSize * seconds;   // 8
    *outBufferSize =
        UInt32 (numBytesForTime < maxBufferSize ?
            numBytesForTime : maxBufferSize);                   // 9
}
- The audio queue that owns the buffers whose size you want to specify
- The AudioStreamBasicDescription structure for the audio queue
- The size you are specifying for each audio queue buffer, in terms of seconds of audio
- On output, the size for each audio queue buffer, in bytes
- An upper bound for the audio queue buffer size, in bytes. Here the upper bound is set to 320 KB (0x50000 = 327,680 bytes), which corresponds to roughly half a second of stereo, 24-bit audio at a sample rate of 96 kHz
- For CBR audio data, get the (constant) packet size from the AudioStreamBasicDescription structure and use it as the maximum packet size. This assignment has the side effect of determining whether the audio data to record is CBR or VBR: if it is VBR, the audio queue's AudioStreamBasicDescription structure lists the bytes-per-packet value as 0
- For VBR audio data, query the audio queue for the estimated maximum packet size
- Derive the buffer size, in bytes
- Limit the buffer size, if needed, to the previously set upper bound
Set a Magic Cookie for an Audio File
Some lossy audio formats, such as MPEG-4 AAC, use structures that contain audio metadata. These structures are called magic cookies. When you record in such a format using Audio Queue Services, you must get the magic cookie from the audio queue and add it to the audio file before you start recording.
The following shows how to obtain a magic cookie from an audio queue and apply it to an audio file. Your code calls a function like this before recording, and then again after recording, because some codecs update the magic cookie data when recording stops. (A usage sketch follows the annotated listing below.)
OSStatus SetMagicCookieForFile (
    AudioQueueRef inQueue,                                     // 1
    AudioFileID   inFile                                       // 2
) {
    OSStatus result = noErr;                                   // 3
    UInt32 cookieSize;                                         // 4

    if (
        AudioQueueGetPropertySize (                            // 5
            inQueue,
            kAudioQueueProperty_MagicCookie,
            &cookieSize
        ) == noErr
    ) {
        char* magicCookie =
            (char *) malloc (cookieSize);                      // 6
        if (
            AudioQueueGetProperty (                            // 7
                inQueue,
                kAudioQueueProperty_MagicCookie,
                magicCookie,
                &cookieSize
            ) == noErr
        )
            result = AudioFileSetProperty (                    // 8
                inFile,
                kAudioFilePropertyMagicCookieData,
                cookieSize,
                magicCookie
            );
        free (magicCookie);                                    // 9
    }
    return result;                                             // 10
}
- The audio queue you are recording with
- The audio file you are recording into
- A result variable indicating the success or failure of this function
- A variable to hold the size of the magic cookie data
- Gets the data size of the magic cookie from the audio queue and stores it in the cookieSize variable
- Allocates a byte array to hold the magic cookie information
- Obtains the magic cookie by querying the audio queue's kAudioQueueProperty_MagicCookie property
- Sets the magic cookie for the audio file that holds the recording. The AudioFileSetProperty function is declared in the AudioFile.h header file
- Frees the memory that held the temporary cookie data
- Returns the success or failure of this function
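As a hedged usage sketch (the call sites below are an assumption based on the description above, reusing the aqData structure from the rest of this chapter): for a cookie-using format such as AAC you would call the helper once after creating the queue and file, before starting recording, and once more after stopping it, since some codecs only finalize the cookie when recording ends.

// Hypothetical call sites for SetMagicCookieForFile; aqData is the
// AQRecorderState instance used elsewhere in this chapter.
SetMagicCookieForFile (aqData.mQueue, aqData.mAudioFile);  // before AudioQueueStart

// ... record ...

AudioQueueStop (aqData.mQueue, true);
SetMagicCookieForFile (aqData.mQueue, aqData.mAudioFile);  // cookie may have been updated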
Set Up an Audio Format for Recording
This section describes how to set up an audio data format for the audio queue; the audio queue uses this format when recording to a file. Setting up the format includes specifying:
- Audio data format type, such as linear PCM or AAC
- Sample rate, such as 44.1 kHz
- Number of audio channels, such as 2 for stereo
- Bit depth, such as 16 bits
- Frames per packet (linear PCM, for example, uses one frame per packet)
- Audio file type, such as CAF or AIFF
- Details of the audio data format required by that file type
The following shows how to set up an audio format for recording, using a fixed choice for each attribute. In production code, you would typically let the user specify some or all aspects of the audio format. Either way, the goal is to fill the mDataFormat field of the AQRecorderState custom structure, described in Define a Custom Structure to Manage State.
AQRecorderState aqData;                                       // 1

aqData.mDataFormat.mFormatID         = kAudioFormatLinearPCM; // 2
aqData.mDataFormat.mSampleRate       = 44100.0;               // 3
aqData.mDataFormat.mChannelsPerFrame = 2;                     // 4
aqData.mDataFormat.mBitsPerChannel   = 16;                    // 5
aqData.mDataFormat.mBytesPerPacket   =                        // 6
    aqData.mDataFormat.mBytesPerFrame =
        aqData.mDataFormat.mChannelsPerFrame * sizeof (SInt16);
aqData.mDataFormat.mFramesPerPacket  = 1;                     // 7

AudioFileTypeID fileType             = kAudioFileAIFFType;    // 8
aqData.mDataFormat.mFormatFlags =                             // 9
    kLinearPCMFormatFlagIsBigEndian
    | kLinearPCMFormatFlagIsSignedInteger
    | kLinearPCMFormatFlagIsPacked;
- Creates an instance of the AQRecorderState custom structure. The structure's mDataFormat field holds an AudioStreamBasicDescription structure; the values set in mDataFormat provide an initial definition of the audio format for the audio queue, which is also the audio format of the file you record into
- Defines the audio data format type as linear PCM. See Core Audio Data Types Reference for a complete list of the available data formats
- Defines the sample rate as 44.1 kHz
- Defines the number of channels as 2 (stereo)
- Defines the bit depth per channel as 16 bits
- Defines the number of bytes per packet, and per frame, as 4 (that is, 2 channels times 2 bytes per sample)
- Defines the number of frames per packet as 1
- Defines the file type as AIFF. See the audio file types enumeration in the AudioFile.h header file for a complete list of the available file types. You can specify any file type for which there is an installed codec, as described in Using Codecs and Audio Data Formats
- Sets the format flags required for the specified file type
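For comparison, a compressed format needs far fewer fields filled in, because the codec determines the packet layout itself. The following is a sketch under the assumption that you want AAC in a CAF container; it is not part of the original example, and the zeroed fields are intentionally left for the encoder and the audio queue to fill in.

// Assumed AAC variant (not from the original sample): only the format ID,
// sample rate, and channel count are specified; compressed formats leave
// bits per channel and bytes per packet/frame at 0.
AQRecorderState aqData;
memset (&aqData.mDataFormat, 0, sizeof (aqData.mDataFormat));
aqData.mDataFormat.mFormatID         = kAudioFormatMPEG4AAC;
aqData.mDataFormat.mSampleRate       = 44100.0;
aqData.mDataFormat.mChannelsPerFrame = 2;

AudioFileTypeID fileType = kAudioFileCAFType;   // CAF can hold AAC data

Because AAC uses a magic cookie, a variant like this would also need the SetMagicCookieForFile calls shown earlier.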
Creating a Recording Audio Queue
The following demonstrates how to create a recording audio queue. Notice that the AudioQueueNewInput function uses the callback function, custom structure, and audio data format configured in the previous steps.
AudioQueueNewInput (                     // 1
    &aqData.mDataFormat,                 // 2
    HandleInputBuffer,                   // 3
    &aqData,                             // 4
    NULL,                                // 5
    kCFRunLoopCommonModes,               // 6
    0,                                   // 7
    &aqData.mQueue                       // 8
);
- The AudioQueueNewInput function creates a new recording audio queue
- An audio data format used for recording. See Set Up an Audio Format for Recording
- A callback function for audio queues. See Write a Recording Audio Queue Callback
- Custom data structures for recording audio queues. See Define a Custom Structure to Manage State
- The run loop on which the callback will be invoked. Pass NULL to have the callback invoked on a thread internal to the audio queue
- The run loop modes in which the callback can be invoked. Normally, use the kCFRunLoopCommonModes constant
- Reserved. Must be 0
- On output, the newly created recording audio queue
Getting the Full Audio Format from an Audio Queue
Once the audio queue exists (see Creating a Recording Audio Queue), it may have filled out the AudioStreamBasicDescription structure more completely than you did, particularly for a compressed format. To get the complete format description, call the AudioQueueGetProperty function:
UInt32 dataFormatSize = sizeof (aqData.mDataFormat);          // 1

AudioQueueGetProperty (                                       // 2
    aqData.mQueue,                                            // 3
    kAudioQueueProperty_StreamDescription,                    // 4
    // in Mac OS X, instead use
    //   kAudioConverterCurrentInputStreamDescription
    &aqData.mDataFormat,                                      // 5
    &dataFormatSize                                           // 6
);
- Gets the expected property value size to use when querying the audio queue about its audio data format
- The AudioQueueGetProperty function obtains the value of a specified property of the audio queue
- The audio queue from which to get the audio data format
- The property ID for obtaining the value of the audio queue's data format
- On output, the full audio data format, as an AudioStreamBasicDescription structure, obtained from the audio queue
- On input, the expected size of the AudioStreamBasicDescription structure; on output, the actual size. Your recording application does not need to use this value
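If you want to confirm what the audio queue filled in, a small, optional debugging aid (not part of the original sample, and assuming printf-style logging is acceptable) can print the refreshed fields:

// Optional check of the refreshed format description.
printf ("sample rate: %.0f Hz, channels: %u, frames per packet: %u\n",
        aqData.mDataFormat.mSampleRate,
        (unsigned int) aqData.mDataFormat.mChannelsPerFrame,
        (unsigned int) aqData.mDataFormat.mFramesPerPacket);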
Create an Audio File
With the audio queue created and configured, you create the audio file to record the audio data into. The audio file uses the data format and file format specifications stored previously in the audio queue's custom structure.
CFURLRef audioFileURL =
    CFURLCreateFromFileSystemRepresentation (  // 1
        NULL,                                  // 2
        (const UInt8 *) filePath,              // 3
        strlen (filePath),                     // 4
        false                                  // 5
    );

AudioFileCreateWithURL (                       // 6
    audioFileURL,                              // 7
    fileType,                                  // 8
    &aqData.mDataFormat,                       // 9
    kAudioFileFlags_EraseFile,                 // 10
    &aqData.mAudioFile                         // 11
);
- The CFURLCreateFromFileSystemRepresentation function, declared in the CFURL.h header file, creates a CFURL object representing the file to record into
- Use NULL (or kCFAllocatorDefault) to use the current default memory allocator
- The file system path to convert into a CFURL object. In production code, you would typically obtain the value of filePath from the user
- The number of bytes in the file system path
- A value of false indicates that filePath represents a file, not a directory
- The AudioFileCreateWithURL function, from the AudioFile.h header file, creates a new audio file or initializes an existing one
- The URL at which to create the new audio file, or to initialize an existing file. Here, the URL comes from the CFURLCreateFromFileSystemRepresentation call in step 1
- The file type. In this chapter's example code, it was previously set to AIFF via the kAudioFileAIFFType constant; see Set Up an Audio Format for Recording
- The audio data format of the file to record into, specified as an AudioStreamBasicDescription structure. In this chapter's example code, this was also set in Set Up an Audio Format for Recording
- If the file already exists, it is erased
- On output, an audio file object (of type AudioFileID) representing the audio file to record into
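The filePath variable is assumed to already exist when this listing runs. A minimal, hypothetical way to provide it, and to release the CFURL object once the file has been created, might look like this; in production code the path would normally come from the user.

// Hypothetical setup for filePath (the path shown is only an example).
const char *filePath = "/tmp/recording.aif";

// ... CFURLCreateFromFileSystemRepresentation and AudioFileCreateWithURL
//     as in the listing above ...

// The file object retains what it needs, so the URL can now be released.
CFRelease (audioFileURL);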
Set an Audio Queue Buffer Size
Before preparing a set of audio queue buffers, you use the DeriveBufferSize function written earlier (see Write a Function to Derive Recording Audio Queue Buffer Size) to obtain a buffer size, and then assign that size to the recording audio queue you are using.
DeriveBufferSize (             // 1
    aqData.mQueue,             // 2
    aqData.mDataFormat,        // 3
    0.5,                       // 4
    &aqData.bufferByteSize     // 5
);
- The DeriveBufferSize function, described in Write a Function to Derive Recording Audio Queue Buffer Size, derives an appropriate audio queue buffer size
- The audio queue whose buffer size you are setting
- The audio data format of the file being recorded. See Set Up an Audio Format for Recording
- The number of seconds of audio that each audio queue buffer should hold. Half a second, as set here, is usually a good choice
- On output, the size, in bytes, for each audio queue buffer. This value is placed in the custom structure for the audio queue
Prepare a Set of Audio Queue Buffers
Now ask the audio queue you created (in Creating a Recording Audio Queue) to prepare a set of audio queue buffers:
for (int i = 0; i < kNumberBuffers; ++i) {  // 1
    AudioQueueAllocateBuffer (              // 2
        aqData.mQueue,                      // 3
        aqData.bufferByteSize,              // 4
        &aqData.mBuffers[i]                 // 5
    );
    AudioQueueEnqueueBuffer (               // 6
        aqData.mQueue,                      // 7
        aqData.mBuffers[i],                 // 8
        0,                                  // 9
        NULL                                // 10
    );
}
- Iterates to allocate and enqueue each audio queue buffer; kNumberBuffers is the number of buffers set earlier in the custom structure
- The AudioQueueAllocateBuffer function asks the audio queue to allocate an audio queue buffer
- The audio queue that allocates the buffer
- The size, in bytes, of the buffer to allocate. See Write a Function to Derive Recording Audio Queue Buffer Size
- On output, a pointer to the newly allocated audio queue buffer
- The AudioQueueEnqueueBuffer function adds the audio queue buffer to the end of the buffer queue
- The audio queue to add the buffer to
- The audio queue buffer to enqueue
- This parameter is unused when enqueuing a buffer for recording
- This parameter is unused when enqueuing a buffer for recording
Record Audio
aqData.mCurrentPacket = 0;                  // 1
aqData.mIsRunning = true;                   // 2

AudioQueueStart (                           // 3
    aqData.mQueue,                          // 4
    NULL                                    // 5
);

// Wait, on user interface thread, until user stops the recording

AudioQueueStop (                            // 6
    aqData.mQueue,                          // 7
    true                                    // 8
);

aqData.mIsRunning = false;                  // 9
- Initialize the packet index to 0 to start recording at the start of the audio file.
- Setting mIsRunning=true in a custom structure indicates that the audio queue is running. This flag is used by the recording audio queue callback.
- The AudioQueueStart function starts the audio queue on its own thread.
- The audio queue to start.
- NULL indicates that the audio queue should start recording immediately.
- The AudioQueueStop function stops and resets the recording audio queue.
- The audio queue to stop
- Use true for a synchronous stop. For an explanation of synchronous and asynchronous stopping, see Audio Queue Control and State
- A synchronous stop happens immediately, without regard for previously buffered audio data
- An asynchronous stop happens after all queued buffers have been played or recorded
- Sets mIsRunning to false in the custom structure to indicate that the audio queue is no longer running
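The "wait, on user interface thread" comment in the listing above glosses over how a program actually waits between starting and stopping the queue. One possible approach, shown here as a sketch and not as the only option, is to run the current run loop in short slices for a fixed duration while the queue records in the background; this code would go between the AudioQueueStart and AudioQueueStop calls.

// One possible waiting strategy (assumed 10-second recording; the duration
// and the 0.25-second slice are arbitrary choices for this sketch).
static const Float64 kRecordSeconds = 10.0;
Float64 elapsed = 0.0;
while (aqData.mIsRunning && elapsed < kRecordSeconds) {
    CFRunLoopRunInMode (kCFRunLoopDefaultMode, 0.25, false);
    elapsed += 0.25;   // approximate; the run loop may return early
}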
Clean Up After Recording
When you are finished recording, release the audio queue and close the audio file
AudioQueueDispose (                 // 1
    aqData.mQueue,                  // 2
    true                            // 3
);

AudioFileClose (aqData.mAudioFile); // 4
- The AudioQueueDispose function frees the audio queue and all its resources, including its buffers
- The audio queue to release
- Use true to release the audio queue synchronously (that is, immediately)
- Closes the audio file used for recording. The AudioFileClose function is declared in the AudioFile.h header file
Official documentation
- Recording Audio
- Demo address