Requirement: the app converts the sound captured by the microphone (via Audio Queue or Audio Unit) into a dB value using a formula and displays it on the interface, so that dB changes can be monitored in real time.

Process:

  • Configure the audio initialization parameters and capture sound with an Audio Queue or Audio Unit.
  • Convert the sound data to dB in the capture callback of the Audio Queue or Audio Unit.
  • Pass the dB value of each audio frame to the UI in the main view controller to reflect the change in volume.

The final effect is shown below; the yellow bars reflect the dB level of the sound:


GitHub address (with code): volume bar implementation

Jianshu address: volume bar implementation

Blog address: volume bar implementation

Juejin address: volume bar implementation


Points to note

  • Testing shows that if an Audio Unit is used to capture sound with audioUnit.componentSubType = kAudioUnitSubType_VoiceProcessingIO, the captured sample data becomes abnormal from index 512 onward (extremely large, out-of-range values), so we do not process the data from index 512 on. Details are given below.
  • If the data comes from an Audio Queue, no additional processing is required.
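
For reference, that subtype is chosen when the I/O unit is created. Below is a minimal sketch of such a setup; only the AudioUnit constants come from the note above, while the function and variable names are assumptions rather than the project's actual code.

#import <AudioUnit/AudioUnit.h>

// Sketch: create an I/O unit with the echo-cancelling VoiceProcessingIO subtype.
static AudioUnit CreateVoiceProcessingIOUnit(void) {
    AudioComponentDescription audioDesc = {0};
    audioDesc.componentType         = kAudioUnitType_Output;
    audioDesc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;
    audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioDesc.componentFlags        = 0;
    audioDesc.componentFlagsMask    = 0;
    
    AudioComponent component = AudioComponentFindNext(NULL, &audioDesc);
    AudioUnit audioUnit = NULL;
    AudioComponentInstanceNew(component, &audioUnit);
    return audioUnit;
}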

Implementation

1. Initialize the Audio Queue/Audio Unit to capture sound (see: Audio Queue/Audio Unit collects sounds). A minimal Audio Queue setup sketch follows this list.
2. Convert the sound data into a dB value in the capture callback.
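
For step 1, the full initialization is covered in the linked article; the following is only a rough sketch of an Audio Queue input setup under assumed parameters (the format values, buffer size, and function names are illustrative, and InputBufferHandler is a capture callback like the one sketched after step 2's code).

#import <AudioToolbox/AudioToolbox.h>

// Forward declaration of the capture callback (see the Audio Queue sketch further below).
static void InputBufferHandler(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer,
                               const AudioTimeStamp *inStartTime,
                               UInt32 inNumPackets,
                               const AudioStreamPacketDescription *inPacketDesc);

static AudioQueueRef StartAudioQueueCapture(void *userData) {
    // 16-bit signed mono PCM at 44.1 kHz, matching the int16_t samples the dB code expects.
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100;
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    format.mChannelsPerFrame = 1;
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = 2;
    format.mFramesPerPacket  = 1;
    format.mBytesPerPacket   = 2;
    
    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&format, InputBufferHandler, userData, NULL, NULL, 0, &queue);
    
    // Enqueue a few buffers so the queue always has somewhere to write incoming audio.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer = NULL;
        AudioQueueAllocateBuffer(queue, 2048, &buffer);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }
    
    AudioQueueStart(queue, NULL);
    return queue;
}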

For the Audio Unit, the callback handles the data as follows:

#pragma mark - AudioUnit
static OSStatus RecordCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {

    XDXRecorder *recorder = (__bridge XDXRecorder *)inRefCon;
    // Pull the rendered audio data into the recorder's buffer list.
    AudioUnitRender(recorder->_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, recorder->_buffList);
    
    void    *bufferData = recorder->_buffList->mBuffers[0].mData;
    UInt32   bufferSize = recorder->_buffList->mBuffers[0].mDataByteSize;
    //    printf("Audio Recoder Render dataSize : %d \n",bufferSize);
    
    float channelValue[2];
    caculate_bm_db(bufferData, bufferSize, 0, k_Mono, channelValue,true);
    recorder.volLDB = channelValue[0];
    recorder.volRDB = channelValue[1];
    
    return noErr;
}

According to the sound formula dB = 20 * log10(A), or equivalently A = pow(10, dB / 20.0), we process the sound data delivered in the callback. Note that, as found in testing, when the Audio Unit captures sound with audioUnit.componentSubType = kAudioUnitSubType_VoiceProcessingIO, the captured data becomes abnormal from index 512 onward (extremely large, out-of-range values), so we do not process the data from index 512 on. The likely reason is that the kAudioUnitSubType_VoiceProcessingIO subtype applies echo cancellation and other voice processing, so its data differs slightly from raw samples; this problem does not occur with the Audio Queue.
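As a quick sanity check of that formula, here is a small standalone sketch (the peak values are hypothetical) showing how a frame's maximum 16-bit sample maps to dB when normalized by full scale (32767):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Hypothetical peak sample values for 16-bit signed PCM frames.
    int16_t peaks[] = {32767, 3276, 327};
    for (int i = 0; i < 3; i++) {
        double db = 20.0 * log10((double)peaks[i] / 32767.0);
        printf("peak %5d -> %6.1f dB\n", peaks[i], db);  // prints roughly 0.0, -20.0, -40.0
    }
    return 0;
}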

Our app uses a mono channel to traverse the sound data. As shown below, we traverse the complete audioData of each frame to find the maximum sample value (max) and process it; applying the formula to that value yields a sound level in dB (roughly -40 to 0).

void caculate_bm_db(void * const data ,size_t length ,int64_t timestamp, ChannelCount channelModel,float channelValue[2],bool isAudioUnit) {
    int16_t *audioData = (int16_t *)data;
    
    if (channelModel == k_Mono) {
        // Mono: find the peak sample of the frame.
        int     sDbChnnel = 0;
        int16_t curr      = 0;
        int16_t max       = 0;
        size_t  traversalTimes = 0;
        
        if (isAudioUnit) {
            traversalTimes = length/2; // The data after 512 is not displayed properly.
        } else {
            traversalTimes = length;
        }
        
        for (int i = 0; i < traversalTimes; i++) {
            curr = *(audioData + i);
            if (curr > max) max = curr;
        }
        
        if (max < 1) {
            sDbChnnel = -100;
        } else {
            sDbChnnel = (20*log10((0.0 + max)/32767) - 0.5);
        }
        
        channelValue[0] = channelValue[1] = sDbChnnel;
        
    } else if (channelModel == k_Stereo) {
        // Stereo: track the peak of each channel separately.
        int     sDbChA = 0;
        int     sDbChB = 0;
        int16_t nCurr[2] = {0};
        int16_t nMax[2]  = {0};
        
        for (unsigned int i = 0; i < length/2; i++) {
            nCurr[0] = audioData[i];
            nCurr[1] = audioData[i + 1];
            
            if (nMax[0] < nCurr[0]) nMax[0] = nCurr[0];
            
            if (nMax[1] < nCurr[1]) nMax[1] = nCurr[1];
        }
        
        if (nMax[0] < 1) {
            sDbChA = -100;
        } else {
            sDbChA = (20*log10((0.0 + nMax[0])/32767) - 0.5);
        }
        
        if (nMax[1] < 1) {
            sDbChB = -100;
        } else {
            sDbChB = (20*log10((0.0 + nMax[1])/32767) - 0.5);
        }
        
        channelValue[0] = sDbChA;
        channelValue[1] = sDbChB;
    }
}
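
Since the Audio Queue needs no extra handling, its capture callback can simply hand the buffer to caculate_bm_db. Below is a rough sketch (assuming the same XDXRecorder properties as the Audio Unit callback; the sample count is passed so the non-Audio-Unit branch of caculate_bm_db stays within bounds, and the function name is illustrative).

static void InputBufferHandler(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer,
                               const AudioTimeStamp *inStartTime,
                               UInt32 inNumPackets,
                               const AudioStreamPacketDescription *inPacketDesc) {
    XDXRecorder *recorder = (__bridge XDXRecorder *)inUserData;
    
    float channelValue[2];
    // isAudioUnit is false, so no 512-sample workaround is applied.
    caculate_bm_db(inBuffer->mAudioData,
                   inBuffer->mAudioDataByteSize / sizeof(int16_t),
                   0,
                   k_Mono,
                   channelValue,
                   false);
    recorder.volLDB = channelValue[0];
    recorder.volRDB = channelValue[1];
    
    // Hand the buffer back to the queue so capture continues.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}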
3. Display the obtained dB value on the UI
  • A custom view class for the volume bar

We use CALayer to drive the volume bar. The advantage of CALayer is that changes to a sublayer's properties are implicitly animated, so setting successive dB values makes the UI change smoothly. The specific UI handling can be seen in XDXVolumeView.m.
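The idea can be illustrated with a minimal sketch; this is not the actual XDXVolumeView code, and the class name and setVolume: method are assumptions.

#import <UIKit/UIKit.h>

@interface VolumeBarView : UIView
- (void)setVolume:(CGFloat)volume;   // expects 0-40, i.e. the remapped dB value
@end

@implementation VolumeBarView {
    CALayer *_barLayer;
}

- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        _barLayer = [CALayer layer];
        _barLayer.backgroundColor = [UIColor yellowColor].CGColor;
        // Pin the bar's bottom-center to the bottom of the view so it grows upward.
        _barLayer.anchorPoint = CGPointMake(0.5, 1.0);
        _barLayer.position    = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMaxY(self.bounds));
        [self.layer addSublayer:_barLayer];
    }
    return self;
}

- (void)setVolume:(CGFloat)volume {
    // Changing a standalone sublayer's bounds is implicitly animated,
    // so consecutive volume values produce a smooth transition.
    CGFloat height = self.bounds.size.height * (volume / 40.0);
    _barLayer.bounds = CGRectMake(0, 0, self.bounds.size.width, height);
}
@end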

  • In the main view controller, a timer updates the UI every 0.25 s, which keeps the volume bar changing continuously.
  • Note: to make the UI display easier, we remap the dB value of the sound from -40 to 0 onto 0 to 40; a sketch of this controller-side update follows this list.
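
A rough sketch of that update (assuming the recorder exposes volLDB as in the capture callbacks above and a volume view with a setVolume: method like the earlier sketch; the property and method names are assumptions):

// Schedule the 0.25 s refresh when recording starts.
- (void)startVolumeTimer {
    [NSTimer scheduledTimerWithTimeInterval:0.25
                                     target:self
                                   selector:@selector(updateVolumeUI)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateVolumeUI {
    float db = self.recorder.volLDB;      // roughly -40 to 0 from caculate_bm_db
    if (db < -40) db = -40;
    if (db > 0)   db = 0;
    [self.volumeView setVolume:db + 40];  // remap -40..0 to 0..40 for the UI
}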