AVAudioEngine is more powerful than AVAudioPlayer, but it is also more cumbersome to use.

AVAudioEngine wraps Core Audio, making it much simpler to do audio signal processing.

AVAudioPlayer processes audio at the file level.

AVAudioEngine processes audio at the data stream level.

AVAudioEngine enables low-latency, real-time audio processing. It also supports multiple inputs and special effects, such as three-dimensional spatial audio.

With AVAudioEngine you can build powerful music-processing and mixing apps, or games with complex 3D spatial sound. This article builds a simple voice-changing application.

General architecture diagram (the scenario is karaoke):

AVAudioEngine Usage Guide

First, a quick overview.

Create an AVAudioEngine instance and attach nodes to it: player nodes (AVAudioPlayerNode) for playback, and effect nodes for audio effects.

Connect the nodes through the AVAudioEngine instance. These connections between nodes form an audio processing chain: the processed audio data flows through to the last node, which is the engine's output.

Let's start with a voice-changing feature, i.e. pitch shifting.

You need an AVAudioEngine and an AVAudioPlayerNode:

    let audioAVEngine = AVAudioEngine()
    var enginePlayer = AVAudioPlayerNode()
    // Voice unit: adjusts the pitch
    let pitchEffect = AVAudioUnitTimePitch()
    // Reverb unit
    let reverbEffect = AVAudioUnitReverb()
    // Playback speed unit
    let rateEffect = AVAudioUnitVarispeed()
    // Volume unit
    let volumeEffect = AVAudioUnitEQ()

Now do some setup.

Get the AVAudioFormat of the input node.

This is the default description of the audio stream data, containing information such as channel count and sample rate.

AVAudioFormat is, in effect, a wrapper around Core Audio's AudioStreamBasicDescription, the format that describes audio buffers.
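A quick sketch of inspecting a format (the properties shown are AVAudioFormat's public API; audioAVEngine is the engine instance declared above):

    let format = audioAVEngine.inputNode.inputFormat(forBus: 0)
    print(format.sampleRate)     // e.g. 44100.0
    print(format.channelCount)   // e.g. 1 for mono
    // The wrapped Core Audio struct is still accessible:
    let asbd = format.streamDescription.pointee
    print(asbd.mBytesPerFrame)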

audioAVEngine then attaches the child nodes and connects them:

// Configuration, called during initialization
func setupAudioEngine() {
    // This example is mono
    let format = audioAVEngine.inputNode.inputFormat(forBus: 0)
    
    audioAVEngine.attach(enginePlayer)
    audioAVEngine.attach(pitchEffect)
    audioAVEngine.attach(reverbEffect)
    audioAVEngine.attach(rateEffect)
    audioAVEngine.attach(volumeEffect)
    
    audioAVEngine.connect(enginePlayer, to: pitchEffect, format: format)
    audioAVEngine.connect(pitchEffect, to: reverbEffect, format: format)
    audioAVEngine.connect(reverbEffect, to: rateEffect, format: format)
    audioAVEngine.connect(rateEffect, to: volumeEffect, format: format)
    audioAVEngine.connect(volumeEffect, to: audioAVEngine.mainMixerNode, format: format)
    
    // Choose the large chamber reverb preset
    reverbEffect.loadFactoryPreset(AVAudioUnitReverbPreset.largeChamber)
    
    do {
        try audioAVEngine.start()
    } catch {
        print("Error starting AVAudioEngine.")
    }
}

Play:

func play() {
    let fileURL = getURLforMemo()
    var playFlag = true
    
    do {
        // AVAudioFile: the file the engine will play
        engineAudioFile = try AVAudioFile(forReading: fileURL)
        // Voice-changing effects. Give pitch a default value first;
        // to hear the effect clearly, make it noticeably sharp
        pitchEffect.pitch = 2400
        reverbEffect.wetDryMix = UserSetting.shared.reverb
        rateEffect.rate = UserSetting.shared.rate
        volumeEffect.globalGain = UserSetting.shared.volume
    } catch {
        engineAudioFile = nil
        playFlag = false
        print("Error loading AVAudioFile.")
    }
    
    // AVAudioPlayer is mainly used for volume metering
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: fileURL)
        audioPlayer.delegate = self
        if audioPlayer.duration > 0.0 {
            // audioPlayer is not used to play the audio,
            // so its volume is set to 0
            audioPlayer.volume = 0.0
            audioPlayer.isMeteringEnabled = true
            audioPlayer.prepareToPlay()
        } else {
            playFlag = false
        }
    } catch {
        audioPlayer = nil
        engineAudioFile = nil
        playFlag = false
        print("Error loading audioPlayer.")
    }
    
    // The two players should play together; playFlag marks that both are ready
    if playFlag == true {
        // enginePlayer carries the sound; it does the actual playback
        enginePlayer.scheduleFile(engineAudioFile, at: nil, completionHandler: nil)
        enginePlayer.play()
        // audioPlayer is silent; it is used for metering
        audioPlayer.play()
        setPlayButtonOn(flag: true)
        startUpdateLoop()
        audioStatus = .playing
    }
}

The trick above: AVAudioPlayerNode + AVAudioPlayer.

Play the AVAudioPlayerNode (audible) and the AVAudioPlayer (muted, used only to read data and state) at the same time: the AVAudioPlayerNode applies the voice-changing and other effects, while the AVAudioPlayer does the volume metering.

It seems a bit cumbersome, and Apple certainly wouldn’t recommend it.

When recording, an AVAudioNodeTapBlock installed on a node can analyze the audio input stream in real time.

Similarly, for playback you can process the audio signal, extract the average volume, and refresh the UI.
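A minimal sketch of such a tap, assuming the engine graph above (the RMS math and the dispatch to the main queue are illustrative, not from the original repo):

    audioAVEngine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        let frameCount = Int(buffer.frameLength)
        var sum: Float = 0.0
        for i in 0..<frameCount {
            sum += samples[i] * samples[i]
        }
        let rms = sqrt(sum / Float(max(frameCount, 1)))
        let averagePower = 20 * log10(max(rms, 0.000_000_01)) // in decibels
        DispatchQueue.main.async {
            // Refresh the volume meter UI with averagePower here
        }
    }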

With AVAudioPlayer, you can get the current playback time and the file's duration.

With AVAudioPlayerDelegate, it's easy to refresh the UI when playback finishes.

Of course, all of this is also possible with AVAudioPlayerNode alone.
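For instance, a sketch of an update loop that reads progress and level from the muted AVAudioPlayer, plus the delegate callback for the end of playback (the updateLoop name is illustrative; setPlayButtonOn, stopUpdateLoop, and stopPlayback are this demo's helpers):

    // Called periodically by the update loop
    func updateLoop() {
        // Playback progress, 0.0 to 1.0
        let progress = audioPlayer.currentTime / audioPlayer.duration
        // Volume metering: requires isMeteringEnabled = true (set in play() above)
        audioPlayer.updateMeters()
        let power = audioPlayer.averagePower(forChannel: 0) // roughly -160 (silence) to 0 dB
        // Refresh the progress bar and level meter UI with progress and power
        print(progress, power)
    }
    
    // AVAudioPlayerDelegate: playback finished, refresh the UI
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        stopPlayback()
    }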


Stop playback:

func stopPlayback() {
    setPlayButtonOn(flag: false)
    audioStatus = .stopped
    // The two players start together and stop together
    audioPlayer.stop()
    enginePlayer.stop()
    stopUpdateLoop()
}

Sound effects: pitch, reverb, playback speed, volume

Adjust the pitch (used for voice changing): AVAudioUnitTimePitch

The pitch property ranges from -2400 to 2400 cents, covering four octaves. The default value is 0.

An octave divides into 12 semitones.

Each semitone corresponds to the interval between adjacent piano keys and equals 100 cents.
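So the pitch of 2400 used in play() above shifts the voice up two octaves. A sketch of a semitone-based helper (not in the original):

    func setPitch(semitones: Float) {
        // 100 cents per semitone; 1200 cents (12 semitones) per octave
        pitchEffect.pitch = semitones * 100 // e.g. 12 -> one octave up
    }

The demo's setter simply passes the cents value through: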

    func setPitch(value: Float) {
        pitchEffect.pitch = value
    }
Adjust the reverb: AVAudioUnitReverb

wetDryMix ranges from 0 to 100.

0 is all dry; dry sound is the pure human voice, without effects.

100 is all wet, with a strong sense of space.

Dry sound is the original; wet sound is the post-processed signal.

   func toSetReverb(value: Float) {
        reverbEffect.wetDryMix = value
    }
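Besides wetDryMix, AVAudioUnitReverb also offers factory presets (setupAudioEngine above uses largeChamber). A quick sketch of a half-dry, half-wet blend (the preset choice here is illustrative):

    reverbEffect.loadFactoryPreset(.mediumHall)
    reverbEffect.wetDryMix = 50.0 // 50% original signal, 50% reverb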
Adjust the playback speed: AVAudioUnitVarispeed

The value of rate ranges from 0.25 to 4.0.

The default value is 1.0.

func toSetRate(value: Float) {
        rateEffect.rate = value
    }
Adjust the volume: AVAudioUnitEQ

globalGain ranges from -96 to 24 decibels.

func toSetVolume(value: Float) {
    volumeEffect.globalGain = value
}

Speech synthesis (TTS): input text, play the corresponding speech

For TTS we usually use AVSpeechSynthesizer and its delegate, AVSpeechSynthesizerDelegate. AVSpeechSynthesizer is a class in the AVFoundation framework: given input text, it lets your app choose a language and dialect supported by iOS, then synthesizes the speech and plays it.

iOS supports three kinds of Chinese, i.e. three accents: Simplified Chinese zh-CN, read by Ting-Ting; zh-HK, read by Sin-Ji; and zh-TW, read by Mei-Jia.

See How to Get a List of ALL Voices on iOS
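A sketch of enumerating the installed voices with AVSpeechSynthesisVoice.speechVoices(), filtered to the Chinese ones:

    import AVFoundation
    
    for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language.hasPrefix("zh") {
        print(voice.language, voice.name) // e.g. "zh-CN Ting-Ting"
    }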

AVSpeechSynthesizer

AVSpeechSynthesizer needs material to read aloud: an AVSpeechUtterance.

AVSpeechUtterance encapsulates the words and their corresponding speech effect parameters.

For the reading effect, you can set the accent (this demo uses zh-CN), as well as the pitch and the speech rate (pronunciation speed).

Create the AVSpeechUtterance and hand it to the AVSpeechSynthesizer. If the synthesizer is already reading, the AVSpeechUtterance is placed in the synthesizer's queue, waiting in first-in, first-out order.
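A sketch of that queueing behavior (the strings are illustrative):

    let first = AVSpeechUtterance(string: "第一句")
    let second = AVSpeechUtterance(string: "第二句")
    synthesizer.speak(first)  // starts speaking immediately
    synthesizer.speak(second) // queued, spoken after the first finishes (FIFO)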

Apple's frameworks are fine-grained: AVSpeechSynthesizer has pause, continue, and stop functions.

Stopping the AVSpeechSynthesizer also clears its queue: any remaining AVSpeechUtterance instances are removed.

AVSpeechSynthesizerDelegate, the synthesizer's delegate

With the synthesizer delegate you can listen for speech events, for example when reading starts and when it finishes.

TTS: Text To Speech in three steps

First, the setup:

    // Use a synthesizer
    let synthesizer = AVSpeechSynthesizer()
    // ...
    // Set the synthesizer's delegate to listen for events
    synthesizer.delegate = self
Read, pause, continue and stop
// Read aloud
func play() {
    let words = UserSetting.shared.message
    // Instantiate the text unit with the text to read
    let utterance = AVSpeechUtterance(string: words)
    // Set the accent
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
    // Set the speech rate
    utterance.rate = AVSpeechUtteranceMaximumSpeechRate * UserSetting.shared.rate
    // Set the pitch
    utterance.pitchMultiplier = UserSetting.shared.pitch
    synthesizer.speak(utterance)
}

// Pause reading. The pause is not immediate: it happens at a word boundary
func pausePlayback() {
    synthesizer.pauseSpeaking(at: AVSpeechBoundary.word)
}

// Continue reading
func continuePlayback() {
    synthesizer.continueSpeaking()
}

// Stop playback
func stopPlayback() {
    // Stop the synthesizer immediately
    synthesizer.stopSpeaking(at: AVSpeechBoundary.immediate)
    // Stop the timer that drives status updates; see the repo at the end of the article
    stopUpdateLoop()
    setPlayButtonOn(false)
    audioStatus = .stopped
}
Set up the synthesizer delegate to listen for state changes:
// Reading started
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
    setPlayButtonOn(true)
    startUpdateLoop()
    audioStatus = .playing
}

// Reading finished
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
    stopUpdateLoop()
    setPlayButtonOn(false)
    audioStatus = .stopped
}

// Called for each unit of text about to be spoken;
// useful for highlighting the words as they are read
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
    let speakingString = utterance.speechString as NSString
    let word = speakingString.substring(with: characterRange)
    print(word)
}

// Reading paused
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didPause utterance: AVSpeechUtterance) {
    stopUpdateLoop()
    setPlayButtonOn(false)
}

// Reading continued
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didContinue utterance: AVSpeechUtterance) {
    setPlayButtonOn(true)
    startUpdateLoop()
    audioStatus = .playing
}

Code:

github repo