
1. Basic use of AudioTrack

The AudioTrack class handles the output of audio data on the Android platform. AudioTrack has two data loading modes (MODE_STREAM and MODE_STATIC), which correspond to two completely different usage scenarios.

  • MODE_STREAM: In this mode, audio data is written to the AudioTrack again and again via write(). This is similar to writing data to a file through the write system call, but it copies the data from the user-supplied buffer into AudioTrack's internal buffer on every call, which introduces some delay. To address this, AudioTrack provides a second mode.

  • MODE_STATIC: In this mode, all of the data must be passed to AudioTrack's internal buffer in a single write() call before play() is called; no further data is passed afterwards. This mode suits short sounds with a small memory footprint and strict latency requirements, such as ringtones. Its drawback is that you should not write too much data at once, or the system may not be able to allocate enough memory to hold it all.

1.1 MODE_STATIC mode

The MODE_STATIC mode outputs audio as follows. (Note: in STATIC mode you must call write() first and play() afterwards.)

First read the audio file data:

```kotlin
private fun readAudioData() {
    GlobalScope.launch(IO) {
        runCatching {
            // Use the audio recorded in the previous section
            val path = Environment.getExternalStorageDirectory().absolutePath +
                File.separator + "test1.wav"
            val audioFile = File(path)
            val fileSize = audioFile.length()
            ALog.e("xiao", "fileSize: $fileSize")
            val inputStream = audioFile.inputStream()
            val outputStream = ByteArrayOutputStream(fileSize.toInt())
            inputStream.use { input ->
                outputStream.use { output ->
                    var data = 0
                    while (input.read().also { data = it } != -1) {
                        output.write(data)
                    }
                    ALog.e("xiao", "Got the data")
                    audioData = output.toByteArray()
                }
            }
        }.onSuccess {
            ALog.e("xiao", "Read the audio file success")
            withContext(Main) { binding.btnPlay.isEnabled = true }
        }.onFailure {
            ALog.e("xiao readAudioData error", it.message)
            it.printStackTrace()
        }
    }
}
```

Then play it back using AudioTrack's MODE_STATIC mode:

```kotlin
private fun playAudioByStatic() {
    binding.btnPlay.isEnabled = false
    GlobalScope.launch(IO) {
        runCatching {
            releaseAudioTrack()
            audioTrack = AudioTrack(
                AudioManager.STREAM_MUSIC,
                44100,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                audioData!!.size,
                AudioTrack.MODE_STATIC
            )
            ALog.e("xiao", "Writing audio data...")
            // In STATIC mode all data is written in one call, before play()
            audioTrack?.write(audioData!!, 0, audioData!!.size)
            ALog.e("xiao", "Starting playback")
            audioTrack?.play()
            ALog.e("xiao", "Playing")
        }.onSuccess {
            withContext(Main) { binding.btnPlay.isEnabled = true }
        }.onFailure {
            ALog.e("xiao playAudio error", it.message)
            it.printStackTrace()
        }
    }
}

private fun releaseAudioTrack() {
    audioTrack?.stop()
    audioTrack?.release()
}
```

1.2 MODE_STREAM mode

The MODE_STREAM mode outputs audio as follows:

```kotlin
private fun playAudioByStream() {
    releaseAudioTrack()
    GlobalScope.launch(IO) {
        try {
            // Use the audio recorded in the previous section
            val path = Environment.getExternalStorageDirectory().absolutePath +
                File.separator + "test1.wav"
            val audioFile = File(path)
            val bufferSize = AudioTrack.getMinBufferSize(
                44100,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT
            )
            audioTrack = AudioTrack(
                AudioManager.STREAM_MUSIC,
                44100,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferSize,
                AudioTrack.MODE_STREAM
            )
            val buffer = ByteArray(bufferSize)
            val inputStream = DataInputStream(audioFile.inputStream())
            inputStream.use { input ->
                var readCount = 0
                while (input.available() > 0) {
                    readCount = input.read(buffer)
                    if (readCount == AudioTrack.ERROR_INVALID_OPERATION ||
                        readCount == AudioTrack.ERROR_BAD_VALUE
                    ) {
                        continue
                    }
                    if (readCount != 0 && readCount != -1) {
                        audioTrack?.play()
                        audioTrack?.write(buffer, 0, readCount)
                    }
                }
            }
        } catch (e: Exception) {
            ALog.e("xiao playAudio error", e.message)
            e.printStackTrace()
        }
    }
}
```

Source code address

2. AudioTrack in detail

2.1 Audio stream type

In the AudioTrack constructor we encounter the parameter AudioManager.STREAM_MUSIC. Its meaning is tied to how the Android system manages and classifies audio streams.

Android divides system sound into several stream types. Here are some common ones:

  • STREAM_ALARM: alarm sounds
  • STREAM_MUSIC: music playback, such as MP3 files
  • STREAM_RING: ringtones
  • STREAM_SYSTEM: system sounds, such as the low-battery tone or the screen-lock sound
  • STREAM_VOICE_CALL: call audio

Note: this classification has nothing to do with the audio data itself. For example, both the MUSIC and RING types can carry an MP3 song. There is also no fixed rule for choosing a stream type; the ringtone played in a ringtone-preview screen, for instance, can be set to the MUSIC type. The division into stream types exists to support the Audio system's audio management policy.
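To make this concrete, here is a minimal sketch (the format parameters are simply the ones used elsewhere in this article) showing that the stream type is just a tag passed to the AudioTrack constructor; the PCM data is the same either way:

```kotlin
import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioTrack

// The stream type only tells the audio system which volume/routing policy to
// apply; it says nothing about the data being played.
fun buildTrack(streamType: Int, bufferSize: Int): AudioTrack =
    AudioTrack(
        streamType,   // e.g. AudioManager.STREAM_MUSIC or AudioManager.STREAM_RING
        44100,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufferSize,
        AudioTrack.MODE_STREAM
    )

// The same PCM bytes could be written to either track; only the volume curve
// and interruption policy applied by the system differ:
// val musicTrack = buildTrack(AudioManager.STREAM_MUSIC, bufferSize)
// val ringTrack  = buildTrack(AudioManager.STREAM_RING, bufferSize)
```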

2.2 Buffer allocation and the concept of a Frame

When computing the size of the buffer to allocate, we usually use a method called getMinBufferSize. It determines the size of the data buffer that the application layer allocates:

```java
AudioTrack.getMinBufferSize(
        8000,                                      // 8K sample points per second
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,  // two channels (stereo)
        AudioFormat.ENCODING_PCM_16BIT);
```

Tracing the code down from AudioTrack.getMinBufferSize, we find an important concept in the lower layers: the Frame. A Frame is a unit describing an amount of data: one Frame equals the number of bytes per sample point × the number of channels (for 16-bit PCM with two channels, one Frame is 2 × 2 = 4 bytes). A single sample point belongs to one channel only, while there may be one or more channels, so no per-sample unit can represent the data of all channels at one sampling instant; the Frame was introduced to fill that role. In addition, sound card drivers also allocate and manage their internal buffers in units of Frames.
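A quick worked example of the Frame arithmetic, as a sketch (44100 Hz, stereo, 16-bit PCM are simply the values used in this article):

```kotlin
// One frame = bytes per sample point × number of channels.
// For ENCODING_PCM_16BIT (2 bytes per sample) and stereo (2 channels):
const val BYTES_PER_SAMPLE = 2
const val CHANNEL_COUNT = 2
const val SAMPLE_RATE = 44100

val frameSize = BYTES_PER_SAMPLE * CHANNEL_COUNT   // 2 × 2 = 4 bytes per frame

// One second of audio therefore occupies:
val bytesPerSecond = SAMPLE_RATE * frameSize       // 44100 × 4 = 176400 bytes

// Converting between byte counts and frame counts:
fun bytesToFrames(bytes: Int): Int = bytes / frameSize
fun framesToBytes(frames: Int): Int = frames * frameSize
```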

Tracing down into the native layer, we find:

```cpp
// Minimum number of buffers required, derived from the hardware latency
uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSamplingRate);
if (minBufCount < 2) minBufCount = 2;

// Minimum frame count, scaled from the hardware rate to the requested rate
uint32_t minFrameCount =
        (afFrameCount * sampleRateInHertz * minBufCount) / afSamplingRate;

// Bytes = frames × bytes-per-sample × channel count
int minBuffSize = minFrameCount
        * (audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1)
        * nbChannels;
return minBuffSize;
```

getMinBufferSize takes hardware conditions into account (whether the sampling rate is supported, the latency of the hardware itself, and so on) to arrive at a minimum buffer size. Normally the buffer we allocate is an integer multiple of this value.
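A small sketch of that convention (doubling the minimum is a common habit, not a framework requirement):

```kotlin
import android.media.AudioFormat
import android.media.AudioTrack

fun computeBufferSize(): Int {
    val minSize = AudioTrack.getMinBufferSize(
        44100,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT
    )
    // getMinBufferSize returns a negative error code on failure, so check first.
    check(minSize != AudioTrack.ERROR && minSize != AudioTrack.ERROR_BAD_VALUE) {
        "Unsupported audio parameters"
    }
    // Allocate a multiple of the minimum; 2x is a common, conservative choice.
    return minSize * 2
}
```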

2.3 Construction process of AudioTrack

Each audio stream corresponds to an instance of the AudioTrack class. Every AudioTrack is registered with AudioFlinger when it is created, and AudioFlinger mixes all of the AudioTracks together. Currently Android allows at most 32 audio streams to exist at the same time, which means the Mixer processes at most 32 AudioTrack data streams.
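As an aside, the constructor used in the examples above is the legacy one; since API 23 the framework also provides AudioTrack.Builder, where the stream type is expressed through AudioAttributes. A minimal sketch, assuming the same format as the examples above and a bufferSize computed with getMinBufferSize:

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack

fun buildModernTrack(bufferSize: Int): AudioTrack =
    AudioTrack.Builder()
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)              // roughly STREAM_MUSIC
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .setAudioFormat(
            AudioFormat.Builder()
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .build()
        )
        .setTransferMode(AudioTrack.MODE_STREAM)
        .setBufferSizeInBytes(bufferSize)
        .build()
```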

3. Comparison between AudioTrack and MediaPlayer

Sound can be played with both MediaPlayer and AudioTrack, and both expose Java APIs to developers. Although both play sound, there are big differences between them.

3.1 Differences

The biggest difference is that MediaPlayer can play sound files in many formats, such as MP3, AAC, WAV, OGG and MIDI; MediaPlayer creates the corresponding audio decoder in the framework layer. AudioTrack, by contrast, can only play decoded PCM streams. In terms of supported file formats, that means only WAV files, since most WAV files carry raw PCM data. AudioTrack creates no decoder, so it can only play WAV files, which need no decoding.
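To make the contrast concrete, a minimal sketch (the file path is a placeholder): MediaPlayer is handed an encoded file and decodes internally, while AudioTrack must be handed raw PCM, as in the playAudioByStream() example above.

```kotlin
import android.media.MediaPlayer

// MediaPlayer: give it an encoded file (MP3/AAC/OGG/...) and the framework
// decodes it internally. The path below is a placeholder.
fun playWithMediaPlayer(): MediaPlayer =
    MediaPlayer().apply {
        setDataSource("/sdcard/Music/song.mp3")
        prepare()    // synchronous; use prepareAsync() for network streams
        start()
    }

// AudioTrack, by contrast, accepts only already-decoded PCM; the caller must
// feed raw samples through write(), as in playAudioByStream() above.
```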

3.2 Relationship

Under the hood, MediaPlayer still creates an AudioTrack in the framework layer and passes the decoded PCM data to it; the AudioTrack hands the data to AudioFlinger for mixing, which then passes it to the hardware for playback. So MediaPlayer contains an AudioTrack.

3.3 SoundPool

While exploring the Android audio playback APIs, I found that SoundPool can also play audio. The usage scenarios are roughly these: MediaPlayer is better suited for playing local music files or streaming online resources in the background for a long time; SoundPool plays short audio clips, such as game sound effects, button clicks and ringtone fragments, and it can play several sounds at once; AudioTrack sits closer to the bottom layer, offering very fine-grained control and support for low-latency playback, which suits scenarios such as streaming media and VoIP voice calls.
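A minimal SoundPool sketch for the short-clip case (R.raw.click, the stream count and the volume/priority values are assumptions for illustration):

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.SoundPool

fun playClick(context: Context) {
    val soundPool = SoundPool.Builder()
        .setMaxStreams(4)   // allow up to 4 overlapping clips
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_GAME)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build()
        )
        .build()

    // R.raw.click is a hypothetical sound resource bundled with the app.
    val soundId = soundPool.load(context, R.raw.click, 1)

    // Loading is asynchronous; play once the sample is ready.
    soundPool.setOnLoadCompleteListener { pool, id, status ->
        // play(soundId, leftVolume, rightVolume, priority, loop, rate)
        if (status == 0 && id == soundId) pool.play(id, 1f, 1f, 1, 0, 1f)
    }
}
```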