How to capture audio on the Android platform
AudioRecord API
AudioRecord is a class provided by the Android system for audio recording. Its main function is to let Java applications manage audio input resources and capture the sound picked up by the recording hardware.
This is done by reading (pulling) audio data from an AudioRecord object in a pre-configured format and then storing it.
To start recording, an AudioRecord needs to initialize an associated sound buffer that holds the newly captured sound data. The size of this buffer is specified during construction, and it determines how long an AudioRecord object can record (that is, how much sound it can hold at one time) before the data is read out. Sound data is read from the audio hardware in chunks that never exceed the total buffer capacity, so the buffer can be drained with multiple reads.
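To get a feel for these sizes, you can estimate how much audio a buffer of a given size holds. A minimal sketch, assuming the 16-bit mono 44100 Hz configuration used throughout this article (the 3584-byte figure in the comment is purely illustrative):

```kotlin
// Estimate how much audio (in milliseconds) fits in a buffer,
// assuming 16-bit PCM (2 bytes per sample).
fun bufferDurationMs(bufferSizeInBytes: Int, sampleRate: Int = 44100, channels: Int = 1): Double {
    val bytesPerSecond = sampleRate * channels * 2
    return bufferSizeInBytes * 1000.0 / bytesPerSecond
}

// e.g. a (hypothetical) 3584-byte minimum buffer at 44100 Hz mono
// holds roughly 40 ms of audio: bufferDurationMs(3584) ≈ 40.6
```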
The overall flow:

1. Construct the AudioRecord object; the minimum buffer size is obtained via getMinBufferSize().
2. Initialize a buffer whose size is greater than or equal to that minimum, for the AudioRecord object to write sound data into.
3. Start recording.
4. Create a data stream, read audio data from the AudioRecord into the initialized buffer, and write the buffered data into the stream.
5. Close the data stream.
6. Stop recording.
Using AudioRecord to record and generate a WAV file
Create an AudioRecord object
1. First declare some global variables:
```kotlin
private var audioRecord: AudioRecord? = null
private var recordBufSize: Int = 0
private val frequency = 44100
private val channelConfiguration = AudioFormat.CHANNEL_IN_MONO
private val audioEncoding = AudioFormat.ENCODING_PCM_16BIT
private var isRecording = false
private var audioFilePath = ""
private var recordingThread: Job? = null // Coroutine job for the recording loop
```
Function prototype:
public static int getMinBufferSize (int sampleRateInHz, int channelConfig, int audioFormat)
Function:
Returns the minimum buffer size required to successfully create an AudioRecord object
Parameters:
sampleRateInHz: the sampling rate, in Hz, set here to 44100 Hz. 44100 Hz is currently the only sampling rate guaranteed to work on all devices.
channelConfig: describes the audio channel configuration. Set here to AudioFormat.CHANNEL_IN_MONO.
audioFormat: the sampling precision of the audio data. Set here to AudioFormat.ENCODING_PCM_16BIT.
Return value:
Returns the minimum buffer size, in bytes, required to successfully create an AudioRecord object. Note: this size does not guarantee smooth recording under load; a larger value should be chosen depending on how frequently the AudioRecord instance is polled for new data.
Returns ERROR_BAD_VALUE (-2) if the recording parameters are not supported by the hardware or an invalid parameter was passed, and ERROR (-1) if the implementation was unable to query the hardware for its properties.
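Since getMinBufferSize() can return these error codes, it is worth checking the result before constructing the AudioRecord. A minimal sketch (doubling the minimum is a common safety margin, not something the API requires):

```kotlin
val minBufSize = AudioRecord.getMinBufferSize(
    44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
)
check(minBufSize != AudioRecord.ERROR_BAD_VALUE && minBufSize != AudioRecord.ERROR) {
    "Recording parameters are not supported by the hardware"
}
// Allocating more than the minimum reduces the risk of dropped audio under load.
val bufferSize = minBufSize * 2
```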
Function prototype:
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes) throws IllegalArgumentException
Parameters:
audioSource: the recording source, set here to MediaRecorder.AudioSource.MIC. For other sources, such as MediaRecorder.AudioSource.FM_TUNER, see the MediaRecorder.AudioSource definitions.
sampleRateInHz: the sampling rate, in Hz, set here to 44100 Hz. 44100 Hz is currently the only sampling rate guaranteed to work on all devices.
channelConfig: describes the audio channel configuration. Set here to AudioFormat.CHANNEL_IN_MONO.
audioFormat: the format in which the audio data is to be returned. Set here to AudioFormat.ENCODING_PCM_16BIT.
bufferSizeInBytes: the total size, in bytes, of the buffer to which audio data is written during recording, as obtained from getMinBufferSize().
Exception:
Throws an IllegalArgumentException when the arguments are set incorrectly or are not supported.
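Because the constructor can throw, and even a returned object may have failed to initialize, a defensive construction could look like the sketch below. Checking state against AudioRecord.STATE_INITIALIZED is part of the real API; the structure around it is just one option:

```kotlin
val record = try {
    AudioRecord(
        MediaRecorder.AudioSource.MIC, 44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize
    )
} catch (e: IllegalArgumentException) {
    null // unsupported parameter combination
}
if (record?.state != AudioRecord.STATE_INITIALIZED) {
    // Initialization failed (e.g. missing RECORD_AUDIO permission); don't use this object.
}
```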
2. Obtain the size of the buffer and create an AudioRecord:
```kotlin
recordBufSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding)
audioRecord = AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration,
    audioEncoding, recordBufSize)
```
3. Initialize a buffer
```kotlin
val buffer = ByteArray(recordBufSize)
```
4. Start recording
```kotlin
audioRecord?.startRecording()
isRecording = true
```
5. Create a data stream, read the audio data from the AudioRecord into the initialized buffer, and write the buffered data into the stream
```kotlin
audioFilePath = Environment.getExternalStorageDirectory().absolutePath + File.separator + "test.wav"
val audioFile = File(audioFilePath)
if (audioFile.exists()) audioFile.delete()
FileOutputStream(audioFile).use { out ->
    while (isRecording) {
        // read() returns the number of bytes actually read (or a negative error code)
        val read = audioRecord?.read(buffer, 0, recordBufSize) ?: break
        if (read > 0) out.write(buffer, 0, read)
    }
}
```
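read() blocks, so this loop must not run on the main thread. Given the recordingThread: Job declared among the globals, one option is to run it in a coroutine. A sketch assuming a kotlinx-coroutines dependency (a real app should use a properly scoped CoroutineScope rather than creating one ad hoc):

```kotlin
recordingThread = CoroutineScope(Dispatchers.IO).launch {
    FileOutputStream(File(audioFilePath)).use { out ->
        while (isRecording && isActive) {
            val read = audioRecord?.read(buffer, 0, recordBufSize) ?: break
            if (read > 0) out.write(buffer, 0, read)
        }
    }
}
```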
6. Close the data stream
Setting isRecording to false ends the while loop, so no more data flows into the stream, and the use block then closes the stream automatically.

```kotlin
isRecording = false
```
7. Stop recording
After stopping the recording, be careful to release the resources.
```kotlin
isRecording = false
audioRecord?.stop()
audioRecord?.release()
audioRecord = null
recordingThread?.cancel()
```
Note: this requires the WRITE_EXTERNAL_STORAGE and RECORD_AUDIO permissions to be declared in the manifest.
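RECORD_AUDIO is a dangerous permission, so on Android 6.0+ it must also be granted at runtime, not just declared. A minimal sketch inside an Activity (REQUEST_CODE_AUDIO is an arbitrary app-defined constant):

```kotlin
if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
    != PackageManager.PERMISSION_GRANTED
) {
    ActivityCompat.requestPermissions(
        this, arrayOf(Manifest.permission.RECORD_AUDIO), REQUEST_CODE_AUDIO
    )
}
```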
So far, the basic recording process has been covered. At this point, two questions usually come up:
1. I wrote all the audio data to a file following the steps above, but after stopping the recording, the file cannot be played. Why?
Answer: Following the steps above, data was indeed written, but the file contains nothing except raw audio samples (so-called "raw" PCM data). When a player opens it, it has no way of knowing what format is stored or how to decode it, so of course it cannot play the file.
2. How can I play my recording in a player?
Answer: Add WAVE header data at the beginning of the file. Only with this header can a player know exactly what the content is, and then parse and play it correctly. For a description of the header layout, see "Play a WAV file on an AudioTrack".
The code for adding the WAVE header is as follows:
```kotlin
/**
 * @param bufferSize copy buffer size
 * @param sampleRate sample rate
 * @param channel    channel configuration (e.g. AudioFormat.CHANNEL_IN_MONO)
 */
class PcmToWavUtil(
    private val bufferSize: Int,
    private val sampleRate: Long,
    private val channel: Int
) {

    /**
     * @param inFilename  source (raw PCM) file path
     * @param outFilename target (WAV) file path
     */
    fun pcmToWav(inFilename: String, outFilename: String) {
        val inputStream = FileInputStream(inFilename)
        val outputStream = FileOutputStream(outFilename)
        val totalAudioLen = inputStream.channel.size()
        val totalDataLen = totalAudioLen + 36
        val longSampleRate: Long = sampleRate
        val channels = if (channel == AudioFormat.CHANNEL_IN_MONO) 1 else 2
        val byteRate: Long = 16 * sampleRate * channels / 8
        val data = ByteArray(bufferSize)
        writeWaveFileHeader(outputStream, totalAudioLen, totalDataLen, longSampleRate, channels, byteRate)
        inputStream.use { input ->
            outputStream.use { output ->
                var read = input.read(data)
                while (read != -1) {
                    output.write(data, 0, read)
                    read = input.read(data)
                }
            }
        }
    }

    /**
     * Add the wav header
     */
    @Throws(IOException::class)
    private fun writeWaveFileHeader(
        outputStream: FileOutputStream,
        totalAudioLen: Long,
        totalDataLen: Long,
        longSampleRate: Long,
        channels: Int,
        byteRate: Long
    ) {
        val header = ByteArray(44)
        // RIFF/WAVE header
        header[0] = 'R'.code.toByte()
        header[1] = 'I'.code.toByte()
        header[2] = 'F'.code.toByte()
        header[3] = 'F'.code.toByte()
        header[4] = (totalDataLen and 0xff).toByte()
        header[5] = (totalDataLen shr 8 and 0xff).toByte()
        header[6] = (totalDataLen shr 16 and 0xff).toByte()
        header[7] = (totalDataLen shr 24 and 0xff).toByte()
        // WAVE
        header[8] = 'W'.code.toByte()
        header[9] = 'A'.code.toByte()
        header[10] = 'V'.code.toByte()
        header[11] = 'E'.code.toByte()
        // 'fmt ' chunk
        header[12] = 'f'.code.toByte()
        header[13] = 'm'.code.toByte()
        header[14] = 't'.code.toByte()
        header[15] = ' '.code.toByte()
        // 4 bytes: size of 'fmt ' chunk
        header[16] = 16
        header[17] = 0
        header[18] = 0
        header[19] = 0
        // format = 1 (PCM)
        header[20] = 1
        header[21] = 0
        header[22] = channels.toByte()
        header[23] = 0
        header[24] = (longSampleRate and 0xff).toByte()
        header[25] = (longSampleRate shr 8 and 0xff).toByte()
        header[26] = (longSampleRate shr 16 and 0xff).toByte()
        header[27] = (longSampleRate shr 24 and 0xff).toByte()
        header[28] = (byteRate and 0xff).toByte()
        header[29] = (byteRate shr 8 and 0xff).toByte()
        header[30] = (byteRate shr 16 and 0xff).toByte()
        header[31] = (byteRate shr 24 and 0xff).toByte()
        // block align = channels * bitsPerSample / 8
        header[32] = (channels * 16 / 8).toByte()
        header[33] = 0
        // bits per sample
        header[34] = 16
        header[35] = 0
        // data chunk
        header[36] = 'd'.code.toByte()
        header[37] = 'a'.code.toByte()
        header[38] = 't'.code.toByte()
        header[39] = 'a'.code.toByte()
        header[40] = (totalAudioLen and 0xff).toByte()
        header[41] = (totalAudioLen shr 8 and 0xff).toByte()
        header[42] = (totalAudioLen shr 16 and 0xff).toByte()
        header[43] = (totalAudioLen shr 24 and 0xff).toByte()
        outputStream.write(header, 0, 44)
    }
}
```
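Wired up to the values from the earlier steps, usage might look like the following sketch (the .pcm/.wav paths are illustrative, assuming the raw recording was saved as test.pcm):

```kotlin
val dir = Environment.getExternalStorageDirectory().absolutePath
PcmToWavUtil(recordBufSize, frequency.toLong(), channelConfiguration)
    .pcmToWav("$dir/test.pcm", "$dir/test.wav")
```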
Afterword
The Android SDK provides two APIs for audio capture: MediaRecorder and AudioRecord. The former is a higher-level API that can directly encode and compress the audio picked up by the phone's microphone (to AMR, MP3, and so on) and save it to a file, while the latter is closer to the bottom layer and allows freer, more flexible control, giving you raw PCM audio frames. If you simply want to build a recorder that saves audio files, MediaRecorder is recommended; if you need to run further algorithms on the audio, compress it with a third-party encoding library, or transmit it over the network, AudioRecord is recommended. In fact, under the hood MediaRecorder also uses AudioRecord to interact with AudioFlinger in the Android Framework layer. For capturing audio in real time in a live-streaming scenario, AudioRecord is the natural choice.
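For comparison, a minimal MediaRecorder sketch that records straight to a compressed file (the output path is illustrative, and error handling is omitted):

```kotlin
// Minimal MediaRecorder sketch for comparison; error handling omitted.
val recorder = MediaRecorder().apply {
    setAudioSource(MediaRecorder.AudioSource.MIC)
    setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    setOutputFile("/path/to/record.m4a") // illustrative path
    prepare()
    start()
}
// ... later, when done:
recorder.stop()
recorder.release()
```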