Contents
- Introduction to OpenSL ES
- OpenSL ES audio playback process
- Code implementation
- Problems encountered
- References
- Takeaways
In the last article, we played PCM audio data decoded by FFmpeg through AudioTrack. Android offers another way to play audio: OpenSL ES. Why use OpenSL ES when the Java-layer MediaPlayer or AudioTrack can already play most audio? Because special requirements such as adding sound effects are hard to implement with those APIs, while OpenSL ES handles them well and offers a rich feature set. Let's learn and practice.
I. Introduction to OpenSL ES
1.1 What is OpenSL ES?
OpenSL ES (Open Sound Library for Embedded Systems) is an embedded-audio acceleration standard. It provides audio capabilities similar to the MediaPlayer and MediaRecorder APIs in the Android Java framework, but exposes a C interface with C++ bindings, so it can be called from code written in either language. Compared with Java-layer APIs such as MediaPlayer and MediaRecorder, OpenSL ES is a lower-level C API. In everyday development the higher-level APIs are usually enough; OpenSL ES becomes worthwhile when you hit their limits, for example voice chat, 3D audio, or certain effects, where developing the audio path directly in C/C++ can improve an application's audio performance.
1.2 What is OpenSL ES capable of?
OpenSL ES supports audio playback, mixing, sound effects, and recording, as shown in the feature tables of the OpenSL ES user guide.
(The two feature tables shown here are from the official guide: OpenSL ES.)
1.3 How do I introduce it?
See the OpenSL ES programming notes in the NDK documentation.
The OpenSL ES library ships with the NDK package, e.g.:
$NDK_PATH/platforms/android-30/arch-arm/usr/lib/libOpenSLES.so
Add OpenSLES to target_link_libraries in CMakeLists.txt:
target_link_libraries(
native-lib
avformat
avcodec
avfilter
avutil
swresample
swscale
OpenSLES
${log-lib})
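Besides linking the library, the corresponding OpenSL ES headers need to be included in the C/C++ source file; these are the standard NDK headers, the same ones used by the code in section III:
#include <SLES/OpenSLES.h>          // core OpenSL ES API
#include <SLES/OpenSLES_Android.h>  // Android extensions such as SLDataLocator_AndroidSimpleBufferQueue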
1.4 Objects and Interfaces
Although OpenSL ES is a procedural C API, it is designed around objects and interfaces in an object-oriented style, which makes it convenient to use in projects.
OpenSL ES objects are similar to objects in languages such as Java and C++, but an OpenSL ES object can only be accessed through its associated interfaces. Every object exposes an initial interface called SLObjectItf; there is no handle to the object itself, only a handle to its SLObjectItf interface. An object cannot be used directly: you must call GetInterface with the ID of the interface you need (for example the player interface) and then access the functionality through that interface.
So the pattern is: create the object (which returns an SLObjectItf), Realize it, and then call GetInterface to obtain the interface for each piece of functionality you need. Audio playback uses the engine, output mix, and player objects and their interfaces; the next section walks through the concrete flow.
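A minimal sketch of this create / Realize / GetInterface pattern, using the engine object as an example (error handling omitted; in real code each SLresult should be checked against SL_RESULT_SUCCESS):
SLObjectItf engineObj = NULL;
SLEngineItf engineItf = NULL;

// 1. create: only the SLObjectItf handle of the object is returned
slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);

// 2. realize the object (synchronously) before any of its interfaces can be used
(*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);

// 3. ask the object for the interface that exposes the functionality you need
(*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engineItf);

// ... use engineItf, e.g. CreateOutputMix / CreateAudioPlayer ...

// 4. destroy the object when done; interfaces obtained from it become invalid
(*engineObj)->Destroy(engineObj);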
II. OpenSL ES audio playback process
(Image source: OpenSL ES official documentation)
To use OpenSL ES, link the OpenSLES library in CMakeLists.txt and include the corresponding headers in the C++ file, as shown in section 1.3. The detailed playback flow is as follows:
- Create the engine object
SLObjectItf engineObj
Initialize the engine: Realize
Get the engine interface: GetInterface -> SLEngineItf
- Create the output mix object
SLObjectItf outputMixObj
Initialize the output mix: Realize
- Set the input and output data parameters
- Create the player object
SLObjectItf playerObj
Initialize the player: Realize
Get the player interface: GetInterface -> SLPlayItf
- Get the playback callback interface (that is, the buffer queue)
SLAndroidSimpleBufferQueueItf bufferQueue
- Register the playback callback: RegisterCallback
- Set the play state
SetPlayState
- Enqueue audio frames, which triggers the playback callback
(*mBufferQueue)->Enqueue
- Release resources (a teardown sketch is given after the full code listing in the next section)
(See the NDK demo Native Audio, a simple audio recorder/player.)
III. Code implementation: playing decoded PCM with OpenSL ES
Now that we know the basics of OpenSL ES and how to use it, let's move on to the code implementation.
#include <jni.h>
#include <string>
#include <unistd.h>
extern "C" {
#include "include/libavcodec/avcodec.h"
#include "include/libavformat/avformat.h"
#include "include/log.h"
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
#include <libswresample/swresample.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
}
// function declaration
jint playPcmBySL(JNIEnv *env, jstring pcm_path);
extern "C"
JNIEXPORT jint JNICALL
Java_android_spport_mylibrary2_Demo_decodeAudio(JNIEnv *env, jobject thiz, jstring video_path,
jstring pcm_path) {
....
// after the audio has been decoded, call the function that plays the PCM via OpenSL ES
playPcmBySL(env,pcm_path);
}
// engine interfaces
static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine;
// output mix interfaces
static SLObjectItf outputMixObject = NULL;
static SLEnvironmentalReverbItf outputMixEnvironmentalReverb = NULL;
static SLObjectItf pcmPlayerObject = NULL;
static SLPlayItf pcmPlayerPlay;
static SLAndroidSimpleBufferQueueItf pcmBufferQueue;
FILE *pcmFile;
void *buffer;
uint8_t *out_buffer;
// aux effect on the output mix, used by the buffer queue player
static const SLEnvironmentalReverbSettings reverbSettings = SL_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR;
// playback callback
void playerCallback(SLAndroidSimpleBufferQueueItf bufferQueueItf, void *context) {
if (bufferQueueItf != pcmBufferQueue) {
LOGE("SLAndroidSimpleBufferQueueItf is not equal");
return;
}
buffer = NULL; // reset the pointer so the last chunk is not re-enqueued once the whole file has been read
while (!feof(pcmFile)) {
size_t size = fread(out_buffer, 44100 * 2 * 2, 1, pcmFile);
if (out_buffer == NULL || size == 0) {
LOGI("read end %ld", size);
} else {
LOGI("reading %ld", size);
}
buffer = out_buffer;
break;
}
if (buffer != NULL) {
LOGI("buffer is not null");
SLresult result = (*pcmBufferQueue)->Enqueue(pcmBufferQueue, buffer, 44100 * 2 * 2);
if (SL_RESULT_SUCCESS != result) {
LOGE("pcmBufferQueue error %d",result);
}
}
}
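// How the streaming loop works: Enqueue hands one buffer to the player; once that
// buffer has been consumed, the registered callback fires again, reads the next
// chunk from the file and enqueues it, and so on until the end of the file.
// 44100 * 2 * 2 bytes corresponds to one second of 44.1 kHz, 2-channel, 16-bit PCM.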
jint playPcmBySL(JNIEnv *env, jstring pcm_path) {
const char *pcmPath = env->GetStringUTFChars(pcm_path, NULL);
pcmFile = fopen(pcmPath, "r");
if (pcmFile == NULL) {
LOGE("open pcmfile error");
return -1;
}
out_buffer = (uint8_t *) malloc(44100 * 2 * 2);
//1. create the engine
//1.1 create the engine object
SLresult result = slCreateEngine(&engineObject, 0, 0, 0, 0, 0);
if (SL_RESULT_SUCCESS != result) {
LOGE("slCreateEngine error %d", result);
return -1;
}
//1.2 realize the engine
result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
if (SL_RESULT_SUCCESS != result) {
LOGE("Realize engineObject error");
return -1;
}
//1.3 get the engine interface SLEngineItf
result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
if (SL_RESULT_SUCCESS != result) {
LOGE("GetInterface SLEngineItf error");
return -1;
}
//after obtaining the SLEngineItf interface, it is used below to create the output mix and the player
//2. create the output mix
const SLInterfaceID ids[1] = {SL_IID_ENVIRONMENTALREVERB};
const SLboolean req[1] = {SL_BOOLEAN_FALSE};
//2.1 create the output mix object
result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, ids, req);
if (SL_RESULT_SUCCESS != result) {
LOGE("CreateOutputMix error");
return -1;
}
//2.2 realize the output mix
result = (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);
if (SL_RESULT_SUCCESS != result) {
LOGE("outputMixObject Realize error");
return -1;
}
//2.3 get the environmental reverb interface SLEnvironmentalReverbItf
result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
&outputMixEnvironmentalReverb);
if (SL_RESULT_SUCCESS == result) {
result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
outputMixEnvironmentalReverb, &reverbSettings);
}
//3. set the input and output data sources
//setSLData();
//3.1 set the input: SLDataSource
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,2};
SLDataFormat_PCM formatPcm = {
SL_DATAFORMAT_PCM,//play PCM-format data
2,//2 channels (stereo)
SL_SAMPLINGRATE_44_1,//44100 Hz sample rate
SL_PCMSAMPLEFORMAT_FIXED_16,//16 bits per sample
SL_PCMSAMPLEFORMAT_FIXED_16,//container size, same as the sample bits
SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,//stereo (front left | front right)
SL_BYTEORDER_LITTLEENDIAN//little-endian byte order
};
SLDataSource slDataSource = {&loc_bufq, &formatPcm};
//3.2 set the output: SLDataSink
SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};
//4. create the audio player
//4.1 create the audio player object
const SLInterfaceID ids2[1] = {SL_IID_BUFFERQUEUE};
const SLboolean req2[1] = {SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &pcmPlayerObject, &slDataSource, &audioSnk,
1, ids2, req2);
if (SL_RESULT_SUCCESS != result) {
LOGE(" CreateAudioPlayer error");
return -1;
}
//4.2 realize the audio player object
result = (*pcmPlayerObject)->Realize(pcmPlayerObject, SL_BOOLEAN_FALSE);
if (SL_RESULT_SUCCESS != result) {
LOGE(" pcmPlayerObject Realize error");
return -1;
}
//4.3 get the audio player interface
result = (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_PLAY, &pcmPlayerPlay);
if (SL_RESULT_SUCCESS != result) {
LOGE(" SLPlayItf GetInterface error");
return -1;
}
//5. register the player buffer callback: RegisterCallback
//5.1 get the playback buffer queue interface SLAndroidSimpleBufferQueueItf
result = (*pcmPlayerObject)->GetInterface(pcmPlayerObject, SL_IID_BUFFERQUEUE, &pcmBufferQueue);
if (SL_RESULT_SUCCESS != result) {
LOGE(" SLAndroidSimpleBufferQueueItf GetInterface error");
return -1;
}
//5.2 register the callback: RegisterCallback
result = (*pcmBufferQueue)->RegisterCallback(pcmBufferQueue, playerCallback, NULL);
if (SL_RESULT_SUCCESS != result) {
LOGE(" SLAndroidSimpleBufferQueueItf RegisterCallback error");
return -1;
}
//6. set the play state to SL_PLAYSTATE_PLAYING
result = (*pcmPlayerPlay)->SetPlayState(pcmPlayerPlay, SL_PLAYSTATE_PLAYING);
if (SL_RESULT_SUCCESS != result) {
LOGE(" SetPlayState error");
return -1;
}
//7. trigger the first callback manually to start feeding data
playerCallback(pcmBufferQueue,NULL);
return 0;
}
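The listing above stops at triggering the first callback and omits step 7 of section II, releasing resources. A minimal teardown sketch to be called once playback is finished, reusing the global names from the code above (releaseSLResources itself is a hypothetical helper, not part of the original code):
void releaseSLResources() {
    // stop playback and destroy objects in the reverse order of creation
    if (pcmPlayerPlay != NULL) {
        (*pcmPlayerPlay)->SetPlayState(pcmPlayerPlay, SL_PLAYSTATE_STOPPED);
    }
    if (pcmPlayerObject != NULL) {
        (*pcmPlayerObject)->Destroy(pcmPlayerObject);
        pcmPlayerObject = NULL;
        pcmPlayerPlay = NULL;
        pcmBufferQueue = NULL;
    }
    if (outputMixObject != NULL) {
        (*outputMixObject)->Destroy(outputMixObject);
        outputMixObject = NULL;
        outputMixEnvironmentalReverb = NULL;
    }
    if (engineObject != NULL) {
        (*engineObject)->Destroy(engineObject);
        engineObject = NULL;
        engineEngine = NULL;
    }
    if (out_buffer != NULL) {
        free(out_buffer);
        out_buffer = NULL;
    }
    if (pcmFile != NULL) {
        fclose(pcmFile);
        pcmFile = NULL;
    }
}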
OpenSL ES offers many more features, such as mixing, setting the volume, recording, and playing audio from a URL or from assets. See the official documentation and the NDK demo for details.
That is as far as this article's learning and practice go. The more you learn, the more excellent people you discover around you and the more there is to learn; focus on one core pain point at a time, and keep learning and practicing.
The code has been uploaded to GitHub: github.com/ayyb1988/ff… Welcome to exchange ideas, learn, and grow together.
IV. Problems encountered
Problem 1: after obtaining the environmental reverb interface, the result of SetEnvironmentalReverbProperties is not 0 (SL_RESULT_SUCCESS), so the error branch was always triggered here. Solution: remove the result check on this call; the official demo gets the same return value, 16.
result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
&outputMixEnvironmentalReverb);
if (SL_RESULT_SUCCESS == result) {
result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
outputMixEnvironmentalReverb, &reverbSettings);
if (SL_RESULT_SUCCESS != result) {
LOGE(" SetEnvironmentalReverbProperties error");
return -1;
}
}
was changed to the following:
result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
outputMixEnvironmentalReverb, &reverbSettings);
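An alternative, just a sketch and not what the final code in this article does, is to keep the check but only log and continue, since the reverb effect is optional for plain PCM playback (this reuses the LOGI macro from the listing above):
result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
        outputMixEnvironmentalReverb, &reverbSettings);
if (SL_RESULT_SUCCESS != result) {
    // not fatal: continue without the reverb effect
    LOGI("SetEnvironmentalReverbProperties returned %d, continuing without reverb", result);
}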
Problem 2: the created player object was always null, so playback failed.
Cause: when setting the SLDataSource, SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE was mistakenly written as SL_DATALOCATOR_ANDROIDBUFFERQUEUE.
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDBUFFERQUEUE, 2};
was changed to:
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
Problem 3: the audio stutters and keeps repeating during playback.
while (!feof(pcmFile)) {
size_t size = fread(out_buffer, 44100 * 2 * 2, 1, pcmFile);
if (out_buffer == NULL || size == 0) {
LOGI("read end %ld", size);
} else {
LOGI("reading %ld", size);
}
buffer = out_buffer;
// the cause: I had forgotten to break out of the loop here
}
A small mistake in the early stages of learning can cost a couple of hours. Work through problems step by step, pay attention to the details, and compare against a working demo to narrow down where the issue is; the root cause here was a lack of care and understanding.
V. References
- OpenSL ES official documentation
- NDK guide: OpenSL ES
- NDK guide demo: Native Audio, a simple audio recorder/player
- AudioTrack and OpenSL ES audio rendering
- FFmpeg development (03): FFmpeg + OpenSL ES audio decoding and playback
- Playing PCM data with OpenSL ES on the Android platform
- A detailed walkthrough of playing audio with OpenSL ES on Android
VI. Takeaways
- Learned the basics of OpenSL ES and the flow for playing audio data
- Implemented OpenSL ES playback of an audio stream in code
- Played FFmpeg-decoded audio data through OpenSL ES
- Solved the problems encountered along the way
Thank you for reading
Having practiced video decoding as well as audio decoding and playback, next we will use OpenGL ES to render decoded video. Welcome to follow the official account "Audio and Video Development Journey" and learn and grow together.
Welcome to exchange ideas.