In this chapter we record video with MediaCodec. FFmpeg software encoding is inefficient, so we use MediaCodec hardware encoding instead.
MediaCodec
MediaCodec is the codec API introduced in Android 4.1 (API 16). It is very simple to use: it has an input buffer and an output buffer. When encoding, we feed data into the input buffer and read the encoded data back from the output buffer.
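For reference, a minimal sketch of that buffer path (API 21+ signatures; mediaCodec, inputData, and presentationTimeUs are assumed to already exist and are not from this article's code):

int inIndex = mediaCodec.dequeueInputBuffer(10_000);
if (inIndex >= 0) {
    // Fill one raw frame into the input buffer
    ByteBuffer inputBuffer = mediaCodec.getInputBuffer(inIndex);
    inputBuffer.clear();
    inputBuffer.put(inputData);
    mediaCodec.queueInputBuffer(inIndex, 0, inputData.length, presentationTimeUs, 0);
}
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = mediaCodec.dequeueOutputBuffer(info, 10_000);
if (outIndex >= 0) {
    // The encoded data comes out of the output buffer
    ByteBuffer outputBuffer = mediaCodec.getOutputBuffer(outIndex);
    // ... consume outputBuffer here ...
    mediaCodec.releaseOutputBuffer(outIndex, false);
}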
In addition to directly manipulating the input buffer, there is another way to hand MediaCodec the data to be encoded:
public native final Surface createInputSurface();
With this interface we create a Surface and then "draw" onto it. MediaCodec automatically encodes whatever is drawn into that Surface; we simply retrieve the encoded data from the output buffer.
Previously we used OpenGL to draw to the screen, but copying the screen image back into CPU memory is not easy. So we simply draw the same OpenGL image a second time, into the Surface returned by MediaCodec#createInputSurface.
A PBO (Pixel Buffer Object) could copy the screen's pixel data to CPU memory at high speed via DMA (Direct Memory Access), but using createInputSurface directly is much easier…
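For comparison, the PBO readback path would look roughly like the following sketch (GLES 3.0; not used in this chapter, and width/height are assumed local variables):

// Allocate a PBO sized for one RGBA frame
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null, GLES30.GL_STREAM_READ);
// With a PBO bound, glReadPixels returns immediately; the copy runs via DMA
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
// Later, map the buffer to reach the pixels in CPU memory
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GLES30.GL_MAP_READ_BIT);
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);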
We record on a separate thread (recording happens live), so the recording EGL environment and the display EGL environment (GLSurfaceView, the display thread) are two independent working environments that share context resources: textures and other resources created on the display thread must be usable on the recording thread, which draws them into MediaCodec's Surface with OpenGL.
On this recording thread we have to do the following ourselves:
1. Configure the EGL environment for recording (refer to GLSurfaceView)
2. Draw the displayed image into MediaCodec's Surface
3. Encode (H.264) and mux (package into MP4)
Video recording
Handle the record button callbacks:
recordButton.setOnRecordListener(new RecordButton.OnRecordListener() {
    @Override
    public void onRecordStart() {
        douyinView.startRecord();
    }

    @Override
    public void onRecordStop() {
        douyinView.stopRecord();
    }
});
DouyinView then starts and stops recording through the renderer:
public void startRecord(float speed) {
    try {
        mMediaRecorder.start(speed);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public void stopRecord() {
    mMediaRecorder.stop();
}
Since we are not inside GLSurfaceView's EGL environment, we need to create our own, so that OpenGL can both display the image on screen and draw it into the Surface from MediaCodec#createInputSurface.
Create a MediaRecorder class that wraps the encoder (frame rate and bit rate could also be made parameters):
/**
 * @param context
 * @param path    where the recorded video is saved
 * @param width   video width
 * @param height  video height; frame rate, bit rate, etc. could also be passed in
 */
public MediaRecorder(Context context, String path, int width, int height, EGLContext eglContext) {
    mContext = context.getApplicationContext();
    mPath = path;
    mWidth = width;
    mHeight = height;
    mEglContext = eglContext;
}
Configure the encoder parameters; here the bit rate and frame rate are simply hard-coded.
/**
 * Start recording video
 */
public void start(float speed) throws IOException {
    mSpeed = speed;
    // Configure the MediaCodec encoder
    // Video format: type (AVC, i.e. H.264) plus the output width and height
    MediaFormat mediaFormat = MediaFormat.createVideoFormat(
            MediaFormat.MIMETYPE_VIDEO_AVC, mWidth, mHeight);
    // Parameter configuration
    // Bit rate: 1500 kbps
    mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 1500_000);
    // Frame rate
    mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 20);
    // Keyframe interval
    mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 20);
    // Color format (RGB\YUV): the data comes from a Surface
    mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    // Create the encoder
    mMediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    // Hand the parameters to the encoder
    mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // Render the preview texture to the virtual screen with OpenGL;
    // MediaCodec automatically encodes the images drawn into the input Surface
    mInputSurface = mMediaCodec.createInputSurface();
    ...
}
This creates the input Surface: we draw into it, and MediaCodec reads the frames from it.
After encoding we also have to mux the output. The EGL work for the virtual screen happens on a thread of its own: we create a HandlerThread, take its Looper, and post work to it through a Handler. That Handler is how the GLThread communicates with the child thread we created.
/**
 * Start recording video
 */
public void start(float speed) throws IOException {
    // H.264
    // Playback is the reverse: MP4 -> demux -> decode -> draw
    // Muxer: wraps the H.264 stream into an MP4 file
    mMediaMuxer = new MediaMuxer(mPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // The EGL environment must live on a thread of its own; a Handler on that
    // thread's Looper lets other threads post work to it (Looper.loop())
    HandlerThread handlerThread = new HandlerThread("VideoCodec");
    handlerThread.start();
    Looper looper = handlerThread.getLooper();
    // Used by other threads to notify the recording thread
    mHandler = new Handler(looper);
    // Child thread: the thread EGL is bound to; all OpenGL operations run here
    mHandler.post(new Runnable() {
        @Override
        public void run() {
            // Create the EGL environment for this child thread
            mEglBase = new EGLBase(mContext, mWidth, mHeight, mInputSurface, mEglContext);
            // Start the encoder
            mMediaCodec.start();
            isStart = true;
        }
    });
}
Create an EGL work environment
- Create EGLContext
- Create a mEglSurface for drawing
- Double-buffered drawing: mEglSurface + mEglDisplay swap buffers alternately
Create an EGLBase class that records the EGL environment configuration needed by the OpenGL operations; pass in the width, height, and Surface, following GLSurfaceView's configuration process.
As the code below shows, creating mEglContext requires mEglDisplay and mEglConfig: mEglContext = EGL14.eglCreateContext(mEglDisplay, mEglConfig, eglContext, ctx_attrib_list, 0);
private void createEGL(EGLContext eglContext) {
    // Create a virtual display
    mEglDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    if (mEglDisplay == EGL14.EGL_NO_DISPLAY) {
        throw new RuntimeException("eglGetDisplay failed");
    }
    // Initialize the display
    int[] version = new int[2];
    // major: recorded in version[0]
    // minor: recorded in version[1]
    if (!EGL14.eglInitialize(mEglDisplay, version, 0, version, 1)) {
        throw new RuntimeException("eglInitialize failed");
    }
    // EGL selects a configuration based on the attributes we request
    int[] attrib_list = {
            EGL14.EGL_RED_SIZE, 8, // bits of the red component in the buffer
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_ALPHA_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, // OpenGL ES 2
            EGL14.EGL_NONE
    };
    EGLConfig[] configs = new EGLConfig[1];
    int[] num_config = new int[1];
    // attrib_list: the attribute list plus its starting offset
    // configs: the chosen configuration (an output parameter)
    // num_config: same length as configs
    if (!EGL14.eglChooseConfig(mEglDisplay, attrib_list, 0,
            configs, 0, configs.length, num_config, 0)) {
        throw new IllegalArgumentException("eglChooseConfig#2 failed");
    }
    mEglConfig = configs[0];
    int[] ctx_attrib_list = {
            EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, // OpenGL ES 2
            EGL14.EGL_NONE
    };
    // Create the EGL context
    // The third argument, share_context, is the GLThread's EGL context, so resources are shared
    mEglContext = EGL14.eglCreateContext(mEglDisplay, mEglConfig, eglContext, ctx_attrib_list, 0);
    // Creation failed
    if (mEglContext == EGL14.EGL_NO_CONTEXT) {
        throw new RuntimeException("EGL Context Error.");
    }
}
Once the EGLContext is created, bind the Surface to the display by creating a window surface to draw into:
// The image is drawn onto mEglSurface
int[] attrib_list = { EGL14.EGL_NONE };
mEglSurface = EGL14.eglCreateWindowSurface(mEglDisplay, mEglConfig, surface, attrib_list, 0);
Bind and draw to the virtual screen:
// Bind the display and context to the current thread; subsequent OpenGL
// operations then target this virtual display
if (!EGL14.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
    throw new RuntimeException("EglMakeCurrent failed!");
}
// Draw onto the virtual screen
mScreenFilter = new ScreenFiliter(context);
mScreenFilter.onReady(width, height);
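Putting the fragments together, the EGLBase constructor presumably looks roughly like the following sketch (assembled from the snippets above; only names shown there are used):

public EGLBase(Context context, int width, int height, Surface surface, EGLContext eglContext) {
    // 1. Display + config + shared context
    createEGL(eglContext);
    // 2. Bind MediaCodec's input Surface as the window surface we draw into
    int[] attrib_list = { EGL14.EGL_NONE };
    mEglSurface = EGL14.eglCreateWindowSurface(mEglDisplay, mEglConfig, surface, attrib_list, 0);
    // 3. Make the context current on this (recording) thread
    if (!EGL14.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
        throw new RuntimeException("EglMakeCurrent failed!");
    }
    // 4. The filter that draws the shared texture onto the virtual screen
    mScreenFilter = new ScreenFiliter(context);
    mScreenFilter.onReady(width, height);
}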
Double-buffered drawing: eglSwapBuffers on mEglDisplay + mEglSurface alternates the two frame buffers.
public void draw(int textureId, long timestamp) {
    // Bind the display and context to the current thread; OpenGL operations
    // then target this virtual display
    if (!EGL14.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
        throw new RuntimeException("EglMakeCurrent failed!");
    }
    // Draw
    mScreenFilter.onDrawFrame(textureId);
    // Refresh the eglSurface's timestamp
    EGLExt.eglPresentationTimeANDROID(mEglDisplay, mEglSurface, timestamp);
    // Swap the buffers:
    // EGL works in double-buffered mode with two frame buffers (fb);
    // while EGL displays one fb on screen, the other waits in the background
    // for OpenGL to draw into, and then they swap
    EGL14.eglSwapBuffers(mEglDisplay, mEglSurface);
}
With the shared EGLContext in hand, encoding runs on the child thread that owns the EGL environment we created. EGLBase receives the video width, height, and Surface, and the render thread's EGLContext is passed in as share_context.
mHandler.post(new Runnable() {
    @Override
    public void run() {
        // Create the EGL environment for this child thread
        mEglBase = new EGLBase(mContext, mWidth, mHeight, mInputSurface, mEglContext);
        // Start the encoder
        mMediaCodec.start();
        isStart = true;
    }
});

/**
 * Create the renderer
 * @param gl
 * @param config
 */
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    ...
    // Note: the filters must be created on the GL thread
    mCameraFiliter = new CameraFilter(mDouyinView.getContext());
    mScreenFiliter = new ScreenFiliter(mDouyinView.getContext());
    // The render thread's context, handed to our own EGL environment as share_context
    EGLContext eglContext = EGL14.eglGetCurrentContext();
    mMediaRecorder = new MediaRecorder(mDouyinView.getContext(), "/sdcard/a.mp4",
            CameraHelper.HEIGHT, CameraHelper.WIDTH, eglContext);
}
Draw, encode and read output
Start encoding on the child thread:
// Render the preview texture to the virtual screen with OpenGL;
// MediaCodec automatically encodes the images drawn into the input Surface
mInputSurface = mMediaCodec.createInputSurface();
...
mHandler.post(new Runnable() {
    @Override
    public void run() {
        // Create the child-thread EGL environment that puts the preview image
        // onto the virtual display
        mEglBase = new EGLBase(mContext, mWidth, mHeight, mInputSurface, mEglContext);
        // Start the encoder
        mMediaCodec.start();
        isStart = true;
    }
});
Once mMediaCodec.start() is called, the encoder pulls frames from mInputSurface; mEglBase writes the image into mInputSurface in its draw method.
public void encodeFrame(final int textureId, final long timestamp) {
    if (!isStart) {
        return;
    }
    mHandler.post(new Runnable() {
        @Override
        public void run() {
            // Draw the image onto the virtual screen
            mEglBase.draw(textureId, timestamp);
            // Fetch the encoded data from the encoder's output buffer
            getCodec(false);
        }
    });
}
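For context, the renderer would call encodeFrame once per frame from onDrawFrame, along the lines of this sketch (mSurfaceTexture and mTextureId are assumed names, not from the article's snippets):

@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();
    // ... draw the camera texture to the screen with the filters as usual ...
    // getTimestamp() returns the frame time in nanoseconds, which is exactly
    // what eglPresentationTimeANDROID expects in EGLBase.draw()
    mMediaRecorder.encodeFrame(mTextureId, mSurfaceTexture.getTimestamp());
}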
Finally, take the encoded data from the output buffer and hand it to mMediaMuxer, which writes the MP4 file at the given path.
/**
 * Fetch the encoded data
 *
 * @param endOfStream marks whether recording has ended
 */
private void getCodec(boolean endOfStream) {
    // If recording has ended, give MediaCodec the end-of-stream flag
    if (endOfStream) {
        mMediaCodec.signalEndOfInputStream();
    }
    // Output buffer info
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    // Drain all the encoded data and write it into the MP4 file
    while (true) {
        // Wait up to 10 ms
        int status = mMediaCodec.dequeueOutputBuffer(bufferInfo, 10_000);
        // Either the encoder needs more input data, or it needs more time to finish encoding
        if (status == MediaCodec.INFO_TRY_AGAIN_LATER) {
            // If we are stopping, keep looping: retrying guarantees that everything
            // still inside MediaCodec gets encoded and drained.
            // If we are not stopping, just exit; the next call will fetch more output.
            if (!endOfStream) {
                // Waiting here any longer would block recording; this method
                // will be called again for the next frame anyway
                break;
            }
            // Otherwise continue the loop
        } else if (status == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // Happens once, when encoding starts
            MediaFormat outputFormat = mMediaCodec.getOutputFormat();
            // Configure the muxer: add a video track with the given format
            index = mMediaMuxer.addTrack(outputFormat);
            mMediaMuxer.start();
        } else if (status == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            // Ignore
        } else {
            // Successfully dequeued a valid output buffer
            ByteBuffer outputBuffer = mMediaCodec.getOutputBuffer(status);
            // If this buffer holds codec configuration data, it must not be written to the MP4
            if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                bufferInfo.size = 0;
            }
            if (bufferInfo.size != 0) {
                // Adjust the timestamp for the playback speed
                bufferInfo.presentationTimeUs = (long) (bufferInfo.presentationTimeUs / mSpeed);
                // Write to the MP4:
                // position the ByteBuffer at the offset
                outputBuffer.position(bufferInfo.offset);
                // limit is the total length for reads and writes
                outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                mMediaMuxer.writeSampleData(index, outputBuffer, bufferInfo);
            }
            // We are done with this output buffer; release it so MediaCodec can reuse it
            mMediaCodec.releaseOutputBuffer(status, false);
            // End of stream reached
            if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break;
            }
        }
    }
}
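The stop() called from stopRecord() earlier is not shown in this section; a plausible sketch, matching start() above (EGLBase.release() is an assumed helper that frees the EGL surface and context), is:

/**
 * A sketch of stop(): signal end-of-stream, drain the encoder,
 * then release everything on the same recording thread.
 */
public void stop() {
    isStart = false;
    mHandler.post(new Runnable() {
        @Override
        public void run() {
            // Drain the remaining output with the end-of-stream flag set
            getCodec(true);
            mMediaCodec.stop();
            mMediaCodec.release();
            mMediaCodec = null;
            mMediaMuxer.stop();
            mMediaMuxer.release();
            mMediaMuxer = null;
            mEglBase.release(); // assumed helper: frees mEglSurface/mEglContext
            mEglBase = null;
            mInputSurface = null;
            mHandler.getLooper().quitSafely();
            mHandler = null;
        }
    });
}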