This is an original article; please contact the author before reprinting.

A warbler sings in the deep green of spring; green apricots, small as hairpin beads, cluster on the bough. Wind stirs the jade cup, flushing the wine red. The song fades — when shall we meet again?

MediaLearn

Welcome to my project MediaLearn, built for learning and sharing audio and video knowledge. It currently covers only the Android platform and will be expanded gradually. If you are interested in audio and video development, you are welcome to learn together!

Foreword

This is the third article in the MediaCodec series, covering how to use MediaCodec to encode a set of images into a video file. MediaCodec is a very useful API for handling multimedia on Android. The image set used in this experiment is the one produced in the previous article, where MediaCodec hard-decoded a video and saved the video frames as image files; it contains 332 image frames in total. If you are interested in video decoding with MediaCodec, you can also read that article: MediaCodec video decoding with specified frames, fast and accurate.

The core process

The normal MediaCodec workflow is: dequeue an available input buffer and fill it with data; dequeue an available output buffer and take the data out; repeat until the end of the stream. In general, filling and draining are not one-to-one — queuing one frame of input does not immediately yield one frame of output (except, of course, when every frame of the encoded video is a key frame). The input/output buffer loop is usually written as follows:

```java
for (;;) {
    // Get the id of an available InputBuffer
    int inputBufferId = codec.dequeueInputBuffer(timeoutUs);
    if (inputBufferId >= 0) {
        ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
        // fill inputBuffer with data
        codec.queueInputBuffer(inputBufferId, ...);
    }
    // Query whether an OutputBuffer is available
    int outputBufferId = codec.dequeueOutputBuffer(...);
    ...
}
```

The core encoding flow in this article is similar to the code above, except that the input side is different: instead of filling InputBuffers, a Surface is used to feed data into the encoder.

Why use Surface

The official MediaCodec documentation describes the data types a codec accepts:

A codec accepts three types of data: compressed data, raw audio data, and raw video data. All three types can be processed using ByteBuffers, but for raw video data you should use a Surface to improve codec performance.

In this project, the Surface is created with MediaCodec's createInputSurface function, and OpenGL is used to feed data into it. Here is a simple flow chart I drew:

Key APIs

In the code, MediaCodec is only responsible for producing encoded frame data; the class that actually generates the MP4 file is MediaMuxer. Overall, the main APIs involved in the project are:

  • MediaCodec — encodes each image into frame data
  • MediaMuxer — muxes the frame data into an Mp4 file
  • OpenGL — draws each image onto the Surface

Next, I will explain each step in detail, in the order of the workflow:

The process

Before diving into the process, note one important point: every part of the workflow must run on the same thread.
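How you pin the work to one thread is up to you; here is a minimal sketch (my own illustration, not code from the project) using a plain worker thread:

```kotlin
import kotlin.concurrent.thread

// A minimal sketch, not code from the project: one dedicated worker
// thread owns the MediaCodec, EGL context and MediaMuxer for their
// entire lifetime, so no state ever crosses threads.
thread(name = "encode-thread") {
    // 1. configure MediaCodec and the EGL environment here
    // 2. run the encode loop here
    // 3. release the codec, muxer and EGL resources here
}
```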

Configuration

First, start the worker thread and configure MediaCodec:

```kotlin
var codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
// Configure MediaFormat: color format, bit rate, frame rate, key-frame interval.
// The color format defaults to MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface.
var mediaFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, size.width, size.height)
        .apply {
            setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat)
            setInteger(MediaFormat.KEY_BIT_RATE, bitRate)
            setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iFrameInterval)
        }
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
var inputSurface = codec.createInputSurface()
codec.start()
```

After configuring the encoder, set up OpenGL's EGL environment and GPU program. OpenGL involves a lot of background knowledge, so I won't repeat it here; for convenience, the EGL setup and GPU program construction are encapsulated in the GLEncodeCore class — take a look if you are interested. When initializing the EGL environment, there are two ways to attach a render target. One is eglCreatePbufferSurface, which creates an off-screen surface. The other is eglCreateWindowSurface, which creates a window surface that can actually be displayed and requires a Surface parameter — since we have the encoder's input Surface, this is clearly the function to choose.

```kotlin
var encodeCore = GLEncodeCore(...)
encodeCore.buildEGLSurface(inputSurface)

fun buildEGLSurface(surface: Surface) {
    // Set up the EGL environment and bind the encoder's input Surface
    eglEnv.setUpEnv().buildWindowSurface(surface)
    // GPU program build
    encodeProgram.build()
}
```
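For reference, here is a rough sketch of what a buildWindowSurface like this typically wraps in an EGL14-based setup (the field names eglDisplay, eglConfig and eglContext are illustrative assumptions, not taken from the project):

```kotlin
import android.opengl.EGL14
import android.opengl.EGLSurface
import android.view.Surface

// A sketch under assumed names; the real implementation lives in GLEncodeCore.
fun buildWindowSurface(surface: Surface): EGLSurface {
    val attribs = intArrayOf(EGL14.EGL_NONE)
    // Wrap the encoder's input Surface in an EGL window surface...
    val eglSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig, surface, attribs, 0)
    // ...and make it current so subsequent GL draws land on it
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext)
    return eglSurface
}
```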

Feeding in image data and starting to encode

After the various APIs are configured, start a loop that reads each Bitmap from the files and feeds it into the encoder.

```kotlin
val videoEncoder = VideoEncoder(640, 480, 1800000, 24)
videoEncoder.start(Environment.getExternalStorageDirectory().path
        + "/encodeyazi640${videoEncoder.bitRate}.mp4")
val file = File(/* gallery folder path */)
file.listFiles().forEachIndexed { index, it ->
    BitmapFactory.decodeFile(it.path)?.apply {
        videoEncoder.drainFrame(this, index)
    }
}
videoEncoder.drainEnd()
```

As mentioned in the foreword, the image set used in this encoding project is the 332 images that MediaCodec hard-decoded from a video and stored as image files. In the loop, each Bitmap is passed into drainFrame(...) for encoding. When all frames have been encoded, drainEnd notifies the encoder that encoding is complete.
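For context, with Surface input the end of stream cannot be signaled by queuing an empty input buffer; it goes through the codec itself. A plausible sketch of what drainEnd does (my assumption; the project's implementation may differ):

```kotlin
fun drainEnd() {
    // With a Surface as input, EOS is signaled on the codec directly
    codec.signalEndOfInputStream()
    // Drain until the BUFFER_FLAG_END_OF_STREAM flag shows up
    drainCoder(true)
    mediaMuxer?.stop()
    mediaMuxer?.release()
    codec.stop()
    codec.release()
}
```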

Video frame encoding

Now let's look at the concrete implementation of drainFrame(...):

```kotlin
/**
 * @b : bitmap to draw onto the texture
 * @presentTime : presentation time of this frame
 */
fun drainFrame(b: Bitmap, presentTime: Long) {
    encodeCore.drainFrame(b, presentTime)
    drainCoder(false)
}

fun drainFrame(b: Bitmap, index: Int) {
    drainFrame(b, index * mediaFormat.perFrameTime * 1000)
}

fun drainCoder(...) {
    // Pseudo-code: drain MediaCodec's output queue and let
    // MediaMuxer write the encoded data into the Mp4 file
}
```
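perFrameTime is presumably a helper that derives each frame's duration from the configured frame rate, so that frame index × duration gives the timestamp. A sketch of what it might look like (an assumption, not the project's code):

```kotlin
// Assumed helper: microseconds each frame occupies at the configured
// frame rate. The `* 1000` above would then convert it to nanoseconds,
// which is what EGL's presentation-time call expects.
val MediaFormat.perFrameTime: Long
    get() = 1_000_000L / getInteger(MediaFormat.KEY_FRAME_RATE)
```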

OpenGL first draws the Bitmap onto a texture, which delivers the data to the Surface; the timestamp the Bitmap represents must be passed in as well. Once the data is in, drainCoder reads the output data from MediaCodec and uses MediaMuxer to write it into the Mp4 file. The drainCoder function is implemented as follows:

```kotlin
loopOut@ while (true) {
    // Query whether output data is available
    val outputBufferId = dequeueOutputBuffer(bufferInfo, defTimeOut)
    Log.d("handleOutputBuffer", "output buffer id : $outputBufferId")
    if (outputBufferId == MediaCodec.INFO_TRY_AGAIN_LATER) {
        if (needEnd) {
            // No more output; leave the loop
            break@loopOut
        }
    } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        // The output format has changed; start MediaMuxer here
    } else if (outputBufferId >= 0) {
        // Fetch the corresponding output data
        if (bufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) {
            break@loopOut
        }
    }
}
```

As mentioned earlier, queuing one frame of input does not immediately yield one frame of output. After OpenGL has drawn the Bitmap onto the texture and delivered it to the Surface, the output has to be polled from MediaCodec in this endless loop. It is also in this loop, when the output format changes, that we add a video track to MediaMuxer and start it:

```kotlin
trackIndex = mediaMuxer!!.addTrack(codec.outputFormat)
mediaMuxer!!.start()
```
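For completeness, when outputBufferId >= 0 the encoded sample is typically written to the muxer along these lines (a sketch; the project's details may differ):

```kotlin
val outputBuffer = codec.getOutputBuffer(outputBufferId)!!
// Skip codec-config data; MediaMuxer gets it via the output format instead
if (bufferInfo.size > 0 && bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG == 0) {
    mediaMuxer!!.writeSampleData(trackIndex, outputBuffer, bufferInfo)
}
codec.releaseOutputBuffer(outputBufferId, false)
```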

The overall workflow is the code above: pass a frame of data to the Surface -> loop on MediaCodec to drain the output -> MediaMuxer writes the Mp4 file. The last two steps should be fairly clear by now; only the first step is tricky, and it is the point that puzzled me at the time. Next, we'll explain how the Bitmap data is transferred to the Surface using OpenGL.

Bitmap -> Surface

In the project, Bitmap data is delivered to the Surface mainly by this code:

```kotlin
fun drainFrame(b: Bitmap, presentTime: Long) {
    // Render one frame onto the texture
    encodeProgram.renderBitmap(b)
    // Set the timestamp for this frame
    eglEnv.setPresentationTime(presentTime)
    // Submit the render result
    eglEnv.swapBuffers()
}
```

encodeProgram is the GPU drawing program. Internally it generates a texture and draws the Bitmap onto it, so that the texture now represents the image; the texture is then drawn onto the window surface. After that, EGL's swapBuffers submits the current render result, and just before that submission, setPresentationTime attaches the timestamp the current frame represents.
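For reference, in an EGL14-based environment these two calls typically wrap the following platform APIs (a sketch with assumed eglDisplay/eglSurface fields, not the project's exact code):

```kotlin
import android.opengl.EGL14
import android.opengl.EGLExt

// Sketch under assumed field names; see GLEncodeCore for the real thing.
fun setPresentationTime(nanos: Long) {
    // Attach a presentation timestamp (in nanoseconds) to the next swap
    EGLExt.eglPresentationTimeANDROID(eglDisplay, eglSurface, nanos)
}

fun swapBuffers(): Boolean =
    // Hand the rendered frame over to the encoder's input Surface
    EGL14.eglSwapBuffers(eglDisplay, eglSurface)
```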

The more specific code implementation is in my GitHub project; see GLEncodeCore and EncodeProgram for the GPU program and EGL environment construction.

Conclusion

Here is the project address; click through to see the full source.