PS: Do what you want to do, at least a little every day; it does not have to be much, and change will come slowly.
To learn about audio and video, you can read the following two articles first:
- Fundamentals of audio and video development
- Audio frame, video frame and its synchronization
The main content of this article is recording an MP4 video file with the Android native hardware codec framework MediaCodec and the multiplexer MediaMuxer, with the video data supplied by Camera2. The focus here is the encoding and muxing process rather than MP4 recording itself; if you only need to record video, the more convenient MediaRecorder is the better choice. As usual, MediaCodec is learned here through a practical case, and more of its usage will be introduced in subsequent articles. The main content of this article is as follows:
- The use of Camera2
- MediaCodec input mode
- MediaCodec encodes Camera2 data
- The recording process
- Recording effect
The use of Camera2
Camera2 is the camera API introduced in Android 5.0 (API 21). The newest camera library is CameraX, which is built on top of Camera2 and provides a friendlier API than Camera2. The Camera2 APIs used in this article, and the overall way Camera2 is used, are summarized in the following diagram:
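As a complement to that diagram, here is a minimal sketch of the first step of the typical Camera2 flow (open the camera, then create a capture session and issue capture requests). The cameraId and Handler here are placeholders, not values from this article:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler

// Minimal sketch: open a camera device; requires the CAMERA permission at runtime
@SuppressLint("MissingPermission")
fun openCamera(context: Context, cameraId: String, handler: Handler) {
    val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            // The camera is ready: next create a CameraCaptureSession with the preview
            // and encoder Surfaces (see "The recording process" below)
        }
        override fun onDisconnected(camera: CameraDevice) = camera.close()
        override fun onError(camera: CameraDevice, error: Int) = camera.close()
    }, handler)
}
```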
The input mode of MediaCodec
To encode with MediaCodec, the camera data has to be fed into the encoder. Data can be written to MediaCodec in two ways:
- Surface: use a Surface as the input of the encoder MediaCodec. This Surface is created by MediaCodec's createInputSurface() method; once the camera renders valid data onto it, MediaCodec can directly output the encoded data.
- InputBuffer: use the input buffers as the input of the encoder MediaCodec. What gets filled in here is raw frame data. With Camera2, frame data can be obtained directly through an ImageReader; the resulting Image carries information such as width, height, format, timestamp and the YUV data planes, which gives finer control (see the sketch after this list).
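For the InputBuffer path, a hedged sketch of how Camera2 frames might be pulled from an ImageReader is shown below; the size, maxImages value and listener body are assumptions, not the article's code:

```kotlin
import android.graphics.ImageFormat
import android.media.ImageReader
import android.os.Handler

// Hypothetical sketch: receive YUV_420_888 frames from Camera2 through an ImageReader.
// The ImageReader's surface would be added to the capture session outputs.
fun createYuvReader(width: Int, height: Int, handler: Handler): ImageReader {
    val imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2)
    imageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
        // Each plane holds one YUV component (Y, U, V); copy them out before feeding
        // the encoder via queueInputBuffer() in the expected color format
        val yPlane = image.planes[0].buffer
        val uPlane = image.planes[1].buffer
        val vPlane = image.planes[2].buffer
        image.close()
    }, handler)
    return imageReader
}
```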
MediaCodec encodes Camera2 data
Before Android 5.0 (API 21), MediaCodec only supported the synchronous ByteBuffer[] API; from API 21 onwards the synchronous ByteBuffer API and the asynchronous (callback) mode are recommended. The synchronous ByteBuffer mode is used here. The process involved is mainly video data encoding and multiplexing: since the input of MediaCodec is fed through the Surface, we only need to fetch the encoded output and use the multiplexer MediaMuxer to generate the MP4 file. The key code is as follows:
// Returns the index of a successfully encoded output buffer
var outputBufferId: Int = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0)
if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    // Add a video track and start the multiplexer
    mTrackIndex = mMediaMuxer.addTrack(mMediaCodec.outputFormat)
    mMediaMuxer.start()
    mStartMuxer = true
} else {
    while (outputBufferId >= 0) {
        if (!mStartMuxer) {
            Log.i(TAG, "MediaMuxer not started")
            // Skip this buffer but keep draining so the loop does not spin on the same index
            mMediaCodec.releaseOutputBuffer(outputBufferId, false)
            outputBufferId = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0)
            continue
        }
        // Get the valid encoded data
        val outputBuffer = mMediaCodec.getOutputBuffer(outputBufferId) ?: break
        outputBuffer.position(bufferInfo.offset)
        outputBuffer.limit(bufferInfo.offset + bufferInfo.size)
        if (pts == 0L) {
            pts = bufferInfo.presentationTimeUs
        }
        bufferInfo.presentationTimeUs = bufferInfo.presentationTimeUs - pts
        // Write the encoded sample to the multiplexer to generate the file
        mMediaMuxer.writeSampleData(mTrackIndex, outputBuffer, bufferInfo)
        Log.d(
            TAG,
            "pts = ${bufferInfo.presentationTimeUs / 1000000.0f} s ,${pts / 1000} ms"
        )
        mMediaCodec.releaseOutputBuffer(outputBufferId, false)
        outputBufferId = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0)
    }
}
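The loop above uses the synchronous ByteBuffer mode. For comparison, a hedged sketch of the asynchronous (callback) mode mentioned earlier, reusing the same fields (mMediaCodec, mMediaMuxer, mTrackIndex, TAG) and simplifying the muxer handling, might look like this:

```kotlin
// Sketch of the asynchronous mode (API 21+); setCallback() must be called before configure().
mMediaCodec.setCallback(object : MediaCodec.Callback() {
    override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {
        // Plays the same role as INFO_OUTPUT_FORMAT_CHANGED in the synchronous loop
        mTrackIndex = mMediaMuxer.addTrack(format)
        mMediaMuxer.start()
    }

    override fun onOutputBufferAvailable(codec: MediaCodec, index: Int, info: MediaCodec.BufferInfo) {
        val outputBuffer = codec.getOutputBuffer(index) ?: return
        mMediaMuxer.writeSampleData(mTrackIndex, outputBuffer, info)
        codec.releaseOutputBuffer(index, false)
    }

    override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {
        // Not used here: with Surface input, frames arrive through the input Surface
    }

    override fun onError(codec: MediaCodec, e: MediaCodec.CodecException) {
        Log.e(TAG, "encode error", e)
    }
})
```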
The recording process
A Surface needs to be created when MediaCodec is configured; createInputSurface() can only be called after configure() and before start(). For reference:
// Configure the encoder
mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
// createInputSurface can only be called between configure() and start()
mSurface = mMediaCodec.createInputSurface()
// start ...
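The mediaFormat passed to configure() is not shown above; a minimal sketch of how such an encoder format could be built, with purely illustrative resolution, bitrate and frame-rate values, is:

```kotlin
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Illustrative encoder format; the values are assumptions, not the article's actual settings
val mediaFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080).apply {
    // Frames arrive via the input Surface, so COLOR_FormatSurface is required
    setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
    setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000)
    setInteger(MediaFormat.KEY_FRAME_RATE, 30)
    setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
}
// The encoder itself would be created with e.g.
// MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
```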
Add this Surface to the output Surface list of the SessionConfiguration, as follows:
// Create the CaptureSession
@RequiresApi(Build.VERSION_CODES.P)
private suspend fun createCaptureSession(): CameraCaptureSession = suspendCoroutine { cont ->
    val outputs = mutableListOf<OutputConfiguration>()
    // The preview Surface
    outputs.add(OutputConfiguration(mSurface))
    // The MediaCodec input Surface
    outputs.add(OutputConfiguration(EncodeManager.getSurface()))
    val sessionConfiguration = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        outputs, mExecutor, ...)
    mCameraDevice.createCaptureSession(sessionConfiguration)
}
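The "..." in the SessionConfiguration call above elides the session state callback. As an illustration only (the names and error handling are assumptions, not the article's code), a callback that resumes the coroutine might look like this:

```kotlin
// Hypothetical session state callback; cont is the suspendCoroutine continuation
// from createCaptureSession() above
val sessionStateCallback = object : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) =
        cont.resumeWith(Result.success(session))

    override fun onConfigureFailed(session: CameraCaptureSession) =
        cont.resumeWith(Result.failure(RuntimeException("Failed to configure CameraCaptureSession")))
}
```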
Then create a CaptureRequest to start the preview and deliver frames to both Surfaces, and start encoding at the same time, as shown below:
// Add the preview Surface and the encoder's input Surface as targets
mCaptureRequestBuild = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
val sur = EncodeManager.getSurface()
mCaptureRequestBuild.addTarget(sur)
mCaptureRequestBuild.addTarget(mSurface)
// Set various parameters
mCaptureRequestBuild.set(CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE, 1) // Enable video stabilization
// Send the repeating CaptureRequest
mCameraCaptureSession.setRepeatingRequest(
    mCaptureRequestBuild.build(),
    null,
    mCameraHandler
)
// Start encoding
EncodeManager.startEncode()
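The snippets above do not show how the recording is finished. As a hedged sketch only, assuming the same mMediaCodec, mMediaMuxer and mStartMuxer fields used earlier, stopping might look like this:

```kotlin
// Possible stop sequence (an assumption, not the article's code)
fun stopEncode() {
    // With Surface input, end-of-stream is signalled on the codec rather than via an input buffer
    mMediaCodec.signalEndOfInputStream()
    // After draining the remaining output buffers (BUFFER_FLAG_END_OF_STREAM), release everything
    mMediaCodec.stop()
    mMediaCodec.release()
    if (mStartMuxer) {
        mMediaMuxer.stop()
        mMediaMuxer.release()
        mStartMuxer = false
    }
}
```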
Recording effect
You are welcome to follow my personal WeChat official account to exchange and learn together.