I. Feature overview

Open the Android Camera, grab the byte[] frames from its preview, and render them to a TextureView in the Activity. At the same time, encode the frames to AVC (i.e. H.264) with MediaCodec and package the stream with MediaMuxer to produce an MP4 file.

II. Architecture design

The module is divided into the following sub-functions:

  1. Using the camera component (permission request, preview, size setup, etc.; focus is skipped for now, since the point here is encoding)
  2. Using TextureView (rendering the preview to the screen)
  3. Using MediaCodec (choosing a MediaFormat, working the buffer queue, etc.)
  4. Using MediaMuxer (the muxer, which mixes the H.264 video stream with an audio stream; audio has not been added yet and may be added later)

III. Camera components

Note the naming: android.graphics.Camera is a class for 3D transformations when drawing, android.hardware.Camera is the first-generation camera API, and android.hardware.camera2 is the newer camera API that replaces it. The first-generation Camera is used here.

The first and most important item on the list is requesting the camera permission.
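The post doesn't include the permission code itself, so here is a minimal sketch using ContextCompat/ActivityCompat; requestCameraPermission and REQUEST_CODE_CAMERA are names made up for illustration, and the manifest also needs <uses-permission android:name="android.permission.CAMERA" />.

// A minimal sketch (not from the original post) of the runtime permission request.
private val REQUEST_CODE_CAMERA = 1

private fun requestCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        // Ask the user; the result arrives in onRequestPermissionsResult()
        ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), REQUEST_CODE_CAMERA)
    } else {
        initCamera() // Already granted; proceed with camera setup
    }
}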

After getting the permission, we need to initialize the Camera:

The main things initialized here are two properties: cameraId, the ID of the camera to open, and outPutSizes, the frame size the camera outputs.

private fun initCamera() {
    // Initialize some camera parameters
    val instanceOfCameraUtil = CameraUtils.getInstance(this).apply {
        this@CameraActivity.cameraManager = this.cameraManager!!
        cameraId = this.getCameraId(false)!! // Use the rear camera by default
        // Take the first of the output sizes supported by the chosen camera
        outPutSizes = this.getCameraOutputSizes(cameraId, SurfaceTexture::class.java)!!.get(0)
    }
}
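CameraUtils is the author's own wrapper and its source is not shown in this post. The sketch below is a plausible reconstruction of the two helpers it exposes, built on camera2's CameraManager (which the snippet above already holds); the class name CameraUtilsSketch and all of its details are assumptions, not the original implementation.

// Hypothetical sketch of the CameraUtils helpers used above.
class CameraUtilsSketch(context: Context) {
    val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager

    // Return the ID of the first camera facing the requested direction
    fun getCameraId(useFront: Boolean): String? {
        val wanted = if (useFront) CameraCharacteristics.LENS_FACING_FRONT
                     else CameraCharacteristics.LENS_FACING_BACK
        return cameraManager.cameraIdList.firstOrNull { id ->
            cameraManager.getCameraCharacteristics(id)
                .get(CameraCharacteristics.LENS_FACING) == wanted
        }
    }

    // Return the output sizes supported for the given consumer class (e.g. SurfaceTexture)
    fun getCameraOutputSizes(cameraId: String, klass: Class<*>): List<Size>? {
        return cameraManager.getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
            ?.getOutputSizes(klass)?.toList()
    }
}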

Assume that at this point your layout file already contains a TextureView (id: textureView). We need to declare a TextureView.SurfaceTextureListener:

private val mSurfaceTextureListener = object : TextureView.SurfaceTextureListener {
    override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) {}

    override fun onSurfaceTextureUpdated(surface: SurfaceTexture) {}

    override fun onSurfaceTextureDestroyed(surface: SurfaceTexture): Boolean {
        return false
    }

    override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
        openCameraPreview(surface, width, height)
    }
}

What we need to focus on is the last override, onSurfaceTextureAvailable, which is called back once the TextureView is ready to use; we build the preview on top of it. This part of the code is described in many posts on the web. Note that focus and similar features are not handled here; search for them separately if you need them.
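The post doesn't show where the listener is attached. A minimal sketch (an assumption about the surrounding Activity, typically in onCreate), guarding against the surface already being available, e.g. after a configuration change:

// A minimal sketch (not from the original post): wire up the listener.
mTextureView = findViewById(R.id.textureView)
if (mTextureView.isAvailable) {
    // The surface may already exist; the availability callback won't fire again in that case
    openCameraPreview(mTextureView.surfaceTexture!!, mTextureView.width, mTextureView.height)
} else {
    mTextureView.surfaceTextureListener = mSurfaceTextureListener
}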

IV. Building the preview screen

My initial plan was a full-screen portrait recorder for the phone, so the (width * height) size had to be 1080 * 1920. We therefore configure the encoder with a width and height of 1080 * 1920, but the byte[] frames delivered by setPreviewCallback are landscape, so their size is 1920 * 1080. Feeding that data straight into the encoder corrupts the picture:

Therefore, we need to rotate the NV21 data in the one-dimensional byte[] array by 90 degrees. This is the rotateYUV420Degree90 method, listed in full at the end of this article.

private fun openCameraPreview(surfaceTexture: SurfaceTexture, width: Int, height: Int) {
    // Initialize the preview size. These calls must happen after the texture is available, otherwise there will be problems.
    // The size is forced to 1080*1920 here; it could also be derived from this method's width/height.
    // 1080P is a common size, but stretching a non-matching preview across a full-screen TextureView can distort the image; that needs to be solved separately.
    mPreviewSize = Size(1080, 1920)

    mTextureView.setAspectRatio(mPreviewSize.width, mPreviewSize.height)
    mCameraDevice = Camera.open(0)
    mCameraDevice.setDisplayOrientation(90)
    /**
     * Configure the captured video parameters.
     */
    mCameraDevice.parameters = mCameraDevice.parameters.apply {
        this!!.setPreviewSize(mPreviewSize.height, mPreviewSize.width)
        this.setPictureSize(mPreviewSize.height, mPreviewSize.width)
        this.previewFormat = CAMERA_COLOR_FORMAT
    }
    /**
     * The Camera, as the producer, produces image data for the SurfaceTexture to process:
     * either further rendering or direct display. Here setPreviewTexture wires it up for display.
     * The surfaceTexture handed to us by onSurfaceTextureAvailable is the vehicle for that display.
     */
    mCameraDevice.setPreviewTexture(surfaceTexture)
    mCameraDevice.setPreviewCallback { data, camera ->
        // Note: the preview data is landscape, so its width and height are swapped relative to our portrait size
        if (::mHandler.isInitialized) {
            mHandler.post {
                // Convert the landscape 1920 * 1080 frame to a portrait 1080 * 1920 frame
                val verticalData = ImageFormatUtils.rotateYUV420Degree90(data, mPreviewSize.height, mPreviewSize.width)
                onFrameAvailable(verticalData)
            }
        }
    }
    mCameraDevice.startPreview()
}

V. Declaring the encoder

Because DSP chips differ across devices, the supported color formats and other parameters vary from device to device. Here I use one of the color formats available on the Snapdragon 865 in the Mi 10: COLOR_FormatYUV420SemiPlanar (semi-planar YUV420; strictly this is the NV12 layout, which differs from the camera's NV21 only in the order of the U/V bytes). Next, we initialize MediaCodec and MediaMuxer. In real code, the supported formats must be queried dynamically at runtime.

If the device has a weak DSP chip that supports few formats, hardware encoding may simply be unavailable, so a software fallback (FFmpeg, etc.) should be wired in as well; the MediaCodec path here is only an example. The color format must match what the encoder expects, otherwise you get all kinds of problems: color and position offsets, color casts, corrupted frames, and so on.
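As a hedged sketch of such a runtime check (an addition, not from the original post), MediaCodecList (API 21+) can probe whether any encoder accepts the desired format; a null result would be the cue to fall back to a software encoder:

// Sketch: probe for an AVC encoder that accepts the given size; returns the codec name or null.
private fun findAvcEncoder(width: Int, height: Int): String? {
    // Per the MediaCodecList docs, don't set a frame rate on the format used for probing
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height)
    return MediaCodecList(MediaCodecList.REGULAR_CODECS).findEncoderForFormat(format)
}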

private val MEDIA_TYPE = MediaFormat.MIMETYPE_VIDEO_AVC
private val MEDIACODEC_COLOR_FORMAT = MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar // Semi-planar YUV420 input

private fun initEncoder() {
    val supportedColorFormat = ImageFormatUtils.getSupportColorFormat() // Query the supported color formats
    try {
        mMediaCodec = MediaCodec.createEncoderByType(MEDIA_TYPE)
        mMediaFormat = MediaFormat.createVideoFormat(MEDIA_TYPE, mPreviewSize.width, mPreviewSize.height).apply {
            // Declare the input color format; if the encoder wants a planar format such as I420, convert the NV21 frames first
            setInteger(MediaFormat.KEY_COLOR_FORMAT, MEDIACODEC_COLOR_FORMAT)
            setInteger(MediaFormat.KEY_BIT_RATE, 10000000)
            setInteger(MediaFormat.KEY_FRAME_RATE, 30)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5)
        }
        mMediaCodec.configure(mMediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        // Create the muxer
        val fileName = this.obbDir.absolutePath + "/" + System.currentTimeMillis() + ".mp4"
        mMuxer = MediaMuxer(fileName, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    } catch (e: Exception) {
        e.printStackTrace()
        return
    }
}

If no exception has been thrown by the time the muxer is created, both the codec and the muxer were built successfully.

The most widely compatible color format is I420, in which case you should declare COLOR_FormatYUV420Flexible and convert the NV21 frames to I420 before encoding. If you skip the conversion, playback in mainstream players is usually not a big problem, but it is not guaranteed.
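The conversion itself isn't shown in the post; below is a minimal sketch (the helper name nv21ToI420 is hypothetical). NV21 is the Y plane followed by interleaved V/U pairs, while I420 is the Y plane, then the whole U plane, then the whole V plane.

// Hypothetical helper (not in the original post): repack an NV21 frame as I420.
fun nv21ToI420(nv21: ByteArray, width: Int, height: Int): ByteArray {
    val ySize = width * height
    val i420 = ByteArray(ySize * 3 / 2)
    // The Y plane is identical in both layouts
    System.arraycopy(nv21, 0, i420, 0, ySize)
    var uIndex = ySize             // start of the U plane in I420
    var vIndex = ySize + ySize / 4 // start of the V plane in I420
    var i = ySize
    while (i < nv21.size) {
        i420[vIndex++] = nv21[i]     // NV21 stores V first...
        i420[uIndex++] = nv21[i + 1] // ...then U
        i += 2
    }
    return i420
}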

VI. Data recording

We need to open a new thread for the encoding work. Each time the camera preview delivers a frame, we post a task to that thread through its Handler.

// The encoding thread
private lateinit var mHandler: Handler
private lateinit var mWorkerThread: HandlerThread

private fun startEncoder() {
    isEncoding = true
    // Start the codec
    mMediaCodec.start()
    // Build the worker thread and its handler
    mWorkerThread = HandlerThread("WorkerThread-Encoder")
    mWorkerThread.start()
    mHandler = Handler(mWorkerThread.looper)
}

Note that we do not start the Muxer here; mixing starts on the worker thread once the encoder reports its output format, as shown further below.

mCameraDevice.setPreviewCallback { data, camera ->
    if (::mHandler.isInitialized) {
        mHandler.post {
            // Convert the landscape 1920 * 1080 frame to a portrait 1080 * 1920 frame
            val verticalData = ImageFormatUtils.rotateYUV420Degree90(data, mPreviewSize.height, mPreviewSize.width)
            onFrameAvailable(verticalData)
        }
    }
}

When I checked the resolutions the Camera supports, they were all landscape, i.e. 1920*1080, while our MediaCodec was configured with a portrait resolution; that mismatch is another pitfall.
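Since the post hard-codes 1920*1080, here is a hedged sketch (an assumption, not the author's code) of picking a supported landscape preview size at runtime instead, which would then be rotated to portrait as above:

// Sketch: pick the supported preview size closest to the wanted (landscape) resolution.
fun choosePreviewSize(camera: Camera, wantedWidth: Int, wantedHeight: Int): Camera.Size {
    val sizes = camera.parameters.supportedPreviewSizes
    // Prefer an exact match, otherwise the size with the closest pixel count
    return sizes.firstOrNull { it.width == wantedWidth && it.height == wantedHeight }
        ?: sizes.minByOrNull { Math.abs(it.width * it.height - wantedWidth * wantedHeight) }!!
}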

Into onFrameAvailable() we continuously feed byte arrays holding the live preview frames, and encode them. Once a frame is encoded, the output is handed to the muxer:

    private fun onFrameAvailable(_data: ByteArray?) {
        if (!isEncoding) {
            return
        }
        // (Optionally convert NV21 -> I420 here,) then feed the frame into the encoder
        val data: ByteArray = _data!!

        var index = 0
        try {
            index = mMediaCodec.dequeueInputBuffer(0)
        } catch (e: Exception) {
            e.printStackTrace()
            return
        }
        if (index >= 0) {
            val inputBuffer = mMediaCodec.getInputBuffer(index)
            inputBuffer!!.clear()
            inputBuffer.put(data, 0, data.size)
            mMediaCodec.queueInputBuffer(
                    index,
                    0, data.size,
                    System.nanoTime() / 1000, 0)
        }
        while (true) {
            val bufferInfo = MediaCodec.BufferInfo()
            val encoderStatus = mMediaCodec.dequeueOutputBuffer(bufferInfo, 10_000)
            if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                break // No output available yet; try again later
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                // The output format is now known; start the muxer here
                val newFormat = mMediaCodec.outputFormat
                mVideoTrack = mMuxer!!.addTrack(newFormat)
                mMuxer!!.start()
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // Deprecated since API 21; nothing to do
            } else {
                // encoderStatus is a valid output buffer index
                val encodedData = mMediaCodec.getOutputBuffer(encoderStatus)
                if ((bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    // Codec config (SPS/PPS) is delivered via the MediaFormat; don't write it as a sample
                    bufferInfo.size = 0
                }
                if (bufferInfo.size != 0) {
                    // Position the buffer at the start of the encoded payload
                    encodedData!!.position(bufferInfo.offset)
                    // The limit marks the end of the payload
                    encodedData.limit(bufferInfo.offset + bufferInfo.size)
                    // Write the sample into the MP4
                    if (!isEncoding) {
                        return
                    }
                    mMuxer!!.writeSampleData(mVideoTrack, encodedData, bufferInfo)
                }
                // Release the buffer back to the codec
                mMediaCodec.releaseOutputBuffer(encoderStatus, false)
            }
        }
    }

This method is executed in the child thread.

VII. Generating the file

private fun pauseRecord() {
    +send // Display the send button
    record.isRunning = false
    Timer.cancel() // Cancel the timer
    showBackOrCancel()
    if (isEncoding) {
        stopEncoder()
    }
}

private fun stopEncoder() {
    isEncoding = false
    Toast("Saved under " + this.obbDir.absolutePath)
    try {
        // Stop and release the muxer
        mMuxer?.stop()
        mMuxer?.release()
        // Stop and release the codec
        mMediaCodec.stop()
        mMediaCodec.release()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
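One refinement the original flow omits: before stopping, the encoder can be told that input has ended, so that frames still buffered inside it get drained. A hedged sketch (signalEndOfStream is a name made up here), to be called and drained before mMuxer?.stop():

// Optional sketch (not in the original post): signal end-of-stream to the encoder.
private fun signalEndOfStream() {
    val index = mMediaCodec.dequeueInputBuffer(10_000)
    if (index >= 0) {
        // An empty input buffer flagged BUFFER_FLAG_END_OF_STREAM marks the end of input
        mMediaCodec.queueInputBuffer(
                index, 0, 0,
                System.nanoTime() / 1000,
                MediaCodec.BUFFER_FLAG_END_OF_STREAM)
    }
}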


This way, the generated file ends up in the storage directory under Android/obb/<package name>/.

VIII. Summary

Overall, it's pretty rudimentary. The recording size and color format are not chosen dynamically per device, and camera features such as flash and focus are not included.

MediaCodec itself is a codec interface. Unlike FFmpeg, it prefers hardware codecs: efficient and low-power, but with lower compatibility and extensibility than software codecs. Some player apps leave the hardware/software decoding choice to the user, so that both extensibility and power consumption can be taken into account.

The final result (out of focus):

Some related methods:

1. Rotate an NV21 frame by 90 degrees (NV21 -> NV21)

    public byte[] rotateYUV420Degree90(byte[] data, int imageWidth, int imageHeight) {
        byte[] yuv = new byte[imageWidth * imageHeight * 3 / 2];
        // Rotate the Y luma
        int i = 0;
        for (int x = 0; x < imageWidth; x++) {
            for (int y = imageHeight - 1; y >= 0; y--) {
                yuv[i] = data[y * imageWidth + x];
                i++;
            }
        }
        // Rotate the U and V color components
        i = imageWidth * imageHeight * 3 / 2 - 1;
        for (int x = imageWidth - 1; x > 0; x = x - 2) {
            for (int y = 0; y < imageHeight / 2; y++) {
                yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x - 1)];
                i--;
                yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
                i--;
            }
        }
        return yuv;
    }

2. Query the supported color format

    public static int getSupportColorFormat() {
        int numCodecs = MediaCodecList.getCodecCount();
        MediaCodecInfo codecInfo = null;
        for (int i = 0; i < numCodecs && codecInfo == null; i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) {
                continue;
            }
            String[] types = info.getSupportedTypes();
            boolean found = false;
            for (int j = 0; j < types.length && !found; j++) {
                if (types[j].equals("video/avc")) {
                    Log.d("TAG:", "found");
                    found = true;
                }
            }
            if (!found) continue;
            codecInfo = info;
        }
        Log.e("TAG", "Found " + codecInfo.getName() + " supporting " + "video/avc");
        // Find a color format that the codec supports
        MediaCodecInfo.CodecCapabilities capabilities = codecInfo.getCapabilitiesForType("video/avc");
        Log.e("TAG", "length-" + capabilities.colorFormats.length + "==" + Arrays.toString(capabilities.colorFormats));
        for (int i = 0; i < capabilities.colorFormats.length; i++) {
            Log.d(TAG, "TAG MediaCodecInfo COLOR FORMAT :" + capabilities.colorFormats[i]);
            if ((capabilities.colorFormats[i] == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar)
                    || (capabilities.colorFormats[i] == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar)) {
                return capabilities.colorFormats[i];
            }
        }
        return MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible;
    }