The renderings are as follows:



1. Set rounded corners for preview controls

Set up the ViewOutlineProvider for the control

public RoundTextureView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setOutlineProvider(new ViewOutlineProvider() {
            @Override
            public void getOutline(View view, Outline outline) {
                Rect rect = new Rect(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
                outline.setRoundRect(rect, radius);
            }
        });
        setClipToOutline(true);
    }

Update the corner radius value as needed

    public void setRadius(int radius) {
        this.radius = radius;
    }

    public void turnRound() {
        invalidateOutline();
    }

The displayed corner radius can be updated according to the value set. When the control is square and the radius is half the side length, the view is displayed as a circle.
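
For example, a minimal usage sketch (my own, not from the demo) that turns a square RoundTextureView into a circle once it has been measured:

    // assumes the view is laid out with equal width and height
    roundTextureView.post(() -> {
        // radius = half the side length turns the square view into a circle
        roundTextureView.setRadius(roundTextureView.getWidth() / 2);
        roundTextureView.turnRound(); // triggers invalidateOutline()
    });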

2. Implement square preview

1. The device supports 1:1 preview size

First, a simple but limited implementation is introduced: adjust the camera preview size and the preview control size to 1:1.

Most Android devices support a variety of preview sizes. Take the Samsung Tab S3 as an example:

  • When using the Camera API, the following preview sizes are supported:
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1920x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1280x720
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1440x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1088x1088
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1056x864
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 960x720
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 720x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 640x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 352x288
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 320x240
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 176x144

The 1:1 preview size is 1088×1088.

  • When using the Camera2 API, the supported preview sizes (which also include PictureSize) are as follows:
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x3096
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x2322
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x2448
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x1836
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3024x3024
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2976x2976
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2880x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2592x1944
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1920
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1440
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2160x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1536
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1152
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1936x1936
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1920x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1440x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x960
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 960x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 720x480
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 640x480
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 320x240
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 176x144

The 1:1 preview sizes are: 3024×3024, 2976×2976, 2160×2160, 1936×1936.

As long as we choose a 1:1 preview size and set the preview control to be square, we get a square preview; setting the control's corner radius to half its side length then yields a circular preview. A sketch of the size selection is shown below.
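
As a sketch (the helper method below is mine, not from the demo), selecting the largest 1:1 size with the Camera API could look like this:

    // Returns the largest supported square preview size, or null if the device has none
    private Camera.Size selectSquarePreviewSize(Camera.Parameters parameters) {
        Camera.Size best = null;
        for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
            if (size.width == size.height && (best == null || size.width > best.width)) {
                best = size;
            }
        }
        return best;
    }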

2. The device does not support 1:1 preview size
  • Drawbacks of selecting a 1:1 preview size

    • Resolution limitations: as mentioned above, we can select a 1:1 preview size, but the limitations are high and the range of options is very small. If the camera does not support a 1:1 preview size, this solution is not feasible.
    • Resource consumption: take the Samsung Tab S3 as an example. When using the Camera2 API, the device supports very large square preview sizes, which consume a lot of system resources in image processing and other operations.
  • Handling cases where a 1:1 preview size is not supported

    • Add a 1:1 ViewGroup
    • Put TextureView into the ViewGroup
    • Set the margin value of the TextureView to show the center square area

Schematic diagram





Sample code

    // Keep the aspect ratio of the preview control equal to that of the preview size to avoid stretching
    {
        FrameLayout.LayoutParams textureViewLayoutParams = (FrameLayout.LayoutParams) textureView.getLayoutParams();
        int newHeight = 0;
        int newWidth = textureViewLayoutParams.width;
        // landscape
        if (displayOrientation % 180 == 0) {
            newHeight = textureViewLayoutParams.width * previewSize.height / previewSize.width;
        }
        // portrait
        else {
            newHeight = textureViewLayoutParams.width * previewSize.width / previewSize.height;
        }
        // When the preview is not square, add a layer of ViewGroup to limit the visible area of the View
        if (newHeight != textureViewLayoutParams.height) {
            insertFrameLayout = new RoundFrameLayout(CoverByParentCameraActivity.this);
            int sideLength = Math.min(newWidth, newHeight);
            FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(sideLength, sideLength);
            insertFrameLayout.setLayoutParams(layoutParams);
            FrameLayout parentView = (FrameLayout) textureView.getParent();
            parentView.removeView(textureView);
            parentView.addView(insertFrameLayout);

            insertFrameLayout.addView(textureView);
            FrameLayout.LayoutParams newTextureViewLayoutParams = new FrameLayout.LayoutParams(newWidth, newHeight);
            // landscape
            if (displayOrientation % 180 == 0) {
                newTextureViewLayoutParams.leftMargin = (newHeight - newWidth) / 2;
            }
            // portrait
            else {
                newTextureViewLayoutParams.topMargin = -(newHeight - newWidth) / 2;
            }
            textureView.setLayoutParams(newTextureViewLayoutParams);
        }
    }

3. Use GLSurfaceView for a more customized preview

The methods above cover square and circular previews, but they only work for the native camera. What if our data source is not the native camera? Next we use GLSurfaceView to display NV21 data, rendering the preview entirely ourselves.

1. GLSurfaceView usage process

Flow of rendering YUV data with OpenGL

One of the keys is writing the Renderer. The Renderer interface is introduced as follows:

    /**
     * A generic renderer interface.
     * <p>
     * The renderer is responsible for making OpenGL calls to render a frame.
     * <p>
     * GLSurfaceView clients typically create their own classes that implement
     * this interface, and then call {@link GLSurfaceView#setRenderer} to
     * register the renderer with the GLSurfaceView.
     * <p>
     *
     * <div class="special reference">
     * <h3>Developer Guides</h3>
     * <p>For more information about how to use OpenGL, read the
     * <a href="{@docRoot}guide/topics/graphics/opengl.html">OpenGL</a> developer guide.</p>
     * </div>
     *
     * <h3>Threading</h3>
     * The renderer will be called on a separate thread, so that rendering
     * performance is decoupled from the UI thread. Clients typically need to
     * communicate with the renderer from the UI thread, because that's where
     * input events are received. Clients can communicate using any of the
     * standard Java techniques for cross-thread communication, or they can
     * use the {@link GLSurfaceView#queueEvent(Runnable)} convenience method.
     * <p>
     * <h3>EGL Context Lost</h3>
     * There are situations where the EGL rendering context will be lost. This
     * typically happens when device wakes up after going to sleep. When
     * the EGL context is lost, all OpenGL resources (such as textures) that are
     * associated with that context will be automatically deleted. In order to
     * keep rendering correctly, a renderer must recreate any lost resources
     * that it still needs. The {@link #onSurfaceCreated(GL10, EGLConfig)} method
     * is a convenient place to do this.
     *
     *
     * @see #setRenderer(Renderer)
     */
    public interface Renderer {
        /**
         * Called when the surface is created or recreated.
         * <p>
         * Called when the rendering thread starts and whenever the EGL context
         * is lost. The EGL context will typically be lost when the Android
         * device awakes after going to sleep.
         * <p>
         * Since this method is called at the beginning of rendering, as well as
         * every time the EGL context is lost, this method is a convenient place
         * to put code to create resources that need to be created when the
         * rendering starts, and that need to be recreated when the EGL context
         * is lost. Textures are an example of a resource that you might want
         * to create here.
         * <p>
         * Note that when the EGL context is lost, all OpenGL resources
         * associated with that context will be automatically deleted. You do
         * not need to call the corresponding "glDelete" methods such as
         * glDeleteTextures to manually delete these lost resources.
         * <p>
         * @param gl the GL interface. Use <code>instanceof</code> to
         * test if the interface supports GL11 or higher interfaces.
         * @param config the EGLConfig of the created surface. Can be used
         * to create matching pbuffers.
         */
        void onSurfaceCreated(GL10 gl, EGLConfig config);

        /**
         * Called when the surface changed size.
         * <p>
         * Called after the surface is created and whenever
         * the OpenGL ES surface size changes.
         * <p>
         * Typically you will set your viewport here. If your camera
         * is fixed then you could also set your projection matrix here:
         * <pre class="prettyprint">
         * void onSurfaceChanged(GL10 gl, int width, int height) {
         *     gl.glViewport(0, 0, width, height);
         *     // for a fixed camera, set the projection too
         *     float ratio = (float) width / height;
         *     gl.glMatrixMode(GL10.GL_PROJECTION);
         *     gl.glLoadIdentity();
         *     gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
         * }
         * </pre>
         * @param gl the GL interface. Use <code>instanceof</code> to
         * test if the interface supports GL11 or higher interfaces.
         * @param width
         * @param height
         */
        void onSurfaceChanged(GL10 gl, int width, int height);

        /**
         * Called to draw the current frame.
         * <p>
         * This method is responsible for drawing the current frame.
         * <p>
         * The implementation of this method typically looks like this:
         * <pre class="prettyprint">
         * void onDrawFrame(GL10 gl) {
         *     gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
         *     //... other gl calls to render the scene ...
         * }
         * </pre>
         * @param gl the GL interface. Use <code>instanceof</code> to
         * test if the interface supports GL11 or higher interfaces.
         */
        void onDrawFrame(GL10 gl);
    }

  • void onSurfaceCreated(GL10 gl, EGLConfig config)

    Called when the Surface is created or recreated
  • void onSurfaceChanged(GL10 gl, int width, int height)

    Called when the Surface size changes
  • void onDrawFrame(GL10 gl)

    This is where the draw operation is implemented. When renderMode is set to RENDERMODE_CONTINUOUSLY, this function executes continuously;

    when renderMode is set to RENDERMODE_WHEN_DIRTY, it executes only after the Surface is created and then whenever requestRender is called. We usually choose the RENDERMODE_WHEN_DIRTY render mode to avoid overdrawing.

Normally we implement a Renderer ourselves and then set it on the GLSurfaceView; writing the Renderer is, so to speak, the core step of the whole process. Initialization is performed in void onSurfaceCreated(GL10 gl, EGLConfig config) and drawing in void onDrawFrame(GL10 gl):


The Renderer that renders YUV data
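
Before looking at the implementation details, here is a minimal skeleton (my own sketch; the class name matches the roundCameraGLSurfaceView instance used later, and the method bodies are placeholders) showing how the GLSurfaceView, its Renderer and the render mode fit together:

    public class RoundCameraGLSurfaceView extends GLSurfaceView implements GLSurfaceView.Renderer {

        public RoundCameraGLSurfaceView(Context context, AttributeSet attrs) {
            super(context, attrs);
            setEGLContextClientVersion(2);        // we use OpenGL ES 2.0 (GLES20) calls
            setRenderer(this);                    // register this class as the Renderer
            setRenderMode(RENDERMODE_WHEN_DIRTY); // draw only when requestRender() is called
        }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // create the shader program and the Y/U/V textures here, e.g. initRenderer()
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            GLES20.glViewport(0, 0, width, height); // map OpenGL coordinates to the surface size
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            // bind the latest YUV data to the textures and draw; see the implementation below
        }
    }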

2. Implementation
  • Coordinate System Introduction

Android View coordinate system

OpenGL World coordinate system

As you can see, OpenGL's coordinate system is Cartesian, unlike the Android View coordinate system. The Android View coordinate system has its origin at the top-left corner, with x increasing to the right and y increasing downward; the OpenGL coordinate system has its origin at the center, with x increasing to the right and y increasing upward.
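
This difference is why vertex coordinates and texture coordinates are kept in separate arrays. A sketch for the no-rotation, no-mirror case (the constant names follow the GLUtil class used below; the values are my assumption of a standard full-screen quad):

    // Full-viewport quad in OpenGL world coordinates (x, y per vertex),
    // ordered bottom-left, bottom-right, top-left, top-right for GL_TRIANGLE_STRIP
    static final float[] SQUARE_VERTICES = {
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  1.0f,
             1.0f,  1.0f
    };

    // Texture coordinates for the same four vertices; the image origin is at the
    // top-left, so the v axis is flipped relative to the OpenGL y axis
    static final float[] COORD_VERTICES = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f
    };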

  • Shader writing
    /**
     * vertex shader
     */
    private static String VERTEX_SHADER =
            "attribute vec4 attr_position;\n" +
            "attribute vec2 attr_tc;\n" +
            "varying vec2 tc;\n" +
            "void main() {\n" +
            "    gl_Position = attr_position;\n" +
            "    tc = attr_tc;\n" +
            "}";

    /**
     * fragment shader
     */
    private static String FRAG_SHADER =
            "varying vec2 tc;\n" +
            "uniform sampler2D ySampler;\n" +
            "uniform sampler2D uSampler;\n" +
            "uniform sampler2D vSampler;\n" +
            "const mat3 convertMat = mat3(1.0, 1.0, 1.0, 0.0, -0.344, 1.772, 1.402, -0.714, 0.0);\n" +
            "void main()\n" +
            "{\n" +
            "    vec3 yuv;\n" +
            "    yuv.x = texture2D(ySampler, tc).r;\n" +
            "    yuv.y = texture2D(uSampler, tc).r - 0.5;\n" +
            "    yuv.z = texture2D(vSampler, tc).r - 0.5;\n" +
            "    gl_FragColor = vec4(convertMat * yuv, 1.0);\n" +
            "}";
  • Built-in variable interpretation

    • gl_Position

      In the VERTEX_SHADER code, gl_Position represents the coordinates to draw at. Since we are drawing in 2D, we directly pass in the OpenGL 2D coordinates: bottom-left (-1, -1), bottom-right (1, -1), top-left (-1, 1), top-right (1, 1), i.e. {-1, -1, 1, -1, -1, 1, 1, 1}.
    • gl_FragColor

      In the FRAG_SHADER code, gl_FragColor represents the color of a single fragment.
  • Explanation of other variables

    • ySampler, uSampler, vSampler

      Represents Y, U, and V texture samplers, respectively
    • convertMat

      According to the following formula:
      R = Y + 1.402 (V - 128)
      G = Y - 0.34414 (U - 128) - 0.71414 (V - 128)
      B = Y + 1.772 (U - 128)

      we can derive a YUV-to-RGB conversion matrix:

      1.0,    1.0,    1.0,
      0,     -0.344,  1.772,
      1.402, -0.714,  0
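
      One detail worth noting: a GLSL mat3 constructor fills the matrix column by column, so the convertMat defined in the fragment shader above reproduces exactly this formula. A plain-Java sanity check with arbitrary sample values (not from the demo):

      // mat3(1.0,1.0,1.0, 0,-0.344,1.772, 1.402,-0.714,0) has columns
      // (1,1,1), (0,-0.344,1.772) and (1.402,-0.714,0), so convertMat * yuv gives:
      float y = 0.6f, u = -0.1f, v = 0.2f;           // u and v already shifted by 0.5, as in the shader
      float r = 1.0f * y + 0.0f   * u + 1.402f * v;  // R = Y + 1.402 V
      float g = 1.0f * y - 0.344f * u - 0.714f * v;  // G = Y - 0.344 U - 0.714 V
      float b = 1.0f * y + 1.772f * u + 0.0f   * v;  // B = Y + 1.772 U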
  • Explanation of some types and functions

    • vec3, vec4

      Represent a three-dimensional and a four-dimensional vector, respectively.
    • vec4 texture2D(sampler2D sampler, vec2 coord)

      Samples the sampler's texture at the specified coordinates and returns the color value; for example:

      texture2D(ySampler, tc).r gets the Y data,

      texture2D(uSampler, tc).r gets the U data,

      texture2D(vSampler, tc).r gets the V data.
  • Initialize in Java code: create the ByteBuffer texture data for Y, U and V according to the width and height of the image, and select the corresponding coordinate transformation according to whether the display is mirrored and the rotation angle:

    public void init(boolean isMirror, int rotateDegree, int frameWidth, int frameHeight) {
        if (this.frameWidth == frameWidth
                && this.frameHeight == frameHeight
                && this.rotateDegree == rotateDegree
                && this.isMirror == isMirror) {
            return;
        }
        dataInput = false;
        this.frameWidth = frameWidth;
        this.frameHeight = frameHeight;
        this.rotateDegree = rotateDegree;
        this.isMirror = isMirror;
        yArray = new byte[this.frameWidth * this.frameHeight];
        uArray = new byte[this.frameWidth * this.frameHeight / 4];
        vArray = new byte[this.frameWidth * this.frameHeight / 4];
    
        int yFrameSize = this.frameHeight * this.frameWidth;
        int uvFrameSize = yFrameSize >> 2;
        yBuf = ByteBuffer.allocateDirect(yFrameSize);
        yBuf.order(ByteOrder.nativeOrder()).position(0);
    
        uBuf = ByteBuffer.allocateDirect(uvFrameSize);
        uBuf.order(ByteOrder.nativeOrder()).position(0);
    
        vBuf = ByteBuffer.allocateDirect(uvFrameSize);
        vBuf.order(ByteOrder.nativeOrder()).position(0);
        // Vertex coordinates
        squareVertices = ByteBuffer
                .allocateDirect(GLUtil.SQUARE_VERTICES.length * FLOAT_SIZE_BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        squareVertices.put(GLUtil.SQUARE_VERTICES).position(0);
        // Texture coordinates
        if (isMirror) {
            switch (rotateDegree) {
                case 0:
                    coordVertice = GLUtil.MIRROR_COORD_VERTICES;
                    break;
                case 90:
                    coordVertice = GLUtil.ROTATE_90_MIRROR_COORD_VERTICES;
                    break;
                case 180:
                    coordVertice = GLUtil.ROTATE_180_MIRROR_COORD_VERTICES;
                    break;
                case 270:
                    coordVertice = GLUtil.ROTATE_270_MIRROR_COORD_VERTICES;
                    break;
                default:
                    break;
            }
        } else {
            switch (rotateDegree) {
                case 0:
                    coordVertice = GLUtil.COORD_VERTICES;
                    break;
                case 90:
                    coordVertice = GLUtil.ROTATE_90_COORD_VERTICES;
                    break;
                case 180:
                    coordVertice = GLUtil.ROTATE_180_COORD_VERTICES;
                    break;
                case 270:
                    coordVertice = GLUtil.ROTATE_270_COORD_VERTICES;
                    break;
                default:
                    break;
            }
        }
        coordVertices = ByteBuffer.allocateDirect(coordVertice.length * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
        coordVertices.put(coordVertice).position(0);
    }
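
    A hedged usage sketch (the parameter values are illustrative): init is typically called once the frame size, rotation and mirroring are known, e.g. for a front camera shown mirrored and rotated 90°:

        // mirrored front-camera data, rotated 90°, 1280x720 NV21 frames
        roundCameraGLSurfaceView.init(true, 90, 1280, 720);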

    Initialize the Renderer when the Surface is created

        private void initRenderer() {
            rendererReady = false;
            createGLProgram();
    
            // Enable texture
            GLES20.glEnable(GLES20.GL_TEXTURE_2D);
            // Create a texture
            createTexture(frameWidth, frameHeight, GLES20.GL_LUMINANCE, yTexture);
            createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, uTexture);
            createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, vTexture);
    
            rendererReady = true;
        }  

    Where createGLProgram creates the OpenGL program and binds the variables in the shader code:

      private void createGLProgram() {
          int programHandleMain = GLUtil.createShaderProgram();
          if (programHandleMain != -1) {
              // Use the shader program
              GLES20.glUseProgram(programHandleMain);
              // Get the vertex shader variable
              int glPosition = GLES20.glGetAttribLocation(programHandleMain, "attr_position");
              int textureCoord = GLES20.glGetAttribLocation(programHandleMain, "attr_tc");
    
              // Get the fragment shader variable
              int ySampler = GLES20.glGetUniformLocation(programHandleMain, "ySampler");
              int uSampler = GLES20.glGetUniformLocation(programHandleMain, "uSampler");
              int vSampler = GLES20.glGetUniformLocation(programHandleMain, "vSampler");
    
              // Assign a value to a variable
              /**
               * Bind GLES20.GL_TEXTURE0 to ySampler,
               * GLES20.GL_TEXTURE1 to uSampler,
               * GLES20.GL_TEXTURE2 to vSampler.
               * The second parameter of glUniform1i is the texture unit index.
               */
              GLES20.glUniform1i(ySampler, 0);
              GLES20.glUniform1i(uSampler, 1);
              GLES20.glUniform1i(vSampler, 2);
    
              GLES20.glEnableVertexAttribArray(glPosition);
              GLES20.glEnableVertexAttribArray(textureCoord);
    
              // Set vertex shader data
              squareVertices.position(0);
              GLES20.glVertexAttribPointer(glPosition, GLUtil.COUNT_PER_SQUARE_VERTICE, GLES20.GL_FLOAT, false, 8, squareVertices);
              coordVertices.position(0);
              GLES20.glVertexAttribPointer(textureCoord, GLUtil.COUNT_PER_COORD_VERTICES, GLES20.GL_FLOAT, false, 8, coordVertices);
          }
      }

    createTexture creates a texture with the given width, height and format:

          private void createTexture(int width, int height, int format, int[] textureId) {
              // Create a texture
              GLES20.glGenTextures(1, textureId, 0);
              // Bind the texture
              GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId[0]);
              /**
               * {@link GLES20#GL_TEXTURE_WRAP_S} is the horizontal texture wrap mode
               * {@link GLES20#GL_TEXTURE_WRAP_T} is the vertical texture wrap mode
               *
               * {@link GLES20#GL_REPEAT}: repeat
               * {@link GLES20#GL_MIRRORED_REPEAT}: mirrored repeat
               * {@link GLES20#GL_CLAMP_TO_EDGE}: clamp to the edge
               *
               * For example, with {@link GLES20#GL_REPEAT}, if squareVertices is kept
               * unchanged and every coordVertices value is changed to 2.0f, the result,
               * compared with the TextureView preview, is split into 4 identical blocks
               * (bottom-left, bottom-right, top-left, top-right).
               */
              GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
              GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
              /**
               * {@link GLES20#GL_TEXTURE_MIN_FILTER} applies when the displayed texture is smaller than the loaded one
               * {@link GLES20#GL_TEXTURE_MAG_FILTER} applies when the displayed texture is larger than the loaded one
               *
               * {@link GLES20#GL_NEAREST}: use the color of the nearest pixel in the texture as the color of the pixel to draw
               * {@link GLES20#GL_LINEAR}: use a weighted average of the nearest colors in the texture as the color of the pixel to draw
               */
              GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
              GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
              GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0, format, GLES20.GL_UNSIGNED_BYTE, null);
          }
    • Call draw in Java code

    Crop the frame data and pass it in when the data source delivers a frame:

     @Override
      public void onPreview(final byte[] nv21, Camera camera) {
          // Crop the specified image region
          ImageUtil.cropNV21(nv21, this.squareNV21, previewSize.width, previewSize.height, cropRect);
          // refresh the GLSurfaceView
          roundCameraGLSurfaceView.refreshFrameNV21(this.squareNV21);
      }

    The NV21 data cropping code:

      /**
       * Crop NV21 data
       *
       * @param originNV21 the original NV21 data
       * @param cropNV21   the cropped output; its memory must be pre-allocated
       * @param width      width of the original data
       * @param height     height of the original data
       * @param left       left boundary of the crop region
       * @param top        top boundary of the crop region
       * @param right      right boundary of the crop region
       * @param bottom     bottom boundary of the crop region
       */
      public static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height, int left, int top, int right, int bottom) {
          int halfWidth = width / 2;
          int cropImageWidth = right - left;
          int cropImageHeight = bottom - top;
    
          // The original data Y is upper left
          int originalYLineStart = top * width;
          int targetYIndex = 0;
    
          // Original data UV upper left
          int originalUVLineStart = width * height + top * halfWidth;
    
          // The initial UV value of the target data
          int targetUVIndex = cropImageWidth * cropImageHeight;
    
          for (int i = top; i < bottom; i++) {
              System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
              originalYLineStart += width;
              targetYIndex += cropImageWidth;
              if ((i & 1) == 0) {
                  System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
                  originalUVLineStart += width;
                  targetUVIndex += cropImageWidth;
              }
          }
      }
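
    A hedged usage sketch (the values are illustrative): because NV21 stores chroma as VU pairs for 2x2 pixel blocks, the crop boundaries should be even, and the destination buffer must be allocated in advance:

      // Center-crop a 720x720 square out of a 1280x720 NV21 frame
      int width = 1280, height = 720;
      int side = Math.min(width, height) & ~1;           // 720, forced even
      int left = ((width - side) / 2) & ~1;              // 280
      int top = ((height - side) / 2) & ~1;              // 0
      byte[] squareNV21 = new byte[side * side * 3 / 2]; // NV21 uses 1.5 bytes per pixel
      ImageUtil.cropNV21(nv21, squareNV21, width, height, left, top, left + side, top + side);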

    Pass the data to the GLSurfaceView and refresh the frame:

      /**
       * Refresh a frame of NV21 data
       *
       * @param data NV21 data
       */
      public void refreshFrameNV21(byte[] data) {
          if (rendererReady) {
              yBuf.clear();
              uBuf.clear();
              vBuf.clear();
              putNV21(data, frameWidth, frameHeight);
              dataInput = true;
              requestRender();
          }
      }

    putNV21 is used to extract the Y, U and V components from the NV21 data:

      /**
       * Extract the Y, U and V components of NV21 data
       *
       * @param src    an NV21 frame
       * @param width  the frame width
       * @param height the frame height
       */
      private void putNV21(byte[] src, int width, int height) {
    
          int ySize = width * height;
          int frameSize = ySize * 3 / 2;
    
          // extract the Y component
          System.arraycopy(src, 0, yArray, 0, ySize);
    
          int k = 0;
    
          // extract the interleaved U and V components
          int index = ySize;
          while (index < frameSize) {
              vArray[k] = src[index++];
              uArray[k++] = src[index++];
          }
          yBuf.put(yArray).position(0);
          uBuf.put(uArray).position(0);
          vBuf.put(vArray).position(0);
      }
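
    For reference, NV21 places the full-resolution Y plane first, followed by interleaved VU pairs (one pair per 2x2 pixel block), which is why the loop above reads V before U. A minimal illustration for a 4x2 frame:

      byte[] nv21 = new byte[4 * 2 * 3 / 2]; // 12 bytes for a 4x2 frame
      // nv21[0..7]  : Y plane, row-major (Y0..Y7)
      // nv21[8..11] : interleaved chroma, V0 U0 V1 U1 (one VU pair per 2x2 block)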

    After requestRender executes, onDrawFrame is called back, where the three textures are bound to their data and the frame is drawn:

          @Override
          public void onDrawFrame(GL10 gl) {
              // Activate, bind, and set data for each texture individually
              if (dataInput) {
                  //y
                  GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
                  GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexture[0]);
                  GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                          0, 0, 0,
                          frameWidth,
                          frameHeight,
                          GLES20.GL_LUMINANCE,
                          GLES20.GL_UNSIGNED_BYTE,
                          yBuf);
    
                  //u
                  GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
                  GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexture[0]);
                  GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                          0, 0, 0,
                          frameWidth >> 1,
                          frameHeight >> 1,
                          GLES20.GL_LUMINANCE,
                          GLES20.GL_UNSIGNED_BYTE,
                          uBuf);
    
                  //v
                  GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
                  GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexture[0]);
                  GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                          0, 0, 0,
                          frameWidth >> 1,
                          frameHeight >> 1,
                          GLES20.GL_LUMINANCE,
                          GLES20.GL_UNSIGNED_BYTE,
                          vBuf);
                  // Draw after the data binding is complete
                  GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
              }
          }

    This completes the drawing.

4. Add a border

Sometimes the requirement is not just a circular preview; we may also need to add a border around the camera preview.


Border effect


Using the same idea, we dynamically change the border values and redraw. The code in the border custom View is as follows:

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (paint == null) {
            paint = new Paint();
            paint.setStyle(Paint.Style.STROKE);
            paint.setAntiAlias(true);
            SweepGradient sweepGradient = new SweepGradient(((float) getWidth() / 2), ((float) getHeight() / 2),
                    new int[]{Color.GREEN, Color.CYAN, Color.BLUE, Color.CYAN, Color.GREEN}, null);
            paint.setShader(sweepGradient);
        }
        drawBorder(canvas, 6);
    }


    private void drawBorder(Canvas canvas, int rectThickness) {
        if (canvas == null) {
            return;
        }
        paint.setStrokeWidth(rectThickness);
        Path drawPath = new Path();
        drawPath.addRoundRect(new RectF(0, 0, getWidth(), getHeight()), radius, radius, Path.Direction.CW);
        canvas.drawPath(drawPath, paint);
    }

    public void turnRound() {
        invalidate();
    }

    public void setRadius(int radius) {
        this.radius = radius;
    }
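
A hedged usage sketch of keeping the border and the clipped preview in sync (the roundBorderView name is illustrative; both views expose setRadius and turnRound as shown above):

    int radius = roundTextureView.getWidth() / 2; // half the side length for a circle
    roundTextureView.setRadius(radius);
    roundTextureView.turnRound();  // re-clip the preview outline
    roundBorderView.setRadius(radius);
    roundBorderView.turnRound();   // redraw the gradient border at the same radius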

5. Complete demo code:

Github.com/wangshengya…

  • Use the Camera API and Camera2 API, selecting the preview size closest to a square
  • Use the Camera API, dynamically adding a layer of parent control to achieve the square preview effect
  • Use the Camera API to get preview data and display it with OpenGL

Finally, I recommend a good free offline face recognition SDK for Android, which combines well with the techniques in this article:
Ai.arcsoft.com.cn/third/mobil…