Welcome to my GitHub

Here is a categorized index of all of Xinchen's original works (including supporting source code): github.com/zq2599/blog…

About the “JavaCV Camera in Action” series

  • “JavaCV Camera in Action”, as the name implies, is a collection of hands-on exercises that use the JavaCV framework to work with cameras. It is an original series by a Java programmer exploring the field of computer vision; through continuous coding practice, it walks you through handling video, audio, and images
  • Note that the cameras used throughout the series are USB cameras or the built-in cameras of laptops, not network-connected smart cameras

Overview of this article

  • As the first article of the series, this one is important: it lays the groundwork for everything from the environment to the code. In short, this article consists of the following parts:
  1. Environment and version information
  2. Basic routine analysis
  3. Basic framework coding
  4. Deploying a Media Server
  • Let’s start with environment and version information

Environment and version information

  • The hardware and software environment used in this practice is listed below for your reference:
  1. Operating system: Win10
  2. JDK: 1.8.0_291
  3. Maven: 3.8.1
  4. IDEA: 2021.2.2 (Ultimate Edition)
  5. JavaCV: 1.5.6
  6. Media server: Docker-based nginx-rtmp, using the image alfg/nginx-rtmp:v1.3.1

Download the source code

  • The full source code for “JavaCV Camera in Action” is available on GitHub at github.com/zq2599/blog…
| Name | Link | Note |
| --- | --- | --- |
| Project home page | github.com/zq2599/blog… | The project's home page on GitHub |
| Git repository address (HTTPS) | github.com/zq2599/blog… | The project source repository address, HTTPS protocol |
| Git repository address (SSH) | git@github.com:zq2599/blog_demos.git | The project source repository address, SSH protocol |
  • The Git project contains multiple folders. The source code for this series is in the javacv-tutorials folder, as shown in the red box below:

  • javacv-tutorials contains many sub-projects. The code for the “JavaCV Camera in Action” series is in the simple-grab-push project:

Basic routine analysis

  • The series covers many camera-based exercises, such as window preview, saving video to a file, and pushing video to a media server. The basic routine is roughly the same for all of them, as shown in this simplest of flow charts:

  • As the picture above shows, the whole process is to continuously grab frames from the camera, then process and output them; a minimal sketch of this loop is shown below
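  • For readers who want to see this routine in code before the full framework is introduced, here is a minimal self-contained sketch of the grab-process-output loop, assuming an OpenCVFrameGrabber on camera 0; the class name GrabLoopSketch, the ten-second duration, and the empty process method are illustrative placeholders, not part of the series' source:

import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.OpenCVFrameGrabber;

public class GrabLoopSketch {

    public static void main(String[] args) throws FrameGrabber.Exception, InterruptedException {
        // 0 is usually the first (or only) camera on the machine
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // grab for ten seconds, then stop
        long endTime = System.currentTimeMillis() + 10_000L;
        while (System.currentTimeMillis() < endTime) {
            Frame frame = grabber.grab();   // 1. take a frame from the camera
            if (frame == null) {
                break;
            }
            process(frame);                 // 2. process and output the frame
            Thread.sleep(1000 / 30);        // pace the loop at roughly 30 frames per second
        }

        grabber.close();
    }

    private static void process(Frame frame) {
        // placeholder for the "process and output" step: preview, save to file, push to a server...
    }
}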

Basic framework coding

  • Looking at the basic routine above, you may already have realized that if the routine is fixed, the code can be fixed as well

  • Exactly. So how do we organize the code around this fixed pattern? My approach is to develop an abstract class named AbstractCameraApplication as the parent class of every application in the “JavaCV Camera in Action” series. It is responsible for building the whole flow of initialization, frame grabbing, processing, and output, while each subclass focuses on the specific processing and output of the frame data. The UML diagram of the whole system is as follows:

  • Next comes the abstract class AbstractCameraApplication.java. Before coding, the diagram below shows its main methods and execution flow; all method names are in bold, and the red blocks represent abstract methods to be implemented by subclasses:

  • The next step is to create the project. I created a Maven project; its pom.xml is as follows:

      
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>javacv-tutorials</artifactId>
        <groupId>com.bolingcavalry</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.bolingcavalry</groupId>
    <version>1.0-SNAPSHOT</version>
    <artifactId>simple-grab-push</artifactId>
    <packaging>jar</packaging>

    <properties>
        <!-- Current javacpp version -->
        <javacpp.version>1.5.6</javacpp.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-to-slf4j</artifactId>
            <version>2.13.3</version>
        </dependency>

        <!-- JavaCV dependency; this one is enough -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>${javacpp.version}</version>
        </dependency>
    </dependencies>
</project>
  • Next is the complete code of AbstractCameraApplication.java. Its flow and methods are consistent with the diagram above, and detailed comments have been added; a few caveats are discussed after the listing:
package com.bolingcavalry.grabpush.camera;

import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacv.*;
import org.bytedeco.opencv.global.opencv_imgproc;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Scalar;

import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * @author will
 * @email [email protected]
 * @date 2021/11/19 8:07 AM
 * @description Camera application base class, which defines the basic flow of pulling and pushing; subclasses only need to implement the specific business methods
 */
@Slf4j
public abstract class AbstractCameraApplication {

    /** Camera number; if there is only one camera, it is 0 */
    protected static final int CAMERA_INDEX = 0;

    /** Frame grabber */
    protected FrameGrabber grabber;

    /** Output frame rate */
    @Getter
    private final double frameRate = 30;

    /** Width of the camera video */
    @Getter
    private final int cameraImageWidth = 1280;

    /** Height of the camera video */
    @Getter
    private final int cameraImageHeight = 720;

    /** Converter */
    private final OpenCVFrameConverter.ToIplImage openCVConverter = new OpenCVFrameConverter.ToIplImage();

    /** Instantiates and initializes resources associated with the output operation */
    protected abstract void initOutput() throws Exception;

    /** Output */
    protected abstract void output(Frame frame) throws Exception;

    /** Releases resources associated with the output operation */
    protected abstract void releaseOutputResource() throws Exception;

    /**
     * The interval between two frames
     * @return interval in milliseconds
     */
    protected int getInterval() {
        // For example, at 15 frames per second the interval between two frames is (1000/15) ms
        return (int) (1000 / frameRate);
    }

    /**
     * Instantiates the frame grabber, an OpenCVFrameGrabber object by default;
     * subclasses can override this as needed
     * @throws FrameGrabber.Exception
     */
    protected void instanceGrabber() throws FrameGrabber.Exception {
        grabber = new OpenCVFrameGrabber(CAMERA_INDEX);
    }

    /**
     * Grabs a frame with the frame grabber; calls grab() by default,
     * subclasses can override this as needed
     * @return the grabbed frame
     */
    protected Frame grabFrame() throws FrameGrabber.Exception {
        return grabber.grab();
    }

    /**
     * Initializes the frame grabber
     * @throws Exception
     */
    protected void initGrabber() throws Exception {
        // Instantiate the frame grabber
        instanceGrabber();

        // The camera may support multiple resolutions, so the width and height are specified here explicitly
        // You can also skip setting them and query the actual values with grabber.getImageWidth()/getImageHeight()
        grabber.setImageWidth(cameraImageWidth);
        grabber.setImageHeight(cameraImageHeight);

        // Start the grabber
        grabber.start();
    }

    /**
     * Continuously grab, process, and output frames
     * @param grabSeconds duration in seconds
     * @throws Exception
     */
    private void grabAndOutput(int grabSeconds) throws Exception {
        // Time formatting tool used for the watermark
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        long endTime = System.currentTimeMillis() + 1000L * grabSeconds;

        // The interval between the output of two frames; the default is 1000 divided by the frame rate, subclasses can modify this
        int interVal = getInterval();

        // The position of the watermark on the image
        org.bytedeco.opencv.opencv_core.Point point = new org.bytedeco.opencv.opencv_core.Point(15, 35);

        Frame captureFrame;
        Mat mat;

        // End the loop after the specified time
        while (System.currentTimeMillis()<endTime) {
            // Grab a frame
            captureFrame = grabFrame();

            if (null==captureFrame) {
                log.error("Frame object is empty");
                break;
            }

            // Convert a frame object to a MAT object
            mat = openCVConverter.convertToMat(captureFrame);

            // Add a watermark to the image, the watermark content is the current time, the position is in the upper left corner
            opencv_imgproc.putText(mat,
                    simpleDateFormat.format(new Date()),
                    point,
                    opencv_imgproc.CV_FONT_VECTOR0,
                    0.8, new Scalar(0, 200, 255, 0),
                    1, 0, false);

            // Subclass output
            output(openCVConverter.convert(mat));

            // Pause for an appropriate interval so that the human eye does not perceive flicker
            if(interVal>0) {
                Thread.sleep(interVal);
            }
        }

        log.info("Output end");
    }

    /** Release all resources */
    private void safeRelease() {
        try {
            // The resources that the subclass needs to release
            releaseOutputResource();
        } catch (Exception exception) {
            log.error("do releaseOutputResource error", exception);
        }

        if (null != grabber) {
            try {
                grabber.close();
            } catch (Exception exception) {
                log.error("close grabber error", exception);
            }
        }
    }

    /**
     * Consolidates all initialization operations
     * @throws Exception
     */
    private void init() throws Exception {
        long startTime = System.currentTimeMillis();

        // Set the ffmpeg log level
        avutil.av_log_set_level(avutil.AV_LOG_INFO);
        FFmpegLogCallback.set();

        // Instantiate and initialize the frame grabber
        initGrabber();

        // Instantiate and initialize the resources associated with the output operation.
        // The output is determined by subclasses, such as window preview, save video files, etc
        initOutput();

        log.info("Initialization completed, time [{}] ms, frame rate [{}], image width [{}], image height [{}]",
                System.currentTimeMillis()-startTime,
                frameRate,
                cameraImageWidth,
                cameraImageHeight);
    }

    /** Perform the fetch and output operations */
    public void action(int grabSeconds) {
        try {
            // Initialize the operation
            init();
            // Keep pulling and pushing
            grabAndOutput(grabSeconds);
        } catch (Exception exception) {
            log.error("execute action error", exception);
        } finally {
            // Release resources in any case
            safeRelease();
        }
    }
}
  • There are a few things to note about this code:
  1. The object responsible for fetching data from the camera is the OpenCVFrameGrabber, or frame grabber
  2. In the initGrabber method, setImageWidth and setImageHeight are called to set the image width and height for the frame grabber. The frame grabber can actually determine the width and height by itself, but since some cameras support multiple resolutions, it is better to set them explicitly according to your own situation
  3. The grabAndOutput method uses a while loop to continuously grab, process, and output frames. The loop ends after the specified duration; if that end condition does not meet your needs (for example, stopping when a key is pressed), adjust it to your actual situation, as in the sketch after this list
  4. In the grabAndOutput method, each grabbed frame is converted to a Mat object, text whose content is the current time is drawn onto the Mat, and the Mat is then converted back to a Frame object and passed to the subclass's output method. This way, every frame the subclass processes and outputs already carries a time watermark
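  • As mentioned in note 3, the time-based end condition is only one option. The snippet below is a hypothetical, self-contained sketch (the class StopOnEnterDemo is my own naming, not part of the series' source) of a console program that keeps looping until the user presses Enter; in a real subclass, the grab/watermark/output calls of grabAndOutput would go where the placeholder comment is:

import java.io.IOException;

public class StopOnEnterDemo {

    public static void main(String[] args) throws IOException, InterruptedException {
        System.out.println("Running... press Enter to stop");

        // System.in.available() becomes greater than zero once the user presses Enter
        while (System.in.available() == 0) {
            // here: grabFrame(), add the watermark, call output(frame), as in grabAndOutput()
            Thread.sleep(66); // roughly 15 frames per second, for illustration only
        }

        System.out.println("Stopped");
    }
}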
  • At this point, the parent class is complete. In the practices that follow, we only need to focus on subclasses that process and output the frame data; a hypothetical example of such a subclass is sketched below
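  • For example, a window-preview subclass might look like the sketch below. This is only an illustration written against the parent class above: the class name PreviewCameraDemo and the 30-second duration are my own assumptions, and the series develops its real subclasses in the following articles:

package com.bolingcavalry.grabpush.camera;

import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.Frame;

import javax.swing.WindowConstants;

public class PreviewCameraDemo extends AbstractCameraApplication {

    /** Window used to display each frame */
    private CanvasFrame previewCanvas;

    @Override
    protected void initOutput() {
        previewCanvas = new CanvasFrame("Camera preview", CanvasFrame.getDefaultGamma());
        previewCanvas.setCanvasSize(getCameraImageWidth(), getCameraImageHeight());
        previewCanvas.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
    }

    @Override
    protected void output(Frame frame) {
        // show the watermarked frame handed over by the parent class
        previewCanvas.showImage(frame);
    }

    @Override
    protected void releaseOutputResource() {
        if (previewCanvas != null) {
            previewCanvas.dispose();
        }
    }

    public static void main(String[] args) {
        // preview for 30 seconds, then release all resources
        new PreviewCameraDemo().action(30);
    }
}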

Deploying a Media Server

  • The “JavaCV Camera in Action” series involves pushing streams and playing them remotely, which requires a streaming media server. Its role is shown in the figure below, and we deploy it in advance in this article:

  • As for the type of media server, I chose the common nginx-rtmp. For simplicity, I found a Linux machine and deployed it there with Docker, which takes a single command:
docker run -d --name nginx_rtmp -p 1935:1935 -p 18080:80 alfg/nginx-rtmp:v1.3.1
  • As a special case, I have a spare Raspberry Pi 3B that can also be used as the media server; the image shamelesscookie/nginx-rtmp-ffmpeg:latest provides an ARM64 version, suitable for use on the Raspberry Pi:
docker run -d --name nginx_rtmp -p 1935:1935 -p 18080:80 shamelesscookie/nginx-rtmp-ffmpeg:latest
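  • Once the media server is running, a subclass of AbstractCameraApplication can push the watermarked frames to it. The sketch below is only an assumption of how such a subclass might look: the class name PushToRtmpDemo, the server IP, and the application/stream names in the RTMP address are placeholders to be adapted to your own nginx-rtmp configuration, and the real push implementation of the series comes in a later article:

package com.bolingcavalry.grabpush.camera;

import org.bytedeco.ffmpeg.global.avcodec;
import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;

public class PushToRtmpDemo extends AbstractCameraApplication {

    /** Assumed RTMP address: replace the IP, application name, and stream key with your own */
    private static final String RTMP_URL = "rtmp://192.168.1.100:1935/stream/camera";

    private FFmpegFrameRecorder recorder;

    @Override
    protected void initOutput() throws Exception {
        recorder = new FFmpegFrameRecorder(RTMP_URL, getCameraImageWidth(), getCameraImageHeight());
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
        recorder.setFormat("flv");              // RTMP carries an FLV-formatted stream
        recorder.setFrameRate(getFrameRate());
        recorder.start();
    }

    @Override
    protected void output(Frame frame) throws Exception {
        // encode and send the frame to the media server
        recorder.record(frame);
    }

    @Override
    protected void releaseOutputResource() throws Exception {
        if (recorder != null) {
            recorder.close();
        }
    }

    public static void main(String[] args) {
        // push for 60 seconds, then release all resources
        new PushToRtmpDemo().action(60);
    }
}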
  • At this point, the preparation work for the “JavaCV Camera in Action” series is complete. The wonderful journey starts in the next article; Xinchen originals will not let you down ~

You are not alone; Xinchen's originals accompany you all the way

Github.com/zq2599/blog…