Welcome to my GitHub
All of Xinchen's originals (with companion source code) are categorized and summarized here: github.com/zq2599/blog…
Overview of this article
- This article is the eighth installment of the "JavaCV camera in action" series. The previous installments consolidated the basics of frame and stream processing; from here on we start implementing some common CV capabilities, beginning with the face detection in this article
- Face detection in OpenCV is commonly based on the Haar feature cascade classifier; with the help of JavaCV, this article uses that classifier to implement face detection
Simple design
- Before starting to code, let's sort out what needs to be done:
- The detection service will be used in several scenarios: window preview, stream pushing, and saving to a file. Its code should therefore be kept separate from the preview and push-stream code. As shown in the figure below, each application holds the DetectService interface as a member variable (see the sketch after this list):
- The detection service is also not limited to face detection: body detection, object detection and more will follow. So we design a detection service interface, DetectService, and the face detection, body detection, and object detection classes are all implementations of this interface. As shown in the figure below, the application code for preview, stream pushing, and file saving works directly against the interface API, and the concrete implementation class is chosen at initialization time
- You may feel that Xinchen's design here is rather primitive: isn't this just Spring's dependency injection, so why not use Spring? Because the focus of this series is JavaCV, the code is kept simple and the Spring framework is not introduced
- Overall, the code to be written today is shown in the figure below. The green block, the AbstractCameraApplication class, was already completed in "JavaCV camera in action, part one: the basics"; the remaining three pieces, two Java classes and one interface, are all written in this article:
- That's it for the design. After the sketch below, let's start coding, beginning with the DetectService interface
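To make this pattern concrete, here is a minimal sketch of how an application might hold a DetectService and drive its lifecycle. The SomeCameraApplication class and its grabFrames/show placeholders are hypothetical illustrations only; the real PreviewCameraWithDetect written later in this article follows the same init / convert / releaseOutputResource sequence.

```java
import org.bytedeco.javacv.Frame;

// Hypothetical sketch: the application depends only on the DetectService interface,
// and the concrete implementation (face, body, object detection...) is injected at construction time
public class SomeCameraApplication {

    private final DetectService detectService;

    public SomeCameraApplication(DetectService detectService) {
        this.detectService = detectService;
    }

    public void run() throws Exception {
        // e.g. download the model file and load the classifier
        detectService.init();
        try {
            for (Frame frame : grabFrames()) {
                // run the detection and draw boxes on the frame
                Frame annotated = detectService.convert(frame);
                // preview, push, or save - the detection service does not care which
                show(annotated);
            }
        } finally {
            // release detection resources at the end
            detectService.releaseOutputResource();
        }
    }

    // Placeholder for the real frame source (camera, video file, ...)
    private Iterable<Frame> grabFrames() {
        return java.util.Collections.emptyList();
    }

    // Placeholder for the real output (preview window, stream pushing, file recording)
    private void show(Frame frame) {
    }
}
```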
DetectService interface
- New interface, DetectService.java, with three method definitions:
```java
/**
 * Initialization, such as downloading the model file
 * @throws Exception
 */
void init() throws Exception;

/**
 * Take the original frame, run the detection, and draw boxes around the results
 * @param frame
 * @return
 */
Frame convert(Frame frame);

/**
 * Release resources
 */
void releaseOutputResource();
```
- DetectService.java also contains two static methods. The first is buildGrayImage, which creates a new Mat with the same size as the input Mat; the new Mat holds the grayscale image, because the detection works on a grayscale image rather than the original one:
```java
static Mat buildGrayImage(Mat src) {
    return new Mat(src.rows(), src.cols(), CV_8UC1);
}
```
- The second static method, detect, is the most important one: it converts the original image to grayscale, runs the detection with the given classifier, and marks the results on the original image:
```java
static Frame detect(CascadeClassifier classifier, OpenCVFrameConverter.ToMat converter,
                    Frame rawFrame, Mat grabbedImage, Mat grayImage) {
    // Convert the current image to grayscale
    cvtColor(grabbedImage, grayImage, CV_BGR2GRAY);

    // Container that holds the detection results
    RectVector objects = new RectVector();

    // Run the detection
    classifier.detectMultiScale(grayImage, objects);

    // Total number of detection results
    long total = objects.size();

    // If nothing was detected, return the original frame
    if (total < 1) {
        return rawFrame;
    }

    // For each detection result, build a rectangle from the result data and draw it on the original image
    for (long i = 0; i < total; i++) {
        Rect r = objects.get(i);
        int x = r.x(), y = r.y(), w = r.width(), h = r.height();
        rectangle(grabbedImage, new Point(x, y), new Point(x + w, y + h), Scalar.RED, 1, CV_AA, 0);
    }

    // Release the detection result resources
    objects.close();

    // Convert the annotated image to a frame and return it
    return converter.convert(grabbedImage);
}
```
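For reference, the method declarations and the two static methods above rely on the imports below in DetectService.java. This list assumes the JavaCV 1.5.x package layout (org.bytedeco.opencv.*); older JavaCV releases expose the same classes under org.bytedeco.javacpp instead.

```java
// Imports assumed by the DetectService snippets above (JavaCV 1.5.x package layout)
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Point;
import org.bytedeco.opencv.opencv_core.Rect;
import org.bytedeco.opencv.opencv_core.RectVector;
import org.bytedeco.opencv.opencv_core.Scalar;
import org.bytedeco.opencv.opencv_objdetect.CascadeClassifier;

import static org.bytedeco.opencv.global.opencv_core.CV_8UC1;
import static org.bytedeco.opencv.global.opencv_imgproc.CV_AA;
import static org.bytedeco.opencv.global.opencv_imgproc.CV_BGR2GRAY;
import static org.bytedeco.opencv.global.opencv_imgproc.cvtColor;
import static org.bytedeco.opencv.global.opencv_imgproc.rectangle;
```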
- That's all for the DetectService.java interface: three method declarations and two static methods. Next comes an implementation class of the interface
The face detection implementation class
- The DetectService interface defines only three methods: initialization (init), detection (convert), and resource release (releaseOutputResource). Now let's write the implementation class, HaarCascadeDetectService.java, which performs the actual face detection
- Its convert method calls the static method DetectService.detect to turn the original frame into a frame annotated with the detection results:
```java
@Slf4j
public class HaarCascadeDetectService implements DetectService {

    /**
     * The Mat holding each original frame
     */
    private Mat grabbedImage = null;

    /**
     * The grayscale Mat corresponding to the original image
     */
    private Mat grayImage = null;

    /**
     * The classifier
     */
    private CascadeClassifier classifier;

    /**
     * The converter
     */
    private OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();

    /**
     * Download address of the model file
     */
    private String modelFileUrl;

    /**
     * Constructor, which specifies the model file download address
     * @param modelFileUrl
     */
    public HaarCascadeDetectService(String modelFileUrl) {
        this.modelFileUrl = modelFileUrl;
    }

    /**
     * Initialization: download the model file and instantiate the classifier
     * @throws Exception
     */
    @Override
    public void init() throws Exception {
        // Download the model file
        URL url = new URL(modelFileUrl);
        File file = Loader.cacheResource(url);

        // The full local path of the downloaded model file
        String classifierName = file.getAbsolutePath();

        // Instantiate the classifier from the model file
        classifier = new CascadeClassifier(classifierName);
        if (classifier == null) {
            log.error("Error loading classifier file [{}]", classifierName);
            System.exit(1);
        }
    }

    @Override
    public Frame convert(Frame frame) {
        // Convert the frame to a Mat
        grabbedImage = converter.convert(frame);

        // The grayscale Mat used for detection
        if (null == grayImage) {
            grayImage = DetectService.buildGrayImage(grabbedImage);
        }

        // Run face detection, annotate the image according to the results, and return the frame to show in the preview window
        return DetectService.detect(classifier, converter, frame, grabbedImage, grayImage);
    }

    /**
     * Release the face detection resources before the program ends
     */
    @Override
    public void releaseOutputResource() {
        if (null != grabbedImage) { grabbedImage.release(); }
        if (null != grayImage) { grayImage.release(); }
        if (null != classifier) { classifier.close(); }
    }
}
```
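Before wiring this class into the camera pipeline, it can be smoke-tested against a single still image. Below is a minimal sketch under a few assumptions: the SingleImageDetectDemo class is hypothetical and not part of the article's project, the file paths are placeholders, and the JavaCV 1.5.x opencv_imgcodecs bindings are on the classpath.

```java
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.opencv.opencv_core.Mat;

import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imwrite;

// Hypothetical smoke test: run HaarCascadeDetectService once on a still image instead of a camera stream
public class SingleImageDetectDemo {

    public static void main(String[] args) throws Exception {
        String modelPath = "https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml";
        DetectService detectService = new HaarCascadeDetectService(modelPath);
        // Download the model file and load the classifier
        detectService.init();

        OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();
        // Read a test picture from disk (placeholder path)
        Mat image = imread("/tmp/test.jpg");
        // Detect faces and draw red rectangles around them
        Frame detected = detectService.convert(converter.convert(image));
        // Save the annotated image back to disk (placeholder path)
        imwrite("/tmp/test-detected.jpg", converter.convert(detected));

        detectService.releaseOutputResource();
    }
}
```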
The main program PreviewCameraWithDetect
- In "JavaCV camera in action, part one: the basics", the parent class AbstractCameraApplication was created in the simple-grab-push project, so we continue to use that project and create a subclass that implements its abstract methods
- Before coding, let's review the basic structure of the parent class, shown in the figure below: the bold entries are methods already provided by the parent class, and the red blocks are the abstract methods a subclass must implement. Our goal is local window preview, so we implement the three red methods:
- Create a new file, PreviewCameraWithDetect.java, a subclass of AbstractCameraApplication. Its code is very simple and is described next in order, following the diagram above
- We define a member variable of type CanvasFrame, previewCanvas, which is the local window to display video frames:
```java
protected CanvasFrame previewCanvas;
```
- Also define a member variable of type DetectService for the detection work, and assign it in the constructor:
```java
private DetectService detectService;

public PreviewCameraWithDetect(DetectService detectService) {
    this.detectService = detectService;
}
```
- Next comes initialization. In addition to instantiating and configuring previewCanvas, we also call the detection service's initialization method:
```java
@Override
protected void initOutput() throws Exception {
    previewCanvas = new CanvasFrame("Camera Preview", CanvasFrame.getDefaultGamma() / grabber.getGamma());
    previewCanvas.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    previewCanvas.setAlwaysOnTop(true);

    // Initialize the detection service
    detectService.init();
}
```
- The output method defines what to do with each frame of video data: the frame is handed to the detection service, and the result is displayed in the local window:
```java
@Override
protected void output(Frame frame) {
    // The original frame is handed to the detection service, which runs the detection,
    // marks the results on the original image, and converts it back to a frame
    Frame detectedFrame = detectService.convert(frame);

    // The frame shown in the preview window is the one annotated with the detection results
    previewCanvas.showImage(detectedFrame);
}
```
- Since the detection service is also time-consuming, the waiting time after each frame is output is shortened here to avoid lag during preview. Please adjust it according to your computer's CPU performance; here I have changed it to one eighth of the original interval:
```java
@Override
protected int getInterval() {
    return super.getInterval() / 8;
}
```
- Finally, we need to close the local window and call releaseOutputResource of the detection service to release its related resources:
```java
@Override
protected void releaseOutputResource() {
    if (null != previewCanvas) {
        previewCanvas.dispose();
    }

    // The detection service also needs to release its resources
    detectService.releaseOutputResource();
}
```
- The main method completes the camera preview in a local window. Note that the value 1000 passed to action means the preview lasts for 1000 seconds, and modelPath is the address of the model file on GitHub:
```java
public static void main(String[] args) {
    String modelPath = "https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml";
    // String modelPath = "https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_upperbody.xml";
    new PreviewCameraWithDetect(new HaarCascadeDetectService(modelPath)).action(1000);
}
```
- Run the main method to start the program, as shown in the picture below. When a face appears in the preview window, a red rectangle is drawn around it (to avoid infringing on the portrait rights of passers-by, the faces have been manually mosaicked):
- At this point, local window preview with integrated face detection is complete. Thanks to the power of JavaCV, the whole process was easy and enjoyable. Please keep following Xinchen's originals; the "JavaCV camera in action" series will show more rich applications
- All of the code covered in this article can be found in the GitHub repository described below
Download the source code
- The full source code for JavaCV Camera In Action is available on GitHub at github.com/zq2599/blog…
| Name | Link | Note |
|---|---|---|
| Project home page | github.com/zq2599/blog… | The project's home page on GitHub |
| Git repository address (HTTPS) | github.com/zq2599/blog… | Address of the project's source repository, HTTPS protocol |
| Git repository address (SSH) | [email protected]:zq2599/blog_demos.git | Address of the project's source repository, SSH protocol |
- The Git project has multiple folders. The source code for this series is in the javacv-tutorials folder, as shown in the red box below:
- javacv-tutorials contains many subprojects; the code for the "JavaCV camera in action" series is in the simple-grab-push project:
Welcome to follow me on Juejin: programmer Xinchen
On the road of learning, you are not alone; Xinchen's originals accompany you all the way…