
Hello everyone, my name is [Bean Dried Peanut]. This time I bring you an article on calculating the mAP detection accuracy of an object-detection model under keras-yolo3.

While working on my graduation project in college, I spent a day figuring out what object-detection accuracy actually is, how to test it, and how to run the code (attached below), so I wrote this note to record it!

This article integrates the following blogs together with my own views: blog.csdn.net/weixin_4479… blog.csdn.net/plSong_CSDN… blog.csdn.net/weixin_4124…

Background knowledge

1. Concept of IOU

The concept of IOU is simple: it measures the overlap between the prediction box and the ground-truth box. Here is an example: the green box is the ground-truth box (not very green…), and the red box is the prediction box. When we need to judge the relationship between two boxes, we mainly judge their degree of overlap.

The formula for calculating IOU is:

IOU = (area of intersection) / (area of union)

You can see that IOU is a ratio: the intersection over the union. The numerator is the overlapping region between the prediction box and the ground-truth box; the denominator is the total area covered by the prediction box and the ground-truth box together. That ratio of the intersection to the union is the IOU.
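
To make this concrete, here is a minimal sketch of how IOU can be computed for two axis-aligned boxes; the function name and the (left, top, right, bottom) box format are my own choices for illustration, not from the original post.

def iou(box_a, box_b):
    # Boxes are (left, top, right, bottom) in pixel coordinates
    inter_left = max(box_a[0], box_b[0])
    inter_top = max(box_a[1], box_b[1])
    inter_right = min(box_a[2], box_b[2])
    inter_bottom = min(box_a[3], box_b[3])
    # Width/height of the overlap; zero if the boxes do not intersect
    inter_w = max(0, inter_right - inter_left)
    inter_h = max(0, inter_bottom - inter_top)
    inter_area = inter_w * inter_h
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union_area = area_a + area_b - inter_area
    return inter_area / union_area if union_area > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # 0.333..., one third overlap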

2. The concept of TP TN FP FN

TP, TN, FP, and FN are combinations of four letters: T, F, P, and N.

T is True; F is False; P is Positive; N is Negative.

T or F denotes whether the classification was correct; P or N denotes whether the sample was predicted as positive or as negative.

TP (True Positives) means the sample was classified as positive, and correctly so. TN (True Negatives) means it was classified as negative, and correctly so. FP (False Positives) means it was classified as positive, but wrongly (the sample is actually negative). FN (False Negatives) means it was classified as negative, but wrongly (the sample is actually positive).

In the process of mAP calculation, TP, FP and FN are mainly used.

3. Precision and recall

TP is a sample that the classifier considers positive and that is indeed positive; FP is a sample that the classifier considers positive but that is actually negative. Precision is therefore "the proportion of samples the classifier considers positive that are indeed positive":

Precision = TP / (TP + FP)

That is, precision is the proportion of the predicted positives that are actually positive.

TP is a sample that the classifier considers positive and that is indeed positive; FN is a sample that the classifier considers negative but that is actually positive. Recall is "the proportion of all truly positive samples that the classifier correctly considers positive":

Recall = TP / (TP + FN)

That is, recall is the proportion of the actual positives that are predicted as positive.
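
As a quick sketch of the two formulas (the counts are hypothetical; they happen to match the concept example in the next section):

def precision(tp, fp):
    # Of everything predicted positive, how much was right?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything actually positive, how much was found?
    return tp / (tp + fn)

print(precision(tp=2, fp=2))  # 0.5
print(recall(tp=2, fn=1))     # 0.666...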

4. Concept examples

As the figure shows, the blue box is the ground-truth box; the green and red boxes are prediction boxes. The green box is a positive sample (its overlap with the ground-truth box is high), and the red box is a negative sample (its overlap with the ground-truth box is low).

In general, a prediction box is considered a positive sample when its IOU with a ground-truth box is >= 0.5 (that is, the IOU threshold is set to 0.5).

So for this picture there are 3 ground-truth boxes, 2 positive samples, and 2 negative samples. At this time TP = 2, FP = 2, FN = 1, so Precision = 2 / (2 + 2) = 0.5 and Recall = 2 / (2 + 1) ≈ 0.67.
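
Here is a minimal sketch of that counting logic, reusing the iou() helper sketched earlier; greedily matching each prediction to the first unmatched ground-truth box is a simplification for illustration, not necessarily the exact matching rule of any particular tool.

def count_tp_fp_fn(pred_boxes, gt_boxes, iou_threshold=0.5):
    # Greedy matching: each ground-truth box may be matched at most once
    matched = set()
    tp = 0
    for pred in pred_boxes:
        for i, gt in enumerate(gt_boxes):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp  # predictions that matched no ground-truth box
    fn = len(gt_boxes) - tp    # ground-truth boxes that no prediction matched
    return tp, fp, fn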

5. Limitations of a single indicator

In target detection algorithms there is a very important concept: Confidence. If the confidence threshold is set high, the predicted results match the actual situation very well; if it is set low, there will be many false detections.

Suppose there are 3 positive samples in total in an image, and the detector produces 10 predictions for it, of which 3 are actually positive and 7 are actually negative, with confidences numbered in descending order. If we set the acceptable confidence threshold to 0.95, the detector will treat only the sample with sequence number 1 as positive and all the others as negative: TP = 1, FP = 0, FN = 2. At this point the Precision is very high, but we actually detected only one of the positive samples and missed two, so using Precision alone is not appropriate.

If instead the acceptable confidence threshold is set to 0.35, the detector will treat the first six samples as positive and the others as negative: TP = 3, FP = 3, FN = 0.

At this point the Recall is very high, but of the six samples the detector considers positive, only 3 are indeed positive while 3 are negative, which is a very serious rate of false detection. So Recall alone is not appropriate either; the right way to evaluate is to combine the two.
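
To illustrate the trade-off, here is a small sketch with made-up confidences and labels (not the values from the original post's table) that reproduces the two situations above:

# Hypothetical detections: (confidence, actually_positive), sorted by confidence
detections = [(0.97, True), (0.90, False), (0.83, True), (0.70, False),
              (0.62, False), (0.40, True), (0.30, False), (0.25, False),
              (0.20, False), (0.10, False)]
total_positives = 3

for threshold in (0.95, 0.35):
    kept = [label for score, label in detections if score >= threshold]
    tp = sum(kept)          # True counts as 1
    fp = len(kept) - tp
    fn = total_positives - tp
    print(threshold, 'precision:', tp / (tp + fp), 'recall:', tp / (tp + fn))
# 0.95 -> precision 1.0, recall 0.33...; 0.35 -> precision 0.5, recall 1.0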

What is the AP

Using precision or recall alone is not scientific. So people thought: why not just measure the area under the PR curve? That is where the concept of the AP value comes from. The "average" here is the same idea as averaging precision: AP is the area under the curve drawn from the combinations of Precision and Recall obtained at different confidence thresholds, as shown in the picture below. Taking different confidence thresholds gives different Precision and Recall values, and when the thresholds are sampled densely enough, you get many (Recall, Precision) points.

At this point, the Precision and Recall points trace a curve on the plot, and the area under this curve is the AP value of one class.
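
A minimal sketch of that area computation, given (recall, precision) points sorted by increasing recall; taking the monotone envelope of precision before summing is the usual VOC-style treatment, and the sample numbers are made up:

def average_precision(recalls, precisions):
    # Pad the curve so it spans recall 0..1
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing (envelope of the raw curve)
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangles over each recall step
    ap = 0.0
    for i in range(1, len(r)):
        ap += (r[i] - r[i - 1]) * p[i]
    return ap

print(average_precision([0.33, 0.67, 1.0], [1.0, 0.8, 0.5]))  # ~0.767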

What is the mAP

mAP (Mean Average Precision) is the mean of the AP values: the average of the AP values over all classes, i.e. the mean of the per-class APs on the validation set.

It is the index used to measure detection accuracy in object detection. The calculation formula is: mAP = (sum of the average precisions of all categories) / (number of categories).

AP measures how good the learned model is in each category, and mAP measures how good the learned model is across all categories. After getting the per-class APs, computing the mAP is very easy: just take the average of all the APs.
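
A one-line sketch of that averaging step (the class names and AP values here are hypothetical):

per_class_ap = {'eye': 0.91, 'nose': 0.84, 'mouth': 0.88}  # hypothetical APs
map_value = sum(per_class_ap.values()) / len(per_class_ap)
print('mAP:', map_value)  # 0.876...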

Drawing the mAP

It is mainly divided into the following parts:

(1) Prepare the files

YOLOv3 Keras implementation address: github.com/qqwweee/ker…

Train your own weights file (.h5),

Put it in \model_data

Create a test folder and place the test images in it

Test and save the test results (yolo_test.py); change coco_classes.txt to your own training categories

(2) Prepare what the mAP calculation requires: detection-results, ground-truth and images-optional

(3) Run main.py (mAP project)

1 Download the mAP package

Download the mAP code from github.com/Cartucho/mA…

To calculate your own data, clear the contents of the three folders (detection-results, ground-truth, images-optional) under mAP-master/input in the download so that you can put your own data there later. (If you do not empty the folders, new data will be added alongside the existing files and the original data will not be overwritten.)

The mAP calculation mainly consists of putting the corresponding files into the detection-results, ground-truth and images-optional folders, and then generating the result directly with the main.py file in the download.
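
A small sketch for emptying those three folders (the input_dir path is an assumption; adjust it to where you unpacked the repo):

import os
import shutil

input_dir = 'mAP-master/input'  # assumed location of the repo's input folder
for sub in ('detection-results', 'ground-truth', 'images-optional'):
    folder = os.path.join(input_dir, sub)
    shutil.rmtree(folder, ignore_errors=True)  # delete the folder and old contents
    os.makedirs(folder)                        # recreate it empty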

2 images-optional folder

Let's start with the simplest one, images-optional, which is where you put the images you want to test.

The first way: simply paste in the test images you have prepared.

(Here I have prepared 188 test pictures of masks.)

The second way: point IMG_PATH in the main.py file to the folder containing your test images. The following is part of the code in the main.py file.

GT_PATH = os.path.join(os.getcwd(), 'input', 'ground-truth')
DR_PATH = os.path.join(os.getcwd(), 'input', 'detection-results')
# if there are no images then no animation can be shown
# IMG_PATH = os.path.join(os.getcwd(), 'input', 'images-optional')  # the default images-optional
IMG_PATH = '/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/test'  # point this to your own test path

Once you've pasted in your images, or changed the path, the images-optional folder is ready. It's the simplest one.

3 ground-truth folder

The ground-truth folder holds the actual values, i.e. the boxes you originally labeled: for example, the class, position and size of the eyes, nose and mouth in a picture.

The format is classname left top right bottom

eye 86 274 146 305
eye 214 273 282 303
nose 138 334 220 385
mouth 139 404 220 437

Here we need to put the ground-truth values into TXT files, which we can generate from the XML annotation files. First, import the XML files into the ground-truth folder; then convert them to TXT files with convert_gt_xml.py.

3.1 Import the XML files into the ground-truth folder

First, if you have few test images, you can simply paste the corresponding XML files into the ground-truth folder. The second way, for when you have too many test images to find one by one: the original blogger wrote a short script (findxml.py, below; you can put it under mAP-master/scripts/extra) that finds the corresponding XML files in your Annotations folder and copies them into the ground-truth folder.

import os
import shutil

testfilepath = '/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/test'
xmlfilepath = '/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/VOCdevkit/VOC2007/Annotations/'
xmlsavepath = '/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/mAP-master/input/ground-truth/'

# Build the list of XML file names matching the test images (same base name)
test_jpg = os.listdir(testfilepath)
L = [name[:-4] + '.xml' for name in test_jpg]

# Copy each matching annotation into the ground-truth folder
for filename in L:
    shutil.copy(os.path.join(xmlfilepath, filename), xmlsavepath)

3.2 Convert the XML files into TXT files

Run your mAP-master/scripts/extra/convert_gt_xml.py. After it finishes, your XML files are automatically moved into ground-truth/backup, and the corresponding TXT files are generated under the ground-truth folder.
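
For reference, here is a minimal sketch of what that conversion does, assuming VOC-style XML annotations; it is a simplified stand-in for illustration, not the script's exact code:

import glob
import os
import xml.etree.ElementTree as ET

for xml_path in glob.glob('mAP-master/input/ground-truth/*.xml'):
    root = ET.parse(xml_path).getroot()
    lines = []
    for obj in root.findall('object'):
        name = obj.find('name').text
        bb = obj.find('bndbox')
        # One "classname left top right bottom" line per labeled object
        lines.append(' '.join([name, bb.find('xmin').text, bb.find('ymin').text,
                               bb.find('xmax').text, bb.find('ymax').text]))
    with open(os.path.splitext(xml_path)[0] + '.txt', 'w') as f:
        f.write('\n'.join(lines) + '\n')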

4 detection-results folder

4.1 Saving model test results

Use the yolo_test.py file (github.com/plsong/kera…). After execution, the result folder and its files are generated in the home directory. The output needs a few modifications before it can be used for our mAP calculation.

1) Comment out two of the file.write calls

# Save the number of boxes detected -- comment this write out
# file.write('find ' + str(len(out_boxes)) + ' target(s) \n')

time_sum = time.time() - t1

# file.write('time sum: ' + str(time_sum) + 's')  # comment this write out too
print('time sum:', time_sum)
file.close()
yolo.close_session()

2) Modify the other two file.write calls

# Write the detection position -- replace the original verbose write:
# file.write(
#     predicted_class + ' score: ' + str(score) + ' \nlocation: top: ' + str(top) +
#     ', bottom: ' + str(bottom) + ', left: ' + str(left) + ', right: ' + str(right) + '\n')
file.write(predicted_class + ' ' + str(score) + ' ' + str(left) + ' ' +
           str(top) + ' ' + str(right) + ' ' + str(bottom) + '; ')

# file.write(portion[1] + ' detect_result: \n')  -- this one is at the end
file.write(portion[1] + ' ')  # there must be a space here

Then run yolo_test.py to generate result/result.txt.
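
Given the writes above, each line of result.txt should look roughly like the following (image name first, then one "class score left top right bottom" group per detection, separated by '; '; the values here are made up):

test_0001.jpg mask 0.98 86 274 146 305; mask 0.87 214 273 282 303; 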

4.2 Run a py file to obtain the TXT files

Create the following script, make_dr.py:

# make_dr.py: split the detections in result.txt into one TXT file per image
f = open('/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/result/result.txt', encoding='utf8')
s = f.readlines()
result_path = '/home/bu807/Downloads/keras-yolo3-master2/keras-yolo3-master2/mAP-master/input/detection-results/'

for line in s:  # each line holds one image name followed by its detections
    r = line.split('.jpg ')
    if len(r) < 2:
        continue  # skip blank or malformed lines
    file = open(result_path + r[0] + '.txt', 'w')
    if len(r[1]) > 5:  # at least one detection follows the image name
        t = r[1].split('; ')
        # One 'class score left top right bottom' group per output line;
        # the last element of t is the leftover after the final '; ', so skip it
        for detection in t[:-1]:
            file.write(detection + '\n')
    else:
        file.write(' ')  # no object detected: create a blank file
    file.close()

After creating the file, execute it to get the TXT files in the detection-results folder. Copy the generated TXT files to the corresponding location in the mAP project.

At this point, all three folders are done, and the preparation is complete!

5 Run the main.py file

Finally, run the main.py file and you can see your mAP under mAP-master/results.

Here are the results (shown as screenshots in the original post): the generated results directory; the test images with the detected boxes and the ground-truth boxes drawn on them; the AP of the mask class; the number of correct and incorrect predictions compared with the ground truth; the average error rate; and the mAP values for all classes.

If you've read this far, why not leave a like~