Getting Started with OpenCV: Credit Card Number Recognition

Part 1: Preface

I have recently been learning OpenCV, and following a tutorial video from Bilibili I tried a simple small project: credit card number recognition.

Part 2: Basic Process Mind Map

Part 3: Code Details

Import the required libraries and packages
from imutils import contours
import numpy as np
import cv2

# For the display of pictures
def cv_show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Proportional resize for images
def resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    (h, w) = image.shape[:2]
    if width is None and height is None:
        return image
    if width is None:
        r = height / float(h)
        dim = (int(w * r), height)
    else:
        r = width / float(w)
        dim = (width, int(h * r))
    resized = cv2.resize(image, dim, interpolation=inter)
    return resized

The card number consists of 16 digits (0–9) arranged in four groups of four, so the numbers on the card can be recognized by template matching. Template matching is a technique for finding the region of an image that best matches a given template image.

First, read the template image
img = cv2.imread("template.png")

# Convert to grayscale
ref = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Convert to binary image
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]

# Contour detection
#-- cv2.RETR_EXTERNAL retrieves only the external contours
refCnts, hierarchy = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

The cv2.findContours() function takes three main parameters: the image in which to find contours, the contour retrieval mode, and the contour approximation method.

The cv2.findContours() function expects a binary image, i.e. black and white (not grayscale), so the loaded image should first be converted to grayscale and then thresholded to a binary image.

Older versions of the cv2.findContours() function modify the input image in place, which is why ref.copy() is passed instead of ref.

In OpenCV 4, the cv2.findContours() function returns two values: refCnts, a list of all detected contours, and hierarchy, which stores the relationships between the corresponding contours.

cv2.drawContours(img, refCnts, -1, (0, 0, 255), 3)
cv_show('img', img)

Drawing the contours gives the image shown below.

Sort the resulting contours from left to right

refCnts = contours.sort_contours(refCnts, method='left-to-right')[0]

Put the sorted contours and their digits into a dictionary

digits = {}

for (i, c) in enumerate(refCnts):
    (x, y, w, h) = cv2.boundingRect(c)
    roi = ref[y:y+h, x:x+w]
    roi = cv2.resize(roi, (57, 88))
    digits[i] = roi

cv2.boundingRect(c) returns four values: the top-left coordinate (x, y) and the width and height (w, h) of the bounding rectangle. Each digit's ROI is extracted, resized, and stored in the dictionary.

A kernel needs to be initialized in order to perform the subsequent morphological operations, such as erosion, dilation, opening, and closing.

rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

Next, load the credit card image to be recognized

image = cv2.imread("01.jfif")

image = resize(image, width=300)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)

After resizing, the image is converted to grayscale and a morphological top-hat operation is applied, which makes the bright regions of the image stand out.

Next, the gradient image is computed and normalized to bring out the outline of the card

gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal)))
gradX = gradX.astype("uint8")
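The min-max normalization step above can be checked on a tiny array (a hand-made stand-in for the Sobel output):

```python
import numpy as np

# Min-max normalization maps gradient magnitudes onto the 0-255 range
gradX = np.array([0.0, 5.0, 10.0])
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal))).astype("uint8")
print(gradX.tolist())  # [0, 127, 255]
```

Note that the cast to "uint8" truncates, so the midpoint 127.5 becomes 127.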

To make the card number appear group by group, a morphological closing operation (dilation followed by erosion) is applied to the image; a binarization step is performed between the two closings.

gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)

thresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)

Running contour detection on the processed image yields many contours, and most of them are not the ones we want, so the next step is to filter them.

threshCnts, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = threshCnts
cur_img = image.copy()
cv2.drawContours(cur_img, cnts, -1, (0, 0, 255), 3)
cv_show('img', cur_img)

Contour filtering

locs = []
# Walk through the outline
for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)

    if 2.5 < ar < 4.0 and (40 < w < 55) and (10 < h < 20):
        locs.append((x, y, w, h))
# Sort the contours from left to right
locs = sorted(locs, key=lambda ix: ix[0])
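The aspect-ratio filter can be demonstrated with a few hypothetical bounding boxes (made-up values, but in the range a card resized to width 300 would produce):

```python
# Hypothetical bounding boxes (x, y, w, h): a 4-digit group on a
# 300-pixel-wide card is roughly 45 px wide and 14 px high (ar ~ 3.2)
candidates = [(10, 20, 45, 14), (100, 20, 8, 8), (200, 20, 45, 14)]
locs = []
for (x, y, w, h) in candidates:
    ar = w / float(h)
    # Keep only wide, short boxes in the expected size range
    if 2.5 < ar < 4.0 and 40 < w < 55 and 10 < h < 20:
        locs.append((x, y, w, h))
locs = sorted(locs, key=lambda ix: ix[0])
print(locs)  # [(10, 20, 45, 14), (200, 20, 45, 14)]
```

The small 8x8 box has an aspect ratio of 1.0 and is rejected.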

Finally, the image inside each group contour is cropped, each digit is identified, and the results are stored in groupOutput

output = []
# Iterate over the number in each contour
for (i, (gX, gY, gW, gH)) in enumerate(locs):
    groupOutput = []
    
    # Take out part of grayscale image
    group = gray[gY - 5:gY + gH + 5, gX - 5: gX + gW + 5]
    
    # Preprocess: binarize with Otsu's method
    group = cv2.threshold(group, 0, 255, cv2.THRESH_OTSU)[1]
    
    # Find the contours within each group
    digitCnts, hierarchy = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digitCnts = contours.sort_contours(digitCnts, method='left-to-right') [0]
    
    # Calculate each value for each group
    for c in digitCnts:
        (x, y, w, h) = cv2.boundingRect(c)
        roi = group[y: y + h, x: x + w]
        roi = cv2.resize(roi, (57, 88))
        #scores stores the result of a match
        scores = []
        for (digit, digitROI) in digits.items():
            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)
            (_, score, _, _) = cv2.minMaxLoc(result)
            scores.append(score)
        # Select the index with the maximum score
        groupOutput.append(str(np.argmax(scores)))
    # Draw the result on the card face
    cv2.rectangle(image, (gX - 5, gY - 5), (gX + gW + 5, gY + gH + 5), (0, 0, 255), 1)
    cv2.putText(image, "".join(groupOutput), (gX, gY - 15), cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)
    # get results
    output.extend(groupOutput)
    
cv_show('image', image)

Get the final result

Summary

Credit card number recognition is a simple image-recognition project, but even though I followed the tutorial through the entire process, there are still many details that deserve attention.