An overview of image similarity algorithms

Generally, there are three common image similarity algorithms:

  • Average hash algorithm (aHash)
  • Difference hash algorithm (dHash)
  • Perceptual hash algorithm (pHash)

The three algorithms are broadly similar. Taking the average hash algorithm as an example, the steps are as follows:

  1. Scale the image to an N × N thumbnail
  2. Convert to grayscale
  3. Compute the hash fingerprint
  4. Compare the Hamming distance between the two fingerprints to get a similarity value

The three algorithms differ mainly in how the hash fingerprint is computed.
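The four steps above can be sketched end to end. This is a minimal NumPy sketch, assuming steps 1–2 (scaling and grayscale conversion, done with OpenCV later in this article) have already produced 8 × 8 arrays; `fingerprint` and `a_hash_similarity` are illustrative names, not library functions:

```python
import numpy as np

def fingerprint(gray):
    """Step 3: mean-hash fingerprint of an 8x8 grayscale array, as a bit mask."""
    return (gray > gray.mean()).flatten()

def a_hash_similarity(gray_a, gray_b):
    """Step 4: Hamming distance between fingerprints, normalized to [0, 1]."""
    distance = np.count_nonzero(fingerprint(gray_a) != fingerprint(gray_b))
    return 1 - distance / gray_a.size

# Two toy "images": a gradient and a uniformly brightened copy of it
img_a = np.arange(64, dtype=np.float32).reshape(8, 8)
img_b = img_a + 10  # uniform brightening does not change the fingerprint
print(a_hash_similarity(img_a, img_b))  # → 1.0
```

Because each pixel is compared against the image's own mean, a uniform brightness shift leaves the fingerprint unchanged, which is what makes this hash useful for near-duplicate detection.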

Average hash algorithm

The original picture is as follows:

Scale to an 8 × 8 image

Code:

img = cv2.resize(img, (8, 8), interpolation=cv2.INTER_CUBIC)

The scaled result:

Grayscale conversion

Code:

gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Results:

Calculate gray mean value

    # Calculate the gray mean value
    s = 0
    for i in range(8):
        for j in range(8):
            # int() avoids uint8 overflow while accumulating
            s = s + int(gray_img[i, j])
    avg = s / 64

The result is 176.

Computing Hash Fingerprints

The rule is: if a pixel's gray value is greater than the mean gray value, the corresponding bit is 1; otherwise, it is 0. The code is as follows:

    # 1 if the pixel value is greater than the mean, 0 otherwise; generates the hash fingerprint
    hash_str = ''
    for i in range(8):
        for j in range(8):
            if gray_img[i, j] > avg:
                hash_str = hash_str + '1'
            else:
                hash_str = hash_str + '0'
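Both the mean loop and the fingerprint loop above can be collapsed into NumPy expressions; here is a sketch with a toy gradient array standing in for the real `gray_img`:

```python
import numpy as np

# Toy 8x8 grayscale array standing in for the real gray_img
gray_img = np.arange(64, dtype=np.uint8).reshape(8, 8)

# Vectorized mean (replaces the accumulation loop above)
avg = gray_img.mean()

# Vectorized fingerprint: one array comparison instead of the double loop
bits = gray_img > avg
hash_str = ''.join('1' if b else '0' for b in bits.flatten())
print(hash_str)  # 32 '0's followed by 32 '1's for this gradient
```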

Calculate the Hamming distance

Hamming distance: in information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding characters differ. Code:

def cmp_hash(hash_1, hash_2):
    # Calculate the Hamming distance
    n = 0
    if len(hash_1) != len(hash_2):
        return -1
    for i in range(len(hash_1)):
        if hash_1[i] != hash_2[i]:
            n = n + 1
    return n
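The raw bit count is usually normalized into a similarity between 0 and 1. A small sketch following the same convention as `cmp_hash` above (-1 for fingerprints of different length); `similarity` is an illustrative name:

```python
def similarity(hash_1, hash_2):
    """Hamming distance between two fingerprints, normalized to [0, 1]."""
    n = 0
    if len(hash_1) != len(hash_2):
        return -1
    for a, b in zip(hash_1, hash_2):
        if a != b:
            n += 1
    return 1 - n / len(hash_1)

print(similarity('1' * 64, '1' * 60 + '0' * 4))  # 4 of 64 bits differ → 0.9375
```

As a rule of thumb with 64-bit fingerprints, values near 1 suggest near-duplicate images, while values around 0.5 correspond to unrelated images.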

Difference hash algorithm

Scaling, grayscale conversion, and the Hamming distance calculation are the same as in the average hash algorithm; only the computation of the hash fingerprint differs. The rule: the image is scaled to 9 × 8, and for each row, if a pixel's gray value is greater than that of the pixel to its right, the bit is 1; otherwise, it is 0. The complete algorithm is as follows:

# Difference hash
def b_hash(img):
    # Scale to a 9 x 8 image (9 columns so each row yields 8 comparisons)
    img = cv2.resize(img, (9, 8), interpolation=cv2.INTER_CUBIC)
    # Convert to grayscale
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hash_str = ''
    # 1 if a pixel is greater than its right neighbour, 0 otherwise
    for i in range(8):
        for j in range(8):
            if gray_img[i, j] > gray_img[i, j + 1]:
                hash_str = hash_str + '1'
            else:
                hash_str = hash_str + '0'
    return hash_str
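The double loop can again be replaced by one NumPy comparison between adjacent columns. This sketch assumes the 8 × 9 grayscale array has already been produced (resize and grayscale as above); `d_hash_bits` is an illustrative name:

```python
import numpy as np

def d_hash_bits(gray):
    """Difference-hash fingerprint of an 8x9 grayscale array."""
    # Compare each pixel with its right neighbour: (8, 9) -> (8, 8) booleans
    bits = gray[:, :-1] > gray[:, 1:]
    return ''.join('1' if b else '0' for b in bits.flatten())

# Toy 8x9 gradient: every pixel is smaller than its right neighbour
gray = np.tile(np.arange(9, dtype=np.float32), (8, 1))
print(d_hash_bits(gray))  # → 64 '0' characters
```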

Perceptual hash algorithm

The first two algorithms are easy to understand. The perceptual hash algorithm takes a different view of an image: an image is a two-dimensional signal containing components of various frequencies. High-frequency regions correspond to rapid color changes, such as contours; low-frequency regions capture the overall structure. A DCT (discrete cosine transform) converts the image into this frequency domain. Comparing only the low frequencies works because they describe the overall appearance of the image and are insensitive to fine detail, scaling, and mild noise.
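The role of the DCT can be seen in a NumPy-only sketch. The matrix below is the standard orthonormal DCT-II basis (the same transform `cv2.dct` computes); `dct_matrix` and `dct2` are illustrative helpers, not library functions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def dct2(img):
    """2-D DCT of a square image via row and column 1-D transforms."""
    m = dct_matrix(img.shape[0])
    return m @ img @ m.T

# A smooth 32x32 gradient image: its energy concentrates in the
# low-frequency coefficients, i.e. the upper-left corner
x = np.linspace(0.0, 1.0, 32)
img = np.outer(x, x)
coeffs = dct2(img)
low = np.abs(coeffs[:8, :8]).sum()
total = np.abs(coeffs).sum()
print(low / total)  # close to 1: almost all energy sits in the 8x8 corner
```

This is why keeping only the upper-left 8 × 8 block of coefficients preserves the "gist" of the image while discarding detail.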

The original image is still the one above. First, scale it to 32 × 32.

Grayscale conversion

Perform the DCT transform

Take the 8 × 8 block in the upper-left corner (the low frequencies) and compute the fingerprint and Hamming distance on it, just as in the average hash algorithm.

The complete code is as follows:

# Perceptual hash
def p_hash(img):
    # Scale to 32 x 32
    img = cv2.resize(img, (32, 32), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite("bb.jpg", img)
    # Convert to grayscale
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray_img = gray_img.astype(np.float32)
    cv2.imwrite("cc.jpg", gray_img)

    # Discrete cosine transform; the low frequencies end up in the upper-left corner
    img = cv2.dct(gray_img)
    cv2.imwrite("dd.jpg", img)
    # Take the 8 x 8 low-frequency block in the upper-left corner
    img = img[0:8, 0:8]
    # Compute the mean of the low-frequency block
    avg = 0
    hash_str = ''
    for i in range(8):
        for j in range(8):
            avg += img[i, j]
    avg = avg / 64

    # 1 if a coefficient is greater than the mean, 0 otherwise; generates the fingerprint
    for i in range(8):
        for j in range(8):
            if img[i, j] > avg:
                hash_str = hash_str + '1'
            else:
                hash_str = hash_str + '0'
    return hash_str