Ordinarily, whether a picture is sharp or not is a subjective judgment: the same picture may look fine to you while someone else finds it blurry, and there is no single agreed standard. There are, however, algorithms that can quantify the sharpness of an image.

The principle

If you’ve studied signal processing, you know the most straightforward approach is to compute the Fourier transform of the image and look at the distribution of high and low frequencies: an image with little high-frequency content can be considered blurred. In practice, though, it is hard to pick a specific threshold for how much high frequency is "enough", and an inappropriate threshold leads to poor results.
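To make the frequency-domain idea concrete, here is a minimal sketch (not the method this article ends up using). It measures the fraction of spectral energy that lies outside a low-frequency region around the centre of the shifted spectrum; the cutoff radius and the image path are placeholder assumptions of mine, not values from the original post.

import numpy as np
import cv2

def high_freq_ratio(img_path, radius=30):
    # Read as grayscale and compute the centred 2D FFT magnitude spectrum.
    gray = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    magnitude = np.abs(np.fft.fftshift(np.fft.fft2(gray)))

    # Mask out a low-frequency square around the centre of the spectrum.
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    mask = np.ones_like(magnitude, dtype=bool)
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = False

    # Fraction of spectral energy outside the low-frequency region;
    # a small value suggests a blurry image.
    return magnitude[mask].sum() / magnitude.sum()

The hard part, as noted above, is deciding what ratio counts as "too little" high frequency, which is exactly why the next approach is more attractive.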

What we want is a single floating-point number that represents the sharpness of the image. At the International Conference on Pattern Recognition in 2000, Pech-Pacheco et al. proposed convolving one channel of the image (usually the grayscale values) with a Laplacian mask and then computing the variance of the result; that value can represent the sharpness of the image.

The reason this works comes from the definition of the Laplace operator itself. It measures the second derivative of an image and highlights regions where intensity changes rapidly, much like the Sobel and Scharr operators, and like them it is often used for edge detection. The algorithm then rests on a simple assumption: if an image has a high Laplacian variance, it has a wide spread of responses, which is characteristic of a normal, accurately focused image; if the variance is small, the spread of responses is narrow, meaning there are few edges in the image. And, as we all know, the blurrier a picture is, the fewer edges it has.
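To see what the operator actually does, here is a small sketch that applies the classic 3×3 Laplacian kernel by hand with cv2.filter2D; the image path is a placeholder. OpenCV's cv2.Laplacian with its smallest aperture uses an equivalent kernel, so this is the same operation the core code performs later.

import numpy as np
import cv2

# The classic 3x3 Laplacian kernel: it approximates the second derivative,
# so flat regions map to roughly 0 and rapid intensity changes (edges) stand out.
laplacian_kernel = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
edges = cv2.filter2D(gray.astype(np.float64), -1, laplacian_kernel)
print(edges.var())  # larger variance -> more/stronger edges -> sharper image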

With a single value representing sharpness, all that remains is to set an appropriate threshold. If the variance of an image falls below a predefined threshold, the image is considered blurry; if it is above the threshold, it is not.

In practice

The principle may sound complicated, involving a fair amount of signal and image processing knowledge, so let’s try it out in practice and get an intuitive feel for it.

Because life is short, and I’m the loudest Python booster in my circle of friends, I chose Python, and the core code is almost absurdly simple:

import cv2

def getImageVar(imgPath):
    # Read the image, convert it to grayscale, then convolve with the
    # Laplacian and return the variance of the result as the sharpness score.
    image = cv2.imread(imgPath)
    img2gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    imageVar = cv2.Laplacian(img2gray, cv2.CV_64F).var()

    return imageVar

Life’s too short: there are only three lines of core code.

import cv2 pulls in the well-known image processing library OpenCV. I won’t cover installing OpenCV here; just note that it depends on NumPy.

image = cv2.imread(imgPath) reads the image using the method OpenCV provides. img2gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) converts it to a grayscale image, as shown below:

The original image looks like this:

cv2.Laplacian(img2gray, cv2.CV_64F) convolves the image with the 3×3 Laplacian kernel, where cv2.CV_64F specifies a 64-bit floating-point output depth so that negative responses are kept rather than clipped.
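As a quick check of why the output depth matters, here is a small sketch (the image path is a placeholder) comparing a float64 destination with an 8-bit one. With an 8-bit destination, negative Laplacian responses are saturated to 0, so part of the edge information is thrown away and the variance is distorted.

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Float64 destination: negative responses are preserved.
lap_f64 = cv2.Laplacian(gray, cv2.CV_64F)

# 8-bit destination: negative responses are clipped to 0,
# discarding part of the edge information.
lap_u8 = cv2.Laplacian(gray, cv2.CV_8U)

print(lap_f64.var(), lap_u8.var())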

The principle section said the Laplace operator is often used for edge detection, so what remains after applying it is essentially the detected edges. After this step the image looks like this:

You can see that the figures are still fairly clear.

cv2.Laplacian(img2gray, cv2.CV_64F).var() computes the variance of the result, which is then returned.

For the image above this works out to a bit over 3,170, and that value is what we use to judge sharpness.

Let’s look at another example:

The original:

After converting to grayscale and applying the Laplace operator, you can see that the figure in the image is much less distinct.

The variance is only 530

All that remains is to determine the threshold based on the overall picture quality.

Limitations

From the hands-on run above, we can see that the trick of this algorithm lies in setting an appropriate threshold. If the threshold is too low, blurry pictures will be misjudged as normal; if it is too high, normal pictures will be misjudged as blurry. The threshold depends on your business scenario and needs to be tuned for each one.
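One simple way to pick a threshold, as a rough sketch, is to score a sample of images from your own scenario and eyeball the sorted values. This reuses the getImageVar function defined earlier; the directory path and the starting threshold are placeholder assumptions, not recommendations.

import glob

# Score a sample of images from the target scenario, then inspect the sorted
# values to choose a threshold that separates blurry from sharp ones.
scores = {path: getImageVar(path) for path in glob.glob("./samples/*.jpg")}

THRESHOLD = 100  # placeholder value; tune for your own data
for path, score in sorted(scores.items(), key=lambda kv: kv[1]):
    label = "blurry" if score < THRESHOLD else "ok"
    print(f"{score:10.2f}  {label}  {path}")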

There is no real silver bullet here. Beyond choosing the threshold, some images are intentionally blurred in places (a blurred background, for example), and those are easily flagged as blurry by mistake.

For example:

It looks like this; the background is almost completely black.

The foreground of this picture actually looks fine, but the background is heavily blurred, so the image is easily misjudged as blurry.

So it’s best to understand the principle before using it in real scenarios.


Finally, I wrote a simple script that evaluates the images at the given paths and returns a JSON string.

python getrank.py --imgs=./1.jpg,./2.jpg
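The actual script lives in the repository linked below; here is a rough sketch of what such a script could look like. The argument parsing and the output format are my assumptions, not necessarily what the original script does.

import argparse
import json
import cv2

def getImageVar(imgPath):
    # Same core logic as above: variance of the Laplacian of the grayscale image.
    image = cv2.imread(imgPath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--imgs", required=True,
                        help="comma-separated list of image paths")
    args = parser.parse_args()

    # Map each path to its sharpness score and print the result as JSON.
    scores = {path: float(getImageVar(path)) for path in args.imgs.split(",")}
    print(json.dumps(scores))

if __name__ == "__main__":
    main()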

Source: github.com/bob-chen/de…

Rambling

I jot down some thoughts here: tech and the humanities, everyday life, reading notes, mostly rambling and musings. You’re welcome to follow and chat.

WeChat official account: The Poetry and Distance of a Programmer

Account ID: Monkeycoder-Life