This article was entered in the "Call for Good Writing" event — back-end and big-front-end double-track submissions, with a 20,000 yuan prize pool waiting for you to take on!

If you have ideas or techniques you’d like to share, feel free to leave them in the comments section.

In my last blog post, Eraser realized that before learning OpenCV you need to fill in some basic graphics concepts, such as pixels, coordinates, and bit depth. Don't ask how I know this.

Basic concepts of images

When an image is enlarged, the pixels become visible, as in the zoomed-in area around the eye in the image below. The small squares are the pixels. If you've played Minecraft, you probably already have a good feel for what a pixel is.

A pixel has a color. A pixel is the smallest unit of an image, and that unit carries a lot of information, the most important of which is color data. In a color image, that color is expressed as an RGB value.

Each RGB channel ranges from 0 to 255, and changes in the channel values represent changes in the shade of the color. Since Eraser already has some understanding of this part, we can move through it quickly.

The larger the value, the lighter the color: in a grayscale image, 0 is black and 255 is white.
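As a quick sanity check, here is a minimal pure-Python sketch of what those values mean (the variable names are just for illustration):

```python
# Each pixel of a color image is an (R, G, B) triplet; each channel runs 0..255.
black = (0, 0, 0)        # all channels at minimum -> black
white = (255, 255, 255)  # all channels at maximum -> white
red = (255, 0, 0)        # only the red channel is lit

# In a grayscale image a single value plays the same role:
# 0 is black, 255 is white, and values in between are shades of gray.
mid_gray = 128

for name, pixel in [("black", black), ("white", white), ("red", red)]:
    print(name, pixel)
```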

Now let's look at how an image is composed. To a computer, an image is an array, and each element of the array is a pixel. While studying I ran into the concept of a two-dimensional grayscale image; my ability is limited and I didn't fully understand it at first, but Eraser's study technique is this: when something doesn't make sense, explain it in whatever terms I can understand, and fill in the gaps step by step later.

A two-dimensional grayscale image corresponds to a two-dimensional array, so I drew a two-dimensional coordinate system based on this. It is filled with pixels, and each pixel is a value from 0 to 255.

I'm not sure whether this understanding is entirely accurate; hopefully it will get clearer as the learning continues.

Later I read that a 3D color image contains the three RGB values, that is, it is represented by three two-dimensional arrays, and suddenly I seemed to understand what RGB really is.
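To make that concrete, here is a small NumPy sketch (the 2×2 size is an arbitrary choice for illustration) showing a color image as three stacked two-dimensional arrays:

```python
import numpy as np

# Three 2x2 single-channel planes, one per color component.
r = np.full((2, 2), 255, dtype=np.uint8)  # red plane, all at maximum
g = np.zeros((2, 2), dtype=np.uint8)      # green plane, all zero
b = np.zeros((2, 2), dtype=np.uint8)      # blue plane, all zero

# Stacking them along a third axis gives a color image:
# shape (height, width, 3).
img = np.dstack([r, g, b])
print(img.shape)  # (2, 2, 3)

# A grayscale image, by contrast, is just one 2D plane.
gray = np.zeros((2, 2), dtype=np.uint8)
print(gray.shape)  # (2, 2)
```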

A Baidu search turned up some material that helps with understanding; let's look at it together.

Then let's verify how a two-dimensional grayscale image is generated. Two core ideas: 1. two dimensions, 2. values from 0 to 255.

Just do it; our Python is pretty slick by now. Use the NumPy library to generate an array object.

```python
import numpy as np
import cv2

# Generate an array matrix, then convert the matrix to an image.
# cv2.imwrite expects 8-bit pixel values, so cast to uint8.
test_array = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(test_array)

# Use OpenCV to store the image
cv2.imwrite('first_img.jpg', test_array)
```

Sure enough, an image was generated in the project's root directory. Open it, and there it is: Eraser's first hand-made image, the little thing below.

And it’s 4×4 pixels, which is exactly the same array I made.
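A quick way to double-check that claim, without even opening an image viewer, is to inspect the array itself (a minimal sketch; reading the file back with cv2.imread would show the same shape):

```python
import numpy as np

test_array = np.arange(16, dtype=np.uint8).reshape(4, 4)

# 4 rows x 4 columns -> a 4x4-pixel grayscale image.
print(test_array.shape)  # (4, 4)

# Every value fits in 0..255, so each pixel needs only 8 bits.
print(test_array.min(), test_array.max())  # 0 15
```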

Then make the image bigger by modifying the rule for generating the array. Since the values can only run from 0 to 255, the code can be modified like this.

```python
import numpy as np
import cv2

# Generate an array matrix, then convert the matrix to an image.
# A 1D array isn't a valid image, so reshape it to 1 row x 255 columns.
test_array = np.arange(255, dtype=np.uint8).reshape(1, 255)
print(test_array)

# Use OpenCV to store the image
cv2.imwrite('first_img.jpg', test_array)
```

The code generates a 1×255 pixel image with a gradient from black to white.
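If you want the gradient strip taller than a single pixel, a small NumPy sketch (the height of 50 rows is an arbitrary choice) can simply repeat the same row:

```python
import numpy as np

# One row running 0..254, i.e. black to near-white.
row = np.arange(255, dtype=np.uint8).reshape(1, 255)

# Repeat the row 50 times to get a 50x255 gradient image.
gradient = np.tile(row, (50, 1))
print(gradient.shape)  # (50, 255)

# As before, cv2.imwrite('gradient.jpg', gradient) would save it to disk.
```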

Principles of computer image representation

The following content is good, so I'm reprinting it directly. The original address is: blog.csdn.net/weixin_4241… If the original author sees this and doesn't want Eraser to quote it, please contact me directly.

Computer images can be divided into two categories: bitmaps and metafiles (vector graphics).

A bitmap can be thought of as a grid: the whole image is composed of many small rectangular blocks, each rectangle representing one point. The number of points equals the number of horizontal blocks multiplied by the number of vertical blocks. Each point is called a pixel, and each pixel has a certain color; together they form a complete image. Most commonly used images are bitmaps, such as photos taken with a camera, because bitmaps can represent the details of an image and better reproduce the real scene. One disadvantage of bitmaps is that they are large, so many compressed image formats were created to store them; JPEG is currently the most widely used, along with GIF, PNG, and so on. Another disadvantage is that bitmaps tend to look "jagged" when enlarged, which follows from their nature. So in practice another image format is also needed: vector graphics. Unlike bitmaps, vector graphics are drawn from line segments using mathematical formulas, so no matter how much they are enlarged there is no distortion; the tradeoff is that they cannot describe very complex images. That is why all kinds of drawing and CAD software save files in vector formats.

Each pixel uses a certain number of bits to represent its color. The number of representable colors is:

  • 4-bit image: 2^4 = 16
  • 8-bit image: 2^8 = 256
  • 16-bit image: 2^16 = 65536
  • 24-bit image: 2^24 = 16777216

Therefore, when 1 byte (8 bits) is used to store a pixel's value, 256 different results are possible; with 3 bytes (24 bits) there are 16,777,216 possible results, each representing a different color. That is close to the limit of what the human eye can distinguish. The most commonly used color depth in computers is 24-bit, and in GDI+ there is also a 32-bit color with an extra channel describing Alpha, the transparency component.

In a bitmap, the RGB color space is used to represent color (a palette is used when the bit depth is small). In a 24-bit color image, 3 bytes represent the R, G, and B color components respectively. There is no separate brightness component, because in the RGB representation brightness can be derived directly from the color components. Each color component ranges from 0 to 255; the larger a component's value, the brighter that component. When a true-color pixel is converted to grayscale, its brightness is calculated with the psychological grayscale formula:

Y = 0.299R + 0.587G + 0.114B

The grayscale image obtained with this formula is closest to the human eye's perception of grayscale. A grayscale image has only 256 levels (8 bits), so the converted image can be saved in 8-bit format, which saves space. This also means the conversion from a color image to a grayscale image is irreversible. A palette can store 256 colors, so 256 grayscale levels fit exactly into a palette.
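The psychological grayscale formula quoted above is easy to try out. Here is a minimal pure-Python sketch (the sample pixel values are made up):

```python
def to_gray(r, g, b):
    """Convert one RGB pixel to grayscale with Y = 0.299R + 0.587G + 0.114B."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

print(to_gray(255, 255, 255))  # 255  (white stays white)
print(to_gray(0, 0, 0))        # 0    (black stays black)
print(to_gray(255, 0, 0))      # 76   (pure red becomes a fairly dark gray)
```

Note that the three coefficients sum to 1.0, which is why white maps exactly to 255, and why the result always fits back into 8 bits.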

From the above I picked up the bitmap-related knowledge: the maximum value of a pixel is 255, and 255 in binary takes up 8 bits, so 8 bits are enough to store one grayscale pixel.

A color image needs three two-dimensional arrays to represent it, and 3 × 8 = 24, which suddenly lines up with the 24-bit color image mentioned above.

Classification of computer images

A quick search turned up the following:

  • Grayscale image: if the picture has 2^8 = 256 shades of gray, it is called an 8-bit image;
  • Binary image: only 0 and 1, pure black and white;
  • RGB color image, and ARGB, where A stands for alpha (transparency);
  • Color models: RGB, HSI, YIQ.
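Of the categories above, the binary image is the easiest to build by hand. A minimal NumPy sketch (the threshold of 127 and the sample values are arbitrary choices) that turns a grayscale array into pure black and white:

```python
import numpy as np

gray = np.array([[0, 100, 200],
                 [50, 150, 250]], dtype=np.uint8)

# Pixels above the threshold become white (255), the rest black (0).
binary = np.where(gray > 127, 255, 0).astype(np.uint8)
print(binary)
```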

Eraser imagination redraw

After reading an image and changing every pixel to zero, would you end up with a pure black image of the same size? It doesn’t look like much, but give it a try.

```python
import numpy as np
import cv2

def main():
    img = cv2.imread("test_img.jpg")
    # Get the shape of the image: (height, width, channels)
    print(img.shape)
    height, width, n = img.shape

    # Copy the image
    new_img = img.copy()
    for i in range(height):
        for j in range(width):
            # Change every pixel to 0
            # print(new_img[i, j])
            new_img[i, j] = 0

    cv2.imshow("new_img", new_img)
    cv2.waitKey(0)

if __name__ == "__main__":
    main()
```

Sure enough, I got a black image, hahaha. Along the way I also discovered that each pixel of a color image is itself an RGB array, so that's two knowledge points picked up at once.
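Since each pixel of a color image turned out to be a small three-channel array (OpenCV's imread stores channels in B, G, R order), you can also zero out just one channel instead of all of them. A sketch with a tiny made-up array standing in for the loaded image:

```python
import numpy as np

# A stand-in for cv2.imread's result: shape (height, width, 3), BGR order.
img = np.full((2, 2, 3), 200, dtype=np.uint8)

# Zero only the red channel (index 2 in BGR).
red_off = img.copy()
red_off[:, :, 2] = 0
print(red_off[0, 0])  # [200 200   0]

# Zero every channel, as in the loop above: a pure black image.
black = img.copy()
black[:, :] = 0
print(black[0, 0])    # [0 0 0]
```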

Today's OpenCV session ends here

Remember to clock in for 365 days together. Since this series isn't systematic and is just about picking up knowledge, if anything here heads in the wrong direction, please point it out promptly.


Blogger ID: Dream Eraser. Likes, comments, and favorites are appreciated.