Project address

Github.com/ListerChen/…

1. Basic introduction to Bitmap

A Bitmap (bitmap image) is the in-memory representation of an image. Whatever its file format (JPEG, PNG, WEBP…), an image becomes a Bitmap object once it is loaded into memory. A Bitmap is essentially a collection of pixels: if its width and height are width and height, it contains width * height pixels and occupies width * height * (memory per pixel) bytes.

To reduce the space images occupy on disk, bitmaps are compressed when they are saved. Image file formats really represent different compression methods and compression ratios, and files on disk are decompressed when loaded into memory.

1.1 Image format introduction

Common static image formats are JPEG, PNG, and WEBP, which have different compression methods and take up different space after being saved locally.

  1. JPEG is a lossy compression format that stores bitmaps with 24-bit color but does not support transparency. When using JPEG compression, choose an appropriate compression ratio to avoid poor image quality.
  2. PNG is a lossless compression format that supports the full color range including transparency. Because it is lossless, PNG is usually used for images that need sharp lines or high clarity, such as app icons. Because PNG files are relatively large, they are often converted to WEBP.
  3. WEBP supports both lossless and lossy compression. Its lossless compression outperforms PNG, its lossy compression outperforms JPEG, it supports the full color range including transparency, and it supports multi-frame animation. Its only drawback is that compression (encoding) is slower than JPEG and PNG.

1.2 Bitmap color depth

The essence of a Bitmap is a collection of pixels. It forms the whole picture by describing the ARGB information of each pixel, where A is the alpha (transparency) channel and R, G, B are the red, green, and blue color channels. Each channel's value ranges from 0 to 255, so 8 bits fully represent one channel and 4 × 8 = 32 bits represent a full pixel. For example, a 1080 × 1920 bitmap at 32 bits per pixel occupies 1080 × 1920 × 4 ≈ 7.9 MB. Storing every pixel with 32 bits is a large burden on memory, so bitmaps with lower quality requirements can use fewer bits per pixel.

The color depth refers to the number of bits used by each pixel to store the ARGB value. The larger the color depth is, the richer the color of the image will be. Generally speaking, the color depth can be 8bit, 16bit, 32bit, etc.

ALPHA_8: stores only the alpha channel; the color depth is 8 bit. It is used in special scenarios such as masks or overlay effects.
ARGB_8888: each ARGB channel uses 8 bits, for a color depth of 32 bit; each pixel needs 4 bytes of memory. Image quality is high and memory usage is large.
ARGB_4444: each ARGB channel uses 4 bits, for a color depth of 16 bit. It has been officially deprecated by Android because of its poor image quality.
RGB_565: the color depth is 16 bit; the R, G, and B channels use 5, 6, and 5 bits respectively. The alpha channel is not stored, so transparency is not supported. It can be used for photos that do not require high fidelity.
RGBA_F16: the color depth is 64 bit. This configuration is mainly used for wide-gamut and HDR content.
HARDWARE: a special configuration in which the bitmap is stored in graphics memory and is immutable. It only works when the only thing done with the Bitmap is drawing it to the screen.

We can create bitmaps with different color depths according to the quality requirements: use ARGB_8888 when display quality matters, and in most other cases use RGB_565 to reduce the chance of OOM.

Bitmap b = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

1.3 Loading large images through sampling

An ImageView is the carrier for displaying a Bitmap, and its width and height are usually smaller than the Bitmap's. Loading the full Bitmap into a smaller ImageView wastes memory, and Android provides an optimization for this.

The approach compares the sizes of the ImageView and the Bitmap, calculates a sampling rate, and finally loads the down-sampled image into memory. The process: call BitmapFactory.decodeXXX(res, resId, options) with a BitmapFactory.Options whose inJustDecodeBounds is set to true. This does not load the image into memory; it only fills in the Bitmap's width and height, which are then used to calculate the sampling rate (inSampleSize). After the sampling rate is obtained, set inJustDecodeBounds back to false and decode again to get the down-sampled Bitmap.

public static Bitmap decodeSampledBitmapFromResource(
            Resources res, int resId, int reqWidth, int reqHeight) {
    final BitmapFactory.Options options = new BitmapFactory.Options();
    // This property defaults to false. When it is true, images are not loaded into memory, but only the width and height are calculated
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(res, resId, options);
    // Calculate the sampling rate
    options.inSampleSize = calculateInSampleSize(
            options, reqWidth, reqHeight);
    // Set inJustDecodeBounds to false and load the image into memory
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(res, resId, options);
}

public static int calculateInSampleSize(
            BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Get the Bitmap width and height
    final int height = options.outHeight;
    final int width = options.outWidth;
    // Calculate the sampling rate
    int inSampleSize = 1;
    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}

If you use an image-loading framework such as Glide to load a Bitmap into an ImageView, the framework does the sampling for you automatically. However, if you need to create a thumbnail of a Bitmap yourself, the method above is useful.
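
For reference, a minimal usage sketch inside an Activity; the view id, resource id, and the 200×200 target size are hypothetical placeholders.

ImageView imageView = findViewById(R.id.image_view);           // hypothetical view id
Bitmap thumbnail = decodeSampledBitmapFromResource(
        getResources(), R.drawable.large_photo, 200, 200);     // hypothetical resource and target size
imageView.setImageBitmap(thumbnail);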

2. Bitmap-related applications

2.1 Image cropping

Image cropping is a basic image-processing feature; for example, setting a profile picture in an app requires cropping, where the user selects a region of the original image through a preview box. The system provides a crop capability itself: we can start the system crop Activity from page A with the following code and receive the cropped image in page A's onActivityResult(…).

// the uri is the address of the image
public void startCropPicture(Uri uri) {
    Intent intent = new Intent("com.android.camera.action.CROP");
    intent.setDataAndType(uri, "image/*");
    intent.putExtra("crop"."true");
    intent.putExtra("aspectX".1); // Trim the size of the box
    intent.putExtra("aspectY".1);
    intent.putExtra("outputX".300); // Output the image size
    intent.putExtra("outputY".300);
    intent.putExtra("scale".true);
    intent.putExtra("return-data".true);
    startActivityForResult(intent, REQUEST_CODE);
}

Bitmap.createBitmap(originBitmap, left, top, width, height) crops the specified region out of originBitmap, so we can also implement an avatar-cropping feature ourselves. The coordinates and size passed to Bitmap.createBitmap(…) are relative to the original Bitmap, and an exception is thrown if they exceed the original Bitmap's bounds.

The demo implements the basic cropping functionality: drag the lower-right corner of the crop box to resize it, and drag any other area to move it. The code can be downloaded through the link at the beginning of this article.

Besides cropping a region out with Bitmap.createBitmap(originBitmap, left, top, width, height), you can also draw only a region of a Bitmap with Canvas.drawBitmap(Bitmap bitmap, Rect src, RectF dst, Paint paint), where src is the region of the original image and dst is the area of the View that region is drawn into.
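
As a minimal sketch, the snippet below crops the centre quarter of a Bitmap; originBitmap is assumed to be already loaded.

// Sketch: crop the centre region of originBitmap (assumed to be loaded already)
int cropWidth = originBitmap.getWidth() / 2;
int cropHeight = originBitmap.getHeight() / 2;
int left = (originBitmap.getWidth() - cropWidth) / 2;
int top = (originBitmap.getHeight() - cropHeight) / 2;
// left + cropWidth and top + cropHeight must stay within the original bounds,
// otherwise createBitmap throws an exception
Bitmap cropped = Bitmap.createBitmap(originBitmap, left, top, cropWidth, cropHeight);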

2.2 Image Stitching

Image stitching is commonly seen when sharing several pictures as one, achieving a "see everything in one picture" effect. For demonstration, the code below stitches four copies of the same square bitmap (splitBitmap) together.

private Bitmap getJointBitmap(Bitmap splitBitmap) {
    int width = splitBitmap.getWidth();
    int height = splitBitmap.getHeight();
    Bitmap bitmap = Bitmap.createBitmap(
                width * 2, height * 2, Bitmap.Config.RGB_565);
    Canvas canvas = new Canvas(bitmap);
    canvas.drawBitmap(splitBitmap, 0, 0, null);
    canvas.drawBitmap(splitBitmap, width, 0, null);
    canvas.drawBitmap(splitBitmap, 0, height, null);
    canvas.drawBitmap(splitBitmap, width, height, null);
    return bitmap;
}

Before stitching images, a Bitmap twice as wide and high as splitBitmap is created as a container. Then, four images of the same size are drawn at four positions of the container through Canvas. The final effect is as follows.

2.3 Matrix Transformation

A Bitmap is a collection of pixels, and we can move each pixel through matrix operations to achieve geometric transformations. In Android these transformations are done with the Matrix class. A Matrix is a 3×3 matrix; Matrix m = new Matrix() creates an identity matrix, whose values are shown below.

[1 0 0]
[0 1 0]
[0 0 1]

The meaning of each position in the Matrix is shown below: scale represents scaling, skew represents shearing, trans represents translation, and persp holds the perspective parameters. Each pixel in the Bitmap can be represented as a 3×1 vector, where x is the pixel's horizontal coordinate and y its vertical coordinate. Multiplying this matrix with every pixel of the Bitmap produces the transformed image.

[scaleX  skewX   transX]     [x]     [scaleX * x + skewX * y + transX]
[skewY   scaleY  transY]  x  [y]  =  [skewY * x + scaleY * y + transY]
[persp0  persp1  persp2]     [1]     [persp0 * x + persp1 * y + persp2]

Simply put, a Matrix is a container that holds the transformation the user wants. Before applying it to a Bitmap, we can perform various operations on it to accumulate transformation information. Matrix operations can express translation, rotation, scaling, and skewing, and Matrix provides corresponding methods.

setTranslate(float dx, float dy): translate the Matrix by (dx, dy).
setSkew(float kx, float ky): skew the Matrix; kx and ky are the skew factors in the X and Y directions.
setSkew(float kx, float ky, float px, float py): skew the Matrix around the pivot (px, py), with kx and ky as the skew factors in the X and Y directions.
setRotate(float degrees): rotate the Matrix by the given degrees around (0, 0).
setRotate(float degrees, float px, float py): rotate the Matrix by the given degrees around (px, py).
setScale(float sx, float sy): scale the Matrix; sx and sy are the scale factors in the X and Y directions.
setScale(float sx, float sy, float px, float py): scale the Matrix around the pivot (px, py), with sx and sy as the scale factors in the X and Y directions.

Most of the time a transformation is not a single translation, rotation, or scaling; these transformations are usually combined, so the setXXX() methods alone are not enough. Matrix therefore provides preXXX() and postXXX() methods that combine multiple operations through matrix multiplication. Since matrix multiplication is not commutative, pay attention to the order of the operations.

With preXXX() and postXXX(), you can think of the whole transformation as a multiplication list: calling a preXXX() method adds an operation to the head of the list, and calling a postXXX() method adds one to the tail. For example, the operations in the following code are applied in the order 2 -> 1 -> 3 -> 4. Note that the setXXX() methods reset the Matrix: if you call setTranslate() on a Matrix that has already performed the four operations below, only the translation remains.

Matrix matrix = new Matrix();
matrix.preScale(...);       // 1
matrix.preTranslate(...);   // 2
matrix.postTranslate(...);  // 3
matrix.postRotate(...);     // 4
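
As a small illustrative sketch (srcBitmap is an assumed, already-loaded Bitmap), a composed Matrix can be applied when creating a new Bitmap:

// Sketch: rotate srcBitmap 90 degrees around its centre, then halve its size
Matrix matrix = new Matrix();
matrix.setRotate(90, srcBitmap.getWidth() / 2f, srcBitmap.getHeight() / 2f);
matrix.postScale(0.5f, 0.5f);
Bitmap transformed = Bitmap.createBitmap(srcBitmap, 0, 0,
        srcBitmap.getWidth(), srcBitmap.getHeight(), matrix, true);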

2.4 Color Transformation

ColorFilter is used for color transformations. Paint.setColorFilter(ColorFilter filter) sets a color filter, which transforms the color of each pixel to produce the final image. ColorFilter has three subclasses; here we mainly introduce ColorMatrixColorFilter.

2.4.1 ColorMatrixColorFilter

Android stores colors in ARGB form. We can change the color values with a ColorMatrix, which defines a 4×5 float matrix. The four rows of the matrix correspond to the R, G, B, and A output channels, with values typically between 0f and 2f, where 1 keeps the original effect. The fifth column of each row is an offset, i.e. how much is added to or subtracted from that channel.

ColorMatrix colorMatrix = new ColorMatrix(new float[] {
        1, 0, 0, 0, 0,
        0, 1, 0, 0, 0,
        0, 0, 1, 0, 0,
        0, 0, 0, 1, 0});

The operation between the ColorMatrix and a color is shown below; it is a matrix multiplication similar to the Matrix in the previous section.

[a, b, c, d, e]     [R]     [a*R + b*G + c*B + d*A + e]
[f, g, h, i, j]     [G]     [f*R + g*G + h*B + i*A + j]
[k, l, m, n, o]  x  [B]  =  [k*R + l*G + m*B + n*A + o]
[p, q, r, s, t]     [A]     [p*R + q*G + r*B + s*A + t]
                    [1]

With a ColorMatrix you can modify all the colors of a Bitmap. For example, changing the 1s on the RGB diagonal to 0.5 darkens the Bitmap. However, I do not know much about color theory and have not used this much in real projects; if you are interested, refer to reference 1.
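
A rough sketch of the darkening example (srcBitmap is assumed to be loaded already, and the 0.5 values are only illustrative):

// Sketch: darken a bitmap by halving its RGB channels with a ColorMatrixColorFilter
ColorMatrix darkenMatrix = new ColorMatrix(new float[] {
        0.5f, 0,    0,    0, 0,
        0,    0.5f, 0,    0, 0,
        0,    0,    0.5f, 0, 0,
        0,    0,    0,    1, 0});
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setColorFilter(new ColorMatrixColorFilter(darkenMatrix));
Bitmap result = Bitmap.createBitmap(srcBitmap.getWidth(), srcBitmap.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
canvas.drawBitmap(srcBitmap, 0, 0, paint);   // result now holds the darkened copy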

2.5 Image blending

Image blending combines the contents of two source images (call them DST and SRC) according to some rule to form a new image that has characteristics of both. For example, a circular DST and a photo SRC can be combined into a circular photo.

Android implements image blending through PorterDuffXfermode, which applies a formula to the ARGB values of every pixel of the two images on the Canvas and computes a new ARGB value at each pixel. Note that when blending in onDraw(Canvas), DST is drawn first and SRC afterwards, so pay attention to the drawing order.

PorterDuffXfermode provides 18 blend modes in total; their formulas are listed below, where Sa is SRC's alpha channel, Sc is SRC's color, Da is DST's alpha channel, and Dc is DST's color. Taking CLEAR as an example, this mode clears all content in the SRC area.

Mode        Formula [result alpha, result color]
CLEAR [0, 0]
SRC [Sa, Sc]
DST [Da, Dc]
SRC_OVER [Sa + (1 – Sa)*Da, Sc + (1 – Sa)*Dc]
DST_OVER [Sa + (1 – Sa)*Da, Dc + (1 – Da)*Sc]
SRC_IN [Sa * Da, Sc * Da]
DST_IN [Sa * Da, Sa * Dc]
SRC_OUT [Sa * (1 – Da), Sc * (1 – Da)]
DST_OUT [Da * (1 – Sa), Dc * (1 – Sa)]
SRC_ATOP [Da, Sc * Da + (1 – Sa) * Dc]
DST_ATOP [Sa, Sa * Dc + Sc * (1 – Da)]
XOR [Sa + Da – 2 * Sa * Da, Sc * (1 – Da) + (1 – Sa) * Dc]
DARKEN [Sa + Da – Sa * Da, Sc * (1 – Da) + Dc * (1 – Sa) + min(Sc, Dc)]
LIGHTEN [Sa + Da – Sa * Da, Sc * (1 – Da) + Dc * (1 – Sa) + max(Sc, Dc)]
MULTIPLY [Sa * Da, Sc * Dc]
SCREEN [Sa + Da – Sa * Da, Sc + Dc – Sc * Dc]
ADD Saturate(S + D)

If you have used PorterDuffXfermode before, you have probably seen the following image. The official Android sample looks like this but only shows 16 blend modes; I added ADD and OVERLAY in the demo as well.

Of course, you may have seen this picture.

At first glance, both the DST and SRC raw images look the same, but why do they show different results when using the same blend mode?

The key is the size of DST and SRC. In the first image, DST and SRC are bitmaps the same size as the Canvas, with the circle and rectangle drawn in only part of each bitmap: makeDst() and makeSrc() create bitmaps equal to the entire View (or Canvas). This also explains why the first image is blank in CLEAR mode: SRC is the size of the entire View, and CLEAR clears the contents of the SRC area, which here means the entire View.

public class XFerModeView extends View {

    private Paint mPaint;
    private PorterDuffXfermode mPorterDuffXfermode;
    private int mWidth;
    private int mHeight;

    // omit the constructor......

    private void init() {
        setLayerType(LAYER_TYPE_SOFTWARE, null);
        mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        mPorterDuffXfermode = new PorterDuffXfermode(PorterDuff.Mode.CLEAR);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        if (mWidth != w || mHeight != h) {
            mWidth = w;
            mHeight = h;
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        drawBackground(canvas);
        int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
        drawCompositionInFullSize(canvas);
        canvas.restoreToCount(sc);
    }

    private void drawBackground(Canvas canvas) {
        mPaint.setStyle(Paint.Style.STROKE);
        mPaint.setStrokeWidth(5);
        mPaint.setColor(Color.BLACK);
        canvas.drawRect(0, 0, mWidth, mHeight, mPaint);
    }

    private void drawCompositionInFullSize(Canvas canvas) {
        mPaint.setStyle(Paint.Style.FILL);
        Bitmap dst = makeDst();
        Bitmap src = makeSrc();
        // Draw DST
        canvas.drawBitmap(dst, 0, 0, mPaint);
        // Set the image blending mode
        mPaint.setXfermode(mPorterDuffXfermode);
        // Draw SRC
        canvas.drawBitmap(src, 0, 0, mPaint);
        // Clear image blending mode
        mPaint.setXfermode(null);
    }

    private Bitmap makeDst() {
        Bitmap bm = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
        Canvas c = new Canvas(bm);
        mPaint.setColor(0xFFFFCC44);
        c.drawOval(10, 10, mWidth * 3f / 4, mHeight * 3f / 4, mPaint);
        return bm;
    }

    private Bitmap makeSrc() {
        Bitmap bm = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
        Canvas c = new Canvas(bm);
        mPaint.setColor(0xFF66AAFF);
        c.drawRect(mWidth * 1f / 3, mHeight * 1f / 3,
                mWidth * 19f / 20, mHeight * 19f / 20, mPaint);
        return bm;
    }
}

In the second figure, SRC and DST are drawn directly at the size of the circle or rectangle, so the result differs from the first figure; the modified code is shown below. Taking CLEAR mode as an example, only the SRC area is cleared, and you can see that only the part of DST that intersects SRC is cleared.

public class XFerModeView extends View {

    // omit duplicate code......

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        drawBackground(canvas);
        int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
        drawCompositionInSelfSize(canvas);
        canvas.restoreToCount(sc);
    }

    /** * The size of the blended image is only the size of the visible area */
    private void drawCompositionInSelfSize(Canvas canvas) {
        mPaint.setStyle(Paint.Style.FILL);
        mPaint.setColor(0xFFFFCC44);
        canvas.drawOval(10, 10, mWidth * 3f / 4, mHeight * 3f / 4, mPaint);
        mPaint.setXfermode(mPorterDuffXfermode);
        mPaint.setColor(0xFF66AAFF);
        canvas.drawRect(mWidth * 1f / 3, mHeight * 1f / 3,
                mWidth * 19f / 20, mHeight * 19f / 20, mPaint);
        mPaint.setXfermode(null);
    }
}

One more thing to note when blending images: in onDraw(Canvas), the code above calls int sc = canvas.saveLayer(…) to create a new layer before drawing the blended content; sc is the layer's id. DST and SRC are drawn on this new layer, and the layer is composited back onto the Canvas when drawing is done. So why do we need a new layer for DST and SRC instead of drawing directly on the Canvas?

A Layer can be understood as one layer of the Canvas. By default the Canvas has only one layer and everything is drawn on it. When you need to draw in multiple layers, canvas.saveLayer(…) creates a new layer; content drawn on it is independent of the other layers, and calling canvas.restoreToCount(int sc) composites the layer onto the existing canvas content. The Canvas manages layers as a stack, as shown in the diagram below.

As mentioned earlier, when blending, DST is drawn first and SRC later. If you do not create a new layer, everything already on the Canvas is treated as DST when SRC is drawn, so the background and other content also take part in the blending, which easily produces the wrong result.

That is the basic introduction to image blending. It has a wide range of applications; several common scenarios are introduced below.

2.5.1 Image cutting

Image cutting means cutting an image into a specific shape. It can be a common shape such as a circle or a rounded rectangle, or an unconventional shape such as a five-pointed star, which requires a base image of that shape.

Cutting an image into a rounded rectangle is relatively easy: create a new layer in onDraw(Canvas), draw the rounded rectangle as DST, then draw the original image as SRC with the blend mode set to SRC_IN. The code is as follows; decodeSampledBitmapFromResource(…) is the sampling method from section 1.3.

public class RoundCornerView extends View {

    private Paint mPaint;
    private PorterDuffXfermode mFerMode;
    private Bitmap mBitmap;
    private Rect mBitmapRect;
    private int mWidth;
    private int mHeight;

    // omit constructor...

    private void init() {
        mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        mFerMode = new PorterDuffXfermode(PorterDuff.Mode.SRC_IN);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        if (mWidth != w || mHeight != h) {
            mWidth = w;
            mHeight = h;
            mBitmap = BitmapUtils.decodeSampledBitmapFromResource(
                    getContext().getResources(), R.drawable.compress_test, mWidth, mHeight);
            mBitmapRect = new Rect(0, 0, mWidth, mHeight);
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
        canvas.drawRoundRect(0, 0, mWidth, mHeight, 50, 50, mPaint);
        mPaint.setXfermode(mFerMode);
        canvas.drawBitmap(mBitmap, null, mBitmapRect, mPaint);
        mPaint.setXfermode(null);
        canvas.restoreToCount(sc);
    }

    // ...
}

The final effect is as follows. Similarly, you can cut the image into basic shapes such as circles.

If you want to cut the image into a shape like a five-pointed star, you need a base image of the star. Note that the part of the base image outside the star must be transparent, otherwise the result will keep the original shape. The code is much the same as for the rounded rectangle; you only change the part that draws the rounded rectangle to draw the star base image instead.

public class StarPicView extends View {

    private Paint mPaint;
    private PorterDuffXfermode mMode;
    private Bitmap mBgBitmap;
    private Bitmap mBitmap;
    private int mWidth, mHeight;
    private Rect mDrawRect;

    // omit the constructor......

    private void init() {
        mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        mMode = new PorterDuffXfermode(PorterDuff.Mode.SRC_IN);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        if (mWidth != w || mHeight != h) {
            mWidth = w;
            mHeight = h;
            mBgBitmap = BitmapUtils.decodeSampledBitmapFromResource(
                    getContext().getResources(), R.drawable.star4, mWidth, mHeight);
            mBitmap = BitmapUtils.decodeSampledBitmapFromResource(
                    getContext().getResources(), R.drawable.icon3, mWidth, mHeight);
            mDrawRect = new Rect(0, 0, mWidth, mHeight);
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
        canvas.drawBitmap(mBgBitmap, null, mDrawRect, mPaint);
        mPaint.setXfermode(mMode);
        canvas.drawBitmap(mBitmap, null, mDrawRect, mPaint);
        mPaint.setXfermode(null);
        canvas.restoreToCount(sc);
    }
}

The end result is as follows.

2.5.2 Color synthesis

Color compositing can add new effects to an image: blending with a solid color changes the overall tone. Yellow, for example, gives a nostalgic, yellowed effect, while red makes the image warmer. The following code blends a translucent red with the image in SCREEN blend mode.

public class ColorComposeView extends View {

    // ...

    private void init() {
        setLayerType(LAYER_TYPE_SOFTWARE, null);
        mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        mMode = new PorterDuffXfermode(PorterDuff.Mode.SCREEN);
    }

    // ...

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawColor(Color.WHITE);
        int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
        canvas.drawColor(0x44FF0000);
        mPaint.setXfermode(mMode);
        canvas.drawBitmap(mBitmap, null, mRect, mPaint);
        mPaint.setXfermode(null);
        canvas.restoreToCount(sc);
    }
}

How does it look? Doesn't the girl in the photo look a little softer?

Besides blending with a solid color, two images can also be blended; for example, using a frosted-glass base image adds a certain blur effect to a photo, as shown below.

When drawing, the base image is used as DST and the photo as SRC, with OVERLAY as the blend mode. The code is similar to the five-pointed-star example; a sketch follows, and the result is shown after it.
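
A minimal sketch of that drawing code, assuming mBaseBitmap (the frosted-glass base), mPhotoBitmap, mDrawRect, mPaint, mWidth, and mHeight are fields prepared as in the earlier custom View examples:

// Sketch inside onDraw(Canvas); as in the views above, a software layer
// (setLayerType(LAYER_TYPE_SOFTWARE, null)) may be needed for some modes
int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
// DST: the frosted-glass base image
canvas.drawBitmap(mBaseBitmap, null, mDrawRect, mPaint);
// SRC: the photo, blended in OVERLAY mode
mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.OVERLAY));
canvas.drawBitmap(mPhotoBitmap, null, mDrawRect, mPaint);
mPaint.setXfermode(null);
canvas.restoreToCount(sc);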

2.5.3 Image alpha gradient

I once ran into an interesting UI requirement: given a base image, its opacity had to fade from 1 at the top to 0 at the bottom, as shown below. Because the background is white, the transparent part appears white.

This can be done by drawing a vertical alpha gradient and blending it with the image in XOR mode; see the code through the link at the beginning of this article.
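
A minimal sketch of the idea inside onDraw(Canvas), assuming mBitmap, mPaint, mWidth, and mHeight are fields as in the earlier examples; the LinearGradient shader used here is one way to build the alpha ramp, and the project code may differ in detail.

int sc = canvas.saveLayer(0, 0, mWidth, mHeight, null);
// DST: the photo
canvas.drawBitmap(mBitmap, null, new Rect(0, 0, mWidth, mHeight), mPaint);
// SRC: a vertical gradient whose alpha goes from 0 (top) to 1 (bottom);
// with the XOR formula the result alpha is (1 - Sa), so the photo fades out towards the bottom
Paint gradientPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
gradientPaint.setShader(new LinearGradient(0, 0, 0, mHeight,
        0x00000000, 0xFF000000, Shader.TileMode.CLAMP));
gradientPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.XOR));
canvas.drawRect(0, 0, mWidth, mHeight, gradientPaint);
canvas.restoreToCount(sc);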

3. Image compression

3.1 Quality compression

Quality compression reduces the size of an image on disk. When saving with Bitmap.compress(CompressFormat, quality, OutputStream), you can choose the file format and the quality: the format can be JPEG, PNG, or WEBP, and quality ranges from 0 to 100, where 0 is the worst quality and 100 the best. The WEBP constant is deprecated in API 30 in favor of WEBP_LOSSLESS and WEBP_LOSSY, which describe lossless and lossy compression more clearly.
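
Since the newer constants only exist on API 30+, a hedged sketch of choosing a format at runtime could look like this (bitmap and outputStream are placeholders):

// Sketch: use WEBP_LOSSY on API 30+ and fall back to the old WEBP constant below
Bitmap.CompressFormat format = Build.VERSION.SDK_INT >= Build.VERSION_CODES.R
        ? Bitmap.CompressFormat.WEBP_LOSSY
        : Bitmap.CompressFormat.WEBP;
bitmap.compress(format, 80, outputStream);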

The code for saving a Bitmap as a JPEG with quality 60 is as follows.

private void qualityCompressJPG() {
    OutputStream os = getOutputStreamByName("jpgFile60.jpeg");
    if (os != null) {
        mOriginBitmap.compress(Bitmap.CompressFormat.JPEG, 60, os);
    }
}

private OutputStream getOutputStreamByName(String fileName) {
    BufferedOutputStream bos = null;
    File dir = new File(FILE_DIR);
    boolean dirExist = true;
    if (!dir.exists()) {
        dirExist = dir.mkdirs();
    }
    if (dirExist) {
        File file = new File(dir, fileName);
        if (file.exists()) {
            file.delete();
        }
        try {
            boolean fileExist;
            fileExist = file.createNewFile();
            if (fileExist) {
                bos = new BufferedOutputStream(new FileOutputStream(file));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return bos;
}

PNG is a lossless format, so Bitmap.compress(CompressFormat, quality, OutputStream) ignores the quality parameter when PNG is chosen. The picture below compares the same image saved in different formats and at different qualities; the space taken by the WEBP format is quite favorable. If your APP is large, consider converting resource files to WEBP to save space.
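
For example, a sketch that saves the same Bitmap at several JPEG qualities for such a comparison (the file names are illustrative, and getOutputStreamByName is the helper from above):

for (int quality : new int[] {100, 80, 60, 40, 20}) {
    OutputStream os = getOutputStreamByName("jpgFile" + quality + ".jpeg");
    if (os != null) {
        mOriginBitmap.compress(Bitmap.CompressFormat.JPEG, quality, os);
    }
}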

3.2 Size compression

Size compression shrinks the width and height of the original Bitmap, reducing the number of pixels. In this mode both the memory the Bitmap occupies and the disk space of the saved image drop significantly.

Size compression can be done with Bitmap.createScaledBitmap(Bitmap src, int dstWidth, int dstHeight, boolean filter), which creates the scaled-down Bitmap. The filter parameter can roughly be understood as: true trades more time for better image quality, and false does the opposite.

private void sizeCompress1(int scale) {
    int width = mOriginBitmap.getWidth() / scale;
    int height = mOriginBitmap.getHeight() / scale;
    Bitmap b = Bitmap.createScaledBitmap(
                    mOriginBitmap, width, height, false);
    OutputStream os = getOutputStreamByName("sizeCompress1.webp");
    if (os != null) {
        b.compress(Bitmap.CompressFormat.WEBP, 100, os);
    }
}

A Bitmap created with Bitmap.createScaledBitmap(src, w, h, filter) inherits its Config from the original Bitmap. If the original Bitmap's Config is ARGB_8888 and the compressed Bitmap does not need such high quality, you can instead create a new RGB_565 Bitmap and draw the scaled image onto it with a Canvas.

private void sizeCompress2(int scale) {
    int width = mOriginBitmap.getWidth() / scale;
    int height = mOriginBitmap.getHeight() / scale;
    Bitmap b = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas canvas = new Canvas(b);
    Rect rect = new Rect(0, 0, width, height);
    canvas.drawBitmap(mOriginBitmap, null, rect, null);
    OutputStream os = getOutputStreamByName("sizeCompress2_565.webp");
    if (os != null) {
        b.compress(Bitmap.CompressFormat.WEBP, 100, os);
    }
}

If ARGB_8888 is chosen in the second method, the resulting Bitmap has the same memory footprint (measured with Bitmap.getByteCount()) and the same on-disk size as the first method. If RGB_565 is chosen, the Bitmap uses half the memory of the first method (for example, a 1000 × 1000 RGB_565 Bitmap takes 2,000,000 bytes instead of 4,000,000), and the saved file is smaller as well.

Reference & Recommended reading

  1. Android custom controls are pretty simple
  2. Research on the implementation of Android dynamic blur