0, Before we begin

This article touches on screen density throughout, so it is important to understand two fields of DisplayMetrics, excerpted from the official documentation:

  • density: The logical density of the display. This is a scaling factor for the Density Independent Pixel unit, where one DIP is one pixel on an approximately 160 dpi screen (for example a 240×320, 1.5"x2" screen), providing the baseline of the system's display. Thus on a 160 dpi screen this density value will be 1; on a 120 dpi screen it would be .75; etc.

    This value does not exactly follow the real screen size (as given by xdpi and ydpi), but rather is used to scale the size of the overall UI in steps based on gross changes in the display dpi. For example, a 240×320 screen will have a density of 1 even if its width is 1.8", 1.3", etc. However, if the screen resolution is increased to 320×480 but the screen size remained 1.5"x2" then the density would be increased (probably to 1.5).

  • densityDpi: The screen density expressed as dots-per-inch.

In plain terms, density means 1 dp = density px; densityDpi is the number of dots (not pixels) per inch of screen. In DisplayMetrics, the relationship between the two is linear:

density      1     1.5   2     3     3.5   4
densityDpi   160   240   320   480   560   640

To avoid confusion, all references to density in this article refer to densityDpi unless otherwise specified, although density can also be used if you wish.
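For reference, here is a minimal sketch (assuming it runs inside an Activity or some other Context) of reading both values at runtime:

DisplayMetrics metrics = getResources().getDisplayMetrics();
float density = metrics.density;      // e.g. 3.0 on an xxhdpi device
int densityDpi = metrics.densityDpi;  // e.g. 480 on the same device
Log.d("Density", "density=" + density + ", densityDpi=" + densityDpi
        + ", so 1dp = " + density + "px");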

In addition, this article is based mainly on the Android 5.0 source code; other versions may differ slightly. There are bound to be omissions — corrections are welcome ~

1, How much memory does an image take?

Anyone doing mobile client development has probably had a headache over images at some point. Speaking of which, a team lead once lost his temper because someone in the group added a JPG image to the project, haha.

Why the headache? Images eat RAM and treat you to an OOM crash every once in a while to spice up your day. So every time an image goes into the project, we need to know how big a hole it will dig — but how big exactly? The Android API has a handy method for this:

public final int getByteCount() {
    // int result permits bitmaps up to 46,340 x 46,340
    return getRowBytes() * getHeight();
}

Using this method, we can find out how much memory a Bitmap takes up at runtime.

For example

A 522×686 PNG image that I put in the drawable-xxhdpi directory and loaded on a Samsung S6 occupies 2547360 bytes of memory, as reported by this method.
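If you want to verify a figure like that yourself, a small sketch along these lines will do (R.drawable.sample is a hypothetical resource id; replace it with your own image):

// Hypothetical resource; assumes it sits in drawable-xxhdpi as in the example above
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.sample);
Log.d("BitmapSize", "width=" + bitmap.getWidth()
        + ", height=" + bitmap.getHeight()
        + ", bytes=" + bitmap.getByteCount());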

2, Give me an image and I'll tell you how much memory it takes

Always having to ask a Bitmap "how big are you" feels a bit lame; we can't keep asking without figuring out why it's that big. Is there a way to work it out ourselves? Of course there is, and it's easy: we just follow the trail and find the code responsible.

2.1 getByteCount

getByteCount is just getRowBytes() * getHeight(). getHeight() is simply the height of the image (in px), so what does getRowBytes() do? Look at the code:

public final int getRowBytes() {
    if (mRecycled) {
        Log.w(TAG, "Called getRowBytes() on a recycle()'d bitmap! This is undefined behavior!");
    }
    return nativeRowBytes(mFinalizer.mNativeBitmap);
}

Uh-oh, this doesn't look good — it's JNI. Since I rarely write C++, every time I think of JNI, picture me banging my head against a wall. Still, as Chairman Mao said, all reactionaries are paper tigers.

Bitmap.cpp

static jint Bitmap_rowBytes(JNIEnv* env, jobject, jlong bitmapHandle) {
    SkBitmap* bitmap = reinterpret_cast<SkBitmap*>(bitmapHandle);
    return static_cast<jint>(bitmap->rowBytes());
}

Wait, we seem to be onto something. It turns out a Bitmap is essentially an SkBitmap, and SkBitmap has quite a pedigree: it comes from Skia. Enough talk — let's look at SkBitmap.

SkBitmap.h

/** Return the number of bytes between subsequent rows of the bitmap. */
size_t rowBytes() const { return fRowBytes; }

SkBitmap.cpp

size_t SkBitmap::ComputeRowBytes(Config c, int width) {
    return SkColorTypeMinRowBytes(SkBitmapConfigToColorType(c), width);
}

SkImageInfo.h

static int SkColorTypeBytesPerPixel(SkColorType ct) {
   static const uint8_t gSize[] = {
    0,  // Unknown
    1,  // Alpha_8
    2,  // RGB_565
    2,  // ARGB_4444
    4,  // RGBA_8888
    4,  // BGRA_8888
    1,  // kIndex_8
  };
  SK_COMPILE_ASSERT(SK_ARRAY_COUNT(gSize) == (size_t)(kLastEnum_SkColorType + 1),
                size_mismatch_with_SkColorType_enum);

   SkASSERT((size_t)ct < SK_ARRAY_COUNT(gSize));
   return gSize[ct];
}

static inline size_t SkColorTypeMinRowBytes(SkColorType ct, int width) {
    return width * SkColorTypeBytesPerPixel(ct);
}

OK, tracking it down to here, we see that one pixel of ARGB_8888 (the most common Bitmap format) takes 4 bytes, so rowBytes is actually 4 * width bytes.
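A quick way to convince yourself, assuming an ARGB_8888 Bitmap named bitmap is already loaded (this snippet is just an illustrative check, not framework source):

// For an ARGB_8888 bitmap decoded by BitmapFactory the two values should match
int rowBytesFromApi = bitmap.getRowBytes();
int rowBytesByHand = bitmap.getWidth() * 4;  // 4 bytes per ARGB_8888 pixel
Log.d("RowBytes", "getRowBytes()=" + rowBytesFromApi + ", width*4=" + rowBytesByHand);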

So for an ARGB_8888 Bitmap, the memory footprint is:

bitmapInRam = bitmapWidth * bitmapHeight * 4 bytes

Think that's the end of the story? Try it yourself: the number you calculate and the number you actually measure will always differ by some factor. Why?

Remember that example we gave at the beginning?

A 522*686 PNG image that I put in the drawable-xxhdpi directory and loaded on a Samsung S6 occupies 2547360 bytes of memory, as reported by this method.

But the formula came up with 1432368B…

2.2 Density

Do you know why I went out of my way to say which directory the image is in and which phone it's loaded on? Did you think Bitmap loading depends only on width and height? Naive.

Let's start with the code. We read images under the drawable directories with the decodeResource method, which essentially does two things:

  • Call Resources.openRawResource; after this call the TypedValue is filled in, and it carries the density of the original resource.

  • Call decodeResourceStream to decode and adapt the raw resource. This process is essentially a mapping of the density of the raw resource to the density of the screen.

The density of the original resource depends on the directory in which the resource is stored (e.g. xxhdpi corresponds to 480), and the density of the screen is assigned as follows:

BitmapFactory.java

public static Bitmap decodeResourceStream(Resources res, TypedValue value,
        InputStream is, Rect pad, Options opts) {
    // In our case opts is null, so it gets initialized here
    if (opts == null) {
        opts = new Options();
    }
    if (opts.inDensity == 0 && value != null) {
        final int density = value.density;
        if (density == TypedValue.DENSITY_DEFAULT) {
            opts.inDensity = DisplayMetrics.DENSITY_DEFAULT;
        } else if (density != TypedValue.DENSITY_NONE) {
            opts.inDensity = density;
        }
    }
    if (opts.inTargetDensity == 0 && res != null) {
        opts.inTargetDensity = res.getDisplayMetrics().densityDpi;
    }
    return decodeStream(is, pad, opts);
}

We can see where opts is initialized, and the Options constructor is surprisingly simple:

public Options() {
   inDither = false;
   inScaled = true;
   inPremultiplied = true;
}

So it's easy to see that Options.inScreenDensity is not initialized, and as we'll see later it isn't used at all. What we really care about are inDensity and inTargetDensity, which correspond to density and targetDensity in the C++ file below — again, inDensity is the density of the original resource and inTargetDensity is the density of the screen.

decodeStream then calls into the native nativeDecodeStream method. Skipping the unimportant parts, here is the key doDecode function:

BitmapFactory.cpp

static jobject doDecode(JNIEnv* env, SkStreamRewindable* stream, jobject padding, jobject options) {
    ......
    if (env->GetBooleanField(options, gOptions_scaledFieldID)) {
        const int density = env->GetIntField(options, gOptions_densityFieldID);
        const int targetDensity = env->GetIntField(options, gOptions_targetDensityFieldID);
        const int screenDensity = env->GetIntField(options, gOptions_screenDensityFieldID);
        if (density != 0 && targetDensity != 0 && density != screenDensity) {
            scale = (float) targetDensity / density;
        }
    }

    const bool willScale = scale != 1.0f;
    ......
    SkBitmap decodingBitmap;
    if (!decoder->decode(stream, &decodingBitmap, prefColorType, decodeMode)) {
        return nullObjectReturn("decoder->decode returned false");
    }

    int scaledWidth = decodingBitmap.width();
    int scaledHeight = decodingBitmap.height();

    if (willScale && decodeMode != SkImageDecoder::kDecodeBounds_Mode) {
        scaledWidth = int(scaledWidth * scale + 0.5f);
        scaledHeight = int(scaledHeight * scale + 0.5f);
    }

    if (willScale) {
        const float sx = scaledWidth / float(decodingBitmap.width());
        const float sy = scaledHeight / float(decodingBitmap.height());

        // TODO: avoid copying when scaled size equals decodingBitmap size
        SkColorType colorType = colorTypeForScaledOutput(decodingBitmap.colorType());
        // FIXME: If the alphaType is kUnpremul and the image has alpha, the
        // colors may not be correct, since Skia does not yet support drawing
        // to/from unpremultiplied bitmaps.
        outputBitmap->setInfo(SkImageInfo::Make(scaledWidth, scaledHeight,
                colorType, decodingBitmap.alphaType()));
        if (!outputBitmap->allocPixels(outputAllocator, NULL)) {
            return nullObjectReturn("allocation failed for scaled bitmap");
        }

        // If outputBitmap's pixels are newly allocated by Java, there is no need
        // to erase to 0, since the pixels were initialized to 0.
        if (outputAllocator != &javaAllocator) {
            outputBitmap->eraseColor(0);
        }

        SkPaint paint;
        paint.setFilterLevel(SkPaint::kLow_FilterLevel);

        SkCanvas canvas(*outputBitmap);
        canvas.scale(sx, sy);
        canvas.drawBitmap(decodingBitmap, 0.0f, 0.0f, &paint);
    }
    ......
}

density here is the density of decodingBitmap; it depends on the directory the image was placed in (e.g. hdpi is 240, xxhdpi is 480). I traced that part of the code, but it is too long to list here. targetDensity is the target density of the image we are loading; as shown earlier, it comes from the densityDpi of DisplayMetrics, which is 640 on a Samsung S6. sx and sy are approximately equal to scale, since scaledWidth and scaledHeight are just the width and height multiplied by scale. So the canvas is scaled up by scale and the decoded bitmap is drawn onto it, which effectively enlarges the bitmap by a factor of scale.
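Incidentally, if you want to opt out of this density scaling and get the file's raw pixel size, the Options flag checked at the top of doDecode can be cleared from Java. A minimal sketch, with R.drawable.sample again standing in for a real resource:

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;  // skip the density-based scaling inside doDecode
Bitmap raw = BitmapFactory.decodeResource(getResources(), R.drawable.sample, opts);
Log.d("NoScale", "bytes=" + raw.getByteCount());  // width * height * 4 for ARGB_8888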

Let’s look at our example:

A 522*686 PNG image placed in the drawable-xxhdpi directory and loaded on a Samsung S6 occupies 2547360 bytes, where density = xxhdpi = 480 and targetDensity = Samsung S6 densityDpi = 640:

(522 * 640/480) * (686 * 640/480) * 4 = 2546432 B

2.3 Precision

It's getting more and more interesting, isn't it? You'll notice that the number we calculated so carefully and the number we actually get are still not! the! same!

Why? Since the results are so close, it's natural to suspect precision. Take another look at a line from the code above:

outputBitmap->setInfo(SkImageInfo::Make(scaledWidth, scaledHeight,
            colorType, decodingBitmap.alphaType()));

The final outputBitmap is scaledWidth × scaledHeight in size, and those values are computed as:

if (willScale && decodeMode != SkImageDecoder::kDecodeBounds_Mode) {
    scaledWidth = int(scaledWidth * scale + 0.5f);
    scaledHeight = int(scaledHeight * scale + 0.5f);
}

In our case,

scaledWidth = int(522 * 640 / 480f + 0.5) = int(696.5) = 696

scaledHeight = int(686 * 640 / 480f + 0.5) = int(915.1666…) = 915

Now for the moment of magic:

915 * 696 * 4 = 2547360,

Exciting, isn't it! Exciting, isn't it!!

Writing this, I suddenly remembered the title page of Hou Jie's book "The Annotated STL Sources", on which he wrote only one sentence:

"Before the source code, there are no secrets."

2.4 Summary

In fact, from the previous code trace, we can see that the size of Bitmap memory is actually determined by:

  • The color format: as mentioned earlier, ARGB_8888 uses 4 bytes per pixel while RGB_565 uses 2

  • The directory the original file is stored in (hdpi, xxhdpi, and so on)

  • The target screen density (so a Redmi will definitely use less memory for the same resources than a Samsung S6, all else being equal)
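Putting the three factors together, a back-of-the-envelope estimate of a resource Bitmap's footprint can be sketched like this (a helper written for illustration, not an Android API):

// pxWidth/pxHeight: pixel size of the image file
// bytesPerPixel: 4 for ARGB_8888, 2 for RGB_565
// inDensity: density of the resource directory (e.g. 480 for xxhdpi)
// targetDensity: densityDpi of the device screen (e.g. 640 for a Samsung S6)
static long estimateBitmapBytes(int pxWidth, int pxHeight, int bytesPerPixel,
                                int inDensity, int targetDensity) {
    float scale = (float) targetDensity / inDensity;
    int scaledWidth = (int) (pxWidth * scale + 0.5f);   // same rounding as doDecode
    int scaledHeight = (int) (pxHeight * scale + 0.5f);
    return (long) scaledWidth * scaledHeight * bytesPerPixel;
}

For the earlier example, estimateBitmapBytes(522, 686, 4, 480, 640) returns 2547360, matching the measured value.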

3, How to reduce the Bitmap memory footprint

3.1 JPG and PNG

At this point, some people will say, let’s use JPG, shouldn’t JPG be smaller than PNG?

This is a good question. For the same image, a JPG file is indeed somewhat (or even much) smaller than a PNG, for the simple reason that JPG compression is lossy while PNG is lossless. The JPG is obviously smaller, but the cost is obvious too.

However, this is a matter of file storage: the difference exists only in the file system, not in memory or video memory. To put it simply, having the zip archive of a no-install version of Need for Speed on my disk doesn't mean I can play the game; I have to unpack it first. JPG and PNG are the "zip archive", while what we are discussing is the memory footprint of the unpacked bitmap in use.

Therefore, JPG images should not be any different from PNG images in memory.

“Crack!!”

“Who is so wicked!! Don’t hit the face!”

Some people will surely object: JPG images do take less memory when loaded, and they can show you examples. And they're not necessarily wrong — JPG images have no alpha channel!! So they can be decoded as RGB_565, which is only half the size of ARGB_8888.

However, leaving the Android platform aside and looking at the image files themselves, a JPG is not necessarily smaller than a PNG; it depends on the image content:

JPG is not suitable for simple images with few colors, large areas of similar color, or sharp brightness differences. For richer, more complex images that need high fidelity, PNG can compress losslessly, but the files are larger.

So if all we care about is the size of the decoded Bitmap in memory, there is no real difference between JPG and PNG. The differences between the two mainly show up in:

  • Do you really need an alpha channel? If so, there is no choice: use PNG.

  • Is the image colorful or flat? As mentioned earlier, JPG suits rich, photographic content, while PNG suits things like button backgrounds.

  • How strict are you about installation package size? If your app has few resources and package size is not an issue, choose JPG or PNG case by case. (Although I suspect very few apps these days can afford to be careless about resource files.)

  • Is the target device's CPU powerful? JPG's compression algorithm is more time-consuming than PNG's, so weigh this too. A few years ago I worked with Cocos2d-x for a while, and because of the huge amount of resources the team required PNG across the board, probably for this reason.

Well, I digress. What we really want to talk about is how to reduce memory footprint, and this section just shows that switching file formats isn't the way to do it... XD

3.2 Use inSampleSize

Some of you probably smiled when you saw this. Sampling... I used to study signal processing, and the word alone brings back memories. Joking aside, this sampling works much like sampling in statistics: as long as the final result still meets the requirements, we reduce the amount of data to make subsequent processing cheaper.

This approach is mainly useful when the image resource itself is large, or when moderate downsampling will not hurt the visual result — that is, when the target output is relatively small and there are no strict requirements on resolution or size.

For example

Say we need to blur an image and show it to the user as the src of an ImageView. The original image is 1080×1920. If we blur it at full size, the blur itself is expensive, and the resulting image takes a lot of memory; worse, the input and output bitmaps may coexist during the blur, producing a temporary memory peak.

There are three letters that stick in your head. They are “OOM”.

Since the image is going to be blurred anyway, why not decode a sampled version? With a sample size of 2, the decoded image takes only 1/4 of the memory of the original.

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), resId, options);
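If you don't know a good sample size up front, a common pattern (sketched here under the assumption that resId comes from the surrounding code) is to decode only the bounds first and then pick a power-of-two inSampleSize:

// Step 1: decode only the bounds, which allocates no pixel memory
BitmapFactory.Options bounds = new BitmapFactory.Options();
bounds.inJustDecodeBounds = true;
BitmapFactory.decodeResource(getResources(), resId, bounds);

// Step 2: pick the largest power-of-two sample size that still covers the target size
int reqWidth = 540, reqHeight = 960;  // hypothetical target dimensions
int inSampleSize = 1;
while (bounds.outWidth / (inSampleSize * 2) >= reqWidth
        && bounds.outHeight / (inSampleSize * 2) >= reqHeight) {
    inSampleSize *= 2;
}

// Step 3: decode for real with the chosen sample size
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = inSampleSize;
Bitmap sampled = BitmapFactory.decodeResource(getResources(), resId, opts);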

3.3 Using matrices

Where you use Bitmap, you always see Matrix. What comes to mind at this point?

"Best buddies."

“I lost.”

Actually, if you think about it, a Bitmap's pixel array is just a matrix, so the two really are inseparable. So when do we use a Matrix?

Shrink a big image with sampling; enlarge a small image with a matrix.

Take the blurred-image example from before: we sampled it, so it uses less memory, but the image itself is also smaller. What if I want to draw it at full size with a Canvas? Use a matrix, of course:

Method 1:

Matrix matrix = new Matrix();
matrix.preScale(2, 2, 0f, 0f);
// On a Nexus 6 running 5.1.1, directly replacing the matrix (setMatrix) only works
// with hardware acceleration turned off, so use concat instead
canvas.concat(matrix);
canvas.drawBitmap(bitmap, 0, 0, paint);

Note that when testing on a Nexus 6 running the stock 5.1.1 system, I found that the Canvas setMatrix method can break matrix-related drawing: with setMatrix, the bitmap simply did not appear on screen. So prefer the Canvas scale, rotate, or concat methods instead.

Method 2:

Matrix matrix = new Matrix();
matrix.preScale(2, 2, 0, 0);
canvas.drawBitmap(bitmap, matrix, paint);

Either way, the drawn image is enlarged, but the Bitmap itself still only occupies the memory of the sampled version.

What if I want to put the image into an ImageView? Also doable:

Matrix matrix = new Matrix();
matrix.postScale(2, 2, 0, 0);
imageView.setImageMatrix(matrix);
imageView.setScaleType(ScaleType.MATRIX);
imageView.setImageBitmap(bitmap);

3.4 Choosing the Bitmap pixel format wisely

In fact, we have mentioned this problem many times before. Images in ARGB8888 format take up 4 bytes per pixel compared to 2 bytes for RGB565. Let’s take a look at how many formats are available:

Format      Description
ALPHA_8     Only an alpha channel
ARGB_4444   Discouraged since API 13 because of its poor quality
ARGB_8888   Four channels (A, R, G, B), 8 bits each
RGB_565     2 bytes per pixel: 5 bits red, 6 bits green, 5 bits blue

Of those,

ALPHA_8 stores only an alpha channel and no color, so it is rarely needed in practice.

ARGB_4444 uses only half the memory of ARGB_8888, but it has fallen out of favor because of its poor quality. "Save half the memory and still look good? I just can't do it, T T."

ARGB8888 is the most common and should be the most familiar.

RGB_565: looking at this one, I see resource optimization everywhere. If you don't need an alpha channel, this format is ideal, especially when the resource itself is a JPG.
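For instance, when loading a JPG photo (which has no alpha channel anyway), a minimal sketch of requesting RGB_565 looks like this (R.drawable.photo is a hypothetical JPG resource):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;  // half the bytes of ARGB_8888
Bitmap photo = BitmapFactory.decodeResource(getResources(), R.drawable.photo, opts);
Log.d("Config", "config=" + photo.getConfig() + ", bytes=" + photo.getByteCount());

Note that inPreferredConfig is only a hint; as the next section shows, the decoder may ignore it when the source image doesn't fit the requested format.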

3.5 High Energy: Indexed Bitmap

Indexed bitmaps: 1 byte per pixel, supporting not only RGB but also alpha — sounds great! Hold on, though: Android doesn't officially support them. Yes, you read that right, they are not officially supported.

 public enum Config {
    // these native values must match up with the enum in SkBitmap.h

    ALPHA_8     (2),
    RGB_565     (4),
    ARGB_4444   (5),
    ARGB_8888   (6);

    final int nativeInt;
}

However, the Skia engine does support it — look again:

enum Config {
    kNo_Config,         //!< bitmap has not been configured
    kA8_Config,         //!< 8-bits per pixel, with only alpha specified (0 is transparent, 0xFF is opaque)
    kIndex8_Config,     //!< 8-bits per pixel, using SkColorTable to specify the colors   ← this one
    kRGB_565_Config,    //!< 16-bits per pixel, (see SkColorPriv.h for packing)
    kARGB_4444_Config,  //!< 16-bits per pixel, (see SkColorPriv.h for packing)
    kARGB_8888_Config,  //!< 32-bits per pixel, (see SkColorPriv.h for packing)
    kRLE_Index8_Config,
    kConfigCount
};

In fact, the nativeInt of the Java-layer enum corresponds to the index of the Skia enum, so if we could just get the right index in there, wouldn't that work? Sorry — we can't get at it.

We can't. I'm going to get bricks thrown at me for rambling on like this, T T.

However, in PNG’s decoder library there is this code:

bool SkPNGImageDecoder::getBitmapColorType(png_structp png_ptr, png_infop info_ptr,
                                           SkColorType* colorTypep, bool* hasAlphap,
                                           SkPMColor* SK_RESTRICT theTranspColorp) {
    png_uint_32 origWidth, origHeight;
    int bitDepth, colorType;
    png_get_IHDR(png_ptr, info_ptr, &origWidth, &origHeight, &bitDepth,
                 &colorType, int_p_NULL, int_p_NULL, int_p_NULL);

#ifdef PNG_sBIT_SUPPORTED
    // check for sBIT chunk data, in case we should disable dithering because
    // our data is not truely 8bits per component
    png_color_8p sig_bit;
    if (this->getDitherImage() && png_get_sBIT(png_ptr, info_ptr, &sig_bit)) {
#if 0
        SkDebugf("----- sBIT %d %d %d %d\n", sig_bit->red, sig_bit->green,
                 sig_bit->blue, sig_bit->alpha);
#endif
        // 0 seems to indicate no information available
        if (pos_le(sig_bit->red, SK_R16_BITS) &&
            pos_le(sig_bit->green, SK_G16_BITS) &&
            pos_le(sig_bit->blue, SK_B16_BITS)) {
            this->setDitherImage(false);
        }
    }
#endif

    if (colorType == PNG_COLOR_TYPE_PALETTE) {
        bool paletteHasAlpha = hasTransparencyInPalette(png_ptr, info_ptr);
        *colorTypep = this->getPrefColorType(kIndex_SrcDepth, paletteHasAlpha);
        // now see if we can upscale to their requested colortype
        if (!canUpscalePaletteToConfig(*colorTypep, paletteHasAlpha)) {
            *colorTypep = kIndex_8_SkColorType;
        }
    } else {
        ......
    }
    return true;
}

If canUpscalePaletteToConfig returns false, the colorType ends up as kIndex_8_SkColorType.

static bool canUpscalePaletteToConfig(SkColorType dstColorType, bool srcHasAlpha) {
  switch (dstColorType) {
    case kN32_SkColorType:
    case kARGB_4444_SkColorType:
        return true;
    case kRGB_565_SkColorType:
        // only return true if the src is opaque (since 565 is opaque)
        return !srcHasAlpha;
    default:
        return false;
 }
}

And if dstColorType is kRGB_565_SkColorType while the source image has an alpha channel, it returns false. dstColorType here comes from the inPreferredConfig of the Options we pass in when decoding.

Now it's experiment time ~

Preparation: a 192×192 file called index.png, placed in assets. It is an indexed-color image created in Photoshop.

Code:

try {
    Options options = new Options();
    options.inPreferredConfig = Config.RGB_565;
    Bitmap bitmap = BitmapFactory.decodeStream(getResources().getAssets().open("index.png"), null, options);
    Log.d(TAG, "bitmap.getConfig() = " + bitmap.getConfig());
    Log.d(TAG, "scaled bitmap.getByteCount() = " + bitmap.getByteCount());
    imageView.setImageBitmap(bitmap);
} catch (IOException e) {
    e.printStackTrace();
}

The program runs on a Nexus 6. Since reading from assets does not involve the density scaling discussed above, the theoretical in-memory size of the image (as ARGB_8888) would be 192 * 192 * 4 = 147456 bytes.

Ok, run our code and look at the output of Config and ByteCount:

D/MainActivity: bitmap.getConfig() = null
D/MainActivity: scaled bitmap.getByteCount() = 36864

First, why is the size only 36864? If the discussion above holds, the decoded Bitmap really should be in index format, so its footprint should be only 1/4 that of ARGB_8888 — and 36864 is exactly 147456 / 4. Second, getConfig() is null. Er... a bit shady. The official documentation says:

public final Bitmap.Config getConfig ()

Added in API level 1

If the bitmap’s internal config is in one of the public formats, return that config, otherwise return null.

I'll say it again: shady. XD

This approach really does save a lot of memory. However, because it is not officially supported, it has many limitations: for example, it cannot be configured directly in XML, and the resulting Bitmap cannot be used to back a Canvas, and so on.

3.6 Sometimes you don't need an image at all

We've been complaining about image resources all along, but some scenarios can actually be handled without any images. For example, we often need a loading indicator, which is usually just a few frames of very simple artwork — nothing but black, white, grey, and alpha.

“The schedule is too tight. Give me a series of pictures.”

"OK, but each frame is a 300×300 PNG, five in total, and to adapt to different resolutions we'll need to produce both xxhdpi and xxxhdpi sets."

Orz…

In cases like this, you might as well write a custom View, override onDraw, and draw it yourself — for example, something like the sketch below…
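As a minimal sketch of that idea, here is a hypothetical loading view that draws a rotating grey arc in onDraw instead of decoding any frame images:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;
import android.util.AttributeSet;
import android.view.View;

// A hypothetical, image-free loading indicator: a rotating grey arc drawn in code
public class SimpleLoadingView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final RectF bounds = new RectF();
    private float angle = 0f;

    public SimpleLoadingView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(0xFF888888);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(8f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float r = Math.min(getWidth(), getHeight()) / 2f - 8f;
        bounds.set(getWidth() / 2f - r, getHeight() / 2f - r,
                   getWidth() / 2f + r, getHeight() / 2f + r);
        canvas.drawArc(bounds, angle, 270f, false, paint);  // an open ring
        angle = (angle + 10f) % 360f;
        postInvalidateOnAnimation();  // schedule the next frame
    }
}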

4, Closing notes

In this article we discussed how to calculate the memory a Bitmap occupies at runtime and offered some common suggestions for working with bitmaps. Suddenly I feel it's time to go read the Skia engine.

What can I do — it looks like the C++ I dropped years ago has to be picked back up after all. Poof…

If you think this content is good, forward it to your Moments and share it with your friends.

This article is exclusive content of Tencent Bugly. When reposting, please prominently credit the author and the source "Tencent Bugly (http://bugly.qq.com)" at the beginning of the article.