Preface

We are all familiar with .9 (nine-patch) images. We often use one when a control's background needs to stretch to fit its content. The picture below shows a .9 image. Here's the official definition:

A NinePatchDrawable graphic is a stretchable bitmap that can be used as the background of a view. Android automatically resizes the graphic to fit the contents of the view. A NinePatch graphic is a standard PNG image with an extra 1-pixel border. It must be saved in the project's res/drawable/ directory with the .9.png extension.

One might say: what's the point? I've been using these since I started Android development. But often, the simpler something is, the easier it is to overlook. Let's work through these questions step by step:

  1. How does Android recognize a .9 image?
  2. Does a .9 image have to live in the res/drawable directory? How does Android handle it, and why is the black border gone when it is displayed on a phone?
  3. Do I have to use a .9 image to get the adaptive effect?

PNG

Definition

According to the official description, a .9 image is a standard PNG image with a few extra pixels added, so first we need to know what PNG is.

Portable Network Graphics (PNG) is a bitmap graphics format that supports lossless compression. It supports indexed, grayscale and RGB color modes, plus an alpha channel. PNG is widely used on the Web and beyond because it was developed to improve on and replace GIF with a web-friendly format free of patent licensing.

File structure

A file, like a protocol, is ultimately just data. And just as a protocol has a header that identifies it, so does a file. A PNG file begins with an eight-byte signature (89 50 4E 47 0D 0A 1A 0A in hexadecimal), and systems identify PNG files by these eight bytes.
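As a quick illustration, here is a minimal sketch (plain Java, names my own) of how a reader might check those eight signature bytes:

```java
import java.util.Arrays;

public class PngSignature {
    // The eight-byte PNG file signature: 89 50 4E 47 0D 0A 1A 0A.
    // 0x89 (high bit set) catches 7-bit transmission damage, "PNG" names
    // the format, and the CR/LF/EOF/LF tail catches line-ending conversion.
    private static final byte[] SIGNATURE = {
            (byte) 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A
    };

    // Returns true if the buffer starts with the PNG signature.
    public static boolean isPng(byte[] header) {
        if (header == null || header.length < SIGNATURE.length) return false;
        return Arrays.equals(Arrays.copyOf(header, SIGNATURE.length), SIGNATURE);
    }
}
```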

Immediately after the signature come the data chunks. PNG chunks fall into two categories. One is the critical chunks that every PNG file must contain and every reader must support. The other, ancillary chunks, may be ignored by software that does not recognize them. This chunk-based design lets the PNG format stay backward compatible as it is extended.
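The critical/ancillary distinction is encoded in the case of the type code's letters: per the PNG specification, a lowercase first letter marks a chunk as ancillary, i.e. safe for readers to ignore. A tiny sketch (the class name is my own):

```java
public class ChunkProps {
    // Per the PNG spec, property bits are encoded in the case of each letter
    // of a chunk's four-letter type code: a lowercase first letter (bit 5 of
    // the byte set) marks the chunk as ancillary, i.e. safe to ignore.
    public static boolean isAncillary(String type) {
        return Character.isLowerCase(type.charAt(0));
    }
}
```

This is why Android can smuggle its own chunks into a PNG without breaking ordinary viewers: a chunk type like "npTc" starts with a lowercase letter, so any decoder that doesn't know it simply skips it.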

Format of data block:

| Name | Bytes | Description |
| --- | --- | --- |
| Length | 4 | Length of the chunk's data field, which cannot exceed 2^31 − 1 bytes |
| Chunk Type Code | 4 | Chunk type code, made up of ASCII letters (A–Z and a–z) |
| Chunk Data | variable | The data identified by the Chunk Type Code |
| CRC | 4 | Cyclic redundancy code used to detect errors |
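Following the chunk format above, a minimal chunk walker might look like this (plain Java, names my own; it assumes the buffer starts right after the 8-byte signature and does not verify CRCs):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ChunkWalker {
    // Lists the chunk type codes in a PNG stream, following the layout above:
    // 4-byte big-endian length, 4-byte ASCII type, `length` bytes of data,
    // then a 4-byte CRC. `body` must start right after the 8-byte signature.
    public static List<String> chunkTypes(byte[] body) {
        List<String> types = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(body); // ByteBuffer is big-endian by default, like PNG
        while (buf.remaining() >= 12) {         // minimum chunk: length + type + CRC
            int length = buf.getInt();
            byte[] type = new byte[4];
            buf.get(type);
            types.add(new String(type, StandardCharsets.US_ASCII));
            buf.position(buf.position() + length + 4); // skip data and CRC
        }
        return types;
    }
}
```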

There are four standard critical chunks:

  • Image header chunk (IHDR): contains basic image information; appears exactly once, as the first chunk.
  • Palette chunk (PLTE): must come before the image data chunk.
  • Image data chunk (IDAT): stores the actual image data; a PNG stream allows multiple consecutive IDAT chunks.
  • Image trailer chunk (IEND): placed at the end of the file, marking the end of the PNG data stream.

Of course, there’s more to PNG than that, but if you’re interested in learning more, you can read RFC 2083.

So it is not hard to guess how .9 works: it adds its own recognizable chunks among the PNG's ancillary chunks, and the image is treated specially at display time based on them.

How does Android load a .9 image?

In Android, an image is represented by a Bitmap. Let's see how a Bitmap is read from a file:

//BitmapFactory.java
​
public static Bitmap decodeFile(String pathName) {
    return decodeFile(pathName, null);
}
​
private static native Bitmap nativeDecodeStream(InputStream is, byte[] storage,
            Rect padding, Options opts, long inBitmapHandle, long colorSpaceHandle);

To read an image from a file path, we call BitmapFactory's decodeFile method. I have omitted part of the call chain here; eventually we reach nativeDecodeStream, which is a native method. Let's see how the C++ side is implemented:

//BitmapFactory.cpp
static jobject nativeDecodeStream(JNIEnv* env, jobject clazz, jobject is, jbyteArray storage,
        jobject padding, jobject options, jlong inBitmapHandle, jlong colorSpaceHandle) {
    ...
    if (stream.get()) {
        ...
        bitmap = doDecode(env, std::move(bufferedStream), padding, options,
                inBitmapHandle, colorSpaceHandle);
    }
    return bitmap;
}

static jobject doDecode(JNIEnv* env, std::unique_ptr<SkStreamRewindable> stream,
        jobject padding, jobject options, jlong inBitmapHandle, jlong colorSpaceHandle) {
    ...
    NinePatchPeeker peeker;
    std::unique_ptr<SkAndroidCodec> codec;
    {
        ...
        std::unique_ptr<SkCodec> c = SkCodec::MakeFromStream(std::move(stream), &result, &peeker);
        ...
    }
    ...
    jbyteArray ninePatchChunk = NULL;
    if (peeker.mPatch != NULL) {
        size_t ninePatchArraySize = peeker.mPatch->serializedSize();
        ninePatchChunk = env->NewByteArray(ninePatchArraySize);
        jbyte* array = (jbyte*) env->GetPrimitiveArrayCritical(ninePatchChunk, NULL);
        memcpy(array, peeker.mPatch, peeker.mPatchSize);
        env->ReleasePrimitiveArrayCritical(ninePatchChunk, array, 0);
    }
    // now create the java bitmap
    return bitmap::createBitmap(env, defaultAllocator.getStorageObjAndReset(),
            bitmapCreateFlags, ninePatchChunk, ninePatchInsets, -1);
}

The doDecode method is long; only the key parts are shown here. Note the NinePatchPeeker: its pointer is passed to MakeFromStream, and later its mPatch is copied into a byte array that becomes a constructor argument of the Bitmap. Let's keep going:

// SkCodec.cpp
// NinePatchPeeker is passed in as the third parameter, chunkReader
std::unique_ptr<SkCodec> SkCodec::MakeFromStream(std::unique_ptr<SkStream> stream,
        Result* outResult, SkPngChunkReader* chunkReader, SelectionPolicy selectionPolicy) {
    ...
#ifdef SK_HAS_PNG_LIBRARY
    if (SkPngCodec::IsPng(buffer, bytesRead)) {
        return SkPngCodec::MakeFromStream(std::move(stream), outResult, chunkReader);
    } else
#endif
    ...
}

// SkPngCodec.cpp
std::unique_ptr<SkCodec> SkPngCodec::MakeFromStream(std::unique_ptr<SkStream> stream,
        Result* result, SkPngChunkReader* chunkReader) {
    SkCodec* outCodec = nullptr;
    *result = read_header(stream.get(), chunkReader, &outCodec, nullptr, nullptr);
    if (kSuccess == *result) {
        // Codec has taken ownership of the stream.
        SkASSERT(outCodec);
        stream.release();
    }
    return std::unique_ptr<SkCodec>(outCodec);
}

static SkCodec::Result read_header(SkStream* stream, SkPngChunkReader* chunkReader,
        SkCodec** outCodec, png_structp* png_ptrp, png_infop* info_ptrp) {
    ...
#ifdef PNG_READ_UNKNOWN_CHUNKS_SUPPORTED
    // Hookup our chunkReader so we can see any user-chunks the caller may be interested in.
    // This needs to be installed before we read the png header.  Android may store ninepatch
    // chunks in the header.
    if (chunkReader) {
        png_set_keep_unknown_chunks(png_ptr, PNG_HANDLE_CHUNK_ALWAYS, (png_byte*)"", 0);
        png_set_read_user_chunk_fn(png_ptr, (png_voidp) chunkReader, sk_read_user_chunk);
    }
#endif
    ...
}

Focus on png_set_read_user_chunk_fn, which is passed the chunkReader and the sk_read_user_chunk callback:

#ifdef PNG_READ_USER_CHUNKS_SUPPORTED
void PNGAPI
png_set_read_user_chunk_fn(png_structrp png_ptr, png_voidp user_chunk_ptr,
    png_user_chunk_ptr read_user_chunk_fn) {
   ...
   png_ptr->read_user_chunk_fn = read_user_chunk_fn;
   png_ptr->user_chunk_ptr = user_chunk_ptr;
}
#endif

This method simply assigns two fields of png_ptr, a pointer to the PNG read structure. read_user_chunk_fn is later invoked from pngrutil.c:

// pngrutil.c
void png_handle_unknown(png_structrp png_ptr, png_inforp info_ptr,
    png_uint_32 length, int keep) {
    ...
#  ifdef PNG_READ_USER_CHUNKS_SUPPORTED
   if (png_ptr->read_user_chunk_fn != NULL) {
      if (png_cache_unknown_chunk(png_ptr, length) != 0) {
         /* Callback to user unknown chunk handler */
         int ret = (*(png_ptr->read_user_chunk_fn))(png_ptr,
             &png_ptr->unknown_chunk);
      }
   }
  ...
}

libpng calls read_user_chunk_fn to hand unknown (user-defined) chunks to the caller, and read_user_chunk_fn is the sk_read_user_chunk we saw above:

// SkPngCodec.cpp
#ifdef PNG_READ_UNKNOWN_CHUNKS_SUPPORTED
static int sk_read_user_chunk(png_structp png_ptr, png_unknown_chunkp chunk) {
    SkPngChunkReader* chunkReader = (SkPngChunkReader*)png_get_user_chunk_ptr(png_ptr);
    // readChunk() returning true means continue decoding
    return chunkReader->readChunk((const char*)chunk->name, chunk->data, chunk->size) ? 1 : -1;
}
#endif

// pngget.c
#ifdef PNG_USER_CHUNKS_SUPPORTED
png_voidp PNGAPI
png_get_user_chunk_ptr(png_const_structrp png_ptr) {
    return (png_ptr ? png_ptr->user_chunk_ptr : NULL);
}
#endif

png_get_user_chunk_ptr returns the SkPngChunkReader we installed earlier, whose concrete implementation is NinePatchPeeker:

// NinePatchPeeker.cpp
bool NinePatchPeeker::readChunk(const char tag[], const void* data, size_t length) {
    if (!strcmp("npTc", tag) && length >= sizeof(Res_png_9patch)) {
        Res_png_9patch* patch = (Res_png_9patch*) data;
        size_t patchSize = patch->serializedSize();
        if (length != patchSize) {
            return false;
        }
        // You have to copy the data because it is owned by the png reader
        Res_png_9patch* patchNew = (Res_png_9patch*) malloc(patchSize);
        memcpy(patchNew, patch, patchSize);
        Res_png_9patch::deserialize(patchNew);
        patchNew->fileToDevice();
        free(mPatch);
        mPatch = patchNew;
        mPatchSize = patchSize;
    } else if (!strcmp("npLb", tag) && length == sizeof(int32_t) * 4) {
        mHasInsets = true;
        memcpy(&mOpticalInsets, data, sizeof(int32_t) * 4);
    } else if (!strcmp("npOl", tag) && length == 24) { // 4 int32_ts, 1 float, 1 int32_t sized byte
        mHasInsets = true;
        memcpy(&mOutlineInsets, data, sizeof(int32_t) * 4);
        mOutlineRadius = ((const float*)data)[4];
        mOutlineAlpha = ((const int32_t*)data)[5] & 0xff;
    }
    return true;
}

After all this digging, we finally reach our destination. A .9 image carries three chunks: npTc, npLb and npOl; the stretch information lives in npTc, which is deserialized into a structure called Res_png_9patch. We can learn a lot from this structure's comments; if you don't want to read them, skip ahead to my explanation below:

/**
 * This chunk specifies how to split an image into segments for
 * scaling.
 *
 * There are J horizontal and K vertical segments.  These segments divide
 * the image into J*K regions as follows (where J=4 and K=3):
 *
 *      F0   S0    F1     S1
 *   +-----+----+------+-------+
 * S2|  0  |  1 |  2   |   3   |
 *   +-----+----+------+-------+
 *   |     |    |      |       |
 *   |     |    |      |       |
 * F2|  4  |  5 |  6   |   7   |
 *   |     |    |      |       |
 *   |     |    |      |       |
 *   +-----+----+------+-------+
 * S3|  8  |  9 |  10  |   11  |
 *   +-----+----+------+-------+
 *
 * Each horizontal and vertical segment is considered to by either
 * stretchable (marked by the Sx labels) or fixed (marked by the Fy
 * labels), in the horizontal or vertical axis, respectively. In the
 * above example, the first is horizontal segment (F0) is fixed, the
 * next is stretchable and then they continue to alternate. Note that
 * the segment list for each axis can begin or end with a stretchable
 * or fixed segment.
 *
 * ...
 *
 * The colors array contains hints for each of the regions. They are
 * ordered according left-to-right and top-to-bottom as indicated above.
 * For each segment that is a solid color the array entry will contain
 * that color value; otherwise it will contain NO_COLOR. Segments that
 * are completely transparent will always have the value TRANSPARENT_COLOR.
 *
 * The PNG chunk type is "npTc".
 */
struct alignas(uintptr_t) Res_png_9patch {
    int8_t wasDeserialized;
    uint8_t numXDivs, numYDivs, numColors;
    uint32_t xDivsOffset, yDivsOffset, colorsOffset;
    int32_t paddingLeft, paddingRight, paddingTop, paddingBottom;

    enum {
        // The 9 patch segment is not a solid color.
        NO_COLOR = 0x00000001,
        // The 9 patch segment is completely transparent.
        TRANSPARENT_COLOR = 0x00000000
    };
    ...
    inline int32_t* getXDivs() const {
        return reinterpret_cast<int32_t*>(reinterpret_cast<uintptr_t>(this) + xDivsOffset);
    }
    inline int32_t* getYDivs() const {
        return reinterpret_cast<int32_t*>(reinterpret_cast<uintptr_t>(this) + yDivsOffset);
    }
    inline uint32_t* getColors() const {
        return reinterpret_cast<uint32_t*>(reinterpret_cast<uintptr_t>(this) + colorsOffset);
    }
};

The comments tell us several things:

  • The image is divided into regions, and the coordinates of the stretchable segments are stored in the xDivs and yDivs arrays.

  • An "S" segment can stretch (in an actual .9 image, the range marked by the 1-pixel black line on the edge), and an "F" segment is fixed.

    Following the example in the comment, the image is divided into 12 regions. S0 means regions 1, 5 and 9 can stretch on the horizontal axis, and S1 means regions 3, 7 and 11 can. So xDivs and yDivs store data like this:

    xDivs = [S0.start, S0.end, S1.start, S1.end]

    yDivs = [S2.start, S2.end, S3.start, S3.end]

  • colors describes the color of each region, ordered left-to-right and top-to-bottom. Normally NO_COLOR = 0x00000001, as defined in the source code, does the job:

    colors = [c0, c1, c2, …, c11]

  • When there is more than one stretchable segment horizontally (or vertically), the extra space is split among them in proportion to the lengths of their marked ranges. Say S0 is 1 pixel wide and S1 is 3 pixels wide; then they stretch in a 1:3 ratio.
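The proportional rule in the last bullet can be sketched as follows (plain Java; the function name and shape are my own, not Android's actual implementation):

```java
public class StretchCalc {
    // Distributes the extra width of a stretched nine-patch across its
    // stretchable segments, in proportion to each segment's marked length.
    // `divs` is an xDivs-style array: [S0.start, S0.end, S1.start, S1.end, ...].
    // Returns the number of extra pixels each stretchable segment receives.
    public static float[] distributeExtra(int[] divs, int extraPixels) {
        int segments = divs.length / 2;
        float totalStretchable = 0;
        for (int i = 0; i < segments; i++) {
            totalStretchable += divs[2 * i + 1] - divs[2 * i];
        }
        float[] extras = new float[segments];
        for (int i = 0; i < segments; i++) {
            extras[i] = extraPixels * (divs[2 * i + 1] - divs[2 * i]) / totalStretchable;
        }
        return extras;
    }
}
```

For example, with S0 marked as 1 pixel (10–11) and S1 as 3 pixels (20–23), 8 extra pixels are split 2:6, i.e. the 1:3 ratio described above.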

The data structure

Then, from the serialization method of Res_png_9patch, we can infer the data structure of this chunk

void Res_png_9patch::serialize(const Res_png_9patch& patch, const int32_t* xDivs,
                               const int32_t* yDivs, const uint32_t* colors, void* outData) {
    uint8_t* data = (uint8_t*) outData;
    memcpy(data, &patch.wasDeserialized, 4);     // copy  wasDeserialized, numXDivs, numYDivs, numColors
    memcpy(data + 12, &patch.paddingLeft, 16);   // copy paddingXXXX
    data += 32;
​
    memcpy(data, xDivs, patch.numXDivs * sizeof(int32_t));
    data +=  patch.numXDivs * sizeof(int32_t);
    memcpy(data, yDivs, patch.numYDivs * sizeof(int32_t));
    data +=  patch.numYDivs * sizeof(int32_t);
    memcpy(data, colors, patch.numColors * sizeof(uint32_t));
}
| Name | Bytes | Description |
| --- | --- | --- |
| wasDeserialized | 1 | A value of -1 means this is not a .9 chunk |
| numXDivs | 1 | Length of the xDivs array |
| numYDivs | 1 | Length of the yDivs array |
| numColors | 1 | Length of the colors array |
| (unused) | 4 | Meaningless in serialized form |
| (unused) | 4 | Meaningless in serialized form |
| paddingLeft | 4 | Left edge of the horizontal content area |
| paddingRight | 4 | Right edge of the horizontal content area |
| paddingTop | 4 | Top edge of the vertical content area |
| paddingBottom | 4 | Bottom edge of the vertical content area |
| (unused) | 4 | Meaningless in serialized form |
| xDivs | numXDivs * 4 | Horizontal stretch regions (black line on the top edge) |
| yDivs | numYDivs * 4 | Vertical stretch regions (black line on the left edge) |
| colors | numColors * 4 | Color of each region |
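To make the layout concrete, here is a hedged sketch of a parser for this serialized form (plain Java, names my own; the byte order is taken as a parameter, since this sketch makes no claim about which order the native code uses):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NinePatchChunk {
    public byte wasDeserialized;
    public int numXDivs, numYDivs, numColors;
    public int paddingLeft, paddingRight, paddingTop, paddingBottom;
    public int[] xDivs, yDivs, colors;

    // Decodes a serialized npTc payload following the table above:
    // a 32-byte header (with three unused 4-byte slots), then the
    // xDivs, yDivs and colors arrays back to back.
    public static NinePatchChunk parse(byte[] data, ByteOrder order) {
        ByteBuffer buf = ByteBuffer.wrap(data).order(order);
        NinePatchChunk c = new NinePatchChunk();
        c.wasDeserialized = buf.get();          // -1 would mean "not a .9 chunk"
        c.numXDivs = buf.get() & 0xFF;
        c.numYDivs = buf.get() & 0xFF;
        c.numColors = buf.get() & 0xFF;
        buf.getInt();                           // unused 4 bytes
        buf.getInt();                           // unused 4 bytes
        c.paddingLeft = buf.getInt();
        c.paddingRight = buf.getInt();
        c.paddingTop = buf.getInt();
        c.paddingBottom = buf.getInt();
        buf.getInt();                           // unused 4 bytes
        c.xDivs = new int[c.numXDivs];
        for (int i = 0; i < c.numXDivs; i++) c.xDivs[i] = buf.getInt();
        c.yDivs = new int[c.numYDivs];
        for (int i = 0; i < c.numYDivs; i++) c.yDivs[i] = buf.getInt();
        c.colors = new int[c.numColors];
        for (int i = 0; i < c.numColors; i++) c.colors[i] = buf.getInt();
        return c;
    }
}
```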

Summary

That clears up how Android loads a .9 image into a Bitmap: while reading the PNG header information, the npTc chunk is found and read out of the stream, and its bytes are handed to the Bitmap as a constructor argument. Now let's look at drawing.

Drawing

A .9 image is drawn by NinePatchDrawable, used like this:

val bitmap = BitmapFactory.decodeFile(absolutePath)
// Check whether the bitmap's ninePatchChunk is a valid .9 chunk,
// i.e. whether wasDeserialized (the first byte) is not -1
val isNinePatch = NinePatch.isNinePatchChunk(bitmap.ninePatchChunk)
if (isNinePatch) {
    // Construct a NinePatchDrawable
    val background = NinePatchDrawable(context.resources, bitmap,
            bitmap.ninePatchChunk, Rect(), null)
    imageView.background = background
}

NinePatchDrawable's draw method eventually calls into native code. For reasons of space, here is just the call stack; you can go read the source if you are interested:

NinePatchDrawable.java -> draw()
NinePatch.java         -> draw()
Canvas.java            -> drawPatch()
BaseCanvas.java        -> drawPatch() -> nDrawNinePatch()  // a native method; all of Canvas's
                                                           // native methods map to the native-layer SkiaCanvas
SkiaCanvas.cpp         -> drawNinePatch()
SkCanvas.cpp           -> drawImageLattice()
SkDevice.cpp           -> onDrawImageLattice() -> drawImageRect()
SkBitmapDevice.cpp     -> drawBitmapRect()

At this point we've covered everything from loading to drawing, but we've left out one piece: compilation.

Compilation

Where does the npTc chunk come from? And why do we save the image under res/drawable/?

At compile time, AAPT processes the images under res/drawable/; when it finds a .9 image, it strips the 1-pixel black border, encodes the marked regions into an npTc chunk, and writes that chunk into the PNG ahead of the image data.

We can open an uncompiled .9 image with Vim and have a look:

Here we can see basic chunks such as IHDR and IEND. Now let's compile the .9 image with aapt, as follows:

./aapt s -i xxx_in.png -o xxx_out.png

Opening the output with Vim, you can see a lot more information, including the npTc chunk corresponding to the .9 image; and if you open the picture itself, you will find the surrounding black lines are gone.

Conclusion

Let's answer the questions posed at the beginning of this article.

Does a .9 image have to be in the res/drawable directory?

Not necessarily. If you need to load .9 images from the assets directory, the sdcard, or the network, you can do that too; you just have to process the images with AAPT by hand first.

Do you have to use .9 images to get the adaptive effect, or can an ordinary image work?

Once you understand how .9 images work, the answer is clear: an ordinary image can work. We can construct the ninePatchChunk byte array ourselves and pass it, together with the Bitmap, to a NinePatchDrawable.
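As a rough illustration only (the class and method names are my own, and the byte order the native side expects is left as a caller decision), a builder for such a chunk might look like this:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NinePatchChunkBuilder {
    private static final int NO_COLOR = 0x00000001; // region is not a solid color

    // Serializes an npTc-style payload in the layout derived in the
    // "data structure" section: a 32-byte header (with three unused
    // 4-byte slots), then xDivs, yDivs and a colors array where every
    // region is marked NO_COLOR (the common case per the source comments).
    public static byte[] build(int[] xDivs, int[] yDivs, int[] padding,
                               int numRegions, ByteOrder order) {
        int size = 32 + (xDivs.length + yDivs.length + numRegions) * 4;
        ByteBuffer buf = ByteBuffer.allocate(size).order(order);
        buf.put((byte) 0);                      // wasDeserialized: must not be -1
        buf.put((byte) xDivs.length);           // numXDivs
        buf.put((byte) yDivs.length);           // numYDivs
        buf.put((byte) numRegions);             // numColors
        buf.putInt(0).putInt(0);                // two unused 4-byte slots
        for (int p : padding) buf.putInt(p);    // left, right, top, bottom
        buf.putInt(0);                          // one more unused 4-byte slot
        for (int d : xDivs) buf.putInt(d);
        for (int d : yDivs) buf.putInt(d);
        for (int i = 0; i < numRegions; i++) buf.putInt(NO_COLOR);
        return buf.array();
    }
}
```

On Android you would pass the returned array, together with an ordinary Bitmap and a padding Rect, to the NinePatchDrawable constructor, as in the drawing section above.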