About the author

Guo Xiaoxing, programmer and guitarist, works mainly on Android platform infrastructure. Technical discussion is welcome: you can open an issue on my Github or email [email protected].

The article directories

  • Quality compression
    • 1.1 Implementation Method
    • 1.2 Implementation Principles
  • Size compression
    • 2.1 Adjacent sampling
    • 2.2 Bilinear sampling

This article introduces the image compression schemes on the Android platform and gives a general picture of image encoding and decoding. Image processing on Android is ultimately implemented in the native layer; due to limited space we will not analyze the native details line by line, but we will outline the underlying implementation so that readers who want to dig deeper have a starting point.

Before introducing the image compression scheme, let’s take a look at the basics of images related to compression. This will also help us understand the meaning of some of the variables defined in Bitmap.java.

Pixel density

Pixel density refers to the number of pixels per inch. In Bitmap it is represented by mDensity/mTargetDensity: mDensity defaults to the pixel density of the device screen, while mTargetDensity is the target pixel density of the image, i.e. the pixel density of the drawable directory the image is loaded from.
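
As a quick illustration (my own sketch, not from the original article), these density values can be observed through BitmapFactory.Options after decoding a resource; R.drawable.timo is the sample image used later in this article.

BitmapFactory.Options options = new BitmapFactory.Options();
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.timo, options);

// inDensity: density of the drawable directory the image was loaded from
// inTargetDensity: density of the current device screen
// The decoded bitmap is scaled by inTargetDensity / inDensity.
Log.d("Density", "inDensity = " + options.inDensity
        + ", inTargetDensity = " + options.inTargetDensity
        + ", bitmap density = " + bitmap.getDensity());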

Color mode

Color mode is the way colors are represented digitally; in Bitmap it is described by Config.

  • ARGB_8888: four bytes per pixel; A, R, G, and B each take 8 bits.
  • RGB_565: two bytes per pixel; R takes 5 bits, G takes 6 bits, B takes 5 bits.
  • ARGB_4444: two bytes per pixel; A, R, G, and B each take 4 bits; the image quality is relatively poor.
  • ALPHA_8: stores only the alpha channel; 8 bits, i.e. one byte per pixel.

One more note on how a Bitmap's memory size is calculated.

Bitmap memory size = pixel width × (inTargetDensity / inDensity) × pixel height × (inTargetDensity / inDensity) × bytes per pixel
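
A quick worked example with illustrative numbers (assuming no density scaling, i.e. inTargetDensity equals inDensity) for a 1823 × 1076 image:

// Illustrative arithmetic only: 1823 x 1076 pixels, scale factor 1.
int width = 1823, height = 1076;
long argb8888 = (long) width * height * 4; // 4 bytes per pixel -> about 7.5 MB
long rgb565   = (long) width * height * 2; // 2 bytes per pixel -> about 3.7 MB
long alpha8   = (long) width * height;     // 1 byte per pixel  -> about 1.9 MB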

There are two ways to get a Bitmap's memory footprint.

  • getByteCount(): added in API 12; returns the minimum number of bytes needed to store the Bitmap's pixels.
  • getAllocationByteCount(): added in API 19; returns the size of the memory actually allocated for the Bitmap, and supersedes getByteCount().

getByteCount() and getAllocationByteCount() return the same result when the Bitmap is not reused. When an image is decoded into a reused Bitmap, getByteCount() gives the memory needed by the newly decoded image, while getAllocationByteCount() gives the memory actually held by the reused Bitmap (i.e. the length of mBuffer).
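
Here is a minimal sketch (my own, assuming API 19+, not from the original article) that shows the difference by decoding into a reused bitmap via Options.inBitmap:

// Decode once into a mutable bitmap so it can be reused later.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap reusable = BitmapFactory.decodeResource(getResources(), R.drawable.timo, options);

// Decode a smaller version of the image into the same allocation.
options.inBitmap = reusable;
options.inSampleSize = 2;
Bitmap reused = BitmapFactory.decodeResource(getResources(), R.drawable.timo, options);

// getByteCount(): memory needed by the newly decoded pixels
// getAllocationByteCount(): memory actually held by the reused allocation (>= getByteCount())
Log.d("Bitmap", reused.getByteCount() + " vs " + reused.getAllocationByteCount());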

In addition to these concepts, we will mention some member variables in Bitmap.java, which you may also encounter frequently.

  • private byte[] mBuffer: the byte array that stores the image's pixel data. Although this is a Java layer array, it is actually created in the C++ layer, as explained below.
  • private final boolean mIsMutable: whether the image is mutable. This is a bit abstract; it is similar to the relationship between String and StringBuffer, where a String cannot be modified and a StringBuffer can (see the short sketch after this list).
  • private boolean mRecycled: whether the image has been recycled. Recycling is also done in the C++ layer.
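
As a small illustration of mutability (my own sketch), a bitmap decoded from resources is immutable by default, and Bitmap.copy() can produce a mutable copy whose pixels may then be modified:

// Bitmaps decoded from resources are immutable by default.
Bitmap decoded = BitmapFactory.decodeResource(getResources(), R.drawable.timo);
Log.d("Bitmap", "isMutable = " + decoded.isMutable()); // false

// copy() produces a new bitmap; passing true makes it mutable.
Bitmap mutable = decoded.copy(Bitmap.Config.ARGB_8888, true);
mutable.setPixel(0, 0, Color.RED); // allowed only on mutable bitmaps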

Knowing the basic concepts, let’s analyze the methods of compressing images.

The Android platform usually compresses images in one of two ways:

  • Quality compression
  • Size compression

Quality compression

1.1 Implementation Method

The key to quality compression is the Bitmap.compress() method. It does not change the dimensions of the image, but it can reduce the image quality and therefore the storage size, which is how the compression is achieved.

compress(CompressFormat format, int quality, OutputStream stream)

It takes three parameters:

  • CompressFormat format: the compression format. There are three options: JPEG, PNG, and WEBP. JPEG is lossy; PNG is lossless, so the stored size will not shrink (i.e. there is no compression effect); WEBP is an image format introduced by Google that saves roughly 30% of space compared with JPEG. For compatibility and space reasons we generally choose JPEG.
  • int quality: a value from 0 to 100; the higher the value, the better the quality and the larger the image.
  • OutputStream stream: the output stream the compressed image is written to.

So let’s write an example to verify that.

File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM),
        "timo_compress_quality_100.jpg");
if (!file.exists()) {
    try {
        file.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.timo);
BufferedOutputStream bos = null;
try {
    bos = new BufferedOutputStream(new FileOutputStream(file));
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, bos);
    bitmap.recycle();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} finally {
    try {
        if (bos != null) {
            bos.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

quality = 100

1823 × 1076, 1.16 MB

quality = 50

1823 × 1076, 124.52 KB

quality = 0

1823 × 1076, 35.80 KB

As you can see, as quality decreases the image quality and file size drop significantly, but the image dimensions stay the same.

1.2 Implementation Principles

The encoding of Android images is done by the Skia library.

Skia is an open-source 2D graphics library that provides a set of commonly used APIs and runs on a variety of hardware and software platforms. Google Chrome, Chrome OS, Android, Firefox, Firefox OS, and many other products use it as their graphics engine.

In the Android source tree, Skia lives under external/skia. Although we rarely use Skia directly in daily development, it matters a great deal: it is a core component of the Android system, and many important operations, such as image encoding/decoding and Canvas drawing, are ultimately carried out by Skia at the native layer. It is also widely used in other Google products.

Skia defines the encoders for the various image formats under src/images:

  • SkImageEncoder.cpp
  • SkJpegEncoder.cpp: JPEG encoder
  • SkPngEncoder.cpp: PNG encoder
  • SkWebpEncoder.cpp: WEBP encoder

Skia itself provides basic drawing and codec functionality, but it also hooks in third-party codec libraries such as libpng.so, libjpeg.so, and libgif.so. As mentioned above, we generally choose JPEG for image compression, so we will focus on JPEG encoding and decoding.

libjpeg is a free library, written entirely in C, for handling the JPEG image format. It contains an implementation of the JPEG codec algorithms and various utilities for processing JPEG data.

Android does not use libjpeg as-is; it has made some changes, specifically:

  • a modified memory-management mode
  • added support for writing compressed data to an output stream

The libjpeg source lives under external/jpeg. Next, let's look at the details of how JPEG compression is implemented.

Let’s look at the implementation process of the whole source code from top to bottom.

public boolean compress(CompressFormat format, int quality, OutputStream stream) {
    checkRecycled("Can't compress a recycled bitmap");
    // do explicit check before calling the native method
    if (stream == null) {
        throw new NullPointerException();
    }
    if (quality < 0 || quality > 100) {
        throw new IllegalArgumentException("quality must be 0..100");
    }
    Trace.traceBegin(Trace.TRACE_TAG_RESOURCES, "Bitmap.compress");
    boolean result = nativeCompress(mNativePtr, format.nativeInt,
            quality, stream, new byte[WORKING_COMPRESS_STORAGE]);
    Trace.traceEnd(Trace.TRACE_TAG_RESOURCES);
    return result;
}

You can see that internally it calls a native method, nativeCompress(), which is declared in Bitmap.java and implemented in Bitmap.cpp.

It ends up calling the Bitmap_compress() function in Bitmap.cpp. Let's take a look at its implementation.

static bool Bitmap_compress(JNIEnv* env, jobject clazz, SkBitmap* bitmap,
                            int format, int quality,
                            jobject jstream, jbyteArray jstorage) {
    SkImageEncoder::Type fm;

    // Select the SkImageEncoder type based on the encoding format
    switch (format) {
    case kJPEG_JavaEncodeFormat:
        fm = SkImageEncoder::kJPEG_Type;
        break;
    case kPNG_JavaEncodeFormat:
        fm = SkImageEncoder::kPNG_Type;
        break;
    case kWEBP_JavaEncodeFormat:
        fm = SkImageEncoder::kWEBP_Type;
        break;
    default:
        return false;
    }

    // Check whether the bitmap pointer is null
    bool success = false;
    if (NULL != bitmap) {
        SkAutoLockPixels alp(*bitmap);

        if (NULL == bitmap->getPixels()) {
            return false;
        }

        // Create an SkWStream that writes the compressed data to the output stream
        SkWStream* strm = CreateJavaOutputStreamAdaptor(env, jstream, jstorage);
        if (NULL == strm) {
            return false;
        }

        // Create the encoder for the chosen format, compress the pixel data the
        // bitmap points to, and write it to the output stream
        SkImageEncoder* encoder = SkImageEncoder::Create(fm);
        if (NULL != encoder) {
            // Call encodeStream to do the encoding
            success = encoder->encodeStream(strm, *bitmap, quality);
            delete encoder;
        }
        delete strm;
    }
    return success;
}

As you can see, the function selects the SkImageEncoder type according to the encoding format, creates the corresponding image encoder, and finally calls encodeStream(strm, *bitmap, quality) to complete the encoding.

For JPEG, the code above creates an SkJpegEncoder, which eventually calls its Make() method, shown below:

std::unique_ptr<SkEncoder> SkJpegEncoder::Make(SkWStream* dst, const SkPixmap& src,
                                               const Options& options) {
    if (!SkPixmapIsValid(src, options.fBlendBehavior)) {
        return nullptr;
    }
    std::unique_ptr<SkJpegEncoderMgr> encoderMgr = SkJpegEncoderMgr::Make(dst);
    if (setjmp(encoderMgr->jmpBuf())) {
        return nullptr;
    }
    if (!encoderMgr->setParams(src.info(), options)) {
        return nullptr;
    }
    // Set the compression quality
    jpeg_set_quality(encoderMgr->cinfo(), options.fQuality, TRUE);
    // Start compression
    jpeg_start_compress(encoderMgr->cinfo(), TRUE);
    sk_sp<SkData> icc = icc_from_color_space(src.info());
    if (icc) {
        // Create a contiguous block of memory with the icc signature followed by the profile.
        sk_sp<SkData> markerData =
                SkData::MakeUninitialized(kICCMarkerHeaderSize + icc->size());
        uint8_t* ptr = (uint8_t*) markerData->writable_data();
        memcpy(ptr, kICCSig, sizeof(kICCSig));
        ptr += sizeof(kICCSig);
        *ptr++ = 1; // This is the first marker.
        *ptr++ = 1; // Out of one total markers.
        memcpy(ptr, icc->data(), icc->size());
        jpeg_write_marker(encoderMgr->cinfo(), kICCMarker, markerData->bytes(), markerData->size());
    }
    return std::unique_ptr<SkJpegEncoder>(new SkJpegEncoder(std::move(encoderMgr), src));
}

Above is the whole image compression process.

The libjpeg that ships with Android is usually good enough. If your business requires higher image quality at lower storage cost, consider the following two libraries:

  • libjpeg-turbo: an enhanced libjpeg. It is a JPEG image codec that uses SIMD instructions (MMX, SSE2, NEON, AltiVec) to accelerate baseline JPEG compression and decompression on x86, x86-64, ARM, and PowerPC systems. On such systems, libjpeg-turbo is generally 2-6x as fast as libjpeg, all else being equal. On other kinds of systems, libjpeg-turbo can still outperform libjpeg thanks to its highly optimized Huffman coding routines. In many cases its performance rivals that of proprietary high-speed JPEG codecs.
  • mozjpeg: a Mozilla project based on libjpeg-turbo that improves JPEG compression without degrading image quality, while remaining compatible with mainstream decoders.

Size compression

In essence, size compression is a resampling process: enlarging an image is up-sampling and shrinking it is down-sampling. Android provides two image sampling methods: adjacent (nearest-neighbor) sampling and bilinear sampling.

2.1 Adjacent sampling

Adjacent sampling uses nearest-neighbor interpolation, which replaces a block of neighboring pixels with a single one of them.

Its implementation code is also very familiar.

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.blue_red, options);
String savePath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM).getAbsolutePath()
        + "/timo_BitmapFactory_1.png";
ImageUtils.save(bitmap, savePath, Bitmap.CompressFormat.PNG);

inSampleSize = 1

inSampleSize = 32

You can see that the key to this approach is the selection of inSampleSize, which determines the size of the compressed image.

For example, if inSampleSize is 4, the width and height of the compressed image will each be 1/4 of the original, and the number of pixels will be 1/16 of the original. inSampleSize should normally be a power of 2; if it is not, the decoder rounds it down to the nearest power of 2 internally.
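
A common way to choose inSampleSize, sketched below following the approach recommended in the official Android documentation (the requested dimensions are illustrative), is to keep doubling it while the decoded image stays at least as large as the requested size:

// Compute the largest power-of-2 inSampleSize that keeps the decoded
// image at least reqWidth x reqHeight.
public static int calculateInSampleSize(BitmapFactory.Options options,
                                        int reqWidth, int reqHeight) {
    final int height = options.outHeight; // raw dimensions from an inJustDecodeBounds pass
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;
        // Keep doubling until the halved dimensions would drop below the requested size.
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}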

For choosing inSampleSize, the Luban library provides a good reference; its author describes the algorithm as follows.

Algorithm ideas

1. Determine which range the image aspect ratio falls into:
   - [1, 0.5625): the image is in the ratio range [1:1 ~ 9:16)
   - [0.5625, 0.5): the image is in the ratio range [9:16 ~ 1:2)
   - [0.5, 0): the image is in the ratio range [1:2 ~ 1:∞)
2. Determine whether the longest edge of the image exceeds the boundary values:
   - [1, 0.5625): boundary values are 1664 * n (n = 1), 4990 * n (n = 2), 1280 * pow(2, n-1) (n ≥ 3)
   - [0.5625, 0.5): boundary values are 1280 * pow(2, n-1) (n ≥ 1)
   - [0.5, 0): boundary values are 1280 * pow(2, n-1) (n ≥ 1)
3. Calculate the actual edge lengths of the compressed image, based on the result of step 2. If a boundary value is exceeded: width / pow(2, n-1), height / pow(2, n-1).
4. Calculate the target file size of the compressed image based on the results of steps 2 and 3: size = (newW * newH) / (width * height) * m
   - [0.5625, 0.5): width = 1440, height = 2560, m = 200
   - [0.5, 0): width = 1280, height = 1280 / scale, m = 500
5. Check whether the size from step 4 is too small:
   - [1, 0.5625): the minimum size is 60, 60, or 100 for the three cases above
   - [0.5625, 0.5): the minimum size is 100
   - [0.5, 0): the minimum size is 100

The specific implementation

private int computeSize() {
    int mSampleSize;

    mSourceWidth = mSourceWidth % 2 == 1 ? mSourceWidth + 1 : mSourceWidth;
    mSourceHeight = mSourceHeight % 2 == 1 ? mSourceHeight + 1 : mSourceHeight;

    mSourceWidth = mSourceWidth > mSourceHeight ? mSourceHeight : mSourceWidth;
    mSourceHeight = mSourceWidth > mSourceHeight ? mSourceWidth : mSourceHeight;

    double scale = ((double) mSourceWidth / mSourceHeight);

    if (scale <= 1 && scale > 0.5625) {
        if (mSourceHeight < 1664) {
            mSampleSize = 1;
        } else if (mSourceHeight >= 1664 && mSourceHeight < 4990) {
            mSampleSize = 2;
        } else if (mSourceHeight >= 4990 && mSourceHeight < 10240) {
            mSampleSize = 4;
        } else {
            mSampleSize = mSourceHeight / 1280 == 0 ? 1 : mSourceHeight / 1280;
        }
    } else if (scale <= 0.5625 && scale > 0.5) {
        mSampleSize = mSourceHeight / 1280 == 0 ? 1 : mSourceHeight / 1280;
    } else {
        mSampleSize = (int) Math.ceil(mSourceHeight / (1280.0 / scale));
    }

    return mSampleSize;
}

The core idea is to derive an appropriate sample size from the width, height, and aspect ratio of the original image.
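
Once a sample size has been computed, whether by Luban's computeSize() or a helper like the one sketched earlier, it is typically applied with a two-pass decode (the file path below is hypothetical):

// First pass: read only the bounds so the sample size can be computed.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeFile("/sdcard/DCIM/timo.jpg", options); // hypothetical path

// Second pass: decode for real with the chosen sample size.
options.inJustDecodeBounds = false;
options.inSampleSize = calculateInSampleSize(options, 1080, 1920); // helper from the earlier sketch
Bitmap bitmap = BitmapFactory.decodeFile("/sdcard/DCIM/timo.jpg", options);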

Let's also look at the underlying implementation of this approach. BitmapFactory has many decode methods, and they all end up calling native methods:

private static native Bitmap nativeDecodeStream(InputStream is, byte[] storage,
        Rect padding, Options opts);
private static native Bitmap nativeDecodeFileDescriptor(FileDescriptor fd, Rect padding, Options opts);
private static native Bitmap nativeDecodeAsset(long nativeAsset, Rect padding, Options opts);
private static native Bitmap nativeDecodeByteArray(byte[] data, int offset,
        int length, Options opts);

These native methods are implemented in BitmapFactory.cpp and ultimately call the doDecode() function.

static jobject doDecode(JNIEnv* env, SkStream* stream, jobject padding,
        jobject options, bool allowPurgeable, bool forcePurgeable = false,
        bool applyScale = false, float scale = 1.0f) {
    int sampleSize = 1;
    // Image decoding mode, here the pixel mode
    SkImageDecoder::Mode mode = SkImageDecoder::kDecodePixels_Mode;
    // Parameter initialization
    SkBitmap::Config prefConfig = SkBitmap::kARGB_8888_Config;
    bool doDither = true;
    bool isMutable = false;
    bool willScale = applyScale && scale != 1.0f;
    bool isPurgeable = !willScale &&
            (forcePurgeable || (allowPurgeable && optionsPurgeable(env, options)));
    bool preferQualityOverSpeed = false;

    // The javaBitmap object
    jobject javaBitmap = NULL;
    // Initialize the parameters from options
    if (options != NULL) {
        sampleSize = env->GetIntField(options, gOptions_sampleSizeFieldID);
        if (optionsJustBounds(env, options)) {
            mode = SkImageDecoder::kDecodeBounds_Mode;
        }
        // initialize these, in case we fail later on
        env->SetIntField(options, gOptions_widthFieldID, -1);
        env->SetIntField(options, gOptions_heightFieldID, -1);
        env->SetObjectField(options, gOptions_mimeFieldID, 0);
        jobject jconfig = env->GetObjectField(options, gOptions_configFieldID);
        prefConfig = GraphicsJNI::getNativeBitmapConfig(env, jconfig);
        isMutable = env->GetBooleanField(options, gOptions_mutableFieldID);
        doDither = env->GetBooleanField(options, gOptions_ditherFieldID);
        preferQualityOverSpeed = env->GetBooleanField(options,
                gOptions_preferQualityOverSpeedFieldID);
        javaBitmap = env->GetObjectField(options, gOptions_bitmapFieldID);
    }
    if (willScale && javaBitmap != NULL) {
        return nullObjectReturn("Cannot pre-scale a reused bitmap");
    }

    // Create the image decoder and set the parameters passed from the Java layer,
    // such as sampleSize, doDither, etc.
    SkImageDecoder* decoder = SkImageDecoder::Factory(stream);
    if (decoder == NULL) {
        return nullObjectReturn("SkImageDecoder::Factory returned null");
    }
    decoder->setSampleSize(sampleSize);
    decoder->setDitherImage(doDither);
    decoder->setPreferQualityOverSpeed(preferQualityOverSpeed);
    NinePatchPeeker peeker(decoder);
    // Java pixel allocator
    JavaPixelAllocator javaAllocator(env);
    SkBitmap* bitmap;
    if (javaBitmap == NULL) {
        bitmap = new SkBitmap;
    } else {
        if (sampleSize != 1) {
            return nullObjectReturn("SkImageDecoder: Cannot reuse bitmap with sampleSize != 1");
        }
        bitmap = (SkBitmap*) env->GetIntField(javaBitmap, gBitmap_nativeBitmapFieldID);
        // config of supplied bitmap overrules config set in options
        prefConfig = bitmap->getConfig();
    }
    SkAutoTDelete<SkImageDecoder> add(decoder);
    SkAutoTDelete<SkBitmap> adb(bitmap, javaBitmap == NULL);
    decoder->setPeeker(&peeker);
    if (!isPurgeable) {
        decoder->setAllocator(&javaAllocator);
    }
    AutoDecoderCancel adc(options, decoder);
    // To fix the race condition in case "requestCancelDecode"
    // happens earlier than AutoDecoderCancel object is added
    // to the gAutoDecoderCancelMutex linked list.
    if (options != NULL && env->GetBooleanField(options, gOptions_mCancelID)) {
        return nullObjectReturn("gOptions_mCancelID");
    }
    SkImageDecoder::Mode decodeMode = mode;
    if (isPurgeable) {
        decodeMode = SkImageDecoder::kDecodeBounds_Mode;
    }

    // Decode
    SkBitmap* decoded;
    if (willScale) {
        decoded = new SkBitmap;
    } else {
        decoded = bitmap;
    }
    SkAutoTDelete<SkBitmap> adb2(willScale ? decoded : NULL);
    if (!decoder->decode(stream, decoded, prefConfig, decodeMode, javaBitmap != NULL)) {
        return nullObjectReturn("decoder->decode returned false");
    }

    // Scale
    int scaledWidth = decoded->width();
    int scaledHeight = decoded->height();
    if (willScale && mode != SkImageDecoder::kDecodeBounds_Mode) {
        scaledWidth = int(scaledWidth * scale + 0.5f);
        scaledHeight = int(scaledHeight * scale + 0.5f);
    }
    // Update the option parameters
    if (options != NULL) {
        env->SetIntField(options, gOptions_widthFieldID, scaledWidth);
        env->SetIntField(options, gOptions_heightFieldID, scaledHeight);
        env->SetObjectField(options, gOptions_mimeFieldID,
                getMimeTypeString(env, decoder->getFormat()));
    }

    // In justBounds mode no Bitmap object is created; return null.
    // options.inJustDecodeBounds = true only parses the size, it does not actually load the image
    if (mode == SkImageDecoder::kDecodeBounds_Mode) {
        return NULL;
    }
    jbyteArray ninePatchChunk = NULL;
    if (peeker.fPatchIsValid) {
        if (willScale) {
            scaleNinePatchChunk(peeker.fPatch, scale);
        }
        size_t ninePatchArraySize = peeker.fPatch->serializedSize();
        ninePatchChunk = env->NewByteArray(ninePatchArraySize);
        if (ninePatchChunk == NULL) {
            return nullObjectReturn("ninePatchChunk == null");
        }
        jbyte* array = (jbyte*) env->GetPrimitiveArrayCritical(ninePatchChunk, NULL);
        if (array == NULL) {
            return nullObjectReturn("primitive array == null");
        }
        peeker.fPatch->serialize(array);
        env->ReleasePrimitiveArrayCritical(ninePatchChunk, array, 0);
    }
    // detach bitmap from its autodeleter, since we want to own it now
    adb.detach();

    // Handle scaling
    if (willScale) {
        // This is weird so let me explain: we could use the scale parameter
        // directly, but for historical reasons this is how the corresponding
        // Dalvik code has always behaved. We simply recreate the behavior here.
        // The result is slightly different from simply using scale because of
        // the 0.5f rounding bias applied when computing the target image size
        const float sx = scaledWidth / float(decoded->width());
        const float sy = scaledHeight / float(decoded->height());
        bitmap->setConfig(decoded->getConfig(), scaledWidth, scaledHeight);
        bitmap->allocPixels(&javaAllocator, NULL);
        bitmap->eraseColor(0);
        SkPaint paint;
        paint.setFilterBitmap(true);
        SkCanvas canvas(*bitmap);
        canvas.scale(sx, sy);
        canvas.drawBitmap(*decoded, 0.0f, 0.0f, &paint);
    }

    // Handle the padding of the image
    if (padding) {
        if (peeker.fPatchIsValid) {
            GraphicsJNI::set_jrect(env, padding,
                    peeker.fPatch->paddingLeft, peeker.fPatch->paddingTop,
                    peeker.fPatch->paddingRight, peeker.fPatch->paddingBottom);
        } else {
            GraphicsJNI::set_jrect(env, padding, -1, -1, -1, -1);
        }
    }
    SkPixelRef* pr;
    if (isPurgeable) {
        pr = installPixelRef(bitmap, stream, sampleSize, doDither);
    } else {
        // if we get here, we're in kDecodePixels_Mode and will therefore
        // already have a pixelref installed.
        pr = bitmap->pixelRef();
    }
    if (!isMutable) {
        // promise we will never change our pixels (great for sharing and pictures)
        pr->setImmutable();
    }
    if (javaBitmap != NULL) {
        // If a java bitmap was passed in for reuse, pass it back
        return javaBitmap;
    }
    // Create a Bitmap object and return it
    return GraphicsJNI::createBitmap(env, bitmap, javaAllocator.getStorageObj(),
            isMutable, ninePatchChunk);
}

We notice that the createBitmap() method is called at the end to create a Bitmap object. This method is defined in Graphics.cpp. Let's see how it creates a Bitmap.

jobject GraphicsJNI::createBitmap(JNIEnv* env, SkBitmap* bitmap, jbyteArray buffer,
                                  bool isMutable, jbyteArray ninepatch, int density)
{
    SkASSERT(bitmap);
    SkASSERT(bitmap->pixelRef());
    // Call the Java method to create an object
    jobject obj = env->NewObject(gBitmap_class, gBitmap_constructorMethodID,
            static_cast<jint>(reinterpret_cast<uintptr_t>(bitmap)),
            buffer, isMutable, ninepatch, density);
    hasException(env); // For the side effect of logging.
    // Return a Bitmap object
    return obj;
}

As you can see, the C++ layer ultimately calls back into the Java layer through JNI to create the Java Bitmap object. This completes the analysis of the BitmapFactory decoding process.

2.2 Bilinear sampling

Bilinear sampling uses the bilinear interpolation algorithm. Whereas adjacent sampling simply and crudely picks a single pixel to stand in for its neighbors, bilinear sampling takes the values of the 2×2 pixels around the corresponding position in the source image and weights them according to their relative distance to compute the target pixel.
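
To make the weighting concrete, here is a small sketch of the bilinear formula for a single channel (my own illustration, not Android's internal code): the four surrounding pixels are mixed according to the fractional offsets dx and dy.

// Bilinear interpolation of one channel at a fractional position.
// p00..p11 are the four surrounding source pixels, dx/dy the fractional offsets in [0, 1).
static double bilinear(double p00, double p10, double p01, double p11,
                       double dx, double dy) {
    double top    = p00 * (1 - dx) + p10 * dx; // interpolate along x on the upper row
    double bottom = p01 * (1 - dx) + p11 * dx; // interpolate along x on the lower row
    return top * (1 - dy) + bottom * dy;       // then interpolate along y
}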

Using it on Android is also simple:

Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.blue_red);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
Bitmap scaledBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth() / 2, bitmap.getHeight() / 2, matrix, true);
String savePath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM).getAbsolutePath()
        + "/timo_BitmapFactory_1.png";
ImageUtils.save(scaledBitmap, savePath, Bitmap.CompressFormat.PNG);

createBitmap(Bitmap source, int x, int y, int width, int height, Matrix m, boolean filter)

This method takes seven parameters:

  • Bitmap source: the source bitmap
  • int x: the x coordinate of the first pixel taken from the source
  • int y: the y coordinate of the first pixel taken from the source
  • int width: the width (in pixels) of the region taken from the source
  • int height: the height (in pixels) of the region taken from the source
  • Matrix m: the transformation matrix to apply
  • boolean filter: whether to enable filtering when the source is transformed

Let’s look at the implementation.

public static Bitmap createBitmap(Bitmap source, int x, int y, int width, int height,
        Matrix m, boolean filter) {

    // Check parameters.
    int neww = width;
    int newh = height;
    Canvas canvas = new Canvas();
    Bitmap bitmap;
    Paint paint;

    Rect srcR = new Rect(x, y, x + width, y + height);
    RectF dstR = new RectF(0, 0, width, height);

    // Select an image encoding format consistent with the source image
    Config newConfig = Config.ARGB_8888;
    final Config config = source.getConfig();
    // GIF files generate null configs, assume ARGB_8888
    if (config != null) {
        switch (config) {
            case RGB_565:
                newConfig = Config.RGB_565;
                break;
            case ALPHA_8:
                newConfig = Config.ALPHA_8;
                break;
            //noinspection deprecation
            case ARGB_4444:
            case ARGB_8888:
            default:
                newConfig = Config.ARGB_8888;
                break;
        }
    }

    if (m == null || m.isIdentity()) {
        bitmap = createBitmap(neww, newh, newConfig, source.hasAlpha());
        paint = null;   // not needed
    } else {
        final boolean transformed = !m.rectStaysRect();

        // Get the new image size via the Matrix transform
        RectF deviceR = new RectF();
        m.mapRect(deviceR, dstR);

        neww = Math.round(deviceR.width());
        newh = Math.round(deviceR.height());

        // Pass the image parameters down to create a Bitmap object
        bitmap = createBitmap(neww, newh, transformed ? Config.ARGB_8888 : newConfig,
                transformed || source.hasAlpha());

        canvas.translate(-deviceR.left, -deviceR.top);
        canvas.concat(m);

        paint = new Paint();
        paint.setFilterBitmap(filter);
        if (transformed) {
            paint.setAntiAlias(true);
        }
    }

    // The new bitmap was created from a known bitmap source so assume that
    // they use the same density
    bitmap.mDensity = source.mDensity;
    bitmap.setHasAlpha(source.hasAlpha());
    bitmap.setPremultiplied(source.mRequestPremultiplied);

    canvas.setBitmap(bitmap);
    canvas.drawBitmap(source, srcR, dstR, paint);
    canvas.setBitmap(null);

    return bitmap;
}

As you can see, the method calls its overload createBitmap(neww, newh, transformed ? Config.ARGB_8888 : newConfig, transformed || source.hasAlpha()), and that overload in turn relies on a native method to actually create the Bitmap.

private static native Bitmap nativeCreate(int[] colors, int offset,
                                              int stride, int width, int height,
                                              int nativeConfig, boolean mutable);

This method corresponds to the Bitmap_creator() function in Bitmap.cpp.

static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors,
                              int offset, int stride, int width, int height,
                              SkBitmap::Config config, jboolean isMutable) {
    if (NULL != jColors) {
        size_t n = env->GetArrayLength(jColors);
        if (n < SkAbs32(stride) * (size_t)height) {
            doThrowAIOOBE(env);
            return NULL;
        }
    }

    // The SkBitmap object
    SkBitmap bitmap;

    // Set the image configuration information
    bitmap.setConfig(config, width, height);

    // Create the image array, which corresponds to mBuffer in Bitmap.java
    jbyteArray buff = GraphicsJNI::allocateJavaPixelRef(env, &bitmap, NULL);
    if (NULL == buff) {
        return NULL;
    }

    if (jColors != NULL) {
        GraphicsJNI::SetPixels(env, jColors, offset, stride,
                               0, 0, width, height, bitmap);
    }

    // Create a Bitmap object and return it
    return GraphicsJNI::createBitmap(env, new SkBitmap(bitmap), buff, isMutable, NULL);
}

You can see above that the allocateJavaPixelRef() method is called to create the image array; it is defined in Graphics.cpp.

jbyteArray GraphicsJNI::allocateJavaPixelRef(JNIEnv* env, SkBitmap* bitmap,
                                             SkColorTable* ctable) {
    Sk64 size64 = bitmap->getSize64();
    if (size64.isNeg() || !size64.is32()) {
        jniThrowException(env, "java/lang/IllegalArgumentException",
                          "bitmap size exceeds 32bits");
        return NULL;
    }
    size_t size = size64.get32();
    // Call a Java layer method to create a Java array
    jbyteArray arrayObj = env->NewByteArray(size);
    if (arrayObj) {
        // TODO: make this work without jniGetNonMovableArrayElements
        // Get the array address
        jbyte* addr = jniGetNonMovableArrayElements(&env->functions, arrayObj);
        if (addr) {
            SkPixelRef* pr = new AndroidPixelRef(env, (void*) addr, size, arrayObj, ctable);
            bitmap->setPixelRef(pr)->unref();
            // since we're already allocated, we lockPixels right away
            // HeapAllocator behaves this way too
            bitmap->lockPixels();
        }
    }
    return arrayObj;
}

In other words, Bitmap.createBitmap() ultimately creates the Java layer Bitmap object through GraphicsJNI::createBitmap().

These are the two sampling methods native to Android, and if these do not meet your business needs, consider the following two methods.

  • Bicubic sampling: bicubic sampling uses the bicubic interpolation algorithm. Nearest-neighbor interpolation determines the target pixel from a single source pixel, and bilinear interpolation weights the 2×2 pixels around the corresponding source position; bicubic interpolation goes a step further and references the 4×4 pixels around that position (see the cubic-kernel sketch after this list). The bicubic interpolation algorithm is implemented in FFmpeg's libswscale/swscale.c; see the FFmpeg Scaler Documentation for details.
  • Lanczos sampling: Lanczos resampling and Lanczos filtering are two common applications of the Lanczos algorithm. It can be used as a low-pass filter or to smoothly interpolate a digital signal between its samples. Lanczos resampling is typically used to raise the sampling rate of a digital signal, or to decimate it to lower the sampling rate.
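
As a rough illustration of the bicubic idea, here is a sketch of one commonly used cubic kernel (Catmull-Rom; an assumption for illustration, not necessarily the kernel FFmpeg or any particular library uses by default). Bicubic sampling applies it along x for four rows and then along y.

// Catmull-Rom cubic interpolation of four consecutive samples p0..p3,
// at offset t in [0, 1] between p1 and p2.
static double cubic(double p0, double p1, double p2, double p3, double t) {
    return p1 + 0.5 * t * (p2 - p0
            + t * (2 * p0 - 5 * p1 + 4 * p2 - p3
            + t * (3 * (p1 - p2) + p3 - p0)));
}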

That's all for image compression on the Android platform; in the next article we will analyze how video compression is implemented. In addition, the Phoenix project fully implements compression for both images and videos, and its image compression uses the Luban algorithm mentioned above; you can use it as a reference in your own projects.