Numerical representation of the image

RGB

The RGB color model is the red, green and blue color model. It imitates the three kinds of cone cells in the biological retina, and a color image is then displayed by superimposing the three primary colors on top of black. In RGB color space the red, green and blue components can be treated as the X, Y and Z axes of a Cartesian coordinate system, and each component ranges from 0 to 255.

RGB is based on the principle of additive color: mixing works like overlapping red, green and blue lights. Where the lights overlap, their colors mix and the brightness equals the sum of the brightnesses of the individual lights.

The red, green and blue channels each have 256 brightness levels, from 0 (the light is off) to 255 (the light is at its brightest). When the three channels have the same value the result is a shade of gray: all three at 0 gives the darkest black, and all three at 255 gives the brightest white.

A method of encoding color is called a color space (often loosely referred to as a gamut).

In the simplest terms, a color space defines how any color can be expressed as a fixed set of numbers or variables. RGB (red, green, blue) is just one of many color spaces: each color is represented by three variables, the red, green and blue intensities. RGB is the most common scheme for recording and displaying color images.

So every image can be composed of RGB values; how, then, is the RGB of a pixel represented? In audio, each sample is typically represented by 16 bits. What about the sub-pixels (components) of a pixel? There are several commonly used representations.

  • Floating-point representation: The value ranges from 0.0 to 1.0. For example, OpenGL ES uses this representation for each sub-pixel.

  • Integer representation: the value is an integer ranging from 0 to 255 (0x00 to 0xFF). Eight bits represent one sub-pixel, so 32 bits represent one RGBA pixel (see the sketch below).
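As a small illustration (a sketch with hypothetical helper names, not any particular Android API), converting a channel between the two representations could look like this in Kotlin:

// Hypothetical helpers: map an 8-bit channel (0..255) to the 0.0..1.0 float range and back.
fun channelToFloat(value8: Int): Float = (value8 and 0xFF) / 255f

fun channelToInt(valueF: Float): Int = (valueF.coerceIn(0f, 1f) * 255f + 0.5f).toInt()

fun main() {
    println(channelToFloat(0xFF)) // 1.0
    println(channelToInt(0.5f))   // 128
}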

On the Android platform, RGB_565 represents one pixel in 16 bits: R uses 5 bits, G uses 6 bits and B uses 5 bits.
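As a minimal sketch (hypothetical helper, not an Android API), packing three 8-bit channels into an RGB_565 value looks like this:

// Pack 8-bit R, G, B into a 16-bit RGB_565 value: RRRRR GGGGGG BBBBB.
fun packRgb565(r: Int, g: Int, b: Int): Int =
    ((r and 0xFF shr 3) shl 11) or
    ((g and 0xFF shr 2) shl 5) or
    (b and 0xFF shr 3)

fun main() {
    // Pure red (0xFF, 0x00, 0x00) becomes 0xF800 in RGB_565.
    println(Integer.toHexString(packRgb565(0xFF, 0x00, 0x00))) // f800
}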

Images are generally described with the integer representation. For example, the raw size of a 1280×720 RGBA_8888 image can be calculated as follows:

1280 * 720 * 4 = 3686400 bytes ≈ 3.516 MB

This is also roughly the amount of memory the corresponding bitmap occupies, so the raw data of every image is large. Raw pixel data is impractical to transmit directly over the network, which is why compressed image formats exist.
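As a quick on-device check (a sketch; allocationByteCount is part of the android.graphics.Bitmap API and should match the calculation above for an ARGB_8888 bitmap):

import android.graphics.Bitmap

fun logRawSize() {
    val bmp = Bitmap.createBitmap(1280, 720, Bitmap.Config.ARGB_8888)
    // 4 bytes per pixel for ARGB_8888
    val expected = 1280 * 720 * 4
    // expected = 3,686,400 bytes; allocationByteCount should report the same value here
    println("expected=$expected actual=${bmp.allocationByteCount}")
}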

Note that, according to the decoding rules of Android's image engine, pixel data seen from JNI is in ABGR order; keep this in mind when extracting RGB data.

Color values in Android generally follow the RGB/ARGB standard and are usually written as 8 hexadecimal digits, prefixed with the "#" character (or 0x for a hexadecimal literal). ARGB denotes, in order, Alpha (transparency), Red, Green and Blue. Each component ranges from 0 to 255 (0x00 to 0xFF). For A, 0x00 to 0xFF runs from fully transparent to opaque; for R, G and B, 0x00 to 0xFF runs from dark to bright. The color is black when R, G and B are all at their minimum (0, i.e. 0x000000) and white when they are all at their maximum (255, i.e. 0xFFFFFF).

  • Red: (255, 0, 0) or 0x00FF0000
  • Green: (0, 255, 0) or 0x0000FF00
  • Blue: (0, 0, 255) or 0x000000FF
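As a quick sketch using android.graphics.Color (here with a fully opaque alpha of 0xFF, unlike the 0x00 alpha in the list above), packing and printing these values looks like:

import android.graphics.Color

fun logColors() {
    val red  = Color.argb(0xFF, 0xFF, 0x00, 0x00) // 0xFFFF0000
    val blue = Color.argb(0xFF, 0x00, 0x00, 0xFF) // 0xFF0000FF
    println(Integer.toHexString(red))  // ffff0000
    println(Integer.toHexString(blue)) // ff0000ff
}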

Byte order (endianness)

This is an issue to be aware of when using RGB data in Android.

For example, the 32-bit value 0x0A0B0C0D (the 0x prefix indicates hexadecimal) can be stored in memory in the following ways:

Little-endian order

  • Little-endian is also known as the "little tail" order.

Data in 8-bit units:

Address growth direction →
0x0D 0x0C 0x0B 0x0A

The least significant byte, 0x0D, is stored at the lowest memory address; each subsequent byte is stored at the next higher address.

Big-endian order

  • Big-endian is also known as the "big tail" order.

Data in 8-bit units:

Address growth direction →
0x0A 0x0B 0x0C 0x0D

The most significant byte, 0x0A, is stored at the lowest memory address; the next byte, 0x0B, is stored at the following address. This matches the left-to-right reading order of the hexadecimal value.

Mixed order

  • Mixed (middle-endian) order is more complex.

In the PDP-11 example, 0x0A0B0C0D is stored as:

32-bit storage on the PDP-11

Address growth direction →
0x0B 0x0A 0x0D 0x0C

The two 16-bit halves are stored in big-endian order relative to each other, but the bytes inside each 16-bit half are stored in little-endian order.
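A small Kotlin sketch (java.nio.ByteBuffer defaults to big-endian; the order can be switched explicitly) makes the byte layouts above concrete:

import java.nio.ByteBuffer
import java.nio.ByteOrder

fun main() {
    val value = 0x0A0B0C0D

    val big = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(value).array()
    val little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(value).array()

    // Big-endian:    0a, 0b, 0c, 0d  (most significant byte at the lowest address)
    // Little-endian: 0d, 0c, 0b, 0a  (least significant byte at the lowest address)
    println(big.joinToString { "%02x".format(it) })
    println(little.joinToString { "%02x".format(it) })
}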

Bitmap pixel arrangement

On Android, the Java/Kotlin layer uses big-endian by default, so what you see is what you get, while C/C++ code in the NDK follows the machine's native byte order, which is little-endian on common Android hardware.

This is easy to verify:

import java.nio.ByteOrder
......
// call
ByteOrder.nativeOrder()
....
// get
LITTLE_ENDIAN

When we create a Bitmap on Android:

Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)

The documentation comment of Bitmap.Config.ARGB_8888 indicates this packing:

int color = (A & 0xff) << 24 | (B & 0xff) << 16 | (G & 0xff) << 8 | (R & 0xff);

The packed component order here is A, B, G, R, i.e. ABGR.

But there are two ways to read pixel values in a Bitmap on Android that are not in this order. Why?

The value order of getPixel()/getPixels()

Methods:

public void getPixels(@ColorInt int[] pixels, int offset, int stride,
                      int x, int y, int width, int height) {
    ...
    nativeGetPixels(mNativePtr, pixels, offset, stride, x, y, width, height);
}

Finally, the native method nativeGetPixels is called. We don’t care how Native processes it.

The Bitmap's pixel data is copied into the pixels array, with values expressed in the ColorSpace.Named#SRGB color space.

That is, each pixel is an int value packed from the four ARGB components, 8 bits each.

Pixel assembly:

int color = (A & 0xff) << 24 | (R & 0xff) << 16 | (G & 0xff) << 8 | (B & 0xff);

Get a single pixel value:

 int A = (color >> 24) & 0xff; // or color >>> 24
 int R = (color >> 16) & 0xff;
 int G = (color >>  8) & 0xff;
 int B = (color      ) & 0xff;
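A minimal usage sketch of getPixels() (assuming a small ARGB_8888 bitmap, here called bmp) that decodes every pixel with the formulas above:

import android.graphics.Bitmap
import android.graphics.Color

fun dumpPixels(bmp: Bitmap) {
    val pixels = IntArray(bmp.width * bmp.height)
    // stride == width: rows are tightly packed in the destination array
    bmp.getPixels(pixels, 0, bmp.width, 0, 0, bmp.width, bmp.height)
    for (color in pixels) {
        val a = Color.alpha(color) // (color >> 24) & 0xff
        val r = Color.red(color)   // (color >> 16) & 0xff
        val g = Color.green(color) // (color >> 8) & 0xff
        val b = Color.blue(color)  //  color & 0xff
        println("a=$a r=$r g=$g b=$b")
    }
}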

The value order of copyPixelsToBuffer()

Take a look at the specific method:

/**
     * <p>Copy the pixels from the buffer, beginning at the current position,
     * overwriting the bitmap's pixels. The data in the buffer is not changed
     * in any way (unlike setPixels(), which converts from unpremultipled 32bit
     * to whatever the bitmap's native format is. The pixels in the source
     * buffer are assumed to be in the bitmap's color space.</p>
     * <p>After this method returns, the current position of the buffer is
     * updated: the position is incremented by the number of elements read from
     * the buffer. If you need to read the bitmap from the buffer again you must
     * first rewind the buffer.</p>
     * @throws IllegalStateException if the bitmap's config is {@link Config#HARDWARE}
     */
public void copyPixelsFromBuffer(Buffer src) {
    ...
    nativeCopyPixelsFromBuffer(mNativePtr, src);
    ...
}

The data in the buffer is not changed.

In other words, the native layer stores the bitmap's pixels in RGBA byte order, and copying to the buffer does not change that order.

Let’s verify this briefly:

val tempBitmap = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888)
val canvas = Canvas(tempBitmap)
val paint = Paint()
paint.style = Paint.Style.FILL
paint.color = Color.rgb(0x11, 0x22, 0x33)
canvas.drawRect(0f, 0f, tempBitmap.width.toFloat(), tempBitmap.height.toFloat(), paint)

val byteSize = tempBitmap.allocationByteCount
val byteBuffer: ByteBuffer = ByteBuffer.allocateDirect(byteSize)
tempBitmap.copyPixelsToBuffer(byteBuffer)
byteBuffer.rewind()
val out = ByteArray(4)
byteBuffer.get(out, 0, out.size)
val pixel = tempBitmap.getPixel(0, 0)
val a = Color.alpha(pixel)
val r = Color.red(pixel)
val g = Color.green(pixel)
val b = Color.blue(pixel)
Log.d("pixel = ", "$pixel")
Log.d("pixel = ", "a= $a, r= $r, g=$g, b=$b")
Log.d("pixel 16 = ", "a= ${a.toString(16)}, r= ${r.toString(16)}, g=${g.toString(16)}, b=${b.toString(16)}")
for (element in out) {
    Log.d("out = ", element.toString(16))
}

The printed values:

pixel =:    { -15654349 }

pixel =:    { a= 255, r= 17, g=34, b=51 }
// ARGB
pixel 16=:  { a= ff, r= 11, g=22, b=33 }
// RGBA
out   =     { 11, 22, 33, -1 }

-1 is stored in two's complement (invert the bits of 1 and add 1), which as an unsigned byte is the hexadecimal 0xFF.
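A one-line Kotlin check (a sketch that reuses the out array from the snippet above) converts each signed byte back to its unsigned value:

// -1 as a signed byte is 0xFF (255) once masked back to an unsigned value.
val unsigned = out.map { it.toInt() and 0xFF }
println(unsigned) // [17, 34, 51, 255], i.e. R, G, B, A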

JNI value order

ARGB_8888 corresponds to a Bitmap byte order of ABGR.

So does ANDROID_BITMAP_FORMAT_RGBA_8888 in JNI.

A simple verification:

Similarly, the pixel 0x112233 from above is used:

Note that we used paint.color = Color.rgb(0x11, 0x22, 0x33), so alpha takes its default value:

0xff000000 | (red << 16) | (green << 8) | blue;

Kotlin:

external fun handleBitmapForSinglePixel(bitmap: Bitmap)

Define macros in ABGR order:

#define MAKE_ABGR(a, b, g, r) (((a & 0xff) << 24) | ((b & 0xff) << 16) | ((g & 0xff) << 8) | (r & 0xff))

#define BGR_8888_A(p) ((p & (0xff << 24)) >> 24)
#define BGR_8888_B(p) ((p & (0xff << 16)) >> 16)
#define BGR_8888_G(p) ((p & (0xff << 8))  >> 8)
#define BGR_8888_R(p) (p & (0xff))

Corresponding JNI method:

extern "C"
JNIEXPORT void JNICALL
Java_tt_reducto_ndksample_BitmapOps_handleBitmapForSinglePixel(JNIEnv *env, jobject thiz, jobject bitmap) {
    AndroidBitmapInfo bitmapInfo;
// memset(&bitmapInfo , 0 , sizeof(bitmapInfo));
    int ret = AndroidBitmap_getInfo(env, bitmap, &bitmapInfo);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGE("AndroidBitmap_getInfo() bitmap failed ! error=%d", ret)
    }
    // Get Bitmap pixel cache pointer: iterate to read BGRA data from Bitmap memory addrPtr
    void *addrPtr;
    ret = AndroidBitmap_lockPixels(env, bitmap, &addrPtr);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGE("AndroidBitmap_lockPixels() bitmap failed ! error=%d", ret)
    }

    // The logic to perform the image operation
    // Get the width and height
    uint32_t mWidth = bitmapInfo.width;
    uint32_t mHeight = bitmapInfo.height;
    // Get native data
    auto pixelArr = ((uint32_t *) addrPtr);

    LOGE("bitmap width = %d", mWidth)
    LOGE("bitmap height = %d", mHeight)
    LOGE("bitmap format = %d", bitmapInfo.format)
    int a,r, g, b;
    for (int x = 0; x < mWidth; ++x) {

        for (int y = 0; y < mHeight; ++y) {
            LOGE("handleBitmapForSinglePixel %d", pixelArr[0])
            void *pixel = nullptr;
            // Move the pixel pointer
            pixel = pixelArr + y * mWidth + x;
            // Obtain the corresponding pointer value according to ABGR storage sequence value
            uint32_t v = *((uint32_t *) pixel);
            // 
            a = BGR_8888_A(v);
            r = BGR_8888_R(v);
            g = BGR_8888_G(v);
            b = BGR_8888_B(v);
            //
            LOGD("bitmapInfo a %d", a)
            LOGD("bitmapInfo r %d", r)
            LOGD("bitmapInfo g %d", g)
            LOGD("bitmapInfo b %d", b)

        }
    }
    // Release the cache pointer
    AndroidBitmap_unlockPixels(env, bitmap);
}

The printed values:

2020-08-19 16:58:55.374  9562-9562/tt.reducto.ndksample E/TTNative: handleBitmapForSinglePixel -13426159
2020-08-19 16:58:55.374  9562-9562/tt.reducto.ndksample D/TTNative: bitmapInfo a 255
2020-08-19 16:58:55.374  9562-9562/tt.reducto.ndksample D/TTNative: bitmapInfo r 17
2020-08-19 16:58:55.374  9562-9562/tt.reducto.ndksample D/TTNative: bitmapInfo g 34
2020-08-19 16:58:55.374  9562-9562/tt.reducto.ndksample D/TTNative: bitmapInfo b 51

-13426159 converted to binary:

1100 1100 1101 1101 1110 1111      13426159 (0xCCDDEF)
----------------------------- invert the bits
0011 0011 0010 0010 0001 0000
----------------------------- add 1
0011 0011 0010 0010 0001 0001      0x33    0x22    0x11
                                     b       g       r
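The same two's complement conversion can be checked in one line of Kotlin (toUInt comes from Kotlin's unsigned integer types):

// -13426159 reinterpreted as an unsigned 32-bit value.
println((-13426159).toUInt().toString(16)) // ff332211 -> A=ff, B=33, G=22, R=11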

Skia processing

The Android bitmap processing flow:

Java layer functions -> Native layer functions -> Skia library functions -> corresponding third-party library functions (e.g. libjpeg)

All the native operations behind Bitmap.createBitmap() live in android/graphics/Bitmap.cpp:

static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors, jint offset, jint stride, jint width, jint height, jint configHandle, jboolean isMutable, jlong colorSpacePtr) {
    // Convert the gamut
    SkColorType colorType = GraphicsJNI::legacyBitmapConfigToColorType(configHandle);
    if (NULL != jColors) {
        size_t n = env->GetArrayLength(jColors);
        if (n < SkAbs32(stride) * (size_t)height) {
            doThrowAIOOBE(env);
            return NULL;
        }
    }
    // ARGB_4444 is a deprecated format, convert automatically to 8888
    if (colorType == kARGB_4444_SkColorType) {
        // Change ARGB_4444 to kN32_SkColorType
        colorType = kN32_SkColorType;
    }
    sk_sp<SkColorSpace> colorSpace;
    if (colorType == kAlpha_8_SkColorType) {
        colorSpace = nullptr;
    } else {
        colorSpace = GraphicsJNI::getNativeColorSpace(colorSpacePtr);
    }
    // 
    SkBitmap bitmap;
    bitmap.setInfo(SkImageInfo::Make(width, height, colorType, kPremul_SkAlphaType,
                colorSpace));
    // After 8.0 bitmap creation memory allocation is native
    sk_sp<Bitmap> nativeBitmap = Bitmap::allocateHeapBitmap(&bitmap);
    if (!nativeBitmap) {
        ALOGE("OOM allocating Bitmap with dimensions %i x %i", width, height);
        doThrowOOME(env);
        return NULL;
    }
    // Fill the color value
    if (jColors != NULL) {
        GraphicsJNI::SetPixels(env, jColors, offset, stride, 0, 0, width, height, &bitmap);
    }
    return createBitmap(env, nativeBitmap.release(), getPremulBitmapCreateFlags(isMutable));
}

The first step here is to convert Bitmap.Config.ARGB_8888 to a color type in the Skia domain:

 SkColorType colorType = GraphicsJNI::legacyBitmapConfigToColorType(configHandle);

Take a look at the corresponding method declaration in GraphicsJNI.h:

/*
 *  LegacyBitmapConfig is the old enum in Skia that matched the enum int values
 *  in Bitmap.Config. Skia no longer supports this config, but has replaced it
 *  with SkColorType. These routines convert between the two.
 */
static SkColorType legacyBitmapConfigToColorType(jint legacyConfig);

Go back to GraphicsJNI.cpp to see the implementation:

SkColorType GraphicsJNI::legacyBitmapConfigToColorType(jint legacyConfig) {
    const uint8_t gConfig2ColorType[] = {
        kUnknown_SkColorType,
        kAlpha_8_SkColorType,
        kUnknown_SkColorType, // Previously kIndex_8_SkColorType,
        kRGB_565_SkColorType,
        kARGB_4444_SkColorType,
        kN32_SkColorType,
        kRGBA_F16_SkColorType,
        kN32_SkColorType
    };
    if (legacyConfig < 0 || legacyConfig > kLastEnum_LegacyBitmapConfig) {
        legacyConfig = kNo_LegacyBitmapConfig;
    }
    return static_cast<SkColorType>(gConfig2ColorType[legacyConfig]);
}


Since the Bitmap.Config.ARGB_8888 value we passed from the Java layer is ARGB_8888 (enum value 5), the lookup in this table yields kN32_SkColorType.

Next let’s look at the SkColorType in skimageInfo.h:

/** \enum SkImageInfo::SkColorType
    Describes how pixel bits encode color. A pixel may be an alpha mask, a grayscale, RGB, or ARGB.

    kN32_SkColorType selects the native 32-bit ARGB format. On little endian processors, pixels
    containing 8-bit ARGB components pack into 32-bit kBGRA_8888_SkColorType. On big endian
    processors, pixels pack into 32-bit kRGBA_8888_SkColorType.
*/
enum SkColorType {
    kUnknown_SkColorType,      //!< uninitialized
    kAlpha_8_SkColorType,      //!< pixel with alpha in 8-bit byte
    kRGB_565_SkColorType,      //!< pixel with 5 bits red, 6 bits green, 5 bits blue, in 16-bit word
    kARGB_4444_SkColorType,    //!< pixel with 4 bits for alpha, red, green, blue; in 16-bit word
    kRGBA_8888_SkColorType,    //!< pixel with 8 bits for red, green, blue, alpha; in 32-bit word
    kRGB_888x_SkColorType,     //!< pixel with 8 bits each for red, green, blue; in 32-bit word
    kBGRA_8888_SkColorType,    //!< pixel with 8 bits for blue, green, red, alpha; in 32-bit word
    kRGBA_1010102_SkColorType, //!< 10 bits for red, green, blue; 2 bits for alpha; in 32-bit word
    kRGB_101010x_SkColorType,  //!< pixel with 10 bits each for red, green, blue; in 32-bit word
    kGray_8_SkColorType,       //!< pixel with grayscale level in 8-bit byte
    kRGBA_F16Norm_SkColorType, //!< pixel with half floats in [0,1] for red, green, blue, alpha; in 64-bit word
    kRGBA_F16_SkColorType,     //!< pixel with half floats for red, green, blue, alpha; in 64-bit word
    kRGBA_F32_SkColorType,     //!< pixel using C float for red, green, blue, alpha; in 128-bit word
    kLastEnum_SkColorType     = kRGBA_F32_SkColorType, //!< last valid value

#if SK_PMCOLOR_BYTE_ORDER(B,G,R,A)
    kN32_SkColorType          = kBGRA_8888_SkColorType, //!< native ARGB 32-bit encoding
#elif SK_PMCOLOR_BYTE_ORDER(R,G,B,A)
    kN32_SkColorType          = kRGBA_8888_SkColorType, //!< native ARGB 32-bit encoding
#else
    #error "SK_*32_SHIFT values must correspond to BGRA or RGBA byte order"
#endif
};

Let's move on.

The value of kN32_SkColorType is determined by the byte order. The macro SK_PMCOLOR_BYTE_ORDER is defined in SkPostConfig.h:


/**
 * SK_PMCOLOR_BYTE_ORDER can be used to query the byte order of SkPMColor at compile time. The
 * relationship between the byte order and shift values depends on machine endianness. If the shift
 * order is R=0, G=8, B=16, A=24 then ((char*)&pmcolor)[0] will produce the R channel on a little
 * endian machine and the A channel on a big endian machine. Thus, given those shifts values,
 * SK_PMCOLOR_BYTE_ORDER(R,G,B,A) will be true on a little endian machine and
 * SK_PMCOLOR_BYTE_ORDER(A,B,G,R) will be true on a big endian machine.
 */
#ifdef SK_CPU_BENDIAN
#  define SK_PMCOLOR_BYTE_ORDER(C0, C1, C2, C3)     \
        (SK_ ## C3 ## 32_SHIFT == 0  &&             \
         SK_ ## C2 ## 32_SHIFT == 8  &&             \
         SK_ ## C1 ## 32_SHIFT == 16 &&             \
         SK_ ## C0 ## 32_SHIFT == 24)
#else
#  define SK_PMCOLOR_BYTE_ORDER(C0, C1, C2, C3)     \
        (SK_ ## C0 ## 32_SHIFT == 0  &&             \
         SK_ ## C1 ## 32_SHIFT == 8  &&             \
         SK_ ## C2 ## 32_SHIFT == 16 &&             \
         SK_ ## C3 ## 32_SHIFT == 24)
#endif


So the little endian order corresponds to:

#  define SK_PMCOLOR_BYTE_ORDER(C0, C1, C2, C3)     \
        (SK_ ## C0 ## 32_SHIFT == 0  &&             \
         SK_ ## C1 ## 32_SHIFT == 8  &&             \
         SK_ ## C2 ## 32_SHIFT == 16 &&             \
         SK_ ## C3 ## 32_SHIFT == 24)

The shift values SK_A32_SHIFT, SK_R32_SHIFT, SK_G32_SHIFT and SK_B32_SHIFT are defined as follows:

/**
 * We check to see if the SHIFT value has already been defined.
 * If not, we define it ourself to some default values. We default to OpenGL
 * order (in memory: r,g,b,a)
 */
#ifndef SK_A32_SHIFT
#  ifdef SK_CPU_BENDIAN
#    define SK_R32_SHIFT    24
#    define SK_G32_SHIFT    16
#    define SK_B32_SHIFT    8
#    define SK_A32_SHIFT    0
#  else
#    define SK_R32_SHIFT    0
#    define SK_G32_SHIFT    8
#    define SK_B32_SHIFT    16
#    define SK_A32_SHIFT    24
#  endif
#endif

So on a little-endian machine the shifts are:

#    define SK_R32_SHIFT    0
#    define SK_G32_SHIFT    8
#    define SK_B32_SHIFT    16
#    define SK_A32_SHIFT    24

Return to SkColorType:

#if SK_PMCOLOR_BYTE_ORDER(B,G,R,A)
    kN32_SkColorType = kBGRA_8888_SkColorType,
#elif SK_PMCOLOR_BYTE_ORDER(R,G,B,A)
    kN32_SkColorType = kRGBA_8888_SkColorType,
#endif

// SK_PMCOLOR_BYTE_ORDER(R,G,B,A) expands to:
//   SK_R32_SHIFT == 0 && SK_G32_SHIFT == 8 && SK_B32_SHIFT == 16 && SK_A32_SHIFT == 24
// and this expression evaluates to true

In conclusion:

This means that Bitmap.Config.ARGB_8888 is converted to the Skia color type kRGBA_8888_SkColorType and stored internally in that format. When the R, G, B, A bytes are written to little-endian memory and read back as a 32-bit value, the packed order becomes ABGR.

ABGR is also the order in which we get bitmap pixels in JNI.
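To tie the two views together, here is a small Kotlin sketch (the helper name is ours; it assumes a ByteBuffer filled by copyPixelsToBuffer, as in the earlier verification) that reads the R, G, B, A bytes of one pixel as a single little-endian int, which is exactly the ABGR-packed value seen from JNI:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Read the first pixel's R, G, B, A bytes as one little-endian 32-bit value.
fun readAsNativeInt(buffer: ByteBuffer): Int {
    buffer.rewind()
    buffer.order(ByteOrder.LITTLE_ENDIAN)
    return buffer.getInt()
}

// For the 0x11, 0x22, 0x33, 0xFF test pixel this returns 0xFF332211 (-13426159),
// the same ABGR-packed value logged from JNI above.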

Read on:

typedef uint32_t 	SkPMColor
sk_sp<Bitmap> Bitmap::allocateHeapBitmap(size_t size, const SkImageInfo& info, size_t rowBytes) {
    void* addr = calloc(size, 1);
    if (!addr) {
        return nullptr;
    }
    return sk_sp<Bitmap>(new Bitmap(addr, size, info, rowBytes));
}

bool GraphicsJNI::SetPixels(JNIEnv* env, jintArray srcColors, int srcOffset, int srcStride,
        int x, int y, int width, int height, const SkBitmap& dstBitmap) {
    SkAutoLockPixels alp(dstBitmap);
    void* dst = dstBitmap.getPixels();
    FromColorProc proc = ChooseFromColorProc(dstBitmap);
    if (NULL == dst || NULL == proc) {
        return false;
    }
    const jint* array = env->GetIntArrayElements(srcColors, NULL);
    const SkColor* src = (const SkColor*)array + srcOffset;
    // reset to to actual choice from caller
    dst = dstBitmap.getAddr(x, y);
    // now copy/convert each scanline
    for (int y = 0; y < height; y++) {
        proc(dst, src, width, x, y);
        src += srcStride;
        dst = (char*)dst + dstBitmap.rowBytes();
    }
    dstBitmap.notifyPixelsChanged();
    env->ReleaseIntArrayElements(srcColors, const_cast<jint*>(array),
                                 JNI_ABORT);
    return true;
}

ChooseFromColorProc:

// can return NULL
static FromColorProc ChooseFromColorProc(const SkBitmap& bitmap) {
    switch (bitmap.colorType()) {
        case kN32_SkColorType:
            return bitmap.alphaType() == kPremul_SkAlphaType ? FromColor_D32 : FromColor_D32_Raw;
        case kARGB_4444_SkColorType:
            return bitmap.alphaType() == kPremul_SkAlphaType ? FromColor_D4444 :
                    FromColor_D4444_Raw;
        case kRGB_565_SkColorType:
            return FromColor_D565;
        default:
            break;
    }
    return NULL;
}

CPP code

#include <android/bitmap.h>
#include <android/graphics/Bitmap.h>
int AndroidBitmap_getInfo(JNIEnv* env, jobject jbitmap, AndroidBitmapInfo* info) {
    if (NULL == env || NULL == jbitmap) {
        return ANDROID_BITMAP_RESULT_BAD_PARAMETER;
    }
    if (info) {
        android::bitmap::imageInfo(env, jbitmap, info);
    }
    return ANDROID_BITMAP_RESULT_SUCCESS;
}
int AndroidBitmap_lockPixels(JNIEnv* env, jobject jbitmap, void** addrPtr) {
    if (NULL == env || NULL == jbitmap) {
        return ANDROID_BITMAP_RESULT_BAD_PARAMETER;
    }
    void* addr = android::bitmap::lockPixels(env, jbitmap);
    if (!addr) {
        return ANDROID_BITMAP_RESULT_JNI_EXCEPTION;
    }
    if (addrPtr) {
        *addrPtr = addr;
    }
    return ANDROID_BITMAP_RESULT_SUCCESS;
}
int AndroidBitmap_unlockPixels(JNIEnv* env, jobject jbitmap) {
    if (NULL == env || NULL == jbitmap) {
        return ANDROID_BITMAP_RESULT_BAD_PARAMETER;
    }
    bool unlocked = android::bitmap::unlockPixels(env, jbitmap);
    if (!unlocked) {
        return ANDROID_BITMAP_RESULT_JNI_EXCEPTION;
    }
    return ANDROID_BITMAP_RESULT_SUCCESS;
}


Operating on Bitmap from JNI

Preparation

To operate on a Bitmap from JNI on Android, the CMake build of the .so shared library must link against the jnigraphics image library.

target_link_libraries(
        # our own shared library
        TTNative
        # operate bitmap
        jnigraphics
        # link the log library
        ${log-lib})

Then import the header file:

#include <android/bitmap.h>

Create a Bitmap

JNI can only call Java or Kotlin methods to create bitmaps.

The first approach is to call android.graphics.Bitmap's static createBitmap() directly:

jclass bitmapCls;
jmethodID createBitmapFunction;
jmethodID getBitmapFunction;

// public static Bitmap createBitmap(int width, int height, Bitmap.Config config)

jobject createBitmap(JNIEnv *env, uint32_t width, uint32_t height) {
    bitmapCls = env->FindClass("android/graphics/Bitmap");
    createBitmapFunction = env->GetStaticMethodID(bitmapCls,
            "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
    // Declaration format
    jstring configName = env->NewStringUTF("ARGB_8888");
    jclass bitmapConfigClass = env->FindClass("android/graphics/Bitmap$Config");
    getBitmapFunction = env->GetStaticMethodID(
            bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;");

    jobject bitmapConfig = env->CallStaticObjectMethod(bitmapConfigClass,
                                                       getBitmapFunction, configName);

    jobject newBitmap = env->CallStaticObjectMethod(bitmapCls, createBitmapFunction,
                                                    width, height, bitmapConfig);
    return newBitmap;
}

Retrieve Bitmap object information

The functions declared in the header allow native code to retrieve information about a Bitmap object, such as its size and pixel format. The function signature:

/**
 * Given a java bitmap object, fill out the {@link AndroidBitmapInfo} struct for it.
 * If the call fails, the info parameter will be ignored.
 */
int AndroidBitmap_getInfo(JNIEnv* env, jobject jbitmap, AndroidBitmapInfo* info);

The first argument is a JNI interface pointer, the second argument is a reference to the Bitmap object, and the third argument is a pointer to the AndroidBitmapInfo structure.

The AndroidBitmapInfo structure is as follows:

/** Bitmap info, see AndroidBitmap_getInfo(). */
typedef struct {
    /** The bitmap width in pixels. */
    uint32_t    width;
    /** The bitmap height in pixels. */
    uint32_t    height;
    /** The number of byte per row. */
    uint32_t    stride;
    /** The bitmap pixel format. See {@link AndroidBitmapFormat} */
    int32_t     format;
    /** Bitfield containing information about the bitmap.
     *
     * Two bits are used to encode alpha. Use {@link ANDROID_BITMAP_FLAGS_ALPHA_MASK}
     * and {@link ANDROID_BITMAP_FLAGS_ALPHA_SHIFT} to retrieve them.
     *
     * One bit is used to encode whether the Bitmap uses the HARDWARE Config. Use
     * {@link ANDROID_BITMAP_FLAGS_IS_HARDWARE} to know.
     *
     * These flags were introduced in API level 30.
     */
    uint32_t    flags;
} AndroidBitmapInfo;

Here, width and height are the Bitmap's dimensions in pixels, format is the pixel format, and stride is the number of bytes per row.

The supported image formats are as follows:

/** Bitmap pixel format. */
enum AndroidBitmapFormat {
    /** No format. */
    ANDROID_BITMAP_FORMAT_NONE      = 0,
    /** Red: 8 bits, Green: 8 bits, Blue: 8 bits, Alpha: 8 bits. **/
    ANDROID_BITMAP_FORMAT_RGBA_8888 = 1,
    /** Red: 5 bits, Green: 6 bits, Blue: 5 bits. **/
    ANDROID_BITMAP_FORMAT_RGB_565   = 4,
    /** Deprecated in API level 13. Because of the poor quality of this configuration,
        it is advised to use ARGB_8888 instead. **/
    ANDROID_BITMAP_FORMAT_RGBA_4444 = 7,
    /** Alpha: 8 bits. */
    ANDROID_BITMAP_FORMAT_A_8       = 8,
    /** Each component is stored as a half float. **/
    ANDROID_BITMAP_FORMAT_RGBA_F16  = 9,
};

AndroidBitmap_getInfo returns 0 on success; otherwise it returns one of the negative error codes listed below:

/** AndroidBitmap functions result code. */
enum {
    /** Operation was successful. */
    ANDROID_BITMAP_RESULT_SUCCESS           = 0,
    /** Bad parameter. */
    ANDROID_BITMAP_RESULT_BAD_PARAMETER     = -1,
    /** JNI exception occured. */
    ANDROID_BITMAP_RESULT_JNI_EXCEPTION     = -2,
    /** Allocation failed. */
    ANDROID_BITMAP_RESULT_ALLOCATION_FAILED = -3,
};

Manipulate the native pixel cache

Access

AndroidBitmap_lockPixels, declared in the same header, obtains the address (addrPtr) of the bitmap's decoded pixel buffer and locks it, guaranteeing that the pixel memory will not be moved.

If the Native layer wants to access pixel data and manipulate it, this method returns a Native pointer to the pixel cache:

/**
 * Given a java bitmap object, attempt to lock the pixel address.
 * Locking will ensure that the memory for the pixels will not move
 * until the unlockPixels call, and ensure that, if the pixels had been
 * previously purged, they will have been restored.
 *
 * If this call succeeds, it must be balanced by a call to
 * AndroidBitmap_unlockPixels, after which time the address of the pixels should
 * no longer be used.
 *
 * If this succeeds, *addrPtr will be set to the pixel address. If the call
 * fails, addrPtr will be ignored.
 */
int AndroidBitmap_lockPixels(JNIEnv* env, jobject jbitmap, void** addrPtr);

The first two arguments are the same as above; the third is a pointer to a pointer (void **) that receives the address of the pixel buffer.

With this address, native code can iterate over all pixel values and modify the Bitmap's contents.

The function returns 0 on success, a negative number otherwise, same as above.

Release

AndroidBitmap_unlockPixels should be called once after a successful call to AndroidBitmap_lockPixels, to release the native pixel buffer.

Release the native pixel buffer after you have finished reading or writing it. Once it is released, the Bitmap object can be used again from the Java layer.

/** * Call this to balance a successful call to AndroidBitmap_lockPixels. */
int AndroidBitmap_unlockPixels(JNIEnv* env, jobject jbitmap);

Returns 0 on success, a negative error code otherwise.

Rotation, mirroring

Whether we create a Bitmap in Kotlin or in JNI, we have to specify a width and a height, because a Bitmap describes a two-dimensional image.

Images are two-dimensional data, but data can only be stored in one dimension in memory.

There are different ways to map two dimensions onto one, but only two are common:

storing the rows of pixels top-down or bottom-up.

Bitmap pixels in Android are arranged in rows from left to right and columns from top to bottom.

The starting point is just like the screen origin, in the upper left corner.
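In other words, for a row-major bitmap the mapping from a 2D coordinate to a 1D index is simply y * width + x; a tiny Kotlin sketch (hypothetical helper name):

// Row-major layout: rows left to right, top to bottom, origin at the top-left.
fun pixelIndex(x: Int, y: Int, width: Int): Int = y * width + x

fun main() {
    // In a 3x3 image, (x = 2, y = 1) is the element at index 5.
    println(pixelIndex(2, 1, 3)) // 5
}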

Here’s an example:

If the original bitmap's pixel information is expanded into a two-dimensional array, it looks like this:

[1, 2, 3]
[4, 5, 6]
[7, 8, 9]

The pixel data is stored as:

123 456 789

We rotate the Bitmap by creating a new Bitmap and filling it with the remapped pixel values.

According to the pixel arrangement rules above, if we need to rotate 90 degrees clockwise, we need to store the pixels in order:

[7, 4, 1]
[8, 5, 2]
[9, 6, 3]

// Storage sequence
741 852 963

Everything is based on matrices.

Here, however, we simply read the source pixels in the order we need and write them out one after another, as shown in the sketch and code below.
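Here is a Kotlin sketch of that index remapping (a plain IntArray standing in for the pixel buffer; the JNI code further down does the same thing in C++):

// Rotate a width x height pixel array 90 degrees clockwise:
// read the source column by column, bottom to top, and write sequentially.
fun rotate90Clockwise(pixels: IntArray, width: Int, height: Int): IntArray {
    val out = IntArray(width * height)
    var tmp = 0
    for (x in 0 until width) {
        for (y in height - 1 downTo 0) {
            out[tmp++] = pixels[y * width + x]
        }
    }
    // Note: the rotated image is height x width, so width and height swap.
    return out
}

fun main() {
    val src = intArrayOf(1, 2, 3, 4, 5, 6, 7, 8, 9) // the 3x3 example above
    println(rotate90Clockwise(src, 3, 3).joinToString()) // 7, 4, 1, 8, 5, 2, 9, 6, 3
}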

In the AndroidBitmap_lockPixels method, the *addrPtr pointer points to the pixel address of a Bitmap. Its length is the product of the width and height of the Bitmap.

uint32_t mWidth = bitmapInfo.width;
uint32_t mHeight = bitmapInfo.height;
// Get native data
auto pixelArr =((uint32_t *) addrPtr);
// Create a new array pointer to fill the pixel value
auto *newBitmapPixels = new uint32_t[mWidth * mHeight];
LOGE("bitmap width = %d", (uint32_t)mWidth)
LOGE("bitmap height = %d", mHeight)
LOGE("bitmap format = %d", bitmapInfo.format)

Here we handle the RGBA_8888 format: the A, R, G and B components take 8 bits (1 byte) each, so one pixel occupies 4 bytes holding a 32-bit ARGB value.

That gives 2^24 = 16,777,216 possible colors (true color), plus 8 bits of alpha.

// The write offset into the new pixel array
int tmp = 0;
// Scan in the order required for a 90-degree clockwise rotation
for (int x = 0; x < mWidth; x++) {
    for (int y = mHeight - 1; y >= 0; --y) {
        // Start from the lower left corner
        uint32_t pixel = pixelArr[mWidth * y + x];
        // write
        newBitmapPixels[tmp++] = pixel;
    }
}

Scan the original matrix starting from the lower-left corner, bottom to top along the Y axis, then move along the X axis from left to right, and so on.

If you rotate the bitmap by 90 degrees, note that the width and height must be swapped when creating the new bitmap:

 jobject newBitmap = createBitmap(env, mHeight, mWidth);

Complete code:

extern "C"
JNIEXPORT jobject JNICALL
Java_tt_reducto_ndksample_jni_BitmapOps_rotateBitmap(JNIEnv *env, jobject thiz, jobject bitmap, jint ops) {
    if (bitmap == nullptr) {
        LOGD("rotateBitmap - the bitmap is null ")
        return nullptr;
    }

    // Retrieve bitmap information
    AndroidBitmapInfo bitmapInfo;
    int ret = AndroidBitmap_getInfo(env, bitmap, &bitmapInfo);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGD("AndroidBitmap_getInfo() bitmap failed ! error=%d", ret)
        return nullptr;
    }
    // Get Bitmap's pixel cache pointer: iterate to read pixel data from Bitmap memory addrPtr
    void *addrPtr;
    ret = AndroidBitmap_lockPixels(env, bitmap, &addrPtr);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGD("AndroidBitmap_lockPixels() bitmap failed ! error=%d", ret)
        return nullptr;
    }

    // The logic to perform the image operation
    // Get the width and height
    int mWidth = bitmapInfo.width;
    int mHeight = bitmapInfo.height;
    // Get native data
    auto pixelArr = ((uint32_t *) addrPtr);
    // Matrix creates a new array pointer to fill the pixel value
    auto *newBitmapPixels = new uint32_t[mWidth * mHeight];
    LOGD("bitmap width = %d", mWidth)
    LOGD("bitmap height = %d", mHeight)
    LOGD("bitmap format = %d", bitmapInfo.format)
    int temp = 0;
    switch (ops) {
        case 0:
            // Walk through the matrix, scanning 90 degrees clockwise
            for (int x = 0; x < mWidth; x++) {
                for (int y = mHeight - 1; y >= 0; --y) {
                    newBitmapPixels[temp++] = pixelArr[mWidth * y + x];
                }
            }
            break;
        case 1:
            // flip up and down
            for (int y = 0; y < mHeight; ++y) {
                for (int x = 0; x < mWidth; x++) {
                    uint32_t pixel = pixelArr[temp++];
                    newBitmapPixels[mWidth * (mHeight - 1 - y) + x] = pixel;
                }
            }
            break;
        case 2:
            / / the mirror
            for (int y = 0; y < mHeight; ++y) {
                for (int x = mWidth - 1; x >= 0; x--) {
                    uint32_t pixel = pixelArr[temp++];
                    newBitmapPixels[mWidth * y + x] = pixel;
                }
            }
            break;
        default:
            break;
    }


    // Create a bitmap. Note this because the width and height of the matrix also change after 90 degrees
    jobject newBitmap;
    int size = mWidth * mHeight;
    if (ops == 0) {
        newBitmap = createBitmap(env, mHeight, mWidth);
        void *resultBitmapPixels;
        //
        ret = AndroidBitmap_lockPixels(env, newBitmap, &resultBitmapPixels);
        if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
            LOGD("AndroidBitmap_lockPixels() newBitmap failed ! error=%d", ret)
            return nullptr;
        }

        // Write a new value
        memcpy((uint32_t *) resultBitmapPixels, newBitmapPixels, sizeof(uint32_t) * size);
        // Release the cache pointer
        AndroidBitmap_unlockPixels(env, newBitmap);
        // Free memory
        delete[] newBitmapPixels;

        return newBitmap;
    } else {
        memcpy((uint32_t *) addrPtr, newBitmapPixels, sizeof(uint32_t) * size);
        delete[] newBitmapPixels;
        // Release the cache pointer
        AndroidBitmap_unlockPixels(env, bitmap);
        return bitmap;
    }
}

Gray scale and relief

Average method: the new color value is

R = G = B = (R + G + B) / 3

Or the weighted average method:

 (r * 0.3 + g * 0.59 + b * 0.11)
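Before the JNI version, here is a minimal Kotlin sketch of the same weighted-average grayscale applied to an array of ARGB ints (as returned by getPixels; the helper name is hypothetical):

import android.graphics.Color

// Convert each ARGB pixel to grayscale using the weighted average 0.3R + 0.59G + 0.11B.
fun toGrayscale(pixels: IntArray) {
    for (i in pixels.indices) {
        val c = pixels[i]
        val gray = (Color.red(c) * 0.3 + Color.green(c) * 0.59 + Color.blue(c) * 0.11).toInt()
        pixels[i] = Color.argb(Color.alpha(c), gray, gray, gray)
    }
}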

Corresponding JNI functions:

extern "C"
JNIEXPORT void JNICALL
Java_tt_reducto_ndksample_jni_BitmapOps_addBitmapFilter(JNIEnv *env, jobject thiz, jobject bitmap, jint ops) {
    if (bitmap == nullptr) {
        LOGD("addBitmapFilter - the bitmap is null ")
    }
    // Retrieve bitmap information
    AndroidBitmapInfo bitmapInfo;
// memset(&bitmapInfo , 0 , sizeof(bitmapInfo));
    int ret = AndroidBitmap_getInfo(env, bitmap, &bitmapInfo);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGD("AndroidBitmap_getInfo() bitmap failed ! error=%d", ret)
    }
    // Get Bitmap pixel cache pointer: iterate to read BGRA data from Bitmap memory addrPtr
    void *addrPtr;
    ret = AndroidBitmap_lockPixels(env, bitmap, &addrPtr);
    if (ANDROID_BITMAP_RESULT_SUCCESS != ret) {
        LOGD("AndroidBitmap_lockPixels() bitmap failed ! error=%d", ret)
    }

    // The logic to perform the image operation
    // Get the width and height
    uint32_t mWidth = bitmapInfo.width;
    uint32_t mHeight = bitmapInfo.height;
    // Matrix creates a new array pointer to fill the pixel value
    // auto *newBitmapPixels = new uint32_t[mWidth * mHeight];
    LOGD("bitmap width = %d", mWidth)
    LOGD("bitmap height = %d", mHeight)
    LOGD("bitmap format = %d", bitmapInfo.format)

    // Get native data
    auto pixelArr = ((uint32_t *) addrPtr);

    int a, r, g, b;
    // do not operate on A
    // Read BGRA data from Bitmap memory addrPtr and store BGR data to data memory


    switch (ops) {
        // gray
        case 1: {
            for (int y = 0; y < mHeight; ++y) {
                for (int x = 0; x < mWidth; ++x) {
                    // This is defined as void for subsequent operations
                    void *pixel = nullptr;
                    // 24
                    if (bitmapInfo.format == ANDROID_BITMAP_FORMAT_RGBA_8888) {
                        // Move the pixel pointer
                        pixel = pixelArr + y * mWidth + x;
                        // Obtain the corresponding pointer value according to ABGR storage sequence value
                        uint32_t v = *((uint32_t *) pixel);
                        a = BGR_8888_A(v);
                        r = BGR_8888_R(v);
                        g = BGR_8888_G(v);
                        b = BGR_8888_B(v);
                        // Average value method
                        // int sum = (r + g + b) / 3;
                        // Or weighted average method
                        int sum = (int) (r * 0.3 + g * 0.59 + b * 0.11);
                        *((uint32_t *) pixel) = MAKE_ABGR(a, sum, sum, sum);
                    }
                }
            }
            break;
        }
            // relief map
        case 2: {
            // 
            // Subtract the RGB value of the current point from the RGB value of the adjacent point and add 128 to the new RGB value
            void *pixel = nullptr;
            void *pixelBefore = nullptr;
            int  r1, g1, b1;
            for (int i = 1; i < mWidth * mHeight; ++i) {
                uint32_t color, colorBefore;

                pixel = pixelArr+i;
                pixelBefore = pixelArr+i - 1;
                color = *((uint32_t *) pixel);
                colorBefore =  *((uint32_t *) pixelBefore);
                a = BGR_8888_A(color);
                r = BGR_8888_R(color);
                g = BGR_8888_G(color);
                b = BGR_8888_B(color);

                r1 = BGR_8888_R(colorBefore);
                g1 = BGR_8888_G(colorBefore);
                b1 = BGR_8888_B(colorBefore);


                r = r - r1 + 128;
                g = g - g1+ 128;
                b = b - b1 + 128;
                // Grayscale again
                int sum = (int) (r * 0.3 + g * 0.59 + b * 0.11);
                *((uint32_t *) pixelBefore) = MAKE_ABGR(a, sum, sum, sum);
            }
            break;
        }

        default:
            break;
    }

    // Release the cache pointer
    AndroidBitmap_unlockPixels(env, bitmap);
}

The above are relatively simple R, G, B filters.

Effect: