KLuban: rebuilding the Luban compression algorithm with Kotlin + Jetpack

KLuban is an image compression framework built with Kotlin + coroutines + Flow (parallel tasks) + LiveData (callback listening), borrowing Glide's image-format recognition and memory optimizations, and supporting both nearest-neighbor (Luban) and bilinear sampling. Forks, improvements and stars are welcome.

Project address

Github.com/forJrking/K…

Integration and use

Step 1. Add JitPack in your root build.gradle at the end of repositories:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
  }
}

Step 2. Add the dependency:

dependencies {
  implementation 'com.github.forJrking:KLuban:v1.0.1'
}

Step 3. The API:

Luban.with(LifecycleOwner)                // Lifecycle; can be omitted, ProcessLifecycleOwner is used internally
        .load(uri, uri)                   // Supports File, Uri, InputStream, String, and arrays/collections of them
        .setOutPutDir(path)               // Output directory
        .concurrent(true)                 // Compress multiple files in parallel; the parallel thread count is tuned internally to prevent OOM
        .useDownSample(true)              // true: nearest-neighbor sampling, false: bilinear sampling
        .format(Bitmap.CompressFormat.PNG)// Output format: JPEG, PNG or WEBP
        .ignoreBy(200)                    // Desired size; size and rendering quality are not equivalent, so the result may not end up below this value
        .quality(95)                      // Quality factor 0-100
        .rename { "pic$it" }              // Rename the output file
        .filter { it != null }            // Filter the inputs
        .compressObserver {
            onSuccess = { }
            onStart = { }
            onCompletion = { }
            onError = { e, s -> }
        }.launch()

Problem analysis of the original framework and technical assessment

Luban is an image compression framework based on Android's native APIs; its main feature is a sampling algorithm that is almost the same as WeChat's. As technology has iterated, it can no longer meet product needs. Below is the core compression implementation, followed by a list of Luban's problems drawn from the code:

File compress() throws IOException {
    BitmapFactory.Options options = new BitmapFactory.Options();
    // Compute the sample size: the core nearest-neighbor sampling algorithm
    options.inSampleSize = computeSize();
    // Decoding happens here; this is where OOM mostly occurs
    Bitmap tagBitmap = BitmapFactory.decodeStream(srcImg.open(), null, options);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    // For JPEG, read the orientation and rotate accordingly
    if (Checker.SINGLE.isJPG(srcImg.open())) {
      tagBitmap = rotatingImage(tagBitmap, Checker.SINGLE.getOrientation(srcImg.open()));
    }
    // The output format depends only on the passed-in focusAlpha; the quality factor is hard-coded to 60
    tagBitmap.compress(focusAlpha ? Bitmap.CompressFormat.PNG : Bitmap.CompressFormat.JPEG, 60, stream);
    tagBitmap.recycle();
    ... // write the stream to the file
    return tagImg;
}
  • Memory is not predicted before decoding, which makes OOM easy
  • The quality factor is hard-coded to 60
  • No option is provided for the output image format
  • JPEG images have no alpha channel and should be compressed as JPEG to save more memory
  • Only nearest-neighbor sampling is provided; bilinear sampling is more appropriate in some scenarios (e.g. text-only images)
  • Multi-file parallel compression is not supported, and the output order is not guaranteed to match the input order
  • Checking the file format and image orientation repeatedly creates InputStreams, adding unnecessary overhead and increasing OOM risk
  • Memory leaks can occur; the lifecycle needs to be handled properly

Technical transformation analysis

  • Before decoding, calculate the memory footprint from the image's width and height; if it exceeds what is available, try decoding with RGB_565 instead
  • For quality compression, require the quality factor to be passed in from outside
  • Use Glide's algorithm to obtain the real image format before compressing, and choose the output format intelligently, e.g. based on whether the image contains alpha
  • Borrow Glide's byte-array reuse and InputStream mark()/reset() to optimize the repeated-open problem
  • Use LiveData to implement listening with automatic unregistration (a sketch follows this list)
  • Use coroutines for asynchronous and parallel compression; a coroutine can be cancelled at the appropriate time to terminate its task
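
To illustrate the LiveData point, here is a minimal sketch (not KLuban's actual classes; the state type and names are hypothetical). Compression state is posted to a LiveData, and an observer registered with a LifecycleOwner is detached automatically when that lifecycle is destroyed, which removes the leak risk listed above.

// Hypothetical sketch: expose compression progress through LiveData so that
// observers registered with a LifecycleOwner are removed automatically.
sealed class CompressState {
    object Start : CompressState()
    data class Success(val file: File) : CompressState()
    data class Error(val e: Throwable, val src: String?) : CompressState()
    object Completion : CompressState()
}

class CompressLiveData : MutableLiveData<CompressState>()

// Usage: the observer is tied to the LifecycleOwner's lifecycle,
// so there is no need to unregister it manually.
fun observeCompression(owner: LifecycleOwner, liveData: CompressLiveData) {
    liveData.observe(owner) { state ->
        when (state) {
            is CompressState.Start -> { /* show progress */ }
            is CompressState.Success -> { /* use state.file */ }
            is CompressState.Error -> { /* handle state.e */ }
            is CompressState.Completion -> { /* hide progress */ }
        }
    }
}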

Source code analysis and optimization

Glide image recognition

When we change an image's file extension, or there is no extension at all, Glide can still decode the image correctly. It does this by relying on the ImageHeaderParserUtils class:

public final class ImageHeaderParserUtils {
  ...
  // Obtain the ImageType and the image orientation through the ImageHeaderParsers
  public static ImageType getType(List<ImageHeaderParser> parsers, InputStream is, ArrayPool byteArrayPool)

  public static int getOrientation(List<ImageHeaderParser> parsers, InputStream is, final ArrayPool byteArrayPool)
  ...
}

// The interface
interface ImgHeaderParser {
    // Get the image type
    fun getType(input: InputStream): ImageType
    // Get the original orientation of the image
    fun getOrientation(input: InputStream): Int
}

// The implementation classes read bytes from the InputStream to determine the file format:
// DefaultImageHeaderParser and ExifInterfaceImageHeaderParser
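
As a quick, simplified illustration (not KLuban's exact call chain): with the copied parsers, the type comes from the header bytes rather than the file name, so an image with a wrong or missing extension is still classified correctly.

// Illustrative only: the header bytes, not the extension, decide the type.
fun realType(file: File): ImageType =
    FileInputStream(file).use { DefaultImageHeaderParser().getType(it) }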

We analyze the call chain and copy the required classes. Since the source is long enough to put you to sleep, only the functionality and the transformation ideas are described here; read it yourself if you are interested.

// One important class here is RecyclableBufferedInputStream: it wraps an InputStream,
// reuses a byte array for it, and supports mark()/reset(); more on this later in the
// memory-optimization section.

// suffix: file extension of the image, hasAlpha: whether the image has a transparent layer,
// format: the supported format for output
enum class ImageType(val suffix: String, val hasAlpha: Boolean, val format: Bitmap.CompressFormat) {
    GIF("gif", true, Bitmap.CompressFormat.PNG),
    JPEG("jpg", false, Bitmap.CompressFormat.JPEG),
    RAW("raw", false, Bitmap.CompressFormat.JPEG),
    /* PNG type with alpha. */
    PNG_A("png", true, Bitmap.CompressFormat.PNG),
    /* PNG type without alpha. */
    PNG("png", false, Bitmap.CompressFormat.PNG),
    /* WebP type with alpha. */
    WEBP_A("webp", true, Bitmap.CompressFormat.WEBP),
    /* WebP type without alpha. */
    WEBP("webp", false, Bitmap.CompressFormat.WEBP),
    /* Unrecognized type. */
    UNKNOWN("jpeg", false, Bitmap.CompressFormat.JPEG);
}
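This enum makes the "smart output format" decision from the problem list straightforward. A minimal sketch of the idea (the function name and the requested-format parameter are illustrative, not KLuban's exact API):

// Illustrative: if the caller specified an output format, honor it; otherwise fall back
// to the format suggested by the parsed ImageType, e.g. a real JPEG (no alpha) is
// re-encoded as JPEG rather than PNG.
fun chooseOutputFormat(
    requested: Bitmap.CompressFormat?,  // what the caller passed to format(), if anything
    parsedType: ImageType               // what the header parser detected
): Bitmap.CompressFormat = requested ?: parsedType.format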

Memory and performance tuning

  1. Memory footprint optimization

    The memory used in image processing falls into two parts: the memory occupied by the Bitmap after decoding, and the byte arrays created during decoding. A better-looking image and a small memory footprint cannot both be had, so we first tackle the resource overhead of the decoding process itself; Glide has already implemented this idea, so we can borrow it.

    • Byte array pooling

      A large number of byte[] buffers are created during decoding. Glide already does a lot of memory and performance optimization here: byte[] buffers are pooled and can be obtained like this:

      val byteArrayPool = com.bumptech.glide.Glide.get(context).arrayPool
      byteArrayPool.get(bufferSize, ByteArray::class.java)

      However, some projects may not have introduced Glide, and simply copying the pool code for compatibility is clearly not appropriate. Instead, we detect whether Glide is on the classpath and, if so, use the functionality it already implements.

      1. First, declare Glide as compileOnly("com.github.bumptech.glide:glide:4.11.0@aar") in our library, so the code compiles without forcing the dependency on users.

      2. Next, implement a utility class that obtains and returns the byte arrays. Note that calling Glide.get(Checker.context) throws a class-loading exception when Glide is not present. The final implementation is as follows:

      object ArrayProvide {

          private val hasGlide: Boolean by lazy {
              try {
                  // Check whether Glide is on the classpath
                  Class.forName("com.bumptech.glide.Glide")
                  true
              } catch (e: Exception) {
                  false
              }
          }

          @JvmStatic
          fun get(bufferSize: Int): ByteArray = if (hasGlide) {
              // Glide is present: borrow a buffer from its array pool
              val byteArrayPool = com.bumptech.glide.Glide.get(Checker.context).arrayPool
              byteArrayPool.get(bufferSize, ByteArray::class.java)
          } else {
              ByteArray(bufferSize)
          }

          @JvmStatic
          fun put(buf: ByteArray) {
              if (hasGlide && buf.isNotEmpty()) {
                  // Glide is present: return the buffer to its array pool
                  val byteArrayPool = com.bumptech.glide.Glide.get(Checker.context).arrayPool
                  byteArrayPool.put(buf)
              }
          }
      }
      3. Replace all uses of new byte[] with this class, and use it anywhere in the project that byte-array allocation needs optimizing (a usage sketch follows).
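
      For example, a decode path can borrow a temporary buffer from the pool and hand it to BitmapFactory, returning it afterwards. This is only a sketch of the pattern; the buffer size and option wiring are illustrative:

      // Sketch: reuse a pooled buffer as BitmapFactory's temporary decode storage
      // instead of allocating a fresh byte[] for every decode.
      fun decodeWithPooledBuffer(input: InputStream, options: BitmapFactory.Options): Bitmap? {
          val buffer = ArrayProvide.get(64 * 1024)   // 64 KB is an arbitrary example size
          return try {
              options.inTempStorage = buffer          // BitmapFactory uses this as scratch space
              BitmapFactory.decodeStream(input, null, options)
          } finally {
              ArrayProvide.put(buffer)                // always give the buffer back to the pool
          }
      }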

  • Memory prediction during decoding

    From Android 2.3 to 7.1, a Bitmap's pixel data is stored on the Java heap. Before decoding, we can read the image's real width and height plus the Bitmap configuration, calculate the heap memory the decode would use, and predict the outcome of running the code. If there is not enough memory, we terminate instead of letting it OOM.

    // If the bitmap produced by decoding with this config will not fit in memory, do not compress;
    // throw an exception instead of letting the app crash with an OOM
    val isAlpha = compressConfig == Bitmap.Config.ARGB_8888
    if (!hasEnoughMemory(width / options.inSampleSize, height / options.inSampleSize, isAlpha)) {
        // TODO: on 8.0+ pixel memory is native; use a degradation policy when memory is low
        if (!isAlpha || !hasEnoughMemory(width / options.inSampleSize, height / options.inSampleSize, false)) {
            throw IOException("image memory is too large")
        } else {
            Checker.logger("Memory Warning: reduced bitmap pixel format")
            // RGB_565 halves the bytes per pixel to reduce memory
            options.inPreferredConfig = Bitmap.Config.RGB_565
        }
    }
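    hasEnoughMemory is not shown above; a plausible implementation (an assumption, not necessarily KLuban's exact code) compares the estimated bitmap size with the heap that is still free:

    // Hypothetical helper: estimate whether a width x height bitmap in the given config
    // still fits in the Java heap. ARGB_8888 uses 4 bytes per pixel, RGB_565 uses 2.
    fun hasEnoughMemory(width: Int, height: Int, isAlpha32: Boolean): Boolean {
        val runtime = Runtime.getRuntime()
        // Heap still available to this process: max heap minus what is currently in use
        val freeHeap = runtime.maxMemory() - (runtime.totalMemory() - runtime.freeMemory())
        val bytesPerPixel = if (isAlpha32) 4 else 2
        val needed = width.toLong() * height.toLong() * bytesPerPixel
        return needed < freeHeap
    }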
  2. InputStream optimization

    Luban uses the InputStreamProvider.open() method to obtain the stream each time it needs the image's width and height before decoding, its format, or its original orientation:

    public abstract class InputStreamAdapter implements InputStreamProvider {
      private InputStream inputStream;

      public InputStream open() throws IOException {
        ...
        inputStream = openInternal();
        return inputStream;
      }
      ...
      public abstract InputStream openInternal() throws IOException;
    }

    Check out the implementation in Luban.java

    class Luban {
      ...
      public Builder load(final String string) {
        mStreamProviders.add(new InputStreamAdapter() {
          @Override
          public InputStream openInternal() throws IOException {
            // A new stream object is created on every call
            return new FileInputStream(string);
          }
          ...
        });
        ...
      }
    }

    Creating a new stream object every time is costly. Look at Glide's RecyclableBufferedInputStream: it wraps the InputStream and then uses mark()/reset() to avoid the cost of reopening it. We copy that source into a BufferedInputStreamWrap and adapt our provider:

    abstract class InputStreamAdapter<T> : InputStreamProvider<T> {
        // BufferedInputStreamWrap is copied from Glide: it reuses a pooled byte array
        // and is optimized with mark()/reset()
        private lateinit var inputStream: BufferedInputStreamWrap

        @Throws(IOException::class)
        abstract fun openInternal(): InputStream

        @Throws(IOException::class)
        override fun rewindAndGet(): InputStream {
            if (::inputStream.isInitialized) {
                // Already open: rewind to the marked position instead of reopening
                inputStream.reset()
            } else {
                inputStream = BufferedInputStreamWrap(openInternal())
                inputStream.mark(MARK_READ_LIMIT)
            }
            return inputStream
        }

        override fun close() {
            if (::inputStream.isInitialized) {
                try {
                    inputStream.close()
                } catch (ignore: IOException) {
                    ignore.printStackTrace()
                }
            }
        }
    }
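    The same wrapped stream can now be handed out repeatedly at almost no cost. A sketch of the intended call pattern (the helper below is illustrative; the parser class names follow the Glide section above):

    // Sketch: each stage rewinds the same wrapped stream instead of reopening the source.
    fun inspectAndDecode(provider: InputStreamAdapter<*>, options: BitmapFactory.Options): Bitmap? {
        val type = DefaultImageHeaderParser().getType(provider.rewindAndGet())                     // 1. detect the real format
        val orientation = ExifInterfaceImageHeaderParser().getOrientation(provider.rewindAndGet()) // 2. read the orientation
        Checker.logger("type=$type orientation=$orientation")
        return BitmapFactory.decodeStream(provider.rewindAndGet(), null, options)                  // 3. decode
    }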

Using Flow and custom thread dispatching to control the number of concurrent tasks

  1. Choosing coroutines and Flow

    Since LiveData requires a LifecycleOwner, the coroutine Flow is launched in the LifecycleOwner's lifecycleScope. With the default coroutine dispatchers, parallel image compression is not controlled: once there are too many images, too many decodes run at the same time, memory usage spikes instantly and an OOM is very likely. So we need custom coroutine thread dispatching.

  2. Custom thread scheduling

    // You can use the coroutine extension Executor.asCoroutineDispatcher(),
    // for example: Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    Of course that is not quite good enough. I used a custom thread pool so that different policies can be applied on different Android versions, and gave the threads a custom name that can be used in production to locate problematic business code.

    companion object {
        // The dispatcher used to limit how many tasks execute in parallel
        internal val supportDispatcher: ExecutorCoroutineDispatcher

        init {
            // After Android O, Bitmap pixel memory lives in native memory: https://www.jianshu.com/p/d5714e8987f3
            val corePoolSize = when {
                Build.VERSION.SDK_INT >= Build.VERSION_CODES.O -> {
                    (Runtime.getRuntime().availableProcessors() - 1).coerceAtLeast(1)
                }
                Build.VERSION.SDK_INT >= Build.VERSION_CODES.M -> 2
                else -> 1
            }
            val threadPoolExecutor = ThreadPoolExecutor(corePoolSize, corePoolSize,
                    5L, TimeUnit.SECONDS, LinkedBlockingQueue<Runnable>(), CompressThreadFactory())
            // DES: threadPoolExecutor.prestartAllCoreThreads() would create the core threads up front
            // DES: allow the core threads to be reclaimed as well
            threadPoolExecutor.allowCoreThreadTimeOut(true)
            // DES: convert the pool into a coroutine dispatcher
            supportDispatcher = threadPoolExecutor.asCoroutineDispatcher()
        }
    }
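    CompressThreadFactory is referenced above but not shown; a minimal sketch of what such a factory might look like (the thread-name prefix and priority are assumptions):

    // Hypothetical sketch of a ThreadFactory that names the compression threads,
    // so a crash report or ANR trace in production points straight at this pool.
    class CompressThreadFactory : ThreadFactory {
        private val count = AtomicInteger(1)
        override fun newThread(r: Runnable): Thread =
            Thread(r, "kluban-compress-${count.getAndIncrement()}").apply {
                // keep compression below normal priority so the UI stays smooth
                priority = Thread.NORM_PRIORITY - 1
            }
    }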
  3. Simulating parallel compression with Flow, in two ways

  • Simulating compression of an image file

    // A custom coroutine dispatcher controls how many tasks execute at the same time
    val customerDispatcher = Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    suspend fun compressV(int: Int): String = withContext(customerDispatcher) {
        // Simulate compressing a file
        println("Compress begins: $int \tthread:${Thread.currentThread().name}")
        Thread.sleep(300)
        val toString = int.toString() + "R"
        println("Compress ends: $toString \tthread:${Thread.currentThread().name}")
        return@withContext toString
    }
  • FlatMapMerge operator

    @Test
    fun testFlowFlat() = runBlocking<Unit> {
        val time = measureTimeMillis {
            listOf(1, 2, 3).asFlow().flatMapMerge {
                flow {
                    println("emit: $it  t:${Thread.currentThread().name}")
                    delay(500)
                    emit(it)
                }.flowOn(customerDispatcher)
            }.onStart {
                println("onStart: t:${Thread.currentThread().name}")
            }.onCompletion {
                println("onCompletion: $it  t:${Thread.currentThread().name}")
            }.catch {
                println("catch: $it  t:${Thread.currentThread().name}")
            }.collect {
                println("success: $it  t:${Thread.currentThread().name}")
            }
        }
        println("time: $time")
    }

    print:
    onStart: t:main @coroutine#1
    emit: 2  t:pool-2-thread-2 @coroutine#7
    emit: 1  t:pool-2-thread-1 @coroutine#6
    emit: 3  t:pool-2-thread-1 @coroutine#8
    success: 2  t:main @coroutine#1
    success: 1  t:main @coroutine#1
    success: 3  t:main @coroutine#1
    onCompletion: null  t:main @coroutine#1
    time: 561
    // The work does run in parallel, but the results in collect are not in the original order,
    // because of the non-determinism of parallel execution
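    Besides the dispatcher's pool size, flatMapMerge itself takes a concurrency parameter (it is still marked @FlowPreview at the time of writing), so the number of inner flows collected at the same time can also be capped there. A small sketch reusing compressV and customerDispatcher from above:

    // Sketch: cap the number of inner flows collected concurrently at 2,
    // in addition to the 2-thread dispatcher used above.
    fun testFlowFlatLimited() = runBlocking {
        listOf(1, 2, 3, 4, 5).asFlow()
            .flatMapMerge(concurrency = 2) { item ->
                flow { emit(compressV(item)) }.flowOn(customerDispatcher)
            }
            .collect { println("done: $it") }
    }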
  • Map + async + await() implementation

    @Test
    fun testBinFa() = runBlocking {
        val time = measureTimeMillis {
            listOf(1, 2, 3).asFlow().map { i ->
                async { compressV(i) }
            }.buffer().flowOn(Dispatchers.Unconfined).map {
                it.await()
            }.onStart {
                println("onStart: t:${Thread.currentThread().name}")
            }.onCompletion {
                println("onCompletion: $it  t:${Thread.currentThread().name}")
            }.catch {
                println("catch: $it  t:${Thread.currentThread().name}")
            }.collect {
                println("success: $it  t:${Thread.currentThread().name}")
            }
        }
        println("Collected in $time ms")
    }

    print:
    onStart: t:main @coroutine#1
    Compress begins: 1 	thread:pool-1-thread-1 @coroutine#3
    Compress begins: 2 	thread:pool-1-thread-2 @coroutine#4
    Compress ends: 2R 	thread:pool-1-thread-2 @coroutine#4
    Compress ends: 1R 	thread:pool-1-thread-1 @coroutine#3
    Compress begins: 3 	thread:pool-1-thread-2 @coroutine#5
    success: 1R  t:main @coroutine#1
    success: 2R  t:main @coroutine#1
    Compress ends: 3R 	thread:pool-1-thread-2 @coroutine#5
    success: 3R  t:main @coroutine#1
    onCompletion: null  t:main @coroutine#1
    Collected in 678 ms
    // The success output shows this approach is both parallel and in input order
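    Since this map + async + await pattern keeps the results in input order, it is the natural fit for multi-file compression. A rough sketch of how it could be wired to the supportDispatcher defined earlier (the provider list and the compress() call are illustrative, not KLuban's exact internals):

    // Illustrative: compress each provider in parallel on the bounded dispatcher,
    // but emit the results in the same order as the inputs.
    fun compressAll(providers: List<InputStreamAdapter<*>>) = runBlocking {
        providers.asFlow()
            .map { provider -> async(supportDispatcher) { compress(provider) } } // start each job
            .buffer()                                                            // let several jobs run ahead
            .map { deferred -> deferred.await() }                                // await in input order
            .collect { file -> println("compressed -> $file") }
    }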

Image compression algorithm

  • Nearest-neighbor sampling uses Luban's algorithm and is not covered in detail here

    Advantage: the full Bitmap does not have to be loaded into memory first, which works well for compressing photos. Disadvantage: in some scenes the result loses pixel detail.

  • Bilinear sampling

    matrix.setScale(scale, scale)
    Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)

    For pure-text images the result looks better than nearest-neighbor sampling. Disadvantage: the image must first be loaded into memory, and if it is too large an OOM is easy to hit. A fuller sketch follows this list.

    For the pros and cons of the compression algorithms, see the QQ Music technology team's article "Android image compression analysis".
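
    A minimal sketch of the bilinear path (decoding and the scale calculation are simplified; the function name is illustrative):

    // Sketch: decode first, then scale with a Matrix; filter = true enables bilinear filtering.
    fun bilinearScale(bitmap: Bitmap, targetWidth: Int): Bitmap {
        val scale = targetWidth.toFloat() / bitmap.width
        val matrix = Matrix().apply { setScale(scale, scale) }
        return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
    }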

The quality compression step otherwise does the same thing as Luban.

Conclusion

  • Understanding the source code of open-source projects expands our knowledge, and lets us quickly Ctrl+C / Ctrl+V the right pieces when such a need arises.
  • Hands-on practice is the best way to learn a new technology; only by doing do you gain a deep understanding.