When an app's sharing, publishing, or upload features involve pictures, the pictures usually need to be compressed to some degree. I recently rebuilt a dual-platform (iOS & Android) image compression module in my company's project. This article explains, at a fairly basic level, some methods and ideas for image compression.

Picture Format Basics

Bitmap & vector image

  • Bitmap (also called a raster image): stores color information per pixel and renders the image by laying out those pixels.
  • Vector image: records the shapes and colors of its elements as an algorithm, and renders the result of running that algorithm.

Color

Index color & direct color

  • Index color: the image carries a number-to-color mapping table (palette), and each pixel stores a numeric index into that table. A palette can hold only a limited number of colors, usually 256, so each pixel's index occupies 1 byte (8 bits).
  • Direct color: each pixel stores four numbers corresponding to red, green, blue, and transparency (RGBA). Depending on the color depth (the number of bits of color information stored per pixel), the commonly supported depths are 8-bit (R3+G3+B2), 16-bit (R5+G6+B5), 24-bit (R8+G8+B8), and 32-bit (A8+R8+G8+B8), so each pixel takes between 1 and 4 bytes.
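The storage trade-off between the two schemes is easy to see with a quick calculation (a language-agnostic sketch in Python; the 100×100 image and 3-byte palette entries are illustrative assumptions, not values from the article):

```python
# Storage cost of a hypothetical 100x100 bitmap under the two color schemes.
width, height = 100, 100

# Index color: 1 byte per pixel plus a 256-entry RGB palette (3 bytes per entry).
index_color_bytes = width * height * 1 + 256 * 3

# Direct color at 24-bit depth (R8+G8+B8): 3 bytes per pixel, no palette.
direct_24_bytes = width * height * 3

# Direct color at 32-bit depth (A8+R8+G8+B8): 4 bytes per pixel.
direct_32_bytes = width * height * 4

print(index_color_bytes)  # 10768
print(direct_24_bytes)    # 30000
print(direct_32_bytes)    # 40000
```

For images with few distinct colors, the palette scheme is roughly a third the size of 24-bit direct color, which is why GIF gets away with index color.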

Commonly used image formats on mobile terminals

Image formats can generally be divided into static and animated images.

Static images
  • JPG: the most common image format, using JPEG lossy compression. It is a bitmap format, commonly using direct color with a 24-bit color depth (transparency is not supported).
  • PNG: an image format with lossless compression. It is a bitmap format. PNG has 5 color options: index color, grayscale, grayscale with transparency, true color (24-bit direct color), and true color with transparency (32-bit direct color).
  • WebP: an image format supporting both lossy and lossless compression. It is a bitmap format and supports 32-bit direct color. Mobile support is as follows:

| System  | Native                      | WebView       | Browser       |
| ------- | --------------------------- | ------------- | ------------- |
| iOS     | Third-party library support | Not supported | Not supported |
| Android | Fully supported since 4.3   | Supported     | Supported     |
Animated images
  • GIF: an image format with lossless compression. It is a bitmap format, using index color with a 1-bit transparency channel (fully transparent or fully opaque). Mobile support is universal.
  • APNG: a PNG-based extension adding animation support. It is a bitmap format using 32-bit direct color, but it was never accepted into the official PNG standard. Mobile support is as follows:

| System  | Native                      | WebView       | Browser       |
| ------- | --------------------------- | ------------- | ------------- |
| iOS     | Supported                   | Supported     | Supported     |
| Android | Third-party library support | Not supported | Not supported |

  • Animated WebP: the animated form of WebP, which is essentially multiple single-frame WebPs packaged into one file, supported since libwebp 0.4. Mobile support is as follows:

| System  | Native                      | WebView       | Browser       |
| ------- | --------------------------- | ------------- | ------------- |
| iOS     | Third-party library support | Not supported | Not supported |
| Android | Third-party library support | Not supported | Not supported |

Since most projects need to be compatible with three platforms (iOS, Android, Web), the simplest approach is to support the three common formats JPG, PNG, and GIF. This article therefore does not discuss compression for the other image formats.

Image processing architecture of mobile systems

Based on my understanding, the diagram below sketches the iOS & Android image processing architectures. On the iOS side, you can also call the underlying frameworks directly.

iOS ImageIO

On iOS, this article mainly uses the ImageIO framework for image processing, mainly because its API calls are the same for static and animated images, and no data is lost through UIImage conversion.

ImageIO mainly provides image encoding and decoding, wrapped in a set of C interfaces. Swift does not require manual memory management for these C objects, which is much easier than Objective-C, but the APIs return Optionals that must be unwrapped with guard/if let or force-unwrapped with !.

Decoding

1. Create CGImageSource

CGImageSource is ImageIO's abstraction of a data source. It is generally created with CGImageSourceCreateWithDataProvider:, which needs a CGDataProvider that can wrap a file, a URL, or Data as input. There is also the convenience method CGImageSourceCreateWithData:, which takes CFData directly. The second argument, options, takes a dictionary for configuration. According to Apple's "Image and Graphics Best Practices" session at WWDC 2018, when no decoding is needed yet and only the CGImageSource is being created, kCGImageSourceShouldCache should be set to false.

2. Decode to obtain a CGImage

Use CGImageSourceCreateImageAtIndex: or CGImageSourceCreateThumbnailAtIndex: to obtain the generated CGImage. The index parameter is the frame number; pass 0 for static images.

Encoding

1. Create CGImageDestination

CGImageDestination is ImageIO's abstraction of a data output. It is generally created with CGImageDestinationCreateWithDataConsumer:, which needs a CGDataConsumer that can write to a URL or to Data. There is also the convenience method CGImageDestinationCreateWithData:, which takes a mutable CFData that the output will be written into. The method also needs the image type and the frame count.

2. Add CGImage

Add a CGImage with the CGImageDestinationAddImage: method; for animated images, call it once per frame in order.

There is also a special method, CGImageDestinationAddImageFromSource:, which adds images directly from a CGImageSource and can change image properties through its options parameter — for example, changing the JPG compression factor. With this method there is no need to convert to a top-level object (such as UIImage), which avoids codec losses during conversion and gives better performance.

3. Encode

Call CGImageDestinationFinalize: to start encoding; it returns a Boolean on completion, and the data is written into the DataConsumer provided when the CGImageDestination was created.

Compression approaches

The space occupied by a bitmap is essentially: number of pixels × space per pixel × number of frames. So reducing image size starts from these three directions. The space per pixel, in the direct-color case, mainly depends on color depth. In real projects, changing the color depth makes the image colors deviate from the original, so I do not recommend it. Reducing the pixel count usually means scaling down the resolution. In addition, the JPG format has its own lossy compression controlled by a compression factor.

  • JPG: compression factor + resolution scaling + color depth reduction
  • PNG: resolution scaling + color depth reduction
  • GIF: frame count reduction + per-frame resolution scaling + palette reduction
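Since the three factors multiply, the effect of each lever is easy to quantify (a language-agnostic sketch in Python; the 1920×1080 example is purely illustrative):

```python
def bitmap_bytes(width, height, bytes_per_pixel, frames=1):
    # size = pixel count x bytes per pixel x frame count
    return width * height * bytes_per_pixel * frames

original = bitmap_bytes(1920, 1080, 4)  # 32-bit depth, single frame
half_res = bitmap_bytes(960, 540, 4)    # halve both sides
rgb565 = bitmap_bytes(1920, 1080, 2)    # drop depth to 16-bit

print(original)  # 8294400
print(half_res)  # 2073600 -> a quarter of the original
print(rgb565)    # 4147200 -> half of the original
```

Halving the resolution quarters the decoded size, which is why resolution scaling is usually the most effective lever.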

Determining the image format

It is not safe to judge by the filename suffix or extension. The reliable way is to check the magic bytes in the file header.

| Format | File header bytes                          |
| ------ | ------------------------------------------ |
| JPG    | starts with FF D8, ends with FF D9         |
| PNG    | 89 50 4E 47 0D 0A 1A 0A                    |
| GIF    | 47 49 46 38 39/37 61 ("GIF89a" / "GIF87a") |

A simple judgment can be made from the first three bytes:

iOS
extension Data {
    enum ImageFormat {
        case jpg, png, gif, unknown
    }
    
    var imageFormat: ImageFormat {
        var headerData = [UInt8](repeating: 0, count: 3)
        self.copyBytes(to: &headerData, from: 0..<3)
        let hexString = headerData.reduce("") { $0 + String(($1 & 0xFF), radix: 16) }.uppercased()
        var imageFormat = ImageFormat.unknown
        switch hexString {
        case "FFD8FF": imageFormat = .jpg
        case "89504E": imageFormat = .png
        case "474946": imageFormat = .gif
        default: break
        }
        return imageFormat
    }
}

Besides reading the file header, on iOS you can also convert the Data to a CGImageSource and use the CGImageSourceGetType API to get the UTI identifier string for any image format supported by ImageIO. The corresponding identifier constants are defined in UTCoreTypes in the MobileCoreServices framework.

| String constant | UTI (string raw value) |
| --------------- | ---------------------- |
| kUTTypePNG      | public.png             |
| kUTTypeJPEG     | public.jpeg            |
| kUTTypeGIF      | com.compuserve.gif     |
Android
enum class ImageFormat{
    JPG, PNG, GIF, UNKNOWN
}

fun ByteArray.imageFormat(): ImageFormat {
    val headerData = this.slice(0..2)
    val hexString = headerData.fold(StringBuilder("")) { result, byte -> result.append((byte.toInt() and 0xFF).toString(16)) }.toString().toUpperCase()
    var imageFormat = ImageFormat.UNKNOWN
    when (hexString) {
        "FFD8FF" -> {
            imageFormat = ImageFormat.JPG
        }
        "89504E" -> {
            imageFormat = ImageFormat.PNG
        }
        "474946" -> {
            imageFormat = ImageFormat.GIF
        }
    }
    return imageFormat
}

Color depth change

In practice, reducing color depth generally means going from 32 bits to 16 bits, but the resulting color shift is usually unacceptable to product, users, and design. So I did not actually use color depth changes in the compression pipeline; I only studied the approach.

iOS

In iOS, there is no convenient way to change color depth in the native CGImage library; you need to set the parameters yourself and regenerate the CGImage.

public init?(width: Int, height: Int, bitsPerComponent: Int, bitsPerPixel: Int, bytesPerRow: Int, space: CGColorSpace, bitmapInfo: CGBitmapInfo, provider: CGDataProvider, decode: UnsafePointer<CGFloat>?, shouldInterpolate: Bool, intent: CGColorRenderingIntent)
  • bitsPerComponent: the number of bits per channel
  • bitsPerPixel: the number of bits per pixel, equal to the bits of all channels combined, i.e. the color depth
  • bytesPerRow: computed automatically if you pass 0
  • space: the color space
  • bitmapInfo: a very important one. The commonly used parts are CGImageAlphaInfo, which indicates whether there is an alpha channel and whether it comes before or after the color channels (ARGB or RGBA); whether the components are floats (floatComponents); and CGImageByteOrderInfo, which stands for the byte order (big or little endian) and data unit width. iOS generally uses 32-bit little-endian mode, usually orderDefault.
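To make bitsPerComponent and bitsPerPixel concrete, here is how a 16-bit (R5+G6+B5) pixel is packed (a language-agnostic sketch in Python, not part of the iOS API):

```python
def pack_rgb565(r, g, b):
    # Keep the high 5/6/5 bits of each 8-bit channel and pack them into 16 bits:
    # bits 15-11 = red, bits 10-5 = green, bits 4-0 = blue.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(pack_rgb565(255, 0, 0)))  # 0xf800 (pure red)
print(hex(pack_rgb565(0, 255, 0)))  # 0x7e0  (pure green)
print(hex(pack_rgb565(0, 0, 255)))  # 0x1f   (pure blue)
```

The low bits of each channel are simply discarded, which is exactly where the visible color shift of depth reduction comes from.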

For common color depths you can use combinations of these parameters. The vImage_CGImageFormat struct of the lower-level vImage framework also documents them (CGImage is built on vImage; see the vImage_Utilities file in the Accelerate framework's vImage library, where Apple's annotations list common parameter combinations for each color depth).

To stay consistent with Android, I wrapped the parameter sets corresponding to Android's common color depths into an enum.

public enum ColorConfig{
    case alpha8
    case rgb565
    case argb8888
    case rgbaF16
    case unknown // The rest of the color configuration
}

Since CGBitmapInfo is an OptionSet, a convenience initializer wraps the commonly used attributes.

extension CGBitmapInfo {
    init(_ alphaInfo: CGImageAlphaInfo, _ isFloatComponents: Bool = false) {
        var array = [
            CGBitmapInfo(rawValue: alphaInfo.rawValue),
            CGBitmapInfo(rawValue: CGImageByteOrderInfo.orderDefault.rawValue)
        ]
        
        if isFloatComponents {
            array.append(.floatComponents)
        }
        
        self.init(array)
    }
}

The CGImage parameters corresponding to each ColorConfig can then be mapped.

extension ColorConfig{
    struct CGImageConfig{
        let bitsPerComponent:Int
        let bitsPerPixel:Int
        let bitmapInfo: CGBitmapInfo
    }
    
    var imageConfig: CGImageConfig? {
        switch self {
        case .alpha8:
            return CGImageConfig(bitsPerComponent: 8, bitsPerPixel: 8, bitmapInfo: CGBitmapInfo(.alphaOnly))
        case .rgb565:
            return CGImageConfig(bitsPerComponent: 5, bitsPerPixel: 16, bitmapInfo: CGBitmapInfo(.noneSkipFirst))
        case .argb8888:
            return CGImageConfig(bitsPerComponent: 8, bitsPerPixel: 32, bitmapInfo: CGBitmapInfo(.premultipliedFirst))
        case .rgbaF16:
            return CGImageConfig(bitsPerComponent: 16, bitsPerPixel: 64, bitmapInfo: CGBitmapInfo(.premultipliedLast, true))
        case .unknown:
            return nil
        }
    }
}

Conversely, here is a method to determine a CGImage's ColorConfig.

extension CGImage{
    var colorConfig:ColorConfig{
        if isColorConfig(.alpha8) {
            return .alpha8
        } else if isColorConfig(.rgb565) {
            return .rgb565
        } else if isColorConfig(.argb8888) {
            return .argb8888
        } else if isColorConfig(.rgbaF16) {
            return .rgbaF16
        } else {
            return .unknown
        }
    }
    
    func isColorConfig(_ colorConfig:ColorConfig) -> Bool{
        guard let imageConfig = colorConfig.imageConfig else {
            return false
        }
        
        return bitsPerComponent == imageConfig.bitsPerComponent &&
            bitsPerPixel == imageConfig.bitsPerPixel &&
            imageConfig.bitmapInfo.contains(CGBitmapInfo(alphaInfo)) &&
            imageConfig.bitmapInfo.contains(.floatComponents) == bitmapInfo.contains(.floatComponents)
    }
}

The externally exposed API directly follows the ImageIO steps introduced above, only with different parameters.

    /// Change the image to the specified color configuration
    ///
    /// - Parameters:
    ///   - rawData: raw image data
    ///   - config: color configuration
    /// - Returns: data after processing
    public static func changeColorWithImageData(_ rawData: Data, config: ColorConfig) -> Data? {
        guard let imageConfig = config.imageConfig else {
            return rawData
        }
    
        guard let imageSource = CGImageSourceCreateWithData(rawData as CFData, [kCGImageSourceShouldCache: false] as CFDictionary),
            let writeData = CFDataCreateMutable(nil, 0),
            let imageType = CGImageSourceGetType(imageSource),
            let imageDestination = CGImageDestinationCreateWithData(writeData, imageType, 1, nil),
            let rawDataProvider = CGDataProvider(data: rawData as CFData),
            let imageFrame = CGImage(width: Int(rawData.imageSize.width),
                                     height: Int(rawData.imageSize.height),
                                     bitsPerComponent: imageConfig.bitsPerComponent,
                                     bitsPerPixel: imageConfig.bitsPerPixel,
                                     bytesPerRow: 0,
                                     space: CGColorSpaceCreateDeviceRGB(),
                                     bitmapInfo: imageConfig.bitmapInfo,
                                     provider: rawDataProvider,
                                     decode: nil,
                                     shouldInterpolate: true,
                                     intent: .defaultIntent) else {
                                        return nil
        }
        CGImageDestinationAddImage(imageDestination, imageFrame, nil)
        guard CGImageDestinationFinalize(imageDestination) else {
            return nil
        }
        return writeData as Data
    }
    
    /// Get the color configuration of the image
    ///
    /// - Parameter rawData: raw image data
    /// - Returns: color configuration
    public static func getColorConfigWithImageData(_ rawData: Data) -> ColorConfig {
        guard let imageSource = CGImageSourceCreateWithData(rawData as CFData, [kCGImageSourceShouldCache: false] as CFDictionary),
            let imageFrame = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
                return .unknown
        }
        return imageFrame.colorConfig
    }
Android

On Android, the native Bitmap library has a handy method to convert color depth: just pass in the target Config.

public Bitmap copy(Config config, boolean isMutable) {
      checkRecycled("Can't copy a recycled bitmap");
      if (config == Config.HARDWARE && isMutable) {
          throw new IllegalArgumentException("Hardware bitmaps are always immutable");
      }
      noteHardwareBitmapSlowCall();
      Bitmap b = nativeCopy(mNativePtr, config.nativeInt, isMutable);
      if (b != null) {
          b.setPremultiplied(mRequestPremultiplied);
          b.mDensity = mDensity;
      }
      return b;
}


The mapping between iOS CGImage parameters, Android's Bitmap.Config, and color depth is as follows:

| Color depth | iOS | Android |
| ----------- | --- | ------- |
| 8-bit grayscale (alpha only) | bitsPerComponent: 8, bitsPerPixel: 8, bitmapInfo: CGImageAlphaInfo.alphaOnly | Bitmap.Config.ALPHA_8 |
| 16-bit color (R5+G6+B5) | bitsPerComponent: 5, bitsPerPixel: 16, bitmapInfo: CGImageAlphaInfo.noneSkipFirst | Bitmap.Config.RGB_565 |
| 32-bit color (A8+R8+G8+B8) | bitsPerComponent: 8, bitsPerPixel: 32, bitmapInfo: CGImageAlphaInfo.premultipliedFirst | Bitmap.Config.ARGB_8888 |
| 64-bit color (R16+G16+B16+A16, half-precision floats to halve storage; used for wide gamut or HDR) | bitsPerComponent: 16, bitsPerPixel: 64, bitmapInfo: CGImageAlphaInfo.premultipliedLast + .floatComponents | Bitmap.Config.RGBA_F16 |

JPG compression factor

JPG compression algorithms are complex; the mainstream approach is to use the widely adopted libjpeg library for encoding and decoding (Android switched to the better libjpeg-turbo starting with 7.0, while iOS uses Apple's own closed-source AppleJPEG). Both iOS and Android expose APIs that take a compression factor to compress JPG. How exactly the factor affects the output size is not studied in depth here; below is a brief description of usage.

iOS

The compression factor on iOS is between 0 and 1; the factor used by the iOS photo album is said to be 0.9. PNG does not support lossy compression, so for PNG the kCGImageDestinationLossyCompressionQuality parameter has no effect.

static func compressImageData(_ rawData: Data, compression: Double) -> Data? {
    guard let imageSource = CGImageSourceCreateWithData(rawData as CFData, [kCGImageSourceShouldCache: false] as CFDictionary),
        let writeData = CFDataCreateMutable(nil, 0),
        let imageType = CGImageSourceGetType(imageSource),
        let imageDestination = CGImageDestinationCreateWithData(writeData, imageType, 1, nil) else {
            return nil
    }
    
    let frameProperties = [kCGImageDestinationLossyCompressionQuality: compression] as CFDictionary
    CGImageDestinationAddImageFromSource(imageDestination, imageSource, 0, frameProperties)
    guard CGImageDestinationFinalize(imageDestination) else {
        return nil
    }
    return writeData as Data
}

Android

Android uses Bitmap's own compress interface and outputs to a stream. The compression factor is between 0 and 100. You can also pass Bitmap.CompressFormat.PNG here, but it is likewise ignored.

val outputStream = ByteArrayOutputStream()
val image = BitmapFactory.decodeByteArray(rawData, 0, rawData.count())
image.compress(Bitmap.CompressFormat.JPEG, compression, outputStream)
resultData = outputStream.toByteArray()


GIF compression

There are many ideas for GIF compression. Consider the open source projects gifsicle and ImageMagick.

  1. GIF supports global and local palettes; when a frame has no local palette, the global palette in the file header is used. So for GIFs whose colors do not vary much, you can move the colors into the global palette and remove the local palettes.

  2. For GIFs with few colors, reduce the palette size, e.g. from 256 to 128.

  3. For a GIF with a consistent background where only part of the image changes, you can store the moving elements and the background separately, plus information on how to recombine them.

  4. For a GIF with a consistent background where part of each frame moves, you can compare each frame with the previous one and make the unchanged parts transparent.

  5. For GIFs with many frames, sample the frames and drop the ones in between to reduce the frame count.

  6. For GIFs with a high per-frame resolution, reduce the resolution of each frame.

For hand-drawn animated GIFs, 3 and 4 are useful because the background is generally static, but they help little for filmed GIFs, where slight camera shake makes it hard to keep the background identical. On mobile, unless you port ImageMagick or gifsicle to iOS & Android, the first four methods are hard to implement, so I only implemented frame sampling and per-frame resolution compression.
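For illustration only, idea 4 boils down to a per-pixel diff against the previous frame (a toy language-agnostic sketch in Python over flattened palette indices; None stands for a transparent pixel — not code from any GIF library):

```python
# Toy sketch of idea 4: compare a frame with the previous one and mark
# unchanged pixels as transparent (None), so only the moving part is stored.
def diff_frame(prev, curr):
    return [q if q != p else None for p, q in zip(prev, curr)]

prev_frame = [1, 1, 2, 2]  # flattened 2x2 frame of palette indices
curr_frame = [1, 1, 3, 2]  # only one pixel changed

print(diff_frame(prev_frame, curr_frame))  # [None, None, 3, None]
```

Runs of transparent pixels compress very well under GIF's LZW coding, which is where the savings come from; the same reasoning also shows why a shaky filmed background defeats this optimization.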

For the sampling interval, I refer to the values from another article:

| Frame count | Use 1 frame for every x frames |
| ----------- | ------------------------------ |
| < 9         | x = 2                          |
| 9–20        | x = 3                          |
| 21–30       | x = 4                          |
| 31–40       | x = 5                          |
| > 40        | x = 6                          |

There is another problem: the frames being dropped may themselves have been compressed with methods 3 and 4, so they must first be restored into complete frames before re-encoding, and the result can no longer benefit from 3 and 4. The frame count goes down, but each remaining frame reverts to its unoptimized state, so the compression gain is smaller than expected (which is why this kind of compression is best done on the server). Also remember to add the durations of the dropped frames to the remaining frames, otherwise playback speeds up; and do not lazily compute the merged time as x times one frame's duration, because not all frames have the same duration.
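The sampling and duration-merging rules can be sketched independent of platform (Python, durations in milliseconds; the 200 ms cap matches the limit used in the implementations below):

```python
def sample_frames(durations_ms, x, cap_ms=200):
    # Keep one frame out of every x; fold the dropped frames' durations
    # into the kept frame, clamped to cap_ms so no frame lingers too long.
    kept = []
    for i in range(0, len(durations_ms), x):
        merged = sum(durations_ms[i:i + x])
        kept.append(min(merged, cap_ms))
    return kept

# 6 frames of 60 ms each, sampled with x = 3:
print(sample_frames([60] * 6, 3))   # [180, 180]
# 4 frames of 150 ms each, sampled with x = 2 -> merged 300 ms is capped:
print(sample_frames([150] * 4, 2))  # [200, 200]
```

Summing the actual per-frame durations (rather than multiplying one duration by x) is what keeps the total playback time correct for GIFs with uneven frame timing.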

iOS

The iOS implementation is relatively simple: it can be done with ImageIO functions, and performance is good.

First, define a convenience extension to get each frame's duration from the CGImageSource. The duration is stored under kCGImagePropertyGIFUnclampedDelayTime or kCGImagePropertyGIFDelayTime; the difference between the two keys is that the latter has a minimum-value clamp. The correct way to read these keys can be found in Apple's usage in the WebKit source.

extension CGImageSource {
    func frameDurationAtIndex(_ index: Int) -> Double {
        var frameDuration = Double(0.1)
        guard let frameProperties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [AnyHashable: Any],
            let gifProperties = frameProperties[kCGImagePropertyGIFDictionary] as? [AnyHashable: Any] else {
            return frameDuration
        }
        
        if let unclampedDuration = gifProperties[kCGImagePropertyGIFUnclampedDelayTime] as? NSNumber {
            frameDuration = unclampedDuration.doubleValue
        } else {
            if let clampedDuration = gifProperties[kCGImagePropertyGIFDelayTime] as? NSNumber {
                frameDuration = clampedDuration.doubleValue
            }
        }
        
        if frameDuration < 0.011 {
            frameDuration = 0.1
        }
        
        return frameDuration
    }
    
    var frameDurations: [Double] {
        let frameCount = CGImageSourceGetCount(self)
        return (0..<frameCount).map { self.frameDurationAtIndex($0) }
    }
}

Remove the unwanted frames first, merge their durations into the kept frames, then re-encode. Be careful not to let a merged frame last too long, or the experience suffers; the maximum here is 200ms.

    /// Synchronously compress an image by sampling its frames. Only GIF is supported
    ///
    /// - Parameters:
    ///   - rawData: raw image data
    ///   - sampleCount: use 1 frame for every sampleCount frames
    /// - Returns: data after processing
    static func compressImageData(_ rawData: Data, sampleCount: Int) -> Data? {
        guard let imageSource = CGImageSourceCreateWithData(rawData as CFData, [kCGImageSourceShouldCache: false] as CFDictionary),
            let writeData = CFDataCreateMutable(nil, 0),
            let imageType = CGImageSourceGetType(imageSource) else {
                return nil
        }
        
        // Calculate the duration of each frame
        let frameDurations = imageSource.frameDurations
        
        // Merge the durations of dropped frames; a merged frame cannot exceed 200ms
        let mergeFrameDurations = (0..<frameDurations.count).filter { $0 % sampleCount == 0 }.map { min(frameDurations[$0..<min($0 + sampleCount, frameDurations.count)].reduce(0.0) { $0 + $1 }, 0.2) }
        
        // Use 1 frame for every sampleCount frames
        let sampleImageFrames = (0..<frameDurations.count).filter { $0 % sampleCount == 0 }.compactMap { CGImageSourceCreateImageAtIndex(imageSource, $0, nil) }
        
        guard let imageDestination = CGImageDestinationCreateWithData(writeData, imageType, sampleImageFrames.count, nil) else {
            return nil
        }
        
        // Re-encode each frame
        zip(sampleImageFrames, mergeFrameDurations).forEach {
            // Set the frame duration
            let frameProperties = [kCGImagePropertyGIFDictionary: [kCGImagePropertyGIFDelayTime: $1, kCGImagePropertyGIFUnclampedDelayTime: $1]]
            CGImageDestinationAddImage(imageDestination, $0, frameProperties as CFDictionary)
        }
        
        guard CGImageDestinationFinalize(imageDestination) else {
            return nil
        }
        
        return writeData as Data
    }
    


Compressing the resolution is similar: scale each frame down and re-encode.

Android

Android's native support for GIF is not very friendly, and since I have not researched Android deeply, I used Glide's GIF codec components for now. Encoding performance is mediocre, worse than iOS, but unless you switch to a lower-level C++ codec, Java-based encoding will stay mediocre. First import Glide with Gradle; note that the decoder ships by default, but the encoder must be imported separately.

api 'com.github.bumptech.glide:glide:4.8.0'
api 'com.github.bumptech.glide:gifencoder-integration:4.8.0'


Frame sampling works the same as on iOS, except that Glide's GIF decoder cannot read a frame at an arbitrary index — it can only read frames sequentially, calling advance() to move on. The header information is read from the GIF first, then the actual frame data.

	/** Returns the GIF image data from [rawData] synchronously compressed by sampling 1 frame of every [sampleCount] frames */
    private fun compressGifDataWithSampleCount(context: Context, rawData: ByteArray, sampleCount: Int): ByteArray? {
        if (sampleCount <= 1) {
            return rawData
        }
        val gifDecoder = StandardGifDecoder(GifBitmapProvider(Glide.get(context).bitmapPool))
        val headerParser = GifHeaderParser()
        headerParser.setData(rawData)
        val header = headerParser.parseHeader()
        gifDecoder.setData(header, rawData)

        val frameCount = gifDecoder.frameCount

        // Calculate the interval of frames
        val frameDurations = (0 until frameCount).map { gifDecoder.getDelay(it) }

        // Merge the durations of dropped frames; a merged frame cannot exceed 200ms
        val mergeFrameDurations = (0 until frameCount).filter { it % sampleCount == 0 }.map {
            min(
                frameDurations.subList(
                    it,
                    min(it + sampleCount, frameCount)
                ).fold(0) { acc, duration -> acc + duration }, 200)
        }

        // Sample the frames
        val sampleImageFrames = (0 until frameCount).mapNotNull {
            gifDecoder.advance()
            var imageFrame: Bitmap? = null
            if (it % sampleCount == 0) {
                imageFrame = gifDecoder.nextFrame
            }
            imageFrame
        }

        val gifEncoder = AnimatedGifEncoder()

        var resultData: ByteArray? = null

        try {
            val outputStream = ByteArrayOutputStream()
            gifEncoder.start(outputStream)
            gifEncoder.setRepeat(0)

            // Each frame is re-encoded
            sampleImageFrames.zip(mergeFrameDurations).forEach {
                // Set the frame interval
                gifEncoder.setDelay(it.second)
                gifEncoder.addFrame(it.first)
                it.first.recycle()
            }
            gifEncoder.finish()

            resultData = outputStream.toByteArray()
            outputStream.close()
        } catch (e: IOException) {
            e.printStackTrace()
        }

        return resultData
    }


When compressing the resolution, note that encoding at too large a resolution easily crashes with an OOM; the limit is set to 512 here.

    /** Returns the GIF image data from [rawData] with each frame's long side synchronously scaled down to [limitLongWidth] */
    private fun compressGifDataWithLongWidth(context: Context, rawData: ByteArray, limitLongWidth: Int): ByteArray? {
        val gifDecoder = StandardGifDecoder(GifBitmapProvider(Glide.get(context).bitmapPool))
        val headerParser = GifHeaderParser()
        headerParser.setData(rawData)
        val header = headerParser.parseHeader()
        gifDecoder.setData(header, rawData)
        val frameCount = gifDecoder.frameCount

        // Calculate the duration of each frame
        val frameDurations = (0 until frameCount).map { gifDecoder.getDelay(it) }

        // Calculate the adjusted size
        val longSideWidth = max(header.width, header.height)
        val ratio = limitLongWidth.toFloat() / longSideWidth.toFloat()
        val resizeWidth = (header.width.toFloat() * ratio).toInt()
        val resizeHeight = (header.height.toFloat() * ratio).toInt()

        // Scale each frame
        val resizeImageFrames = (0 until frameCount).mapNotNull {
            gifDecoder.advance()
            var imageFrame = gifDecoder.nextFrame
            if (imageFrame != null) {
                imageFrame = Bitmap.createScaledBitmap(imageFrame, resizeWidth, resizeHeight, true)
            }
            imageFrame
        }

        val gifEncoder = AnimatedGifEncoder()
        var resultData: ByteArray? = null

        try {
            val outputStream = ByteArrayOutputStream()
            gifEncoder.start(outputStream)
            gifEncoder.setRepeat(0)

            // Each frame is re-encoded
            resizeImageFrames.zip(frameDurations).forEach {
                // Set the frame interval
                gifEncoder.setDelay(it.second)
                gifEncoder.addFrame(it.first)
                it.first.recycle()
            }

            gifEncoder.finish()

            resultData = outputStream.toByteArray()
            outputStream.close()
            return resultData
        } catch (e: IOException) {
            e.printStackTrace()
        }
        return resultData
    }


Resolution compression

This is the most common approach and relatively simple.
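The arithmetic behind long-side scaling is the same on both platforms (a language-agnostic sketch in Python; the 1000×600 numbers are purely illustrative):

```python
def fit_long_side(width, height, limit):
    # Scale so the longer side equals `limit`, keeping the aspect ratio.
    long_side = max(width, height)
    if long_side <= limit:
        return width, height  # never upscale
    ratio = limit / long_side
    return int(width * ratio), int(height * ratio)

print(fit_long_side(1000, 600, 512))  # (512, 307)
print(fit_long_side(300, 400, 512))   # (300, 400), already small enough
```

This is exactly what the thumbnail options on iOS and the Bitmap scaling on Android compute internally: cap the long side, scale the short side proportionally.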

iOS

iOS's ImageIO provides the CGImageSourceCreateThumbnailAtIndex API to create scaled thumbnails; pass the long-side limit in the options.

    /// Synchronously compress the image data to the specified value
    ///
    /// - Parameters:
    /// -rawData: raw image data
    /// - limitLongWidth: limitLongWidth
    /// - Returns: data after processing
    public static func compressImageData(_ rawData:Data, limitLongWidth:CGFloat) -> Data? {guard max(rawData.imageSize.height, rawData.imageSize.width) > limitLongWidth else {
            return rawData
        }
        
        guard let imageSource = CGImageSourceCreateWithData(rawData as CFData, [kCGImageSourceShouldCache: false] as CFDictionary),
            let writeData = CFDataCreateMutable(nil.0),
            let imageType = CGImageSourceGetType(imageSource) else {
                return nil
        }
        
        
        let frameCount = CGImageSourceGetCount(imageSource)
        
        guard let imageDestination = CGImageDestinationCreateWithData(writeData, imageType, frameCount, nil) else{
            return nil
        }
        
        // Set the thumbnail options. kCGImageSourceThumbnailMaxPixelSize caps the long side of the generated thumbnail: when set to 800, a 1000*750 source yields an 800*600 thumbnail, while a 700*500 source is left at 700*500 (thumbnails are never upscaled).
        let options = [kCGImageSourceThumbnailMaxPixelSize: limitLongWidth, kCGImageSourceCreateThumbnailWithTransform:true, kCGImageSourceCreateThumbnailFromImageIfAbsent:true] as CFDictionary
        
        if frameCount > 1 {
            // Calculate the interval of frames
            let frameDurations = imageSource.frameDurations
            
            // Each frame is scaled
            let resizedImageFrames = (0..<frameCount).compactMap{ CGImageSourceCreateThumbnailAtIndex(imageSource, $0, options) }
            
            // Each frame is re-encoded
            zip(resizedImageFrames, frameDurations).forEach {
                // Set the frame interval
                let frameProperties = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFDelayTime: $1, kCGImagePropertyGIFUnclampedDelayTime: $1]]
                CGImageDestinationAddImage(imageDestination, $0, frameProperties as CFDictionary)
            }
        } else {
            guard let resizedImageFrame = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options) else {
                return nil
            }
            CGImageDestinationAddImage(imageDestination, resizedImageFrame, nil)
        }
        
        guard CGImageDestinationFinalize(imageDestination) else {
            return nil
        }
        
        return writeData as Data
    }

Android

For static images on Android, use Bitmap's createScaledBitmap API, as described above.

    /** Scales [rawData] so that its long side does not exceed [limitLongWidth]; for GIFs, the target long side is clamped to no less than 512. Returns null on failure. */
    fun compressImageDataWithLongWidth(context: Context, rawData: ByteArray, limitLongWidth: Int): ByteArray? {
        val format = rawData.imageFormat()
        if (format == ImageFormat.UNKNOWN) {
            return null
        }

        val (imageWidth, imageHeight) = rawData.imageSize()
        val longSideWidth = max(imageWidth, imageHeight)

        if (longSideWidth <= limitLongWidth) {
            return rawData
        }

        if (format == ImageFormat.GIF) {
            // GIF re-encoding is prone to crashing at very small resolutions, so clamp the target long side to at least 512
            return compressGifDataWithLongWidth(context, rawData, max(512, limitLongWidth))
        } else {
            val image = BitmapFactory.decodeByteArray(rawData, 0, rawData.size)
            val ratio = limitLongWidth.toDouble() / longSideWidth.toDouble()
            val resizeImageFrame = Bitmap.createScaledBitmap(
                image,
                (image.width.toDouble() * ratio).toInt(),
                (image.height.toDouble() * ratio).toInt(),
                true
            )
            image.recycle()
            var resultData: ByteArray? = null
            when (format) {
                ImageFormat.PNG -> {
                    resultData = resizeImageFrame.toByteArray(Bitmap.CompressFormat.PNG)
                }
                ImageFormat.JPG -> {
                    resultData = resizeImageFrame.toByteArray(Bitmap.CompressFormat.JPEG)
                }
                else -> {
                }
            }
            resizeImageFrame.recycle()
            return resultData
        }
    }


Compressing to a size limit

This combines the methods above. The scheme used here leaves the color depth unchanged: for JPG, first binary-search the compression factor, for at most 6 rounds; for GIF, first sample frames, recomputing the frame intervals as described earlier; finally, approach the target file size by reducing the resolution.
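The JPG branch in both implementations runs the same bisection over the compression factor: after each encode, the interval is narrowed until the output lands between 90% and 100% of the limit or six rounds are used up. Stripped of the platform encoders, the idea looks like this (encode is a stand-in for UIImage/Bitmap JPEG encoding; Python is used for brevity):

```python
def fit_quality(encode, limit_bytes, rounds=6):
    """Bisect the JPEG quality factor in (0, 1) for at most `rounds`
    rounds; encode(q) returns the bytes produced at quality q."""
    lo, hi = 0.0, 1.0
    data = b""
    for _ in range(rounds):
        q = (lo + hi) / 2
        data = encode(q)
        if len(data) < limit_bytes * 0.9:
            lo = q      # well under the limit: quality can go up
        elif len(data) > limit_bytes:
            hi = q      # over the limit: quality must come down
        else:
            break       # within [90%, 100%] of the limit: accept
    return data
```

Note that six rounds are not guaranteed to reach the limit, which is why both implementations fall back to resolution reduction afterwards.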

iOS

    /// Synchronously compress the image data to the specified file size
    ///
    /// - Parameters:
    ///   - rawData: raw image data
    ///   - limitDataSize: the file size limit, in bytes
    /// - Returns: data after processing
    public static func compressImageData(_ rawData: Data, limitDataSize: Int) -> Data? {
        guard rawData.count > limitDataSize else {
            return rawData
        }
        
        var resultData = rawData
        
        // For JPG, bisect the compression factor, at most 6 rounds
        if resultData.imageFormat == .jpg {
            var compression: Double = 1
            var maxCompression: Double = 1
            var minCompression: Double = 0
            for _ in 0..<6 {
                compression = (maxCompression + minCompression) / 2
                if let data = compressImageData(resultData, compression: compression){
                    resultData = data
                } else {
                    return nil
                }
                if resultData.count < Int(CGFloat(limitDataSize) * 0.9) {
                    minCompression = compression
                } else if resultData.count > limitDataSize {
                    maxCompression = compression
                } else {
                    break
                }
            }
            if resultData.count <= limitDataSize {
                return resultData
            }
        }
        
        // For GIFs, sample frames to reduce the size
        if resultData.imageFormat == .gif {
            let sampleCount = resultData.fitSampleCount
            if let data = compressImageData(resultData, sampleCount: sampleCount){
                resultData = data
            } else {
                return nil
            }
            if resultData.count <= limitDataSize {
                return resultData
            }
        }
        
        var longSideWidth = max(resultData.imageSize.height, resultData.imageSize.width)
        // Scale the resolution down, estimating the ratio from the byte sizes
        while resultData.count > limitDataSize {
            let ratio = sqrt(CGFloat(limitDataSize) / CGFloat(resultData.count))
            longSideWidth *= ratio
            if let data = compressImageData(resultData, limitLongWidth: longSideWidth) {
                resultData = data
            } else {
                return nil
            }
        }
        
        return resultData
    }

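The closing loop relies on a byte-to-area approximation: for bitmap encodings, file size grows roughly linearly with pixel count, so scaling the long side by sqrt(limit / current) should land near the target size within a few iterations. The estimate in isolation (a hypothetical helper in Python):

```python
import math

def next_long_side(long_side, current_bytes, limit_bytes):
    """Estimate the long side that would bring current_bytes down to
    limit_bytes, assuming file size scales linearly with pixel count
    (area), i.e. with the square of the linear dimensions."""
    ratio = math.sqrt(limit_bytes / current_bytes)
    return int(long_side * ratio)
```

For instance, shrinking a 4 MB image with a 1000-pixel long side to 1 MB suggests a 500-pixel long side, since area scales with the square of the linear size.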

Android

    /** Compresses [rawData] until its size does not exceed [limitDataSize] bytes. Returns null on failure. */
    fun compressImageDataWithSize(context: Context, rawData: ByteArray, limitDataSize: Int): ByteArray? {
        if (rawData.size <= limitDataSize) {
            return rawData
        }

        val format = rawData.imageFormat()
        if (format == ImageFormat.UNKNOWN) {
            return null
        }

        var resultData = rawData

        // For JPG, bisect the compression factor, at most 6 rounds
        if (format == ImageFormat.JPG) {
            var compression = 100
            var maxCompression = 100
            var minCompression = 0

            try {
                val outputStream = ByteArrayOutputStream()
                for (index in 0 until 6) {
                    compression = (maxCompression + minCompression) / 2
                    outputStream.reset()
                    val image = BitmapFactory.decodeByteArray(rawData, 0, rawData.size)
                    image.compress(Bitmap.CompressFormat.JPEG, compression, outputStream)
                    image.recycle()
                    resultData = outputStream.toByteArray()
                    if (resultData.size < (limitDataSize.toDouble() * 0.9).toInt()) {
                        minCompression = compression
                    } else if (resultData.size > limitDataSize) {
                        maxCompression = compression
                    } else {
                        break
                    }
                }
                outputStream.close()
            } catch (e: IOException) {
                e.printStackTrace()
            }

            if (resultData.size <= limitDataSize) {
                return resultData
            }
        }

        // For GIFs, sample frames to reduce the size
        if (format == ImageFormat.GIF) {
            val sampleCount = resultData.fitSampleCount()
            val data = compressGifDataWithSampleCount(context, resultData, sampleCount)
            if (data != null) {
                resultData = data
            } else {
                return null
            }

            if (resultData.size <= limitDataSize) {
                return resultData
            }
        }


        val (imageWidth, imageHeight) = resultData.imageSize()
        var longSideWidth = max(imageWidth, imageHeight)

        // Scale the resolution down, estimating the ratio from the byte sizes
        while (resultData.size > limitDataSize) {
            val ratio = Math.sqrt(limitDataSize.toDouble() / resultData.size.toDouble())
            longSideWidth = (longSideWidth.toDouble() * ratio).toInt()
            val data = compressImageDataWithLongWidth(context, resultData, longSideWidth)
            if (data != null) {
                resultData = data
            } else {
                return null
            }
        }

        return resultData
    }


Note that these are time-consuming operations and should be run on a background thread.

Conclusion

All of the code is packaged into iOS and Android utility files; if you find any errors or have suggestions, please point them out.
