An overview of the algorithm

Nearest neighbor interpolation is the simplest interpolation method. Instead of computing an average according to some weighting scheme, or generating intermediate values by more complex rules, it simply copies, for each target pixel, the pixel at the corresponding relative position in the source image, determined by the ratio of the target image's width (height) to the source image's width (height). Every pixel value in the new image is therefore a value taken directly from some pixel of the original image.

Suppose we want to stretch a small 2×2 image to a 4×4 size, as shown below.

Assume that the coordinates of P1 are (Dx, Dy), the width and height of the original image are Sw and Sh respectively, and the width and height of the stretched image are Dw and Dh respectively. We want to find P1's corresponding coordinates (Sx, Sy) in the original image.

```
Dx / Dw = Sx / Sw        Dy / Dh = Sy / Sh

Sx = Dx * (Sw / Dw)
Sy = Dy * (Sh / Dh)
```

For P1, Dx = 0, Dy = 0, Sw = 2, Sh = 2, Dw = 4, Dh = 4:

```
Sx = 0 * (2 / 4) = 0
Sy = 0 * (2 / 4) = 0
```

P2 is (0, 1), so P2's corresponding coordinates in the original image are:

```
Sx = 0 * (2 / 4) = 0
Sy = 1 * (2 / 4) = 0.5   // 0 after the fractional part is removed
```

So P1 maps to (0, 0) in the source image and takes that pixel's value (10 in the example); P2 also maps to (0, 0) after the fractional part is dropped, so its value is likewise 10, and so on for the remaining pixels.

iOS platform native API implementation

  1. The Core Graphics framework

When we resample an image using the Core Graphics framework, setting the `interpolationQuality` property of the `CGContext` to `.none` makes it use the nearest neighbor interpolation algorithm:

```swift
class func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
        let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
    else {
        return nil
    }

    let context = CGContext(data: nil,
                            width: Int(size.width),
                            height: Int(size.height),
                            bitsPerComponent: image.bitsPerComponent,
                            bytesPerRow: image.bytesPerRow,
                            space: image.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
                            bitmapInfo: image.bitmapInfo.rawValue)
    // Set the interpolation quality to .none, i.e. use nearest neighbor interpolation
    context?.interpolationQuality = .none
    context?.draw(image, in: CGRect(origin: .zero, size: size))

    guard let scaledImage = context?.makeImage() else { return nil }

    return UIImage(cgImage: scaledImage)
}
```

The equivalent in Objective-C is:

```objc
CGContextSetInterpolationQuality(context, kCGInterpolationNone);
```
  2. API provided by CALayer

CALayer provides two properties, minificationFilter (the filter type to use when content is scaled down) and magnificationFilter (the filter type to use when content is scaled up), which specify the filter applied when rendering the layer's `contents`. The filter specified by `minificationFilter` is used when the image data has to be shrunk, and the one specified by `magnificationFilter` when it has to be enlarged. The default value of both properties is `kCAFilterLinear`; `kCAFilterNearest` is the filter that uses the nearest neighbor interpolation algorithm. The available filter types are:

  • kCAFilterLinear
  • kCAFilterNearest
  • kCAFilterTrilinear

We can set these layer properties on a UIImageView as follows; when the image data has to be stretched or shrunk because it is larger or smaller than the display size, the filter type you specify is used to resample it. Note, however, that a mismatch between image size and display size is a potential performance concern: if you display many images in a list and their sizes differ from the actual display sizes, scrolling may stutter, because this layer-level resampling happens on the main thread. The usual remedy is to resample the image manually on a background queue before assigning it to the UIImageView.

```objc
imageView.layer.magnificationFilter = kCAFilterNearest;
```

Algorithm implementation

I implemented this algorithm in Objective-C on iOS. First we need to get the image's RGBA data, also known as raw data. This is a one-dimensional array, but in practice we traverse it two-dimensionally.

  1. Get the image's RGBA data:
```objc
// width, height, bitsPerComponent and bytesPerRow are taken from the
// source image (cgOriginalImage)
UInt32 *pixelData = (UInt32 *)calloc(width * height, sizeof(UInt32));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelData,
                                             width,
                                             height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgOriginalImage);
```
  2. Loop over every pixel of the target image, scale its row and column coordinates by `rowRatio` and `colRatio`, and drop the fractional part to get the index of the source pixel to copy:
```objc
static UInt32* scaleImageWithNearestNeighborInterpolation(UInt32 *pixelData, int sourceWidth, int sourceHeight, int desWidth, int desHeight) {
    // Width and height scaling ratios
    float rowRatio = ((float)sourceHeight) / ((float)desHeight);
    float colRatio = ((float)sourceWidth) / ((float)desWidth);
    UInt32 *rgba = (UInt32 *)calloc(desWidth * desHeight, sizeof(UInt32));
    int offset = 0;
    for (int i = 0; i < desHeight; i++) {
        // round(0.5) = 1
        // floor(0.5) = 0
        int srcRow = floor(((float)i) * rowRatio);
        if (srcRow >= sourceHeight) {
            srcRow = sourceHeight - 1;
        }

        for (int j = 0; j < desWidth; j++) {
            int srcCol = floor(((float)j) * colRatio);
            if (srcCol >= sourceWidth) {
                srcCol = sourceWidth - 1;
            }

            rgba[offset] = pixelData[srcRow * sourceWidth + srcCol];
            offset++;
        }
    }
    return rgba;
}
```
  3. Convert the new RGBA data back to a UIImage:
```objc
// width and height are now the destination (scaled) dimensions
CGContextRef bitmapContext = CGBitmapContextCreate(
                                                    rgba,
                                                    width,
                                                    height,
                                                    bitsPerComponent,
                                                    bytesPerRow,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
UIImage *newUIImage = [UIImage imageWithCGImage:cgImage];
```

The final result is the following effect comparison diagram

Source Code

GitHub

References

  • Image Processing in iOS Part 1: Raw Bitmap Modification
  • UIImage resize with hard edges
  • Capturing Uncompressed Image Data
  • Image Processing in C
  • Converting RGB data into a bitmap in Objective-C++ Cocoa
  • Converting Array of Pixel Brightness Values Back into UIImage?
  • Common image resampling algorithms