During project development I ran into a strange phenomenon: the same image displayed normally on iOS and Mac, but not on H5 (web) or Android. Comparing a normal image with an abnormal one, the difference turned out to be the Alpha channel: the abnormal image's Alpha value was "Yes", while the normal image's was "No". See the image details below:
So how do you remove the Alpha channel from a screenshot?
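Before removing the channel, it helps to confirm whether a file actually declares one. For PNG, the color-type byte in the IHDR chunk (the first chunk after the 8-byte signature) tells you: 4 (grayscale + alpha) or 6 (RGBA) means an alpha channel is present, 0, 2, or 3 means none. A minimal stdlib-only Python sketch (the function name `png_has_alpha` is mine, not from the original article):

```python
def png_has_alpha(data: bytes) -> bool:
    """Return True if a PNG byte stream declares an alpha channel.

    Layout: 8-byte signature, then the IHDR chunk
    (4-byte length, b"IHDR", 4-byte width, 4-byte height,
    1-byte bit depth, 1-byte color type, ...), so the
    color-type byte sits at offset 25.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG stream")
    color_type = data[25]
    # 4 = grayscale + alpha, 6 = truecolor + alpha (RGBA)
    return color_type in (4, 6)
```

Run it against the exported file's bytes to verify the conversion worked.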
Method 1
- (void)converter:(NSString *)name {
    NSLog(@"NAME: %@", name);
    NSURL *url = [NSURL fileURLWithPath:name];
    NSLog(@"URL: %@", url);
    NSImage *srcImage = [[NSImage alloc] initWithContentsOfURL:url];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[srcImage TIFFRepresentation], NULL);
    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGRect rect = CGRectMake(0.f, 0.f, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
    // Redraw the image into a bitmap context that has no alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                       rect.size.width,
                                                       rect.size.height,
                                                       CGImageGetBitsPerComponent(imageRef),
                                                       CGImageGetBytesPerRow(imageRef),
                                                       CGImageGetColorSpace(imageRef),
                                                       kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Little);
    CGContextDrawImage(bitmapContext, rect, imageRef);
    CGImageRef decompressedImageRef = CGBitmapContextCreateImage(bitmapContext);
    NSImage *finalImage = [[NSImage alloc] initWithCGImage:decompressedImageRef size:NSZeroSize];
    // Re-encode as PNG and overwrite the original file.
    NSData *imageData = [finalImage TIFFRepresentation];
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
    NSDictionary *imageProps = @{NSImageCompressionFactor : @0.9};
    imageData = [imageRep representationUsingType:NSBitmapImageFileTypePNG properties:imageProps];
    [imageData writeToFile:name atomically:NO];
    // Release the Core Foundation objects we created.
    CGImageRelease(decompressedImageRef);
    CGImageRelease(imageRef);
    CGContextRelease(bitmapContext);
    CFRelease(source);
}
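One caveat worth knowing: a context created with `kCGImageAlphaNoneSkipLast` simply drops the alpha component, so semi-transparent pixels can change appearance depending on what they are composited against. The standard way to flatten alpha predictably is to composite each pixel over an opaque background first. A stdlib-only Python sketch of that per-pixel math, compositing over white (the function name `flatten_over_white` is mine, for illustration):

```python
def flatten_over_white(rgba: bytes) -> bytes:
    """Composite packed 8-bit RGBA pixels over an opaque white
    background and return packed RGB (no alpha) output.

    Uses the source-over formula out = src * a + bg * (1 - a),
    with alpha scaled to the 0..255 integer range.
    """
    if len(rgba) % 4:
        raise ValueError("buffer length must be a multiple of 4")
    out = bytearray()
    for i in range(0, len(rgba), 4):
        r, g, b, a = rgba[i:i + 4]
        # Integer blend: fully opaque (a=255) keeps the source color,
        # fully transparent (a=0) yields white (255).
        out += bytes((c * a + 255 * (255 - a)) // 255 for c in (r, g, b))
    return bytes(out)
```

For screenshots, which are normally fully opaque, dropping the alpha byte and compositing give identical results; the difference only shows up with partial transparency.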
Method 2
NSRect proposedRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef imageRef = [image CGImageForProposedRect:&proposedRect context:nil hints:nil];
NSInteger bitsPerSample = 8;
BOOL hasAlpha = NO;
NSInteger samplesPerPixel = hasAlpha ? 4 : 3; // RGB only, no alpha component
// Canvas: an alpha-free bitmap rep to draw into.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:CGImageGetWidth(imageRef)
                  pixelsHigh:CGImageGetHeight(imageRef)
               bitsPerSample:bitsPerSample
             samplesPerPixel:samplesPerPixel
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0    // 0 lets AppKit compute the row stride
                bitsPerPixel:0];  // 0 lets AppKit compute bits per pixel
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[image drawInRect:NSMakeRect(0, 0, image.size.width, image.size.height)
         fromRect:NSZeroRect
        operation:NSCompositingOperationCopy
         fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSBitmapImageFileTypePNG properties:@{}];
NSImage *resultImage = [[NSImage alloc] initWithData:data];
The two methods differ in mechanics but share the same idea: redraw the image into a bitmap that has no alpha channel. Adjust the parameters to your own needs. After exporting, open the image's info panel again and check the Alpha value.
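The `kCGImageAlphaNoneSkipLast` variant in Method 1 amounts to ignoring the alpha byte in each 4-byte pixel. That reduction is easy to sketch in stdlib-only Python (the function name `strip_alpha` is mine, not from the original article):

```python
def strip_alpha(rgba: bytes) -> bytes:
    """Drop the alpha byte from a packed 8-bit RGBA pixel buffer,
    returning packed RGB -- the same reduction an alpha-free bitmap
    performs when the alpha component is simply skipped."""
    if len(rgba) % 4:
        raise ValueError("buffer length must be a multiple of 4")
    out = bytearray()
    for i in range(0, len(rgba), 4):
        out += rgba[i:i + 3]  # keep R, G, B; skip A
    return bytes(out)
```

The output buffer is 3/4 the size of the input, which is also why the exported PNG reports no alpha channel: there is simply no fourth sample per pixel left to encode.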