Written mainly as an exercise in learning the iOS Vision framework.

The result

Original image (downloaded from the internet; if this infringes any rights, please let me know and I will change it)

The cut-out image

The edges of the cut-out are still fairly rough 😂

Implementation

1. The Vision framework's VNGenerateObjectnessBasedSaliencyImageRequest returns the salient regions of a photo. Note that the ObjectnessBased variant computes saliency from foreground objects (Vision also provides an attention-based variant, VNGenerateAttentionBasedSaliencyImageRequest).

The resulting saliency heat map

```objc
CIImage *ciOriginImg = [CIImage imageWithCGImage:originImage.CGImage]; // the original image
VNImageRequestHandler *imageHandler = [[VNImageRequestHandler alloc] initWithCIImage:ciOriginImg options:@{}];
// Objectness-based saliency detection request
VNGenerateObjectnessBasedSaliencyImageRequest *attensionRequest = [[VNGenerateObjectnessBasedSaliencyImageRequest alloc] init];
NSError *err = nil;
BOOL haveAttension = [imageHandler performRequests:@[attensionRequest] error:&err];
if (haveAttension && attensionRequest.results.count > 0) {
    VNSaliencyImageObservation *observation = [attensionRequest.results firstObject];
    // observation.pixelBuffer is the saliency heat map; run edge (contour) detection on it next
    [self heatMapProcess:observation.pixelBuffer catOrigin:ciOriginImg];
}
```
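As an aside, contours are not the only way to locate the subject: the saliency observation also exposes salientObjects, an array of normalized bounding boxes. The sketch below is not from the original post; it only shows that simpler option, continuing from the observation obtained above (VNImageRectForNormalizedRect converts a normalized rect into image coordinates).

```objc
// Sketch (not part of the original code): if a rough rectangle around the
// subject is enough, the saliency observation already provides bounding boxes.
// `observation` and `originImage` are the ones from the snippet above.
for (VNRectangleObservation *salient in observation.salientObjects) {
    CGRect imageRect = VNImageRectForNormalizedRect(salient.boundingBox,
                                                    CGImageGetWidth(originImage.CGImage),
                                                    CGImageGetHeight(originImage.CGImage));
    NSLog(@"salient object at %@", NSStringFromCGRect(imageRect));
}
```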

2. Once the saliency heat map has been obtained, run VNDetectContoursRequest on it to find the edges of the salient region. The resulting VNContoursObservation exposes a normalizedPath property, a CGPathRef whose points are normalized to [0, 1]; they eventually have to be scaled back into the image's own coordinates.

```objc
// Body of the -heatMapProcess:catOrigin: call from step 1:
// hotRef is the saliency heat-map pixel buffer, catOrigin the original CIImage.
- (void)heatMapProcess:(CVPixelBufferRef)hotRef catOrigin:(CIImage *)catOrigin {
    CIImage *heatImage = [CIImage imageWithCVPixelBuffer:hotRef];
    VNDetectContoursRequest *contourRequest = [[VNDetectContoursRequest alloc] init];
    contourRequest.revision = VNDetectContourRequestRevision1;
    contourRequest.contrastAdjustment = 1.0;
    contourRequest.detectsDarkOnLight = NO;
    contourRequest.maximumImageDimension = 512;
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCIImage:heatImage options:@{}];
    NSError *err = nil;
    BOOL result = [handler performRequests:@[contourRequest] error:&err];
    if (result) {
        VNContoursObservation *contoursObv = [contourRequest.results firstObject];
        CIContext *cxt = [[CIContext alloc] initWithOptions:nil];
        CGImageRef origin = [cxt createCGImage:catOrigin fromRect:catOrigin.extent];
        // Cut the subject out of the original image along the detected contours
        UIImage *clipImage = [self drawContourWith:contoursObv withCgImg:nil originImg:origin];
        CGImageRelease(origin);
    }
}
```
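Note that normalizedPath contains every contour found in the heat map. If the heat map has several blobs and only the main subject is wanted, the observation's topLevelContours array can be inspected instead. A minimal sketch, not from the original post; picking the contour with the most points is just a rough heuristic:

```objc
// Sketch (not part of the original code): keep only the largest top-level
// contour instead of the observation's full normalizedPath.
// `contoursObv` is the VNContoursObservation obtained above.
VNContour *mainContour = nil;
for (VNContour *contour in contoursObv.topLevelContours) {
    if (mainContour == nil || contour.pointCount > mainContour.pointCount) {
        mainContour = contour;
    }
}
CGPathRef mainPath = mainContour.normalizedPath; // still normalized to [0, 1]
```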

3. Finally, the salient region is cut out by building a CAShapeLayer from the contour path and setting it as the mask of the image view's layer. Note that Vision's normalized coordinates have their origin at the bottom-left, which is why the path is flipped vertically before being scaled up to the image size.

Cutting out the subject

```objc
- (UIImage *)drawContourWith:(VNContoursObservation *)contourObv withCgImg:(CGImageRef)img originImg:(CGImageRef)origin {
    CGSize size = CGSizeMake(CGImageGetWidth(origin), CGImageGetHeight(origin));
    UIImageView *originImgV = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    originImgV.image = [UIImage imageWithCGImage:origin];

    // Scale the normalized contour path up to the image size, flipping it vertically
    // because Vision's normalized coordinates have a bottom-left origin.
    CAShapeLayer *layer = [CAShapeLayer layer];
    CGAffineTransform flipMatrix = CGAffineTransformMake(1, 0, 0, -1, 0, size.height);
    CGAffineTransform scaleTransform = CGAffineTransformScale(flipMatrix, size.width, size.height);
    CGPathRef scaledPath = CGPathCreateCopyByTransformingPath(contourObv.normalizedPath, &scaleTransform);
    layer.path = scaledPath;
    CGPathRelease(scaledPath);

    // Mask the image view's layer with the contour path and snapshot it.
    [originImgV.layer setMask:layer];
    UIGraphicsBeginImageContext(originImgV.bounds.size);
    [originImgV.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Crop the snapshot down to the bounding box of the contour (scaled to the
    // original image size) so only the cut-out subject remains.
    CGAffineTransform originScale = CGAffineTransformMakeScale(size.width, size.height);
    CGPathRef originScalePath = CGPathCreateCopyByTransformingPath(contourObv.normalizedPath, &originScale);
    CGRect targetRect = CGPathGetBoundingBox(originScalePath);
    CGPathRelease(originScalePath);
    CIImage *getBoundImage = [[CIImage alloc] initWithImage:image];
    CIImage *targetBoundImg = [getBoundImage imageByCroppingToRect:targetRect];
    return [UIImage imageWithCIImage:targetBoundImg];
}
```
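One caveat about the return value: a UIImage created with imageWithCIImage: is not backed by a CGImage, so APIs that expect one (UIImagePNGRepresentation, for example) may not handle it. If the cut-out needs to be saved or processed further, it can first be rendered through a CIContext; a minimal sketch, with a helper name of my own choosing:

```objc
// Sketch (hypothetical helper, not from the original post): flatten a
// CIImage-backed result into a CGImage-backed UIImage before saving it.
- (UIImage *)renderedImageFromCIImage:(CIImage *)ciImage {
    CIContext *renderCtx = [[CIContext alloc] initWithOptions:nil];
    CGImageRef renderedRef = [renderCtx createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *rendered = [UIImage imageWithCGImage:renderedRef];
    CGImageRelease(renderedRef);
    return rendered;
}
```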