The cropping function appears in many image-upload scenarios. On one hand, the application server may restrict the size of uploaded images, so the image is expected to conform to those rules. On the other hand, the user may want to upload only part of an image to highlight its key information. To meet these diverse needs, the cropping feature should let users dynamically change the crop area's position and size.

Take a look at the renderings first

Source code repository

The basic flow of dynamic cropping can be roughly divided into the following steps:

  • Display the image with the crop area
  • Support moving and zooming the image
  • Support gestures to change the crop area
  • Crop the image and obtain the result

Display the image with the crop area

Display images

Before cropping the image, we first need to display the image to be cropped on the page, as shown below

This step is easy: just configure a UIImageView to hold the image. Note, however, that UIImageView supports multiple contentModes, three of which are the most common

  • UIViewContentModeScaleToFill
  • UIViewContentModeScaleAspectFit
  • UIViewContentModeScaleAspectFill

The three differences can be seen in the following comparison

  • UIViewContentModeScaleToFill

  • UIViewContentModeScaleAspectFit

  • UIViewContentModeScaleAspectFill

As you can see, UIViewContentModeScaleToFill stretches the image, changing its aspect ratio so that it covers the entire UIImageView. UIViewContentModeScaleAspectFill keeps the aspect ratio while covering the entire UIImageView, so part of the image content may fall outside the UIImageView's bounds. UIViewContentModeScaleAspectFit also keeps the aspect ratio, but ensures all of the image content is visible inside the UIImageView, even if the image does not fill it.

Different display modes therefore affect how the image finally appears on screen. For cropping, the ideal placement is one where the image's shorter side exactly covers the corresponding side of the crop area, while its longer side is at least as long as the other side of the crop area. This means we must consider the aspect ratios of both the image and the crop area when placing the image.

The crop area

Next we place the crop area, which will look like this

Implementing it is just a matter of putting a UIView on top of the UIImageView, and then drawing a grid layer with a white border on that view.

Start by customizing a CAShapeLayer

#import <QuartzCore/QuartzCore.h>

@interface YasicClipAreaLayer : CAShapeLayer

@property(assign, nonatomic) NSInteger cropAreaLeft;
@property(assign, nonatomic) NSInteger cropAreaTop;
@property(assign, nonatomic) NSInteger cropAreaRight;
@property(assign, nonatomic) NSInteger cropAreaBottom;

- (void)setCropAreaLeft:(NSInteger)cropAreaLeft CropAreaTop:(NSInteger)cropAreaTop CropAreaRight:(NSInteger)cropAreaRight CropAreaBottom:(NSInteger)cropAreaBottom;

@end

@implementation YasicClipAreaLayer

- (instancetype)init {
    self = [super init];
    if (self) {
        _cropAreaLeft = 50;
        _cropAreaTop = 50;
        _cropAreaRight = SCREEN_WIDTH - self.cropAreaLeft;
        _cropAreaBottom = 400;
    }
    return self;
}

- (void)drawInContext:(CGContextRef)ctx {
    UIGraphicsPushContext(ctx);

    // Left edge
    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextMoveToPoint(ctx, self.cropAreaLeft, self.cropAreaTop);
    CGContextAddLineToPoint(ctx, self.cropAreaLeft, self.cropAreaBottom);
    CGContextSetShadow(ctx, CGSizeMake(2, 0), 2.0);
    CGContextStrokePath(ctx);

    // Top edge
    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextMoveToPoint(ctx, self.cropAreaLeft, self.cropAreaTop);
    CGContextAddLineToPoint(ctx, self.cropAreaRight, self.cropAreaTop);
    CGContextSetShadow(ctx, CGSizeMake(0, 2), 2.0);
    CGContextStrokePath(ctx);

    // Right edge
    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextMoveToPoint(ctx, self.cropAreaRight, self.cropAreaTop);
    CGContextAddLineToPoint(ctx, self.cropAreaRight, self.cropAreaBottom);
    CGContextSetShadow(ctx, CGSizeMake(2, 0), 2.0);
    CGContextStrokePath(ctx);

    // Bottom edge
    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextMoveToPoint(ctx, self.cropAreaLeft, self.cropAreaBottom);
    CGContextAddLineToPoint(ctx, self.cropAreaRight, self.cropAreaBottom);
    CGContextSetShadow(ctx, CGSizeMake(0, 2), 2.0);
    CGContextStrokePath(ctx);

    UIGraphicsPopContext();
}

- (void)setCropAreaLeft:(NSInteger)cropAreaLeft {
    _cropAreaLeft = cropAreaLeft;
    [self setNeedsDisplay];
}

- (void)setCropAreaTop:(NSInteger)cropAreaTop {
    _cropAreaTop = cropAreaTop;
    [self setNeedsDisplay];
}

- (void)setCropAreaRight:(NSInteger)cropAreaRight {
    _cropAreaRight = cropAreaRight;
    [self setNeedsDisplay];
}

- (void)setCropAreaBottom:(NSInteger)cropAreaBottom {
    _cropAreaBottom = cropAreaBottom;
    [self setNeedsDisplay];
}

- (void)setCropAreaLeft:(NSInteger)cropAreaLeft CropAreaTop:(NSInteger)cropAreaTop CropAreaRight:(NSInteger)cropAreaRight CropAreaBottom:(NSInteger)cropAreaBottom {
    _cropAreaLeft = cropAreaLeft;
    _cropAreaRight = cropAreaRight;
    _cropAreaTop = cropAreaTop;
    _cropAreaBottom = cropAreaBottom;
    [self setNeedsDisplay];
}

@end

The layer has four attributes: cropAreaLeft, cropAreaRight, cropAreaTop and cropAreaBottom. As the names suggest, they define the coordinates of the white-bordered crop area drawn on this layer. The layer also exposes a method for setting all four properties at once.

Inside the CAShapeLayer, the key part is the drawInContext: method, which draws directly on the layer. Here it draws four lines according to the four attributes above, which together outline the boundary of the crop area.

Note that drawInContext: cannot be called manually. You must call setNeedsDisplay or setNeedsDisplayInRect: to have it invoked automatically.

In the cropping page, we place a cropView and add our custom CAShapeLayer to that view

self.cropView.layer.sublayers = nil;
YasicClipAreaLayer *layer = [[YasicClipAreaLayer alloc] init];
CGRect cropframe = CGRectMake(self.cropAreaX, self.cropAreaY, self.cropAreaWidth, self.cropAreaHeight);
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:self.cropView.frame cornerRadius:0];
UIBezierPath *cropPath = [UIBezierPath bezierPathWithRect:cropframe];
[path appendPath:cropPath];
layer.path = path.CGPath;
layer.fillRule = kCAFillRuleEvenOdd;
layer.fillColor = [[UIColor blackColor] CGColor];
layer.opacity = 0.5;
layer.frame = self.cropView.bounds;
[layer setCropAreaLeft:self.cropAreaX CropAreaTop:self.cropAreaY CropAreaRight:self.cropAreaX + self.cropAreaWidth CropAreaBottom:self.cropAreaY + self.cropAreaHeight];
[self.cropView.layer addSublayer:layer];
[self.view bringSubviewToFront:self.cropView];

The main purpose here is to create a hollow mask effect with the custom CAShapeLayer, so that the central crop area stays highlighted while the surrounding non-crop area is dimmed, as shown in the following diagram

So we first set the CAShapeLayer's size to that of the cropView and generate a corresponding UIBezierPath, then generate the inner path of the hollow mask according to the size of the crop area (determined by self.cropAreaX, self.cropAreaY, self.cropAreaWidth and self.cropAreaHeight).

CGRect cropframe = CGRectMake(self.cropAreaX, self.cropAreaY, self.cropAreaWidth, self.cropAreaHeight);
    UIBezierPath * path = [UIBezierPath bezierPathWithRoundedRect:self.cropView.frame cornerRadius:0];
    UIBezierPath * cropPath = [UIBezierPath bezierPathWithRect:cropframe];
    [path appendPath:cropPath];
    layer.path = path.CGPath;

This path is then assigned to the CAShapeLayer, and the layer's fillRule is set to kCAFillRuleEvenOdd

layer.fillRule = kCAFillRuleEvenOdd;
layer.fillColor = [[UIColor blackColor] CGColor];
layer.opacity = 0.5;
layer.frame = self.cropView.bounds;

The fillRule property determines which algorithm decides whether a given region of the canvas belongs to the "interior" of the path; interior regions are filled with color. There are two rules

  • kCAFillRuleNonZero. Cast a ray from the point in any direction and count the path segments it crosses, adding one for segments crossing in one direction and subtracting one for the other; if the total is zero the point is outside the path, otherwise it is inside and gets filled

  • kCAFillRuleEvenOdd. Cast a ray from the point in any direction and count its intersections with the path; if the count is odd the point is inside and gets filled, if even it is outside

To achieve the hollow layer effect, set the layer color of CAShapeLayer to black with transparency 0.5.

Finally, set the layer's four properties to draw the white border of the crop area.

    [layer setCropAreaLeft:self.cropAreaX CropAreaTop:self.cropAreaY CropAreaRight:self.cropAreaX + self.cropAreaWidth CropAreaBottom:self.cropAreaY + self.cropAreaHeight];
    [self.cropView.layer addSublayer:layer];
    [self.view bringSubviewToFront:self.cropView];

Placing the image

At this point we have the image and the crop area correctly displayed, but we have not established any relationship between the two, so the following situation may occur

As you can see, since the image is much wider than it is tall, a black area appears inside the crop area. This is a bad experience for the user and will also break the subsequent cropping steps. The cause is that we did not place the UIImageView according to the width and height of the crop area. Ideally, we want an effect similar to UIViewContentModeScaleAspectFill within the crop area: the image keeps its aspect ratio, completely covers the crop area, and is allowed to extend partly outside it. This requires

  • When the image's width-to-crop-width ratio is greater than its height-to-crop-height ratio, stretch the image height to the crop-area height and enlarge the width proportionally
  • When the image's height-to-crop-height ratio is greater than or equal to its width-to-crop-width ratio, stretch the image width to the crop-area width and enlarge the height proportionally

Here we use Masonry (Auto Layout) to perform these layout operations

    CGFloat tempWidth = 0.0;
    CGFloat tempHeight = 0.0;
    
    if (self.targetImage.size.width/self.cropAreaWidth <= self.targetImage.size.height/self.cropAreaHeight) {
        tempWidth = self.cropAreaWidth;
        tempHeight = (tempWidth/self.targetImage.size.width) * self.targetImage.size.height;
    } else if (self.targetImage.size.width/self.cropAreaWidth > self.targetImage.size.height/self.cropAreaHeight) {
        tempHeight = self.cropAreaHeight;
        tempWidth = (tempHeight/self.targetImage.size.height) * self.targetImage.size.width;
    }
    
    [self.bigImageView mas_updateConstraints:^(MASConstraintMaker *make) {
        make.left.mas_equalTo(self.cropAreaX - (tempWidth - self.cropAreaWidth)/2);
        make.top.mas_equalTo(self.cropAreaY - (tempHeight - self.cropAreaHeight)/2);
        make.width.mas_equalTo(tempWidth);
        make.height.mas_equalTo(tempHeight);
    }];

As you can see, we use a two-branch comparison to compute the proper width and height, then lay out the image so that its center coincides with the center of the crop area. Finally we get the following effect

Support moving and zooming the image

As described above, because the image is placed over the crop area in a way similar to UIViewContentModeScaleAspectFill, part of it may overflow the crop area. So we want the image to support dynamic moving and zooming, letting the user flexibly choose which part of the image to crop.

In terms of implementation, we add gestures to the cropView and indirectly manipulate the size and position of the image; this will also help us dynamically change the crop area later.

The zoom function

Zooming is implemented by changing the frame of the UIImageView holding the image, so first we record the frame of the original UIImageView

self.originalFrame = CGRectMake(self.cropAreaX - (tempWidth - self.cropAreaWidth)/2, self.cropAreaY - (tempHeight - self.cropAreaHeight)/2, tempWidth, tempHeight);

Then add a pinch gesture

UIPinchGestureRecognizer *pinGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleCenterPinGesture:)];
[self.view addGestureRecognizer:pinGesture];

Then comes the gesture handler

-(void)handleCenterPinGesture:(UIPinchGestureRecognizer *)pinGesture {
    CGFloat scaleRation = 3;
    UIView *view = self.bigImageView;
    // Scale while the gesture begins or changes
    if (pinGesture.state == UIGestureRecognizerStateBegan || pinGesture.state == UIGestureRecognizerStateChanged) {
        // Move the scaling center to the fingers
        CGPoint pinchCenter = [pinGesture locationInView:view.superview];
        CGFloat distanceX = view.frame.origin.x - pinchCenter.x;
        CGFloat distanceY = view.frame.origin.y - pinchCenter.y;
        CGFloat scaledDistanceX = distanceX * pinGesture.scale;
        CGFloat scaledDistanceY = distanceY * pinGesture.scale;
        CGRect newFrame = CGRectMake(view.frame.origin.x + scaledDistanceX - distanceX,
                                     view.frame.origin.y + scaledDistanceY - distanceY,
                                     view.frame.size.width * pinGesture.scale,
                                     view.frame.size.height * pinGesture.scale);
        view.frame = newFrame;
        pinGesture.scale = 1;
    }
    // When the zoom ends
    if (pinGesture.state == UIGestureRecognizerStateEnded) {
        CGFloat ration = view.frame.size.width / self.originalFrame.size.width;
        // Zoomed in too much
        if (ration > 5) {
            CGRect newFrame = CGRectMake(0, 0, self.originalFrame.size.width * scaleRation, self.originalFrame.size.height * scaleRation);
            view.frame = newFrame;
        }
        // Zoomed out too much
        if (ration < 0.25) {
            view.frame = self.originalFrame;
        }
        CGRect resetPosition = CGRectMake(view.frame.origin.x, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
        if (resetPosition.origin.x >= self.cropAreaX) {
            resetPosition.origin.x = self.cropAreaX;
        }
        if (resetPosition.origin.y >= self.cropAreaY) {
            resetPosition.origin.y = self.cropAreaY;
        }
        if (resetPosition.size.width + resetPosition.origin.x < self.cropAreaX + self.cropAreaWidth) {
            CGFloat movedLeftX = fabs(resetPosition.size.width + resetPosition.origin.x - (self.cropAreaX + self.cropAreaWidth));
            resetPosition.origin.x += movedLeftX;
        }
        if (resetPosition.size.height + resetPosition.origin.y < self.cropAreaY + self.cropAreaHeight) {
            CGFloat moveUpY = fabs(resetPosition.size.height + resetPosition.origin.y - (self.cropAreaY + self.cropAreaHeight));
            resetPosition.origin.y += moveUpY;
        }
        view.frame = resetPosition;
        // Correct the image scale to prevent it from being too small to cover the crop area
        if (self.cropAreaX < self.bigImageView.frame.origin.x ||
            ((self.cropAreaX + self.cropAreaWidth) > self.bigImageView.frame.origin.x + self.bigImageView.frame.size.width) ||
            self.cropAreaY < self.bigImageView.frame.origin.y ||
            ((self.cropAreaY + self.cropAreaHeight) > self.bigImageView.frame.origin.y + self.bigImageView.frame.size.height)) {
            view.frame = self.originalFrame;
        }
    }
}

In the gesture handler, note that in order to zoom around the user's fingers, we need to move the scaling center to the pinch center during the gesture. We check pinGesture.state to distinguish the began, changed and ended stages of the gesture.

if (pinGesture.state == UIGestureRecognizerStateBegan || pinGesture.state == UIGestureRecognizerStateChanged) {
    // Move the scaling center to the fingers
    CGPoint pinchCenter = [pinGesture locationInView:view.superview];
    CGFloat distanceX = view.frame.origin.x - pinchCenter.x;
    CGFloat distanceY = view.frame.origin.y - pinchCenter.y;
    CGFloat scaledDistanceX = distanceX * pinGesture.scale;
    CGFloat scaledDistanceY = distanceY * pinGesture.scale;
    CGRect newFrame = CGRectMake(view.frame.origin.x + scaledDistanceX - distanceX,
                                 view.frame.origin.y + scaledDistanceY - distanceY,
                                 view.frame.size.width * pinGesture.scale,
                                 view.frame.size.height * pinGesture.scale);
    view.frame = newFrame;
    pinGesture.scale = 1;
}

pinchCenter is the center of the pinch gesture. We take the current image view's frame, compute the x and y differences between the view's origin and the gesture center, and build a new frame according to the gesture's scale value

    CGRect newFrame = CGRectMake(view.frame.origin.x + scaledDistanceX - distanceX,
                                 view.frame.origin.y + scaledDistanceY - distanceY,
                                 view.frame.size.width * pinGesture.scale,
                                 view.frame.size.height * pinGesture.scale);

The new frame is scaled about the pinch center, so the point under the user's fingers stays put. Assigning the new frame to the image view produces zooming centered on the gesture; note that pinGesture.scale is reset to 1 after each change so the scale is applied incrementally.

At the end of the gesture, we need to guard the edges of the zoomed image so that it ends up neither too large nor too small.

CGFloat ration = view.frame.size.width / self.originalFrame.size.width;
// Zoomed in too much
if (ration > 5) {
    CGRect newFrame = CGRectMake(0, 0, self.originalFrame.size.width * scaleRation, self.originalFrame.size.height * scaleRation);
    view.frame = newFrame;
}
// Zoomed out too much
if (ration < 0.25) {
    view.frame = self.originalFrame;
}

At the same time, if zooming has left a blank gap between the image and the crop area, the image's position must be corrected so that the image always fully covers the crop area.

CGRect resetPosition = CGRectMake(view.frame.origin.x, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
if (resetPosition.origin.x >= self.cropAreaX) {
    resetPosition.origin.x = self.cropAreaX;
}
if (resetPosition.origin.y >= self.cropAreaY) {
    resetPosition.origin.y = self.cropAreaY;
}
if (resetPosition.size.width + resetPosition.origin.x < self.cropAreaX + self.cropAreaWidth) {
    CGFloat movedLeftX = fabs(resetPosition.size.width + resetPosition.origin.x - (self.cropAreaX + self.cropAreaWidth));
    resetPosition.origin.x += movedLeftX;
}
if (resetPosition.size.height + resetPosition.origin.y < self.cropAreaY + self.cropAreaHeight) {
    CGFloat moveUpY = fabs(resetPosition.size.height + resetPosition.origin.y - (self.cropAreaY + self.cropAreaHeight));
    resetPosition.origin.y += moveUpY;
}
view.frame = resetPosition;

Here we compare the current image's frame with the crop area's boundaries as follows

  • If the image's left edge is to the right of the crop area's left edge, move the image left so its x equals the crop area's x
  • If the image's top edge is below the crop area's top edge, move the image up so its y equals the crop area's y
  • If the image's right edge is to the left of the crop area's right edge, snap the image's right edge to the crop area's right edge
  • If the image's bottom edge is above the crop area's bottom edge, snap the image's bottom edge to the crop area's bottom edge

After doing this, the image may still be too small to cover the crop area, as shown below

Therefore the image size also needs a final correction

// Correct the image scale to prevent it from being too small to cover the crop area
if (self.cropAreaX < self.bigImageView.frame.origin.x ||
    ((self.cropAreaX + self.cropAreaWidth) > self.bigImageView.frame.origin.x + self.bigImageView.frame.size.width) ||
    self.cropAreaY < self.bigImageView.frame.origin.y ||
    ((self.cropAreaY + self.cropAreaHeight) > self.bigImageView.frame.origin.y + self.bigImageView.frame.size.height)) {
    view.frame = self.originalFrame;
}

This enables scaling.

The move function

Compared to zooming, the move function is easy to implement: add a UIPanGestureRecognizer to the cropView, read the translation in the callback, and move the UIImageView's center accordingly.

    CGPoint translation = [panGesture translationInView:view.superview];
    [view setCenter:CGPointMake(view.center.x + translation.x, view.center.y + translation.y)];
    [panGesture setTranslation:CGPointZero inView:view.superview];

However, to ensure that moving the image never leaves blank space inside the crop area or pushes the image out of it entirely, the image's position must be corrected once the gesture ends

CGRect currentFrame = view.frame;
if (currentFrame.origin.x >= self.cropAreaX) {
    currentFrame.origin.x = self.cropAreaX;
}
if (currentFrame.origin.y >= self.cropAreaY) {
    currentFrame.origin.y = self.cropAreaY;
}
if (currentFrame.size.width + currentFrame.origin.x < self.cropAreaX + self.cropAreaWidth) {
    CGFloat movedLeftX = fabs(currentFrame.size.width + currentFrame.origin.x - (self.cropAreaX + self.cropAreaWidth));
    currentFrame.origin.x += movedLeftX;
}
if (currentFrame.size.height + currentFrame.origin.y < self.cropAreaY + self.cropAreaHeight) {
    CGFloat moveUpY = fabs(currentFrame.size.height + currentFrame.origin.y - (self.cropAreaY + self.cropAreaHeight));
    currentFrame.origin.y += moveUpY;
}
[UIView animateWithDuration:0.3 animations:^{
    [view setFrame:currentFrame];
}];

As you can see, the position check is the same as for zooming, except that no size correction is needed because moving does not change the image size.

Support gestures to change the crop area

Next comes the core of dynamic cropping. The principle is simple: in the pan-gesture handler above, add some judgments to decide whether to move the image or to resize the crop area, i.e. the frame drawn by our custom CAShapeLayer.

First we define an enumeration indicating whether the current gesture operates on the image or on one of the crop area's edges

typedef NS_ENUM(NSInteger, ACTIVEGESTUREVIEW) {
    CROPVIEWLEFT,
    CROPVIEWRIGHT,
    CROPVIEWTOP,
    CROPVIEWBOTTOM,
    BIGIMAGEVIEW
};

These indicate that the object of the gesture is the crop area's left, right, top or bottom edge, or the UIImageView itself.

Then we define a property of this type

@property(assign, nonatomic) ACTIVEGESTUREVIEW activeGestureView;

The criterion for picking the target is whether the gesture starts on or near an edge, so we need the coordinates where the gesture is triggered. To get them, we subclass UIPanGestureRecognizer and override a couple of methods

@interface YasicPanGestureRecognizer : UIPanGestureRecognizer

@property(assign, nonatomic) CGPoint beginPoint;
@property(assign, nonatomic) CGPoint movePoint;

-(instancetype)initWithTarget:(id)target action:(SEL)action inview:(UIView*)view;

@end

@interface YasicPanGestureRecognizer()

@property(strong, nonatomic) UIView *targetView;

@end

@implementation YasicPanGestureRecognizer

-(instancetype)initWithTarget:(id)target action:(SEL)action inview:(UIView*)view{
    
    self = [super initWithTarget:target action:action];
    if(self) {
        self.targetView = view;
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent*)event{
    
    [super touchesBegan:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    self.beginPoint = [touch locationInView:self.targetView];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesMoved:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    self.movePoint = [touch locationInView:self.targetView];
}

@end

As you can see, we first pass in a view used to convert the gesture's position into coordinates in that view. In touchesBegan:withEvent: we record the gesture's starting position as beginPoint, and in touchesMoved:withEvent: we continuously update movePoint with the latest position.

After customizing the UIPanGestureRecognizer, we add it to the cropView, passing the cropView as the coordinate view

// Drag gesture
YasicPanGestureRecognizer *panGesture = [[YasicPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleDynamicPanGesture:) inview:self.cropView];
[self.cropView addGestureRecognizer:panGesture];

Here we can divide the whole process into three stages: began -> changed -> ended.

  • At the beginning of the gesture

Here, based on the gesture's beginPoint, we determine whether the object being operated on is one of the crop area's edges or the UIImageView

// When sliding begins, decide whether the target is the image view or a line of the layer
if (panGesture.state == UIGestureRecognizerStateBegan) {
    if (beginPoint.x >= self.cropAreaX - judgeWidth && beginPoint.x <= self.cropAreaX + judgeWidth
        && beginPoint.y >= self.cropAreaY && beginPoint.y <= self.cropAreaY + self.cropAreaHeight
        && self.cropAreaWidth >= 50) {
        self.activeGestureView = CROPVIEWLEFT;
    } else if (beginPoint.x >= self.cropAreaX + self.cropAreaWidth - judgeWidth && beginPoint.x <= self.cropAreaX + self.cropAreaWidth + judgeWidth
               && beginPoint.y >= self.cropAreaY && beginPoint.y <= self.cropAreaY + self.cropAreaHeight
               && self.cropAreaWidth >= 50) {
        self.activeGestureView = CROPVIEWRIGHT;
    } else if (beginPoint.y >= self.cropAreaY - judgeWidth && beginPoint.y <= self.cropAreaY + judgeWidth
               && beginPoint.x >= self.cropAreaX && beginPoint.x <= self.cropAreaX + self.cropAreaWidth
               && self.cropAreaHeight >= 50) {
        self.activeGestureView = CROPVIEWTOP;
    } else if (beginPoint.y >= self.cropAreaY + self.cropAreaHeight - judgeWidth && beginPoint.y <= self.cropAreaY + self.cropAreaHeight + judgeWidth
               && beginPoint.x >= self.cropAreaX && beginPoint.x <= self.cropAreaX + self.cropAreaWidth
               && self.cropAreaHeight >= 50) {
        self.activeGestureView = CROPVIEWBOTTOM;
    } else {
        self.activeGestureView = BIGIMAGEVIEW;
        [view setCenter:CGPointMake(view.center.x + translation.x, view.center.y + translation.y)];
        [panGesture setTranslation:CGPointZero inView:view.superview];
    }
}
  • While the gesture changes

Here, if the target is an edge, we compute the distance and direction the edge should move, limiting it at boundary conditions to prevent edges from crossing or overlapping each other. Concretely, we take the coordinate difference and update the corresponding values such as cropAreaX and cropAreaWidth, then refresh the hollow mask on the CAShapeLayer.

If the target is the UIImageView, we simply move its position.

// Adjust positions while sliding
if (panGesture.state == UIGestureRecognizerStateChanged) {
    CGFloat diff = 0;
    switch (self.activeGestureView) {
        case CROPVIEWLEFT: {
            diff = movePoint.x - self.cropAreaX;
            if (diff >= 0 && self.cropAreaWidth > 50) {
                self.cropAreaWidth -= diff;
                self.cropAreaX += diff;
            } else if (diff < 0 && self.cropAreaX > self.bigImageView.frame.origin.x && self.cropAreaX >= 15) {
                self.cropAreaWidth -= diff;
                self.cropAreaX += diff;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWRIGHT: {
            diff = movePoint.x - self.cropAreaX - self.cropAreaWidth;
            if (diff >= 0 && (self.cropAreaX + self.cropAreaWidth) < MIN(self.bigImageView.frame.origin.x + self.bigImageView.frame.size.width, self.cropView.frame.origin.x + self.cropView.frame.size.width - 15)) {
                self.cropAreaWidth += diff;
            } else if (diff < 0 && self.cropAreaWidth >= 50) {
                self.cropAreaWidth += diff;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWTOP: {
            diff = movePoint.y - self.cropAreaY;
            if (diff >= 0 && self.cropAreaHeight > 50) {
                self.cropAreaHeight -= diff;
                self.cropAreaY += diff;
            } else if (diff < 0 && self.cropAreaY > self.bigImageView.frame.origin.y && self.cropAreaY >= 15) {
                self.cropAreaHeight -= diff;
                self.cropAreaY += diff;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWBOTTOM: {
            diff = movePoint.y - self.cropAreaY - self.cropAreaHeight;
            if (diff >= 0 && (self.cropAreaY + self.cropAreaHeight) < MIN(self.bigImageView.frame.origin.y + self.bigImageView.frame.size.height, self.cropView.frame.origin.y + self.cropView.frame.size.height - 15)) {
                self.cropAreaHeight += diff;
            } else if (diff < 0 && self.cropAreaHeight >= 50) {
                self.cropAreaHeight += diff;
            }
            [self setUpCropLayer];
            break;
        }
        case BIGIMAGEVIEW: {
            [view setCenter:CGPointMake(view.center.x + translation.x, view.center.y + translation.y)];
            [panGesture setTranslation:CGPointZero inView:view.superview];
            break;
        }
        default:
            break;
    }
}
  • End of gesture

At the end of the gesture, we correct positions. For a crop-area edge, we check whether the left/right or top/bottom edges have come too close together, and whether the edge has gone outside the UIImageView. For example, if the crop width has fallen below the minimum, reset it to the minimum; if the left edge has crossed the UIImageView's left edge, snap it back to that edge and update the crop width; and so on for the other edges. Then we refresh the hollow mask on the CAShapeLayer.

If the target is the UIImageView, then just as in the previous section we make sure no blank space is left between the image and the crop area.

// Correct positions after sliding ends
if (panGesture.state == UIGestureRecognizerStateEnded) {
    switch (self.activeGestureView) {
        case CROPVIEWLEFT: {
            if (self.cropAreaWidth < 50) {
                self.cropAreaX -= 50 - self.cropAreaWidth;
                self.cropAreaWidth = 50;
            }
            if (self.cropAreaX < MAX(self.bigImageView.frame.origin.x, 15)) {
                CGFloat temp = self.cropAreaX + self.cropAreaWidth;
                self.cropAreaX = MAX(self.bigImageView.frame.origin.x, 15);
                self.cropAreaWidth = temp - self.cropAreaX;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWRIGHT: {
            if (self.cropAreaWidth < 50) {
                self.cropAreaWidth = 50;
            }
            if (self.cropAreaX + self.cropAreaWidth > MIN(self.bigImageView.frame.origin.x + self.bigImageView.frame.size.width, self.cropView.frame.origin.x + self.cropView.frame.size.width - 15)) {
                self.cropAreaWidth = MIN(self.bigImageView.frame.origin.x + self.bigImageView.frame.size.width, self.cropView.frame.origin.x + self.cropView.frame.size.width - 15) - self.cropAreaX;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWTOP: {
            if (self.cropAreaHeight < 50) {
                self.cropAreaY -= 50 - self.cropAreaHeight;
                self.cropAreaHeight = 50;
            }
            if (self.cropAreaY < MAX(self.bigImageView.frame.origin.y, 15)) {
                CGFloat temp = self.cropAreaY + self.cropAreaHeight;
                self.cropAreaY = MAX(self.bigImageView.frame.origin.y, 15);
                self.cropAreaHeight = temp - self.cropAreaY;
            }
            [self setUpCropLayer];
            break;
        }
        case CROPVIEWBOTTOM: {
            if (self.cropAreaHeight < 50) {
                self.cropAreaHeight = 50;
            }
            if (self.cropAreaY + self.cropAreaHeight > MIN(self.bigImageView.frame.origin.y + self.bigImageView.frame.size.height, self.cropView.frame.origin.y + self.cropView.frame.size.height - 15)) {
                self.cropAreaHeight = MIN(self.bigImageView.frame.origin.y + self.bigImageView.frame.size.height, self.cropView.frame.origin.y + self.cropView.frame.size.height - 15) - self.cropAreaY;
            }
            [self setUpCropLayer];
            break;
        }
        case BIGIMAGEVIEW: {
            CGRect currentFrame = view.frame;
            if (currentFrame.origin.x >= self.cropAreaX) {
                currentFrame.origin.x = self.cropAreaX;
            }
            if (currentFrame.origin.y >= self.cropAreaY) {
                currentFrame.origin.y = self.cropAreaY;
            }
            if (currentFrame.size.width + currentFrame.origin.x < self.cropAreaX + self.cropAreaWidth) {
                CGFloat movedLeftX = fabs(currentFrame.size.width + currentFrame.origin.x - (self.cropAreaX + self.cropAreaWidth));
                currentFrame.origin.x += movedLeftX;
            }
            if (currentFrame.size.height + currentFrame.origin.y < self.cropAreaY + self.cropAreaHeight) {
                CGFloat moveUpY = fabs(currentFrame.size.height + currentFrame.origin.y - (self.cropAreaY + self.cropAreaHeight));
                currentFrame.origin.y += moveUpY;
            }
            [UIView animateWithDuration:0.3 animations:^{
                [view setFrame:currentFrame];
            }];
            break;
        }
        default:
            break;
    }
}

Crop the image and obtain the result

The last step is the actual crop. First determine the image's display scale, imageScale

    CGFloat imageScale = MIN(self.bigImageView.frame.size.width/self.targetImage.size.width, self.bigImageView.frame.size.height/self.targetImage.size.height);

Then map the crop area from the cropView onto the UIImageView and divide by the scale to get the corresponding crop region of the underlying UIImage

    CGFloat cropX = (self.cropAreaX - self.bigImageView.frame.origin.x)/imageScale;
    CGFloat cropY = (self.cropAreaY - self.bigImageView.frame.origin.y)/imageScale;
    CGFloat cropWidth = self.cropAreaWidth/imageScale;
    CGFloat cropHeight = self.cropAreaHeight/imageScale;
    CGRect cropRect = CGRectMake(cropX, cropY, cropWidth, cropHeight);

Finally, Core Graphics is used to extract the data of the corresponding region and build a new image, which is the cropped result we need.

    CGImageRef sourceImageRef = [self.targetImage CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, cropRect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef); // CGImageCreateWithImageInRect follows the Create rule, so release the CGImage to avoid a leak

GitHub code address