preface
Following the earlier articles on implementing a Flutter slide-out sidebar and a city-selection UI, today we continue with Flutter by implementing a picture-in-picture (PIP) style photo effect. Let's take a look at the PIP implementation.
For more effects, see the PIP demo code: FlutterPIP.
Why does this article exist?
One day while browsing my Moments feed, I saw a photo posted by a friend (not my girlfriend, of course, though a girl) with an effect similar to the one above. It looked quite impressive. How was it done? Could I build it myself? So I started preparing an Android implementation.
But I had recently been learning Flutter, and found that Flutter's CustomPainter exposes essentially the same API surface as Android: Canvas, Paint, Path, and so on. Looking at the drawing portion of Canvas, the drawImage code is as follows:
/// Draws the given [Image] into the canvas with its top-left corner at the
/// given [Offset]. The image is composited into the canvas using the given [Paint].
void drawImage(Image image, Offset p, Paint paint) {
  assert(image != null); // image is checked on the engine side
  assert(_offsetIsValid(p));
  assert(paint != null);
  _drawImage(image, p.dx, p.dy, paint._objects, paint._data);
}

void _drawImage(Image image,
                double x,
                double y,
                List<dynamic> paintObjects,
                ByteData paintData) native 'Canvas_drawImage';
We can see that drawImage calls the internal _drawImage, which is bound to the native Flutter engine implementation 'Canvas_drawImage'. In other words, Canvas drawing in Flutter can be as efficient as native drawing on the platform. For more on Flutter's efficiency, see the article on Flutter performance principles.
Implementation steps
From bottom to top, the image is divided into three layers: the bottom layer is the Gaussian-blurred image, the middle layer is the cropped portion of the original image, and the top layer is an effect mask.
Implementation of Flutter Gaussian blur effect
The official documentation for the BackdropFilter widget that Flutter provides says:
A widget that applies a filter to the existing painted content and then paints child.
The filter will be applied to all the area within its parent or ancestor widget’s clip. If there’s no clip, the filter will be applied to the full screen.
In a nutshell, it applies a filter to all content already painted below it, then paints its child on top, as shown in the official demo below:
Stack(
  fit: StackFit.expand,
  children: <Widget>[
    Text('0' * 10000),
    Center(
      child: ClipRect( // <-- clips to the 200x200 [Container] below
        child: BackdropFilter(
          filter: ui.ImageFilter.blur(
            sigmaX: 5.0,
            sigmaY: 5.0,
          ),
          child: Container(
            alignment: Alignment.center,
            width: 200.0,
            height: 200.0,
            child: Text('Hello World'),
          ),
        ),
      ),
    ),
  ],
)
The effect is to blur the 200×200 area in the middle. In this article, the Gaussian blur on the bottom image is implemented as follows:
Stack(
  fit: StackFit.expand,
  children: <Widget>[
    Container(
      alignment: Alignment.topLeft,
      child: CustomPaint(
        painter: DrawPainter(widget._originImage),
        size: Size(_width, _width),
      ),
    ),
    Center(
      child: ClipRect(
        child: BackdropFilter(
          filter: flutterUi.ImageFilter.blur(
            sigmaX: 5.0,
            sigmaY: 5.0,
          ),
          child: Container(
            alignment: Alignment.topLeft,
            color: Colors.white.withOpacity(0.1),
            width: _width,
            height: _width,
            // child: Text(' '),
          ),
        ),
      ),
    ),
  ],
)
The size of the Container matches the size of the image, and the Container must have either a child widget or a background color; which child or color you use does not matter. The result is shown in the figure below.
Flutter picture cropping
Picture cropping principle
When drawing with Canvas on Android, we can use PorterDuffXfermode to blend the pixels of the graphic being drawn with the pixels at the corresponding positions already in the Canvas according to certain rules, producing new pixel values and updating the final colors in the Canvas. This enables many interesting effects.
Flutter offers the same capability: the equivalent effect is achieved by setting the blendMode property on the Paint. Examples of the various blend modes can be found in the official Flutter documentation.
The blend mode used here is BlendMode.dstIn, documented as follows:
/// Show the destination image, but only where the two images overlap. The
/// source image is not rendered, it is treated merely as a mask. The color
/// channels of the source are ignored, only the opacity has an effect.
///
/// To show the source image instead, consider [srcIn].
///
/// To reverse the semantic of the mask (only showing the source where the
/// destination is present, rather than where it is absent), consider [dstOut].
///
/// This corresponds to the "Destination in Source" Porter-Duff operator.
In other words, the destination image is drawn only where it overlaps the source image, and the rendered result is affected by the opacity of the source at each position: a fully transparent source pixel removes the destination pixel, while a fully opaque one keeps it. In Android this is expressed by the formulas
\(\alpha_{out} = \alpha_{src} * \alpha_{dst}\)
\(C_{out} = \alpha_{src} * C_{dst}\)
The actual cropping
We will blend a frame image (frame.png) with the original image. The frame image is shown below.
frame.png
The implementation code
// Use the frameImage and the original image to draw the clipped figure
static Future<flutterUi.Image> drawFrameImage(
String originImageUrl, String frameImageUrl) {
Completer<flutterUi.Image> completer = new Completer<flutterUi.Image>();
// Load the image
Future.wait([
OriginImage.getInstance().loadImage(originImageUrl),
ImageLoader.load(frameImageUrl)
]).then((result) {
Paint paint = new Paint();
PictureRecorder recorder = PictureRecorder();
Canvas canvas = Canvas(recorder);
int width = result[1].width;
int height = result[1].height;
// Scale the image to the frame size and move it to the center
double originWidth = 0.0;
double originHeight = 0.0;
if (width > height) {
double scale = height / width.toDouble();
originWidth = result[0].width.toDouble();
originHeight = result[0].height.toDouble() * scale;
} else {
double scale = width / height.toDouble();
originWidth = result[0].width.toDouble() * scale;
originHeight = result[0].height.toDouble();
}
canvas.drawImageRect(
result[0],
Rect.fromLTWH(
(result[0].width - originWidth) / 2.0,
(result[0].height - originHeight) / 2.0,
originWidth,
originHeight),
Rect.fromLTWH(0.0, 0.0, width.toDouble(), height.toDouble()),
paint);
// Crop the image
paint.blendMode = BlendMode.dstIn;
canvas.drawImage(result[1], Offset(0.0, 0.0), paint);
recorder.endRecording().toImage(width, height).then((image) {
completer.complete(image);
});
}).catchError((e) {
print("Loading error." + e);
});
return completer.future;
}
There are three main steps:
- First, load the original image and the frame image, using Future.wait to wait until both have loaded (a sketch of such an image loader follows at the end of this section)
- Second, scale and translate the original image so that a centered region matching the frame is drawn into the frame-sized destination rect
- Third, set the paint's blend mode, draw the frame image, and the cropping is done
The cropped image is shown below
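The helpers OriginImage and ImageLoader used above are not shown in the article. A minimal sketch of what such a loader could look like, assuming the frame and mask images are bundled as Flutter assets (the class and method names simply mirror the calls above):

import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as flutterUi;

import 'package:flutter/services.dart' show rootBundle;

class ImageLoader {
  /// Decodes an asset image into a dart:ui Image so it can be drawn on a Canvas.
  static Future<flutterUi.Image> load(String assetPath) async {
    final ByteData data = await rootBundle.load(assetPath);
    final flutterUi.Codec codec =
        await flutterUi.instantiateImageCodec(data.buffer.asUint8List());
    final flutterUi.FrameInfo frame = await codec.getNextFrame();
    return frame.image;
  }
}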
Compositing and saving the image in Flutter
Composite the cropped image with the effect image (mask.png).
Let's take a look at the mask image first. The compositing code is as follows:
/// merge the mask image with the clipped image
static Future<flutterUi.Image> drawMaskImage(String originImageUrl,
String frameImageUrl, String maskImage, Offset offset) {
Completer<flutterUi.Image> completer = new Completer<flutterUi.Image>();
Future.wait([
ImageLoader.load(maskImage),
// Get the cropped image
drawFrameImage(originImageUrl, frameImageUrl)
]).then((result) {
Paint paint = new Paint();
PictureRecorder recorder = PictureRecorder();
Canvas canvas = Canvas(recorder);
int width = result[0].width;
int height = result[0].height;
// Composite the cropped image with the mask
canvas.drawImage(result[1], offset, paint);
canvas.drawImageRect(
result[0],
Rect.fromLTWH(
0.0, 0.0, result[0].width.toDouble(), result[0].height.toDouble()),
Rect.fromLTWH(0.0, 0.0, width.toDouble(), height.toDouble()),
paint);
// Generate the image
recorder.endRecording().toImage(width, height).then((image) {
completer.complete(image);
});
}).catchError((e) {
print("Loading error." + e);
});
return completer.future;
}
Implementing the effect
As mentioned at the beginning of this article, the image consists of three layers, so a Stack is used here to compose the PIP image:
new Container(
width: _width,
height: _width,
child: new Stack(
children: <Widget>[
getBackgroundImage(), // The bottom Gaussian blur image
// Create the composite image using CustomPaint
CustomPaint(
painter: DrawPainter(widget._image),
size: Size(_width, _width)),
],
)
)
class DrawPainter extends CustomPainter {
  DrawPainter(this._image);

  flutterUi.Image _image;
  Paint _paint = new Paint();

  @override
  void paint(Canvas canvas, Size size) {
    if (_image != null) {
      print("draw this Image");
      print("width = " + size.width.toString());
      print("height = " + size.height.toString());
      canvas.drawImageRect(
          _image,
          Rect.fromLTWH(
              0.0, 0.0, _image.width.toDouble(), _image.height.toDouble()),
          Rect.fromLTWH(0.0, 0.0, size.width, size.height),
          _paint);
    }
  }

  @override
  bool shouldRepaint(CustomPainter oldDelegate) {
    return true;
  }
}
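The article does not show how widget._image is produced. A minimal sketch of the wiring, assuming the static drawMaskImage above lives in a helper class named ImageMerger and runs inside the State of the PIP widget (the class name and asset paths are hypothetical):

flutterUi.Image _image;

void _composeImage() {
  // Compose the PIP image asynchronously, then trigger a repaint.
  ImageMerger.drawMaskImage('images/origin.png', 'images/frame.png',
          'images/mask.png', Offset(0.0, 0.0))
      .then((image) {
    setState(() {
      _image = image; // DrawPainter draws it on the next frame
    });
  });
}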
Saving the image
Flutter is a cross-platform, high-performance UI framework, but features that rely on native services, such as saving an image to local storage, need platform-specific support. Flutter provides a plugin for retrieving file paths on each platform:
path_provider: ^0.4.1
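After adding the dependency, the plugin is imported wherever the file is written; the save code below calls its getApplicationDocumentsDirectory() function. A minimal usage sketch:

import 'package:path_provider/path_provider.dart';

// Returns the app's documents directory path on both Android and iOS.
Future<String> getSaveDirPath() async {
  final dir = await getApplicationDocumentsDirectory();
  return dir.path;
}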
Implementation steps: first, wrap the PIP widget above with a RepaintBoundary, assign a key to the RepaintBoundary, and then capture and save the screenshot. The implementation code is as follows:
Widget getPIPImageWidget() {
return RepaintBoundary(
key: pipCaptureKey,
child: new Center(child: new DrawPIPWidget(_originImage, _image)),
);
}
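The declaration of pipCaptureKey is not shown in the article; it is presumably a GlobalKey held by the State, for example:

// Assumed declaration: a GlobalKey used later to look up the
// RepaintBoundary's render object when taking the screenshot.
final GlobalKey pipCaptureKey = GlobalKey();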
Capture and save the screenshot
Future<void> _captureImage() async {
RenderRepaintBoundary boundary =
pipCaptureKey.currentContext.findRenderObject();
var image = await boundary.toImage();
ByteData byteData = await image.toByteData(format: ImageByteFormat.png);
Uint8List pngBytes = byteData.buffer.asUint8List();
getApplicationDocumentsDirectory().then((dir) {
String path = dir.path + "/pip.png";
new File(path).writeAsBytesSync(pngBytes);
_showPathDialog(path);
});
}
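One optional tweak, not part of the original code: boundary.toImage defaults to a pixelRatio of 1.0, so the saved PNG may look soft on high-density screens; passing the device pixel ratio produces a sharper capture.

// Optional: capture at the device's pixel density for a sharper image.
var image = await boundary.toImage(
    pixelRatio: MediaQuery.of(context).devicePixelRatio);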
Finally, display the path where the image was saved:
Future<void> _showPathDialog(String path) async {
return showDialog<void>(
context: context,
barrierDismissible: false,
builder: (BuildContext context) {
return AlertDialog(
title: Text('PIP Path'),
content: SingleChildScrollView(
child: ListBody(
children: <Widget>[
Text('Image is saved in $path'),
],
),
),
actions: <Widget>[
FlatButton(
child: Text('exit'),
onPressed: () {
Navigator.of(context).pop();
},
),
],
);
});
}
The idea of gesture interaction
The current implementation moves the original image to the center for cropping, on the assumption that the important area of the picture is in the center. This causes a problem: if the important area of the original picture, or the effect area within the frame, is not centered, the result will be off.
Therefore, gesture interaction is needed: when the important area of the picture, or the picture-in-picture effect area, is not centered, the user can manually adjust the displayed region.
Add a gesture handler, obtain the offset of the current drag, and use it together with the original picture and the frame area when cropping, so the desired region is displayed. A sketch of this idea follows.
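A minimal sketch of that gesture layer, assuming the accumulated drag offset lives in the State and is later passed to the cropping step (the names and wiring are illustrative, not from the article):

Offset _cropOffset = Offset.zero;

Widget buildGestureLayer(Widget pipImage) {
  return GestureDetector(
    onPanUpdate: (DragUpdateDetails details) {
      setState(() {
        // Accumulate the drag so the crop window follows the finger.
        _cropOffset += details.delta;
      });
    },
    child: pipImage,
  );
}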
At the end of the article
Welcome to star the code on GitHub.
All resources and pictures used in this article are for learning purposes only; please delete them within 24 hours after studying. If there is any infringement, please contact the author to have them removed.