This is the 93rd article in our no-filler Flutter series. For more original articles, search for and follow our official account. This article was first published on the team blog as "Flutter Color Picker".
Preface
As client developers, we usually go through a UI walkthrough before an application ships. On one side sits a color-insensitive developer; on the other, a color-sensitive visual designer. How often does the following dialogue occur?
Designer: Your color doesn't match my design.
Developer: What's different? This is exactly the color from the design draft.
Designer: The design draft clearly uses pure black; your black isn't pure.
Developer: Don't go anywhere, let me check the code.
…
Checking the code is one way to settle it. But if you happen to be on another branch, you first have to stash your local changes, switch branches, open the code, hunt for the color… At moments like this, don't you wish you had a tool that could instantly show the colors actually being displayed?
Here's how I built a color picker (eyedropper) tool to settle such arguments on the spot. In practice, it is usually me who ends up proven wrong.
Putting an elephant into a refrigerator takes three steps: open the fridge, put the elephant in, close the fridge. How many steps does it take to build a color picker?
- Get the colors of all pixels on the current screen
- Select a specified position
- Output the color
1. Get the colors of all pixels
Reading the color of every pixel component by component is impractical. Instead we can take an indirect route: capture the current screen, which contains exactly the colors being displayed. For screenshots, Flutter provides the RepaintBoundary widget. Simply wrap the content to be captured in a RepaintBoundary:
// _key will be used later to locate the RepaintBoundary's render object
final GlobalKey _key = GlobalKey();

@override
Widget build(BuildContext context) {
  return RepaintBoundary(
    key: _key,
    child: Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Container(),
    ),
  );
}
Wherever a screenshot is needed, the RenderRepaintBoundary can be located through _key and converted directly into an image, as follows:
// Find the RenderRepaintBoundary of the widget via its key
RenderRepaintBoundary boundary = _key.currentContext.findRenderObject();
// Get the current devicePixelRatio
double pix = window.devicePixelRatio;
// Render the boundary into an image at the device pixel ratio
var image = await boundary.toImage(pixelRatio: pix);
At this point we have a screenshot of the current screen. An image can be regarded as a special data structure. A PNG file, for example, consists of two parts: a file signature and a series of data chunks. Chunks are divided into critical chunks and ancillary chunks, and together they carry all of the image's information, such as width and height, color type, bit depth, the actual pixel data, positioning information, and last-modified time. More details can be found in the PNG specification.
The image data chunk (IDAT) is a critical chunk that stores the actual pixel data. Combined with the color type (commonly RGB, YUV, and so on), the color of every pixel can be recovered from it. That completes the first step.
2. Get the color of the specified pixel
How do we pick the color of a specific pixel? By touch, of course: point wherever is most convenient. The implementation is simple. Use Image.memory() to display the screenshot taken in the previous step, after a small data conversion:
// Convert the Image type to Uint8List
ByteData byteData = await image.toByteData(format: ImageByteFormat.png);
Uint8List pngBytes = byteData.buffer.asUint8List();
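These bytes can then be rendered on screen with Image.memory(); a minimal sketch (the fit is an assumption, not from the original code):
// Display the captured screenshot; pngBytes comes from the conversion above.
Image.memory(
  pngBytes,
  fit: BoxFit.fill, // stretch to cover the screen at logical size
)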
Wrap the displayed image in a GestureDetector so the current offset can be read in the onPanUpdate or onTapUp callbacks. Now that we have both the pixel data of the image and an offset into it, how do we look up the color at that offset? This is where the well-known image processing library image comes in: its getPixelSafe() method takes x and y values and returns the color at that position as a Uint32 in AABBGGRR format:
Color getColorFromDragUpdateDetails(Offset globalPosition) {
  int x = globalPosition.dx.toInt();
  int y = globalPosition.dy.toInt();
  double pix = window.devicePixelRatio; // Pixel ratio of the current device
  // The screenshot is in physical pixels, so scale the logical coordinates
  int pixel32 = this.temp.getPixelSafe((x * pix).toInt(), (y * pix).toInt());
  int argb = _abgrToArgb(pixel32);
  Color pixelColor = Color(argb);
  print('Current coordinate: x:$x, y:$y');
  print('--------ARGB:$argb');
  print('--------HEX:${argb.toRadixString(16).toUpperCase()}');
  print('--------A:${pixelColor.alpha} R:${pixelColor.red} G:${pixelColor.green} B:${pixelColor.blue}');
  return pixelColor;
}
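One detail glossed over above: this.temp is the screenshot decoded into the image package's img.Image type, which is what makes getPixelSafe() available. A minimal sketch of that decoding step, assuming the package is imported as img and a 3.x version in which getPixelSafe() returns an AABBGGRR int:
import 'dart:typed_data';
import 'package:image/image.dart' as img;

// Holds the decoded screenshot queried by getColorFromDragUpdateDetails above.
late img.Image temp;

void decodeScreenshot(Uint8List pngBytes) {
  // decodeImage parses the PNG chunks and exposes the pixels row-major
  final decoded = img.decodeImage(pngBytes);
  if (decoded != null) {
    temp = decoded;
  }
}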
The general idea of the image library is this: files with different extensions are parsed according to their fixed formats to obtain the pixel data, each pixel is encoded as a 4-byte Uint32, and the value at a given position can be looked up from the x and y inputs. Add a floating window to show the selected color, and the result looks like this:
You think that's the end of it? No, no, no. It would be even better if we could zoom in on the selected area.
3. Zoom in on the selected position
In Flutter, images can be manipulated using ImageFilter. ImageFilter provides two constructors:
// Apply a Gaussian blur
ImageFilter.blur({ double sigmaX = 0.0, double sigmaY = 0.0 })
// Apply a matrix transform
ImageFilter.matrix(Float64List matrix4, { FilterQuality filterQuality = FilterQuality.low })
Here we can use ImageFilter.matrix() to apply a matrix transform to the image content and achieve the magnification effect. The magnification is done in two steps:
3.1 Obtain the matrix that magnifies the specified position
This is easy to follow. The screenshot displayed in the previous step is already wrapped in a GestureDetector; in onPanUpdate we take the coordinates of the touched position and build the matrix that transforms the screenshot accordingly:
// When the finger moves
onPanUpdate: (detail) {
setState(() {
// Get the color value of the current selected point
Color pixelColor =
getColorFromDragUpdateDetails(detail.globalPosition);
choiceColor = pixelColor;
choiceColorString = "0x${pixelColor.value.toRadixString(16).toUpperCase()}";
// The currently selected point
_magnifierPosition =
detail.globalPosition - _size.center(Offset.zero);
double newX = detail.globalPosition.dx;
double newY = detail.globalPosition.dy;
    // Matrix transformation: translate the touched point to the origin,
    // scale, then translate back
    final Matrix4 newMatrix = Matrix4.identity()
      ..translate(newX, newY)
      ..scale(scale, scale)
      ..translate(-newX, -newY);
    // Save the transformed matrix
    matrix = newMatrix;
});
}
3.2 Create a following component and apply the matrix
This is routine work: with Stack and Positioned you can build a component that follows the finger, then add a BackdropFilter and apply the matrix through ImageFilter. Calling setState whenever the position changes triggers a rebuild in real time. One point worth stressing: since the matrix transforms the entire image, a ClipRRect is needed to clip away the parts that should not be shown.
Visibility(
  visible: _visible,
  child: Positioned(
    left: _magnifierPosition.dx,
    top: _magnifierPosition.dy,
    child: ClipRRect(
      // Clip to a rounded shape so only the magnified area is visible
      borderRadius: BorderRadius.circular(_size.longestSide),
      child: BackdropFilter(
        // Apply the magnification matrix computed in onPanUpdate
        filter: ImageFilter.matrix(matrix.storage),
        child: CustomPaint(
          painter: painter,
          size: _size,
        ),
      ),
    ),
  ),
)
The final effect looks like this:
4. Problems encountered
The article is basically done at this point; here are some of the problems encountered along the way:
4.1 Color Coding
The color returned by the image library is in AABBGGRR format, while Flutter's Color expects AARRGGBB, so a conversion is needed:
// AABBGGRR -> AARRGGBB
int _abgrToArgb(int oldValue) {
  int newValue = oldValue;
  newValue = newValue & 0xFF00FF00; // keep AA and GG, clear the other bytes
  newValue = ((oldValue & 0xFF) << 16) | newValue; // move RR into place
  newValue = ((oldValue & 0x00FF0000) >> 16) | newValue; // move BB into place
  return newValue;
}
// Convert values of type int to hex values
String hexColor = argb.toRadixString(16).toUpperCase();
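As a quick sanity check (a hypothetical value, not from the article): pure red as returned by the image library is 0xFF0000FF in AABBGGRR; running it through _abgrToArgb yields the AARRGGBB value Flutter expects:
// 0xFF0000FF (A=FF, B=00, G=00, R=FF) -> 0xFFFF0000 (A=FF, R=FF, G=00, B=00)
final int abgrRed = 0xFF0000FF;
final Color red = Color(_abgrToArgb(abgrRed));
print(red); // Color(0xffff0000), i.e. opaque red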
In practice the YUV family is even more common; it has many sub-types, such as YUV420, YUV422, and so on, which readers can look into on their own. How do you calculate the actual memory footprint of an image? It depends on the resolution and the color type: the resolution determines how many pixels there are, and the color type determines how much data each pixel stores. In general the formula is width * height * bitsPerPixel / 8. Take a 1000 by 1000 RGB image as an example. Images are typically scaled up to a power of two, each color component in RGB takes 8 bits, and the alpha channel usually takes another 8 bits, for 32 bits (4 bytes) per pixel, so the general formula becomes w * h * 4. The actual footprint of this image is therefore 1024 * 1024 * 4 / 1024 / 1024 = 4 MB. Reality is more complicated than this: there are other pixel formats such as RGBA8888, RGBA4444, and further memory-saving variants, and the remaining chunks in the file also occupy some memory. This is only a supplement for readers to explore on their own.
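The back-of-the-envelope calculation above can be written down as a tiny helper (the function name is mine, not from the article):
// Estimated decoded size in MB for an image stored at 4 bytes per pixel
double imageMemoryMB(int width, int height, {int bytesPerPixel = 4}) =>
    width * height * bytesPerPixel / 1024 / 1024;

// A 1000 x 1000 image padded to 1024 x 1024: imageMemoryMB(1024, 1024) == 4.0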
4.2 Obtaining the color of the specified position
When taking the screenshot we pass the device pixel ratio, double pix = window.devicePixelRatio; on an iPhone 11, pix is 2.0. The touch positions we obtain later are in logical pixels, while the screenshot was rendered in physical pixels, so pix has to be applied when mapping a touch point onto the image.
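Reduced to its essence, the mapping that getColorFromDragUpdateDetails performs is the following (variable names here are illustrative):
// Map a logical (touch) coordinate to a physical (image) coordinate
final double pix = window.devicePixelRatio; // 2.0 on an iPhone 11
final int imageX = (touchPosition.dx * pix).toInt();
final int imageY = (touchPosition.dy * pix).toInt();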
4.3 Matrix Transformation
What we want here is to zoom in on the image at a specified position. Expressed as matrices, that is a combination of translation and scaling. Scaling is always performed about the origin, which by default is the (0, 0) point in the top-left corner, so we first translate the point we want to zoom around to the origin, scale, and then translate back to restore the scene. Matrix transforms are fun; I won't expand on them here, so feel free to explore more tricks on your own.
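The matrix built in onPanUpdate earlier can be restated as a compact helper (the name zoomAbout is mine, not from the article):
// Build a matrix that zooms in by `scale` around the focal point (fx, fy);
// the focal point itself stays fixed under the transform.
Matrix4 zoomAbout(double fx, double fy, double scale) => Matrix4.identity()
  ..translate(fx, fy)
  ..scale(scale, scale)
  ..translate(-fx, -fy);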
5. Final words
Overall, nothing particularly difficult or sophisticated was used, but the resulting gadget is quite handy. Of course there is much more that can be done to improve UI fidelity and UI development efficiency, such as inspecting component size, position, hierarchy, and so on.
When it comes to design fidelity and development efficiency, some large companies in the industry have gone quite far, iQIYI for example, which has built semi-automatic UI acceptance. The idea is to use AI to recognize component boundaries, then establish one-to-one correspondences and spacing relationships between the controls on the developed page and the design page through a control-matching algorithm and a spacing-selection algorithm, and finally compare these relationships one by one to output the results. This approach is certainly less precise and accurate than measuring with dedicated tools, but it is more efficient.
In my opinion, future automated UI acceptance will have to combine an automatic mode based on AI recognition with a personalized mode based on manual measurement: automatic acceptance for pages with clear structure and few components, manual acceptance for pages with complex structure. That is the best balance of efficiency and accuracy.
Finally, let me close with a sentence I read somewhere (I no longer remember where). May it encourage us both:
Technology exists to solve business problems; only when it serves the business and brings convenience to people does its existence have meaning.
Recommended reading
How to implement binary heap with JS
Writing high quality maintainable code: Application paradigms
Recruiting
ZooTeam, a young, passionate, and creative front-end team, belongs to the product R&D department and is based in picturesque Hangzhou. The team now has more than 40 front-end partners with an average age of 27, and nearly 30% are full-stack engineers, a proper youth storm. Its members include "veterans" from Alibaba and NetEase as well as fresh graduates from Zhejiang University, the University of Science and Technology of China, Hangzhou Dianzi University, and other schools. Besides day-to-day business work, the team pursues technical exploration and practice in material systems, engineering platforms, page-building platforms, performance and experience, cloud applications, data analysis, and visualization, has promoted and landed a series of internal technical products, and continues to explore the new frontiers of the front-end technology stack.
If you want to change being constantly pushed around by tasks and start driving them yourself; if you want to change being told you need more ideas yet finding no way to break through; if you want to change being capable of delivering the result yet not being needed for it; if you want to change having something you want to accomplish that needs a team behind it, yet having no position to lead people; if you want to change a pace where five years on the job amounts to far less experience; if you want to change having good instincts yet always feeling one blurry layer away from clarity… If you believe in the power of belief, believe that ordinary people can achieve extraordinary things, and believe you can meet a better version of yourself. If you want to take part in the growth of a front-end team with deep business understanding, a sound technical system, technology that creates value, and influence that spills over, as the business takes off, then I think we should talk. Any time, we are waiting for you to write something and send it to [email protected]