The original article appears on Github
Process automation
The front end has gone through a series of stages, from no specification to specification to automation. In the beginning there was no dedicated front end at all: all front-end responsibilities were handled on the server side. The situation back then was chaotic. Factions abounded, browser vendors did not agree on specifications, module mechanisms were a mess, and every camp had its own coding style. Since the release of HTML5, CSS3 and ES6 (sometimes abbreviated as "536"), the front end has become more standardized: browser specifications are converging, Microsoft's browser is catching up, and browser compatibility has taken a big step forward. Front-end development is now more and more standardized, with more and more rules to follow. So people naturally wonder whether these rule-based tasks can be automated (programmers love to be lazy): can the boring work be handed to the computer, reducing repetition and improving efficiency? The answer is yes, and there are plenty of successful examples, from Facebook's Waitir and Google's automated process system, to Ali's DEF in China, to the GitLab CI used by smaller companies; many companies have begun to explore process automation. In this chapter I will first walk through the front-end workflow, then analyze which of its tasks can be automated and what the ideas behind that automation are, and finally talk about how to build a small automation platform.
Front-end workflow
Before we get started, let’s take a look at what our current workflow looks like on the front end. I’m just sharing my workflow. It may be a little different for different companies and individuals, but the general idea is pretty much the same.
Phase 1: Requirement generation – preparing for development
This stage can be divided into the following three sub-stages:
Requirement review
Under normal circumstances, before development starts there should be a requirement review: the relevant developers and product managers get together and discuss how the requirement came about (in agile development, requirements usually come from customer feedback), whether the proposed feature really solves the customer's problem, whether our understanding of the requirement deviates from it, what its priority is, and so on. After a thorough discussion, if there is no problem with our understanding and the requirement does solve the customer's problem, we schedule and prioritize the work, list the tasks on the kanban (either in a system or on a whiteboard), and track and update them daily.
Understand the product manager's real intention and the context behind the requirement.
Split components and modules
The current recommended approach to work on the front end is component development, where pages are broken down into sufficiently granular widgets and modules. A single person is responsible for the development of related independent modules and components, so that components and modules can be reused, which has been discussed in Chapter 2, and will not be described here.
Create a branch
Like most Internet companies, our company uses Git for version control, and usually has its own Git flow. When we build a new feature we create a feature branch, which is merged into the release branch after development to wait for testing and release. Even if you use SVN, the idea is the same.
Phase 2: Development begins – committed for testing
Writing unit tests
There are two common questions. Question one: can you skip unit tests, and is there any downside to not writing them? Question two: why write unit tests first instead of code first? Let's answer them in turn. For the first one, skipping unit tests is as easy as skipping the gym, but don't we regret it a little when we think about our abs? The same is true for unit tests: when you skip them, nothing happens at first. Statistically, writing unit tests takes more time than writing the code itself, so why spend so much time on test cases? Because solid test cases give you the courage to change your code as the application grows in size and complexity. When the unit tests all pass, it is as if someone told you, "Go ahead, buddy, nothing's wrong." You no longer tiptoe around for fear of breaking your code. It is important to point out, though, that low-coverage unit tests don't work: they only give the illusion of "go ahead, nothing's wrong." That is why I will emphasize later that unit test coverage has to be high enough. The second question involves a concept called test-driven development (TDD). The idea of TDD is to write the test cases first and the implementation afterwards. One great advantage of TDD is that by leaving the implementation for later you don't get bogged down in details, you keep a clearer view, and you understand the business or the logic better, much like the many good coders who write down their ideas first and only then write the code.
Having answered these common questions about testing, let's look at how unit tests are actually written. In practice, unit tests usually come from the test cases that the testers organize, though some special algorithmic logic needs cases of its own. One principle: if the QA colleagues can organize the cases, let them, and don't duplicate the work. But writing unit tests on the front end has always felt difficult, and that's how I felt at first. Then I got into functional programming, and it was like a light bulb went off. Traditional object-oriented, imperative programming has a big downside; to quote Joe Armstrong, creator of the Erlang language:
You ask for a banana, but instead you get a gorilla with a banana
Yes, when I look back at my old code, that's true. It's not entirely our fault: we're used to all kinds of assumptions and all kinds of external dependencies, and we take it for granted that we keep mutating state, which makes state hard to track. That's why writing unit tests was so hard. When you start writing code in a functional style, you'll find it becomes very easy to test. For example, a React presentational component is a pure function, and testing such a component is easy. When you read redux's code, you'll see that the reducer design is quite elegant: it turns the spatial abstraction of reduce into the temporal abstraction of the reducer, and the idea of reducing over pure functions makes the code much easier to test, as you can see in this post.
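To make this concrete, here is a minimal sketch of testing a pure reducer, assuming Jest as the test runner and a hypothetical counter reducer (not code from the project):

// A hypothetical pure reducer: no external dependencies, easy to test
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
}

// Jest test cases: the same input always produces the same output
test('increments the counter', () => {
  expect(counter(0, { type: 'INCREMENT' })).toBe(1);
});

test('returns the current state for unknown actions', () => {
  expect(counter(5, { type: 'UNKNOWN' })).toBe(5);
});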
Defensive coding
Before writing code, think about existing wheels, and don't reinvent one if you don't have to. I like to split the logic into multiple functions, check the incoming parameters at the beginning of each function, and check every possible error at every step, typically the NPE (Null Pointer Exception). This is classic defensive programming.
// String a -> Number b -> Boolean
function testA(a, b) {
  if (!a) return false;
  if (!isString(a)) return false;
  if (!b) return false;
  if (!isNumber(b)) return false;
  return !!a.concat(b);
}
This approach is especially effective on the front end, because JS is a dynamic language: if you don't use TypeScript or other tools that add static checking, your code is fragile. A simple rule of thumb is to assume that every line of code may throw, and to handle or report the error when it does.
Make unit test coverage high enough
I have talked about the importance of unit testing and how to write unit tests. Here I want to highlight coverage: low unit test coverage is useless or even counterproductive, so it is important to keep coverage high enough. The industry generally agrees that unit test coverage above 95% is appropriate.
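With Jest, for example, a coverage threshold can be enforced in the configuration so the test run fails when coverage drops below the agreed level; the numbers below are only a sketch, not our actual configuration:

// jest.config.js: fail the run when coverage falls below the agreed threshold
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 95,
      branches: 95,
      functions: 95,
      lines: 95
    }
  }
};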
Phase 3: Testing complete – releasable state
By this stage our code has been tested by ourselves and by the testers, and we consider it ready for release. We usually also have the product manager check it to see whether it is what they want.
Compile
Our code is spread across many files, it uses plenty of features that browsers don't support yet, and the CSS may need to be extracted from the JS, and so on. All of this work requires a compile (build) step.
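As a rough illustration, a minimal webpack configuration covering these needs might look like the sketch below, assuming babel-loader for new syntax and mini-css-extract-plugin for pulling CSS out of JS; the details will differ from project to project:

// webpack.config.js: a minimal sketch, not a production-ready configuration
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: {
    filename: 'bundle.[contenthash].js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      // compile syntax that browsers do not support yet
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      // extract CSS out of JS into separate files
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
    ]
  },
  plugins: [new MiniCssExtractPlugin({ filename: '[name].[contenthash].css' })]
};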
Package analysis
Our project code is built on many dependencies: we rely on frameworks such as React and Vue, and we use libraries such as Lodash, Ramda and most.js. All of these bring instability and uncertainty into the project, which is why big companies like Alibaba are so cautious about external dependencies. npm is aware of this too, to the point that it now generates a lockfile after installation. But that still doesn't guarantee the quality of the dependency packages. A rough way to judge package quality is that the documentation is rich enough, the unit test coverage is high enough, and the package is popular enough (enough stars). These are not sufficient conditions for a high-quality library, but they are necessary ones, and they filter out a large number of poor libraries.
Code check
We check the code for syntax errors and the like. This step can be done with ESLint, or with static type checkers such as Flow or TypeScript. In general, this step catches erroneous or non-conforming code.
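For reference, a minimal ESLint configuration sketch could look like this; the rules chosen here are purely illustrative:

// .eslintrc.js: an illustrative sketch; real rule sets are team-specific
module.exports = {
  env: { browser: true, es6: true },
  parserOptions: { ecmaVersion: 2018, sourceType: 'module' },
  extends: 'eslint:recommended',
  rules: {
    'no-console': 'warn', // flag stray console statements before release
    'eqeqeq': 'error' // require === and !==
  }
};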
Code optimization
The code has now passed the checks. At this point we optimize it: compression, merging, removing whitespace and console statements, extracting common dependencies, or tree shaking to remove dead code.
Code review
We organize the relevant people to review the code to ensure its quality. This part is done manually and is the final check on all the work above.
Phase 4: Preparing for release – release complete
Publish static resources to the CDN
We publish the static resources to the CDN, so that once the version is released users fetch the latest resources directly from the CDN.
Tag the release
We tag the code; tags are like milestones. A tag is useful when we need to locate or patch a specific version of the code.
Change the online version number
We’ve reached the point where the feature is ready for release, and then we’ll release the version. Change the online version number so that our users can access our latest code.
Phase 5: Released online – online verification
We’ve released the code online, and usually we need to verify that the code has been released correctly and has not affected other functions online.
The above process is roughly the workflow of most Internet companies. In the next section I'll look at each of these stages, find out where they can be automated, and explain the technical ideas behind the automation.
What is the idea of process automation
The above describes the complete process from regular requirement generation to feature release. From this analysis we find that phases 3 and 4 can be highly automated. To help you understand what phases 3 and 4 do, I have put together a diagram:
The dotted lines in the figure indicate steps completed automatically, without manual intervention; solid lines indicate manual operations. The centre of the diagram is dev: we can see that dev has three operations, commit, tag and pull request, and different operations trigger different hooks that kick off different work. We will analyze the nodes one by one: what they do and how they can be designed and implemented. Three systems in the figure need to be built: the package analyser engine, the CI centre and the CD centre.
Package analyser
The package analysis tool can analyze npm packages on the front end or Maven packages on the back end; here I take npm analysis as the example. Maven and other package analysis follow the same idea, only the concrete strategy and implementation details differ. What does the package analysis engine do? It analyzes the packages an application depends on, recursively, one by one, to find risky dependencies and notify the relevant people (the project owners). Its features include, but are not limited to, flagging packages with security risks and pointing out patch updates; with this dependency data we can even produce company-wide statistics on package usage (including versions), which is very useful. In the next section we will look at the implementation details of the package analysis engine, so that you can build your own npm package analyser.
CI
Continuous integration (CI) is the practice of continuously integrating new functionality into the existing system; extreme programming also borrows the basic ideas of CI. So how do we use CI? As the diagram above shows, the developer's commit or pull request triggers the CI, which runs unit tests, code checks and so on. If the code fails, the relevant people are notified; otherwise the passing code is merged into the mainline. In other words, CI is not a technology but a best practice. See Wikipedia for more. I'll cover the basic idea of implementing a CI later.
CD
Continuous delivery (CD) is likewise a best practice, not a technology. The basic idea is that the code is always kept in a releasable state; it guarantees the reliability of delivery and, together with continuous integration, reduces the developers' workload. See Wikipedia for more. I'll cover the basic idea of implementing a CD later.
Of course, our system is still far from perfect. We could also add a configuration centre to facilitate version management, and a monitoring platform (as described in Chapter 4); there is plenty of room to exercise your own ingenuity.
What else are we missing
The above describes a process from requirement generation to feature rollout, and we have analyzed how to automate it. But one thing we've overlooked is the very early stage of a project: scaffolding. How can it be automated and made configurable, and ideally kept consistent across the company?
What does the scaffolding do
When we start a new project, we first do technical research and selection. Once the technical direction is set, we need a skeleton that project members can build their code on. This skeleton can be generated by hand, which is what I did a long time ago, but hand-built skeletons have many disadvantages: they are inefficient, and they are not conducive to standardization. To solve this we introduce the concept of cloud scaffolding, and to understand cloud scaffolding we first need to understand scaffolding itself. In short, a scaffold generates the initial code of a project. Let's use create-react-app (hereinafter CRA), the scaffolding tool currently popular in the React community, to understand how scaffolding works. Here is how CRA is used:
npm install -g create-react-app
create-react-app my-app
cd my-app/
npm start
So we have the following project structure:
The simplest way to imagine this is that executing create-react-app xxx downloads the template files from a remote source to the local machine, and in fact that is roughly the idea behind CRA. I use a diagram to show the basic process of CRA:
Each step of the implementation is relatively simple; you can view the source code directly.
Centralized, configurable scaffolding service
The above describes what scaffolding does and how it works. However, this kind of scaffolding is not centralized: different teams are not aware of each other, so the scaffolds of different projects diverge and cannot be highly customized. Therefore we need to build a centralized, configurable scaffolding service, which I call cloud scaffolding. It adopts a client–server architecture: the client is a CLI, while the server is a configuration centre and template discovery centre. The following figure is an architecture diagram of cloud scaffolding:
The client tells the server, via a command, which template it wants to initialize, and the server looks for the corresponding template in the template library. If it exists, it is returned; if not, the server requests it from the npm registry. If the registry has it, it is returned and synced into the template library for next time; otherwise the request fails. This way the scaffolding is transparent to different teams: a team can customize its own scaffold and upload it to the template library for other teams to use, forming a closed loop.
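The server-side template lookup could be sketched roughly as follows; templateStore and fetchFromNpmRegistry are hypothetical collaborators used only to illustrate the flow described above:

// A rough sketch of the cloud scaffolding server's template lookup.
// templateStore and fetchFromNpmRegistry are hypothetical collaborators injected by the caller.
async function resolveTemplate(name, version, { templateStore, fetchFromNpmRegistry }) {
  // 1. look in our own template library first
  const local = await templateStore.find(name, version);
  if (local) return local;

  // 2. fall back to the npm registry
  const remote = await fetchFromNpmRegistry(name, version);
  if (!remote) throw new Error(`template ${name}@${version} not found`);

  // 3. sync it into the template library so the next request hits the library
  await templateStore.save(remote);
  return remote;
}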
How to build an automation platform
Build package analyser
I have divided the working process of the package analysis engine into three phases:
Create a blacklist and whitelist
We need to analyze packages, and the analysis needs data to back it up, so a blacklist and a whitelist are indispensable. We can maintain our own lists, even build a dedicated blacklist/whitelist system, or plug in third-party data sources. Either way, the first step is to have a data source; this is the first and most important step. For simplicity, here is our data source described in JSON:
{"whiteList": ["react", "redux", "ant-design"], "blackList": {"kid": ["insecure dependencies 'ssh-go'"] } }Copy the code
Recursively analyze the package and match the whitelist
This step recursively analyzes the packages. Our input is just a configuration file, which in npm's case is the package.json file. We need to extract the dependencies and devDependencies fields from package.json; both list project dependencies, the latter being development-time dependencies. At this point we have an array of the project's dependencies, for example:
const dependencies = [{
  name: "react",
  version: "15.4.2"
}, {
  name: "react-redux",
  version: "5.0.3"
}]
We then iterate over the array, fetching each package's details from the npm registry (either the official registry or a private mirror) and recursively fetching its dependencies. At this point we have all of the project's direct and transitive dependencies. Finally, we match them against the blacklist and whitelist by package name. One more step is to fetch each package's changelog and output the meaningful entries to the project owner.
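A simplified sketch of the recursive collection step might look like this, assuming node-fetch for registry requests; caching, error handling and semver range resolution (the endpoint shown expects an exact version) are glossed over:

// A simplified sketch of recursive dependency collection from the npm registry.
// Real code needs caching, error handling and semver range resolution.
const fetch = require('node-fetch');

async function collectDeps(name, version, seen = new Map()) {
  const key = `${name}@${version}`;
  if (seen.has(key)) return seen;
  seen.set(key, { name, version });

  const res = await fetch(`https://registry.npmjs.org/${name}/${version}`);
  const pkg = await res.json();

  for (const [depName, depVersion] of Object.entries(pkg.dependencies || {})) {
    await collectDeps(depName, depVersion, seen);
  }
  return seen;
}

// usage: collect everything the project pulls in, then match against the lists
// collectDeps('react-redux', '5.0.3').then(deps => console.log([...deps.keys()]));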
Results output
We have now matched all the dependency packages, so we know which of the system's dependencies are on the whitelist, which are on the blacklist and which are unknown, and we have the matched data:
const result = {
  projectName: "demo",
  whiteList: ["react", "redux"],
  blackList: [{ name: "kid", reason: ["insecure dependencies 'ssh-go'"] }],
  changeLog: [{ name: "react-redux", logs: { url: '', content: '' } }]
}
We want to separate the data from its presentation, so we save this data separately and then present it in a friendly form. This part is relatively simple, so I won't go into it.
Build a continuous integration platform
Lint, test and report are the three basic services you need to build a continuous integration platform. Their actual implementations are beyond the scope of CI itself. Lint essentially parses the JS text and matches it against a set of rules; the best-known tool in the industry is ESLint, created by Nicholas C. Zakas (author of the "red book", Professional JavaScript for Web Developers). You can see the overall architecture of ESLint here. When the JSHint team didn't respond to his actual needs, he developed and open-sourced ESLint: a JS lint tool built on the ideas of being extensible, keeping every rule independent, and having no built-in coding style. Test is about running the test cases written by developers, making sure they pass and that coverage is sufficient. If any of the preceding steps fail, an alarm is raised. The following is the architecture diagram of the CI:
The code first goes through Lint, which pulls the project's presets and plugins from the configuration centre, then moves to the next stage, Test, which distributes the code to the browser cloud for unit and integration tests and sends the results to the relevant people. If either of these two steps fails, the information is sent to the relevant people through the report service.
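A rough sketch of how such a pipeline could be orchestrated in Node is shown below; the stage commands and the report-service URL are assumptions for illustration:

// A sketch of the CI flow: run each stage, stop and report on the first failure.
const { execSync } = require('child_process');
const fetch = require('node-fetch');

const stages = [
  { name: 'lint', command: 'npm run lint' },
  { name: 'test', command: 'npm test -- --coverage' }
];

async function report(payload) {
  // hand the failure over to the report service (the URL is an assumption)
  await fetch('http://report-service.internal/notify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
}

async function runPipeline() {
  for (const stage of stages) {
    try {
      execSync(stage.command, { stdio: 'inherit' });
    } catch (err) {
      await report({ stage: stage.name, error: err.message });
      process.exit(1);
    }
  }
  console.log('all stages passed, code can be merged');
}

runPipeline();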
Build a continuous deployment platform
Continuous deployment builds on continuous integration: integrated code is deployed to production-like environments that are closer to the real running environment. For example, once unit testing is done, we can deploy the code to a staging environment connected to a database for further testing. If the code looks fine, we can then proceed with a manual deployment to production. So the smallest unit of continuous deployment is getting the code into a releasable state. Here is a classic enterprise-level continuous deployment implementation:
https://continuousdelivery.com/implementing/architecture/
As you can see, what we have described so far really just integrates new modules into the system to form a releasable unit; it does not itself include the release process. Actually publishing the code online still requires a manual step.
Code distribution can be implemented through the configuration center
To implement a continuous deployment architecture, the code and the system architecture need to cooperate, unlike the other systems discussed earlier. Continuous deployment requires the code to be loosely coupled enough that a change affects other modules as little as possible. Whenever a new feature is released, you shouldn't need to re-test all of the code; instead, external dependencies are simulated with mocks, stubs and the like. Microservices, so popular today, are also a practice of code decoupling: the system is divided into several independently running services that are unaware of each other's existence and may even use different languages and technology stacks. Here you can see again the importance of componentization and modularity discussed in previous chapters.
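For instance, a unit test can stub out an external service so the module under test stays independent. A small sketch, assuming Jest and hypothetical ./userService and ./userCard modules:

// Sketch: stub the external dependency so the test never touches the real service.
// ./userService and ./userCard are hypothetical modules used for illustration.
jest.mock('./userService', () => ({
  fetchUser: jest.fn().mockResolvedValue({ id: 1, name: 'Ada' })
}));

const { fetchUser } = require('./userService');
const { renderUserCard } = require('./userCard');

test('renders a user card without calling the real service', async () => {
  const user = await fetchUser(1);
  expect(renderUserCard(user)).toContain('Ada');
  expect(fetchUser).toHaveBeenCalledTimes(1);
});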
More on continuous deployment implementation
Set up the notification service and other accessible third-party services
The feedback system has been mentioned repeatedly. We may also need other third-party systems, such as a data visualization system. This section uses the notification service as an example to explain how to access a generic third-party system. Before getting into the specifics of the notification service, let's talk about what accessing a third-party service requires. Essentially, accessing different services is the domain of service governance, which at the moment is dominated by microservices: business logic is separated from the application by continually plugging in "third-party" services. Before microservices, the common approach was to build different systems as different applications and have them communicate by some means; a significant disadvantage is that the applications accumulate a lot of redundant logic and code. Microservices, by contrast, break the system into pieces small enough to significantly reduce redundancy. Servitization has the following characteristics:
- Applications are broken down into services by business
- Services can be deployed independently
- Services can be shared by multiple applications
- Services can communicate with each other
I won't go into the implementation details of service governance here; what matters is understanding this microservice mindset. When we need the notification service, we simply send it a message and it returns a result. Say I need to access the email capability of the notification service; the code looks something like this:
'use strict';
const nodemailer = require('nodemailer');
const { promisify } = require('util');

// Generate test SMTP service account from ethereal.email
// Only needed if you don't have a real mail account for testing
module.exports = async context => {
  // the account is assumed to come from the request context
  // (or from nodemailer.createTestAccount())
  const account = { user: context.user, pass: context.pass };

  // create reusable transporter object using the default SMTP transport
  let transporter = nodemailer.createTransport({
    host: 'smtp.ethereal.email',
    port: 587,
    secure: false, // true for 465, false for other ports
    auth: {
      user: account.user, // generated ethereal user
      pass: account.pass // generated ethereal password
    }
  });

  // setup email data with unicode symbols
  let mailOptions = {
    from: '"Fred Foo 👻" <[email protected]>', // sender address
    to: '[email protected], [email protected]', // list of receivers
    subject: 'Hello ✔', // Subject line
    text: 'Hello world?', // plain text body
    html: '<b>Hello world?</b>' // html body
  };

  // send mail with defined transport object
  const sendMail = promisify(transporter.sendMail.bind(transporter));
  try {
    const info = await sendMail(mailOptions);
    console.log('Message sent: %s', info.messageId);
    // Preview only available when sending through an Ethereal account
    console.log('Preview URL: %s', nodemailer.getTestMessageUrl(info));
    // Message sent: <[email protected]>
    // Preview URL: https://ethereal.email/message/WaQKMgKddxQDoou...
  } catch (error) {
    console.log(error);
  }

  return {
    status: 200,
    body: 'send successfully',
    headers: {
      'Foo': 'Bar'
    }
  };
};
If every application needs the mail service, each one has to write a pile of code like this; and if the company's systems are written in different languages, you have to implement it again in each language, which is very troublesome. If email notification is implemented as a service, this abstraction removes the redundant code; we can even call an email service written in Java from JS.
Tip: when we need to send mail, it is best not to talk to the mail service directly in our code, but to go through a higher level of abstraction such as the notification service.
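In code this means the application only talks to the notification service, and which channel (mail, SMS and so on) is used stays behind the abstraction. A minimal sketch, with an assumed internal service address:

// Sketch: the application only knows the notification service; the channel is decided there.
// 'http://notification.internal' is an assumed internal address.
const fetch = require('node-fetch');

async function notify(channel, payload) {
  const res = await fetch(`http://notification.internal/${channel}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  return res.json();
}

// usage: the caller does not care how the mail is actually sent
notify('mailer', { to: 'team@example.com', subject: 'Build failed', text: 'See the CI logs' });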
A related popular concept is FaaS (function as a service). It is often mentioned together with serverless, which is not about having no servers, but about making the server architecture transparent: for the average developer it feels as if there were no servers, freeing us from the server environment and letting us focus on the logic itself. Fission (Fast Serverless Functions for Kubernetes) is such a serverless framework based on Kubernetes. Because it lets developers focus on the logic itself, we can deploy the mail function above directly as a unit of deployment. Here's an example:
$ fission env create --name nodejs --image fission/node-env
$ curl https://notification.severless.com/mailer.js > mailer.js
# Upload your function code to fission
$ fission function create --name mailer --env nodejs --code mailer.js
# Map GET /mailer to your new function
$ fission route create --method GET --url /mailer --function mailer
# Run the function. This takes about 100msec the first time.
$ curl -H "Content-Type: application/json" -X POST -d '{"user":"user", "pass": "pass"}' http://$FISSION_ROUTER/mailer
If a system needs to switch from the mailer service to the SMS service, it's easy:
$ fission env create --name nodejs --image fission/node-env
$ curl https://notification.severless.com/sms.js > sms.js
# Upload your function code to fission
$ fission function create --name sms --env nodejs --code sms.js
# Map GET /sms to your new function
$ fission route create --method GET --url /sms --function sms
# Run the function. This takes about 100msec the first time.
$ curl -H "Content-Type: application/json" -X POST -d '{"user":"user", "pass": "pass"}' http://$FISSION_ROUTER/sms
The implementation of the service becomes simple enough that we only need to care about the concrete logic.
Automation scripts
Where to automate
The previous part described the software development process and what we can automate in it. In this section we cover the second kind of automation: automation scripts. Beyond software development itself, computer work is full of repetitive tasks, and also full of automated solutions to them. The solution may be a script, a piece of software or a plug-in, but it frees people from repetitive work. For example, we've all had the experience of downloading a video: we see a video online and want to download it, only to find that only VIP members may do so. We search for a workaround, follow a tutorial through a lot of trouble, and finally get the video. The next time we want to download a video, we go through all the steps again (perhaps even re-watching the tutorial). So automated tools for downloading videos from online sites appeared, and now people can download their favourite videos with one simple action. How easy! There are many similar examples, such as batch-processing tools, one-click system reinstallers, and so on, too many to count.
Essentially, anything that should be automated will be automated. Any repetitive task should be automated; why should people handle things a computer can handle? Developers should spend their energy on more valuable, more creative things. After the everyday example above, let's take a work example. The release process at my previous company was a simple procedure repeated for every release, which made it a good candidate for automation. Here is our release process:
git tag publish/version
git push origin publish/version
Then check whether the resource was successfully published to the CDN (https://g.alicdn.com/dingding/react-hrm-h5/version/index.js). This always cost me a certain amount of time, and new team members had to learn this tedious procedure too. Why not automate it? To borrow Murphy's law: whatever can be automated will be automated. Automation doesn't just save time, it also reduces the likelihood of mistakes. There is a saying in software engineering that there are two ways to reduce bugs: one is to write code so simple that there are obviously no bugs; the other is to write code so complex that there are no obvious bugs.
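Coming back to the release example, a small Node script can do the tagging, pushing and CDN check in one step. This is only a sketch: the tag naming and CDN URL follow the example above, and the check is a plain HTTP request:

// Sketch of a one-step release: tag, push, then verify the asset on the CDN.
const { execSync } = require('child_process');
const https = require('https');

const version = require('./package.json').version;
const tag = `publish/${version}`;

execSync(`git tag ${tag}`);
execSync(`git push origin ${tag}`);

// verify that the published asset is reachable on the CDN
const url = `https://g.alicdn.com/dingding/react-hrm-h5/${version}/index.js`;
https.get(url, res => {
  if (res.statusCode === 200) {
    console.log(`release ${version} is live`);
  } else {
    console.error(`CDN returned ${res.statusCode}, check the release`);
  }
});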
Write less code and do less repetition. That’s my mantra.
If you look closely, there is simply so much that can be automated. Once you bring automation into your day-to-day coding life, you'll find the time you spend on repetitive things decreases, and your happiness and sense of accomplishment increase. I once read an article whose author automated everything he could (not limited to work): for example, if his session was still active in the office in the evening (meaning he hadn't left work yet), a text message would automatically be sent to his wife, with its content randomly chosen from preset phrases.
Earlier I also read about a study that analyzed a girlfriend's Weibo posts to infer her mood. I think it would be even better if those automatic text messages could respond intelligently to the wife's mood.
Operations
If you have any operations experience, you know that ops work often involves starting, stopping and restarting services. These jobs are highly rule-based and well suited to automation. Fortunately, with the help of the shell we can talk to the operating system directly and easily meet the needs above. Here is an example of a shell script that starts, stops and restarts a service:
#!/bin/sh
DIR=`pwd`
NODE=`which node`
# The first argument is the action: one of start, stop, restart
ACTION=$1

# utils
get_pid() {
    echo `ps x | grep my-server-name | awk '{print $1}'`
}

# start
start() {
    pid=`get_pid`
    if [ ! -z "$pid" ]; then
        echo 'server is already running'
    else
        $NODE $DIR/server.js 2>&1 &
        echo 'server is running now'
    fi
}

# the stop and restart functions are omitted

case "$ACTION" in
    start)
        start
        ;;
    stop)
        stop
        ;;
esac
With the shell's programming power and its ability to abstract data as streams and combine those streams to accomplish complex tasks, we can build very complex scripts. We could even write a script that detects risky third-party libraries and then integrate it into the CI. This is just to give you an idea; hopefully you can extend it and automate everything that should be automated.
Development process automation
So where should automation happen? If all you have is a hammer, everything looks like a nail, but can everything really be automated? Of course not! Boring, highly rule-based activities should be automated, while creative work should be left for the developer to enjoy. So when you find something boring, with strong rules and clear criteria, that is the time to pick up the hammer. For example, I report my progress to the rest of the group every day, which is boring (hopefully my leader doesn't see this) but not rule-based; therefore it should not be automated.
Here’s another example. I have classified the things that need to be handled in the development process, which I call meta-scripts, and they are as follows:
- concat-readme (combine all the READMEs in a project into one complete README)
- generate changelog (generate a changelog from commit messages)
- serve-markdown (generate a static website from markdown)
- lint (code quality checks)
- start the development server
- stop the development server
- restart the development server
- start-attach (attach to the browser for debugging in the editor)
Each meta-script is either a small script or an external library. Since I use npm as the package manager, I put the meta-scripts in the scripts field of package.json, which lets me run the corresponding script or external library with npm run xxx. But don't forget that sometimes we need to do more complicated things. For example, I want to view all the READMEs of my project in the browser: I need to concat all the READMEs, then feed the concatenated content as the data source to serve-markdown, and then have serve-markdown start the server. The code looks something like this:
npm run concat-readme | npm run serve-markdown | npm run start-server --port 1089
I call code like the above a task. We could put this task in the package.json scripts too, which seems to work fine, but it has several disadvantages.
- Meta-scripts and tasks get mixed together. That in itself is not terrible, but not all scripts are meant to be reused, and our tasks are not meant for reuse.
- package.json is under version control, so it is not the place for a developer to customize tasks to his or her own situation.
So what I do is keep the meta-scripts in the repository (in package.json in this case) and manage tasks in the editor. The editor I use is VS Code, which has a built-in task runner and can also be extended with third-party plug-ins. We can then define our own tasks, for example saving the command above as a personal configuration named start doc-site.
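In VS Code this could look roughly like the tasks.json entry below; it is a sketch of a user-level task, not something checked into the repository:

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "start doc-site",
      "type": "shell",
      "command": "npm run concat-readme | npm run serve-markdown | npm run start-server --port 1089"
    }
  ]
}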
We can continue to combine:
npm run changelog | npm run serve-markdown | npm run start-server --port 1089
We have composed another nice task out of meta-scripts, which we can name start changelog-site.
There’s more:
npm run stop && npm run start-attach
We have implemented another task that lets us debug in the editor instead of the browser; we call it editor-debug.
We can add more meta-scripts and combine them into more tasks, and then trigger any combination with a single key. Isn't that great? Try it yourself!
Automation everywhere
The example above is a typical development workflow, but what about other day-to-day automation? For example, I want the computer to send an email, to go to sleep, to adjust its volume, and so on. You could write meta-scripts along the same lines and then combine them, but these tasks are not well covered by Node CLI programs or shell scripts alone, and yet that is exactly what automation is about. So we face GUI automation, which frees us from tedious, repetitive UI work. Here is an example of automation on the Mac.
JXA
Speaking of automation on the Mac, JXA (JavaScript for Automation) is a technology for communicating with Mac apps using JavaScript. Through it, developers can obtain an instance of an app, along with the instance's properties and methods. JXA makes it easy to write automation scripts in JavaScript to do things like send someone an email, open specific software, or get iTunes playback information. Here is an example of sending an email:
const Mail = Application("Mail");
const body = "body";
let message = Mail.OutgoingMessage().make();
message.visible = true;
message.content = body;
message.subject = "Hello World";
message.toRecipients.push(Mail.Recipient({ address: "[email protected]", name: "zhangsan" }));
message.toRecipients.push(Mail.Recipient({ address: "[email protected]", name: "lisi" }));
message.attachments.push(Mail.Attachment({ fileName: "/Users/lucifer/Downloads/sample.txt" }));
Mail.outgoingMessages.push(message);
Mail.activate();
There are two ways to run the above example, either from the command line or directly as a script.
- Run from the command line (CLI)
osascript -l JavaScript -e 'Application("iTunes").isrunning()'
- Run as a script
One way is to save it as a file and run it later
osascript /Users/luxiaopeng/jxa/hello.js
Another is to use Apple’s own script editor:
This is what happens when you run it:
JXA provides a wealth of APIs for us to use. For details, see Script Editor → File → Open Dictionary:
For example, I want to look at the DASH API:
Unfortunately, not all apps expose many useful APIs (DingTalk, for example), and not all programs have a dictionary at all, such as QQ and WeChat. The good news is that the Mac itself comes with plenty of programmable interfaces. Still, even some simple features turn out to be complicated to implement this way. For developers who don't want to dig too deep but still want automation, a simpler tool is necessary. Here is a little tool for the Mac.
Alfred
JXA is powerful, but using it directly is cumbersome. If you simply want to write an automation script, here is a simpler yet still powerful tool: Alfred Workflow. You can customize your own workflows, which can be GUIs, shell scripts, or even workflows written with the aforementioned JXA. Its ease of use combined with its stream-like composition makes it very powerful. Here is my Alfred workflow:
You can break a workflow down the way I broke down the development process: each part implements its own function, and the parts can even be written in different languages. For example, it's fine to use bash to process user input and then redirect that input stream to a Perl script. Alfred Workflow also lets you add file operations, web operations, clipboard actions and more without writing any code.
Conclusion
This chapter started from the front-end workflow, explained the work involved in front-end development, and tried to automate the steps that can be automated. It then described what a complete automation platform looks like and the concrete ideas behind each subsystem. After this explanation, I believe you understand what automation work involves and could even build a simple automation platform yourself. But automation among programmers goes far beyond automating the requirement-to-release process: we also build small productivity tools that are automation in nature, just not at the engineering scale. In the appendix to this book, I will also provide some automation scripts.