Author | Zhang Wei (Hill)

Editor | Orange

New Retail Product | Alibaba Tao Technology

Within Alibaba’s Tao (Taobao) front-end team, development and deployment patterns are still in flux as the technology evolves. On one hand, the complex capability modules inside and outside the system keep advancing; on the other, underlying technologies such as LSP and DAP are gradually maturing. Today, through the integrated development environment of the IDE, we are re-examining and consolidating the capabilities incubated in earlier stages, restructuring the original links, finding breakthrough pain points from the user’s perspective, discovering the best combinations of R&D capabilities for each scenario, building a common underlying platform, and upgrading the existing patterns.

As a front-end developer who joined the company as an intern in 2014 and full-time in 2015, I have lived through the changes in the development and deployment model of the Tao front-end team (formerly the Taobao front-end team), and was fortunate to take part in building some of its capabilities along the way. Today, from the perspective of someone who experienced it, and drawing on my own memories and conversations with colleagues, I will recall and describe the journey in stages.

This article divides the story into four stages. First, the “Stone Age” around 2013, whose main theme was migrating code storage and deployment onto the GitLab technology stack. Then the “Silver Age” of 2014, when NodeJS matured and we began building engineering tools in the front end’s own language, JavaScript. From 2015 onward, having completed the NodeJS-based engineering tools, came the “Golden Age” of systematically constructing the online and offline front-end engineering system. And finally the “Future Era”, in which we are shaping the future of R&D through diverse technologies such as clients, containers, and algorithms.

The Stone Age

In 2013, the front-end development model was not that different from the back-end model. Code management was mostly SVN-based: after finishing the day’s development, you uploaded your code to the SVN server through the CLI or TortoiseSVN. In the deployment phase, you copied the test code to the test server manually or via FTP. After testing, you manually checked the code version and content, then uploaded it to the production environment, completing the whole development-and-deployment process.

During that period, GitLab, a code management platform built on the Git protocol, also appeared alongside SVN. Given Git’s advantages — the convenience of SHA-1-based version verification and the flexibility of local, distributed version control — we gradually migrated the department’s SVN workflow to GitLab. This change was the origin of the transformation of the Tao front-end’s R&D process.

While enjoying the convenience of the new version management tool, we wondered whether the deployment process — cumbersome at the time and dependent on manual checks — could be further improved in the new system. Among the capabilities provided by GitLab, we found that the webhook mechanism could be wrapped at a higher layer and, triggered by a conventional operation, connect to the release system to complete the go-live process automatically. We triggered webhook event notifications based on Git tags following a fixed convention such as a `publish/` prefix plus version information, which in turn invoked the release system’s go-live process — the first fully automated front-end release process in a practical sense.
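As a rough sketch of the tag convention described above (a hypothetical helper — the team’s actual release service is internal and not public), a Node-side filter for GitLab’s tag-push webhook payload might look like this. GitLab delivers tag pushes with `object_kind` set to `"tag_push"` and a `ref` such as `refs/tags/publish/1.2.3`:

```javascript
// Decide whether a GitLab webhook payload should trigger a release.
// Returns { version, commit } for a matching publish tag, else null.
function parseReleaseTag(payload) {
  if (payload.object_kind !== 'tag_push') return null;
  const match = /^refs\/tags\/publish\/(.+)$/.exec(payload.ref || '');
  if (!match) return null;
  // A deleted tag also arrives as tag_push, with "after" all zeros —
  // treat it as a non-release event.
  if (/^0+$/.test(payload.after || '')) return null;
  return { version: match[1], commit: payload.after };
}
```

A webhook endpoint would call this on each delivery and, on a non-null result, hand the version and commit to the release system.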

The stage was as primitive as the Stone Age, but this ground-up overhaul of the R&D infrastructure laid a solid foundation for further development. Meanwhile, as front-end technology advanced, we entered a new stage built on this underlying system.

The Silver Age

Around 2014, NodeJS, by then more than five years old, was gradually maturing. The team built a local CLI tool named DEF on top of NodeJS. Its core is a mechanism for installing and invoking Node modules: tool developers package R&D functionality as plug-ins under the DEF system, and different combinations of plug-in modules handle compiling, debugging, and publishing front-end projects.
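The plug-in mechanism can be sketched minimally as follows (the registry shape and plug-in interface here are illustrative assumptions — DEF’s real internals are not public). Each plug-in is an npm package exporting a `run(args)` function, and the CLI looks up the command and delegates to the matching plug-in:

```javascript
// Build a CLI dispatcher from a command -> plug-in registry. In the real
// tool the registry would be discovered from installed npm packages.
function createCli(registry) {
  return function run(command, args) {
    const plugin = registry[command];
    if (!plugin) throw new Error(`unknown command: ${command}`);
    return plugin.run(args);
  };
}

// Example plug-in module, as an installed package might export it.
const buildPlugin = {
  run(args) { return `building ${args.entry} for ${args.env}`; }
};

const def = createCli({ build: buildPlugin });
```

Adding a capability then means publishing a new plug-in package, not changing the CLI itself.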

Under the KISSY framework in use at the time, we used NodeJS to process scripts through techniques such as regular-expression matching and AST parsing with UglifyJS, and began building front-end engineering tools in JavaScript itself — gradually replacing the Java-based tools on the Ant platform.
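To illustrate the regex-matching approach (a simplified sketch, not the actual tool’s code): KISSY modules are declared as `KISSY.add(name, factory, { requires: [...] })`, so an early build step could extract a module’s dependency list with a regular expression before full AST parsing was adopted:

```javascript
// Pull the requires: [...] array out of a KISSY module declaration.
// Returns the list of dependency module names, or [] if none declared.
function extractRequires(source) {
  const m = /requires\s*:\s*\[([^\]]*)\]/.exec(source);
  if (!m) return [];
  return m[1]
    .split(',')
    .map(s => s.trim().replace(/^['"]|['"]$/g, '')) // strip quotes
    .filter(Boolean);
}
```

The AST route via UglifyJS is more robust (it is not fooled by comments or strings), which is why the tools moved in that direction.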

At the time, in addition to using tools like Yeoman for project initialization, we were abstracting the business compile-and-build logic into the basic concept of a builder.

In the traditional project layout of the time, the build configuration lived in the same code directory as the project’s source files. From the team’s perspective, build logic was fragmented, with no unified way to update and manage it: if the build tooling for a given class of R&D scenario changed, rolling the update out across the team’s projects was expensive. From the user’s perspective, projects of the same type carried large amounts of duplicated build dependencies, and each project had to install them again before building, wasting a significant amount of disk space and time.

By converging build dependencies into the builder concept, a project’s build dependencies are maintained in a single NPM package. Compilation logic thus becomes one-to-many instead of many-to-many, which greatly reduces the space the build logic occupies, lets installed build dependencies be reused, simplifies the build process, and improves build efficiency. It also foreshadowed the online build system that came later.
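The one-to-many relationship can be sketched like this (the config field name `builder` and the package names are hypothetical illustrations of the idea, not the platform’s actual schema). A project declares which builder package it builds with, and the platform installs that package once and shares it across all projects of the same type:

```javascript
// Resolve a shared builder from a project's build config, e.g.:
//   { "builder": "@team/builder-webapp" }
// "installedBuilders" stands in for the platform's shared install cache.
function resolveBuilder(projectConfig, installedBuilders) {
  const name = projectConfig.builder;
  const builder = installedBuilders[name];
  if (!builder) throw new Error(`builder not installed: ${name}`);
  // A builder exposes a single build(cwd) entry point.
  return builder;
}

const installed = {
  '@team/builder-webapp': { build: cwd => `built ${cwd} with webapp builder` }
};
const builder = resolveBuilder({ builder: '@team/builder-webapp' }, installed);
```

Updating the builder package then updates every project of that type at once, instead of patching each repository.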

At this stage, NodeJS was becoming a basic skill beyond daily page development. With NodeJS arriving at scale, front-end development and deployment moved from the “Stone Age” into the “Silver Age”. As the basic NodeJS applications matured, we began deeper thinking and more abstract design on top of them, and started building a more mature front-end engineering system.

The Golden Age

After the team finished building the NodeJS-based engineering tools, web service framework, and other infrastructure, an increasingly mature online-and-offline engineering system took shape over the following years. The services and tools designed and built in that phase gradually became the infrastructure of the Tao front-end team’s development and deployment process.

Development suite

As the original DEF tool developed, more and more tool plug-ins sprang up. On one hand, users had a huge choice of tools — almost any local function could be found in the plug-in ecosystem. On the other hand, from the perspective of an actual user, developing a given project meant being familiar with the plug-ins the project used, how to use them, and how best to combine them. As the number of projects grew, the cost of remembering each tool combination became high, and users had to keep track of which plug-in combination to switch to between projects.

After this period of “letting a hundred flowers bloom,” we introduced the concept of a development suite on top of the original DEF tool. By summarizing and converging the local development process, we abstracted the functional nodes init (initialization), dev (compile and preview), build, test, and publish. After classifying the original plug-in capabilities by type of project development, a standard local tool for each type — the development suite — took shape.
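The lifecycle abstraction can be sketched as follows (the method names mirror the init/dev/build/test/publish nodes described above; the real suite API is internal, so treat this shape as an assumption). A suite for one project type implements all five nodes, and the CLI invokes the same command names regardless of project type:

```javascript
// The unified lifecycle every suite must cover, so commands stay uniform
// across project types.
const LIFECYCLE = ['init', 'dev', 'build', 'test', 'publish'];

function createSuite(name, handlers) {
  for (const step of LIFECYCLE) {
    if (typeof handlers[step] !== 'function') {
      throw new Error(`suite ${name} missing lifecycle step: ${step}`);
    }
  }
  return { name, run: (step, ctx) => handlers[step](ctx) };
}

// Example: a suite for web application projects.
const webappSuite = createSuite('webapp', {
  init: () => 'scaffolded webapp',
  dev: () => 'dev server started',
  build: () => 'bundle emitted',
  test: () => 'tests passed',
  publish: () => 'published'
});
```

Because every suite covers the full lifecycle, a user switching between projects only ever types the same five commands.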

With the suite concept, users developing across different projects can start the corresponding tools and services with the same unified commands. The new suite architecture also gives each user finer-grained, more flexible version control, and comes with a more complete log-monitoring system, so that problems can be sensed and resolved in real time — protecting, as much as possible, the day-to-day experience of front-line developers using the local tools.

R&D deployment platform

Once the process of releasing front-end resources via GitLab’s webhooks had matured, R&D users came to expect a better release experience, while the team needed more systematic, structured process governance and data statistics across the front-end R&D process.

Against this background, by opening up the publishing capabilities along the original link, we chained all of the release steps together in a friendlier way and produced the front-end R&D deployment platform. In terms of experience, users can start a release task with a single command, which replaces the manual git-tagging step of the old process. Throughout the release, the running information and logs of each step are streamed back over a long-lived connection, giving the release a proper operational view. For the first time, users had comprehensive control over the release process: from code submission and checks all the way to the resources going live on the CDN, complete information was surfaced, which greatly improved the release experience.

After the basic link was completed, the NodeJS-based R&D deployment platform gradually began reshaping the original release link from a more systematic perspective, as the foundation of the engineering go-live process. The steps of the original process were abstracted and distributed across three main stages: build, check, and release. As business development shifted from PC to wireless, the concept of a release type was incubated on top of these three basic steps. For example, there were web application, front-end resource, and Weex release types; for each segmented release type, the underlying system adopts a different deployment plan for the details of the release process, combined with the local suite tools and the associated online and offline links — providing, at a finer granularity, the best deployment experience for each R&D mode.

Since then, users no longer trigger the release process through GitLab webhooks. “One-click go-live” marked a new stage in the front-end engineering system.

Cloud build

In the previous stage, by centralizing compilation logic in NPM packages, we had formed a standard description of the compile-and-build step in the R&D process. However, the builder was not yet connected to the online deployment process, and differences between local environments could still make build results unstable.

At that time, as Docker developed, we thought of using Docker’s ability to quickly start and stop a uniform environment to host the builder: the local compile-and-build process is reproduced inside a container started by Docker. A unified container environment, combined with the version identifiers (such as the commit) provided by GitLab after the migration described above, guarantees the stability and consistency of the build output to the greatest extent.

After a period of exploration — verifying the technical links in the system, such as building a front-end Docker container cluster, connecting the container and application network channels, constructing the task-scheduling logic, and exposing container run logs through Redis — we built a container-based cloud build system for front-end continuous integration.

Besides building the online build capability on MidwayJS, the Tao NodeJS application framework, we also gradually established friendlier and more complete build business logic. By invoking the cloud build service when a release goes online, the relationships between builder, repository, and user are connected throughout the execution of each build task. On top of this relational network, we established complete task scheduling and management, which fully records how each builder task executes.

From the builder’s perspective, you can see how the builder executes against different repositories, with complete run logs, build times, and build error records, making it easy to troubleshoot and optimize a builder. For iterative updates, a builder’s developer can set a grayscale (canary) range of users, ensuring stable rollout of a new version through a complete grayscale process. If a problem appears during a builder release, the grayscale can be cancelled immediately for handling. This minimizes release risk; with such broad horizontal coverage, the complete grayscale publishing mechanism plays an important role.

During development, users no longer need to push locally built artifacts to the repository. Once the source code is done, they simply submit it and trigger the release task; the cloud build system automatically produces stable, consistent build output.

Door God

Even after a project has been compiled and compressed, front-end resources that seem ready for deployment can still hide many problems in increasingly complex business scenarios, and the reliability of online resources can no longer be guaranteed by the manual inspection of the past. Against this background, systematic, standardized intervention and detection are needed as the last line of defense before resources are released.

Similar to cloud build, the online resource inspection environment is built on NodeJS. By further abstracting the inspection logic and carrying it in NPM form, the concept of an inspector was also abstracted. After a user’s resources finish all pre-processing, they enter the Door God system, which runs inspection tasks in parallel — checking resource addresses, sensitive words, code comments, and so on — and, according to the severity levels of the results, feeds back the corresponding handling to the user. It is the last checkpoint before resources go live; the system’s name, Door God (Menshen, the guardian deity pasted on Chinese doors), implies its role of guarding the basic quality and safety of every released resource.
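A Door-God-style pipeline can be sketched as follows (the inspector names, rules, and severity levels here are illustrative assumptions; the real rule set is internal). Each inspector examines the resource text and reports findings with a level; all inspectors run in parallel, and the worst level decides the overall verdict:

```javascript
// Two example inspectors mirroring the check categories named above.
const inspectors = [
  {
    name: 'resource-address',
    async check(text) {
      // Insecure http:// references block the release.
      return /http:\/\//.test(text)
        ? [{ level: 'error', message: 'insecure http:// resource address' }]
        : [];
    }
  },
  {
    name: 'code-comments',
    async check(text) {
      // Leftover TODO comments are only a warning.
      return /\/\/\s*TODO/.test(text)
        ? [{ level: 'warn', message: 'TODO comment left in release build' }]
        : [];
    }
  }
];

// Run every inspector in parallel and fold the findings into a verdict.
async function inspect(text) {
  const results = (await Promise.all(inspectors.map(i => i.check(text)))).flat();
  const verdict = results.some(r => r.level === 'error') ? 'blocked'
    : results.length ? 'warned' : 'passed';
  return { verdict, results };
}
```

Because inspectors are independent NPM-style modules, new checks can be added without touching the pipeline itself.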

With Door God’s automated checks, users no longer need “sharp eyes” before going online to catch the problems that conventional compilation tools cannot find — releasing becomes easy.

Opening up the capabilities

On one hand, as front-end engineering kept developing inside the system, more R&D solutions emerged alongside evolving business scenarios; on the other hand, the R&D deployment platform, cloud build, and Door God had gradually become the de facto standards and infrastructure of the group’s front-end R&D. In this context, the underlying capabilities got the opportunity to be further abstracted and opened up, so that upper-level business systems could assemble their own front-end R&D processes from the mature underlying engineering capabilities according to their own business conditions.

For example, within the Ali ecosystem there are different delivery clients and different forms of resource organization; with the general abstraction of the underlying engineering capabilities, developers in each business scenario can ultimately get the R&D experience best suited to that scenario’s projects.

The page-building platform

(Related reading: how Tianma, the page-building service used across the Ali economy, is designed.)

In addition to conventional project development, the e-commerce system — especially in marketing scenarios — needs pages to be produced quickly. Against this background, a platform for generating pages out of modules was incubated. A page is re-divided into a page skeleton and page modules; front-end developers build these page elements, combine and match them after development, and connect the business data flow, enabling pages to be built quickly.
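The skeleton-plus-modules model can be sketched like this (a simplified illustration of module-based page building; the slot/module names and interfaces are assumptions, not the platform’s real API). The skeleton defines slots, and each module renders its own fragment from the business data injected for it:

```javascript
// Render a page by filling each skeleton slot with its module's output.
function assemblePage(skeleton, modules, data) {
  return skeleton.slots
    .map(slot => {
      const mod = modules[slot.module];
      if (!mod) throw new Error(`missing module: ${slot.module}`);
      return mod.render(data[slot.module] || {});
    })
    .join('\n');
}

const skeleton = { slots: [{ module: 'banner' }, { module: 'item-list' }] };
const modules = {
  banner: { render: d => `<div class="banner">${d.title}</div>` },
  'item-list': {
    render: d => `<ul>${(d.items || []).map(i => `<li>${i}</li>`).join('')}</ul>`
  }
};
const html = assemblePage(skeleton, modules, {
  banner: { title: 'Big Sale' },
  'item-list': { items: ['A', 'B'] }
});
```

Because modules are developed and published independently, an operations team can assemble a new marketing page from existing modules without any new front-end code.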

After finishing local module development, front-end developers publish the module resources online, then complete a page-level release after assembling the page and previewing it with injected context in the building system. Underneath, the origin-server system, also implemented in NodeJS, handles CDN back-to-origin content distribution, serving previews and pages to hundreds of millions of users. Over several generations of development, this system has become one of the most core front-end R&D scenarios.

Roughly from 2015 onward, along with the development of the technology, the Tao front-end R&D system grew from the local tools and the R&D deployment platform as pacesetters into systematic, professional online-and-offline solutions. While the experience of every front-end role kept breaking through and upgrading, enterprise-scale front-end engineering continuously turned scattered R&D into a more systematic, efficient, and extensible daily development pattern — laying a solid foundation for the subsequent breakthroughs and upgrades of the higher-level R&D and deployment models.

The Future Era

Over the past few years, we have also kept asking whether there is a more groundbreaking, future-oriented answer to R&D efficiency. Through the continuous transformation and iteration of many internal products, directions and modes in which a single spark can start a prairie fire have also emerged.

D2C

The imgcook project originally grew out of visual page building, and the R&D mode seen from the outside has undergone constant improvement and change. From an external perspective, the whole evolution has several stages. Stage one: users built pages by dragging and dropping page elements, then released the generated source code and went live.

Stage two: an image-scanning engine was built to do pixel-level layout analysis, and visual design tools such as Sketch were connected to automatically convert design drafts. At this stage, front-end users’ R&D mode began changing into uploading the design draft to the platform for code conversion.

Stage three, the current one: the platform is improving design-to-code conversion through deep learning and other AI techniques, while the basic R&D link is being completed on the platform side. The internal R&D mode is gradually switching to operating in a combined design-and-code workbench, for a one-stop development and go-live experience.

At present, D2C capabilities have been progressively validated, with solid results in the intelligent restoration of components, forms, modules, and pages.

IDE

In the last year, we have also gradually invested in a new direction — the IDE — hoping to incubate a new, more efficient development mode on top of IDE platform capabilities.

The external trend

From an external perspective, two trends are emerging. One is IDE-related startups: rising stars such as Theia under the Eclipse ecosystem, Coder with its VSCode-compatible interface, and CodeSandbox, the industry newcomer in online editing. The other is the entry of the cloud vendors: AWS acquired Cloud9, Tencent acquired Coding, and Azure provides the Codespaces service.

As editor, Docker, and other technologies continue to develop and mature, all of these IDE vendors hope to find opportunities to improve R&D efficiency and experience through an integrated development environment — capturing users’ pain points and expanding the product market.

The internal system

As front-end R&D modes keep differentiating and developing, more and more tools and services appear. They are no longer only the command-line terminal tools described above; the terminal tool has gradually become just the entry point to richly interactive tools and services. At the same time, one business R&D mode often needs to connect to tools and services provided by different systems and teams. For example, developing an Alipay mini-program today requires, beyond basic compilation and preview services, a simulator, a debugger, on-device debugging, and more.

In fact, the Alipay mini-program is a microcosm of many internal scenarios. Its current solution is a local IDE tool built on the Electron platform. In this context, the group also began building an underlying IDE system last year, expecting a single IDE kernel layer to support basic IDE solutions both online and offline.

With the underlying IDE’s plug-in mechanism, the Tao team has also incubated its own integrated IDE tools; scenarios such as the D2C, Alipay mini-program, and Serverless ones mentioned above are being intensively rolled out and polished internally. Through the IDE and its plug-in system, we improve the basic development experience with the help of the VSCode ecosystem on one hand, and on the other connect all of the tools and services involved in a single project’s link through plug-in UI capabilities extended beyond VSCode’s scope.

Unlike the previous process, in addition to chaining the online release and deployment steps, everything a developer does — coding, debugging and preview, and deployment — is now organically linked in one unified R&D IDE, with every operation completed directly through the R&D panel. This is a seed we have planted for the future, and it is slowly germinating.

This article has introduced, from the viewpoint of someone inside the system, the evolution of the Tao front-end’s development and deployment. We are still exploring breakthroughs in the current R&D model day and night. Finally, a word about what our team does.

We are the Tao front-end engineering team. As the R&D infrastructure for the Tao front end and the Ali Group front end, we are building the front-end engineering system, mainly in three areas:

  1. Next-generation front-end R&D mode: through KAITIAN, Ali’s self-built IDE core layer, and deep integration of existing R&D service assets — backed by underlying capabilities such as a unified desktop-and-web kernel and a visual plug-in system — we are building the R&D modes of today and tomorrow and opening up new territory.
  2. Basic front-end engineering services: the continuous-integration process of front-end R&D, based on NodeJS, Docker, and related capabilities, with millions of tasks waiting for you to optimize and challenge.
  3. Large-scale CDN origin service: the group’s core CDN origin service, implemented on NodeJS and driving the infrastructure forward, with hundreds-of-millions-scale traffic waiting for you to “change the engines mid-flight”.