In recent years there have been many questions about where NodeJS actually fits in a company's business. Since its birth in 2009, NodeJS has been stealing the show and winning millions of fans. But some engineers can't help asking: "Has NodeJS really broken ground and earned a place in the architecture?", "NodeJS is said to be in full swing abroad, so what is the state of it at home?", "I heard Alibaba's NodeJS carried Singles' Day, so what's the story there?"
Indeed, the bigger the show, the more the questions. When controversy and pomp fade, technology has to land: let what is God's be God's, and what is Satan's be Satan's. **In this series of articles, I will review typical NodeJS projects at our company and at other teams in China and abroad, and explore in depth the prospects and best practices of NodeJS.** Interested readers can follow "front-end chat" or LucasHC for more.
The opening tea break
My related answer: how should the front end land NodeJS in a company's business?
In addition, this article is on the long side. It covers: end-to-end testing, NodeJS services, mid-platform capabilities (Docker image infrastructure), a plugin implementation based on image comparison, and more. Bookmark it before reading, or jump straight to the bonus link at the end.
Serendipitous fate – when NodeJS meets the end-to-end testing dilemma
End-to-end testing is also known as UI testing or E2E testing. In plain English, it is similar to the familiar automated test: it stands at the user's point of view and, via a protocol or other technical means, opens a real browser and interacts with the page inside it.
End-to-end testing has obvious advantages. After continuous development a project eventually stabilizes, and introducing end-to-end testing at the right moment catches problems early and safeguards product quality. Letting software take over from manual work, running fast and repeatable tests, yields clear benefits. Someone has even summed up an end-to-end benefit formula (source: "tester's lifeline"):
End-to-end benefit = number of iterations × cost of a full manual run − cost of initial automation − number of maintenance rounds × cost per maintenance round
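As a back-of-the-envelope illustration of that formula (every number below is invented purely for demonstration):

```javascript
// Worked example of the benefit formula above. All numbers are made up;
// plug in your own team's costs (e.g. in person-hours).
function e2eBenefit({ iterations, manualRunCost, firstAutomationCost, maintenanceRuns, maintenanceCost }) {
  return iterations * manualRunCost
    - firstAutomationCost
    - maintenanceRuns * maintenanceCost;
}

// 20 iterations, 4 hours per manual run, 30 hours to automate,
// 5 maintenance rounds at 2 hours each: 80 - 30 - 10 = 40 hours saved.
console.log(e2eBenefit({
  iterations: 20,
  manualRunCost: 4,
  firstAutomationCost: 30,
  maintenanceRuns: 5,
  maintenanceCost: 2,
})); // 40
```

The formula makes the trade-off explicit: automation pays off only once the saved manual runs outweigh the initial and ongoing maintenance costs.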
Even though the benefits are clear and tools abound, end-to-end testing is not widely implemented, and teams that do adopt it often seem to underperform, struggling to maximize its impact. Beyond "fit with the project's characteristics," I think this also has to do with the stage at which end-to-end testing is plugged into the development-and-release process.
In particular, quite a few teams run end-to-end tests locally, tightly coupled to the project code, for example triggered through an npm script. End-to-end testing needs the latest version of the page under test to be reachable, so the script must first ensure the local service starts successfully. Taking an npm script such as `npm run e2e` as an example, the flow looks like this:
This is essentially "execution by mood": it relies on developers remembering to run the script after local development and then eyeballing the results. Anything done "by mood" can never be normalized, process-driven, or platformized; it is doomed to be a chicken rib, of little value yet a pity to discard.
With Husky, we can force end-to-end tests to run in the pre-commit or pre-push phase. Doing so makes end-to-end testing part of the process, moving it from "by mood" to mandatory.
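One common shape of that enforcement is sketched below. This is an assumption-laden illustration only: the package names, the port, and the script wiring are typical of a Husky v4 era setup, not our actual configuration.

```json
{
  "scripts": {
    "start": "node server.js",
    "cypress:run": "cypress run",
    "e2e": "start-server-and-test start http://localhost:3000 cypress:run"
  },
  "husky": {
    "hooks": {
      "pre-push": "npm run e2e"
    }
  }
}
```

Here `start-server-and-test` boots the local service, waits for the URL to respond, runs the test script, then shuts the service down: exactly the "ensure the local service is up first" step described above.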
But think one step further and the downside of the pre-commit/pre-push approach is obvious: **it adds extra Git hooks, lengthens the commit process, and directly delays release time.** For a hot-fix, such a lag is unacceptable. Moreover, running end-to-end tests locally presupposes that the local service is available; besides the time cost of starting it, the more awkward problem is that the local service naturally differs from the online environment, so local end-to-end results can hardly equal the real online behavior.
Because of all this, end-to-end testing in a team's technology stack either gradually becomes a "pretty toy" that is never really used, or an eyesore to developers that ends up as an afterthought.
In this sense, if end-to-end testing is to break the ice, its execution process must be reinvented. We believe end-to-end testing should move into the "container" and be integrated into the CI/CD phase: fully automated and offered as a service.
What is the CI/CD phase?
These are common terms in the build-and-release process of modern Internet applications: Continuous Integration and Continuous Deployment. There is also the concept of Continuous Delivery; rather than going into detail here, we will focus on Continuous Integration and Continuous Deployment.
In a continuous-integration environment, a developer submits code to the master branch, triggering a GitLab hook that automatically drives compilation. Different teams may define the continuous-integration phase slightly differently, but that doesn't prevent a general understanding. **In the continuous-integration phase, our company mainly completes the project build process.** Specifically, the mid-platform team starts a container from the base image, pulls the latest code, installs the necessary dependencies, executes the unit-test scripts, and finally commits the image for the next phase (continuous deployment).
In the continuous-deployment phase, the mid-platform uses the image produced during the build (continuous-integration) phase to start containers, rolls out the latest version of the application step by step along a pipeline, and finally brings the service online.
Once you understand CI/CD, placing end-to-end testing in the CI/CD stage and executing it inside a real container looks like a promising attempt and innovation.
NodeJS implements end-to-end services – It’s not easy to say I love you
To that end, following our design idea of "run the end-to-end test service in containers and hook it into CI/CD", we can sketch a simple process analysis.
From the figure above, the first question arises: should we connect end-to-end services to the CI phase or the CD phase?
As a general rule, the CI phase should focus on testing and validation results to ensure the quality of all commits before deployment and provide early warning of possible problems.
There should be no human intervention in the CD phase, and only if a change fails to build in the workflow pipeline can it be prevented from being deployed to the product line.
However, end-to-end testing requires an accessible address for the latest version of the application. In our CI stage, the company only compiles the code and produces the container base image; no application service is started, so the address end-to-end testing needs does not exist. We could of course "reinvent" the CI phase and bolt on a step that starts the application service, but that would be crude and unreasonable.
Meanwhile, in our CD stage there is an "office stage" (below, "office stage/office environment" is used uniformly) before the canary process: the new version of the application is fully accessible from the office (the company's internal network). That is to say, if you access the online address www.a.com/b from inside the company, the gateway will route all the traffic…
Overall, for our company this "office" stage is the best moment to run end-to-end tests. Failing the end-to-end tests interrupts the deployment and delivery process.
This solves the reachability of a fully testable page, while keeping the end-to-end test environment exactly consistent with the online one.
Any innovative project is destined for twists and turns. The design came first, but during implementation we still met plenty of resistance and difficulty, mostly around the fit and consistency between the end-to-end framework and the mid-platform containers. Here are some typical examples.
Engaging with the community: pushing the framework's progress and polish
We chose Cypress, the most popular and active framework in the industry, for end-to-end testing. A comparison of the different end-to-end frameworks and their technical principles is out of scope here; interested readers can follow our blog, where we will dissect them later.
The overall end-to-end service process is not complicated, as shown below:
This is just a simple diagram roughly showing how, once the relevant code MR (Merge Request) is successfully built and deployed to the office environment, the mid-platform calls the interface exposed by our end-to-end test service.
This leads to the first difficulty: after an office deployment, the end-to-end service would receive the POST request, execute cypress.run(), and always get the error "Cypress binary is not installed". Why does it work locally and then fail when NodeJS runs it in the container?
Cypress installs its binary into the container's system path via npm's post-install hook, which fires after npm install completes successfully. When cypress.run() executes, Cypress first runs cypress.verify() to check its own availability, and one of the checks is whether the Cypress binary exists under the system path.
So why can't the Cypress binary be found in the container? Again, a diagram to reconstruct the crime scene:
On the first build, our build script executes npm install, which successfully triggers the post-install hook, and Cypress installs its binary into the container system path ~/.cache/Cypress.
On the second (and every subsequent) build, we face a **"brand new" empty container**. Because the mid-platform had cached node_modules for us, npm install did not actually download dependencies, so the post-install hook never fired, and no Cypress binary was installed under the container system path ~/.cache/Cypress. Hence: Cypress binary is not installed.
The solution is not too difficult. My first, most intuitive idea was to change npm install to npm ci in the build script. The differences between `npm ci` and `npm install`:

- `npm ci` requires a `package-lock.json` or `npm-shrinkwrap.json` file to be present.
- If those lock files conflict with the dependencies declared in `package.json`, `npm ci` exits with an error, whereas `npm install` would update the lock file.
- `npm ci` always installs the project's dependencies in full; individual dependencies cannot be added.
- If the project already has a `node_modules` directory, `npm ci` deletes it before reinstalling.
- `npm ci` never writes to `package.json` or the lock files.
So it is not hard to see that the npm ci command was designed precisely for the build phase, hence its name.
But using `npm ci` means giving up the mid-platform's `node_modules` cache, which inevitably lengthens the build. In any company's build-and-deploy system, the time `npm ci` spends installing dependencies is bound to be one of the biggest costs.
Is there a more "elegant" way? I believe that "PRs make the world better." Back to basics: the core problem is that node_modules is cached by the mid-platform, so the post-install hook never fires and the Cypress binary is never installed to the container system path. If we could also cache the Cypress binary to a specified path, then cypress.run() and cypress.verify() could find it in that cache path. That "cache path", of course, should live somewhere inside node_modules (because node_modules is exactly what the mid-platform caches).
To summarize, the key points are:
- Cypress needs to support a configurable environment variable specifying the installation path of the Cypress binary
- We set the environment variable `CYPRESS_CACHE_FOLDER` to `./node_modules/.cache/cypress/`
- When `cypress.run()` triggers `cypress.verify()`, Cypress looks for the binary under the path specified by `CYPRESS_CACHE_FOLDER`
At this point, the overall process is shown as follows:
Our proposal to add a configurable environment variable, so that container environments can run Cypress more flexibly, was officially accepted by Cypress, and the issue is resolved for now. We also used the CYPRESS_INSTALL_BINARY environment variable to redirect the Cypress binary download, which is by default large, slow, and unstable; we keep a copy on our intranet, where the download is fast and reliable. The build part of the final script looks like this (in YML format; this should not hurt readability):
```yml
build:
  # export cypress variables
  - export CYPRESS_CACHE_FOLDER=node_modules/.cache/Cypress && export CYPRESS_INSTALL_BINARY=http://<intranet address>/cypress.zip
  # application build
  - yarn
  - yarn build
```
As you can see, we declare and export the environment variables before installing dependencies (yarn) and building the project (yarn build).
Front end and mid-platform join forces: a NodeJS service based on Cypress
Having solved the Cypress binary installation problem, the second problem we hit during implementation was also interesting. When cypress.run() executed, we got an error; after discussion with the official team, the conclusion was "CI-stage dependencies missing in Docker":
We strongly suspected Cypress was failing in the container because the container's system version was too old: Debian 8.2 (Jessie) with NodeJS v10.14.0, i.e. a Docker base image declared as base_image: NodeJS/v10.14.0_Jessie (Debian 8). So we brought together the mid-platform team and the company's internal security group, and produced a new base image with the security patches applied, base_image: NodeJS/v12.13.0_Stretch (Debian 9), which the project container now uses.
Upgrading the base image is not as simple as just building an image; it involves plenty of exploration and negotiation "outside the technology", which we won't expand on here. In short, the mid-platform team is critical to the landing and growth of all kinds of NodeJS applications and services, and this is exactly where traditional front-end development experience falls short. Project-promotion and cross-team communication skills are therefore an important part of NodeJS development, and indeed of any front-end technology.
In addition, as a complex end-to-end testing framework, Cypress itself needs many system-level dependencies, such as Xvfb (an X server that can run on machines with no display hardware and no physical input devices). Here is a summary of the required system packages:
- xvfb
- libgtk-3-dev
- libnss3
- libxss1
- libasound2
- xz-utils
So far, a brief summary of the key problems we hit running end-to-end tests in containers, and their solutions:
- Cypress binary missing in containers: provide a configurable cache installation path
- Cypress installation slow and unstable: land the PR, then provide an intranet address and download the binary from the intranet
- Container system incompatibility: build a new image and push the mid-platform to upgrade the container's base image
Of course these are not all the problems, but they are representative, and they capture the kinds of obstacles any front-end team may meet when promoting a new technology inside a company. Setbacks can come from the NodeJS service itself or from friction with the existing technology stack; solutions range from technical work to project-driven effort.
So far we have covered the rough design of the service and the basic environment setup. Next, I'll focus on the technical architecture of the containerized end-to-end test service.
A complete and easily extensible NodeJS service design
The topic of this article is how to develop an end-to-end test system that runs in containers. As mentioned earlier, at its simplest it just triggers the end-to-end framework at the right moment. But when designing a system, a platform, we should consider more, for example:
- Horizontal multi-project expansion capabilities
- Platform service capability
- Operation efficiency of the ultimate design scheme
- Notification and warning interruption mechanism
- Reasonable selection of technical scheme and storage scheme
We named our end-to-end service Kossi, meaning "Goalkeeper", hoping it would be like a good goalkeeper: the last line of defense for the quality of our products.
Horizontal multi-project expansion capabilities
Kossi has now entered a mature stage. To justify the investment, the NodeJS service cannot serve just one project's tests; ideally it should support onboarding every product in the company while minimizing the onboarding process and its complexity.
When an office-environment deployment completes, the mid-platform calls Kossi's POST interface at https://api.goalkeeper.com/run. The fields submitted to this interface include:
```json
{
  "stage_name": "office",
  "description": "style: 1221 activity page style for lower-version Android",
  "mr_iid": 2049999,
  "app_name": "xen",
  "author": "houce",
  "event_name": "deployment_finished",
  "candidate_id": 6666,
  "deploy_id": 6666
}
```
The app_name field is the unique project name; together with the other fields (which are self-explanatory and need no one-by-one walkthrough), this interface design naturally supports onboarding any application used across the company. From the interface design alone, it has built-in horizontal scalability; the sections below expand further on horizontal scaling.
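A minimal guard over that payload might look like the sketch below. The field names come from the example request above; the actual validation logic of the Kossi service is not shown in this article, so treat the rules here as assumptions.

```javascript
// Illustrative validation for the /run request body. The required-field list
// follows the example payload; the real service may check more or fewer.
const REQUIRED_FIELDS = ['stage_name', 'app_name', 'mr_iid', 'event_name'];

function validateRunPayload(body) {
  const missing = REQUIRED_FIELDS.filter((field) => !(field in body));
  if (missing.length > 0) {
    return { ok: false, error: `missing fields: ${missing.join(', ')}` };
  }
  if (body.event_name !== 'deployment_finished') {
    return { ok: false, error: `unexpected event: ${body.event_name}` };
  }
  return { ok: true };
}
```

Because nothing here is specific to one project, any application that can send its `app_name` and deployment metadata can plug in.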
On Kossi's home Dashboard page, select an application project to view:
Ultimate design of operation efficiency
To run end-to-end tests as efficiently as possible, our analysis was: across different applications, tests should run multi-core and multi-process so that different applications' test tasks execute in parallel; deployments of multiple projects must never block or queue on each other. Within the same application, multiple deployments in a short window must not interfere with each other, so their test tasks should be serialized and executed in order.
The concrete implementation needs a message queue: each application gets its own tube, and tasks for the same application are produced and consumed serially within that tube. Since Kossi's message queue is an in-service design, I chose the lightweight and powerful Beanstalkd as the message-queue technology.
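The scheduling rule, parallel across applications but serial within one, can be sketched without Beanstalkd at all. The following is a hedged, in-process illustration using promise chains; the real Kossi service uses Beanstalkd tubes instead, and the names below are invented.

```javascript
// Sketch: one serial task chain per application. Tasks for different apps run
// concurrently; tasks for the same app are chained one after another.
const queues = new Map(); // app_name -> tail promise of that app's chain

function enqueue(appName, task) {
  const tail = queues.get(appName) || Promise.resolve();
  // Chain the new task after the current tail so same-app tasks serialize.
  const next = tail
    .then(() => task())
    .catch((err) => {
      // A failed run must not wedge the queue for later deployments.
      console.error(`e2e task for ${appName} failed:`, err.message);
    });
  queues.set(appName, next);
  return next;
}
```

With Beanstalkd the same shape falls out naturally: one tube per `app_name`, one worker consuming each tube in order.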
On Kossi's deployment-list page for a given application, click through for details:
Platform service capability
With multi-application support in place, the natural next thought is: "How do developers view test reports and understand the details of a test?"
Kossi's design therefore includes a very important piece: the platform presentation. It is a typical Koa-based NodeJS back-end service, using React for the application pages. Specifically, each time the end-to-end test service finishes, it writes all the test-report data into Redis; when developers visit https://www.goalkeeper.com/dashboard, Koa renders on the server side, fetching the relevant data to display the platform's pages.
This data includes not only each end-to-end test's case content, execution information, and results, but also the addresses of the rich-media files the test produces (test videos, test screenshots, and so on). The rich-media files are stored with container persistence technology and served externally as static assets.
In other words, at the NodeJS service level Kossi provides:
- End-to-end testing on containers
- A complete set of single-page application services (including the query platform and rich-media static services)
For example, for a given application project you can query a deployment's test information:

As well as its video and screenshot information:
Notification and warning interruption mechanism
To serve online applications better, we also designed efficient notification and warning mechanisms. Notification means that when a committed deployment starts, and again when the related end-to-end tests finish, the test information and the report-platform address are sent to the submitter or owner via enterprise WeChat and email. The warning-and-interrupt mechanism blocks the release process and notifies the submitter and the owner whenever the end-to-end tests produce abnormal results.
Abnormal results include not only failing test cases but also, more distinctively, visual-testing anomalies. On top of Cypress we packaged a visual-testing plugin that automatically takes a full-page screenshot at any chosen node and saves it as the baseline image for comparison. On the next run, the same nodes are screenshotted again and compared against the baseline; if the differing pixels of the two images exceed a certain ratio or pixel threshold, the visual comparison is considered a failure. As shown in the figure:
Visual-comparison testing greatly reduces the effort of writing test cases and is especially suitable for regression-testing styles. Of course, for expected comparison failures, such as a legitimate page UI redesign, we provide the ability to "skip the visual comparison and update the baseline image".
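The comparison rule itself can be sketched in a few lines. This is an illustration only: our actual plugin builds on Cypress and image-snapshot tooling, whereas here the screenshots are plain grayscale byte arrays and the thresholds are invented.

```javascript
// Compare two same-sized grayscale screenshots. A pixel "differs" when its
// values are further apart than pixelTolerance; the comparison fails when the
// share of differing pixels exceeds failRatio. Thresholds are illustrative.
function visualDiff(baseline, candidate, { pixelTolerance = 10, failRatio = 0.01 } = {}) {
  if (baseline.length !== candidate.length) {
    throw new Error('screenshots must have identical dimensions');
  }
  let differing = 0;
  for (let i = 0; i < baseline.length; i += 1) {
    if (Math.abs(baseline[i] - candidate[i]) > pixelTolerance) differing += 1;
  }
  const ratio = differing / baseline.length;
  return { ratio, passed: ratio <= failRatio };
}
```

"Updating the baseline image" then simply means replacing the stored baseline with the latest screenshot, so the next run compares against the redesigned UI.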
When a test of any kind fails, we trigger the warning-and-interrupt flow. As shown in the figure:
The architecture and processes, recapped
Let's summarize the whole design by walking through the flow of one request:
As shown, a simple example:
An example of a more detailed diagram:
After a developer's code is merged, the Merge Request's deployment (say, with ID 123) goes to the intranet environment and triggers the mid-platform hook. The mid-platform then requests Kossi's end-to-end test interface /run, which creates a handling process per application, uses the message queue to run the deployment's end-to-end tests, and finally writes the test status (running/success/fail) into Redis. Throughout the process, the mid-platform can poll the /consult interface, which looks up the test status for the given Merge Request ID in Redis.
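The polling contract can be sketched with an in-memory map standing in for Redis. The endpoint names come from the text above; everything else (function names, the blocking rule) is illustrative.

```javascript
// In-memory stand-in for the Redis-backed status store behind /consult.
const statusStore = new Map(); // mr_iid -> 'running' | 'success' | 'fail'

function recordStatus(mrIid, status) {
  statusStore.set(String(mrIid), status);
}

// What a /consult handler would return for a given Merge Request id.
function consult(mrIid) {
  return { mr_iid: mrIid, status: statusStore.get(String(mrIid)) || 'unknown' };
}

// The warning-and-interrupt rule: only a successful run lets the release pass;
// 'running' keeps the mid-platform polling, 'fail' blocks the rollout.
function shouldBlockRelease(mrIid) {
  return consult(mrIid).status !== 'success';
}
```

The mid-platform polls `/consult` until the status leaves `running`, then either continues the pipeline or interrupts it and notifies the owner.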
When the end-to-end tests for a deployment finish, the report platform's content is updated, so developers can access the end-to-end test report and rich media such as videos produced by the latest deployment. The notification and warning mechanisms are also triggered at this stage.
Key dependent services of the whole system are shown in the figure below:
The Kossi platform relies mainly on Koa, koa-static, and koa-router to handle test-service requests and to back the queryable test-report platform. Cypress is the core end-to-end testing framework, offering rich plugins and extension points; on top of it we packaged many plugins and extensions suited to our business, such as @kfe/kos-image-snapshot and @kfe/goalkeeper-image-snapshot-runner for visual-comparison testing. We maintain the corresponding Cypress test cases in a separate GitLab repository, pulling the latest case code each time a deployment happens and an end-to-end run starts. Finally, @kfe/kosse-report-generator is the repository for the whole queryable platform: a complete single-page application based on React SSR that, via React Router, provides:
- Home / Dashboard page (`/dashboard`): basic information about all onboarded applications
- Application project details page (`/:app`): basic information about the current application project
- Project deployment list page (`/:app/mrList`): all deployment records under the current application project
- Test details page (`/:app/mrList/:mrId`): the test information for the current deployment
- Test media query page (`/:app/:mrId/media`): rich-media information from the current deployment's test, including test screenshots and application screenshots
Conclusion
In this article, we presented a win-win case study of NodeJS helping traditional end-to-end testing break the ice and innovate. On implementation, we analyzed how to run tests in containers, how to hook into the CI/CD pipeline, and how we built the Kossi platform so that any project can plug in.
For front-end developers learning and practicing NodeJS, the key is architecture. We need to be familiar with NodeJS's characteristics, and we need what might be called a "back-end" mindset, an architectural mindset. I believe NodeJS will rise and fall not because of some inherent power, but because certain of its characteristics fit technological trends, or stop fitting them as things change.
I believe that I was born like a bright summer flower: undying, dazzling as fire, bearing the burden of heartbeat and breath, never tiring of it. — Tagore
I think developers will always carry the burden of heartbeat and breath, but will delight in the growth of technology.
Stay tuned for more front-end technology practices and knowledge!
Happy coding!
Bonus link
If you want to explore more front-end knowledge with me, check out my new book: Advancing the Core Knowledge of Front-End Development: From Solid Foundations to Breaking Bottlenecks.
Readers who like and share this post have a chance to get a free sample copy!
The front-end field has been on the rise since roughly 2013; from Backbone to the rise and fall of the big three frameworks, it has become ever harder to sort the field's knowledge into a system. As an observer, practitioner, and thinker through this particular period, I have laid out the front-end skills plainly, to benefit readers of every level. Let's keep pace with the times and face head-on the 33 core topics no front-end engineer can avoid!
**About the book:** The book has 8 parts covering 33 topics, including reinforced JavaScript fundamentals, advanced JavaScript, HTML and CSS, front-end frameworks, front-end engineering, performance optimization, programming thinking and algorithms, and networking. It focuses on both the fundamentals and the advanced skills of front-end development, emphasizes engineering and systematization, and is clearly structured, progressive, and easy to follow. On fundamentals, it pairs standards and specifications with working code; on advanced skills, it digs into the principles and philosophy behind the technology. The worked examples cover many classic interview questions, helping junior developers build a solid foundation and mid-to-senior developers break through bottlenecks.
**About the author:** Hou Ce has worked at well-known Internet companies at home and abroad, including France's ENGIE Group and Baidu, and has rich experience in development and team management. He has spoken at the GIAC Global Internet Architecture Conference and at FDCon2019, China's summit of a thousand front-end developers. He is the author of React State Management and Isomorphic Development.
Contact me: LucasHC