Are containers really useful for front-end development? The answer is yes.

When I first told my front-end classmates about container technology, many of them said, “Containers? Isn’t that back-end technology? I don’t know anything about it, and it has nothing to do with front-end development.”

But the “front end” we are discussing today is no longer the front end in the traditional sense. This shows first in the diversity of client types, such as iOS, Android, and mini programs; in addition, with the rise of Node.js and related technologies, the boundary of front-end development keeps extending toward the server side. In this big-front-end era, every successful Internet product keeps exploring how to develop applications in an engineered, service-oriented, and automated way, and how to achieve continuous iteration, high availability, and high concurrency for the business. Maturing container technology greatly improves the efficiency of this process.

Based on our hands-on experience building front-end applications on the Mafengwo container platform, this article introduces the design and implementation principles behind the platform, the results we have achieved, and how we addressed the problems we ran into.

Where containers meet the front end

Generally speaking, the front-end development process goes like this: create a service/project → local development → testing in the development environment → testing in the production environment → gray release in production → go live.

The advantage of developing on a containerized platform is that the front end and back end are completely decoupled: we only need to care about building the front-end project, rather than packaging it together with back-end code. Each build and each access rule is also independent, so one failed build does not affect the builds or access of the others.

So, where do containers meet the front end? Where do their advantages come into play in front-end application development? We can look at three stages: development, testing, and production.

The development stage

Containers eliminate environmental differences between online and offline, ensuring consistency and standardization across the application lifecycle. For front-end development, though, the job is usually to present content and respond to user input; what we handle are static resources such as HTML, JS, and CSS, which are sent directly to the client and need no runtime environment. At first glance, there seems to be no need for containers.

What about build time? After all, different projects are built on different Node versions, and different containers can run different Node versions without polluting the local Node environment. But even without containers, the front end can use nvm to manage Node versions, switching between them with one or two commands. Given how convenient local development already is, containers really do not seem necessary here.
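
For example, switching versions with nvm really does take only a couple of commands (the version numbers below are arbitrary examples):

```bash
# Install and switch Node versions per project with nvm
nvm install 14   # project A builds on Node 14
nvm install 16   # project B builds on Node 16
nvm use 14       # switch the current shell to Node 14 before building project A
```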

Arguably, containers do not make the development stage itself any more convenient for the front end. So if you are not familiar with container technology, there is no need to force containers into development.

The testing stage

In the past, a common virtual-machine-based testing solution was for front-end developers to upload their code to a directory on a VM, where QA could test it directly through a domain name. The problem is that companies have many product lines, and many projects may be under test at the same time. VMs consume a large amount of system resources, are limited in number, and are hard to scale, which hurts test efficiency.

This concern disappears on a containerized platform. Containers are lightweight, low-overhead, and fast to start, and they can be scaled out rapidly, so there is no need to worry about running short of test environments.

The production stage

Another advantage of containers is that they enable versioning of applications. Inevitably, for example, we sometimes go live and then discover that the new version has problems and must be rolled back. The traditional way is to roll back through Git or SVN, but once code has been merged, rolling it back or splitting it out is hard to operate, and redeploying is time-consuming.

On a containerized platform, we can switch traffic back to the old version directly through traffic control, which takes only a few seconds and greatly improves rollback efficiency.

Another important front-end performance indicator is page load time. A blank first screen is very damaging to the user experience, especially during promotional campaigns, when almost all traffic is directed to the activity page and a blank screen is disastrous. In the VM days, when we found that a server was down, we had to restart it; but restarting a machine carries a lot of uncertainty, and it is not uncommon for the machine simply not to come back up.

On a containerized platform, however, a container is just a process. If a machine goes down, the cluster can quickly bring the service up on another node within seconds, with little concern about user access problems.

In summary, compared with VMs, containers have major advantages in rapid scaling, second-level rollback, and stable operation and maintenance. For front-end development, the greater significance of containerization is to ensure rapid business iteration and the stability of online services.

Container knowledge that the front end needs to know

You’ve already got a sense of the changes container technology brings to front-end development. To use this technology well, front-end developers should also have some basic knowledge of containers.

What is a container

Let’s start by looking at what a container is and why it is lightweight and performant. The following image gives an intuitive comparison between virtual machines and containers:

Virtual machines (VMs) run a hypervisor on a physical server to emulate hardware, making fuller use of the server’s capacity. Each VM has its own kernel running a separate operating system, which handles things like process management and memory management at startup. For a front-end application that may only need a static Nginx server, a virtual machine is far too heavy.

Containers are lightweight because they have no hypervisor layer and no kernel of their own: every container shares the host’s kernel and system calls. A container therefore holds only the minimum set of files a program needs to run, and starting a container is just starting a process, which costs fewer resources and is easier to maintain.

Images, containers, and Docker

These three words come up constantly in discussions of container technology; here is what they mean and how they relate to each other.

Image: can be understood simply as a stack of file-system layers, or a collection of directories. For our front-end code, for example, the bottom layers might hold the binaries Nginx needs to run, and on top of them sits our code, such as index.html. Once the image is generated, all of its layers are read-only, and files in any layer cannot be modified.

Container: just one more directory added on top of the image described above. It starts out empty; the difference is that this top layer is readable and writable. In other words, container = image + read/write layer.

For example, if I want to modify the index.html above, the change is written into this new top layer rather than into the image itself. After a container is created, all changes go to the topmost writable layer; the lower layers can never be written to, only stacked upon, like building blocks. The container can never modify the original image, which is why one image can be shared by many containers.
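
As a minimal sketch of this layering, a front-end image like the one described above could come from a Dockerfile along these lines (the base image tag and paths are illustrative, not our actual build):

```dockerfile
# Bottom layers: the Nginx runtime, pulled in as read-only layers
FROM nginx:alpine
# One more read-only layer holding our build output, e.g. index.html
COPY dist/ /usr/share/nginx/html/
# Each instruction adds a read-only layer; at run time the container stacks
# a thin writable layer on top, so the image itself is never modified.
```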

Docker: container technology has actually existed for a long time. Docker is a tool for implementing containerization and is currently the most common one in the industry. It helps us build images, run those images as containers, and manage them.
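
Turning the Dockerfile above into an image, and the image into a container, might look like this (the image name and port mapping are examples):

```bash
docker build -t frontend-app:v1 .          # bake the layers into an image
docker run -d -p 8080:80 frontend-app:v1   # run the image; the container is just a process
```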

How a container platform empowers the front end

With these simple concepts in place, let’s look at the overall architecture of the Mafengwo container platform: how we empower the front end and what capabilities we give it.

We built a container cloud platform on Docker and Kubernetes, abstracting out capabilities such as application building, deployment, resource scheduling, and application management, and providing them to developers as services to improve the stability and efficiency of online services. The diagram below shows the life cycle of a front-end application on the containerized platform from the application’s perspective:

Application center

Applications are the basic objects that the container cloud platform operates on. A big benefit of the cloud platform is that it masks the type of project: it makes no distinction between front end and back end, so under the shell of an “application”, front-end code and back-end code enjoy exactly the same services. Capabilities that traditionally belonged to back-end applications can now be granted to the front end as well, letting front-end developers focus on business development without worrying about the underlying implementation.

This is a creation page in the Application Center. In just a few steps, an application can be created and hosted on our cloud platform:

Version management

Once the application is created, the next step is to build a version. With containers, we package the application, its configuration, dependencies, and so on into a code image, and then tell the online servers how to run it as a container. A version therefore consists of a code image plus runtime configuration.

1. Code image

We use Drone, a pipeline- and Docker-based CI tool that is very flexible and easy to extend. Drone’s flexibility shows in its pipeline configuration: the image-build process of a project is controlled through its .drone.yml file.

To better support company-level applications, we built a common base image with some internally used packages injected into it. CI tasks such as unit testing and bug detection run alongside the build.
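
A pipeline of this shape could be described in .drone.yml roughly as follows; the step names, image tags, and registry URL are illustrative, not our actual configuration:

```yaml
kind: pipeline
type: docker
name: frontend-build

steps:
  - name: install-and-test
    image: node:16            # pin the Node version the project needs
    commands:
      - npm install
      - npm test              # CI checks such as unit tests run with the build
  - name: build
    image: node:16
    commands:
      - npm run build
  - name: publish-image
    image: plugins/docker     # Drone's Docker plugin builds and pushes the image
    settings:
      repo: registry.example.com/frontend-app
      tags: ${DRONE_TAG}
```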

2. Runtime configuration

Runtime configuration is divided into Nginx configuration and deployment runtime configuration.

(1) Nginx configuration

Nginx configuration primarily serves Node front-end projects. Opening up the Nginx configuration to applications has several benefits (see the sketch after this list):

  • Front-end developers can configure history mode themselves, with no need to ask server-side colleagues for help.

  • Customize multiple locations. For multi-page applications, Nginx can be configured to forward specified routes to specified entry files.

  • Custom cache policies. More flexible cache policies improve the user experience and reduce the request load on the server.
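
As an illustration, the three points above might translate into Nginx directives along these lines (the paths and cache lifetimes are examples, not platform defaults):

```nginx
# 1. History mode for a single-page app: unknown paths fall back to index.html
location / {
    try_files $uri $uri/ /index.html;
}

# 2. Multi-page app: route a path prefix to its own entry file
location /admin {
    try_files $uri /admin/index.html;
}

# 3. Custom cache policy: long-lived caching for hashed static assets
location ~* \.(js|css|png|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, max-age=2592000";
}
```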

(2) Deployment runtime configuration

The deployment runtime configuration tells the platform how to run the release package. It also leaves room for extension, so that in the future we can deploy to various platforms such as Kubernetes or KVM hosts.
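
As a purely hypothetical sketch (the field names are invented for illustration and are not the platform’s actual schema), such a configuration might look like:

```yaml
replicas: 3          # how many instances to run
port: 80             # the port the release package listens on
resources:
  cpu: 500m
  memory: 256Mi
healthCheck:
  path: /health
  intervalSeconds: 10
```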

In summary, we implemented the following capabilities in the version management section:

  • Configuration-file driven; one application, multiple replicas; flexible and easy to scale

  • Nginx configuration opened up to applications, following the DevOps idea of efficient empowerment

  • Standardized releases: build once, run anywhere

Deployment management

Next, we deploy the built version package to run on the cluster.

There may be many machines online; V1, V2, and V3 refer to different versions, and each version can have multiple instances. If a service fails, two main mechanisms keep it stable and performant (see the sketch after this list):

  • Efficient scheduling: the Kubernetes scheduler places the containers to be run on the most suitable nodes that meet their resource requirements

  • Multi-replica support: multiple replicas of a containerized application are deployed automatically and continuously monitored; if a container dies, a replacement replica is started automatically

As in the blank-screen example mentioned earlier, the container platform keeps watching the containers, and if a service fails, it is quickly brought up on another node. Note that “multiple replicas” does not simply mean starting several copies on two machines: if both machines are in the same cabinet, or even the same machine room, multiple replicas are meaningless.
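
Since the platform is built on Kubernetes, one way to express both the multi-replica requirement and the “not in the same cabinet or machine room” constraint is a Deployment with pod anti-affinity. The following is a minimal sketch with illustrative names, not our production manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
spec:
  replicas: 3                      # multiple replicas, restarted automatically on failure
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      affinity:
        podAntiAffinity:
          # Prefer placing replicas in different zones so they do not share
          # a single cabinet or machine room
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: frontend-app
                topologyKey: topology.kubernetes.io/zone
      containers:
        - name: web
          image: registry.example.com/frontend-app:v2   # hypothetical image
          ports:
            - containerPort: 80
```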

At this point, the service is deployed online and running stably. But deployment alone does not mean users can access it, let alone access the correct version. The next step is service governance.

Service governance

Service governance is a broad concept with many application scenarios. One of the things it does is direct users to the specified online version.

Technical solution

First, the implementation principle:

We use a gateway that supports the xDS protocol. When a new configuration is pushed to the gateway over xDS, the gateway hot-updates or hot-restarts and adapts to the new configuration automatically. If we want traffic to point to V2, we just push the latest configuration to the gateway via xDS and it applies the new configuration, so the specified version goes live.

For the push we use the Pilot component, optimized for push speed. Pilot continuously watches the configuration data and pushes it out as soon as it detects a change.
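
For a sense of what gets pushed, here is a simplified Envoy-style route configuration of the kind a Pilot-like component might deliver over xDS (the domain and cluster names are illustrative):

```yaml
virtual_hosts:
  - name: frontend
    domains: ["app.example.com"]
    routes:
      - match: { prefix: "/" }
        route: { cluster: frontend-v2 }   # re-push with frontend-v1 to roll back
```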

Application scenarios

We mainly apply this design in three scenarios: rollback, traffic splitting, and ABTest.

1. Rollback

Rollback relies on traffic control. Suppose, for example, that the gateway initially points to version V2:

If a problem appears, I just push a new configuration to the gateway and it points back to the previous version within seconds:

2. Traffic splitting

Traffic splitting is mainly used in the testing scenario mentioned at the beginning of the article. In the past, when virtual machines were used, different VMs had different domain names, so front-end developers either had to modify their code to fit the VM, or QA and product colleagues had to edit their own hosts files, which was very inconvenient.

With containers, suppose the default version is V2 but we need to test V1 or V3. We can push a configuration to the gateway telling it to forward a request to V1 whenever the request’s cookie contains the identifier V=V1; likewise, a cookie containing V=V3 sends the request to V3. All forwarding happens at the gateway layer.
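
Sketched in the same Envoy-style route syntax (names illustrative), the cookie rule could look like this: a request whose cookie contains V=V1 is routed to the V1 cluster, and everything else falls through to the default V2 route.

```yaml
routes:
  - match:
      prefix: "/"
      headers:
        - name: cookie                       # match on the request's cookie header
          string_match: { contains: "V=V1" }
    route: { cluster: frontend-v1 }
  - match: { prefix: "/" }                   # default: the current online version
    route: { cluster: frontend-v2 }
```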

To make the service easier to use, we provide a browser plug-in that automatically recognizes the services and versions deployed on the cloud platform. When testing, QA and product colleagues just click on a version, and the system injects the cookie automatically. When a request is then sent to the server, the gateway sees the version carried in the cookie and completes the forwarding:

3. ABTest

In the same way, we can configure rules on a user’s UID to control which version each user sees for ABTest. Many dimensions are supported, such as cookie injection, different headers, and different request methods; it is very flexible.

That is service governance in a nutshell. All in all, access rules are deployed automatically: a front-end developer may only need to push a Git tag, and the version is built and deployed to the development or even production environment, with the whole process invisible to the platform’s users:

  • Automated deployment of access rules: full CI/CD

  • Flexible traffic-splitting strategies: second-level rollback, gray release, ABTest, and more

  • Combined with the Chrome plug-in, a smooth experience

That covers the capabilities we can add to the front end based on a containerized cloud platform. After some time of exploration, the process now runs smoothly, but we inevitably ran into problems along the way.

The 404s we encountered along the way

1. After going live, JS requests return 404

This is bad for the user experience. After investigation, we found that the problem arose because, to achieve high availability, we run multiple gateways.

Because the gateway’s forwarding configuration is delivered by push, there is a time difference between the gateways: some receive the new push earlier, others later. A user’s request reaches one gateway and gets back an HTML page that says which hashed JS file to load. If, unluckily, that JS request lands on a different gateway that is still forwarding to a different version, that is, to a different container, the hash will not match, the file will not be found, and the result is a 404.

This problem is not unique to cloud platforms; any distributed deployment faces it. Our solution is to connect all gateways to the same Pilot, so that a single component pushes to every gateway; since the number of gateways is limited, this is feasible. And because xDS runs over gRPC long connections, pushes are very fast: pushed from one node, the time difference between gateways receiving the configuration can be kept within milliseconds, which has almost no impact. In other words, by the time gateway A has the new configuration, gateway B has it too, so whichever gateway a request hits, it points to the same version, and the online 404s disappear.

2. In the grayscale environment, JS requests return 404

As mentioned above, our grayscale scheme relies on the plug-in setting cookies. In theory, as long as the cookie is configured correctly, requests are forwarded to the specified version. So if my HTML was fine, why did the JS return 404?

Investigation showed that the script tags carried the crossorigin="anonymous" attribute. When a script is loaded with this attribute, the browser sends the JS request without any credentials such as cookies, so the gateway treats it as a request for the default version, that is, the online version. If the page itself came from version V2, the JS request then returns 404.
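
For illustration (the URL and hash are made up), this is the kind of tag that triggers the problem:

```html
<!-- With crossorigin="anonymous", the browser omits cookies from this request,
     so the gateway cannot see the version cookie and routes it to the default
     online version instead of the grayscale one -->
<script src="https://static.example.com/app.abc123.js" crossorigin="anonymous"></script>
```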

Near-term plans

1. Open up the build pipeline as much as possible

Currently, we mainly use npm install and npm run build to build the image. Going forward, we will open up the pipeline as much as possible, including the base image and the Node version, so that front-end developers can implement more customized requirements.

2. Optimize build and deployment times

At present, our image-building scheme does not make good use of Docker’s cache mechanism, which hurts build time. We are working on optimizations to reduce or eliminate most of the npm install and build time.
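
One common technique, shown here as a sketch rather than our final pipeline, is to order the Dockerfile so that the dependency-install layer is cached independently of source changes:

```dockerfile
FROM node:16 AS build
WORKDIR /app
# Copy only the dependency manifests first: this layer and the npm ci layer
# below stay cached as long as package.json / package-lock.json are unchanged
COPY package.json package-lock.json ./
RUN npm ci
# Source changes invalidate the cache only from this point onward
COPY . .
RUN npm run build

# Ship only the build output in a small Nginx image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```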

3. Open up alarm monitoring capabilities

At present, we have built part of the alarm monitoring capability, used mainly by the platform maintenance team to watch QPS, service stability, service restarts, and so on; the team also receives many alarms. We believe, however, that these alarms should go to the person in charge of the service. We will gradually open up this capability and keep improving and optimizing the alarm rules.

Conclusion

Finally, a brief summary:

What does containerization actually empower the front end?

  • Improve test efficiency

  • More stable services and more efficient operation and maintenance

How does the Mafengwo cloud platform further empower the front end?

  • Application center: move to the cloud in one step and enjoy the cloud platform’s services, with no distinction between front end and back end

  • Version management: practice DevOps and open up Nginx configuration; configuration-driven, flexible, and easy to extend

  • Deployment management: intelligent scheduling, stable and high-performance

  • Service governance: second-level rollback, second-level recovery, gray access, ABTest, and many other capabilities

So far, we have explored how containers help the front end complete application development, and how a cloud platform empowers it further. We hope this brings you some inspiration in your technical thinking.

About the author: Zhou Lei, R&D engineer on the infrastructure platform services team at Mafengwo Travel Network.

(Images from the Internet)