Multiple deployment patterns for microservices architecture
Disclaimer: The main content of this article is from “Microservices Architecture Design Patterns”.
Deployment consists of two interrelated concepts: process and architecture. The deployment process comprises the steps performed by developers and operations staff to release software into production. The deployment architecture defines the structure of the environment in which that software runs.
Applications built on a microservices architecture typically involve multiple service components: a back-end gateway module, a user module, a log module, business modules, and a front-end Node.js module. Taking the author's company as an example, when deploying services we need to communicate and coordinate with each other so that services can invoke one another. In the development environment, a service is generally deployed as a JAR file, packaged locally and released directly to the server. This is quick and convenient, and works well for promptly diagnosing and fixing problems when debugging against other services. However, it also has drawbacks: because the local package contains only the code I have modified, if changes are not committed to Git in time it is easy to omit colleagues' code in collaborative development. My own code may debug cleanly while my colleagues' changes break.
The figure abstractly describes the composition and invocation relationships of the various services in a microservice architecture.
When it comes to deploying to testing and production environments, there is much more to pay attention to. Packaging and uploading by hand leaves big gaps and is not streamlined, and the servers may run services written in a variety of languages and frameworks. Each service is a small application, which means that in a production environment you may have dozens or hundreds of applications. It is no longer feasible for system administrators to configure servers and services manually. Deploying microservices at scale requires a highly automated deployment process and infrastructure.
Production environments must implement four key functions:
- Service management interface: enables developers to create, update, and configure services. Ideally, this interface is a REST API that can be invoked by command-line and graphical deployment tools.
- Runtime service management: ensures that the required number of service instances is always running. If a service instance crashes or becomes unable to process requests, the production environment must restart it. If the host on which instances are running crashes, those instances must be restarted on another host.
- Monitoring: gives developers insight into what a service is doing, including log files and various application metrics, and alerts developers when problems arise. This is also known as observability.
- Request routing: routes user requests to the services.
An overview of deployment patterns
Therefore, this article explores three key deployment patterns and analyzes their benefits and drawbacks.
- Programming language-specific distribution package formats, such as Java JAR or WAR files. I do not recommend this approach, but I introduce it because its significant drawbacks will make you think about and choose other, more rational and modern deployment techniques.
- Deploy services as virtual machines, packaged as virtual machine images that encapsulate the service's technology stack, which simplifies deployment.
- Deploy services as containers, which are more lightweight than virtual machines.
1 Programming language-specific distribution package formats
Suppose you want to deploy a Java application based on Spring Boot. One way is to deploy the service using a programming language-specific package. With this pattern, what is deployed in production and managed by the service runtime is a service in a language-specific distribution package. In the case of Restaurant Service, that is an executable JAR file or a WAR file. For other languages, such as Node.js, a service is a directory of source code and modules. For some languages, such as Go, a service is an operating system-specific executable file.
Pattern: Programming language-specific distribution package format, using programming language-specific software distribution packages to deploy services to production environments.
To deploy a Java application on a machine, you first install the necessary runtime, in this case the JDK. If it is a WAR file, you also need to install a Web container, such as Apache Tomcat. After configuring the machine, copy the distribution package onto it and start the service. Each service instance runs as a JVM process. Ideally, you have a deployment pipeline in place that automatically deploys services to production: it builds the executable JAR or WAR file, then invokes the production environment's service management interface to deploy the new version.
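As a minimal sketch of what such a deploy step might look like, assuming a host reachable as `app-host-1`, a hypothetical `restaurant-service.jar`, and SSH-based access (none of which appear in the original text):

```bash
#!/usr/bin/env bash
# Sketch: copy the freshly built JAR to the host and (re)start it.
# Hypothetical names: app-host-1, /opt/restaurant-service, restaurant-service.jar.
set -euo pipefail

HOST="app-host-1"
JAR="build/libs/restaurant-service.jar"
DEST="/opt/restaurant-service"

# Copy the distribution package to the target machine.
scp "$JAR" "deploy@${HOST}:${DEST}/restaurant-service.jar"

# Stop the old instance (if any) and start the new one as a JVM process.
ssh "deploy@${HOST}" "
  pkill -f 'restaurant-service.jar' || true
  nohup java -jar ${DEST}/restaurant-service.jar > ${DEST}/service.log 2>&1 &
"
```

In a real pipeline, this step would be triggered automatically after the build, rather than run by hand.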
A service instance is usually a single process, but sometimes it can be a group of processes. For example, a Java service instance is a process running a JVM, while a Node.js service may spawn multiple worker processes to handle requests concurrently. Some languages also support deploying multiple service instances in the same process. Sometimes you deploy a single service instance on a machine while retaining the option of deploying multiple instances on the same machine. For example, as shown in the figure, you can run multiple JVMs on a single machine, each running a service instance.
Some languages also allow multiple service instances to run in a single process. For example, as shown in the figure, you can run multiple Java services on a single Apache Tomcat: one process hosting multiple service instances in the same Web container or application server. They may be instances of the same service or of different services. The operating system and runtime overhead is shared across all service instances, but because the instances live in the same process, there is no isolation between them.
The pattern of deploying services as language-specific distributions has both advantages and disadvantages. Let’s look at the benefits first.
The benefits of using programming language-specific distribution package formats for deployment
Deploying a service as a programming language-specific distribution has the following benefits:
- Rapid deployment.
- Efficient resource utilization, especially when running multiple instances on the same machine or in the same process.
Let’s break it down one by one.
Rapid deployment
One of the main benefits of this pattern is the relative speed of deploying a service instance: copy the service to the host and start it. If the service is written in Java, copy the JAR or WAR file; for other languages, such as JavaScript or Ruby, copy the source code. In either case, the number of bytes copied across the network is relatively small. Starting the service also takes very little time. If the service runs in its own process, you simply start it; otherwise, if the service is one of several instances running in the same Web container process (such as Tomcat), it can be dynamically deployed into the container, or the container restarted. Because there is no additional overhead, starting the service is usually fast.
Efficient use of resources
Another major benefit of this pattern is that it can use resources relatively efficiently. Multiple service instances share a machine and its operating system. It is more efficient if multiple service instances are running in the same process. For example, multiple Web applications can share the same Apache Tomcat server and JVM.
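As a small illustration of this sharing, assuming a Tomcat installation at `$CATALINA_HOME` and two hypothetical WAR files (the names are invented for this sketch), several applications can share one server and one JVM:

```bash
# Deploy two services into the same Tomcat instance (and thus the same JVM).
# Tomcat auto-deploys WAR files dropped into its webapps/ directory.
cp restaurant-service.war order-service.war "$CATALINA_HOME/webapps/"
"$CATALINA_HOME/bin/startup.sh"
```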
Disadvantages of using programming language-specific distribution package formats for deployment
While attractive, deploying services as programming language-specific distributions has several notable drawbacks:
- Lack of encapsulation of the technology stack.
- Resources consumed by service instances cannot be constrained.
- Lack of isolation when running multiple service instances on the same machine.
- It is difficult to automatically determine where to place service instances.
Let’s break it down one by one.
Lack of encapsulation of the technology stack
The operations team must understand the nuts and bolts of deploying each service, and each service requires a specific version of the runtime. For example, a Java Web application needs particular versions of Apache Tomcat and the JDK, and the operations team must install the correct version of each required package. Worse, services can be written in a variety of languages and frameworks, and in multiple versions of those languages and frameworks. As a result, the development team must (manually) share many details with the operations team. This communication complexity increases the risk of errors during deployment; for example, a machine might have the wrong version of the language runtime installed.
Resources consumed by service instances cannot be constrained
Another disadvantage is that you cannot constrain the resources consumed by service instances. A process may consume all of a machine's CPU or memory, starving other service instances and the operating system. This is especially likely to happen if something goes wrong inside the service.
Lack of isolation when running multiple service instances on the same machine
The problem is even worse when you run multiple instances on the same machine. The lack of isolation means that misbehaving service instances can affect other service instances. As a result, applications run the risk of being unreliable, especially when running multiple service instances on the same machine.
It is difficult to automatically determine where to place service instances
Another challenge of running multiple service instances on the same machine is deciding where to place them. Each machine has a fixed set of resources (CPU, memory, and so on), and each service instance requires certain resources. Service instances must be assigned to machines in a way that uses the machines efficiently without overloading them. As I'll explain later, VM-based clouds and container orchestration frameworks handle this automatically. When deploying services yourself, you may need to decide placement manually.
As you can see, the pattern of services being deployed as language-specific distributions has some significant drawbacks. This approach should be avoided unless the value of the efficiency achieved trumps all other considerations.
2 Deploy the service as a VM
Pattern: Deploy services as VMs. Deploy services packaged as virtual machine images into the production environment; each service instance is a virtual machine. The virtual machine image is built by the service's deployment pipeline. The pipeline runs a VM image builder, which creates an image containing the service's code and any software needed to run it; for example, the image builder for a Java service installs the JDK and the service's executable JAR. The VM image builder configures the image, using the Linux init system (such as Upstart), to run the application when the virtual machine boots.
The deployment pipeline can use a variety of tools to build virtual machine images. An early tool for creating EC2 AMIs was Netflix's Aminator (https://github.com/Netflix/aminator), which Netflix used to deploy its streaming service on AWS. Packer (https://www.packer.io) is a more modern virtual machine image builder; unlike Aminator, it supports a variety of virtualization technologies, including EC2, DigitalOcean, VirtualBox, and VMware. To create an AMI with Packer, you write a configuration file that specifies the base image and a set of provisioners that install software and configure the AMI.
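As a minimal sketch of such a configuration (the AMI ID, region, and install commands are placeholders invented for illustration, not from the original text), a Packer template for baking a Java service into an AMI might look roughly like this:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "restaurant-service-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y openjdk-11-jre-headless"
      ]
    },
    {
      "type": "file",
      "source": "build/libs/restaurant-service.jar",
      "destination": "/tmp/restaurant-service.jar"
    }
  ]
}
```

Running `packer build template.json` then produces the AMI.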
Let's look at the advantages and disadvantages of this pattern.
Benefits of deploying services as virtual machines
- Virtual machine images encapsulate the technology stack.
- Isolated service instances.
- Use a mature cloud computing infrastructure.
Let’s break it down one by one.
Virtual machine images encapsulate the technology stack
An important benefit of this pattern is that the virtual machine image contains the service and all of its dependencies. This eliminates a source of error by ensuring that the software needed to run the service is correctly installed and configured. Once packaged as a virtual machine, the service becomes a black box that encapsulates its technology stack and can be deployed anywhere without modification. The API for deploying the service becomes the virtual machine management API, making deployment simpler and more reliable.
Isolated service instances
Another benefit of virtual machines is that each service instance runs in complete isolation. After all, this is one of the primary goals of virtual machine technology. Each VM has a fixed amount of CPU and memory resources and cannot steal resources from other services.
Use a mature cloud computing infrastructure
When deploying microservices as virtual machines, you can leverage a mature and highly automated cloud computing infrastructure. Public clouds like AWS try to schedule virtual machines on physical machines in a way that avoids machine overloads. They also provide valuable features, such as traffic load balancing and automatic scaling across virtual machines.
The downside of deploying services as virtual machines
- Resource utilization efficiency is low.
- The deployment speed is relatively slow.
- Additional overhead of system administration.
Resource utilization efficiency is low
Each service instance has the overhead of an entire virtual machine, including its operating system. Moreover, a typical public IaaS offers only a limited set of VM configurations, so a virtual machine may be underutilized. This is less of a problem for Java-based services, which tend to be relatively heavyweight, but this pattern can be an inefficient way to deploy lightweight Node.js and Go services.
The deployment speed is relatively slow
Because of the size of a virtual machine image, it usually takes several minutes to build one; a lot of data must move across the network. Instantiating a virtual machine from an image is also time-consuming, again because the complete image must be transferred over the network, and the operating system inside the VM takes time to boot. Slow, though, is a relative term: this process takes a few minutes, which is much faster than a traditional deployment process, but much slower than the more lightweight deployment patterns you're about to learn.
Additional overhead of system administration
You have to take responsibility for patching the operating system and runtime. This may seem like an inevitable part of system administration when deploying software, but serverless deployment, described later, eliminates this kind of system administration.
Now let's look at a more lightweight alternative for deploying microservices, one that still retains many of the benefits of virtual machines: each container is a sandbox of isolated processes.
3 Deploy the service as a container
Containers are a more modern, lightweight deployment mechanism: an operating-system-level virtualization mechanism. A container typically contains one or more processes running in a sandbox that isolates them from other containers. A container running a Java service, for example, consists of a JVM process. From the point of view of a process running inside a container, it is as if it were running on its own machine. The container usually has its own IP address, which eliminates port collisions: every container's Java process can listen on port 8080, for instance, without conflict. Each container also has its own root filesystem. The container runtime uses operating system mechanisms to isolate containers from each other.
Pattern: Deploy services as containers. Deploy services packaged as container images into production, with each service instance running as a container.
When you create a container, you can specify its CPU and memory resources and, depending on the container implementation, I/O resources as well. The container runtime enforces these limits and prevents a container from hogging its machine's resources. When using a Docker orchestration framework such as Kubernetes, specifying a container's resources is especially important, because the orchestration framework uses the resources requested by a container to select an underlying machine to run it on, ensuring that machines are not overloaded. When building a service, the deployment pipeline uses a container image build tool, which reads the service's code and an image description to create a container image and store it in an image registry. At run time, the container image is pulled from the registry and used to create containers.
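As a minimal sketch (the image name and the specific request/limit values are invented for illustration), a Kubernetes pod spec declares these resources like so:

```yaml
# Sketch of a Kubernetes pod with explicit resource requests and limits.
# The scheduler uses `requests` to pick a node; the runtime enforces `limits`.
apiVersion: v1
kind: Pod
metadata:
  name: restaurant-service
spec:
  containers:
    - name: restaurant-service
      image: registry.example.com/restaurant-service:1.0   # hypothetical image
      ports:
        - containerPort: 8080
      resources:
        requests:
          cpu: "500m"      # half a CPU core, used for scheduling decisions
          memory: "512Mi"
        limits:
          cpu: "1"         # hard ceiling enforced by the container runtime
          memory: "1Gi"
```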
Deploying services with Docker
To deploy a service as a container, it must be packaged as a container image. A container image is a filesystem image containing the application and the software it depends on. It is often a complete Linux root filesystem, though lighter-weight images are also available. For example, to deploy a Spring Boot-based service, you build a container image containing the service's executable JAR and the correct JDK version. Similarly, deploying a Java Web application requires building a container image containing the WAR file, Apache Tomcat, and the JDK. The process consists of the following steps, with a short sketch after the list.
- Build the Docker image
- Push the Docker image to the mirror repository
- Run the Docker container
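A minimal sketch of these three steps for a Spring Boot service (the base image tag, registry address, and JAR name are assumptions, not from the original text):

```dockerfile
# Dockerfile sketch: package the service's executable JAR with a Java runtime.
# eclipse-temurin:17-jre is an assumed base image tag.
FROM eclipse-temurin:17-jre
COPY build/libs/restaurant-service.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Then build, push, and run:

```bash
# Build the image, push it to a (hypothetical) registry, and run a container.
docker build -t registry.example.com/restaurant-service:1.0 .
docker push registry.example.com/restaurant-service:1.0
docker run -d -p 8080:8080 registry.example.com/restaurant-service:1.0
```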
Benefits of deploying services as containers
Deploying a service as a container has several benefits. First, containers have many of the benefits of virtual machines:
- Encapsulation of the technology stack: the container API can be used to manage services.
- Service instances are isolated.
- Service instances' resources are constrained.
But unlike virtual machines, containers are a lightweight technology, and container images can usually be built quickly. For example, packaging a Spring Boot application as a container image takes only a few seconds. Transferring a container image over the network is also relatively fast, mainly because only a subset of the image's layers needs to be transferred: Docker images use a layered filesystem, in which the operating system, the Java runtime, and the application live in different layers, and Docker only transfers layers that do not already exist in the registry. Transferring an image is therefore especially fast when only the application layer has changed. Containers also start quickly, because there is no lengthy operating system boot process: when a container starts, what runs is the service.
Disadvantages of deploying services as containers
One obvious drawback of containers is the amount of container image management you take on: you are responsible for patching the operating system and the language runtime. In addition, unless you use a managed container solution (such as Google Container Engine or AWS ECS), you must administer both the container infrastructure and, potentially, the virtual machine infrastructure it runs on.