Link: github.com/oopsguy/mic… Translator: Oopsguy
Microservices are getting a lot of attention right now: articles, blogs, discussions on social media, and conference presentations. They are rapidly approaching the peak of the Gartner Hype Cycle. At the same time, there are skeptics in the software community who dismiss microservices as nothing new, claiming that the idea is just a rebranding of service-oriented architecture (SOA). However, despite both the hype and the skepticism, the microservices architecture pattern has significant benefits, especially for the agile development and delivery of complex enterprise applications.
This chapter is one of seven covering the design, build, and deployment of microservices. You’ll learn about the origins of microservices and how they compare with the traditional monolithic application pattern. This ebook describes many aspects of the microservices architecture. You’ll learn about the strengths and weaknesses of the pattern, so you can decide whether it makes sense for your project and, if so, how to apply it.
Let’s start by looking at why you should consider using microservices.
1.1. Building Monolithic Applications
Imagine that you are starting to build a new ride-hailing application intended to compete with Uber and Hailo. After some preliminary meetings and requirements gathering, you create a new project, either manually or by using a generator that comes with a platform such as Rails, Spring Boot, Play, or Maven.
The new application has a modular hexagonal architecture, as shown in Figure 1-1:
Figure 1-1. A simple taxi-hailing application
At the heart of the application is the business logic, implemented by modules that define services, domain objects, and events. Surrounding the core are the adapters that interface with the outside world. Examples of adapters include database access components, messaging components that produce and consume messages, and web components that expose APIs or implement a UI.
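The ports-and-adapters split described above can be sketched in code. This is a minimal illustration, not code from the taxi-hailing application; the names (`TripService`, `TripStore`, `InMemoryTripStore`) and the fare formula are invented for the example.

```python
from abc import ABC, abstractmethod

# Port: an interface defined by the business logic, implemented by adapters.
class TripStore(ABC):
    @abstractmethod
    def save(self, trip_id: str, fare: float) -> None: ...
    @abstractmethod
    def get(self, trip_id: str) -> float: ...

# Core business logic: depends only on the port, not on any technology.
class TripService:
    def __init__(self, store: TripStore):
        self.store = store

    def book_trip(self, trip_id: str, distance_km: float) -> float:
        fare = 2.50 + 1.75 * distance_km  # illustrative fare rule
        self.store.save(trip_id, fare)
        return fare

# Adapter: one concrete implementation of the port. Here it is in-memory;
# a database access component would be another adapter behind the same port.
class InMemoryTripStore(TripStore):
    def __init__(self):
        self._trips = {}
    def save(self, trip_id, fare):
        self._trips[trip_id] = fare
    def get(self, trip_id):
        return self._trips[trip_id]

service = TripService(InMemoryTripStore())
```

The point of the arrangement is that the core never imports an adapter; adapters are plugged in from the outside, which is what makes the logic testable and technology-neutral.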
Despite having a logically modular architecture, the application is packaged and deployed as a single unit. The actual format depends on the application’s language and framework. For example, many Java applications are packaged as WAR files and deployed on application servers such as Tomcat or Jetty. Other Java applications are packaged as self-contained executable JARs. Similarly, Rails and Node.js applications are packaged as directory hierarchies.
Applications written in this style are extremely common. They are easy to develop because our IDEs and other tools are focused on building a single application. They are also easy to test: you can implement end-to-end tests simply by launching the application and exercising the UI with a package such as Selenium. Monolithic applications are also easy to deploy: you just copy the packaged application to a server. You can scale the application by running multiple copies behind a load balancer. In the early stages of the project this works well.
1.2. Marching Toward Monolithic Hell
Unfortunately, this simple approach has major limitations. Successful applications have a tendency to grow over time. During each sprint, your development team implements more user stories, which means adding many lines of code. After a few years, your small, simple application will have grown into a monstrous monolith. To give an extreme example, I recently spoke to a developer who was writing a tool to analyze the dependencies among the thousands of JARs in their multimillion-line-of-code (LOC) application. I’m sure it took the concerted effort of many developers over many years to create such a beast.
Once your application has become a large, complex monolith, your development organization is probably in a world of pain. Any attempt at agile development and delivery will flounder. One major problem is that the application is overwhelmingly complex: it is simply too large for any single developer to fully understand. As a result, fixing bugs and implementing new features correctly becomes difficult and time-consuming. What’s more, this tends to be a downward spiral: if the codebase is difficult to understand, changes won’t be made correctly, and you will end up with a monstrous, incomprehensible big ball of mud.
The sheer size of the application will also slow down development. The larger the application, the longer the startup time. I surveyed developers about the size and performance of their monolithic applications; some reported startup times of as long as 12 minutes, and I’ve also heard anecdotes of applications taking more than 40 minutes to start. If developers regularly have to restart the application server, a large chunk of their day is spent waiting around, and their productivity suffers.
Another big problem is that a complex monolithic application is itself an obstacle to continuous deployment. Today, the state of the art for SaaS applications is to push changes into production many times a day. This is very difficult to do with a complex monolith, because you must redeploy the entire application in order to update any one part of it. The lengthy startup times mentioned earlier don’t help either. Also, because the impact of a change is usually not well understood, you will most likely need to do extensive manual testing. Consequently, continuous deployment is next to impossible.
Monolithic applications can also be difficult to scale when different modules have conflicting resource requirements. For example, one module might implement CPU-intensive image-processing logic and would ideally be deployed on Amazon EC2 Compute Optimized instances. Another module might be an in-memory database best suited to EC2 Memory-Optimized instances. Because these modules are deployed together, you have to compromise on the choice of hardware.
Another problem with monolithic applications is reliability. Because all modules run within the same process, a bug in any module, such as a memory leak, can potentially bring down the entire process. Moreover, since all instances of the application are identical, that bug will impact the availability of the entire application.
Last but not least, monolithic applications make adopting new frameworks and languages extremely difficult. For example, let’s say you have 2 million lines of code written using the XYZ framework. It would be extremely expensive (in both time and cost) to rewrite the entire application using the newer ABC framework, even if that framework were considerably better. As a result, there is a huge barrier to adopting new technologies: you are stuck with whatever technology choices you made at the start of the project.
To summarize: you have a successful business-critical application that has grown into a monstrous monolith that few, if any, developers understand. It is written using obsolete, unproductive technology, which makes hiring talented developers difficult. The application is difficult to scale and is unreliable. As a result, agile development and delivery of the application are impossible.
So what can you do?
1.3. Microservices: Tackling the Complexity
Many organizations, such as Amazon, eBay, and Netflix, have addressed this problem with what is now called the microservices architecture model, rather than building a bloated monolithic application. The idea is to decompose your application into a set of smaller interconnected services.
A service typically implements a set of distinct features or functionality, such as order management, customer management, and so on. Each microservice is a mini-application with its own hexagonal architecture, consisting of business logic along with various adapters.
Some microservices expose an API that is consumed by other microservices or by the application’s clients. Other microservices might implement a web UI. At runtime, each service instance is often a cloud virtual machine (VM) or a Docker container.
For example, the system described earlier might decompose as shown in Figure 1-2:
Figure 1-2. A single application decomposed into microservices
Each functional area of the application is now implemented by its own microservice. Moreover, the web application is split into a set of simpler web applications; in our taxi-hailing example, one for passengers and one for drivers. This makes it easier to deploy distinct experiences for specific users, devices, or specialized use cases. Each back-end service exposes a REST API, and most services consume APIs provided by other services. For example, Driver Management uses the Notification service to tell an available driver about a potential trip. The UI services invoke other services in order to render web pages. Services can also use asynchronous, message-based communication. Inter-service communication is covered in more detail later in this ebook.
Some REST APIs are also exposed to the mobile apps used by drivers and passengers. The apps don’t, however, have direct access to the back-end services. Instead, communication is mediated by an intermediary known as an API Gateway. The API Gateway is responsible for tasks such as load balancing, caching, access control, API metering, and monitoring, and can be implemented effectively using NGINX. Chapter 2 discusses API gateways in detail.
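As a sketch of how such a gateway can route each API prefix to its own back-end service, here is a minimal NGINX configuration fragment. The upstream names, addresses, and paths are hypothetical, not from the actual taxi-hailing application:

```nginx
# Hypothetical API gateway: each API prefix is proxied to its own service.
upstream trip_service      { server 10.0.0.10:8080; }
upstream driver_service    { server 10.0.0.11:8080; }
upstream passenger_service { server 10.0.0.12:8080; }

server {
    listen 80;

    location /api/trips/      { proxy_pass http://trip_service; }
    location /api/drivers/    { proxy_pass http://driver_service; }
    location /api/passengers/ { proxy_pass http://passenger_service; }

    # The gateway is also a natural place to add caching,
    # access control, API metering, and monitoring.
}
```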
Figure 1-3. Scale Cube in development and delivery
The microservices architecture pattern corresponds to the Y-axis of the Scale Cube, a 3-D scalability model from the book The Art of Scalability. The other two axes are X-axis scaling, which consists of running multiple identical copies of the application behind a load balancer, and Z-axis scaling (or data partitioning), where an attribute of the request (for example, the primary key of a row or a customer’s identity) is used to route the request to a particular server.
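Z-axis partitioning can be sketched as a routing function that maps a request attribute to one of N identical server pools. This is a toy illustration; the shard count and server names are invented:

```python
import hashlib

# Identical copies of the application, each owning a slice of the data.
SHARDS = ["server-a", "server-b", "server-c"]

def route(customer_id: str) -> str:
    """Z-axis scaling: every server runs the same code, but each one
    handles only the subset of customers that hash to it."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the hash is deterministic, requests for the same customer always land on the same shard, which is what lets each server keep just its own slice of the data.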
Applications typically use the three types of scaling together. The Y-axis splits the application into microservices, as shown in Figure 1-2.
At runtime, X-axis scaling runs multiple instances of each service behind a load balancer for throughput and availability. Some applications might also use Z-axis scaling to partition the services. Figure 1-4 shows how the Trip Management service might be deployed with Docker running on Amazon EC2.
Figure 1-4. Deploying the journey management service with Docker
At runtime, the Trip Management service consists of multiple service instances, each of which is a Docker container. In order to be highly available, the containers run on multiple cloud VMs. In front of the service instances is a load balancer such as NGINX that distributes requests across the instances. The load balancer can also handle other concerns, such as caching, access control, API metering, and monitoring.
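A minimal NGINX load-balancing configuration for the kind of deployment shown in Figure 1-4 might look like the following sketch; the instance addresses and port are illustrative:

```nginx
# Hypothetical: three Trip Management containers behind one load balancer.
upstream trip_management {
    least_conn;              # send each request to the least-busy instance
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://trip_management;
    }
}
```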
The microservices architecture pattern significantly impacts the relationship between the application and the database. Rather than sharing a single database schema with other services, each service has its own database schema. On the one hand, this approach is at odds with the idea of an enterprise-wide data model, and it often results in some duplication of data. However, having a database schema per service is essential if you want to benefit from microservices, because it ensures loose coupling. Figure 1-5 shows the database architecture for the example application.
Each of the services has its own database. Moreover, a service can use the type of database that best suits its needs, the so-called polyglot persistence architecture. For example, Driver Management, which finds drivers close to a potential passenger, must use a database that supports efficient geo-queries.
Figure 1-5. Database architecture of taxi application
On the surface, the microservices architecture pattern is similar to SOA. With both approaches, the architecture consists of a set of services. However, one way to think about the microservices architecture pattern is that it is SOA without the commercialization and perceived baggage of the web service specifications (WS-*) and an Enterprise Service Bus (ESB). Microservices-based applications favor simpler, lightweight protocols such as REST rather than WS-*. They also very much avoid using ESBs, instead implementing ESB-like functionality in the microservices themselves. The microservices architecture pattern also rejects other parts of SOA, such as the concept of a canonical schema.
1.4. Advantages of microservices
The microservices architecture pattern has a number of important benefits. First, it tackles the problem of complexity. It decomposes what would otherwise be a monstrous monolithic application into a set of services. While the total amount of functionality is unchanged, the application has been broken up into manageable chunks or services. Each service has a well-defined boundary in the form of a remote-procedure-call (RPC)-driven or message-driven API. The microservices architecture pattern enforces a level of modularity that in practice is extremely difficult to achieve with a monolithic codebase. Consequently, individual services are much faster to develop and much easier to understand and maintain.
Second, this architecture enables each service to be developed independently by a team that is focused on that service. The developers are free to choose whatever technologies make sense, provided that the service honors the API contract. Of course, most organizations would want to avoid complete anarchy by limiting technology options. However, this freedom means that developers are no longer obligated to use the possibly obsolete technologies that existed at the start of a new project. When writing a new service, they have the option of using current technology. Moreover, since services are relatively small, it becomes feasible to rewrite an old service using current technology.
Third, the microservices architecture pattern enables each microservice to be deployed independently. Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The microservices architecture pattern makes continuous deployment possible.
Finally, the microservices architecture pattern enables each service to be scaled independently. You can deploy just the number of instances of each service that satisfies its capacity and availability constraints. Moreover, you can use the hardware that best matches a service’s resource requirements. For example, you can deploy a CPU-intensive image-processing service on EC2 Compute Optimized instances and an in-memory database service on EC2 Memory-Optimized instances.
1.5. Disadvantages of microservices
As Fred Brooks wrote almost 30 years ago in The Mythical Man-Month, there are no silver bullets. Like every other technology, the microservices architecture pattern has drawbacks. One drawback is the name itself. The term microservice places excessive emphasis on service size. In fact, some developers advocate building extremely fine-grained 10-100 LOC (lines of code) services. While small services are preferable, it’s important to remember that they are a means to an end, not the primary goal. The goal of microservices is to sufficiently decompose the application in order to facilitate agile application development and deployment.
Another major drawback of microservices is the complexity that arises from the fact that a microservices application is a distributed system. Developers need to choose and implement an inter-process communication mechanism based on either messaging or RPC. Moreover, they must also write code to handle partial failure, since the destination of a request might be slow or unavailable. While none of this is rocket science, it is much more complex than in a monolithic application, where modules invoke one another via language-level method or procedure calls.
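As a sketch of the kind of partial-failure handling code the text refers to, here is a minimal retry-with-backoff wrapper. In a real service the remote call would be an HTTP request with an explicit timeout (and often a circuit breaker); the function names and the simulated flaky dependency below are invented for illustration:

```python
import time

def call_with_retries(remote_call, attempts=3, backoff_s=0.01):
    """Retry a flaky remote call a few times before giving up.
    A production version would also set a request timeout and
    possibly wrap the call in a circuit breaker."""
    last_error = None
    for attempt in range(attempts):
        try:
            return remote_call()
        except ConnectionError as e:
            last_error = e
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_error

# Simulated dependency that fails twice and then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"
```

In a monolith this entire ceremony is replaced by an ordinary method call, which is exactly the complexity gap the paragraph describes.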
Another challenge with microservices is the partitioned database architecture. Business transactions that update multiple business entities are fairly common. These kinds of transactions are trivial to implement in a monolithic application because there is a single database. In a microservices-based application, however, you need to update multiple databases owned by different services. Using distributed transactions is usually not an option, and not only because of the CAP theorem; they simply aren’t supported by many of today’s highly scalable NoSQL databases and message brokers. You end up having to use an eventual-consistency-based approach, which is more challenging for developers.
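The eventual-consistency approach can be sketched with an in-memory event queue standing in for a message broker: each service updates only its own store inside a local transaction and publishes an event, and the other service catches up by consuming events. All names and stores here are illustrative:

```python
from collections import deque

events = deque()          # stands in for a message broker
order_db = {}             # Order service's own data store
customer_db = {"alice": {"open_orders": 0}}  # Customer service's store

def place_order(order_id, customer):
    # Order service: local update plus a published event, no
    # distributed transaction across the two databases.
    order_db[order_id] = {"customer": customer, "status": "NEW"}
    events.append(("OrderCreated", customer))

def run_customer_consumer():
    # Customer service: eventually applies the events it cares about.
    while events:
        kind, customer = events.popleft()
        if kind == "OrderCreated":
            customer_db[customer]["open_orders"] += 1

place_order("o1", "alice")
```

Between `place_order` and the consumer run, the two stores disagree; that window of inconsistency is precisely the extra burden on developers that the text mentions.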
Testing a microservices application is also more complex. For example, with a modern framework such as Spring Boot it is trivial to write a test class that starts up a monolithic web application and tests its REST API. In contrast, a similar test class for a service would need to launch that service and any services it depends upon, or at least configure stubs for those services. Once again, this is not rocket science, but it’s important not to underestimate the complexity of doing it.
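A sketch of the stubbing approach: instead of starting the real dependency, the test injects a fake client that answers the way the dependency would. These class names are invented for illustration and are not Spring Boot APIs:

```python
class FareClient:
    """The real implementation would call the Fare service over HTTP."""
    def quote(self, distance_km: float) -> float:
        raise NotImplementedError("requires a running Fare service")

class StubFareClient(FareClient):
    """Test stub: canned answer, no network, no running service."""
    def quote(self, distance_km: float) -> float:
        return 5.0

class TripBooking:
    """Service under test; it depends on the Fare service via a client."""
    def __init__(self, fare_client: FareClient):
        self.fare_client = fare_client
    def price(self, distance_km: float) -> float:
        return self.fare_client.quote(distance_km) + 1.0  # booking fee

booking = TripBooking(StubFareClient())
```

The stub keeps the test fast and self-contained, at the cost of having to keep the stub's behavior faithful to the real service's contract.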
Another major challenge with the microservices architecture pattern is implementing changes that span multiple services. For example, let’s imagine that you are implementing a story that requires changes to services A, B, and C, where A depends upon B and B depends upon C. In a monolithic application you could simply change the corresponding modules, integrate the changes, and deploy them in one go. In contrast, in a microservices architecture you need to carefully plan and coordinate the rollout of changes to each of the services. For example, you would need to update service C, then service B, and finally service A. Fortunately, most changes typically affect only one service; multi-service changes that require coordination are relatively rare.
Deploying a microservices-based application is also more complex. A monolithic application is simply deployed on a set of identical servers behind a traditional load balancer. Each application instance is configured with the locations (hosts and ports) of infrastructure services such as the database and a message broker. In contrast, a microservices application typically consists of a large number of services. For example, Hailo has 160 different services and Netflix has more than 600, according to Adrian Cockcroft.
Each service will have multiple runtime instances, and that means many more moving parts that need to be configured, deployed, scaled, and monitored. In addition, you will need to implement a service discovery mechanism that enables a service to discover the locations (hosts and ports) of any other services it needs to communicate with. Traditional trouble-ticket-based and manual approaches to operations cannot scale to this level of complexity. Consequently, successfully deploying a microservices application requires greater control of deployment methods by developers and a high level of automation.
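A service discovery mechanism can be sketched as a registry that maps service names to instance locations. This is a toy in-memory version; production systems use purpose-built tools such as Consul, etcd, ZooKeeper, or DNS SRV records, and the service name and addresses below are invented:

```python
import random

class ServiceRegistry:
    """Toy in-memory service registry for illustration only."""
    def __init__(self):
        self._instances = {}

    def register(self, name: str, host: str, port: int) -> None:
        """Each service instance registers its location at startup."""
        self._instances.setdefault(name, []).append((host, port))

    def lookup(self, name: str) -> tuple:
        """Clients ask the registry where to find a service.
        Picking a random instance gives crude client-side load balancing."""
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances of {name!r} registered")
        return random.choice(instances)

registry = ServiceRegistry()
registry.register("trip-management", "10.0.1.10", 8080)
registry.register("trip-management", "10.0.1.11", 8080)
```

A real registry must also handle deregistration and health checking, so that callers are never routed to instances that have died.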
One approach to automation is to use an off-the-shelf Platform-as-a-Service (PaaS) such as Cloud Foundry. A PaaS provides developers with an easy way to deploy and manage their microservices. It insulates them from concerns such as procuring and configuring IT resources. At the same time, the systems and network professionals who set up the PaaS can ensure compliance with best practices and with company policies.
Another way to automate the deployment of microservices is to develop what is essentially your own PaaS. A typical starting point is to use a clustering solution such as Kubernetes in conjunction with a container technology such as Docker. Later in this ebook we will see how software-based application delivery approaches like NGINX, which handle caching, access control, API metering, and monitoring at the microservice level, can help solve this problem.
1.6. Summary
Building complex applications is inherently difficult. The monolithic architecture pattern only makes sense for simple, lightweight applications; you will end up in a world of pain if you use it for complex applications. The microservices architecture pattern is the better choice for complex, evolving applications, despite its drawbacks and implementation challenges.
In later chapters, I’ll cover aspects of microservices architecture and explore strategies such as service discovery, service deployment solutions, and refactoring monolithic applications into services.
Microservices: NGINX Plus as a reverse proxy server
By Floyd Smith
NGINX is used by more than 50% of the 10,000 busiest websites, largely because of its capabilities as a reverse proxy server. You can put NGINX in front of your current applications, and even database servers, to gain all sorts of benefits: higher performance, greater security, scalability, flexibility, and more, with few or no changes to your existing application code, only configuration. For sites under performance stress, or those anticipating high loads in the future, the effect can seem almost miraculous.
So what does this have to do with microservices? Implementing a reverse proxy server, and using the other capabilities of NGINX, gives you architectural flexibility. The reverse proxy server, static and application file caching, and SSL/TLS and HTTP/2 termination can all be offloaded from your application. NGINX can also serve as a load balancer, a crucial role in most microservices implementations, leaving the application free to do what it is actually supposed to do. The advanced features of NGINX Plus, which include sophisticated load-balancing algorithms, multiple methods of session persistence, and management and monitoring, are especially useful with microservices. (NGINX also recently added service discovery support using DNS SRV records, a cutting-edge feature.) And, as described in this chapter, NGINX can help automate the deployment of microservices.
In addition, NGINX provides the necessary functionality to power the three models in the NGINX Microservices Reference Architecture. The Proxy Model uses NGINX as an API gateway; the Router Mesh model uses an additional NGINX instance as a hub for inter-process communication; and the Fabric Model uses one NGINX instance per microservice to control HTTP traffic and, quite radically, implements SSL/TLS between microservices.