What is cloud native? For those of you who have never encountered the term, that is probably the first question that comes to mind. Let me start by quoting the definition from the Cloud Native Computing Foundation (CNCF):
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. These technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs.
The definition is official enough, but perhaps too abstract to be useful on a first reading. Let's take the term apart literally, splitting Cloud from Native:
So what is the Cloud? The Cloud refers to cloud computing, which is essentially a network. In a narrow sense, cloud computing is a network that provides computing resources; in a broad sense, it covers services related to information technology, software, and the Internet. This shared pool of computing resources is what we call the "cloud".
The origin of cloud computing is closely tied to the development and maturation of virtualization. On top of virtual machine technology, IaaS, PaaS, and FaaS were introduced in turn. Compute evolved from physical machines to virtual machines to containers, deployed in public cloud, private cloud, and hybrid cloud modes.
As Docker containerization matured, it elegantly solved elastic scaling: a distributed cluster can expand or shrink its capacity as business traffic spikes or falls, while remaining stable. Eventually, Kubernetes (K8s) took the lead and became the de facto standard of cloud native. There will be more articles on this topic; for now, just know that K8s is the flagship of cloud native, as important as the Linux operating system is to virtual machines.
Native is easier to understand: something is native if it is "born that way", that is, designed from the start for the environment it lives in.
So what do Cloud and Native mean together? Here is a working definition: cloud native means designed natively for the cloud. A cloud native application is designed from the start to run on the cloud in the way that best exploits the cloud's capabilities.
Now that we have this notion of an application, what should an application look like on the cloud? What exactly is this "best way"?
Before answering that, let's look at how applications worked before cloud native:
Prior to cloud native, the underlying platform provided basic runtime resources upward, and the application had to satisfy both business and non-business requirements. For better code reuse, common non-business functionality was usually delivered as class libraries and development frameworks.
Non-business requirements include, for example: communication between services, along with service registration and discovery; message middleware used between services to reduce coupling; and, for service observability, call-chain tracing, log monitoring, and so on.
In addition, in the era of SOA and microservices, some non-business functions also exist as back-end services, which appear inside the application as the client code that calls them. The application then packages these non-business functions, together with its own business code, into an ever-larger deployable.
Back to cloud native design: cloud support should let the application focus more on the business itself.
Here, the functions serving non-business requirements are moved into the cloud, and the middleware that supports them is pushed down into the cloud infrastructure. Taking inter-service communication as an example, the middleware needs to do the following things:
That seems like a lot, and these functions are tightly coupled to the application. Is there a better way to provide them? Introducing them via an SDK may look simpler:
The idea behind the SDK approach is a capable, heavyweight client that encapsulates all the middleware functionality needed for inter-service communication. But at the end of the day, it is still intrusive to the application.
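To make the intrusiveness concrete, here is a minimal sketch of such a "fat" SDK client. All names (ServiceRegistry, RpcClient) are illustrative, not a real library: the point is that discovery and load balancing live inside the application process, so every service must embed and upgrade this code.

```python
import random

class ServiceRegistry:
    """In-memory stand-in for a registry such as Eureka or Nacos (assumed names)."""
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def discover(self, name):
        return self._services.get(name, [])

class RpcClient:
    """The SDK the application must embed: discovery + load balancing in-process."""
    def __init__(self, registry):
        self.registry = registry

    def call(self, service, payload):
        instances = self.registry.discover(service)
        if not instances:
            raise RuntimeError(f"no instance of {service}")
        target = random.choice(instances)   # client-side load balancing
        return f"sent {payload!r} to {service}@{target}"

registry = ServiceRegistry()
registry.register("order-service", "10.0.0.5:8080")
client = RpcClient(registry)
print(client.call("order-service", {"id": 42}))
```

Every concern handled here (and in real SDKs, also retries, circuit breaking, tracing) is compiled into the application, which is exactly the coupling the next section removes.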
In cloud native, the Service Mesh pattern goes a step further: the SDK client is separated out and moved into a sidecar, making the application lighter.
In this sense, an application that focuses solely on business needs, while the cloud infrastructure (including middleware) provides the required resources and capabilities upward, is closer to what a cloud native application should strive for.
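By contrast, in the sidecar model the application keeps no middleware logic at all: it sends every outbound request to a fixed local proxy address, and discovery, balancing, and retries happen in the sidecar. The sketch below assumes a conventional local proxy port and a hypothetical `build_request` helper; neither is a real mesh API.

```python
SIDECAR_ADDR = "127.0.0.1:15001"  # assumed local sidecar proxy port

def build_request(service, path, body):
    """All the application knows is the logical service name."""
    return {
        "url": f"http://{SIDECAR_ADDR}{path}",
        "headers": {"Host": service},   # logical name; the sidecar resolves it
        "body": body,
    }

req = build_request("order-service", "/orders/42", b"")
print(req["url"])       # every outbound call goes to the local sidecar
print(req["headers"])
```

Compare this with the fat-SDK version: no registry, no load balancer, no retry logic in the application, just a plain request to localhost.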
Once these concepts are understood, it follows that cloud native's impact on the whole industry is revolutionary, because it covers a wide range of areas: development, testing, and operations. Each of those may split into business development and infrastructure development, business testing and infrastructure testing, and so on.
Perhaps you are still wondering why such a technology model exists at all. Why cloud native? The chain of reasoning goes like this:
1. To tame the complexity of monolithic applications, the microservice architecture emerged;
2. To handle the large number of applications deployed under a microservice architecture, container technology appeared;
3. To solve container management and scheduling, Kubernetes (K8s) was born;
4. To remove the microservice framework's intrusion into applications, Service Mesh technology was introduced;
5. To give the Service Mesh better underlying support, the Service Mesh runs on K8s.
Having said all that, those of you who have never touched these technologies may still be a little confused…
What is the Service Mesh pattern? How does inter-service communication work in sidecar mode? If the application only does business logic, how is middleware enabled dynamically?
Going back to the pre-cloud-native world, take inter-service communication as an example: an SDK wraps the middleware needed for inter-service calls. The pattern looks like this:
With the Service Mesh pattern, the sidecar implements complex operations such as service registration and discovery, load balancing, routing, circuit breaking, and rate limiting, and interconnects with the infrastructure and other middleware products. All that is left in the application is, at most, a lightweight sidecar SDK.
(The name comes from the sidecar of a motorcycle, like the three-wheeled motorcycles in war movies: the soldier on the left rides the motorcycle while the commander sits in the sidecar on the right.)
Now that you know what it is, let's take a closer look at how a Service Mesh works:
1. Develop the business application in cloud native style: the application only needs the most basic capabilities, such as providing a service and invoking a service.
2. Inject the sidecar dynamically at deployment time: when we deploy the cloud native application to the cloud, concretely into a K8s Pod, another sidecar container is automatically deployed in the same Pod to empower the application.
3. Change the behavior of the cloud native application at run time: as mentioned above, the caller's request to the service provider is hijacked and redirected to the sidecar. Note that this change is transparent to the application.
4. Implement the various functions in the sidecar: the capabilities of the original SDK client, such as service discovery, load balancing, and routing, are implemented in the sidecar.
5. Because the sidecar implements these capabilities, it can connect to more infrastructure and middleware products; the Service Mesh product thus acts as a bridge between the application and the infrastructure/middleware.
6. The sidecar's behavior can be controlled through the control plane, independently of the application.
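Steps 4–6 can be sketched in a few lines. This is a toy model of the per-request logic a sidecar's data plane performs: resolve the logical service name against a routing table pushed by the control plane, pick an instance by round-robin, and forward. All names are illustrative; real sidecars such as Envoy are vastly more elaborate.

```python
import itertools

class Sidecar:
    def __init__(self):
        self._routes = {}   # service -> list of instance addresses
        self._rr = {}       # service -> round-robin iterator

    def apply_config(self, routes):
        """Called when the control plane pushes a new routing table (step 6)."""
        self._routes = routes
        self._rr = {svc: itertools.cycle(addrs) for svc, addrs in routes.items()}

    def forward(self, service, request):
        """Per-request data-plane work (step 4): resolve, balance, forward."""
        if service not in self._routes:
            raise LookupError(f"unknown service {service}")
        target = next(self._rr[service])   # round-robin load balancing
        return f"{request} -> {target}"

sc = Sidecar()
sc.apply_config({"order-service": ["10.0.0.5:8080", "10.0.0.6:8080"]})
print(sc.forward("order-service", "GET /orders/42"))  # -> 10.0.0.5:8080
print(sc.forward("order-service", "GET /orders/43"))  # -> 10.0.0.6:8080
```

Note that the application never calls `apply_config`; routing tables arrive from the control plane, which is what makes the sidecar's behavior controllable independently of the application.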
Now that you know how a Service Mesh works, does the term "dynamic empowerment" sound a little familiar?
Looking back at the working principles above: in step 2, the sidecar container is dynamically injected into the application's Pod at deployment time; in step 3, while the application remains transparent and unaware, the traffic that would originally have gone to the registry is hijacked and redirected to the locally deployed sidecar, completely changing the invocation behavior.
This run-time injection of the sidecar and transparent hijacking of traffic is what "dynamic empowerment" means.
Want more detail on how dynamic empowerment works? When the Pod starts, traffic-hijacking rules are set up and enabled; the sidecar then processes and forwards traffic according to those rules. The control plane connects the sidecars to each other, so a service call is identified according to the access rules and finally forwarded to the target application.
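The hijacking rules themselves can be modeled very simply. In real meshes they are iptables REDIRECT rules written by an init container when the Pod starts; the port number and mesh UID below are illustrative assumptions, not fixed constants of any product.

```python
SIDECAR_PORT = 15001   # assumed port where hijacked outbound traffic lands
SIDECAR_UID = 1337     # assumed UID the sidecar process runs as

def redirect_target(src_uid, dst_port):
    """Decide where an outbound connection from this Pod actually goes."""
    if src_uid == SIDECAR_UID:
        return dst_port      # the sidecar's own traffic passes through untouched
    return SIDECAR_PORT      # application traffic is hijacked to the sidecar

# The application thinks it is dialing order-service:8080 ...
print(redirect_target(src_uid=1000, dst_port=8080))   # -> 15001
# ... while the sidecar itself is allowed to reach the real destination.
print(redirect_target(src_uid=1337, dst_port=8080))   # -> 8080
```

The UID exemption is the crucial trick: without it, the sidecar's own outbound connections would be hijacked back to itself in an infinite loop.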
The biggest advantage of transparent hijacking is that it is non-intrusive to the code. Other advantages include:
1. Business applications do not need to care up front about the implementation details or invocation styles of these functions, nor depend on an SDK; the capabilities are granted dynamically at run time.
2. Existing applications can run in the Service Mesh without modification, avoiding the complex cycle of changing code, repackaging, and re-releasing.
3. Transparent hijacking supports direct connection (no sidecar), single hop (through one sidecar), and double hop (through two sidecars), which makes development and debugging easier and eases compatibility with existing systems.
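The three connection modes in the last point can be enumerated as the hops a request traverses. This toy sketch assumes the single-hop mode goes through the callee's sidecar; in practice which side the single hop sits on depends on the deployment.

```python
def request_path(caller, callee, mode):
    """List the stations a request passes through under each hijack mode."""
    if mode == "direct":     # no sidecar involved at all
        hops = []
    elif mode == "single":   # one sidecar; assumed here to be the callee's
        hops = [f"{callee}-sidecar"]
    elif mode == "double":   # both sides run a sidecar
        hops = [f"{caller}-sidecar", f"{callee}-sidecar"]
    else:
        raise ValueError(f"unknown mode {mode}")
    return [caller, *hops, callee]

print(request_path("web", "orders", "direct"))
print(request_path("web", "orders", "single"))
print(request_path("web", "orders", "double"))
```

Being able to flip between these modes per service is what makes incremental migration into the mesh possible.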
In fact, Service Mesh literally translates to "service grid", a name that confused me when I first saw it. Once you know how it works, though, connecting the dots makes it easy to understand.
Let's look at transparent hijacking across multiple application services and multiple Pods deployed on K8s:
Doesn't it look like a mesh? The image on the left is a zoomed-in view of a single Pod: the green square is the application container, the blue square is the sidecar container, and the blue lines are communication between services.
This concludes the introduction to cloud native. It mainly covered why cloud native appeared and how it works in the Service Mesh pattern. Of course, there are many advantages that have not been explained in detail.
Cloud native is a huge body of knowledge, and it takes a long time of accumulation and practice to become familiar with it. Exploring Kubernetes, the concrete ways to land a Service Mesh, and how to migrate smoothly from the current state are all topics for later articles. There is still a long way to go; this is only the beginning.