Author: shiva


Translator: Ye Zhan


Proofreading: Wang Kai


Source: ServiceMesher community

Original: medium.com/@tak2siva/w…

Unless you’ve been living under a rock for a long time, you’ve probably heard of Kubernetes, which has been hailed as a recipe for success at fast-growing Internet companies. Another of the hottest topics of recent years is the Service Mesh, which these same companies are adopting to solve specific problems. So if you want to understand what a Service Mesh is, this article aims to give you a clear explanation.

Evolution of Internet applications

To understand the importance of the Service Mesh, let’s briefly review the evolution of Internet applications, stage by stage.

Stage 0: Monolithic applications

Remember those years? The entire codebase was packaged into a single executable, deployable artifact. Of course, this approach still works well in some scenarios. But for fast-growing Internet companies, the monolith becomes an obstacle to scalability, rapid deployment, and clear ownership of the application.

Stage 1: Microservices

The idea behind microservices is simple: break the monolithic application into modules along SLA (Service Level Agreement) boundaries. This approach proved effective in practice and was widely adopted by enterprises. Now each team is free to design its microservice in its favorite language, framework, and so on. The architecture then starts to look something like this.

We used to joke on one of my projects that we had microservices in every language 🙂

While microservices solved some of the problems of monolithic applications, they brought the company some serious new ones:

  • Defining a VM (virtual machine) specification for each microservice

  • Maintaining system-level dependencies such as operating system versions, automation tools (e.g., Chef), and so on

  • Monitoring each service

This is a nightmare for the people responsible for the build and deployment.

Moreover, these services share the same OS within a virtual machine, but for portability they need to be isolated from one another or packaged into separate VM images. A typical microservice architecture of this kind is shown in the figure below:

However, running each service/replica on a separate virtual machine is very expensive.

Stage 2: Containerization

Containers are an operating-system-level virtualization technology that uses Linux cgroups and namespaces to give different applications isolated runtime environments while sharing the host operating system. Docker is currently the most popular container runtime.
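To make these isolation primitives concrete, here is a minimal sketch in Go of the namespace mechanism containers build on (a real runtime like Docker adds cgroups for resource limits, image layering, and much more). It is Linux-only, typically needs root, and is illustrative rather than a usable container runtime.

```go
// Minimal sketch: start a shell inside fresh UTS, PID, and mount
// namespaces, the same Linux primitives container runtimes build on.
// Linux-only; usually requires root. Not a real container runtime.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own process tree
			syscall.CLONE_NEWNS, // own mount table
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside the spawned shell, changing the hostname or listing processes affects only the new namespaces, not the host.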

So we create a container image for each microservice and publish each service as a container. Not only can you isolate application runtime environments on a single operating system, but starting a new container is also faster and cheaper than starting a new VM! A microservice design built on container technology looks something like this.

Containerization solves the build and deployment problems, but there is still no complete monitoring solution! So what do we do? And do we have any new problems? Yes: managing the containers!

There are several important concerns to address when running a reliable infrastructure layer on containers:

  • Container availability

  • Creating containers

  • Scaling up/down

  • Load balancing

  • Service discovery

  • Scheduling containers across multiple hosts

Stage 3: Container orchestration

Kubernetes is one of the most popular container orchestration tools and has revolutionized the way we think about infrastructure. It takes care of health checking, availability, load balancing, service discovery, scaling, scheduling containers across hosts, and more. Amazing!
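As a taste of what orchestration gives you, here is a hedged sketch using the Kubernetes Go client (k8s.io/client-go) to scale a Deployment. The namespace "default", the Deployment name "payments", and the replica count are assumptions for illustration, and the snippet presumes a reachable cluster with a kubeconfig at the default path.

```go
// Sketch: scaling a Deployment through the Kubernetes API.
// Kubernetes then reconciles the replica count, schedules the new
// containers across hosts, and load-balances traffic to them.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := cs.AppsV1().Deployments("default")
	scale, err := deployments.GetScale(context.TODO(), "payments", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 5 // one field change; Kubernetes does the rest
	if _, err := deployments.UpdateScale(context.TODO(), "payments", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled payments to 5 replicas")
}
```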

Is this what we want?

Not really. This alone does not address the service monitoring/observability issues raised in the microservices stage, and those are just the tip of the iceberg. Microservices are distributed, so managing them is not an easy task.

We need to follow some best practices to run microservices smoothly:

  • Metrics (latency, success rate, etc.; see the sketch after this list)

  • Distributed tracing

  • Client-side load balancing

  • Circuit breaking

  • Traffic shifting

  • Rate limiting

  • Access logging
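To make the first item concrete, here is a minimal sketch of in-process metrics using the Prometheus Go client (github.com/prometheus/client_golang); the handler path, metric name, and port are assumptions for illustration. Note that the service has to embed this instrumentation itself, which foreshadows the problem discussed below.

```go
// Sketch: recording request latency (and, via the status-code label,
// success rate) inside the service itself, then exposing /metrics
// for Prometheus to scrape.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var reqLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "http_request_duration_seconds",
		Help: "Request latency by handler and status code.",
	},
	[]string{"handler", "code"},
)

func main() {
	prometheus.MustRegister(reqLatency)

	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		w.Write([]byte("hello"))
		reqLatency.WithLabelValues("/hello", "200").Observe(time.Since(start).Seconds())
	})

	http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
	http.ListenAndServe(":8080", nil)
}
```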

Companies like Netflix have released several tools that embody these best practices, and many teams running microservices have embraced them:

  • Netflix Spectator (Metrics)

  • Netflix Ribbon (Client Load Balancing/Service Discovery)

  • Netflix Hystrix (Circuit breaking)

  • Netflix Zuul (Edge routing)

Right now, the only way to follow these best practices is to use a client library in each microservice for each concern, so the structure of each service looks something like this.

But this works for a service like Service A, written in Java. What about the other services? What if there is no equivalent library for another language? How do we get every team to use, maintain, and upgrade its library versions? Our company has hundreds of services; do we need to modify every application just to use the libraries?
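To see why this hurts, consider what just one of those concerns, circuit breaking, costs in application code. Below is a minimal hand-rolled sketch in Go (Hystrix itself is a Java library); the threshold and cooldown values are illustrative assumptions. Every service, in every language, needs some version of this baked in.

```go
// Minimal hand-rolled circuit breaker: after `threshold` consecutive
// failures it "opens" and fails fast for `cooldown`, protecting the
// caller from a struggling downstream service.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var ErrOpen = errors.New("circuit open: failing fast")

type Breaker struct {
	mu        sync.Mutex
	failures  int           // consecutive failures so far
	threshold int           // failures before the breaker opens
	openUntil time.Time     // while in the future, calls fail fast
	cooldown  time.Duration // how long the breaker stays open
}

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return ErrOpen
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.openUntil = time.Now().Add(b.cooldown)
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &Breaker{threshold: 3, cooldown: 2 * time.Second}
	for i := 0; i < 5; i++ {
		err := b.Call(func() error { return errors.New("downstream timeout") })
		fmt.Println(i, err) // calls 3 and 4 fail fast with ErrOpen
	}
}
```

Multiply this by retries, metrics, tracing, and rate limiting, across every language in the fleet, and the maintenance burden becomes clear.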

Do you see it? These problems (language restrictions, application code modifications) have existed since the inception of microservices.

Stage 4: Service Mesh

Several proxies, such as Envoy, Linkerd, and NGINX, offer Service Mesh solutions. This article focuses only on the Envoy Service Mesh.

Envoy is a service proxy designed to address these issues with microservices.

Envoy can run alongside each application as a sidecar, together forming an abstract network between the applications. When all service traffic in the infrastructure flows through the Envoy mesh, it becomes easy to address problem areas with consistent observability.

As shown in the figure below, when an Envoy sidecar is added to each service, all inbound and outbound traffic of a microservice flows through its own Envoy proxy.
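From the application's point of view, the sidecar pattern makes the networking code almost disappear: the service just talks to its local Envoy listener over plain HTTP, and the proxy takes care of discovery, retries, load balancing, and telemetry. A minimal sketch, assuming a sidecar listening on 127.0.0.1:9001 with a route to a hypothetical service-b (both the port and the route are made-up examples):

```go
// Sketch: with a sidecar in place, the app performs a plain HTTP call
// to the local proxy; resilience and observability live in Envoy,
// not in a per-language client library.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// No service discovery, retry, or circuit-breaking code here;
	// the sidecar (assumed at 127.0.0.1:9001) handles all of that.
	resp, err := http.Get("http://127.0.0.1:9001/service-b/hello")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

Compare this with the circuit-breaker sketch above: the same protection now comes from configuration in the proxy rather than code in every service.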

Envoy has many handy features:

  • Supports HTTP, HTTP/2, and gRPC

  • Health checking

  • Load balancing

  • Metrics

  • Tracing

  • Access logging

  • Circuit breaking

  • Retry policies

  • Timeout configuration

  • Rate limiting

  • Supports Statsd and Prometheus

  • Traffic shifting

  • Dynamic configuration via service discovery (the xDS APIs)…

So, by abstracting the network away from the services and arranging the Envoy sidecars into a mesh that forms the data plane, we gain control over all of the capabilities listed above.

Feedback is welcome. Thank you!