Special instructions

This document is the simviso team's translation of a cloud-native talk on architecture evolution, focused on Service Mesh and given by Kong's CTO. Kong maintains a well-known open source project: github.com/Kong/kong

Let's take a look at how the CTO moved from a monolithic application architecture to a microservice architecture: what Kong did, what role K8S played, what a state machine is (leading to an event-driven design), and what pitfalls and lessons come up along the way. All of this is covered in the video.

Video Address:

【Foreign cutting-edge technology sharing – Cloud Native topic – Chinese subtitles】From monolithic application to microservices: a development journey – Part 1

【Foreign cutting-edge technology sharing – Cloud Native topic – Chinese subtitles】From monolithic application to microservices: a development journey – Part 2

All rights reserved by Simviso.

Preface

My name is Marco Palladino, and I am the founder and CTO of Kong

As the host said, I'm from San Francisco, California, and today I want to talk to you about Service Mesh

To build a modern architecture, our organization used Service Mesh to make the transition

What I'm trying to say is that as we grow bigger and break down into distributed units, the organization becomes as complex as a multicellular organism.

In fact, both our teams and our software become decoupled and distributed once they grow complex

As you can imagine, modernizing the architecture with Service Mesh is more than just technology adoption

Our approach to software development is also changing in three directions

The first is technological renewal

The second is a transition in organizational structure: how should the organization change to cope with the new microservices architecture?

The third is a change in operations. We have to change the way we operated monolithic systems, because a microservices architecture cannot be deployed, scaled, versioned, and documented in the same way as before

These three directions are changing the way we develop software

With the release of Docker in 2013 and Kubernetes in 2014, a real software revolution began

Docker and Kubernetes give us a new way to create applications. Over time they let us scale those applications in a better way, and they became our first choice not only for their technical benefits but also for business development

In fact, aside from the technical aspects, in this talk I'm also going to cover the business goals we want to achieve

If we can’t coordinate the use of new technologies while meeting our business goals, we won’t be able to move forward with the technology transformation

In the process, we have to become more pragmatic

The process of moving from a monolithic service to an aggregated service (such as a Maven multi-module project) to microservices is like a bag with a bunch of candy in it. Each candy is a microservice, and each microservice may be an ordinary service or a function

At the same time, APIs change their role in our systems

Application Performance Monitoring (APM) is usually managed at API granularity. Since the advent of mobile in 2007-2008, we've needed a way for clients to interact with our monolithic applications. That gives us north-south traffic (data exchange between external clients and the server: external developers, mobile apps, or other outside consumers accessing our services; the emphasis is on external)

As we decouple our software, APIs play an increasingly important role in our systems

East-west traffic usually refers to data exchange between the systems built by our different teams and between the products of our different systems. Next, we'll discuss using Service Mesh to separate the data plane from the control plane, so that we can continuously optimize and scale our system

Some key points of the microservices transition

1 What does it mean?

But taking a step back, from a practical point of view, why use Service Mesh for the microservices transition?

Our goal in refactoring monolithic applications is to free up team productivity and improve business scalability

The key to the entire transition at this point is team productivity and scalability of the associated business

If the transition to microservices achieves neither of those things, it's fair to say that we're just following the crowd. So before we do this we have to ask ourselves: are we doing it for the business, or just for the technology?

In this process, the business should be the primary driver, because the point of writing the project is to achieve business goals: determine the business strategy, then adopt the microservices architecture to realize it, then decide when to start and, more importantly, when to stop

2 Should we do it?

The second question people who want to move to microservices should ask is: should we do it? Should we move to microservices and abandon the current architecture?

In terms of technology trends, we have to ask ourselves, do we adopt a technology based on actual need, or do we adopt it because it is mature

The transition to microservices is much more complex than running a monolithic application. It's hard to take a system with all of its functionality in one place and split it into a bunch of single-purpose systems

Deploying and scaling a large monolithic application is nothing like what we do when we take that whole system and break it up into hundreds or thousands of smaller systems with different functions

If microservices can’t solve our current problems of managing monolithic applications, it’s going to make things worse

Therefore, the technical architecture and team members must be mature enough to actually solve the existing problems

In fact, I really recommend taking a step-by-step approach to the transition to microservices

For several reasons. First, achieving the milestones step by step builds the team's confidence, makes them feel they can handle it, and lets us adapt and change our process as we go

At the same time, business leaders become more confident with the transition because it is moving in the right direction, expanding and evolving the business over time

In fact, when we look at microservices and at big companies like Netflix or Amazon, they made the transition to microservices a long time ago, before Kubernetes and Docker even existed

What is driving these companies to make this transition?

In Netflix's case, it wanted to expand its business beyond a single country and a single user base; it wanted to go global, so it changed

So that’s their business goal as well

Their previous architecture could not support the business needs they were pursuing, so they turned to a microservices architecture

The same is true of Amazon: it was not content to sell just one kind of thing. It wanted a diverse range of products, and the architecture change let it meet more complex needs

So microservices are tightly wrapped around these business requirements. Neither the development team nor management can accomplish these goals alone. Amazon, for example, is a successful microservices story

Do you know who made the decision to move Amazon to microservices?

Amazon CEO Jeff Bezos once imposed six rules, and people who didn't follow them could be fired.

It’s hard to believe that this change was made not by the development team, but by their CEO

It is the right way forward for leaders to make gradual changes to achieve their goals and to become more confident

Judging from all of the above, we will have to go through untold hardships to reach our goals

We can determine whether microservices are large or small based on complexity

In fact, the size of these microservices is determined by the needs of the business itself, as appropriate

3 Let’s do it!

Only if we feel that microservices are what we need, and that we're on the right track, should we continue the transformation

In the transition to microservices, I have defined different strategies. The first is what I call "scooping ice cream": if you have a vat of ice cream, you can dig out what you want one spoonful at a time. From the vat (the monolith) we scoop out individual services and their business logic, which can then be implemented and deployed in separate processes

The second strategy is "Lego" (building blocks): we don't completely get rid of the original monolithic application, which usually keeps serving the older parts of the project

But new functionality will be built on a new architecture

So you have to find a way to reconnect microservices with older apps, or it’s going to be hard to transition

The third strategy, which I call "nuclear", means we throw everything away and build from scratch

During the transition to a microservices architecture, we still need to keep the original monolithic application running

So building everything from scratch from day one of the transition is the worst option

It requires a strong technical team, and the technical boundaries must be drawn according to the actual situation. You end up with two different teams working at the same time: one keeps delivering on the old codebase while the other refactors the project into microservices

As we build the new codebase, we refactor functionality out of the old codebase into it. The question then is where to create new functionality, because at this point our business is still running on the old codebase, not the new one

In other words, we end up in a very complicated situation that is difficult to resolve, so let's not throw everything away and start from zero. What follows is today's ice cream strategy

For example, suppose we have an object-oriented monolithic service architecture that contains all the functionality we need, with different objects and classes interacting inside it

Different services share the same database. If we want to scale this monolithic application, we can only deploy more instances of the whole thing (to relieve pressure on one service, all the other services are forced to scale horizontally with it, adding unnecessary complexity).

If the monolithic application is an e-commerce site such as Amazon.com, it contains many different services: user management, order management, inventory management, on-site search, and so on

This is a typical example of a monolithic service architecture, in which each square represents a different piece of business logic

Because this is a large monolithic architecture, we need multiple teams to maintain it. In this case, three different teams maintain the monolith, build new features for it, and put them into production

Now you know some of the pros and cons of monolithic versus microservice architectures. One problem with a monolith is that if one development team commits changes frequently, a lot of coordination with the other teams is required to deploy those changes to production properly

As the code base grows larger, so do product development teams. As time goes on, the problem gets worse. This could seriously affect the efficiency of our new release cycle

When team 2 (one of several teams working together) also starts to change code frequently, the problem gets even more complicated. How do we change the situation at this point?

Since team 2 is working on the search service, inventory, and other modules, how can we isolate a module that requires many iterations?

So the ice cream strategy we used was to take the business logic out of the monolithic architecture and isolate it

These extracted services are important, but they are not yet microservices. What we do next is gradually transform them into microservices

We want to make sure that the new services extracted do not rely on the database used by the old system

Our intention is to extract new services that are truly independent

If the old system makes too many requests to the database, the database can become unavailable. We don't want the new service to be affected by that, so we split the service out and isolate it

Then we test run it in production, and when we encounter one or more parts (in this case the squares) that also need a lot of work, we point them out and extract them

So we now have three different services (as shown), one is our old system, followed by a slightly larger service, and finally a small service

Realistically, we solve our problems by extracting services from the monolith over and over again, or by extracting further from already-separated services to make their granularity smaller

The whole iterative process cannot be completed in one go; we can only be pragmatic and develop the extracted business step by step, in an agile way

Through this step-by-step process, we can modify other relevant logic in the project organization

To provide more decoupling support for subsequent iterations of the project

In effect, this transition is a refactoring, similar to the service refactoring we used to do inside a monolithic architecture. There are a few things we need to clarify here

The first thing we need to understand is the domain model. If we don't understand what the code does, it's hard for us to define service boundaries, right?

You know that the logic of most monolithic services is too tightly coupled. That's why we need to define service boundaries and decouple services; if we don't understand the logic, we can't extract it

To be able to rebuild the old system, we also need to understand the clients that use the current system, and what will happen to those clients if an error occurs

Before the microservices transition, we need to know which services will be affected

The third thing, as with any refactoring, is integration testing to verify that the logic we want to change is correct, so that our refactoring has no impact on the original system

We wanted to scale our system without affecting the internal workings of the system

We don't want to interrupt essential services; at the very least, we shouldn't scale and change the business logic at the same time. It has to be done step by step

In the old system, a change might have been just a small method change; now we no longer have to scale the whole thing the way we did before

For example, upgrading our client or updating the route to properly route requests to different services

Clients make requests to our monolithic application, but now those requests are not handled by our monolithic system but by other services. So how do we reroute these requests?

In a traditional business model, we would place a load balancer (such as Nginx) between the client and the server.

For example, our client can be a mobile application, and when we deploy a monolithic system, we only need to run multiple instances of the application behind the load balancer

But as we extract and decouple our system into many microservices, we need to make some aspects of the entire operating architecture more intelligent

One of the first things to do is to adopt an intelligent API gateway that can do smart load balancing

When we decouple the services, we can use the API gateway as a proxy between client and server and implement functional routing. This lets us route requests to the new services without upgrading the client. That was very important to us: if we didn't do it, we would have to upgrade the client, older versions would break, the customer experience would suffer, and that would be very bad for our business

This gateway, the proxy between client and server, has to be smart enough to handle some of the operational concerns in our architecture. As I said earlier, the transition to microservices is not just technology adoption, like Kubernetes; it is also a transition in how we operate

We cannot deploy microservice architectures and SOA architectures in the same way we deploy monolithic applications

We need some strategies to reduce the risk

For example, when a new version of a service comes out, we can use a grayscale (canary) release strategy to direct 10% of the traffic to the new version

Only when it's stable do we redirect all traffic to the new version and retire the old one, while keeping the old version running in case we need to roll back
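To make the traffic split concrete, here is a minimal sketch of weighted canary routing in Java. It illustrates the pattern only; it is not how Kong or any particular gateway implements it, and the upstream names and the 10% share are assumptions:

```java
import java.util.concurrent.ThreadLocalRandom;

/** Minimal canary routing sketch: send a fixed share of traffic to the new version. */
public class CanaryRouter {
    private final String oldUpstream;   // e.g. "orders-v1" (hypothetical name)
    private final String newUpstream;   // e.g. "orders-v2" (hypothetical name)
    private final double newShare;      // e.g. 0.10 for 10% of traffic

    public CanaryRouter(String oldUpstream, String newUpstream, double newShare) {
        this.oldUpstream = oldUpstream;
        this.newUpstream = newUpstream;
        this.newShare = newShare;
    }

    /** Picks the upstream service for a single request. */
    public String route() {
        return ThreadLocalRandom.current().nextDouble() < newShare
                ? newUpstream
                : oldUpstream;
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter("orders-v1", "orders-v2", 0.10);
        System.out.println(router.route()); // ~10% of the time: orders-v2
    }
}
```

Rolling back is then just setting the new version's share back to zero, which is why keeping the old version running matters.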

Before delving into microservices and Service Mesh, I want to take a moment to talk about hybrid and multi-cloud

When it comes to microservices, we need platforms like Kubernetes to run our services

Because we still have many services running on the old system, we will not run all services in Kubernetes. We will still run the old system on the platform we used before, such as virtual machines

As we plan to move to microservices, we must develop a strategy to effectively connect the old and new application platforms

There’s no way we can run everything on Kubernetes overnight

So, what can we do to reconnect the services running on Kubernetes with the dependent services of the original system

As I said earlier, if we think of the entire organization as complex living tissue, we can imagine different teams, products, and business requirements that all want to be independent of each other

From a higher-level perspective, hybrid or multi-cloud is rarely a top-down decision made for the entire organization

Adopting hybrid and multi-cloud means that, over time, different teams choose different cloud platforms based on the needs of the project and the organization

At a higher level, teams adopt hybrid cloud not because they want to, but because they have to

Ideally, we want all services to run on the same platform

But let's face it: the reality is that we have multiple systems running in different places, so we need a higher level of architectural abstraction, and we need tools to manage multi-cloud and hybrid setups across the organization

4 Service Granularity

As we extract more and more microservices from this monolithic service architecture

One piece of advice I'd like to give: I think it's important to define the boundaries of microservices in a reasonable way that matches our current goals. This is what I have learned from the very successful extraction cases

As in my earlier example, we can't go micro all at once. We can extract a relatively large service and keep decoupling it over time, making it smaller and smaller

When we look at Service Mesh and microservice architecture, ideally these services should be small and connected to each other in the same way.

But the reality is that the size of these services should be whatever best serves the business goals we are trying to achieve

Let’s not force microservices just for the sake of microservices, sometimes what we need is a larger service

5 Network

Ok, so now we have a lot of decoupled microservices, created and deployed independently by different teams. It's important to emphasize that these services don't exist on their own; they need to communicate with each other to do their work

So we have to abandon some of the habits of monolithic applications. In a monolith, our interfaces and function calls are made against objects in the same codebase (objects here meaning interface implementations, not just POJOs), and by executing those calls we can reliably reach and use different objects

But when we decouple them into microservices, these interface implementation class objects become services

Interfaces will still exist, but all access between microservices will be over the network

For example, in a Java monolith, when a function call is made (calling a method on a defined interface), we don't have to worry about the call at all because it runs on the JVM: the call will be handled successfully, and the Java virtual machine will route it to the correct implementation class

But in microservices, we won’t be able to do that

Since we will be making function calls over the network, we can no longer be sure where in the network the function will actually be executed

The network is a very unreliable element of our system: it can be slow, it can add latency, it can go down. The difference from the monolithic, in-process world is that the network can fail in all kinds of ways that affect our services
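As a small illustration of the difference, compare an in-process call with the same lookup made over the network. The InventoryService interface and the service URL are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class LocalVsRemote {
    interface InventoryService { int stockFor(String sku); }

    public static void main(String[] args) {
        // Monolith: an in-process call, routed by the JVM. It cannot time out
        // or fail "on the network"; it either runs or the whole process is down.
        InventoryService local = sku -> 42;
        int stock = local.stockFor("sku-123");

        // Microservices: the same lookup is now a network call. The timeout,
        // the catch block, and the URL are concerns the monolith never had.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory.internal/stock/sku-123")) // assumed URL
                .timeout(Duration.ofMillis(500)) // the network may be slow or down
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            stock = Integer.parseInt(response.body());
        } catch (Exception e) {
            // This is exactly where degradation, retries, or circuit breaking
            // (discussed below) have to happen.
        }
        System.out.println(stock);
    }
}
```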

In microservices architecture, we have different teams to run and deploy these services

The team’s mindset also needs to change

A monolithic application can be in one of two states: working or down

But with microservices, any service can have a problem at any time, so we need to change the way we think: service degradation has to be a designed-in part of our architecture

Because each of these services will fail at some point, the more degradation paths we build, the more safely teams can deploy their services independently

At the same time, in terms of organizational structure, our project team should also change the corresponding thinking mode

At some point, if something goes wrong with the service, the service degradation system has to figure out how to handle it while providing a great experience for the end user

All these things we do for the end user, we’re not doing it for us, we’re doing it for the end user to have a better product experience
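A minimal sketch of what designing in degradation can look like, with hypothetical names (RecommendationService, RemoteClient). The point is only that the fallback path is written up front rather than bolted on later:

```java
import java.util.List;

/** Minimal degradation sketch: fall back to a safe default when a dependency fails. */
public class RecommendationService {
    private final RemoteClient remote;                            // hypothetical remote caller
    private final List<String> fallback = List.of("bestsellers"); // safe default content

    public RecommendationService(RemoteClient remote) { this.remote = remote; }

    public List<String> recommendationsFor(String userId) {
        try {
            return remote.fetchRecommendations(userId);
        } catch (Exception e) {
            // Degrade gracefully: the end user still gets a usable page.
            return fallback;
        }
    }

    public interface RemoteClient {
        List<String> fetchRecommendations(String userId) throws Exception;
    }
}
```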

Latency

Next is latency: the latency that comes from calls between APIs, which we can't ignore

As we all know, network latency matters a lot for traditional mobile-to-monolith access

If a mobile app sees hundreds of milliseconds of latency when accessing our system, that's not a good situation. We can reduce latency with a CDN or caching to speed up network processing

In microservices, latency comes from the growing number of requests between services: service A consumes service B, service C consumes other services, and so on. You can see how these chains add up to latency in the calls between services

In a microservices architecture, once an underlying service goes down, every caller of that service suffers its own failure. In a monolithic architecture, once something fails, its related calls simply fail with it

In microservices, when a response times out, it affects other related microservices, creating a chain reaction, as if the whole system doesn’t work anymore

So latency is one of the first problems we need to solve. Like security and performance, it isn't something we can fix after the fact; these are not bolt-on functional components that can be added later. We have to keep them in mind as we write the system: how do we write it to ensure performance, and how do we write it to ensure security?

Security has also become another component of this (see figure): now that all these communications happen over the network, we want to encrypt and protect those interactions

For example, by enforcing two-way (mutual) TLS, or by using other mechanisms you know to encrypt network traffic between different services

By the way, these services don't have to communicate through RESTful APIs

We can use any transport protocol that suits our scenario: REST, gRPC, or anything else

We are not bound by any particular transport or protocol; what we need is a way to transparently protect and encrypt all of these communications

As I mentioned earlier about routing: now that we have different services, how do we set up routing somewhere in the system, so we can route to different versions of services and send requests to the microservices we deploy on different cloud hosts?

Also, if a service instance is no longer running, we need to handle the failure and be able to cut off traffic to the instances or services that aren't working. For that we need circuit breakers and health checks

In a monolithic architecture we don't necessarily need circuit breakers or health checks, but once you're running a thousand services they become extremely important. If we don't build them in from the beginning, the transition to microservices is dangerous
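At its core, a circuit breaker is a small state machine in front of every remote call. The sketch below is a simplification (real meshes and libraries add a half-open state, sliding windows, and per-endpoint statistics):

```java
import java.time.Duration;
import java.time.Instant;

/** Minimal circuit breaker sketch: trips open after N failures, retries after a cooldown. */
public class CircuitBreaker {
    private enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final Duration cooldown;
    private State state = State.CLOSED;
    private int failures = 0;
    private Instant openedAt;

    public CircuitBreaker(int failureThreshold, Duration cooldown) {
        this.failureThreshold = failureThreshold;
        this.cooldown = cooldown;
    }

    public synchronized boolean allowRequest() {
        if (state == State.OPEN) {
            // After the cooldown, let requests through again to probe the service.
            if (Instant.now().isAfter(openedAt.plus(cooldown))) {
                state = State.CLOSED;
                failures = 0;
                return true;
            }
            return false; // fail fast instead of piling up timeouts
        }
        return true;
    }

    public synchronized void recordFailure() {
        if (++failures >= failureThreshold) {
            state = State.OPEN;
            openedAt = Instant.now();
        }
    }

    public synchronized void recordSuccess() { failures = 0; }
}
```

A caller checks allowRequest() before dialing and reports recordSuccess() or recordFailure() afterwards; while the breaker is open, the caller fails fast or degrades instead of waiting on timeouts.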

Observability is also different from before. In a monolithic architecture we don't care where a request goes (it's obvious), but in a microservices architecture we have to know, right?

Because we need to be able to locate weaknesses in our microservices architecture, we need to trace requests from one service to another and collect performance metrics and logs, so that we can understand what each microservice is doing at any given time, and if something goes wrong, where it went wrong
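Meshes typically do this transparently in the Data Plane, but the underlying idea is simple: every hop forwards the same correlation id, so logs and metrics from different services can be joined. A minimal sketch, where the X-Trace-Id header name is just an example, not something the talk prescribes:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

public class Tracing {
    /** Builds an outbound request that carries the caller's trace id onward. */
    public static HttpRequest withTraceId(String url, String incomingTraceId) {
        String traceId = (incomingTraceId != null)
                ? incomingTraceId                  // continue the existing trace
                : UUID.randomUUID().toString();    // or start a new one at the edge
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("X-Trace-Id", traceId)     // assumed header name
                .build();
    }
}
```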

Without defining service boundaries, the transition to microservices is hard, because at some point problems will inevitably occur, and without the right tools we won't be able to dig in and find them

Service Mesh Pattern

Let’s move on to the Service Mesh

First of all, Service Mesh is not a technology, but a design pattern, and it can be implemented in a variety of different ways

But typically, we run a proxy alongside our service, and that proxy can make the unreliable network reliable

So we don’t have to worry about handling latency, observability, two-way TLS, etc., when we build our services

Because the connection point is no longer service to service, but Data Plane (DP) to Data Plane

These points are connected over the network, and it is through the Data Plane that they communicate

Also, since the Data Plane can execute logic without the service being aware of it, failure handling, latency, and observability can all be dealt with there

In a Service Mesh, the Data Plane is both a proxy and a reverse proxy, depending on where the request goes

For example, if this service wants to invoke that service, then the Data Plane will act as a proxy because the requests being made by the service will be brokered through that Data Plane

But when that Data Plane receives a request, that Data Plane acts as a reverse proxy that receives the request and proxies it to the service associated with it

We need a Data Plane, and we can implement the Service Mesh in different ways: one Data Plane per underlying virtual machine, or one per replica of each service you are running

The latter is called the sidecar proxy model, because we provide a Data Plane instance for each service replica

The idea here, whether we have a Data Plane per replica or per underlying virtual machine, is that whenever there is service-to-service communication, the traffic first goes out through one Data Plane and is then received by another

These services no longer communicate directly with each other; everything goes through this separate system (the Data Plane). This lets us implement the extra logic in the Data Plane, and the services need no awareness of it: two-way TLS and all the observability operations come out of the box, and teams don't have to build them into each service

As you can see, these Data Planes act as points of contact from which we can do anything, including bringing the database into the picture. When a service uses a database, we still want to observe that usage and keep all the functionality the Data Plane provides, so that traffic, too, goes through the Data Plane

Therefore, we are making a lot of network calls in different services through decentralized proxies

It is called decentralized because, unlike traditional ESBs, we do not centralize broker instances, but spread them across services

The definition of Kubernetes

So, the Sidecar proxy I mentioned earlier, we should know what it is by now. Next, let’s return to the definition of Kubernetes

Kubernetes abstracts the way we deploy Pods on the underlying virtual machine

We use Kubernetes to make our virtual machine cluster look like a single host

Kubernetes will decide where the Pods go

We tell Kubernetes that we want this proxy to run as a sidecar container

That is, for every pod of the designated service, we tell Kubernetes to deploy the proxy next to the service (i.e., each service pod gets a sidecar proxy container)

We want the sidecar proxy always deployed on the same underlying virtual machine, because we want communication between the service and the proxy to always happen on localhost (for each replica)

The assumption is that communication on localhost always has a 100% success rate

Because the call doesn't leave the machine for the external network; it stays inside the virtual machine

Network problems between here and there are handled by the Data Plane.

We will have one instance of this proxy, following the sidecar model, for each instance of our service
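In code, the payoff of the sidecar model is that the service itself only ever targets localhost and leaves the network to the proxy. The port and path below are assumptions, not Kong specifics:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SidecarCall {
    public static void main(String[] args) throws Exception {
        // The service talks to its local sidecar; the sidecar resolves the
        // "inventory" service in the mesh and applies TLS, retries, metrics.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15001/inventory/stock/sku-123"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```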

Now, this places a hard requirement on the Data Plane: it has to keep resource utilization very low and its footprint very small

Because we will be running one for each replica of each service in the system, a proxy that consumes a lot of resources would exhaust the memory of the underlying virtual machines

So if we use sidecar proxies everywhere, we need to make sure the proxy is very, very small, with a very small resource footprint

By the way, the same is true for our own services: they can't take up too much memory either, otherwise we have to run on very large virtual machines or we run out of memory very quickly

Therefore, both services and agents must be very small in terms of resource utilization

That’s why you can’t put an ESB there

You can imagine running an ESB instance for each service instance and your service will never work

Now we have these different services running with different Data Planes, and we need a way to configure those Data Planes quickly

When our services communicate, it is really the Data Planes that communicate, using the protocols and certificates we have configured. Taken as a whole this configuration is complicated, and it is hard to manage by hand (a lot of groundwork is needed, and a careless mistake causes problems).

So, we don't want to manually push configuration to these different Data Planes

Here we are talking about the Control Plane

The terms Data Plane and Control Plane are very well known from the networking world

You have a bunch of Cisco switches running in your data center, and you want to push your configuration to those Cisco switches

So we wanted a Control Plane that would allow us to do this

Apply the same concepts to software

We push the configuration to each Data Plane through a Control Plane

Or the Control Plane allows the Data Plane to fetch the configuration from a unified location

But it can also be a way for us to collect metrics (operating parameters)

Now, the Data Planes are seeing all of this traffic (traffic behavior), and we want a single component for collecting logs and metrics; that component is also the Control Plane

The Data Plane is on the execution path of the request

The Control Plane is only used for configuration and for gathering these metrics; the Control Plane is never on the execution path of our API requests

Basically, our north-south gateway just becomes another Data Plane: another proxy, not one sitting beside a service, but one that lets external traffic communicate with the underlying Data Planes

Because they are all part of the Mesh, we can enforce two-way TLS between these different Data Planes to ensure that our system is protected

We could still do all of this without a Service Mesh, but we would have to build it ourselves

Technically, microservices don't require a Service Mesh. The problem is that if you want these capabilities, you have to build them yourself

Kubernetes is not strictly needed for a Service Mesh either; the Service Mesh pattern can be applied on any platform

Kubernetes just makes it easier to run microservices at scale, but nothing stops us from running a Service Mesh across plain virtual machines

We just need to deploy our Data Plane instance on the virtual machine where the service is running, and we have a Service Mesh running on the virtual machine

In fact, we want the Service Mesh to run in Kubernetes

But as mentioned earlier, we still have monolithic applications running on virtual machines or legacy platforms

And we want the monolith to somehow become part of the Service Mesh too, so we want the Data Plane to run not just in Kubernetes but also in our monolithic application environment

We want this pattern (the monolithic application above) to be part of the Service Mesh as well

Platform independence is definitely important if you want to make the move to microservices more practical

As mentioned earlier, different services can be built in different languages, which is one of the advantages of microservices

Because the common, dependency-light functionality lives in the Data Plane, we don't have to implement the same logic over and over again in every service

For two microservices written with two different technologies, the Data Plane is an out-of-process proxy that runs on the same virtual machine as its corresponding microservice

Event-based architecture

Let me talk a little bit more about event-based architecture

When we talk about Service Mesh, that is, when we talk about microservices, we usually talk about direct communication between services, but that's not the only way to build these systems

Another way to create them is with an event-based architecture

This means we propagate events through the architecture, to handle things like capturing state in a more efficient way

Suppose there are two different microservices, one for ordering and one for invoicing

Every time we create an order, we issue another invoice, which sounds good

However, if the invoicing microservice becomes unavailable for some reason, we will continue to retry until the request times out

In the worst case, the invoice is never created at all, and that must not be allowed to happen

If it does happen, the propagation of state through the system is broken, and repairing it later is costly

Therefore, in the case of state propagation, we can consider using events

For example, the invoicing microservice can process the order-creation event whenever it is ready, so our state is not lost
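Here is a minimal sketch of the order side using the standard Kafka Java producer. The broker address and the orders.created topic are illustrative choices, not something the talk prescribes; the order service only records the fact, and the invoicing service consumes the topic whenever it is ready:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for replication so the event is not lost

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Record the fact "order 42 was created"; nothing here depends on
            // the invoicing service being up right now.
            producer.send(new ProducerRecord<>("orders.created", "42",
                    "{\"orderId\":42,\"amount\":99.90}"));
        }
    }
}
```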

The risk with this approach is that we can use something like Kafka as an event collector, but other services in the system might actually have some exceptions

So we want to have a Data Plane in front of these services, because we want to make sure that service events reach the event collector (Kafka)

So we can use Kafka or some kind of log collector to handle some of our use cases, but of course we have to make sure that the log collector is available and doesn’t fail

It is often easier to focus on keeping a service running (log collection, for example) than on propagating status information to each service

Because with one component we only need to make sure that it keeps running properly, whereas with many components every one of them has to be running properly. We can focus on making that one operation more reliable, which makes the propagation of state more reliable

As I said, the organizational structure of the enterprise is changing, and our systems are becoming more and more complex organisms, so I like to use the nervous system analogy

Our nervous system is made up of two distinct parts: the CNS (central nervous system) and the PNS (peripheral nervous system)

The peripheral nervous system gathers all the information our body can sense and passes it on to the CNS, so that the CNS can process it

This is very similar to the concept of Control plane and data Plane

Data Planes will sit alongside each of our services, monolithic applications, and functions (such as lambdas)

But configuration, monitoring, and visibility will all be handled by the Control plane

What did Kong do

As I mentioned earlier, I am the co-founder and CTO of Kong. Kong provides an effective open source Control Plane and Data Plane that let you manage the different architectures in an organization

With over a million instances of Kong running worldwide helping developers manage Data Planes, we support teams making the architectural transition from monolith to microservices to Service Mesh to serverless

The transition to microservices is a complex topic because it affects three different aspects

But more importantly, we have to align the process with the business goals we're trying to achieve, or, as I said, it's just microservices for the sake of microservices

The first is business transformation

We have to be realistic. We don’t need to convert to microservices or have thousands of microservices immediately if we don’t have to

The main thing is that we always have time to make the service smaller and smaller

So, take your time, extract the service down to medium size first, and then make it smaller over time

In fact, I would say we should only make a service smaller when it needs to be smaller. That's the best way to transition

Because it enhances both our productivity and our business

By adopting technology pragmatically, we are able to transition to these new microservice architectures

But you can also connect in a very pragmatic way to older applications that are now providing business services (programs that need refactoring)

Those are still the most critical components, and connecting them to our new greenfield microservices architecture is the future of our systems

So it’s about making a connection between the old and the new

Thank you for listening

Q&A

So do you have any other questions?

Audience question: you said earlier that a monolithic application should be decoupled gradually, with the services becoming smaller and smaller. For a company like Amazon that approach would have cost a great deal, so they gave up on the halfway path and went straight to full microservices (abandoning the original system). What's your opinion on this?

Like I said, it's a choice. As long as the organization and development team are ready to go fully to microservices and are clear-headed about it, that kind of transition is also a valid option

In adopting such a plan, we must analyze its pros and cons. By doing that analysis, Amazon could choose to rebuild outright instead of slowly transitioning from a monolith to microservices; in their case the pros far outweighed the cons

In my opinion, though, it's a strategy with which very few companies have successfully made the transition to microservices

Of course these are things we need to take into account

Audience question: there are a lot of calls between microservices, and network communication via the Data Plane has much higher latency than local calls within a process. Is there any way to reduce the latency?

The system can give us metrics showing where the latency bottleneck is. If a call has to go over the network, it will always cost more than a local function call; that much is unavoidable

In fact, this network processing can add even more latency if the agent is not set up correctly

Because you know two-way TLS and observability operations add latency to our processing

So what we can do is cache some information at the Data Plane layer, so that some requests never need to go out over the network at all. You could even imagine a kind of global cache across these different services. That is feasible only in some cases, because in other scenarios we simply cannot cache

Others fully embrace the concept of an eventually consistent architecture, and latency is just part of that

The client is built with that in mind: everything is eventually consistent. There's a little delay, and the information we see on the client side is not necessarily the latest, but it ends up consistent

When considering latency, the Service Mesh can help us locate latency in a number of ways, not just network latency, but also Service latency

We need an observability mechanism in order to decide whether a service that has become a performance bottleneck for other services needs to be scaled up; then, in practice, we can add caching to improve the performance of our architecture

Of course, since requests go over the network, we should be realistic about which latencies are normal and which are not. Where latency can't be avoided, we must make sure the system is eventually consistent, and build our clients on that premise