The original link: www.infoq.com/articles/mi…
By Bilgin Ibryam
English proofread: Daniel Bryant
Translator: Yin Longfei
Editor’s note: A few years before Kubernetes was born, microservices were already the most popular architectural style for distributed systems. But Kubernetes and the cloud native movement have changed every aspect of application design and development. In this article, I’m going to question some of the ideas behind microservices and point out that they won’t be as powerful in the post-Kubernetes era as they were before. Just yesterday I released Kubernetes Handbook V1.4, a call to arms for the post-Kubernetes era.
Key points
- Microservices architecture remains the most popular architectural style for distributed systems, but Kubernetes and the cloud native movement have redefined large parts of application design and development.
- On cloud native platforms, observability of services alone is not enough. A more fundamental prerequisite is to make microservices automatable, by implementing health checks, responding to signals, declaring resource consumption, and so on.
- In the post-Kubernetes era, service mesh technology will completely replace the use of libraries (such as the Hystrix circuit breaker) for handling operational networking concerns.
- Microservices must now be designed for “recovery”, by implementing idempotency across multiple dimensions.
- Modern developers must be proficient in a programming language to implement business functionality, and equally proficient in cloud native technologies to address non-functional, infrastructure-level requirements.
The microservices hype started with a number of extreme ideas about organizational structure, team size, service size, rewriting and throwing away services rather than fixing them, avoiding unit tests, and so on. In my experience, most of these ideas proved to be wrong, impractical, or at the very least not generally applicable. Today, most of the principles and practices that remain are so generic and loosely defined that they may well hold true for years to come without meaning much in practice.
A few years before Kubernetes was born, microservices were the most popular architectural style for distributed systems. But Kubernetes and the cloud native movement have changed every aspect of application design and development. In this article, I’m going to question some of the ideas behind microservices and point out that they won’t be as powerful in the post-Kubernetes era as they were before.
Not only observable, but automatable services
Observability has been a fundamental principle of microservices from the beginning. While it holds true for distributed systems in general, today (especially on Kubernetes) a large part of it comes out of the box at the platform level (process health checks, CPU and memory consumption, and so on). The minimum requirement is that the application logs to the console in JSON format. From there, the platform can track resource consumption, trace requests, and collect all kinds of metrics and error rates without much service-level development effort.
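As a minimal sketch of what that looks like in practice, the Go example below writes structured JSON logs to the console using the standard log/slog package; the event names and fields are purely illustrative.

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Emit JSON to stdout so the platform can collect, parse, and index the logs.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// Illustrative events; the field names are arbitrary.
	logger.Info("order processed", "orderId", "o-12345", "durationMs", 42)
	logger.Error("payment declined", "orderId", "o-12345", "reason", "insufficient funds")
}
```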
On cloud native platforms, observability alone is not enough. A more fundamental prerequisite is to make microservices automatable, by implementing health checks, responding to signals, declaring resource consumption, and so on. You can run almost any application in a container, but to create a containerized application that can be automated and orchestrated effectively by a cloud native platform, certain rules have to be followed. Following these principles and patterns ensures that the resulting containers behave like good cloud native citizens in most container orchestration engines, allowing them to be scheduled, scaled, and monitored in an automated fashion.
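To make “automatable” concrete, here is a minimal Go sketch of a service that exposes hypothetical /healthz and /ready probe endpoints and shuts down gracefully when the orchestrator sends SIGTERM; the port, paths, and ten-second grace period are assumptions, and the declaration of resource consumption belongs in the deployment manifest rather than in application code.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()

	// Liveness and readiness endpoints the platform can probe.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		// Report ready only once downstream dependencies are reachable.
		w.WriteHeader(http.StatusOK)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// React to the termination signal sent by the orchestrator and drain
	// in-flight requests within an assumed ten-second grace period.
	go func() {
		stop := make(chan os.Signal, 1)
		signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
		<-stop
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		_ = srv.Shutdown(ctx)
	}()

	log.Println("listening on :8080")
	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatal(err)
	}
}
```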
We want the platform not only to observe what is happening in the service, but also to detect anomalies and reconcile them according to the declared desired state. Whether that means stopping traffic to a service instance, restarting it, scaling up or down, moving the service to another healthy host, retrying a failed request, or something else does not matter. If the service is automatable, all corrective actions happen automatically, and we only have to describe the desired state rather than observe and react. Services should be observable, but they should also be correctable by the platform without human intervention.
Smart platforms and smart services, but with the right responsibilities
The concept of “smart endpoints and dumb pipes” was another fundamental shift in service interaction in the transition from SOA to the microservices world. In the microservices world, services do not depend on a centralized, intelligent routing layer; instead, they rely on smart endpoints with certain platform-level capabilities. This is achieved by embedding some of the functionality of a traditional ESB in every microservice and switching to lightweight protocols with no business-logic elements.
While this is still a popular way of implementing service interaction over an unreliable network layer (using libraries such as Hystrix), in the post-Kubernetes era it has been completely replaced by service mesh technology. Interestingly, a service mesh is even smarter than a traditional ESB. The mesh can perform dynamic routing, service discovery, load balancing based on latency or response type, metrics and distributed tracing, retries, timeouts, you name it.
Unlike an ESB, where there is a single centralized routing layer, with a service mesh each microservice typically has its own router: a sidecar container with proxy logic, attached to a centralized management plane. More importantly, the pipes (the platform and the service mesh) do not hold any business logic; they are focused entirely on infrastructure concerns, leaving the services focused on business logic. As shown in the figure, this represents the evolution of the ESB and microservices learnings to accommodate the dynamic and unreliable nature of cloud environments.
Figure: SOA vs MSA vs CNA
Looking at other aspects of services, we notice that cloud native affects more than just the endpoints and the service interactions. The Kubernetes platform (together with its complementary technologies) also takes care of resource management, scheduling, deployment, configuration management, scaling, service interaction, and so on. Rather than calling it “smart endpoints and dumb pipes” again, I think it is better described as a smart platform and smart services with the right responsibilities. It is not only about the endpoints; it is a complete platform that automates all the infrastructure aspects of services that focus on business functionality.
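To illustrate that division of responsibilities, the Go sketch below keeps the application code deliberately “dumb” about networking: it only sets a request deadline and calls a downstream service, while retries, circuit breaking, and load balancing are assumed to be configured in the sidecar proxy and the mesh. The quotes.default.svc hostname and the two-second deadline are made-up examples.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchQuote calls a downstream service through the local sidecar proxy.
// The application only sets a deadline; retries, circuit breaking, mTLS, and
// load balancing are assumed to be configured in the mesh, not coded here.
func fetchQuote(ctx context.Context) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://quotes.default.svc:8080/quote", nil) // hypothetical service address
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	quote, err := fetchQuote(context.Background())
	if err != nil {
		fmt.Println("call failed:", err)
		return
	}
	fmt.Println(quote)
}
```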
Design for recovery, not just for failure
Microservices running in cloud native environments, where the infrastructure and the network are inherently unreliable, must be designed for failure. There is no doubt about that. However, the platform detects and handles more and more failures, leaving fewer of them to be handled from within the microservice itself. Instead, consider designing your services for recovery by implementing idempotency across multiple dimensions.
Container technology, container orchestrators, and service meshes can detect and recover from many kinds of failures: infinite loops are contained by CPU quotas; memory leaks and out-of-memory conditions by health checks; disk hogs by storage quotas; fork bombs by process limits; bulkheading and process isolation by memory limits; and on top of that there is latency- and response-aware service discovery, retries, timeouts, auto-scaling, and more. Not to mention that in the transition to a serverless model, where a service only has to process a single request within milliseconds, garbage collection, thread pools, and resource leaks become less and less of a concern.
The platform handles all of this and more, so treat your service as a sealed black box that will be started and stopped many times, and make it restartable. Your service will be scaled out and in to multiple instances, so make it safe to scale by making it stateless. Assume that many incoming requests will eventually time out, so make your endpoints idempotent. Assume that many outgoing requests will temporarily fail and the platform will retry them for you, so make sure you consume idempotent services.
To be suitable for automation in a cloud native environment, services must be:
- Idempotent under restarts (a service can be killed and started multiple times).
- Idempotent under scaling up and down (a service can be automatically scaled to multiple instances).
- An idempotent service producer (other services may retry their calls to it).
- An idempotent service consumer (the service or the mesh can retry its outgoing calls).
If your service always behaves the same way when these actions are performed once or multiple times, the platform will be able to recover it from failures without human intervention.
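One possible way to make an endpoint idempotent is to deduplicate retried requests with a client-supplied key, as in the Go sketch below; the Idempotency-Key header, the in-memory store, and the /payments path are illustrative assumptions, and a real service would persist the keys in a durable store.

```go
package main

import (
	"log"
	"net/http"
	"sync"
)

// payments remembers processed idempotency keys so that a retried request
// (for example, one replayed by the platform after a timeout) has no
// additional effect. A real service would keep the keys in a durable store.
type payments struct {
	mu   sync.Mutex
	seen map[string]bool
}

func (p *payments) handle(w http.ResponseWriter, r *http.Request) {
	key := r.Header.Get("Idempotency-Key")
	if key == "" {
		http.Error(w, "missing Idempotency-Key header", http.StatusBadRequest)
		return
	}

	p.mu.Lock()
	duplicate := p.seen[key]
	p.seen[key] = true
	p.mu.Unlock()

	if duplicate {
		// Repeated delivery: acknowledge without performing the action again.
		w.WriteHeader(http.StatusOK)
		return
	}

	// ... perform the actual business action exactly once ...
	w.WriteHeader(http.StatusCreated)
}

func main() {
	p := &payments{seen: map[string]bool{}}
	http.HandleFunc("/payments", p.handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```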
Finally, keep in mind that all of the recovery provided by the platform amounts only to local optimizations. As Christian Posta points out, application safety and correctness in a distributed system remain the responsibility of the application. A mindset that spans the whole business process, which may cross multiple services, is necessary for designing an overall stable system.
Hybrid development responsibilities
More and more microservices principles are implemented and provided by Kubernetes and its complementary projects. As a consequence, developers must be proficient in a programming language to implement business functionality, and equally proficient in cloud native technologies to address the non-functional, infrastructure-level requirements of that functionality.
The boundary between business requirements and infrastructure (operational or cross-functional requirements, or system quality attributes) is always blurred, and it is impossible to take one aspect and expect someone else to take care of the other. For example, if retry logic is implemented in the service mesh layer, the business logic in the service, or the database layer used by the service, must be idempotent. If timeouts are used at the service mesh level, the timeouts of the service consumers inside the service must be kept in sync with them. If a service execution has to be repeated, a Kubernetes Job has to be configured accordingly.
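As a sketch of the first point, the Go function below makes a database write idempotent so that a call replayed by the mesh has no additional effect; it assumes a PostgreSQL orders table with a unique order_id column and the lib/pq driver, both of which are illustrative choices.

```go
package store

import (
	"database/sql"

	_ "github.com/lib/pq" // illustrative driver choice; any SQL driver works
)

// RecordOrder inserts an order so that a retried call (for example, one
// replayed by the service mesh after a transient failure) does not create a
// duplicate row: the assumed UNIQUE constraint on order_id plus
// ON CONFLICT DO NOTHING makes the write safe to repeat.
func RecordOrder(db *sql.DB, orderID string, amountCents int64) error {
	_, err := db.Exec(
		`INSERT INTO orders (order_id, amount_cents)
		 VALUES ($1, $2)
		 ON CONFLICT (order_id) DO NOTHING`,
		orderID, amountCents,
	)
	return err
}
```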
Going forward, some service functionality will be implemented inside the service as business logic, while other functionality will be provided as platform capabilities. While using the right tool for the right task is a good separation of responsibilities, the proliferation of technologies greatly increases overall complexity. Implementing even a simple service in terms of business logic requires a good understanding of the distributed technology stack, because responsibility is spread across every layer.
Kubernetes is proven to scale to thousands of nodes, tens of thousands of Pods, and millions of transactions per second. What I do not yet know is the application size or complexity threshold at which introducing the “cloud native” complexity is justified.
Conclusion
It’s interesting to see how the microservices movement provided so much momentum for the adoption of container technologies such as Docker and Kubernetes. While it was originally the microservices practices that drove those technologies forward, it is now Kubernetes that defines the principles and practices of the microservices architecture.
As a recent example, we are not far from accepting the function model as a valid microservices primitive rather than dismissing it as a nanoservices antipattern. We do not question the usefulness and applicability of cloud native technologies for small and medium-sized use cases nearly enough; instead, we jump on them out of excitement.
Kubernetes has absorbed many of the lessons learned from ESBs and microservices, and as such it is the ultimate distributed system platform. It is the technology dictating the architectural style, not the other way around. For better or worse, only time will tell.
About the author
Bilgin Ibryam (@Bibryam) is a lead architect, committer, and ASF member at Red Hat. He is an open source evangelist, blogger, and the author of Camel Design Patterns and Kubernetes Patterns. In his day job, Bilgin enjoys mentoring, coding, and leading developers to successfully build cloud native solutions. His current focus is on application integration, distributed systems, messaging, microservices, DevOps, and cloud native challenges in general. You can find him on Twitter, LinkedIn, or his blog.
ServiceMesher community information
Wechat group: Contact me to join the group
Community official website: www.servicemesher.com
Slack: servicemesher.slack.com (invitation required to join)
Twitter: twitter.com/servicemesh…
GitHub: github.com/
For more service mesh news, follow the WeChat public account ServiceMesher.