In 2019, many people were still confused about, or even misunderstood, “cloud native.” That is why we kept hearing different definitions of cloud native in different contexts. Some said cloud native is Kubernetes and containers; others said it is “elastic scalability”; still others said it is Serverless. Eventually, some simply concluded that cloud native is a “Hamlet,” because everyone understands it differently. The Yonyou Developer Competition (2020.diwork.com/index.html) offered a more in-depth study and interpretation of the topic through the competition itself.

In fact, ever since the term was “borrowed” by the CNCF and the Kubernetes technology ecosystem at the very beginning, the meaning and connotation of cloud native have been quite definite. Within this ecosystem, the essence of cloud native is a set of best practices; more specifically, cloud native describes the optimal path by which practitioners can harness the power and value of the cloud in a scalable, repeatable way and with a low mental burden.

Cloud native therefore does not refer to any single open source project or technology; it is a set of ideas that guides the design of software and infrastructure architectures. In a word, that idea is “application-centric.” Precisely because cloud-native technology architecture is application-centric, it places relentless emphasis on making infrastructure better match applications and on “delivering” infrastructure capabilities to applications more efficiently, rather than the other way around.

Accordingly, Kubernetes, Docker, the Operator pattern, and the other open source projects that play key roles in the cloud-native ecosystem are the technical means by which this idea lands in practice. Being application-centric has been the main thread guiding the cloud-native ecosystem and the Kubernetes project in their flourishing to date.
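
To make the “application-centric” idea concrete, the sketch below reduces the reconcile-loop pattern that Kubernetes controllers and Operators are built on to a self-contained toy: the user declares the desired state of an application, and a control loop continuously drives the observed infrastructure toward it. This is an illustrative sketch only; AppSpec, cluster, and reconcile are hypothetical names, not the real Kubernetes API.

```go
// A minimal, self-contained sketch of the reconcile-loop idea behind
// Kubernetes controllers and Operators. The names here are illustrative,
// not the actual Kubernetes client or controller-runtime API.
package main

import (
	"fmt"
	"time"
)

// AppSpec is the application-centric declaration: what the user wants.
type AppSpec struct {
	Name     string
	Replicas int
}

// cluster simulates the observed infrastructure state: running replicas per app.
var cluster = map[string]int{}

// reconcile compares desired and observed state and closes the gap one step at a time.
func reconcile(desired AppSpec) {
	observed := cluster[desired.Name]
	switch {
	case observed < desired.Replicas:
		cluster[desired.Name] = observed + 1 // scale up one replica per pass
		fmt.Printf("%s: scaled up to %d/%d\n", desired.Name, cluster[desired.Name], desired.Replicas)
	case observed > desired.Replicas:
		cluster[desired.Name] = observed - 1 // scale down one replica per pass
		fmt.Printf("%s: scaled down to %d/%d\n", desired.Name, cluster[desired.Name], desired.Replicas)
	default:
		fmt.Printf("%s: in sync at %d replicas\n", desired.Name, observed)
	}
}

func main() {
	desired := AppSpec{Name: "web", Replicas: 3}
	for i := 0; i < 5; i++ {
		reconcile(desired)
		time.Sleep(100 * time.Millisecond)
	}
}
```

The point of the pattern is the direction of adaptation: infrastructure is continuously adjusted to match the application’s declared state, not the reverse.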

In 2020, with the rapid development of containers and especially Kubernetes, the CNCF has quickly built a huge ecosystem of hundreds of open source projects around this “seed,” making the direction of cloud-native adoption increasingly clear: land in the form of containers, and carry “application-centric” through to the end.



The idea of DDD (domain-driven design) originated in 2004 and remained lukewarm for more than a decade; only in the last two years has it attracted more and more attention. Some say DDD owes its popularity to microservices. In fact, DDD is the most widely recognized design method for microservices, but its core ideas are not limited to microservices: microservices are an architectural style, whereas DDD is a way of thinking. The nine core characteristics in the definition of microservices align closely with DDD principles, which is part of why the industry is willing to adopt DDD approaches and practices in a microservices context.

Despite the growing attention on DDD, practice runs into a dilemma familiar from agile development: how do you adapt your organization to DDD? Agile development used to be criticized as too demanding on individuals, but that is not the real issue. On the surface, agile demands highly skilled developers; in reality, it demands a change in organizational structure, and many organizations are reluctant to move, which is why collaboration problems between business and technology are so hard to solve.

The same is true for DDD. In the past, the technology line might be split into Development Team 1 and Development Team 2, and the business line into Business Team 1 and Business Team 2. DDD, however, starts from the business perspective, carving the system into domains and then decomposing each domain into services, typically implemented on a microservices architecture. The old way of dividing teams simply does not fit, so the organizational structure itself becomes a direct obstacle to putting DDD into practice.
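
To illustrate what “carving the system into domains from the business perspective” can look like in code, here is a minimal sketch of a bounded context, assuming a hypothetical ordering domain; the aggregate, value object, and repository names are illustrative and not taken from any particular codebase.

```go
// A minimal sketch of a DDD-style bounded context for a hypothetical
// "ordering" domain. All names are illustrative.
package ordering

import "errors"

// Order is an aggregate root: the unit of consistency inside this domain.
type Order struct {
	ID    string
	Lines []OrderLine
	paid  bool
}

// OrderLine is a value object owned by the Order aggregate.
type OrderLine struct {
	SKU      string
	Quantity int
}

// Pay enforces a domain invariant inside the aggregate itself,
// rather than scattering the rule across services.
func (o *Order) Pay() error {
	if len(o.Lines) == 0 {
		return errors.New("cannot pay for an empty order")
	}
	o.paid = true
	return nil
}

// OrderRepository is the domain's view of persistence; the service that owns
// this bounded context supplies its own implementation (database, API,
// in-memory test double) without leaking infrastructure into the model.
type OrderRepository interface {
	Get(id string) (*Order, error)
	Save(o *Order) error
}
```

A microservice built around such a bounded context owns its domain model end to end, which is exactly why team boundaries that cut across domains sit so awkwardly with DDD.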

In concrete terms, teams fail to cooperate, each line shifts the blame to the others, and leaders complain that their people are not capable enough. As microservices and the middle-platform concept continue to heat up, DDD is expected to become even more popular in 2020, but these problems will also become more prominent.



Since 2018, the popularity of Service Mesh has skyrocketed. As the Kubernetes ecosystem matures, the size and complexity of Kubernetes-based applications will keep growing, and a Service Mesh will become essential for managing those applications effectively; demand for it will rise rapidly.
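
One way to see why a mesh helps with that complexity: the application keeps only its business logic, while retries, timeouts, mutual TLS, and telemetry move into a sidecar proxy configured by the mesh. The sketch below assumes an Istio-style sidecar sits next to the service; the in-cluster service name “inventory” is hypothetical, and none of the mesh behavior appears in the code itself.

```go
// A minimal sketch of what a service mesh changes for application code,
// assuming a sidecar proxy (for example, one injected by Istio) runs beside
// this service. The call is a plain HTTP request to a hypothetical
// in-cluster service; retries, timeouts, mutual TLS, and telemetry are
// applied by the sidecar through mesh configuration, not in this code.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The application only knows the logical service name.
	// "inventory" is an illustrative in-cluster service, not a real endpoint.
	resp, err := http.Get("http://inventory/stock/sku-123")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(string(body))
}
```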

We believe Istio will continue to play a central role in the Service Mesh space in 2020 as a technical implementation of the control plane. Istio has drawn so much industry attention because it is backed by Google’s internal engineering practices, rethought and refined; in China, major companies such as Alibaba are also involved. There may still be room for other players in the future, but consolidation will begin in 2020.

In the long run, we are likely to see something similar to what happened with Kubernetes: a winner emerges and companies standardize on it. The industry is currently building an ecosystem around Istio, and Istio looks the most likely to become the de facto Service Mesh standard.

In 2019, the use cases for Service Mesh solutions were still relatively simple. Looking ahead to 2020, we believe more companies will realize the value of Service Mesh through hands-on practice and accelerate its adoption by producing more successful user stories and cases. Perhaps 2020 will be the year of Service Mesh.

References: Yonyou Research Institute, the Yonyou Developer Competition, and the YonBuilder Development Center.