This article is adapted from the third go-zero live broadcast of “Go Open Source Talk”. The video is long and was split into two parts; the content has been trimmed and restructured for this article.

Hi everyone, I’m glad to join Go Open Source Talk to share some of the stories, design ideas, and usage of this open source project. Today’s project is go-zero, a web and RPC framework that integrates a wide range of engineering practices. I’m Kevin, the author of go-zero, and my GitHub ID is Kevwan.

An overview of go-zero

Although go-zero has only been open source since August 7, 2020, it had already been battle-tested at scale in production and embodies nearly 20 years of my engineering experience. After open-sourcing it, I received positive feedback from the community, and the project gained 5.9K stars in a little over 5 months. It has repeatedly topped GitHub’s daily, weekly, and monthly Go trending lists, and won Gitee’s Most Valuable Project (GVP) award and Open Source China’s Most Popular Project of the Year. The WeChat community is also very active, with a group of more than 3,000 members in which go-zero users share their experience and discuss the problems they run into.

The middle three layers in the figure below are the service governance components provided by go-zero. They cover the main capabilities of microservice governance and require no configuration from developers; the default settings have been tuned in large-scale production systems.

Pain points in microservice system design

1. How to split the microservice system?

  • Split coarse-grained first, then refine; do not over-split, and do not create one service per interface

  • Split horizontally rather than vertically, and try to keep call chains to no more than three layers

  • Keep calls one-directional; avoid circular calls between services

  • Do not share the same data definitions across layers; otherwise one change may ripple into others

  • Serial calls with no dependencies between them can be made parallel through the core/mr package, reducing response latency without increasing system load (see the sketch after this list)
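
To make the last point concrete, below is a minimal, self-contained sketch of turning two independent serial calls into parallel ones with mr.Finish from go-zero’s core/mr package. The import path matches the repository referenced in this article, and the sleeps are placeholders standing in for real RPC or database calls:

```go
package main

import (
	"fmt"
	"time"

	"github.com/tal-tech/go-zero/core/mr"
)

func main() {
	var user, order string

	// mr.Finish runs the given functions concurrently and returns the
	// first error encountered, or nil if all of them succeed.
	err := mr.Finish(func() error {
		// placeholder for an independent downstream call
		time.Sleep(50 * time.Millisecond)
		user = "user-42"
		return nil
	}, func() error {
		// another independent call, executed in parallel with the first
		time.Sleep(50 * time.Millisecond)
		order = "order-1001"
		return nil
	})
	if err != nil {
		fmt.Println("one of the calls failed:", err)
		return
	}
	fmt.Println(user, order)
}
```

Because the two calls run in parallel, the overall latency is roughly that of the slowest call rather than the sum of both.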

2. How to ensure high concurrency and high availability?

  • Good data boundaries

    Data boundaries are at the heart of microservice splitting. Shared data should not be accessed directly across services; it should be exposed through RPC.

  • Efficient cache management

    Caching is critical for a service to support high concurrency. Not only should the caching be well designed, tooling should also keep business developers from making mistakes as much as possible, because caching code is quite difficult to write correctly. goctl does a good job here by generating model code that manages the cache automatically.

  • Graceful circuit breaking and load shedding

    A microservice system is usually composed of a large number of services, and with more services there is naturally a higher risk that one of them fails. We cannot let a single faulty service make the whole system unavailable; that is a service avalanche. To avoid an avalanche, we need to isolate faulty services effectively and degrade gracefully so that the rest of the system stays available. Circuit breaking and load shedding are among the most effective means of preventing a service avalanche (a minimal breaker example follows this list).

  • Elastic scaling

    A system facing high concurrency needs to be able to scale out in time when traffic spikes. go-zero’s adaptive load shedding works well together with the automatic horizontal scaling of a Kubernetes cluster.

  • Clear definition of resource usage

    To keep the system stable, we must define resource usage clearly, for example at what utilization we start adding capacity: 50% or 70%? There must be an explicit, agreed definition of acceptable system resource usage.

  • Efficient monitoring and alerting

    I keep saying internally: no measurement, no optimization. We must have efficient monitoring and timely alerting for the system, so that we have a clear picture of how the whole system is running.
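
As a concrete illustration of the circuit breaking point above, here is a minimal sketch that assumes the package-level breaker.Do helper from go-zero’s core/breaker package; the breaker name "user-rpc" and the always-failing call are placeholders for illustration:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/tal-tech/go-zero/core/breaker"
)

func main() {
	// Calls made under the same name share one breaker, which tracks recent
	// successes and failures. Once the failure ratio gets high enough, the
	// breaker rejects requests quickly instead of piling more load onto the
	// failing dependency.
	for i := 0; i < 300; i++ {
		err := breaker.Do("user-rpc", func() error {
			// placeholder for a call to a downstream service that is failing
			return errors.New("downstream timeout")
		})
		if err != nil && i%100 == 0 {
			fmt.Println("call", i, "error:", err)
		}
	}
}
```

If the failure ratio stays high, later calls are increasingly rejected before the callback is even invoked, which provides exactly the isolation described above.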

Where to start with large microservices projects?

A microservice system generally looks like this, but we don’t have to start a business with microservices. Let’s take a rough look at how a typical large microservice system evolves.

  • Start with a monolithic service

    At the beginning of a project, we should not blindly pursue advanced technology, because most projects will never reach the point of very high concurrency. What we need first is to serve the business.

  • Business first, technology in support

    As I say in the team: architecture comes from the business and serves the business. Any technology that does not serve the business is self-indulgent. We need to make meeting business needs the top priority, and the technology needs to support the current and expected growth of the business. Of course, it’s nice to have frameworks and best practices in place that fit your business, but don’t make the technology stack so complex that the focus shifts from the business to the technology itself.

  • Service metrics monitoring

    As the business evolves, we may need to upgrade and refactor the technology, but as we always say: no measurement, no optimization. Therefore, we need to add monitoring of the key metrics of the entire service in time, so that any necessary refactoring is carried out with a real understanding of the system.

  • Data splitting + cache management

    When the business grows to a certain point, monitoring shows us that the service has to be split. The first step is to split the data clearly; once the data is split, we can add the corresponding cache management to keep the data layer stable.

  • Service split

    Compared with splitting the data, splitting the services is relatively easy. The upper-level services can be mapped out from the already split data, and because the services are stateless, the split itself is straightforward.

  • Support system construction

    As the business develops, day-to-day system maintenance becomes more complicated and error-prone. At this point we need to build the supporting systems: how to deploy a new service, how to update an old one, whether to use Kubernetes, and so on.

  • Automation + engineering efficiency

    When the business grows to a certain scale, engineering efficiency becomes a big issue. goctl is designed to solve automation and engineering efficiency problems. Its built-in generation of API code, RPC code, model code, Dockerfiles, Kubernetes deployment files, and so on saves us a lot of time and avoids errors in business development (a simplified sketch of the cache-aside pattern that the generated model code manages follows below).
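
The cache management mentioned earlier, which goctl’s generated model code automates, follows a cache-aside (read-through) pattern. The sketch below is not the generated code itself; it is only a simplified, self-contained illustration of the pattern, with an in-memory map standing in for Redis and a hypothetical loadUserFromDB function standing in for the real SQL query:

```go
package main

import (
	"fmt"
	"sync"
)

// In-memory stand-in for Redis, used only for illustration.
var (
	mu    sync.Mutex
	cache = map[int64]string{}
)

// loadUserFromDB is a hypothetical stand-in for the real SQL query.
func loadUserFromDB(id int64) (string, error) {
	fmt.Println("cache miss, querying database for user", id)
	return fmt.Sprintf("user-%d", id), nil
}

// getUser implements the cache-aside read path: check the cache first,
// fall back to the database on a miss, then populate the cache.
func getUser(id int64) (string, error) {
	mu.Lock()
	if v, ok := cache[id]; ok {
		mu.Unlock()
		return v, nil
	}
	mu.Unlock()

	v, err := loadUserFromDB(id)
	if err != nil {
		return "", err
	}

	mu.Lock()
	cache[id] = v
	mu.Unlock()
	return v, nil
}

func main() {
	fmt.Println(getUser(42)) // first call misses the cache and hits the database
	fmt.Println(getUser(42)) // second call is served from the cache
}
```

A production implementation also has to deal with expiration, serialization, and cache penetration, which this sketch deliberately leaves out.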

go-zero component deep dive + go-zero best practices (to be continued)

If you want to get a better feel for the go-zero project, head over to the official website to explore concrete examples.

Video Playback Address

www.bilibili.com/video/BV1Jy…

The project address

Github.com/tal-tech/go…

You are welcome to try go-zero, and please star the project to support us!