The core of BaaS is to encapsulate our back-end applications as RESTful APIs and expose them as services. To make these back-end applications easier to maintain, we need to split them into O&M-free microservices.

There is a balance to strike when splitting and merging microservices, because both operations carry costs. If we split too finely, our call links inevitably grow longer. A longer call link first means higher network latency, which is easy to understand: the longer the road, the more places there are for a "traffic jam". Second, it raises operation and maintenance costs: the longer the call link, the more fragile the whole chain becomes, because a failure at any single link breaks the entire call chain, and troubleshooting becomes harder.

On the other hand, if we split too coarsely, the call links are shorter, but the microservices are hard to reuse, not to mention that the high coupling leads to complex, redundant database table structures that are difficult to maintain later on. I drew a diagram to illustrate this.

![](https://ask.qcloudimg.com/http-save/5395074/s2ikpasw4f.webp?imageView2/2/w/1620/format/jpg)

Splitting

So if we want to split microservices properly, how should we go about it? As I mentioned last time, the dominant approach today is Domain-Driven Design, or DDD. DDD is an idea Eric Evans proposed in his 2004 book of the same name, but it stayed largely confined to Java circles until around 2014, when people realized it could guide the decomposition of microservices, and it came to broader attention. In a nutshell, DDD is a methodology: abstract the business into layers, analyze and define domain models, use those domain models to drive the system design, and finally break a complex business down into independently operable domain models.
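To make "defining a domain model" a little less abstract, here is a tiny, hypothetical sketch. The `Order` and `Money` names are my own illustration rather than anything from Evans' book or a specific project; the point is that the behaviour lives with the model, and the microservice boundary then follows the bounded context rather than a technical layer.

```typescript
// Value object: money never leaves the domain as a bare number.
interface Money {
  amount: number;
  currency: string;
}

// Entity inside a hypothetical "Order" bounded context.
class Order {
  private items: { sku: string; price: Money; qty: number }[] = [];

  constructor(public readonly orderId: string, public readonly userId: string) {}

  addItem(sku: string, price: Money, qty: number): void {
    if (qty <= 0) throw new Error("quantity must be positive");
    this.items.push({ sku, price, qty });
  }

  // Domain behaviour stays with the model instead of leaking into controllers.
  total(): Money {
    const amount = this.items.reduce((sum, i) => sum + i.price.amount * i.qty, 0);
    return { amount, currency: this.items[0]?.price.currency ?? "USD" };
  }
}
```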

In my own experience with microservice development, a microservice architecture as a whole is really a dynamic network structure that changes as the business evolves. DDD can help us analyze a good network structure early on, but we should think more about how to optimize that dynamic network as a whole: reducing the number of core nodes, protecting core nodes, reducing network depth, and so on.

How should we understand optimizing a dynamic network? We can run a thought experiment: suppose we break every function down into a microservice, any microservice node can call any other, and the more frequently two nodes call each other, the closer together they sit. Now consider: when the traffic to our website is stable, what does the network formed by all these microservice nodes look like?

First, the mutual pull between nodes means that strongly dependent, tightly coupled nodes draw closer and closer together, eventually gathering into clusters. Second, nodes that have nothing to do with the business logic gradually drift to the margins or even disappear. Looking at those clusters, nodes that sit too close together are not worth splitting apart; they should be a single microservice. Once each overly tight cluster has been merged into one microservice node, the clusters that are not too close to one another are our microservices.

So when we start a project, we don't have to agonize over how to split the microservices. Instead, keep watching and questioning whether each microservice node is reasonable. Treat it like a dynamic network: keep tweaking and optimizing, reducing the core nodes. Eventually it will follow your business through its stages and settle into a stable dynamic network structure.

![](https://ask.qcloudimg.com/http-save/5395074/q5ekgpl26j.webp?imageView2/2/w/1620/format/jpg)

Merging

As we saw above, the decomposed architecture is a dynamic network, so how should we merge or orchestrate it? One option is to aggregate the result of each HTTP request into an array or object, as the SFF does, and then return it. But here I want to introduce another orchestration idea: workflow.

![](https://ask.qcloudimg.com/http-save/5395074/8w3va0p56m.webp?imageView2/2/w/1620/format/jpg)

We can think of a user request as our respiratory system. The lungs are the SFF, and the microservice and FaaS nodes are the organs that need oxygen. We take a breath, the oxygen enters the lungs, and the blood circulation carries it through each organ in sequence; this is the request link. When each organ receives fresh blood, it takes up the oxygen and gives back carbon dioxide, and eventually the circulation carries the carbon dioxide back to the lungs to be exhaled; this is the data return link. Our organs are connected by the arrival of fresh blood, one event after another; that is the event stream, that is, connecting FaaS functions or microservices with a chain of events.
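To make the event-stream idea concrete, here is a minimal sketch in which an in-process event bus stands in for whatever event or workflow service you actually use; the event names and handlers are illustrative assumptions, not a specific product's API.

```typescript
type Handler = (payload: unknown) => Promise<void>;

// A toy event bus: real systems would use a managed event/workflow service.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  async emit(event: string, payload: unknown): Promise<void> {
    for (const handler of this.handlers.get(event) ?? []) {
      await handler(payload);
    }
  }
}

const bus = new EventBus();

// "Request link": the SFF emits an event instead of calling each node directly.
bus.on("order.created", async (order) => {
  console.log("inventory reserved for", order); // hypothetical inventory FaaS node
  await bus.emit("inventory.reserved", order);
});

// "Data return link": downstream nodes publish their results as further events.
bus.on("inventory.reserved", async (order) => {
  console.log("notify user about", order);
});

bus.emit("order.created", { orderId: "o-1" }).catch(console.error);
```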

Authentication

In fact, the security protection FaaS provides is usually placed on the trigger. The trigger's authorization or authentication type can be set to anonymous or function. Anonymous means anonymous users can access it without signature authentication; the function type requires signature authentication [4]. The signature algorithm's parameters use our account access key pair AK/SK [5]. The AK/SK is essentially the bank card and password of our cloud account; such sensitive credentials must stay on the server and must never appear in front-end code.
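As a rough illustration of what signature authentication means in practice, here is a hedged sketch of HMAC-based request signing. Each cloud provider defines its own canonical string-to-sign and header format, so treat the layout below as an assumption; the point it shows is why the SK must stay on the server: anyone holding it can sign requests.

```typescript
import { createHmac } from "crypto";

const ACCESS_KEY_ID = process.env.AK ?? "";      // AK: identifies the account
const ACCESS_KEY_SECRET = process.env.SK ?? "";  // SK: must never reach the front end

function signRequest(method: string, path: string, dateHeader: string): string {
  // Hypothetical string-to-sign; real providers define their own canonical form.
  const stringToSign = `${method}\n${path}\n${dateHeader}`;
  const signature = createHmac("sha256", ACCESS_KEY_SECRET)
    .update(stringToSign)
    .digest("base64");
  // Illustrative Authorization header value: AK identifies, signature proves.
  return `${ACCESS_KEY_ID}:${signature}`;
}

console.log(signRequest("GET", "/api/orders", new Date().toUTCString()));
```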

To secure back-end-to-back-end calls, we can use a VPC or an IP whitelist; those are easy to handle. What is harder is the trust problem between the front end and the back end, and JWT provides exactly such a trust chain. Of course, some cloud providers also offer more secure and easier-to-use BaaS authentication services, such as AWS IAM and Cognito.
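Here is a minimal JWT sketch of that trust chain, using the widely used `jsonwebtoken` package; the payload shape, secret handling, and expiry are illustrative assumptions.

```typescript
import jwt from "jsonwebtoken";

const JWT_SECRET = process.env.JWT_SECRET ?? "server-side-secret"; // never shipped to the browser

// After a successful login, the server issues a signed token to the client.
function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, JWT_SECRET, { expiresIn: "2h" });
}

// Every later request carries the token; any FaaS node can verify it locally
// without calling back to the login service. That is the trust chain.
function verifyToken(token: string): string {
  const payload = jwt.verify(token, JWT_SECRET) as { sub: string };
  return payload.sub;
}

const token = issueToken("user-42");
console.log(verifyToken(token)); // "user-42"
```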

Security is an important part of how we think about architecture, because failing to design for it leads directly to the loss of our assets. Authentication identifies users and prevents information leakage and malicious attacks. But according to my own statistics, 99% of the problems we run into day to day happen during the release of a new version.

As our project goes Serverless, code quality becomes even more important. Think about it: before Serverless, an accidental bug in a release affected only one application. After Serverless, if a core node ships a serious bug, the blast radius is every online application that depends on it.

However, don't worry too much; both microservices and FaaS can iterate independently and quickly. We used to have application iteration cycles of one to two weeks, but with Serverless applications, each node can be released at any time thanks to independent operation and maintenance.

In summary, both microservices and FaaS iterate quickly and can fix problems quickly, but we cannot rely on that ability every time something goes wrong. Is there a way to catch problems ahead of time, so that we are both fast and steady? Today's best practice in software engineering is the code release pipeline.

Release pipeline

The release pipeline has three main parts:

  1. Pre-release verification: CI/CD runs the tests and checks code test coverage;
  2. Regression testing with simulated (recorded) traffic; once it passes, the code is released to the gray (canary) environment;
  3. Official launch: the gray environment replaces the production environment. The result produced by each node in the pipeline serves as the required input parameter of the next node.
![](https://ask.qcloudimg.com/http-save/5395074/1twegmspox.png?imageView2/2/w/1620)

Let's walk through the picture above step by step (a code sketch of the chain follows the list).

  • We merge our code into a designated branch; I usually use the develop branch.
  • Git's hooks trigger the pipeline, which starts the build, package, and test process.
  • The test node simply runs all the test cases and collects coverage.
  • After the coverage check passes, the code instance is verified by replaying recorded traffic in a simulation.
  • Once the simulation passes, the code instance is released to the gray environment.
  • Once online, a small amount of traffic is routed to the gray environment according to the gray strategy to verify the gray version.
  • During the gray window, say two hours, if gray verification shows no anomalies, the gray version replaces the official version; otherwise the gray version is discarded immediately to stop the loss.
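Here is the sketch promised above: the pipeline modeled as a chain of stages in which each node's output becomes the next node's input. The stage names, coverage threshold, and two-hour window are illustrative assumptions, not any specific vendor's workflow API.

```typescript
interface BuildArtifact { version: string; coverage?: number; grayHealthy?: boolean }

type Stage = (input: BuildArtifact) => Promise<BuildArtifact>;

// Run all test cases and record coverage (hard-coded here for illustration).
const runTests: Stage = async (a) => ({ ...a, coverage: 0.92 });

// Gate before the traffic simulation: stop the pipeline if coverage is too low.
const checkCoverage: Stage = async (a) => {
  if ((a.coverage ?? 0) < 0.8) throw new Error("coverage gate failed");
  return a;
};

// Route a small slice of traffic to the gray environment and record its health.
const grayRelease: Stage = async (a) => ({ ...a, grayHealthy: true });

// Replace the official version, or discard the gray version to stop the loss.
const promoteOrRollback: Stage = async (a) => {
  if (!a.grayHealthy) throw new Error("gray verification failed, rolling back");
  console.log(`promoting ${a.version} to production`);
  return a;
};

// The pipeline itself: each stage receives the previous stage's result.
async function releasePipeline(version: string): Promise<void> {
  let artifact: BuildArtifact = { version };
  for (const stage of [runTests, checkCoverage, grayRelease, promoteOrRollback]) {
    artifact = await stage(artifact);
  }
}

releasePipeline("1.4.0").catch((e) => console.error("pipeline stopped:", e.message));
```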

This is basically how large Internet companies run their releases today. If you are not familiar with it, you can try building it yourself with Serverless workflow or the workflow tools your cloud provider offers. On top of this process, many companies add environment-isolated pipelines and safety checkpoints in pursuit of higher stability. For example, the test environment is isolated from the online environment and is used to reproduce failures; every time code enters the release pipeline, it must first run through the test environment and pass the safety checkpoint before entering the online environment's pipeline.

![](https://ask.qcloudimg.com/http-save/5395074/5m9v6pah3s.webp?imageView2/2/w/1620/format/jpg)