Author: Li Xin, technical director and head of the Mobile Platform Department at Tianhong Fund, responsible for the overall technical architecture and technical team management of Tianhong's mobile direct-sales platform.
Before that, I worked in Huawei's middleware technology team as a Level 6 technical expert, leading the planning, design, construction, and implementation of several Huawei software cloud-computing products, including aPaaS, ASPaaS, a service governance platform, and a distributed service debugging framework. Earlier still, I was technical director in the Operations Product Center at Dangdang, responsible for refactoring and optimizing back-office e-commerce systems such as storage, logistics, and customer service, as well as for technical management.
I have worked across many technical fields for more than ten years, from project development to middleware development to platform development, and have accumulated some experience in parallel computing, large-scale distributed services and governance, middleware cloud services (PaaS), APM monitoring, basic development platforms, data integration, and other areas. If you have questions or good suggestions in these fields, you are welcome to add me on WeChat to discuss them.
At the ArchSummit Shenzhen conference in early July, Li Xin gave a talk titled "Technological Innovation in the Large-scale Servitization of Yu'ebao", which was warmly received by the audience. The slides and lecture notes have now been organized and are shared here; I hope they give you some inspiration, and you are welcome to leave comments for discussion.
The sharing is divided into three parts:

- The evolution of the overall architecture of Yu'ebao;
- How we carried out the service-oriented transformation of the real-time fund sales platform and the big data platform;
- The influence of "servitization" on our operations and R&D model, and our coping strategies.
The evolution of the overall architecture of Yu'ebao
Yu'ebao is essentially a money market fund, backed by the Tianhong Zenglibao Fund. Because of its relationship with Alipay, Yu'ebao is both a wealth-management product and a payment product. With this dual nature, plus the minimalist user experience typical of Internet products, Yu'ebao quickly became a hit after launch, achieving astonishing growth in just five years. It is no exaggeration to say that the emergence of Yu'ebao opened the era of mass-market wealth management in China.
To keep up with explosive business growth, the technical architecture of Yu'ebao has gone through four major changes in five years. Behind each change, Tianhong's technical team weighed and balanced system scale, scalability, and upgrade cost.
When Yu'ebao first launched, the problem to solve was starting from scratch.
At the time there were no comparable Internet finance products to learn from, so we adopted a typical traditional enterprise financial architecture. We used a self-built IDC room, with hardware load balancers receiving transaction requests. The front end was a small preprocessing cluster of two WebLogic servers; after light processing, requests were sent to a back-end cluster built on Jinzheng's fund trading middleware for business processing (KCXP and KCBP are Jinzheng's message-oriented middleware and business middleware, respectively). Business data was stored in a small cluster of two Oracle database servers serving as the online database, with a history database of the same architecture alongside it. The hardware was minicomputer equipment, and data backup used EMC products.
This was a typical IOE architecture: a full set of commercial hardware and software, at high cost. Although many parts of the system used clusters, most were in active/standby mode rather than load-balanced distributed mode, so single points in the system were under heavy load pressure. The database in particular was essentially a single-library setup (the other library was a standby), with poor scalability.
The system's designed capacity ceiling was tens of millions of users. In the traditional agency-sales model of fund distribution, most fund investors were buying for wealth management, so tens of thousands to hundreds of thousands of accounts might be opened per day. Since Yu'ebao was connected to Alipay, whose user base numbered in the tens of millions, that became the capacity target in the product design, and by our estimates this design capacity would last for years. As it turned out, we underestimated Yu'ebao's popularity: within about ten days of launch, new accounts exceeded one million, a pace at which database capacity would be exhausted in about three months. Indeed, our customer count reached ten million within three months, and the life of the Yu'ebao Phase I system was just three months!
So almost as soon as Phase I went live, the team had to start planning expansion, aiming to scale both capacity and computing power by 30 to 50 times. If we had expanded horizontally using the pure commercial hardware and software of the Phase I system, the cost would have reached 100-200 million yuan, which a small fund company like Tianhong could not afford at the time. So, at Ali's suggestion, we decided to move the whole system to the cloud.
There was a lot of pressure behind that decision, because no financial company and no fund company had done it before; we became the first. But under great cost pressure we took the step, and that was Yu'ebao 2.0!
In the Yu'ebao 2.0 architecture, the hardware load balancer was replaced by SLB, the software load balancer on Ali Cloud. Virtual compute units (ECS) on Ali Cloud replaced the minicomputers as servers for the front-end and middleware services, and the WebLogic instances on the front-end servers were replaced with Ali Cloud's web service.
The biggest change was at the database level. The original single Oracle database was replaced with an RDS cluster of 50 nodes on Ali Cloud, which by our calculations amounted to 50 groups of business nodes. With scalability in mind, however, we did not simply split the data into 50 groups. Instead, the core business tables were split into 1,000 pieces using the user account ID as the shard key, with each node handling 20 pieces (physical child tables). The advantage is that if the system later hits a bottleneck and needs to expand, the split algorithm does not need to change: data only needs to be migrated evenly at the library level, avoiding any re-splitting of tables.
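The scheme above can be sketched in a few lines. This is an illustrative model, not the actual routing code: the point is that the shard count (1,000) is fixed forever, and expansion only changes the shard-to-node mapping, so data moves in whole shards.

```python
# Hypothetical sketch of the sharding scheme described above: the shard
# index is fixed at 1000 and never changes; only the shard -> node
# mapping changes when the cluster is expanded.

TOTAL_SHARDS = 1000  # logical shards (physical child tables)

def shard_for(user_id: int) -> int:
    """Map a user account ID to one of 1000 logical shards."""
    return user_id % TOTAL_SHARDS

def node_for(user_id: int, node_count: int) -> int:
    """Map a shard to a database node. With 50 nodes, each node owns 20
    consecutive shards. Expanding to 100 nodes only changes this mapping
    (10 shards per node); shard_for() is untouched, so migration happens
    at whole-shard granularity and no table is ever re-split."""
    shards_per_node = TOTAL_SHARDS // node_count
    return shard_for(user_id) // shards_per_node

# With 50 nodes, user 12345 lands on shard 345, node 17 (345 // 20).
```

After doubling the cluster to 100 nodes, the same user routes to shard 345 still, just on a different node, which is exactly why the text says no re-splitting of tables is needed.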
After moving to the cloud, total database capacity increased by dozens of times, and transaction processing efficiency rose from 1.2 million transactions per hour in Phase I to nearly 20 million per hour. Maximum clearing time dropped from more than seven hours to less than two, while total cost less than doubled. The move to the cloud therefore had a very obvious effect on improving overall system efficiency and reducing cost.
The Yu'ebao 2.0 architecture ran stably for nearly three years, undergoing several small upgrades and optimizations along the way. It strongly supported all kinds of business innovation during that period, but it also buried some "pitfalls" while supporting the business's rapid growth.
In 2016 we decided on a major system upgrade. Its keynote was "business logic refactoring and clearing-process optimization": this was Yu'ebao 3.0!
The business logic at this stage was far more complex than three years earlier, so the 3.0 architecture significantly expanded the computing units compared with 2.0. Thanks to the large buffer reserved in the database early on, the database did not become a bottleneck at this stage and stayed at 50 nodes.
After this upgrade, overall computing capacity improved greatly; processing efficiency more than doubled over the 2.0 era, supporting a peak of more than 400 million daily transactions during the 2016 Spring Festival. However, clearing time increased significantly compared with 2.0, which became a "hidden danger" in the system.
In 2017 we planned the Yu'ebao 4.0 upgrade, to support Alipay's expansion into offline payment scenarios, extend business promotion into third- and fourth-tier cities, and resolve the hidden danger of long clearing times.
The system was already very large at this stage, and continuing to scale horizontally on the 3.0 architecture would have increased cost linearly. After weighing the options, we decided to first optimize single-node processing performance, improving efficiency and load capacity, and only then expand capacity, thereby reducing the overall upgrade cost.
So this upgrade first merged direct sales and agency sales into one, with the same computing unit handling both, improving single-point processing efficiency. On that basis, we expanded the computing units from 340 nodes to 480 and expanded the database fourfold. Through these two steps, optimization followed by expansion, the system's capacity and computing power increased four- to eightfold. The new architecture carried us smoothly through the transaction peaks of Double 11 and the 2017 Spring Festival.
At this point you may have a question: in service-oriented transformations, services are usually split ever more finely, so why did we go the other way and merge services? Because Yu'ebao's business model is mature, the need to guarantee scalability through fine-grained services is not so strong; on the other hand, at this system scale, expansion cost becomes the key consideration. Weighing the two, increasing the complexity of a single point reduced the overall cost.
By now a complete technology ecosystem has been established around Yu'ebao. At its core are Yu'ebao's product, account, transaction, and clearing modules. On top of its two sets of interfaces, real-time calls and file exchange, we have built an e-commerce big data analysis system and a series of auxiliary support systems. The Yu'ebao system also interacts extensively with third-party systems, including Alipay, regulators, and direct-sales channels.
How we transformed the real-time fund sales platform and the big data platform into services
Building the Yu'ebao system directly trained Tianhong's technical team and taught us what a large-scale Internet application really looks like, which in turn drove the service-oriented transformation of Tianhong's own fund direct-sales platform. Next I will describe this in detail from two angles: the real-time fund trading platform and the big data platform.
Before diving in, let me briefly introduce the business of a fund company.
A fund company's most important business is "buying and selling" funds. Transaction requests come in from direct-sales and agency-sales channels, and the core trading system handles operations such as purchase, subscription, redemption, and conversion; this process involves a series of settlements with payment channels and banks. At the same time there is a large amount of clearing, settlement, and TA clearing, during which data is exchanged with regulators such as the banking and securities regulatory commissions and Zhongdeng (China Securities Depository and Clearing). The large pool of funds gathered is then, under risk control, invested in the stock market, bond market, money market, and other markets to generate returns. Around this core business are middle- and back-office functions such as investment research, fund product management, risk control, and customer service.
That, in outline, is the daily business model of a fund company.
In the early construction of Tianhong's fund sales systems there was no concept of servitization. The model then was to develop an independent sales and clearing system for each type of business. Because business volumes were generally small, these systems tended to adopt a monolithic architecture without considering horizontal scalability. After years of development and accumulation, multiple direct-sales and agency-sales trading systems coexisted internally, but accounts were not shared between systems and users' asset data could not be unified, resulting in a poor user experience. Meanwhile, functionality was heavily duplicated across systems, wasting hardware and software resources and making version control very troublesome.
This persisted long after we migrated wholesale to the cloud, which is why "servitization" is not just about moving to the cloud; it is above all a shift in architectural thinking.
Having learned from the pain, we decided to extract the common capabilities of these systems, including user accounts, transactions, payments, assets, and settlement, and offer them externally as independent services. The original business systems then act more like trading venues and can be made much lighter and easier to scale. Users' transaction requests are uniformly admitted through a security gateway layer; we currently use two gateways, SLB and a mobile gateway. The whole platform is built on the IaaS and PaaS capabilities of Ali Cloud and Ant Financial Cloud, on top of which we have also built log monitoring, service governance, APM, operations control, and other capabilities.
That is the current overall service-oriented architecture of our real-time trading platform.
Data analysis is important work for a financial company. Our big data platform collects data along several dimensions, from the real-time online business systems, the clearing and settlement systems, operational activities, and user tracking points in the web and app clients, into an ODS (operational data store) and then into a data warehouse, where master data is extracted according to a predefined data model. On this basis we build multi-level data marts around subjects such as users, operations, assets, transactions, and risk control. With these data marts in place, we can wrap the data in a layer of services and build data applications on top. One kind of application is offline analysis, including retention analysis, marketing analysis, and so on. Another kind uses this aggregated data in real-time business, especially marketing campaigns: on top of these data marts we have built advanced operational activities such as wealth-management contests based on users' overall assets, including the "Lucky List", the "Annual Bill", and the "Financial Planner". In addition, from these different dimensions of data we have developed an index measuring users' overall wealth-management ability, the "Financial Quotient Index".
These capabilities are built on the big data processing and analysis capabilities of Ali Cloud, using MaxCompute, DTS, QuickBI, and other services. In building up the big data platform we follow this pattern: first, plan the data model; once the model exists, use ETL to extract, transform, and clean the data; then monitor data quality with a series of rules; share the data with other applications in service form; and finally, evaluate the data assets regularly and keep optimizing and adjusting the model based on the results. In this way we form a closed loop of data operations.
Data quality monitoring is important here. Offline analysis generally does not demand high accuracy or timeliness, but the secondary use of aggregated data in real-time business does: in operational activities that rank users in contests and hand out rewards, every transaction record is tied to money, and users are sensitive to accuracy, so mistakes produce floods of customer complaints. We therefore built a rule engine that uses hundreds of predefined rules to sample or fully test the data; once an anomaly is detected, it automatically triggers a manual or automated data-correction task.
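The rule-engine idea above can be sketched as follows. This is a minimal illustration, not Tianhong's actual engine: the rule names, record fields, and sampling mechanism are all assumptions made up for the example.

```python
import random

# Minimal sketch of rule-based data quality checking: each rule is a
# predicate over a record; records in the sample that violate any rule
# are collected so a correction task could be triggered downstream.

RULES = {
    "non_negative_assets": lambda r: r["assets"] >= 0,
    "balance_matches_ledger": lambda r: abs(r["assets"] - sum(r["positions"])) < 0.01,
}

def check(records, sample_rate=1.0, seed=None):
    """Run all rules over a sample of records; return the violations."""
    rng = random.Random(seed)
    violations = []
    for rec in records:
        if rng.random() > sample_rate:
            continue  # sampling mode; sample_rate=1.0 means a full check
        for name, rule in RULES.items():
            if not rule(rec):
                violations.append((rec["user_id"], name))
    return violations

records = [
    {"user_id": 1, "assets": 100.0, "positions": [60.0, 40.0]},
    {"user_id": 2, "assets": -5.0, "positions": [-5.0]},
]
found = check(records)  # user 2 violates the non-negative rule
```

In a real engine the `violations` list would feed a correction workflow rather than be returned directly, and the rules would be loaded from configuration instead of hard-coded.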
The underlying servitization framework we currently use is SOFARPC, provided by Ant Financial Cloud. SOFARPC offers a fairly easy way to expose and consume services: based on Spring's extension-tag mechanism, a Spring bean instance can be exposed as a remote service under a specific contract, and a remote service can likewise be imported in the form of an interface. It also provides a chained filter mechanism through which all processing of a service call request passes in series; we can use it to implement custom service governance strategies, including service mocking, online data collection, and so on.
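The chained-filter pattern mentioned above can be sketched generically. This is not SOFARPC's actual API (which is Java and Spring-based); it is just an illustration of the pattern: each filter may inspect the request, short-circuit, or delegate to the next link in the chain.

```python
# Generic sketch of a chained service-filter mechanism like the one the
# text describes (NOT SOFARPC's real API): filters are composed so each
# one can process the request and then delegate to the next.

class Filter:
    def invoke(self, request, next_filter):
        return next_filter(request)

class LoggingFilter(Filter):
    def invoke(self, request, next_filter):
        print(f"calling {request['service']}")  # governance hook
        return next_filter(request)

def build_chain(filters, terminal):
    """Compose filters into one callable ending at the real invoker."""
    handler = terminal
    for f in reversed(filters):
        handler = (lambda flt, nxt: lambda req: flt.invoke(req, nxt))(f, handler)
    return handler

chain = build_chain([LoggingFilter()], lambda req: {"status": "ok"})
result = chain({"service": "AccountService.query", "args": [42]})
```

Because every request flows through the same chain, governance features like mocking or data capture can be added or removed without touching business code, which is exactly the property the text exploits later.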
A distributed service framework alone is not enough to land servitization smoothly. To travel the road of enterprise servitization you need two legs: one is the distributed service framework, the other is service governance, and only when both legs are strong can you run steadily. Ali Cloud, combining its online resource scheduling capabilities, gives SOFARPC fairly complete service lifecycle management, covering operations such as bringing services online and offline and scaling them out and in. Ant also provides a component called Guardian, which implements automatic protection for online services through circuit breaking and rate limiting, similar to Netflix's Hystrix component; it is worth a look if you are interested.
The mobile development department I work in adopts mPaaS, Ant's mobile development platform. mPaaS is a modular plug-in management framework similar to OSGi: the various basic capabilities an app needs can be integrated as plug-ins, and its remote service invocation is based on SOFARPC. As the figure above shows, mPaaS provides a unified service gateway enabling remote invocation of SOFA services on the server side, plus a log gateway and a message gateway that provide tracking-data collection and message push for the app. With mPaaS, mobile application development becomes comparatively convenient.
That covers the current overall servitization of our real-time fund sales platform. Next I will introduce the influence of servitization on our R&D and operations.
The influence of "servitization" on our operations and R&D model, and our coping strategies
The essence of servitization is "splitting": the original monolithic applications are broken into application clusters and service clusters, large and small, dispersed across the nodes of the network, with different teams responsible for different parts. Against this background, operations and R&D encounter a series of new problems and challenges, including fault localization, untangling call relationships, debugging in a cluster environment, and guaranteeing transaction consistency in a distributed environment.
Let's look at how we solve these problems.
Only by collecting online service logs efficiently can services be monitored and controlled well.
Traditional log collection uses a logging component such as Log4j to write logs to disk, and a collection component such as LogStash or Flume to incrementally gather what lands on disk. This requires a large number of disk I/O operations. For an online server, the biggest performance bottleneck is disk I/O, and in high-concurrency, high-load environments its impact on system performance is multiplied. Our tests showed that under full load, log collection accounted for about 40% of the system's total resource consumption.
To reduce resource usage and collect service logs more efficiently, we developed a log collection mode with no disk I/O. It works as follows: in a manner similar to Spring AOP, we intercept the service request and collect information such as call latency and service status; at the same time, specific input and output parameters are captured according to custom configuration. All of this is wrapped in a message object and thrown into an in-memory message queue for buffering. A separate thread preprocesses the messages if necessary, pushing the results back into the in-memory queue, and finally an independent sending thread ships the raw or preprocessed log data to the remote log server.
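The pipeline above can be sketched in miniature. This is an illustrative model only: the real system is Java/Spring AOP, and the "remote log server" here is stubbed with a list. The key property shown is that the business thread never touches disk or network; it only enqueues into memory.

```python
import queue
import threading
import time

# Sketch of the disk-I/O-free collection pipeline described above: an
# interceptor records call metadata into an in-memory queue; a separate
# sender thread drains the queue and ships the messages (stubbed here).

log_queue = queue.Queue(maxsize=10000)
shipped = []  # stand-in for the remote log server

def traced(service_name, fn, *args):
    """AOP-style interceptor: time the call, enqueue a log message."""
    start = time.perf_counter()
    try:
        result = fn(*args)
        status = "ok"
        return result
    except Exception:
        status = "error"
        raise
    finally:
        msg = {"service": service_name, "status": status,
               "latency_ms": (time.perf_counter() - start) * 1000}
        try:
            log_queue.put_nowait(msg)  # never block the business thread
        except queue.Full:
            pass  # drop a log rather than stall the service

def sender():
    while True:
        msg = log_queue.get()
        if msg is None:  # shutdown sentinel
            break
        shipped.append(msg)  # real impl: batch and send over the network

t = threading.Thread(target=sender, daemon=True)
t.start()
traced("AccountService.query", lambda uid: {"uid": uid}, 42)
log_queue.put(None)
t.join()
```

The `put_nowait`-and-drop choice mirrors the design goal in the text: under overload, losing a log line is preferable to adding latency to a transaction.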
On the log collection server, incoming logs are first thrown into a unified in-memory message queue, then distributed by time segment into corresponding secondary queues, each analyzed and written to storage by an independent parser instance. With this pure-memory, fully asynchronous approach, we avoid resource lock contention as much as possible and squeeze out the server's performance for efficient log processing.
With this system, the failure of any service node can be detected by our analyzers within one to two seconds, without blocking.
If the distributed service framework is the "coffee", then call-chain monitoring and analysis in application performance management is the "milkshake": coffee and milkshake make a perfect match!
Compared with conventional log collection, the call chain focuses more on the relationships between logs. It ties logs from different service nodes together with a unified traceId, giving a complete description of a single request. Through the call chain we can discover service performance bottlenecks, missing tracking points, network quality, and a host of other information, and by aggregating call chains we can also capture more complex information about the load and health of a service cluster.
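The traceId mechanism can be illustrated in a few lines. This is a toy model (in a real RPC framework the ID would travel in the call context over the network, and spans would also carry parent-span IDs and timing): the point shown is that the entry point mints the ID once and every downstream span reuses it, which is what lets logs from different nodes be joined into one trace.

```python
import contextvars
import uuid

# Minimal sketch of call-chain correlation: a traceId is created at the
# entry point, carried in context, and stamped on every span so logs
# from different service nodes can be joined into one request trace.

trace_id_var = contextvars.ContextVar("trace_id", default=None)
spans = []  # stand-in for the log records shipped to the log center

def start_span(service):
    tid = trace_id_var.get()
    if tid is None:          # entry point: mint a new traceId
        tid = uuid.uuid4().hex
        trace_id_var.set(tid)
    spans.append({"trace_id": tid, "service": service})
    return tid

# One request flowing through two "services":
tid1 = start_span("gateway")
tid2 = start_span("AccountService")  # inherits the same traceId
```

Grouping `spans` by `trace_id` afterwards reconstructs the per-request call chain, and aggregating across many traces yields the cluster-level load and health views the text mentions.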
The call chain effectively meets monitoring needs in a distributed environment, but in using it we also have to balance full collection against sampling, automatic instrumentation against manual instrumentation, real-time statistics against precomputed statistics, and so on. How to strike the balance depends on your own circumstances and technical strength. As time is short I won't expand on this today; if you are interested, you can follow my in-depth training "Exploration and Practice of Microservice Governance" two days after the conference.
As mentioned earlier, landing enterprise servitization means walking on two "legs": one is the service framework, the other is service governance, and to travel smoothly both must be strong.
Through several years of effort, we have built a preliminary servitization governance system that covers most service monitoring and control needs. Most of the control capabilities are built on Ant Financial Cloud, while the monitoring capabilities integrate conventional logging with APM monitoring on top of SOFARPC.
For service monitoring, we extract call latency, call status, call exceptions, and other information from each service node and aggregate it in the log center for comprehensive statistics, producing various monitoring reports and an overall dashboard. Error information lets us localize and delimit faults online; aggregated call volumes at each level give us the real-time "water level" of the system, enabling objective capacity planning; call latency and error rates let us infer the health of online services; and combining these data with the time dimension shows how service quality evolves over time. In this way we maintain a comprehensive, real-time picture of the whole service cluster and can make sound control decisions on that basis.
Through service control, lifecycle management instructions are delivered to the release system to bring services online or offline and to scale them out or in; instructions for rate limiting, degradation, and load adjustment are delivered directly to each service node, where the service framework's SDK carries out the corresponding actions.
Thus, through monitoring of services (the left side) and control of services (the right side), service governance forms a closed loop.
In the process of servitization, the first difficulty R&D encounters is certainly debugging. Services that used to live inside one monolithic application are split across different teams and deployed on different servers, leaving only a service interface locally. Debugging then requires either a point-to-point direct connection or mocking. With traditional mocking you write piles of mock code; with mockito, for example, you write a pile of when(...).thenReturn(...) statements, which couples the mocks tightly to the test code.
Using the filter mechanism provided by the distributed service framework, we developed a Mock filter, with the service name, input parameters, and output of each service to be mocked defined in detail in a Mock data file. When a request comes in whose service name and input parameters match a Mock data definition, the output defined in the file is deserialized and returned as the result of the service call, and all subsequent steps of the remote call are short-circuited. In this way the Mock data simulates a real remote service.
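The Mock filter's matching logic can be sketched as follows. The data format and names here are illustrative assumptions, not the real Mock file format; the real implementation is a Java filter in the SOFARPC chain.

```python
import json

# Sketch of the Mock filter described above: if the incoming request's
# service name and input parameters match an entry in the mock data,
# the canned output is returned and the remote call is short-circuited.

MOCK_DATA = [
    {"service": "FundService.price", "args": [519961],
     "output": {"fund": 519961, "nav": 1.0432}},
]

def mock_filter(request, next_filter):
    for entry in MOCK_DATA:
        if (entry["service"] == request["service"]
                and entry["args"] == request["args"]):
            # "Deserialize" the stored output and skip the remote call.
            return json.loads(json.dumps(entry["output"]))
    return next_filter(request)  # no match: proceed with the real call

hit = mock_filter({"service": "FundService.price", "args": [519961]},
                  lambda r: {"error": "network"})
miss = mock_filter({"service": "FundService.price", "args": [519962]},
                   lambda r: {"fund": 519962, "nav": 2.0})
```

Because the filter sits in the framework's chain, business code is unaware of whether it received a mocked or a real response, which is the "completely transparent" property described below.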
Mock filters can be switched on and off via configuration files, and are enabled only in development and test environments, remaining off in production.
Building service mocking this way means we no longer write piles of mock code, and the whole process is completely transparent to the business logic, since the mocking capability sinks into the underlying service framework.
In distributed service mocking built this way, beyond the Mock filter itself, the heart of the matter is constructing the Mock data.
The quality of your Mock data directly determines the quality of your debugging!
At first glance Mock data just matches a service, its input parameters, and its output. Reality is more complicated: you cannot produce the "current time" from static data. So besides static input/output matching, Mock data needs a dynamic matching mode, namely script matching; the service mocking framework we built supports both BSH and Groovy scripts in Mock data. Furthermore, a service cluster often contains different versions of the same service, so Mock data must also carry a version in order to truly simulate the real world.
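The two extensions above, versioned entries and dynamic outputs, can be sketched together. In this toy model a Python callable stands in for a BSH/Groovy script; the services and data are invented for illustration.

```python
import datetime

# Sketch of versioned, script-capable mock data: entries carry a service
# version, and the output may be a "script" (here a Python callable
# standing in for BSH/Groovy) evaluated at match time, so dynamic values
# like the current time can be produced.

MOCKS = [
    {"service": "TimeService.now", "version": "1.0",
     "output": lambda args: {"now": datetime.datetime.now().isoformat()}},
    {"service": "TimeService.now", "version": "2.0",
     "output": lambda args: {"epoch": 0}},
]

def resolve(service, version, args):
    for m in MOCKS:
        if m["service"] == service and m["version"] == version:
            out = m["output"]
            return out(args) if callable(out) else out  # static or script
    return None  # no mock defined: fall through to the real call

v2 = resolve("TimeService.now", "2.0", [])
```

The version field is what lets one Mock file faithfully simulate a cluster in which v1.0 and v2.0 of the same service coexist.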
Beyond these two matching modes, we developed a third for realistic situations: it partially modifies the returned parameters of a real request to simulate a third party's result, which is useful when the parameters are numerous.
Building mocks takes real work, so who does it? This is actually a question of management practice. Our rule is that whoever provides a service builds its Mock data, and the service's callers may modify or replace it. Even that is not enough: with so many services, Mock data management must be systematic and engineered. My recommendation is to manage and publish Mock data independently, as a separate project.
Currently we have built full mocking capabilities for both the front end and the server side, allowing developers to do localized functional debugging essentially without outside dependencies, and we provide homegrown tools that generate Mock files automatically to reduce the difficulty and effort of building mocks.
To further reduce the cost of producing Mock files, we also developed an "online data capture filter" based on the service framework's filter mechanism. It captures the input parameters and results of specified service requests and writes them directly into Mock data files. Mock files obtained by capture tend to have better data quality, because they reflect more realistic business scenarios. Of course, there is a compliance issue: capturing production data is sensitive, and most companies that do it at all are very careful, generally desensitizing the data first. For us, data capture is currently done only in the test environment.
Transaction consistency and availability is a perennial problem in distributed environments, and I expect you have all run into it to some degree. For distributed transactions we adopt a three-level response strategy.
First, in sharding operations we adhere to an "entity group first" strategy, sharding by a unified key wherever possible. For example, if we shard uniformly on the user's account ID, then that user's payment, transaction, asset, and other related data all land in the same physical database instance. Local transactions then suffice for the user's operations, and no distributed transaction is needed at all. Any distributed transaction, whether or not it locks resources, requires extra processing overhead to guarantee transaction state reliably.
Second, for the distributed transactions that cannot be avoided, we provide our own TCC service supporting multi-level transactions. The main reason for choosing TCC is that it is relatively simple compared with other resource managers: we need to care only about the service interface, not the resource layer.
The second diagram above is a typical TCC architecture diagram, familiar to anyone who has studied TCC; the third is a more detailed interaction diagram of the TCC framework we developed, showing more of the technical details.
Overall, in my personal understanding, implementing a TCC framework (as an independent service) is not that troublesome. The core is to solve two problems. One is caching and replaying the business data involved in the transaction: when the TRY operation runs, the transaction data must be cached (it can be saved to a database), and when the framework runs Confirm or Cancel, it uses the cached data to execute the specific service logic; therefore the TRY, Confirm, and Cancel methods must be written under constraints that let them all recognize which input parameters are transaction data. The other is propagating the transaction ID between parent and child transactions, which we solve through the distributed service framework's "non-business parameter passing": once a transaction creates a transaction ID, it is placed into the context and carried along with RPC calls to remote services; if a remote service finds that a transaction ID already exists in the context, it does not create a new one, so parent and child transactions are linked by the same transaction ID.
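The two problems above can be illustrated with a toy model. This is not the real framework (which is a Java service with database-backed caching and RPC context propagation): the dict `ctx` stands in for the RPC call context, and `tx_cache` for the persisted TRY-phase data.

```python
import uuid

# Sketch of the two TCC problems described above: (1) business data is
# cached at TRY time and replayed at Confirm/Cancel; (2) a transaction
# ID minted by the parent travels in the call context so child
# transactions join it instead of minting their own.

tx_cache = {}   # txn_id -> cached TRY data (persisted to a DB in reality)
committed = []

def try_phase(context, account, amount):
    if "txn_id" not in context:          # parent mints the ID ...
        context["txn_id"] = uuid.uuid4().hex
    txn_id = context["txn_id"]           # ... children reuse it
    tx_cache.setdefault(txn_id, []).append({"account": account,
                                            "amount": amount})
    return txn_id

def confirm(txn_id):
    for data in tx_cache.pop(txn_id, []):   # replay the cached TRY data
        committed.append((data["account"], data["amount"]))

def cancel(txn_id):
    tx_cache.pop(txn_id, None)              # discard the reservations

ctx = {}
tid = try_phase(ctx, "A", 100)        # parent transaction
child = try_phase(ctx, "B", -100)     # child joins via the shared context
confirm(tid)
```

Because `confirm` and `cancel` receive only the transaction ID and reconstruct everything from the cache, the constraint from the text holds: the three phases must agree on which inputs are transaction data.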
Finally, the guarantee of eventual consistency is "reconciliation, reconciliation, reconciliation". This is the most traditional and most reliable last line of defense, and a basic capability of any financial company, so I won't go into detail here.
After servitization, each team is responsible for part of the services, and a single piece of business often requires coordination among multiple teams. Tianhong has tried different approaches to make coordination within and between teams more efficient; in terms of overall effect, the agile model fits best.
Take my mobile platform team as an example: we now use a fixed-rhythm model of two-week iterations, and within each iteration a "release train" model in which the train departs on time. Other cooperating departments therefore know our release plan well in advance and roughly which iteration a requirement will land in, which effectively reduces communication costs between departments. During each workload assessment we usually reserve some buffer for ad-hoc requirements, which are released on demand without waiting for the train; if no urgent requirements arrive during an iteration, we fill the buffer with architecture-optimization items from the backlog.
The least controllable part of each iteration is UI design. UI design involves more subjective factors and may go through many rounds of revision; it is not as clear-cut as program code. We therefore generally do not include UI design in an iteration, but treat it as part of the requirements: before each iteration's workload assessment, complete UI materials must be provided, otherwise the workload is not assessed and the requirement does not enter the iteration.
For agile to proceed smoothly, a DevOps toolchain is needed to support it. Ant Financial Cloud has a relatively complete R&D management tool system, but we do not use it at present because our team size is different: our team is still small, so we integrate some of the most common open source products (Jenkins, Jira, a wiki, and so on) into our own DevOps toolchain. In our R&D pipeline, however, we integrate a series of Financial Cloud capabilities through scripts, including IaaS capabilities such as package upload and release. In this way, off-cloud development and on-cloud release are connected. We also collect data from every link in the DevOps toolchain and aggregate and present it in every dimension through a homegrown lean kanban, achieving tight control over development progress and quality.
Conclusion
That is the main content of this sharing. As introduced above, many of our servitization capabilities are built on the IaaS- and PaaS-layer capabilities of Ant Financial Cloud. Ant has now gradually open-sourced its cloud-native architecture engine, the SOFA middleware, including the SOFARPC introduced earlier. If you are interested, you can follow its official account.
You are welcome to help build SOFAStack: https://github.com/alipay