Microservice adoption is a complex undertaking that involves IT architecture, application architecture, and organizational structure
















Cloud Architect Advanced Walkthroughs




































Phase 1: A single architecture team, multiple development teams, and a unified O&M team


Phase 1 organizational structure





















Phase 1 operations and maintenance model








Phase 1 application architecture








  • Files: NFS, FTP, Ceph, S3
  • Cache: Redis Cluster, primary/replica, Sentinel, Memcached
  • Distributed frameworks: Spring Cloud, Dubbo, RESTful or RPC
  • Database sharding: Sharding-JDBC, Mycat
  • Message queues: RabbitMQ, Kafka
  • Service registry: ZooKeeper, Eureka, Consul
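The sharding entry above boils down to one idea: route each row to a physical table by hashing its sharding key. A minimal Java sketch of that idea follows; the class name, table prefix, and shard count are illustrative, not the actual Sharding-JDBC or Mycat API (those tools configure routing declaratively).

```java
// Core idea behind table sharding (cf. Sharding-JDBC, Mycat): hash the
// sharding key to pick a physical table. All names here are illustrative.
class ShardRouter {
    private final String tablePrefix;
    private final int shardCount;

    ShardRouter(String tablePrefix, int shardCount) {
        this.tablePrefix = tablePrefix;
        this.shardCount = shardCount;
    }

    // Route a row to a physical table by its sharding key, e.g. a user id.
    String tableFor(long shardKey) {
        int shard = (int) Long.remainderUnsigned(shardKey, shardCount);
        return tablePrefix + "_" + shard;
    }
}
```

Note the trade-off this implies: queries that do not carry the sharding key must be fanned out to every shard, which is why the sharding key should appear in most query paths.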














Any questions about phase one?
































When does phase one become a problem?






































Phase 2: Service-oriented organization, SOA-based architecture, cloud-based infrastructure


Phase 2 organizational structure
































Phase 2 application architecture


































































































Service decomposition and service discovery in microservices adoption












  • API package: All interface definitions live here. For internal calls, the interface is also implemented locally, so that after a split, a local interface call can be swapped for a remote one.
  • External-service access package: Wrappers for calls to other processes live here. Mocking this package allows functional tests to run without third-party dependencies; after a service split, calls to other services also go through here.
  • Database DTOs: The atomic data structures used for database access are defined here.
  • Database access package: All logic for accessing the database lives in this package.
  • Services and business logic: The main business logic is implemented here; this is where splits are carved out.
  • External services: The logic for exposing services externally lives here; for interface providers, the implementation sits here.
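The API-package idea above can be sketched in Java: business code depends only on the interface, so a local implementation can later be swapped for a remote stub without touching the calling code. All class and method names below are illustrative, not taken from the source.

```java
// Sketch of the package layout described above; names are illustrative.
// The interface lives in the API package and is all the caller sees.
interface OrderService {
    String placeOrder(String itemId);
}

// Before the split: the implementation runs in the same process.
class LocalOrderService implements OrderService {
    public String placeOrder(String itemId) {
        return "order-for-" + itemId;
    }
}

// After the split: same contract, but the call would go over RPC/REST.
class RemoteOrderService implements OrderService {
    public String placeOrder(String itemId) {
        // A real stub would serialize the request and invoke the remote
        // service; the caller's code is identical either way.
        return "order-for-" + itemId;
    }
}

// Business code depends on the interface only, so splitting the service
// out of the process does not touch this class.
class Checkout {
    private final OrderService orders;
    Checkout(OrderService orders) { this.orders = orders; }
    String buy(String itemId) { return orders.placeOrder(itemId); }
}
```

The same seam is what makes the mocking mentioned above cheap: a test can hand `Checkout` a fake `OrderService` with no network involved.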














  • Internal configuration items (unchanged after startup; a restart is required to change them)
  • Centralized configuration items (managed by a configuration center and delivered dynamically)
  • External configuration items (external dependencies, specific to each environment)
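A minimal Java sketch of the three categories above; the key names and the push callback are illustrative, not a specific framework's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the three configuration categories; all names illustrative.
class AppConfig {
    // 1. Internal item: fixed at startup; changing it needs a restart.
    private final int threadPoolSize;

    // 3. External item: an environment-specific dependency, e.g. a DB URL.
    private final String databaseUrl;

    // 2. Centralized items: delivered dynamically by a configuration center.
    private final Map<String, String> dynamic = new ConcurrentHashMap<>();

    AppConfig(int threadPoolSize, String databaseUrl) {
        this.threadPoolSize = threadPoolSize;
        this.databaseUrl = databaseUrl;
    }

    // Invoked by the config-center client when a value changes at runtime.
    void onPush(String key, String value) { dynamic.put(key, value); }

    String dynamicValue(String key, String fallback) {
        return dynamic.getOrDefault(key, fallback);
    }

    int threadPoolSize() { return threadPoolSize; }
    String databaseUrl() { return databaseUrl; }
}
```

The separation matters operationally: only the second category can change without a redeploy, and only the third differs between test and production environments.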
































Phase 2 operations and maintenance model





























































































































































Any questions about phase two?
































When does phase two become a problem?
























































Phase 3: DevOps-oriented organization, microservices-based architecture, containerized infrastructure


Phase 3 application architecture
























  • How to keep functionality unchanged without introducing bugs: continuous integration; see "The Cornerstone of Microservices: Continuous Integration".
  • Static resources should be split out and cached at the access layer or on a CDN, so that most traffic is intercepted at edge nodes close to users or served from the access-layer cache. For details, see "Access Layer Design of Microservices and Static Resource Isolation".
  • Application state should be separated from business logic so that services are stateless and can scale horizontally on containers. See "Statelessness and Containerization in Microservices".
  • Core and non-core business should be separated, so that core business can scale out and non-core business can be degraded; see "Service Decomposition and Service Discovery in Microservices".
  • The database must be able to scale horizontally so that it does not become a bottleneck under large data volumes; see "Database Design for Microservices and Read-Write Separation".
  • Cache at every layer so that only a small fraction of traffic reaches the database itself; see "Cache Design for Microservices".
  • Use message queues to shorten the core path: services that used to be invoked synchronously in sequence instead listen to a message queue asynchronously.
  • Circuit breaking, rate limiting, and degradation policies should be set between services. A blocked call should fail fast rather than hang, and sub-healthy services should be circuit-broken promptly to avoid chain reactions. Non-core business is degraded and no longer invoked, freeing resources for core business. Rate-limit calls to stay within the capacity established by load testing; it is better to serve requests slowly than to admit everything at once and bring down the whole system.
  • With so many services, configuring them one by one is impractical; a unified configuration center is needed to deliver configuration.
  • With so many services, viewing logs one by one is impractical; a unified log center is needed to aggregate logs.
  • With so many separated services, performance bottlenecks are hard to locate; full-link APM (application performance monitoring) is needed to detect and fix bottlenecks in time.
  • With so many services, no one knows how much load the system can bear without testing it, so a full-link load testing system is needed.
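The fail-fast behavior described in the list above can be sketched as a minimal circuit breaker: after a run of consecutive failures the breaker opens and subsequent calls go straight to the degraded fallback instead of blocking on a sub-healthy service. The threshold and the absence of a half-open recovery state are deliberate simplifications; production systems use libraries such as Hystrix or Sentinel.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch; threshold and policy are illustrative.
class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (open) {
            return fallback.get();      // fail fast: skip the sub-healthy service
        }
        try {
            T result = remote.get();
            consecutiveFailures = 0;    // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;            // trip the breaker; later calls degrade
            }
            return fallback.get();
        }
    }

    boolean isOpen() { return open; }
}
```

The key property is that an open breaker never blocks the caller, which is what prevents one slow dependency from dragging down the whole call chain.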









Phase 3 operations and maintenance model
























































































Phase 3 organizational structure














































How to implement microservices, containerization, and DevOps












Why is Kubernetes a natural fit for microservices


























Scenario 1: How to keep the functional regression test suite unchanged when splitting the architecture into SOA


















Scenario 2: How to centrally manage and provide middle-platform services once the architecture is SOA-based


















Scenario 3: How to ensure the security of calls to key services once the architecture is SOA-based











Scenario 4: How to expose API services and build an open platform once the architecture is SOA-based











Scenario 5: Gray (canary) release and A/B testing in Internet scenarios








Scenario 6: Pre-production (staging) testing in Internet scenarios


Scenario 7: Performance load testing in Internet scenarios








Scenario 8: Circuit breaking, rate limiting, and degradation in microservice scenarios









Scenario 9: Fine-grained traffic management in microservice scenarios









Original link: To friends in traditional enterprises: don't adopt microservices unless the pain is real; there are pitfalls (Source: Popular Cloud Computing by Liu Chao)