Author: YYGCui

Reference: http://blog.cuicc.com/blog/2015/07/22/microservices/

In the past few years, the term "microservice architecture" has sprung up to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, it does share some common characteristics: organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

"Microservices" is yet another new term on the crowded streets of software architecture. While our natural tendency is to pass such things by with a dismissive glance, we have found that the term describes an increasingly attractive style of software system. We have seen the style used on many projects over the last few years, with results good enough that it has become the default architectural style many of our colleagues use when building enterprise applications. Unfortunately, however, there is not much information outlining what the microservice style is and how to apply it.

In simple terms, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating via lightweight mechanisms (often an HTTP resource API). These services are built around business capabilities and are independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different languages and use different data storage technologies.

A comparison with the monolithic style helps to explain the microservice style: a monolithic application is built as a single unit. Enterprise applications are typically built in three parts: a client-side user interface (HTML pages and JavaScript running in a browser on the user's machine), a database (many tables in a common relational database management system), and a server-side application. The server-side application handles HTTP requests, executes domain logic, retrieves and updates data in the database, and selects and populates HTML views to be sent to the browser. The server-side application is a single logical executable [2]. Any change to the system involves building and deploying a new version of the server-side application.

Such a monolithic server is a natural way to build a system like this. All the logic for handling a request runs in a single process, allowing you to use the basic features of your language to divide the application into classes, functions, and namespaces. You can run and test the application on a developer's machine, and use a deployment pipeline to ensure that changes are properly tested and deployed into production. The monolith can be scaled horizontally by running many instances behind a load balancer.

Monolithic applications can be successful, but people are increasingly frustrated with them, especially as more applications are deployed to the cloud. Change cycles are tied together: a change to a small part of the application requires the entire monolith to be rebuilt and deployed. Over time it is often hard to maintain a good modular structure, making it harder to keep changes confined to the modules they ought to affect. Scaling requires scaling the entire application rather than only the parts that need more resources.

Figure 1: Monoliths and microservices

These frustrations have led to the microservice architectural style: building applications as suites of services. As well as being independently deployable and scalable, each service provides a firm module boundary, even allowing different services to be written in different languages and managed by different teams.

We do not claim that the microservice style is novel or innovative; its roots go back at least to the design principles of Unix. But we do think that not enough people have seriously considered a microservice architecture, and that many software developments would be better off using it.

Characteristics of microservices architecture

We cannot give a formal definition of the microservice architectural style, but we can attempt to describe what we see as the common characteristics of architectures that fit the label. As with any definition outlining common characteristics, not all microservice architectures have all of them, but we do expect most to exhibit most of them. While we have been active members of this rather loose community, our intention is to describe what we see in our own work and in similar efforts by teams we know of. In particular, we are not laying down a definition to conform to.

Componentization through services

For as long as we have been in the software business, there has been a desire to build systems by plugging together components, much as we see things built in the physical world. Over the last couple of decades we have seen considerable progress with the large compendiums of common libraries that are part of most language platforms.

When talking about components, we run into the difficult question of what makes a component. Our definition is that a component is a unit of software that is independently replaceable and upgradeable.

Microservice architectures will use libraries, but their primary way of componentizing software is to break it down into services. We define libraries as components that are linked into a program and called using in-memory function calls, while services are out-of-process components that communicate via a mechanism such as a web service request or a remote procedure call (RPC). (This is a different concept from that of a service object in many object-oriented programs [3].)

One main reason for using services as components rather than libraries is that services are independently deployable. If you have an application [4] consisting of multiple libraries in a single process, a change to any one component forces a redeployment of the entire application. But if the application is decomposed into multiple services, a change to a single service only requires that service to be redeployed. This is not an absolute: some changes will alter service interfaces and require coordination, but the aim of a good microservice architecture is to minimize these through cohesive service boundaries and evolution mechanisms in the service contracts.

Another consequence of using services as components is a more explicit component interface. Most languages do not have a good mechanism for defining an explicit published interface. Often only documentation and discipline prevent clients from breaking a component's encapsulation, leading to overly tight coupling between components. Services make it easier to avoid this by using explicit remote call mechanisms.

Using services like this does have downsides. Remote calls are more expensive than in-process calls, so remote APIs need to be coarser-grained, which is often more awkward to use. If you need to change the allocation of responsibilities between components, such movements of behavior are harder to do across process boundaries.

At a first approximation, we observe that services map one-to-one to runtime processes, but that is only a first approximation. A service may consist of multiple processes that are always developed and deployed together, such as an application process and a database that is used only by that service.

Organize around business capabilities

When looking to split a large application into parts, management often focuses on the technology layer, leading to UI teams, server-side logic teams, and database teams. When teams are separated along these lines, even simple changes can require cross-team time and budget approvals. A smart team will optimize around this and plump for the lesser of two evils: forcing the logic into whichever application they have access to. In other words, logic ends up everywhere. This is an example of Conway's Law [5] in action.

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

— Melvin Conway, 1967

Figure 2: Conway’s Law in action

The microservice approach to division is different, splitting up the system into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including the user interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for development: user experience, database, and project management.

Figure 3: Service boundaries reinforced by team boundaries

www.comparethemarket.com is one example of a company organized in this way. Cross-functional teams are responsible for building and operating each product, and each product is split into a number of individual services communicating over a message bus.

Large monolithic applications can always be modularized around business capabilities too, although that is not the common case. Certainly we would urge a large team building a monolithic application to divide itself along business lines. The main issue we have seen here is that such organizations tend to be arranged around too many contexts. If the monolith spans many of these modular boundaries, it can be difficult for individual team members to fit them into their short-term memory. Additionally, we see that the modular lines require a great deal of discipline to enforce. The more explicit separation required by service components makes it easier to keep team boundaries clear.

Sidebar: How big is a microservice?

While "microservices" has become a popular name for this architectural style, the name leads to an unfortunate focus on the size of a service and arguments about what constitutes "micro". In our conversations with microservice practitioners, we have seen services of many sizes. The largest sizes reported follow Amazon's notion of the Two-Pizza Team (i.e. the whole team can be fed by two pizzas), meaning no more than a dozen people. On the smaller end, we have seen setups where a team of half a dozen would support half a dozen services.

This leads to the question of whether there are sufficiently large differences within this size range (service-per-dozen-people versus service-per-person) that they should not be lumped under the same microservices label. For now we think it is better to group them together, but it is certainly possible that we will change our view as we explore the style further.

It’s a product, not a project

Most application development efforts that we see use a project model: the aim is to deliver some piece of software which is then considered complete. On completion the software is handed over to a maintenance organization, and the project team that built it is disbanded.

Microservice proponents tend to avoid this model, preferring instead the notion that a team should own a product over its full lifetime. A common inspiration for this is Amazon's notion of "you build it, you run it", where a development team takes full responsibility for the software in production. This brings developers into day-to-day contact with how their software behaves in production and increases contact with their users, as they have to take on at least some of the support burden.

The product mentality ties in with the linkage to business capabilities: rather than looking at the software as a set of functionality to be completed, there is an ongoing concern with how the software can help its users enhance their business capability.

There is no reason why the same approach cannot be taken with monolithic applications, but the smaller granularity of services can make it easier to create personal relationships between service developers and their users.

Smart endpoints and dumb pipes

When building communication structures between different processes, we have seen many products and approaches that put significant smarts into the communication mechanism itself. A good example is the Enterprise Service Bus (ESB), where ESB products often include sophisticated facilities for message routing, choreography, transformation, and applying business rules.

The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible: they own their own domain logic and act more like filters in the classical Unix sense, receiving a request, applying logic as appropriate, and producing a response. They are choreographed using simple REST-like protocols rather than complex protocols such as WS-Choreography or BPEL, or orchestration by a central tool.

The two protocols used most commonly are HTTP request-response with resource APIs and lightweight messaging [6]. The best expression of the first is:

Be of the web, not behind the web.


— Ian Robinson

Microservice teams use the same principles and protocols that the World Wide Web (and, to a large extent, Unix) is built on. Frequently used resources can be cached with very little effort on the part of developers or operations staff.
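As a concrete illustration of such a resource API, here is a minimal sketch in Python using Flask. The "order" service, its endpoint path, and its fields are all hypothetical and purely illustrative; the point is that the endpoint speaks plain HTTP, with meaningful verbs and status codes, so it can be cached and consumed like any other web resource.

```python
# A minimal sketch of a resource-style HTTP endpoint, assuming Flask.
# The hypothetical "orders" resource and its fields are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would be the service's own datastore.
ORDERS = {"42": {"id": "42", "status": "shipped", "total": 99.5}}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    # Plain HTTP semantics: a cacheable GET returning the full resource.
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5000)
```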

The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb in that it acts only as a message router): simple implementations such as RabbitMQ or ZeroMQ do little more than provide a reliable asynchronous fabric, while the smarts still live in the endpoints, in the services producing and consuming messages.
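The following sketch shows what "dumb pipe" messaging can look like with RabbitMQ and the pika client. The queue name and the event payload are hypothetical, and in practice the producer and consumer would be separate services; the broker only routes messages, and all domain logic stays in the endpoints.

```python
# A minimal "dumb pipe" messaging sketch with RabbitMQ via the pika client.
# Queue and event names are illustrative; producer and consumer would normally
# live in separate services. The broker only routes messages.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders")

# Producer side: publish an event describing what happened.
event = {"type": "order_placed", "order_id": "42"}
channel.basic_publish(exchange="", routing_key="orders", body=json.dumps(event))

# Consumer side: the endpoint applies its own domain logic to each message.
def handle(ch, method, properties, body):
    print("processing", json.loads(body))

channel.basic_consume(queue="orders", on_message_callback=handle, auto_ack=True)
channel.start_consuming()
```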

In a monolith, components execute in the same process and communicate with each other via method or function calls. The biggest issue in changing a monolith into microservices lies in changing the communication pattern. A naive conversion from in-memory method calls to RPC leads to chatty communication and poor performance. Instead, you need to replace fine-grained communication with a coarser-grained approach; see the sketch below.
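The sketch below contrasts the two styles. The service URL and fields are hypothetical; the chatty version makes one remote call per attribute (a direct port of in-memory getters), while the coarse-grained version fetches the whole resource representation in one round trip.

```python
# Sketch of the granularity shift when moving from in-process calls to remote
# calls. The base URL and fields are hypothetical.
import requests

BASE = "http://orders.internal"

def order_summary_chatty(order_id):
    # Fine-grained: one remote call per attribute - slow across a network.
    status = requests.get(f"{BASE}/orders/{order_id}/status").json()
    items = requests.get(f"{BASE}/orders/{order_id}/items").json()
    total = requests.get(f"{BASE}/orders/{order_id}/total").json()
    return {"status": status, "items": items, "total": total}

def order_summary_coarse(order_id):
    # Coarse-grained: one call returns the whole resource representation.
    return requests.get(f"{BASE}/orders/{order_id}").json()
```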

Decentralized governance

One consequence of centralized governance is the tendency to standardize on single technology platforms. Experience shows that this approach is constricting: not every problem is a nail, and not every solution is a hammer. We prefer using the right tool for the job; and while monolithic applications can take advantage of different languages to a certain extent, it is not common.

Splitting the monolith's components into services gives you a choice when building each of them. Want to knock up a simple reports page with Node.js? Go for it. A particularly gnarly near-real-time component in C++? Fine. Want to swap in a different flavour of database that better suits one component's read behaviour? We have the technology to rebuild it.

Of course, just because you can do something doesn’t mean you should – but dividing the system this way means you have a choice.

Teams building microservices also prefer a different approach to standards. Rather than relying on standards written down on paper, they prefer the idea of producing useful tools that other developers can use to solve problems similar to the ones they face. These tools are often harvested from implementations and shared with a wider group, sometimes, but not exclusively, using an internal open-source model. Now that Git and GitHub have become the de facto version control system of choice, open-source practices are becoming more and more common internally.

Sidebar: Microservices and SOA

When we talk about microservices, a common question is whether this is just the Service-Oriented Architecture (SOA) we saw a decade ago. There is merit to this point, because the microservice style is very similar to what some advocates of SOA have favoured. The problem, however, is that SOA means too many different things, and most of the time what we encounter labelled "SOA" is significantly different from the style described here, usually due to a focus on ESBs used to integrate monolithic applications.

In particular, we have seen so many botched implementations of service orientation, from the tendency to hide complexity away in ESBs [7], to failed multi-year initiatives that cost millions and deliver no value, to centralized governance models that actively discourage change, that it is sometimes difficult to see past these problems.

Certainly, many of the techniques in use in the microservice community have grown from the experience of developers integrating services in large organizations. The Tolerant Reader pattern is an example of this. So is the use of simple protocols, another approach derived from these experiences: a reaction away from central standards that have reached a complexity that is, frankly, breathtaking. (Any time you need an ontology to manage your ontologies, you know you are in deep trouble.)

This common manifestation of SOA has led some microservice advocates to reject the SOA label entirely, although others consider microservices to be one form of SOA [8], perhaps service orientation done right. Either way, the fact that SOA means such different things makes it valuable to have a term that more crisply defines this architectural style.

Netflix is a good example of an organization that follows this philosophy. Sharing useful and, above all, battle-tested code as libraries encourages other developers to solve similar problems in similar ways, yet leaves the door open to picking a different approach if required. Shared libraries tend to focus on common problems of data storage, inter-process communication, and infrastructure automation, which we discuss in more depth below.

For the microservice community, such overheads are particularly unattractive. That is not to say that the community does not value service contracts. Quite the opposite, since there tend to be many more of them. It is just that they look for different ways of managing those contracts. Patterns such as Tolerant Reader and Consumer-Driven Contracts are often applied to microservices; they help service contracts evolve independently. Executing consumer-driven contracts as part of the build increases confidence and provides fast feedback on whether the services are functioning. Indeed, we know of a team in Australia that drives the building of new services with consumer-driven contracts. They use simple tools to define the contract for a service; this becomes part of the automated build even before the code for the new service is written. The service is then built out only to the point where it satisfies the contract, an elegant approach to avoiding the "YAGNI" [9] dilemma when building new software. These techniques, and the tooling growing up around them, limit the need for central contract management by reducing the temporal coupling between services.
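To make the idea concrete, here is a minimal sketch of a consumer-driven contract check written as a plain test, without any particular contract tool such as Pact. The endpoint, contract fields, and types are hypothetical; the consumer team declares which fields it relies on, and the provider runs this check in its build so that breaking changes surface immediately.

```python
# A minimal consumer-driven contract sketch, assuming a hypothetical order
# service running locally. The contract dict records only what the consumer
# actually relies on; extra provider fields are irrelevant to this check.
import requests

# Contract declared by the consumer team.
ORDER_CONTRACT = {"id": str, "status": str, "total": float}

def test_order_endpoint_honours_consumer_contract():
    response = requests.get("http://localhost:5000/orders/42")
    assert response.status_code == 200
    body = response.json()
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in body, f"missing field promised to consumer: {field}"
        assert isinstance(body[field], expected_type)
```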

Sidebar: Lots of languages, lots of options

The growth of the JVM as a platform is just the latest example of mixing languages within a common platform. It has been common practice for decades to shell out to a higher-level language to take advantage of higher-level abstractions, just as it is to drop down to the metal and write performance-sensitive code in a lower-level language. However, many monoliths do not need this level of performance optimization, nor are DSLs and higher levels of abstraction that common. Monoliths are usually single-language, and the tendency is to limit the number of technologies in use [10].

Perhaps the apogee of decentralized governance is Amazon's "build it, run it" ethos. Teams are responsible for all aspects of the software they build, including operating it 24/7. This level of devolved responsibility is definitely not the norm, but we see more and more companies pushing responsibility onto their development teams. Netflix is another organization that has adopted this ethos [11]. Being woken up at 3am every night by your pager is certainly a powerful incentive to focus on quality when writing code. These ideas are about as far away from the traditional centralized governance model as it is possible to be.

Decentralized data management

Decentralization of data management presents itself in a number of different ways. At the most abstract level, it means that the conceptual model of the world will differ between systems. This is a common issue when integrating across a large enterprise: the sales view of a customer will differ from the support view. Some things in the sales view of a customer may not appear at all in the support view. Those that do may have different attributes and, worse, common attributes with subtly different semantics.

This issue is common between applications, but it can also occur within applications, particularly when an application is divided into separate components. A useful way of thinking about this is the Domain-Driven Design (DDD) notion of a Bounded Context. DDD divides a complex domain into multiple bounded contexts and maps out the relationships between them. This process is useful for both monolithic and microservice architectures, but there is a natural correlation between service and context boundaries that, as described in the section on business capabilities, helps clarify and reinforce the separation.

Sidebar: Battle-tested standards and enforced standards

It might seem a bit of a dichotomy that microservice teams tend to eschew the kind of rigid, enforced standards laid down by enterprise architecture groups, yet happily use, and even evangelize, open standards such as HTTP, ATOM, and other microformats.

The key difference is how the standards are developed and how they are enforced. Standards managed by groups such as the IETF only become standards when there are several live implementations in the wider world, and these often grow out of successful open-source projects.

These standards are a world apart from many in the corporate world, which are often developed by groups with little recent programming experience, or are overly influenced by vendors.

As well as decentralizing decisions about conceptual models, microservices also decentralize data storage decisions. While monolithic applications prefer a single logical database for persistent data, enterprises often prefer a single database shared across a range of applications, with many of these decisions driven by vendors' licensing models. Microservices prefer letting each service manage its own database, whether different instances of the same database technology or entirely different database systems: an approach known as Polyglot Persistence. You can use polyglot persistence in a monolith, but it appears more frequently with microservices.
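As a small illustration of each service owning its own store, the sketch below gives a hypothetical order service a private relational database (SQLite here, purely for a self-contained example) while a hypothetical session service keeps key-value data (stood in for by a plain dict). In practice these would be separate processes with entirely separate database instances.

```python
# Polyglot persistence sketch: each service owns its own store.
# Names and data are illustrative; the stores are not shared between services.
import sqlite3

# Order service: its private relational store, never accessed by other services.
orders_db = sqlite3.connect("orders.db")
orders_db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)")
orders_db.execute("INSERT OR REPLACE INTO orders VALUES ('42', 99.5)")
orders_db.commit()

# Session service: a key-value shape suits it better than a relational schema.
session_store = {}
session_store["user:7"] = {"cart": ["42"], "last_seen": "2015-07-22"}
```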

Decentralizing responsibility for data across microservices has implications for managing updates. The common approach to updates is to use transactions to guarantee consistency when updating multiple resources. This approach is often used within monoliths.

Using transactions like this helps with consistency, but imposes significant temporal coupling, which is problematic across multiple services. Distributed transactions are notoriously difficult to implement, so microservice architectures emphasize transactionless coordination between services, with the explicit recognition that consistency may only be eventual consistency and that problems are dealt with by compensating operations; a small sketch follows.
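The following sketch shows one shape such compensation can take. The inventory and payment services, their URLs, and the payloads are hypothetical; the point is that if the second step fails, the first step is explicitly undone rather than wrapped in a distributed transaction, and the system tolerates a brief window of inconsistency.

```python
# A minimal sketch of transactionless coordination with a compensating action,
# assuming hypothetical inventory and payment services.
import requests

def place_order(order_id, amount):
    # Step 1: reserve stock with the inventory service.
    reservation = requests.post("http://inventory.internal/reservations",
                                json={"order_id": order_id})
    reservation.raise_for_status()
    try:
        # Step 2: charge the customer via the payment service.
        payment = requests.post("http://payments.internal/charges",
                                json={"order_id": order_id, "amount": amount})
        payment.raise_for_status()
    except requests.RequestException:
        # Compensating operation: release the reservation, accepting that the
        # system was briefly inconsistent rather than using a 2PC transaction.
        requests.delete(f"http://inventory.internal/reservations/{order_id}")
        raise
```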

Choosing to manage inconsistencies in this way is a new challenge for many development teams, but it often matches business practice. Businesses frequently tolerate a degree of inconsistency in order to respond quickly to demand, while having some kind of reversal process to deal with mistakes. The trade-off is worth it as long as the cost of fixing mistakes is less than the cost of lost business under greater consistency.

Infrastructure automation

Infrastructure automation techniques have evolved enormously over the last few years; in particular, the evolution of the cloud and AWS has reduced the operational complexity of building, deploying, and operating microservices.

Many of the products or systems being built with microservices are being built by teams with extensive experience of continuous delivery and its precursor, continuous integration. Teams building software this way make extensive use of infrastructure automation. A build pipeline looks like this:

Figure 5: Basic build pipeline

Since this is not an article about continuous delivery, we will call attention to just a couple of its key features here. We want as much confidence as possible that our software is working, so we run lots of automated tests. Promoting working software "up" the pipeline means we automate deployment to each new environment.

A monolithic application can happily be built, tested, and pushed through these environments. It turns out that once you have invested in automating the path to production for a monolith, deploying more applications does not seem so frightening any more. Remember, one of the aims of continuous delivery is to make deployment boring, so whether it is one application or three, as long as it is still boring it does not matter [12].

Sidebar: Make it easy to do the right thing

We have found that a side effect of the increased automation that comes with continuous delivery and continuous deployment is the creation of useful tools to help developers and operations people. Tools for creating artifacts, managing codebases, standing up simple services, or adding standard monitoring and logging are now common. The best example on the web is probably Netflix's set of open-source tools, but there are others, such as Dropwizard, which we have used extensively.

Another area where we see teams using extensive infrastructure automation is in managing microservices in production. In line with our assertion above that as long as deployment is boring there is not much difference between monoliths and microservices, the operational landscape for each can be strikingly different.

Figure 6: Module deployments often differ

Design for failure

A consequence of using services as components is that applications need to be designed to tolerate the failure of services. Any service call can fail because the supplier is unavailable, and the client has to respond to this as gracefully as possible. This is a disadvantage compared to a monolithic design, because it introduces additional complexity to handle it. The consequence is that microservice teams constantly reflect on how service failures affect the user experience. Netflix's Simian Army induces failures of services and even data centers during the working day to test both the application's resilience and its monitoring.

This kind of automated testing in production would be enough to give most operations teams the kind of shivers that usually precede a week off work. This is not to say that monolithic architectural styles are incapable of sophisticated monitoring setups, just that it is less common in our experience.

Sidebar: Circuit breakers and production-ready code

Circuit Breaker appears in Release It! along with other patterns such as Bulkhead and Timeout. Implemented together, these patterns are crucially important when building communicating applications. This Netflix blog post does a great job of explaining their application of them.
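For readers unfamiliar with the pattern, here is a minimal circuit breaker sketch. It is not Netflix's implementation; the thresholds and the wrapped call are illustrative. After a few consecutive failures the breaker "opens" and calls fail fast until a cool-off period has passed, protecting callers from hanging on a failing downstream service.

```python
# A minimal circuit breaker sketch (illustrative, not a production library).
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures   # failures before opening the circuit
        self.reset_after = reset_after     # seconds to wait before a trial call
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open - failing fast")
            self.opened_at = None  # half-open: allow a single trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```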

Since services can fail at any time, it is important to be able to detect failures quickly and, if possible, automatically restore service. Microservice applications put a lot of emphasis on real-time monitoring of the application, checking both architectural elements (such as how many requests per second the database is getting) and business-relevant metrics (such as how many orders per minute are received). Semantic monitoring can provide an early-warning system of something going wrong, triggering development teams to follow up and investigate.

This is particularly important for a microservice architecture because the microservice preference for choreography and event collaboration leads to emergent behaviour. While many pundits praise the value of serendipitous emergence, the truth is that emergent behaviour can sometimes be a bad thing. Monitoring is vital for spotting bad emergent behaviour quickly so it can be fixed.

Monoliths can be built to be as transparent as microservices; in fact, they should be. The difference is that you absolutely need to know when services running in different processes are disconnected. With libraries within the same process, this kind of transparency is less likely to be useful.

Sidebar: Synchronous calls are considered harmful

Any time you have a number of synchronous calls between services, you will encounter the multiplier effect of downtime. Simply put, the downtime of your system becomes the product of the downtimes of the individual components. You face a choice: make your calls asynchronous, or manage the downtime. At www.guardian.co.uk they have implemented a simple rule on the new platform of one synchronous call per user request, while at Netflix the platform API has been redesigned to build asynchronicity into the API fabric. A back-of-the-envelope calculation follows.
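The figures in this tiny calculation are illustrative, but they show how quickly the multiplier effect bites when a user request fans out into a chain of synchronous calls.

```python
# Back-of-the-envelope sketch of the downtime multiplier effect. With
# synchronous calls, overall availability is roughly the product of the
# availabilities of every component in the chain. Figures are illustrative.
availability_per_service = 0.999   # "three nines" per service
services_in_call_chain = 10

overall = availability_per_service ** services_in_call_chain
print(f"overall availability: {overall:.4f}")   # ~0.990, noticeably worse
```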

Microservice teams would expect to see sophisticated monitoring and logging setups for each individual service, such as dashboards showing up/down status and a variety of operational and business-relevant metrics. Details of circuit-breaker status, current throughput, and latency are other examples we often encounter; a minimal sketch of such a status endpoint appears below.
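The sketch below shows the kind of per-service status endpoint that can feed such a dashboard, again assuming Flask; every metric name and value here is hypothetical and would in practice be pulled from the service's own instrumentation.

```python
# Sketch of a per-service status endpoint feeding a dashboard, assuming Flask.
# Metric names and values are illustrative placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    return jsonify({
        "status": "up",
        "circuit_breakers": {"payments": "closed", "inventory": "open"},
        "requests_per_second": 42.0,
        "p99_latency_ms": 180,
        "orders_per_minute": 12,   # business-relevant metric alongside the rest
    })
```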

Evolutionary design

Microservice practitioners usually come from an evolutionary design background and see service decomposition as a further tool that lets application developers control change in their application without slowing the rate of change down. Change control does not necessarily mean less change: with the right attitudes and tools, you can make frequent, fast, and well-controlled changes to software.

Whenever you try to break a software system into components, you face the decision of how to divide up the pieces: what are the principles on which we decide to slice up our application? The key property of a component is the notion of independent replacement and upgradeability [13], which means we look for points where we can imagine rewriting a component without affecting its collaborators. Indeed, many microservice groups take this further by explicitly expecting many services to be scrapped rather than evolved in the longer term.

The Guardian website is a good example of an application that was designed and built as a monolith but has been evolving in a microservice direction. The monolith is still the core of the website, but they prefer to add new features by building microservices that use the monolith's API. This approach is particularly handy for features that are inherently temporary, such as specialized pages for sporting events. Such parts of the site can be rapidly put together using rapid-development languages and removed once the event is over. We have seen a similar approach at a financial institution, where new services are added for a market opportunity and discarded after a few months or even weeks.

This emphasis on replaceability is a special case of a more general principle of modular design: drive modularity through the pattern of change [14]. You want to keep things that change at the same time in the same module. Parts of a system that change rarely should be in different services from those that are currently undergoing lots of churn. If you find yourself repeatedly changing two services together, that is a sign they should be merged.

Putting components into services opens up the opportunity for more granular release planning. With a monolith, any change requires a full build and deployment of the entire application. With microservices, however, you only need to redeploy the services you modified. This can simplify and speed up the release process. The downside is that you have to worry about changes to one service breaking its consumers. The traditional integration approach tries to deal with this problem using versioning, but the preference in the microservice world is to use versioning only as a last resort. We can avoid a lot of versioning by designing services to be as tolerant as possible to changes in their suppliers.
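One common way to achieve that tolerance is the Tolerant Reader pattern mentioned earlier. The sketch below shows the idea: the consumer picks out only the fields it actually needs, supplies defaults for anything missing, and silently ignores anything new the provider adds, so most provider changes do not force a new version. The field names are illustrative.

```python
# A minimal Tolerant Reader sketch: read only what you need, default the rest,
# and ignore unknown fields from the provider. Field names are illustrative.
def read_order(payload: dict) -> dict:
    return {
        "id": str(payload.get("id", "")),
        "status": payload.get("status", "unknown"),
        # Any extra or renamed provider fields are simply ignored here rather
        # than causing a failure in the consumer.
    }
```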

Are microservices the future?

Our main purpose in writing this article is to explain the main ideas and principles of microservices. By taking the time to do this, it became clear to us that the microservices architectural style is an important idea – one that deserves serious consideration for enterprise applications. We have recently built some systems in this style and know others who use it and approve of it.

Those we know of who have in some way pioneered this architectural style include Amazon, Netflix, The Guardian, the UK Government Digital Service, realestate.com.au, Forward, and comparethemarket.com. The conference circuit in 2013 was full of examples of companies moving to something that would be classed as microservices, including Travis CI. In addition, there are plenty of organizations that have long been doing what we would class as microservices without ever using the name. (This is often labelled SOA, although, as we have said, SOA comes in many contradictory forms. [15])

Despite these positive experiences, we are not arguing with any confidence that microservices are the future direction for software architecture. While our experience so far has been positive compared with monolithic applications, we are conscious that not enough time has passed for us to make a full judgement.

Often the true consequences of your architectural decisions only become evident several years after you make them. We have seen projects where a good team, with a strong desire for modularity, built a monolithic architecture that decayed over the years. Many people believe that such decay is less likely with microservices, because the service boundaries are explicit and hard to patch around. Yet until we see enough systems with enough age, we cannot really assess how microservice architectures mature.

There are certainly reasons why one might expect microservices to mature poorly. In any effort at componentization, success depends on how well the software fits into components. It is hard to figure out exactly where the component boundaries should lie. Evolutionary design recognizes the difficulty of getting boundaries right and thus the importance of making them easy to refactor. But when your components are services with remote communication, refactoring is much harder than with in-process libraries. Moving code across service boundaries is difficult, any interface change needs to be coordinated between participants, layers of backwards compatibility need to be added, and testing becomes more complicated.

Sidebar: Building Microservices

Our colleague Sam Newman spent much of 2014 writing a book that captures our experience building microservices. If you want to dive deeper into this topic, this should be your next step.

Another issue is that if the components do not compose cleanly, all you are doing is shifting complexity from inside a component to the connections between components. Not only does that just move complexity around, it moves it to a place that is less explicit and harder to control. It is easy to think things are better when you are looking at the inside of a small, simple component while missing the messy connections between services.

Finally, there is the factor of team skill. New techniques tend to be adopted by more skilful teams, but a technique that is more effective for a more skilful team will not necessarily work for less skilful teams. We have seen plenty of cases of less skilful teams building messy monolithic architectures, but it takes time to see what happens when this kind of mess occurs with microservices. A poor team will always create a poor system; it is hard to tell whether microservices reduce the mess in that case or make it worse.

One reasonable argument we have heard is that you should not start with a microservice architecture. Instead, begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice is not ideal, since a good in-process interface is usually not a good service interface.)

So we write this article with cautious optimism. So far, we’ve seen enough about the microservice style to think this is a path worth exploring. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information currently available to you.
