Microservices are gaining traction on blogs, in social media discussion groups, and at conference presentations, and they ranked high on Gartner’s 2014 Hype Cycle. At the same time, there are plenty of skeptics in the software community who argue that microservices are nothing new; naysayers see them as little more than a repackaging of SOA. Despite the debate, however, the microservices architecture pattern is making a significant contribution to the agile deployment and implementation of complex enterprise applications.

First let’s look at why we should use microservices.

Building a monolithic application

Let’s say you’re developing taxi-hailing software to compete with Uber and Hailo. After initial meetings and requirements analysis, you might start the new project either manually or with a generator based on Rails, Spring Boot, Play, or Maven. The new application would have a modular, hexagonal architecture, as shown below:

At the core of the application is the business logic, implemented by modules that define services, domain objects, and events. Surrounding the core are adapters that interact with the outside world: database access components, messaging components that produce and consume messages, and web components that expose an API or serve a UI.
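To make the port-and-adapter idea concrete, here is a minimal Java sketch; the trip-booking names (TripRepository, BookTripService, and the in-memory adapter) are invented for illustration rather than taken from any real taxi application:

    // Port: an interface defined by the business core, independent of any framework.
    interface TripRepository {
        void save(Trip trip);
    }

    // Domain object: pure data, no infrastructure dependencies.
    record Trip(String riderId, String pickupAddress) {}

    // Core service: business logic that talks to the outside world only through ports.
    class BookTripService {
        private final TripRepository trips;

        BookTripService(TripRepository trips) {
            this.trips = trips;
        }

        void bookTrip(String riderId, String pickupAddress) {
            trips.save(new Trip(riderId, pickupAddress));
        }
    }

    // Adapter: implements the port at the edge of the application,
    // here with a trivial in-memory store standing in for a database.
    class InMemoryTripRepository implements TripRepository {
        private final java.util.List<Trip> store = new java.util.ArrayList<>();

        @Override
        public void save(Trip trip) {
            store.add(trip);
        }
    }

The point of the structure is that the core never imports the adapters; swapping the in-memory store for a real database touches only the adapter.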

Despite this logically modular design, the application is eventually packaged and deployed as a single monolith. The exact format depends on the language and framework: many Java applications are packaged as WAR files and deployed on Tomcat or Jetty, other Java applications are packaged as self-contained JARs, and Rails and Node.js applications are packaged as a directory hierarchy.
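As a rough illustration of the self-contained JAR case, a Spring Boot application typically boots everything from a single entry point like the one below. The class name is hypothetical, and the packaging itself is done by the build tool (for example, the Spring Boot Maven plugin), not by this code:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    // The entire application -- web layer, business logic, and adapters --
    // starts from this one entry point and ships as a single executable JAR.
    @SpringBootApplication
    public class TaxiDispatchApplication {
        public static void main(String[] args) {
            SpringApplication.run(TaxiDispatchApplication.class, args);
        }
    }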

This style of application development is extremely common because IDEs and other tools are geared toward building a single application. Such an application is easy to debug and easy to test: just launch it and drive the UI with Selenium for end-to-end tests. A monolith is also easy to deploy, since you simply copy the packaged application to a server, and easy to scale, by running multiple copies behind a load balancer. In the early stages of a project this approach works well.
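For instance, an end-to-end test against the running monolith might look roughly like this Selenium sketch; the URL and element IDs are made up and would depend on the real UI:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class BookTripEndToEndTest {
        public static void main(String[] args) {
            // Drive the whole application through its UI, exactly as a user would.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/trips/new");   // hypothetical URL
                driver.findElement(By.id("pickupAddress")).sendKeys("1 Main St");
                driver.findElement(By.id("bookTrip")).click();
                // A real test would assert on the confirmation page here.
            } finally {
                driver.quit();
            }
        }
    }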

The drawbacks of monolithic applications

Unfortunately, this simple approach has major limitations. A simple application tends to grow over time. In each sprint the development team picks up new “stories” and writes a lot of new code, and after a few years the small, simple app has grown into a giant monster. Here’s an example: I was recently talking to a developer who was writing a tool to analyze the dependencies between the JAR files in their multi-million-line application. I’m pretty sure that codebase is a monster many developers have been building for years.

Once your app becomes a big, complex monster, it’s a pain in the neck for the development team. Agile development and deployment become a struggle because the application is so complex that no single developer can understand it all. As a result, fixing bugs and implementing new features correctly becomes difficult and time-consuming, and team morale suffers. Code that is hard to understand cannot be changed correctly, and you end up in a downward spiral toward a huge, incomprehensible quagmire.

Monolithic applications also slow down development. The larger the application, the longer the startup time. For example, one survey of developers reported applications that sometimes took longer than 12 minutes to launch, and I’ve heard of apps that take 40 minutes to start up. If developers have to restart the app frequently, most of their time is spent waiting and productivity suffers.

In addition, a large, complex monolithic application is an obstacle to continuous deployment. Today, the norm for SaaS applications is to push changes into production many times a day, which is very difficult with a monolith: the impact of any change is rarely well understood, so a lot of manual testing is required, and continuous deployment becomes nearly impossible.

Monolithic applications can also be difficult to scale when different modules have conflicting resource requirements. For example, a module implementing CPU-intensive logic is best deployed on AWS EC2 Compute Optimized instances, while an in-memory database module is better suited to EC2 Memory-Optimized instances. Because these modules are deployed together, however, the hardware choice has to be a compromise.

Another problem with monolithic applications is reliability. Because all modules run in the same process, a bug in any module, such as a memory leak, can bring down the entire process. Moreover, since every instance of the application is identical, that one bug affects the availability of the whole application.
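Here is a deliberately artificial Java sketch of that failure mode: one module’s unbounded “cache” slowly exhausts the heap, and because every module shares the same JVM process, the resulting OutOfMemoryError takes the entire application down:

    import java.util.ArrayList;
    import java.util.List;

    public class LeakyModuleDemo {
        // Hypothetical cache that is never evicted -- a classic memory leak.
        private static final List<byte[]> CACHE = new ArrayList<>();

        public static void main(String[] args) {
            while (true) {
                // Each "request" handled by this one module keeps 1 MB forever.
                CACHE.add(new byte[1024 * 1024]);
                // Eventually the JVM throws OutOfMemoryError, killing this process
                // and every other module deployed inside it.
            }
        }
    }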

Finally, monolithic applications make it difficult to adopt new frameworks and languages. For example, imagine you have two million lines of code written with the XYZ framework. Rewriting it with the ABC framework would be enormously expensive in both time and money, even if ABC is clearly better. The gap is effectively unbridgeable: you are stuck with the choice you made at the start.

To recap: you start with a successful, business-critical application that then grows into a giant, incomprehensible monster. Hiring talented developers becomes difficult because the technology is outdated and inefficient. The application is hard to scale, its reliability is low, and ultimately agile development and deployment become impossible.