More than three years have passed since microservices architecture took off, and the early Spring Cloud Netflix architecture has matured and been complemented by Spring Cloud with new solutions to common cloud problems, such as Sleuth, Zipkin, Contract, and others.

But architecture is now moving in a different direction. In this article, we will review the path microservices architecture has taken to date and the tools and technologies that will accompany us in the future.

The birth of microservices

To return to the origins, we have to go back to early 2015, when the concept of “microservices” began to gain strength in Spain. The first development stacks for microservices were released, and the one that achieved the most popularity was the Netflix stack, released in March 2015.

Today it remains the most talked about and the most popular of the solutions integrated into Spring Cloud:

Spring Cloud also integrates two other solutions (Consul and Zookeeper) that cover components equivalent to those of the Netflix stack, which includes Zuul, Ribbon, and Hystrix. Initially, the architecture consisted of the following parts:

  • Config Server: Externalizes configuration, allowing us to centralize all the configuration of the ecosystem. It is not a Netflix component (Netflix uses Archaius internally); it was developed by Spring.
  • Eureka: Server for registering microservices and metadata about them.
  • Ribbon: Library for client-side load balancing. It communicates with Eureka to retrieve the registered, available instances of each microservice.
  • Hystrix: Library for managing cascading failures using the circuit breaker pattern.
  • Zuul: Server that acts as an API gateway/edge service, the entry point to the microservices ecosystem.
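As a rough illustration of how these pieces fit together, here is a hypothetical bootstrap.yml for a Spring Cloud Netflix-era microservice; the service name, hosts, and ports are placeholders, not values from the article:

```yaml
# Hypothetical bootstrap.yml for a Spring Cloud Netflix-style microservice.
spring:
  application:
    name: orders-service                # name the service registers under in Eureka
  cloud:
    config:
      uri: http://config-server:8888    # centralized Config Server (placeholder host)

eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka:8761/eureka/   # Eureka registry location (placeholder)
  instance:
    preferIpAddress: true               # register by IP rather than hostname
```

With this in place, the service fetches its configuration from the Config Server at startup and registers itself in Eureka, where Ribbon-enabled clients can discover its instances.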

For those of us used to monolithic architectures, this set of components may seem large, but it addresses the main requirements of a distributed architecture: registration, centralized configuration, load balancing, failure resilience…

As for deployment, and closely tied to the use of microservices, we deploy using containers, in this case with the solution we all know and the most popular on the market: Docker.

Another decision is the container orchestration solution. We were among the early adopters of OpenShift 3, a Red Hat solution based on Kubernetes, launched in June 2015.

But the reality is that there were already various container orchestration solutions, although at the time none of them was very mature or had much market share.

The establishment of microservices

Microservices architecture rapidly gained importance after its emergence in 2015 and has kept growing since, establishing itself as the primary architectural solution for many companies, driven by the success of the cloud.

As with any successful architecture or tool, a range of applications and libraries appeared to cover areas of functionality not initially considered. One example is request traceability, a common requirement in distributed systems that at first had no solution beyond manual implementation.

These and other requirements were addressed by new libraries that completed our ecosystem, among them:

  • Sleuth: A library that allows us to trace distributed requests across different applications/microservices based on a combination of headers.
  • Zipkin: A server that stores timing data about distributed requests, for correlation and latency analysis.
  • Contract: A library that lets us implement a consumer-driven contract model, increasing confidence that our changes will not break any API contract.

In addition, the ecosystem evolved not only in these libraries but also in the definition of standard stacks for other functions, such as the critical areas of logging and monitoring.
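To give an idea of how lightweight adopting tracing was, here is an illustrative application.yml for a service using Sleuth and Zipkin; it assumes the spring-cloud-starter-sleuth and spring-cloud-sleuth-zipkin starters are on the classpath, the property names follow the early Sleuth releases, and the Zipkin host is a placeholder:

```yaml
# Illustrative tracing configuration for an early Spring Cloud Sleuth setup.
spring:
  zipkin:
    baseUrl: http://zipkin:9411   # Zipkin server that collects the spans (placeholder)
  sleuth:
    sampler:
      percentage: 1.0             # report 100% of requests (sensible only outside production)
```

With the starters on the classpath, Sleuth adds the correlation headers to outgoing requests automatically; no application code changes are needed.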

At this point, logging and monitoring tools such as the ElasticSearch-Logstash-Kibana (ELK) or ElasticSearch-Fluentd-Kibana (EFK) stacks became an integral part of these new architectures, increasing their popularity.

With all of these new tools/libraries, we arrived at a much more complete ecosystem, although also a more complex one, that effectively covered all of our requirements.

On the other hand, the need for non-blocking communication arose in microservices architecture designs. It was first covered by solutions outside the pure Spring stack, such as Vert.x, and later supported natively with the arrival of reactive programming in Spring 5.

The rise of Kubernetes

As we commented earlier, there really weren’t many container orchestration solutions on the market when these new architectures emerged.

Kubernetes, OpenShift, and Docker Swarm all released version 1.0.0 in 2015, and Mesos followed in 2016. There was no dominant solution in the market.

As time went on, a clear winner emerged: Kubernetes, or the Kubernetes-based OpenShift solution.

Because of this, we can already find managed Kubernetes solutions on different platforms: Google Kubernetes Engine, Amazon EKS, etc.

Moreover, some of the functions discussed at the beginning of this post, such as the load balancing, registration, and centralized configuration performed by Ribbon, Eureka, and Config Server, are now provided by the PaaS itself. So why keep using Spring Cloud’s Netflix features?

This is a question several clients often ask us. The answer is simple: when these architectures were first designed, no orchestration solution had yet established itself in the market.

Including these components (Eureka, Ribbon…) in the software architecture makes it more portable: because these services are embedded in the artifacts themselves, applications can be moved between different cloud solutions without losing these horizontal services.

Similarly, Spring Cloud Netflix offers a more powerful solution than what cloud platforms typically provide. Here are some of the additional features it offers:

  • Ribbon, in addition to letting us implement our own balancing algorithms, provides different built-in algorithms, offering more flexibility than the typical round-robin or random balancing of a PaaS.
  • Eureka allows us to include and view additional information about instances in the registry: URLs, metadata… In a PaaS solution, we usually cannot add custom information to the registry.
  • Config Server provides a hierarchical property system and lets us point to branches and/or tags of Git repositories.

We had an architecture with all of these possibilities, but we did not take full advantage of them, which is what happens with most clients: they do not need this advanced architectural functionality beyond what the PaaS can provide.
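As a sketch of that hierarchical, Git-backed property system, a Config Server configuration might look like this; the repository URL, label, and layout are placeholders:

```yaml
# Illustrative application.yml for a Spring Cloud Config Server
# backed by a Git repository (all values are examples).
server:
  port: 8888

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # placeholder repo
          default-label: master                         # branch or tag to serve by default
          search-paths: '{application}'                 # one subfolder per service
```

Clients then request properties by application, profile, and label, so the same server can serve different branches/tags of the repository to different environments.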

Today, Kubernetes-based cloud solutions are the dominant PaaS in the market, and if we think about the PaaS concept, its purpose is to abstract away lower-level functionality/resources so that applications can focus on business logic. All of these functions clearly fall outside the scope of the business.

This allows us to strip these concerns out of our application, leaving it with just the business logic, and makes for a clearer separation between the layers of the system.

These are the features of Spring Cloud Netflix that Kubernetes can absorb:

1. Registration, load balancing and health checks (Eureka and Ribbon)

When a new pod starts, it is registered within the Kubernetes system, but unlike the Eureka + Ribbon combination, load balancing is not done on the client side, so in Kubernetes the application does not need to know all the existing instances of a service (with Eureka, this is handled by the Eureka client).

What the application in the pod knows is the Kubernetes Service layer, an abstraction that groups the service instances. The client invokes the Service, which maintains a constant address and balances requests across the actual target instances.

Kubernetes is also responsible for periodic health checks to verify the health of each instance, whereas with Eureka it is the instance itself that notifies the server of its availability.
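A minimal sketch of how Kubernetes covers registration, balancing, and health checks; all names, images, ports, and the probe path are illustrative:

```yaml
# A Service that load-balances across the pods selected by label,
# plus a readiness probe so only healthy pods receive traffic.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders            # pods carrying this label back the service
  ports:
    - port: 80
      targetPort: 8080     # stable virtual address in front of the pods
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:1.0   # placeholder image
          readinessProbe:             # Kubernetes-side health check
            httpGet:
              path: /health           # placeholder endpoint
              port: 8080
            periodSeconds: 10
```

The application never sees the list of instances; it only calls the Service’s stable address, which is exactly the responsibility Eureka + Ribbon placed inside the client.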

2. Centralized configuration (Config Server)

Recent versions of Kubernetes provide ConfigMaps, which allow us to store configuration separately from the application, either as environment variables or as property files.
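An illustrative ConfigMap; the keys and values here are invented for the example:

```yaml
# Configuration stored outside the application: plain key/value pairs
# (consumable as environment variables) and a whole properties file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
data:
  DATABASE_URL: jdbc:postgresql://db:5432/orders   # placeholder value
  application.properties: |
    feature.new-checkout=true
    http.timeout-ms=2000
```

It can then be injected into a pod as environment variables (via envFrom) or mounted as a volume of property files, replacing much of what Config Server did.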

However, Kubernetes still does not cover some of the features that Spring Cloud Netflix does, so it cannot decouple us completely. These features include cascading failure management, API gateway, request traceability… And this leads us to our next big step in microservices architecture.

The birth of a new favorite

If we think about the parts of microservices architecture that give us the most problems, most people agree that they are related to the network. Specifically, everything to do with latency, handling remote call failures, balancing, request traceability, calls to instances that no longer exist…

There are different levels of responsibility in these cases. For example, the PaaS (or the registration service) is responsible for providing us with a list of healthy instances, while Hystrix is responsible for wrapping remote calls to control timeouts and manage failures…

It’s in this gray area between the application layer and the PaaS, where the most problematic moments occur, that we find the next revolution in microservices architecture.

Istio

Istio is a service mesh solution based on Google’s experience and good practices in running large-scale services. Developed together with IBM and Lyft, it was released as open source in May 2017, with plans for a new release every month.

For those unfamiliar with the service mesh concept, this definition seems the best:

“A service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable. If you’re building a cloud native application, you need a service mesh.” (Buoyant’s blog: “What’s a service mesh? And why do I need one?”)

Service mesh is a concept that has proliferated over the last year. Evidence of this is that high-traffic companies like PayPal or Ticketmaster are already using one, and that Envoy and Linkerd are part of the Cloud Native Computing Foundation.

Before discussing why these big changes are coming to the microservices world, let’s take a look at how it works.

Istio is a tool that takes over functionality from the layer beneath it (the PaaS) and the layer directly above it (the applications) in order to manage everything related to network communication.

Rather than introducing new functionality, Istio moves existing functionality into the intermediate layer where it sits.

What it does is place a proxy next to each of our applications that intercepts all of their network traffic and manages it to provide reliability, resilience, and security.

Placing this proxy next to our application is known as the sidecar proxy pattern. In Kubernetes, an additional container with this proxy is deployed in the same pod as our application’s container. By default, Istio uses Envoy as the sidecar proxy that accompanies all of our microservices; Linkerd can also be used for the data plane.
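For reference, in Kubernetes the automatic injection of the Envoy sidecar is typically switched on per namespace with a label; the namespace name here is an example:

```yaml
# Labelling a namespace so Istio automatically injects the Envoy
# sidecar container into every pod created in it.
apiVersion: v1
kind: Namespace
metadata:
  name: orders          # placeholder namespace
  labels:
    istio-injection: enabled   # triggers automatic sidecar injection
```

From then on, application deployments need no changes: each new pod comes up with the extra istio-proxy container alongside the application container.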

The fact that Istio runs in a container separate from our application makes for a greater separation between the service mesh itself and the application.

In addition, it completely frees applications from managing the architectural complexity that comes with embedding libraries such as Ribbon and Hystrix.

Istio provides us with a number of capabilities when dealing with all things related to network communication, including:

  • Request routing: We can route requests based on different criteria, such as source application, destination, application version, request headers… We can also divert a percentage of traffic or mirror it, which enables canary deployments and A/B testing.
  • Health checks and load balancing: It monitors which instances are healthy and balances across them using the different available algorithms.
  • Timeout and circuit breaker management: We can configure timeouts and circuit breaking for the different services, retry policies…
  • Fault injection: To test the resilience of our applications, we can inject two types of faults: delays and aborted requests.
  • Quota management: Allows us to set call limits.
  • Security: Secure communication between services, role-based access control based on mutual authentication of both ends of a communication, whitelists and blacklists…
  • Monitoring and logging: Logging, capture of service mesh metrics, distributed tracing…

It can be deployed on different infrastructures: Kubernetes, environments based on Eureka or Consul registries, and soon Cloud Foundry and Mesos.
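As a sketch of the request-routing capability, an Istio VirtualService splitting traffic for a canary release might look like this; the host and subset names are placeholders, and a matching DestinationRule defining the subsets is assumed:

```yaml
# Weighted traffic split between two versions of a service,
# the basis of a canary deployment.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders               # placeholder service host
  http:
    - route:
        - destination:
            host: orders
            subset: v1     # stable version (defined in a DestinationRule)
          weight: 90       # 90% of traffic stays on v1
        - destination:
            host: orders
            subset: v2     # canary version
          weight: 10       # 10% of traffic tries v2
```

Adjusting the weights gradually shifts traffic to the new version without touching the application or the PaaS configuration.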

If we take a closer look at its features, we’ll see that it takes over many of the responsibilities of the Netflix suite: Hystrix’s timeout and circuit breaker management, Ribbon’s request load balancing…

In addition, Istio integrates with some of the solutions already used by Spring Cloud, as in the case of Zipkin, and can work in environments that use Eureka as a registry.

It also integrates with other existing solutions on the market for metric storage, logging, quota management… Examples include Prometheus, Fluentd, and Redis.

Conclusion

Finally, thank you for reading. Everything above reflects my personal opinion; I hope you found it helpful.