This is the sixth day of my participation in Gwen Challenge. This article is participating in “Java Theme Month – Java Development in Action”, see the activity link for details.

1. Introduction to microservices

1.1 What are microservices

Microservices are the result of splitting a large monolithic application or service into several independent, fine-grained services or components.

1.2 Why use microservices

Splitting into microservices is a strategy that allows individual components to be scaled without modifying the entire application stack to meet service level agreements (SLAs). The benefit of microservices is that they are faster and easier to update. When developers change a traditional monolithic application, they must do detailed, complete QA testing to ensure the change does not affect other features. With microservices, developers can update individual components of an application without touching the rest. Testing is still required, but because changes are easier to identify and isolate, development speeds up, which supports DevOps and continuous delivery.

1.3 Architectural composition of microservices

With the rapid development of recent years, microservices have become more and more popular. Among the frameworks, Spring Cloud is constantly updated and used by most companies; a representative example is Alibaba. Around November 2018, Spencer Gibb, co-founder of Spring Cloud, announced on the Spring official blog that Alibaba had open-sourced Spring Cloud Alibaba and released the first preview version. This was later announced on Spring Cloud's official Twitter account. Spring Cloud also has many versions:

Spring Cloud               Spring Cloud Alibaba    Spring Boot
Spring Cloud Hoxton        2.2.0.RELEASE           2.2.x.RELEASE
Spring Cloud Greenwich     2.1.1.RELEASE           2.1.x.RELEASE
Spring Cloud Finchley      2.0.1.RELEASE           2.0.x.RELEASE
Spring Cloud Edgware       1.5.1.RELEASE           1.5.x.RELEASE

Take Spring Boot 1.x as an example: the stack includes Eureka, Zuul, Config, Ribbon, and Hystrix. With Spring Boot 2.x, Spring Cloud Gateway replaces Zuul as the gateway. In the Alibaba version, the components are even richer: Alibaba's Nacos serves as the registry and configuration center, and Sentinel handles rate limiting and circuit breaking.

2. Gateway to microservices

2.1 Common gateways

Currently, Zuul is the most commonly used gateway with Spring Boot 1.x. Zuul is an open-source gateway service from Netflix; Spring Boot 2.x uses Spring's own Spring Cloud Gateway.

2.2 Functions of the API gateway

The main functions of an API gateway are reverse routing, security authentication, load balancing, rate limiting and circuit breaking, and log monitoring. In Zuul, we can configure routes by injecting beans or directly in configuration files:

zuul.routes.api-d.sensitiveHeaders="*"
zuul.routes.api-d.path=/business/api/**
zuul.routes.api-d.serviceId=business-web
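The prefix-routing idea behind zuul.routes can be sketched in plain Java. The RouteTable class below is purely illustrative (it is not a Zuul API) and reduces Zuul's Ant-style /** patterns to a simple prefix match:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch of prefix-based route matching, standing in for
// Zuul's Ant-style path patterns. Not a real Zuul class.
public class RouteTable {
    // pattern prefix -> serviceId, e.g. "/business/api/" -> "business-web"
    private final Map<String, String> routes = new LinkedHashMap<>();

    public void addRoute(String pattern, String serviceId) {
        // strip a trailing "**" so we can do a plain prefix match
        String prefix = pattern.endsWith("/**")
                ? pattern.substring(0, pattern.length() - 2)
                : pattern;
        routes.put(prefix, serviceId);
    }

    // returns the serviceId of the first matching route, or null
    public String match(String requestPath) {
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (requestPath.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        RouteTable table = new RouteTable();
        table.addRoute("/business/api/**", "business-web");
        System.out.println(table.match("/business/api/orders/1")); // business-web
        System.out.println(table.match("/other/path"));            // null
    }
}
```

A request whose path matches the pattern is forwarded to the instance registered under the matched serviceId; the real gateway then applies sensitive-header stripping and load balancing on top.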

The gateway can also handle security concerns such as unified authentication. Here is how this works in Zuul.

How Zuul works

  • Filter mechanism

At Zuul's core is a set of filters, which act much like Servlet filters or AOP interceptors. Zuul routes requests into the user's processing logic, and along the way these filters handle cross-cutting concerns such as authentication and load shedding. There are several standard filter types:

(1) PRE: This filter is invoked before the request is routed. We can use this filter to authenticate, select requested microservices in the cluster, log debugging information, and so on.

(2) ROUTING: This filter builds the request to a microservice and calls it using Apache HttpClient or Netflix Ribbon.

(3) POST: This filter is executed after routing to the microservice. Such filters can be used to add standard HTTP headers to responses, collect statistics and metrics, send responses from microservices to clients, and so on.

(4) ERROR: This filter is executed when errors occur in other stages.
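A toy sketch of how the four types interact (illustrative only, not the real Zuul dispatcher): pre filters run before routing, post filters after, and error filters only when a stage throws:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of Zuul's filter lifecycle: pre -> routing -> post,
// with error filters invoked when any stage throws. Illustrative only.
public class FilterChainSketch {
    interface Filter { String type(); void run(List<String> log); }

    static Filter filter(String type, String name) {
        return new Filter() {
            public String type() { return type; }
            public void run(List<String> log) { log.add(type + ":" + name); }
        };
    }

    static List<String> dispatch(List<Filter> filters) {
        List<String> log = new ArrayList<>();
        try {
            runStage(filters, "pre", log);
            runStage(filters, "routing", log);
            runStage(filters, "post", log);
        } catch (RuntimeException e) {
            runStage(filters, "error", log);
        }
        return log;
    }

    static void runStage(List<Filter> filters, String stage, List<String> log) {
        for (Filter f : filters) {
            if (f.type().equals(stage)) { f.run(log); }
        }
    }

    public static void main(String[] args) {
        List<Filter> filters = new ArrayList<>();
        filters.add(filter("post", "addHeaders"));
        filters.add(filter("pre", "auth"));
        filters.add(filter("routing", "forward"));
        // prints [pre:auth, routing:forward, post:addHeaders]
        System.out.println(dispatch(filters));
    }
}
```

Note that registration order does not matter: each filter runs in the stage its type names, which is the behavior the real AccessFilter below relies on by declaring itself a "pre" filter.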

  • Filter life cycle

filterOrder: defines the execution order of filters with an int value; the smaller the value, the higher the priority.

shouldFilter: returns a boolean that determines whether the filter should execute, so this method can be used to switch the filter on or off. In the example below, we return true directly, so the filter is always in effect.

run: contains the filter's logic. Note that we call ctx.setSendZuulResponse(false) to make Zuul drop the request instead of routing it, and then ctx.setResponseStatusCode(401) to set the returned error code.

Code examples:

@Component
public class AccessFilter extends ZuulFilter {
	private static Logger logger = LoggerFactory.getLogger(AccessFilter.class);

	@Autowired
	RedisCacheConfiguration redisCacheConfiguration;

	@Autowired
	EnvironmentConfig env;

	private static final String[] PASS_PATH_ARRAY = { "/login", "openProject" };

	@Override
	public String filterType() {
		return "pre";
	}

	@Override
	public int filterOrder() {
		return 0;
	}

	@Override
	public boolean shouldFilter() {
		return true;
	}

	@Override
	public Object run() {
		RequestContext ctx = RequestContext.getCurrentContext();
		HttpServletRequest request = ctx.getRequest();
		HttpServletResponse response = ctx.getResponse();
		response.setCharacterEncoding("UTF-8");
		response.setHeader("content-type", "text/html; charset=UTF-8");

		logger.info("{} request to {}", request.getMethod(), request.getRequestURL());
		for (String path : PASS_PATH_ARRAY) {
			if (StringUtils.contains(request.getRequestURL().toString(), path)) {
				logger.debug("request path: {} is pass", path);
				return null;
			}
		}

		String token = request.getHeader("token");
		if (StringUtils.isEmpty(token)) {
			logger.warn("access token is empty");
			ctx.setSendZuulResponse(false);
			ctx.setResponseStatusCode(404);
			ctx.setResponseBody(JSONObject.toJSONString(
					Response.error(200, -3, "header param error", null)));
			return ctx;
		}

		Jedis jedis = null;
		try {
			JedisPool jedisPool = redisCacheConfiguration.getJedisPool();
			jedis = jedisPool.getResource();
			logger.debug("zuul gateway service get redisResource success");
			String key = env.getPrefix() + token;
			String value = jedis.get(key);
			if (StringUtils.isBlank(value)) {
				ctx.setSendZuulResponse(false);
				ctx.setResponseStatusCode(401);
				ctx.setResponseBody(JSONObject.toJSONString(Response.error(200, -1, "login timeout", null)));
				return ctx;
			} else {
				logger.debug("access token ok");
				return null;
			}
		} catch (Exception e) {
			logger.error("get redisResource failed");
			logger.error(e.getMessage(), e);
			ctx.setSendZuulResponse(false);
			ctx.setResponseStatusCode(500);
			ctx.setResponseBody(JSONObject.toJSONString(
					Response.error(200, -8, "redis connect failed", null)));
			return ctx;
		} finally {
			if (jedis != null) {
				jedis.close();
			}
		}
	}
}

3. Service registration and discovery of microservices

3.1 Common registries

Eureka, Consul, and Nacos are among the most popular registries today, but Kubernetes can also register and discover services, as described below.

Eureka’s high availability

Nodes can fail after a registry is deployed, so let's first look at how a Eureka cluster achieves high availability. Start with the basic Eureka configuration:

spring.application.name=eureka-server
server.port=1111

spring.profiles.active=dev

eureka.instance.hostname=localhost

eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/

logging.path=/data/${spring.application.name}/logs

eureka.server.enable-self-preservation=false
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

eureka.server.eviction-interval-timer-in-ms=5000
eureka.server.responseCacheUpdateIntervalMs=60000

eureka.instance.lease-expiration-duration-in-seconds=10

eureka.instance.lease-renewal-interval-in-seconds=3

eureka.server.responseCacheAutoExpirationInSeconds=180

server.undertow.accesslog.enabled=false
server.undertow.accesslog.pattern=combined

Once configured, create an application-peer1.properties file:

spring.application.name=eureka-server
server.port=1111
eureka.instance.hostname=peer1
eureka.client.serviceUrl.defaultZone=http://peer2:1112/eureka/

And the application-peer2.properties file:

spring.application.name=eureka-server
server.port=1112
eureka.instance.hostname=peer2
eureka.client.serviceUrl.defaultZone=http://peer1:1111/eureka/

Here, the host names peer1 and peer2 are used to implement high availability. How do we configure these host names? There are several ways:

  • To configure the domain name, run the vi /etc/hosts command:
10.12.3.2 peer1
10.12.3.5 peer2
  • Configure the domain name when deploying the service through Kubernetes:
hostAliases:
- ip: "10.12.3.2"
  hostnames:
  - "peer1"
- ip: "10.12.3.5"
  hostnames:
  - "peer2"

Nacos implements service registration and discovery

Nacos, launched by Alibaba (latest version v1.2.1 at the time of writing), can register and discover services and also serve as a configuration center for configuration management. You can download and install Nacos manually at github.com/alibaba/nac…

Then start it. On Linux/Unix/Mac:

sh startup.sh -m standalone

Windows:

cmd startup.cmd -m standalone

Once we introduce the Nacos-related dependencies, we can use it:

<dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
 </dependency>

 <dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
 </dependency>
Note: the following configuration must go in bootstrap.yml (or bootstrap.properties), otherwise it may fail to load. You can try it yourself.
spring:
  application:
    name: oauth-cas
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        refreshable-dataids: xxx.properties, yyy.properties

After the configuration, complete the main class:

package com.damon;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = {"com.damon"})
@EnableDiscoveryClient
public class CasApp {
  public static void main(String[] args) {
    SpringApplication.run(CasApp.class, args);
  }
}

With the above complete, run the startup class, log in to the Nacos console, and open the service list; the registered service appears there.

Kubernetes service registration and discovery

Next, let me introduce the service registration and discovery feature of Kubernetes. The spring-cloud-kubernetes DiscoveryClient maps Kubernetes "Service" resources to services in Spring Cloud. In a Kubernetes environment, we do not need Eureka for registration and discovery; we use the Kubernetes Service mechanism directly.

In pom.xml, add the dependency configuration for the spring-cloud-kubernetes framework:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-kubernetes-core</artifactId>
</dependency>

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-kubernetes-discovery</artifactId>
</dependency>

Why is spring-cloud-kubernetes able to provide service registration and discovery? First, create a startup class for the Spring Boot project, introduce the service discovery annotation @EnableDiscoveryClient, and enable service discovery:

spring:
  application:
    name: edge-admin
  cloud:
    kubernetes:
      discovery:
        all-namespaces: true

The discovery capability comes from the spring-cloud-kubernetes-discovery module, so let's look at its spring.factories file.

Why read this file? Because when the Spring container starts, it looks up every spring.factories file on the classpath (including those inside jar files) and instantiates all the classes configured there. This is the technique behind the starter jars we use so often in Spring Boot development: once you depend on a starter jar, many functions are wired up automatically during Spring initialization.
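The idea can be sketched in a few lines of plain Java: read a key=Class1,Class2 entry and instantiate each class reflectively. This is a heavy simplification of Spring's SpringFactoriesLoader, and the FactoriesSketch class and the demo key below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the spring.factories idea: a text entry maps a key to
// class names, and each class is instantiated reflectively at startup.
// A simplification of Spring's SpringFactoriesLoader.
public class FactoriesSketch {
    static List<Object> loadFactories(String factoriesLine) {
        List<Object> instances = new ArrayList<>();
        // entry format: some.Key=com.example.A,com.example.B
        String classNames = factoriesLine.split("=", 2)[1];
        for (String className : classNames.split(",")) {
            try {
                Class<?> clazz = Class.forName(className.trim());
                instances.add(clazz.getDeclaredConstructor().newInstance());
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("cannot instantiate " + className, e);
            }
        }
        return instances;
    }

    public static void main(String[] args) {
        // use JDK classes so the sketch stays self-contained
        String line = "demo.Key=java.util.ArrayList,java.util.HashMap";
        List<Object> loaded = loadFactories(line);
        System.out.println(loaded.size());                      // 2
        System.out.println(loaded.get(0).getClass().getName()); // java.util.ArrayList
    }
}
```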

There are two classes in the spring.factories file, KubernetesDiscoveryClientAutoConfiguration and KubernetesDiscoveryClientConfigClientBootstrapConfiguration, and both will be instantiated. Look at KubernetesDiscoveryClientConfigClientBootstrapConfiguration first: through its @Import, the KubernetesAutoConfiguration and KubernetesDiscoveryClientAutoConfiguration classes are instantiated:


package org.springframework.cloud.kubernetes.discovery;

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.cloud.kubernetes.KubernetesAutoConfiguration;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

@Configuration
@ConditionalOnProperty("spring.cloud.config.discovery.enabled")
@Import({ KubernetesAutoConfiguration.class, KubernetesDiscoveryClientAutoConfiguration.class })
public class KubernetesDiscoveryClientConfigClientBootstrapConfiguration {
}

Now look at the KubernetesAutoConfiguration source; it instantiates an important class, DefaultKubernetesClient, as follows:

@Bean
@ConditionalOnMissingBean
public KubernetesClient kubernetesClient(Config config) {
	return new DefaultKubernetesClient(config);
}

Finally, look at the KubernetesDiscoveryClientAutoConfiguration source. Note the kubernetesDiscoveryClient method, which creates the key client interface. Also note the KubernetesClient parameter, whose value is the DefaultKubernetesClient object mentioned above:

@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(name = "spring.cloud.kubernetes.discovery.enabled", matchIfMissing = true)
public KubernetesDiscoveryClient kubernetesDiscoveryClient(KubernetesClient client,
		KubernetesDiscoveryProperties properties,
		KubernetesClientServicesFunction kubernetesClientServicesFunction,
		DefaultIsServicePortSecureResolver isServicePortSecureResolver) {
  return new KubernetesDiscoveryClient(client, properties, kubernetesClientServicesFunction, isServicePortSecureResolver);
}

Next, look at KubernetesDiscoveryClient.java in spring-cloud-kubernetes, in particular the getServices method:

public List<String> getServices(Predicate<Service> filter) {
		return this.kubernetesClientServicesFunction.apply(this.client).list().getItems()
				.stream().filter(filter).map(s -> s.getMetadata().getName())
				.collect(Collectors.toList());
}

In apply(this.client).list(), you can see that the data source is this.client. KubernetesClientServicesFunction is instantiated as follows:

@Bean
public KubernetesClientServicesFunction servicesFunction(
			KubernetesDiscoveryProperties properties) {
  if (properties.getServiceLabels().isEmpty()) {
    return KubernetesClient::services;
  }

  return (client) -> client.services().withLabels(properties.getServiceLabels());
}
Copy the code

It returns the result of the client's services method. So what is this.client in KubernetesDiscoveryClient's getServices method? As analyzed above, it is an instance of DefaultKubernetesClient. So go look at DefaultKubernetesClient.services, and you find the client is a ServiceOperationsImpl:

@Override
  public MixedOperation<Service, ServiceList, DoneableService, ServiceResource<Service, DoneableService>> services() {
    return new ServiceOperationsImpl(httpClient, getConfiguration(), getNamespace());
  }
Copy the code

We then look at the list function in our instance ServiceOperationsImpl:

public L list() throws KubernetesClientException {
  try {
    HttpUrl.Builder requestUrlBuilder = HttpUrl.get(getNamespacedUrl()).newBuilder();
    String labelQueryParam = getLabelQueryParam();
    if (Utils.isNotNullOrEmpty(labelQueryParam)) {
      requestUrlBuilder.addQueryParameter("labelSelector", labelQueryParam);
    }
    String fieldQueryString = getFieldQueryParam();
    if (Utils.isNotNullOrEmpty(fieldQueryString)) {
      requestUrlBuilder.addQueryParameter("fieldSelector", fieldQueryString);
    }
    Request.Builder requestBuilder = new Request.Builder().get().url(requestUrlBuilder.build());
    L answer = handleResponse(requestBuilder, listType);
    updateApiVersion(answer);
    return answer;
  } catch (InterruptedException | ExecutionException | IOException e) {
    throw KubernetesClientException.launderThrowable(forOperationType("list"), e);
  }
}

getNamespacedUrl() eventually calls getRootUrl():

public URL getRootUrl() {
    try {
      if (apiGroup != null) {
        return new URL(URLUtils.join(config.getMasterUrl().toString(), "apis", apiGroup, apiVersion));
      }
      return new URL(URLUtils.join(config.getMasterUrl().toString(), "api", apiVersion));
    } catch (MalformedURLException e) {
      throw KubernetesClientException.launderThrowable(e);
    }
  }

Looking at the logic, we can see that the resulting URL takes this format:

xxx/api/version or xxx/apis/xxx/version

This looks exactly like the standard URL format for calling the Kubernetes API Server. For details, refer to the official API Server documentation at kubernetes.io/docs/refere…
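Assuming only what getRootUrl shows, the two URL shapes can be reproduced with a small helper (a sketch, not the fabric8 implementation):

```java
// Sketch of the URL layout produced by getRootUrl: core-API resources live
// under /api/{version}, named API groups under /apis/{group}/{version}.
public class RootUrlSketch {
    static String rootUrl(String masterUrl, String apiGroup, String apiVersion) {
        String base = masterUrl.endsWith("/")
                ? masterUrl.substring(0, masterUrl.length() - 1)
                : masterUrl;
        if (apiGroup != null) {
            return base + "/apis/" + apiGroup + "/" + apiVersion;
        }
        return base + "/api/" + apiVersion;
    }

    public static void main(String[] args) {
        // Service is a core resource, so it has no API group
        System.out.println(rootUrl("https://10.0.0.1:6443/", null, "v1"));
        // prints https://10.0.0.1:6443/api/v1
        System.out.println(rootUrl("https://10.0.0.1:6443", "apps", "v1"));
        // prints https://10.0.0.1:6443/apis/apps/v1
    }
}
```

Since Service is a core resource, listing services hits the /api/v1 branch, which is why the client ends up asking the API Server for the Service list.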

To summarize the above: the final HTTP request goes to the Kubernetes API Server to fetch the data list of Service resources. Therefore, we just need to create a Service resource at the Kubernetes layer for it to fetch:

apiVersion: v1
kind: Service
metadata:
  name: admin-web-service
  namespace: default
spec:
  ports:
  - name: admin-web01
    port: 2001
    targetPort: admin-web01
  selector:
    app: admin-web

Of course, whether the workload is deployed as a Deployment or a DaemonSet, it ends up as Pods. If you want to run a single service on multiple nodes, you can use:

kubectl scale --replicas=2 deployment admin-web-deployment

Conclusion:

The service discovery of spring-cloud-kubernetes obtains the list of all Services under one or more Kubernetes namespaces, optionally filtered by label and port. This allows Spring Boot or other framework applications that depend on it to discover services and access them via http://serviceName.

4. Configuration management of microservices

4.1 Common Configuration centers

At present there are several common configuration centers: Spring Cloud Config, Apollo, and Nacos. In fact, the Kubernetes ConfigMap can also manage service configuration, and support for it has been introduced with Spring Boot 2.x.

Nacos configuration center

As covered in the registry section above, Nacos serves not only as a registry but also as a configuration center that can manage a service's environment variables.

Again, introduce its dependencies:

<dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
 </dependency>

 <dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
 </dependency>

Also, note that the following configuration must go in bootstrap.yml (or bootstrap.properties), or it may fail to load.

spring:
  application:
    name: oauth-cas
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        refreshable-dataids: xxx.properties, yyy.properties

The startup class was covered in the registry above, now look at its configuration class:

package com.damon.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope
public class EnvConfig {
	@Value("${jdbc.driverClassName:}")
	private String jdbc_driverClassName;

	@Value("${jdbc.url:}")
	private String jdbc_url;

	@Value("${jdbc.username:}")
	private String jdbc_username;

	@Value("${jdbc.password:}")
	private String jdbc_password;

	// ...
}

With the @Component and @RefreshScope annotations, the configuration becomes refreshable at runtime. In @Value("${jdbc.username:}"), the text after the colon is the default value used when the property is absent; here it is the empty string.
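The ${key:default} convention that @Value relies on can be illustrated with a tiny resolver (a sketch of the placeholder behavior, not Spring's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of ${key:default} resolution as used by @Value. If the key is
// missing, the text after the colon (possibly empty) is returned instead.
public class PlaceholderSketch {
    static String resolve(String placeholder, Map<String, String> props) {
        // strip the ${ ... } wrapper
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String key = colon >= 0 ? body.substring(0, colon) : body;
        String def = colon >= 0 ? body.substring(colon + 1) : null;
        String value = props.get(key);
        return value != null ? value : def;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("jdbc.url", "jdbc:mysql://localhost:3306/demo");
        System.out.println(resolve("${jdbc.url:}", props));      // the configured URL
        System.out.println(resolve("${jdbc.username:}", props)); // empty string (default)
    }
}
```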

Next, you can set the property values. In the Nacos console, open Config Management to view the configurations; if none exists on the first visit, create one and edit its content. Once created, a configuration can also be modified or deleted, but I won't demonstrate that here.

ConfigMap as configuration management

spring-cloud-kubernetes provided service discovery above, but it is powerful enough to also provide configuration management for services:

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-actuator-autoconfigure</artifactId>
  </dependency>

  <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-kubernetes-config</artifactId>
  </dependency>

At initialization, introduce the annotations so the configuration is injected automatically:

package com.damon;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

import com.damon.config.EnvConfig;

@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = {"com.damon"})
@EnableConfigurationProperties(EnvConfig.class)
@EnableDiscoveryClient
public class AdminApp {
	public static void main(String[] args) {
		SpringApplication.run(AdminApp.class, args);
	}
}

Where the EnvConfig class holds the environment configuration:

package com.damon.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConfigurationProperties(prefix = "damon")
public class EnvConfig {
	private String message = "This is a dummy message";

	public String getMessage() {
		return this.message;
	}

	public void setMessage(String message) {
		this.message = message;
	}
}
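The prefix = "damon" binding can be illustrated with a simplified sketch (not Spring's relaxed binder, which also handles name variants, nesting, and type conversion):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of prefix-based property binding: keys under "damon." are mapped
// to bean properties named by the remainder of the key. Simplified; the
// real Spring binder also does relaxed names, nesting and conversion.
public class PrefixBindingSketch {
    static Map<String, String> bind(Map<String, String> props, String prefix) {
        Map<String, String> bound = new HashMap<>();
        String p = prefix + ".";
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (e.getKey().startsWith(p)) {
                bound.put(e.getKey().substring(p.length()), e.getValue());
            }
        }
        return bound;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("damon.message", "Say Hello to the World");
        props.put("spring.application.name", "admin-web");
        // only the damon.* key is bound, keyed by "message"
        System.out.println(bind(props, "damon")); // {message=Say Hello to the World}
    }
}
```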

Thus, at deployment time, we create a new resource of type ConfigMap and configure its property values:

kind: ConfigMap
apiVersion: v1
metadata:
  name: admin-web
data:
  application.yaml: |-
    damon:
      message: Say Hello to the World
    ---
    spring:
      profiles: dev
    damon:
      message: Say Hello to the Developers
    ---
    spring:
      profiles: test
    damon:
      message: Say Hello to the Test
    ---
    spring:
      profiles: prod
    damon:
      message: Say Hello to the Prod

Combined with the following configuration, dynamic updates are achieved:

spring:
  application:
    name: admin-web
  cloud:
    kubernetes:
      discovery:
        all-namespaces: true
      reload:
        enabled: true
        mode: polling
        period: 500
      config:
        sources:
        - name: ${spring.application.name}
          namespace: default

The configuration above polls for new configuration automatically every 500 ms; the latest configuration can also be obtained dynamically through event-triggered reloads:

spring:
  application:
    name: admin-web
  cloud:
    kubernetes:
      config:
        sources:
         - name: ${spring.application.name}
           namespace: default
      discovery:
        all-namespaces: true
      reload:
        enabled: true
        mode: event
        period: 500
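The polling mode can be pictured as a loop that re-reads the source every period and swaps in the new configuration only when the content has changed (an illustrative sketch; the real reload is handled by spring-cloud-kubernetes):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Toy sketch of polling-based config reload: on every tick, re-read the
// source and replace the current config only if the content changed.
public class PollingReloadSketch {
    private final AtomicReference<String> current = new AtomicReference<>("");
    private final Supplier<String> source;

    PollingReloadSketch(Supplier<String> source) {
        this.source = source;
    }

    // one polling tick; returns true if a reload happened
    boolean pollOnce() {
        String latest = source.get();
        if (!latest.equals(current.get())) {
            current.set(latest);
            return true;
        }
        return false;
    }

    String currentConfig() {
        return current.get();
    }

    public static void main(String[] args) {
        String[] backing = { "message=v1" };
        PollingReloadSketch reloader = new PollingReloadSketch(() -> backing[0]);
        System.out.println(reloader.pollOnce());      // true (initial load)
        System.out.println(reloader.pollOnce());      // false (unchanged)
        backing[0] = "message=v2";
        System.out.println(reloader.pollOnce());      // true (changed)
        System.out.println(reloader.currentConfig()); // message=v2
    }
}
```

Event mode avoids this fixed-interval loop: instead of re-reading on a timer, the application reacts to change notifications from the ConfigMap.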

5. Division of microservice modules

5.1 How to classify microservices

In the design of a microservice architecture, service splitting is a prominent problem. There are two approaches: vertical splitting by business, and horizontal splitting by function.

Taking an e-commerce business as an example, vertical splitting by business domain yields a user microservice, a product microservice, a transaction microservice, an order microservice, and so on.

Think about it: does vertical splitting by business domain alone satisfy all business scenarios? Definitely not. For example, a user service covers both user registration (write) and login (read). Write requests are generally more important than read requests, yet under high concurrency the read/write ratio can reach 10:1 or even higher, so a flood of read requests often directly affects writes. To avoid this interference, services need read/write separation: user registration becomes one microservice and login another. In this case, services are split vertically at the finer granularity of the API.

Horizontally, we split by request function, that is, we keep splitting along the lifecycle of a request. A request is sent from the client; the first to receive it is the gateway service (leaving aside any nginx proxy in front), which performs authentication, parameter validation, routing and forwarding, and so on. The business logic service then orchestrates the business processing of the request. Data access services store and query business data: they provide basic CRUD atomic operations, shard massive data across databases and tables, and shield the differences of the underlying storage. Finally come the data persistence and caching layers, such as MQ, Kafka, and Redis Cluster.

Through vertical business splitting and horizontal function splitting, a microservice architecture evolves services into a smaller granularity; services are decoupled from each other, and each can be rapidly iterated and continuously delivered (CI/CD), achieving the company-level goals of reducing cost and improving efficiency. However, the finer the service granularity, the more the services interact, and more interaction makes governance between services more complex. Inter-service governance includes service discovery, communication, routing, load balancing, retries, rate limiting and degradation, circuit breaking, and link tracing.

5.2 Granularity of microservices

The essence of microservice granularity boils down to "high cohesion, low coupling." High cohesion: each service sits in the same network or domain, and relative to the outside, the whole is a closed, secure box. The box's external interfaces stay unchanged, as do the interfaces between the modules inside it, but the contents of each module can change freely. Modules expose only the minimum interfaces needed, avoiding strong dependencies. Adding or deleting a module should affect only the modules that depend on it, never unrelated ones.

Low coupling then carries into our business system design: reduce the relationships between business modules, cut redundant, repeated, and cross-cutting complexity, and keep each module's responsibility as single as possible. That is how low coupling is achieved.