The difference between Spring Boot 1.x and 2.x
In A Brief Introduction to Spring Cloud Architecture Design, I talked about Spring Cloud's architecture. In the beginning, Spring Boot applications were essentially built on Eureka, Config, Zuul, Ribbon, Feign, Hystrix, and so on. By Spring Boot 2.x, a number of alternative components were on the rise. Here is a brief list of the differences between the two versions.
In Spring Boot 1.x, the session timeout looks like this:
server.session.timeout=3600
In 2.x:
server.servlet.session.timeout=PT120M
The format is quite different, and the cookie configuration follows the same pattern:
server:
  servlet:
    session:
      timeout: PT120M
      cookie:
        name: ORDER-SERVICE-SESSIONID
- The application's context-path property moved under servlet as well: server.context-path became server.servlet.context-path, just like the session settings above.
- Spring Boot 2.x is based on Spring 5, while Spring Boot 1.x is based on Spring 4 or lower.
- Changes to the base class for unified error handling, AbstractErrorController.
- The Chinese characters of the configuration file can be read directly without transcoding.
- Actuator changed a lot. By default, most monitoring endpoints are no longer exposed; you need to enable and customize the monitoring information yourself.
- Starting with Spring Boot 2.x, it can be combined with K8s to implement configuration management of services, load balancing, etc., which is different from 1.x.
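As a quick reference, a small properties sketch of how the renamed keys look in each version (the values are illustrative):

```properties
# Spring Boot 1.x
# server.context-path=/api
# server.session.timeout=3600

# Spring Boot 2.x: the servlet-specific keys moved under server.servlet.*
server.servlet.context-path=/api
server.servlet.session.timeout=PT120M
```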
An introduction to some of K8s resources
As mentioned above, Spring Boot 2.x can be combined with K8s as a microservice architecture design, so let’s first talk about some components of K8s.
ConfigMap, as the name suggests, holds configuration information as key-value pairs, either individual properties or whole configuration files. It is intended for non-sensitive data, such as application configuration.
There are several ways to create a ConfigMap.
1. Create a key-value string
kubectl create configmap test-config --from-literal=baseDir=/usr
The above command creates a key-value pair named test-config with a key baseDir and value “/usr”.
2. Create a file based on the YML description file
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  baseDir: /usr
Alternatively, create a YML file and configure different information for different environments:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cas-server
data:
  application.yaml: |-
    ---
    spring:
      profiles: dev
    greeting:
      message: Say Hello to the Dev
Note:
- ConfigMap must be created before Pod can use it.
- Pod can only use ConfigMap in the same namespace.
Of course, there are many other uses, you can refer to the official website.
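To make this concrete, here is a minimal sketch (the Pod name and image are illustrative, reusing the test-config ConfigMap created above) of the two common ways a Pod consumes a ConfigMap: as environment variables and as mounted files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod            # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    # 1. Inject a single key as an environment variable
    env:
    - name: BASE_DIR
      valueFrom:
        configMapKeyRef:
          name: test-config
          key: baseDir
    # 2. Mount the whole ConfigMap as files under /etc/config
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: test-config
```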
So what is a Service? A Service is a logical collection of Pods: it defines a service and a policy for accessing those Pods.
There are four types of services:
- ExternalName: maps the Service to an external DNS name via a CNAME record, insulating callers from changes to the underlying name; it requires the cluster DNS add-on.
- ClusterIP: the default type; exposes the Service on a cluster-internal virtual IP for Pod access. The IP is allocated automatically, or you can pin a fixed one with the clusterIP field.
- NodePort: builds on ClusterIP; exposes the Service on a port of each node so that external clients can reach it.
- LoadBalancer: builds on NodePort; places an external load balancer (typically provisioned by a cloud provider) in front of the node ports.
How do I use K8s for service registration and discovery
From the description of Service above, we can see a scenario: within a K8s cluster (i.e., on the same LAN), every microservice can be reached by any Pod through its Service. This uses the default Service type, ClusterIP, whose address is allocated automatically.
The question is: if ClusterIP is enough to access services within the cluster, how do services get registered? In fact, K8s introduces no separate registry; it relies on its kube-dns component. The name of each Service is registered in kube-dns as a domain name (of the form service-name.namespace.svc.cluster.local), so a Service's name is all that is needed to reach the service it fronts. And if a Service is backed by multiple Pods, how is load balancing achieved? Ultimately, kube-proxy distributes the traffic.
There are two types of Service load distribution strategies:
- RoundRobin: round-robin mode, forwarding requests to each backend Pod in turn. This is the default.
- SessionAffinity: session-sticky mode based on the client IP address, similar to an IP-hash load-balancing strategy.
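For example, session affinity is switched on in the Service spec itself. A sketch, reusing the cas-server Service from this article (the timeout value is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cas-server-service
spec:
  sessionAffinity: ClientIP        # default is None (round robin)
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # how long the client-IP stickiness lasts
  ports:
  - name: cas-server01
    port: 2000
    targetPort: cas-server01
  selector:
    app: cas-server
```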
In short, K8s realizes service discovery through Services: a domain name is resolved, layer by layer, down to the IP and port inside a container; that locates the corresponding service and completes the request.
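The two distribution strategies above can be sketched in plain Java. This is purely illustrative: kube-proxy applies these strategies at the iptables/IPVS level, not in application code, and the Pod addresses here are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ServiceLoadBalancer {
    private final List<String> pods;
    private final AtomicInteger counter = new AtomicInteger();

    public ServiceLoadBalancer(List<String> pods) {
        this.pods = pods;
    }

    // RoundRobin: forward each request to the next backend pod in turn.
    public String roundRobin() {
        return pods.get(Math.floorMod(counter.getAndIncrement(), pods.size()));
    }

    // SessionAffinity: the same client IP always maps to the same pod,
    // similar to an IP-hash strategy.
    public String sessionAffinity(String clientIp) {
        return pods.get(Math.floorMod(clientIp.hashCode(), pods.size()));
    }

    public static void main(String[] args) {
        ServiceLoadBalancer lb = new ServiceLoadBalancer(
                List.of("10.16.0.1:2000", "10.16.0.2:2000", "10.16.0.3:2000"));
        System.out.println(lb.roundRobin());   // first pod
        System.out.println(lb.roundRobin());   // second pod
        // Same client IP -> same pod on every call
        System.out.println(lb.sessionAffinity("192.168.0.7")
                .equals(lb.sessionAffinity("192.168.0.7")));   // true
    }
}
```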
Here’s a very simple example:
apiVersion: v1
kind: Service
metadata:
  name: cas-server-service
  namespace: default
spec:
  ports:
  - name: cas-server01
    port: 2000
    targetPort: cas-server01
  selector:
    app: cas-server
kubectl apply -f service.yaml
root@ubuntu:~$ kubectl get svc
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
admin-web-service             ClusterIP   10.16.129.24    <none>        2001/TCP   84d
cas-server-service            ClusterIP   10.16.230.167   <none>        2000/TCP   67d
cloud-admin-service-service   ClusterIP   10.16.25.178    <none>        1001/TCP   190d
From this we can see that the default type is ClusterIP, used for Pod access within the cluster. The domain name resolves to the Service's address, and the LB policy then picks one of the backing Pods as the target of the request.
How does K8s handle common configurations in microservices
Above, we discussed several ways to create a ConfigMap, one of which is commonly used in Java: creating a YML file for configuration management.
Such as:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cas-server
data:
  application.yaml: |-
    ---
    spring:
      profiles: dev
    greeting:
      message: Say Hello to the Dev
    ---
    spring:
      profiles: test
    greeting:
      message: Say Hello to the Test
    ---
    spring:
      profiles: prod
    greeting:
      message: Say Hello to the Prod
A YML file is created above, and the configuration for each environment (development, test, production, and so on) is selected by spring.profiles.
Specific code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-server-deployment
  labels:
    app: cas-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cas-server
  template:
    metadata:
      labels:
        app: cas-server
    spec:
      nodeSelector:
        cas-server: "true"
      containers:
      - name: cas-server
        image: {{ cluster_cfg['cluster']['docker-registry']['prefix'] }}cas-server
        imagePullPolicy: Always
        ports:
        - name: cas-server01
          containerPort: 2000
        volumeMounts:
        - mountPath: /home/cas-server
          name: cas-server-path
        args: ["sh", "-c", "nohup java $JAVA_OPTS -jar -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms1024m -Xmx1024m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC cas-server.jar --spring.profiles.active=dev", "&"]
      hostAliases:
      - ip: "0.0.0.0"
        hostnames:
        - "gemantic.all"
      volumes:
      - name: cas-server-path
        hostPath:
          path: /var/pai/cas-server
Thus, when we start the container, we select its active profile with --spring.profiles.active=dev and pick up the matching configuration from the ConfigMap. It feels a bit like using Java Config to manage multiple environments, but we don't have to be that complicated: we hand it all over to K8s and only need to pass the activation flag. Isn't that easy?
New features in Spring Boot 2.x
In the first section, we covered the differences between 1.x and 2.x, the most prominent being that Spring Boot 2.x can combine with K8s for a microservice architecture. In K8s, a Pod does not automatically see updates to a ConfigMap; to obtain the latest values, the Pod would normally have to be restarted.
But 2.x provides automatic refresh:
spring:
  application:
    name: cas-server
  cloud:
    kubernetes:
      config:
        sources:
        - name: ${spring.application.name}
          namespace: default
      discovery:
        all-namespaces: true
      reload:
        enabled: true
        mode: polling
        period: 500
As shown above, we turned on automatic configuration refresh and set the mode to polling (active pull) with a 500 ms period. There is also a second mode, event, which relies on event notifications. Either way, when the ConfigMap changes, the latest data is picked up without restarting the Pod.
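A sketch of what the reload section would look like in the event mode (the same keys as the configuration above, with only the mode switched):

```yaml
spring:
  cloud:
    kubernetes:
      reload:
        enabled: true
        mode: event          # react to K8s watch events instead of polling
```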
At the same time, Spring Boot 2.x combines K8s to realize service registration and discovery of microservices:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-core</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-discovery</artifactId>
</dependency>
Enable the service discovery function:
spring:
  cloud:
    kubernetes:
      discovery:
        all-namespaces: true
Once this is enabled, as mentioned in A Brief Introduction to Spring Cloud Architecture Design, service discovery is really an HTTP request to the K8s API Server that fetches the list of Service resources; the underlying load-balancing strategy then resolves each call to one Pod. To have multiple Pods backing the same service, we execute:
kubectl scale --replicas=2 deployment admin-web-deployment
The RestTemplate Client is used in conjunction with the Ribbon:
client:
  http:
    request:
      connectTimeout: 8000
      readTimeout: 3000
backend:
  ribbon:
    eureka:
      enabled: false
    client:
      enabled: true
    ServerListRefreshInterval: 5000
ribbon:
  ConnectTimeout: 8000
  ReadTimeout: 3000
  eager-load:
    enabled: true
    clients: cas-server-service,admin-web-service
  MaxAutoRetries: 1                # retries against the first requested server
  MaxAutoRetriesNextServer: 1      # max number of next servers to retry (excluding the first)
  # ServerListRefreshInterval: 2000
  OkToRetryOnAllOperations: true
  NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RoundRobinRule   # load-balancing strategy
  # NFLoadBalancerRuleClassName: com.damon.config.RibbonConfiguration    # custom strategy
You can configure service lists and customize load balancing policies.
If you use Feign for load balancing, the configuration differs only slightly from the Ribbon's, since Feign itself is built on the Ribbon.
feign:
  client:
    config:
      default:              # provider-service
        connectTimeout: 8000
        readTimeout: 3000   # client read timeout
        loggerLevel: full
Other load-balancing policies can be customized here as well; since this is based on the Ribbon, it works the same way.
Spring Boot 2.x combined with K8s to realize the micro service architecture design
In a microservice architecture, the main point is that service producers can be shared and called by service consumers. On top of that you can add load balancing: when one service calls another that runs on multiple nodes, some policy selects a suitable node to serve the request. The following focuses on service producers and consumers.
Look at producers first, introducing the usual dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-actuator-autoconfigure</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
We used a fairly recent version above: Spring Boot 2.1.13, with cloud version Greenwich.SR3. Next, we configure the dependencies used by the K8s ConfigMap and add some database configuration.
First, configure the K8s ConfigMap as a property source in the bootstrap file:
spring:
  application:
    name: cas-server
  cloud:
    kubernetes:
      config:
        sources:
        - name: ${spring.application.name}
          namespace: default
      reload:
        enabled: true
        mode: polling
logging:
  path: /data/${spring.application.name}/logs
The rest of the configuration can be configured in the application file:
spring:
  profiles:
    active: dev
server:
  port: 2000
  undertow:
    accesslog:
      enabled: false
      pattern: combined
  servlet:
    session:
      timeout: PT120M
Let’s look at the startup class:
@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = {"com.damon"})
//@SpringBootApplication(scanBasePackages = { "com.damon" })
@EnableConfigurationProperties(EnvConfig.class)
public class CasApp {
    public static void main(String[] args) {
        SpringApplication.run(CasApp.class, args);
    }
}
We do not use the @SpringBootApplication annotation directly here, because we only need a few specific configurations and do not want to load everything.
The startup class imports an EnvConfig class:
@Configuration
@ConfigurationProperties(prefix = "greeting")
public class EnvConfig {
    private String message = "This is a dummy message";

    public String getMessage() {
        return this.message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
This is the class into which the ConfigMap's properties are bound. Beyond that, you can define your own interface classes to implement the service producer.
Finally, if we need to deploy under K8s, we need to prepare several scripts.
1. Create ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: cas-server
data:
  application.yaml: |-
    ---
    spring:
      profiles: dev
    greeting:
      message: Say Hello to the Dev
    ---
    spring:
      profiles: test
    greeting:
      message: Say Hello to the Test
    ---
    spring:
      profiles: prod
    greeting:
      message: Say Hello to the Prod
Note that the namespace must match the namespace where the service is deployed (the default is default), and the ConfigMap must be created before the service.
2. Create a service deployment script
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-server-deployment
  labels:
    app: cas-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cas-server
  template:
    metadata:
      labels:
        app: cas-server
    spec:
      nodeSelector:
        cas-server: "true"
      containers:
      - name: cas-server
        image: cas-server
        imagePullPolicy: Always
        ports:
        - name: cas-server01
          containerPort: 2000
        volumeMounts:
        - mountPath: /home/cas-server
          name: cas-server-path
        - mountPath: /data/cas-server
          name: cas-server-log-path
        - mountPath: /etc/kubernetes
          name: kube-config-path
        args: ["sh", "-c", "nohup java $JAVA_OPTS -jar -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms1024m -Xmx1024m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC cas-server.jar --spring.profiles.active=dev", "&"]
      volumes:
      - name: cas-server-path
        hostPath:
          path: /var/pai/cas-server
      - name: cas-server-log-path
        hostPath:
          path: /data/cas-server
      - name: kube-config-path
        hostPath:
          path: /etc/kubernetes
Note the replicas property: it is the number of replicas the Pod should run with. You can set it in the script to run multiple copies of the Pod, or, if it was deployed with only one, scale it later with the command:
kubectl scale --replicas=3 deployment cas-server-deployment
Here, I recommend using the Deployment type for pod creation, since the Deployment type better supports elastic scaling and rolling updates.
At the same time, we specify the running environment of the Pod with --spring.profiles.active=dev.
3. Create a Service
Finally, if the Service wants to be discovered, we need to create a Service:
apiVersion: v1
kind: Service
metadata:
  name: cas-server-service
  namespace: default
spec:
  ports:
  - name: cas-server01
    port: 2000
    targetPort: cas-server01
  selector:
    app: cas-server
Note that the namespace needs to be the same as the namespace deployed by the service, which defaults to default.
Looking at the consumer of the service, again, let’s look at introducing common dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-actuator-autoconfigure</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-commons</artifactId>
</dependency>
<!-- service discovery with K8s -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-core</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-discovery</artifactId>
</dependency>
<!-- load-balancing strategy -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<!-- circuit breaker -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
Most of the dependencies here are the same as the producer's, but we add the service-discovery dependencies, the load-balancing policy dependencies, and the circuit-breaker mechanism for the service.
The next configuration in the bootstrap file is the same as that in the producer file. The only difference is the application file:
backend:
  ribbon:
    eureka:
      enabled: false
    client:
      enabled: true
    ServerListRefreshInterval: 5000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 1000
  eager-load:
    enabled: true
    clients: cas-server-service,edge-cas-service,admin-web-service   # services discovered for load balancing
  MaxAutoRetries: 1                # retries against the first requested server
  MaxAutoRetriesNextServer: 1      # max number of next servers to retry (excluding the first)
  OkToRetryOnAllOperations: true
  NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RoundRobinRule   # load-balancing strategy
hystrix:
  command:
    BackendCall:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 5000
  threadpool:
    BackendCallThread:
      coreSize: 5
Load balancing mechanism and policies (you can customize policies) are introduced.
Next, start classes:
/**
 * @author Damon
 * @date January 13, 2020
 */
@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = {"com.damon"})
@EnableConfigurationProperties(EnvConfig.class)
@EnableDiscoveryClient
public class AdminApp {
    public static void main(String[] args) {
        SpringApplication.run(AdminApp.class, args);
    }
}
The EnvConfig class is the same as before and is not shown here. Also note the @EnableDiscoveryClient annotation, which enables service discovery.
Similarly, we create a new interface, assuming that our producer has an interface that is:
http://cas-server-service/api/getUser
We can use the RestTemplate client with the Ribbon for load balancing:
@LoadBalanced
@Bean
public RestTemplate restTemplate() {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setReadTimeout(env.getProperty("client.http.request.readTimeout", Integer.class, 15000));
    requestFactory.setConnectTimeout(env.getProperty("client.http.request.connectTimeout", Integer.class, 3000));
    return new RestTemplate(requestFactory);
}
As you can see, this approach to distributed load balancing is simple: inject an initialization Bean annotated with @LoadBalanced.
In the implementation class, we simply call the service producer directly:
ResponseEntity<String> forEntity = restTemplate.getForEntity("http://cas-server/api/getUser", String.class);
The http:// prefix with the Service name must be used in the URL for service discovery and load balancing to work. The Ribbon ships with several LB policies, and you can also define your own.
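For instance, a different built-in rule can be selected per Ribbon client purely through configuration. A sketch, where cas-server-service is the client name and WeightedResponseTimeRule is one of the rules shipped with the Ribbon:

```yaml
cas-server-service:
  ribbon:
    # favors servers with faster average response times over plain round robin
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.WeightedResponseTimeRule
```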
Finally, we can add a circuit breaker to the implementation class:
@HystrixCommand(fallbackMethod = "admin_service_fallBack")
public Response<Object> getUserInfo(HttpServletRequest req, HttpServletResponse res) {
    ResponseEntity<String> forEntity = restTemplate.getForEntity(envConfig.getCas_server_url() + "/api/getUser", String.class);
    logger.info("test restTemplate.getForEntity(): {}", forEntity);
    if (forEntity.getStatusCodeValue() == 200) {
        logger.info("test restTemplate.getForEntity() body: {}", JSON.toJSON(forEntity.getBody()));
        logger.info(JSON.toJSONString(forEntity.getBody()));
    }
    return Response.ok(200, 0, "success", forEntity.getBody());
}
In case of fuse break, the callback method is as follows:
private Response<Object> admin_service_fallBack(HttpServletRequest req, HttpServletResponse res) {
    String token = StrUtil.subAfter(req.getHeader("Authorization"), "bearer ", false);
    logger.info("admin_service_fallBack token: {}", token);
    return Response.ok(200, -5, "Service is down!", null);
}
The fallback must return the same type as the original method, otherwise an error may be reported. For details, see Spring Cloud circuit breakers.
Finally, just like the producer, the consumer needs ConfigMap, Service, and deployment scripts; they will be open-sourced and are not shown here. In the end, when the authentication service is requested, its multiple Pods are called in rotation: a round-robin strategy based on the Ribbon implements distributed load balancing, with session information shared through Redis.