This series is planned to run to about four parts. This part covers configuration, logging, and image issues; the next will focus on continuous integration, then monitoring, and finally clustering.

Spring Profiles and environment variables

In Docker-based DevOps, one image should serve as many environments as possible, so that every environment runs exactly the same code. Given our actual situation we did not adopt a configuration-center solution; instead we switch configuration through environment variables.

Spring Boot supports multiple environments out of the box: Spring profiles can be used to separate configuration for different environments or clusters.

You can choose which profile to activate with the environment variable SPRING_PROFILES_ACTIVE:


docker run -d -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=dev" --name test testImage:latest


Internally we generally manage configuration with multiple files, dividing environments into five: local, dev, test, pre, and prod, corresponding to local debugging, the development environment, the test environment, the pre-release environment, and production respectively. That gives the shared application.yaml plus five environment files: application-local.yaml, application-dev.yaml, application-test.yaml, application-pre.yaml, and application-prod.yaml. In application.yaml we put common configuration, such as Jackson, part of the Kafka settings, and MyBatis; connection settings for MySQL and Kafka live in the per-environment files. The default environment is local, and during deployment each environment overrides the variable to switch configuration.
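As a sketch of how the shared file might declare the default profile (the Jackson entry is an illustrative common setting, not our exact configuration):

```yaml
# application.yaml -- shared settings; "local" is the default profile
spring:
  profiles:
    active: local
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
```

SPRING_PROFILES_ACTIVE set at deploy time then overrides this default.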

One remaining problem with this approach is that the production MySQL connection address would be exposed to anyone with access to the code, which is dangerous, so we inject those values through environment variables by default as well. Production configuration is generally known only to the O&M team, who inject it when they configure the deployment.

Here’s an example:

spring.redis.host=${REDIS_HOST}
spring.redis.port=${REDIS_PORT}
spring.redis.timeout=30000

docker run -d -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=dev" -e "REDIS_HOST=127.0.0.1" -e "REDIS_PORT=6379" --name test testImage:latest
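One refinement worth knowing: Spring's placeholder syntax supports a fallback value after a colon, so local startup still works when a variable is not set (the defaults below are illustrative, not our actual values):

```properties
spring.redis.host=${REDIS_HOST:127.0.0.1}
spring.redis.port=${REDIS_PORT:6379}
```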


Elsewhere in our code we need to decide whether to create certain beans based on the environment; for example, we don't want Swagger enabled in production. For this we use @Profile to control whether a bean is initialized. Here's an example:

@Component
@Profile("dev")
public class DatasourceConfigForDev {
}

@Configuration
@EnableSwagger2
@Profile("dev")
public class SwaggerConfig {
}

Spring Boot containerized logs

In practice, we use Kubernetes for container scheduling and ES (Elasticsearch) for log storage. There are currently four common approaches to collecting application logs.

In the first, the application sends logs directly over the network to the log collection component and on to ES, for example with the LogstashSocketAppender from logstash-logback-encoder. If log volume is high, logs can first be written to a message queue and then picked up by the collector. This approach consumes extra CPU and memory in the application and needs a reasonably stable network.
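For reference, a minimal sketch of this first approach with logstash-logback-encoder (the destination host and port are placeholders; the TCP appender is the library's modern counterpart to the older UDP socket appender):

```xml
<!-- sketch: ship JSON events straight to Logstash over TCP -->
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>logstash.internal:5000</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
```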

In the second, logs are written to a fixed directory that is mounted on local or network storage, where a log collector processes them. The drawback is that Kubernetes pod information is missing from the logs and has to be added back some other way.

In the third, logs go straight to the console, are captured by Docker's logging driver, and are then picked up by the log collector. With many different kinds of containers running on the same host, the cost of parsing the logs can be very high unless they are specially structured.

In the fourth, each application gets a dedicated sidecar container for log parsing and collection. This uses more resources, but as long as the collection tool in the sidecar is chosen well, it is arguably the best solution.

Weighing the options above against our own situation, we chose the third. To avoid heavy parsing work during collection, we want the log output to be JSON wherever possible, so we use logstash-logback-encoder to emit JSON with a fixed structure. We created a logback-kubernetes.xml for environments that run in containers; that way local development can happily keep Spring Boot's default logging.
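Our logback-kubernetes.xml boils down to something like the following sketch (the structure is assumed, not the verbatim file): a console appender whose encoder emits one JSON object per log event:

```xml
<configuration>
    <appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE_JSON"/>
    </root>
</configuration>
```

In containerized environments, Spring Boot's logging.config property (for example via a LOGGING_CONFIG environment variable) can point at this file, leaving local runs on the default logging setup.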

Issues with running Java in containers

We are currently on Java 8, and for the JDK we chose OpenJDK. The main reason was that we had not yet packaged an internal image, and the tutorials we followed led us into the OpenJDK camp. In hindsight we should be glad: given the new Oracle licensing model from Java 11 onward, OpenJDK looks like the only practical choice going forward anyway.

Before Java 8u131, we regularly hit OOM-kill problems, because the JVM did not know it was running in a container and could not derive its runtime parameters from the container's CPU and memory limits. Some Java applications needed repeated manual tuning, and there was no general way to handle this or to scale automatically. Later we found https://github.com/fabric8io-images/java/tree/master/images/jboss/openjdk8/jdk, an image that automatically reads CPU and memory information from cgroups and computes reasonably sensible JVM parameters. Following the same idea we built a corresponding script internally (our monitoring stack is different), though the process is not very transparent.
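The idea behind such images and scripts can be sketched in a few lines of shell. This is only an illustration: the cgroup v1 path, the 50% heap ratio, and the fallback values are our assumptions, not the fabric8 defaults.

```shell
#!/bin/sh
# Derive a -Xmx option from the container's cgroup v1 memory limit.
heap_opt() {
    limit_bytes="$1"
    # On an unconstrained host the kernel reports a huge number (effectively
    # "unlimited"); treat anything above 16 GiB as such and use a safe default.
    if [ "$limit_bytes" -gt 17179869184 ]; then
        echo "-Xmx512m"
    else
        # give the heap half of the container limit
        echo "-Xmx$((limit_bytes / 2 / 1024 / 1024))m"
    fi
}

# Read the limit, falling back to "unlimited" if the cgroup file is absent.
limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || echo 9223372036854771712)
echo "JVM heap option: $(heap_opt "$limit")"
```

A launch script can feed the result into the `java` command line before handing over to the application.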

Later, JDK 8u131 added -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap, which lets the JVM recognize the container's memory limit (the underlying mechanism is easy to look up, so we won't go into it here). Since in our case CPU is generally not exhausted and memory is the main bottleneck, we packaged a new image. It looks like this:


FROM alpine:3.8

ENV LANG="en_US.UTF-8" \
    LANGUAGE="en_US.UTF-8" \
    LC_ALL="en_US.UTF-8" \
    TZ="Asia/Shanghai"

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/' /etc/apk/repositories \
    && apk add --no-cache tzdata curl ca-certificates \
    && echo "${TZ}" > /etc/TZ \
    && ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime \
    && rm -rf /tmp/* /var/cache/apk/*

ENV JAVA_VERSION_MAJOR=8 \
    JAVA_VERSION_MINOR=181 \
    JAVA_VERSION_BUILD=13 \
    JAVA_VERSION_BUILD_STEP=r0 \
    JAVA_PACKAGE=openjdk \
    JAVA_JCE=unlimited \
    JAVA_HOME=/usr/lib/jvm/default-jvm \
    DEFAULT_JVM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -XX:+UseG1GC"

RUN apk add --no-cache openjdk8-jre=${JAVA_VERSION_MAJOR}.${JAVA_VERSION_MINOR}.${JAVA_VERSION_BUILD}-${JAVA_VERSION_BUILD_STEP} \
    && echo "securerandom.source=file:/dev/urandom" >> /usr/lib/jvm/default-jvm/jre/lib/security/java.security \
    && rm -rf /tmp/*  /var/cache/apk/*


At this point our Java base image is packaged, and it solves the main problems of running Java in containers reasonably well. As for future updates: Java 8u191 and Java 11 handle container resource limits natively, so these workarounds will no longer be needed there. If you don't mind a bit of hassle, give Java 11 a try.
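An application image on top of this base then only needs to add the jar. A sketch (the image tag is taken from the Dockerfiles in this article; app.jar and the extra JVM_OPTS hook are assumptions). The exec form matters so the JVM runs as PID 1 and receives stop signals from Kubernetes:

```dockerfile
FROM xdatk/openjdk:8.181.13-r0
COPY app.jar /app.jar
# DEFAULT_JVM_OPTS comes from the base image; JVM_OPTS is a per-deployment hook
ENTRYPOINT exec java ${DEFAULT_JVM_OPTS} ${JVM_OPTS} -jar /app.jar
```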

The full image definitions are available at: https://github.com/XdaTk/DockerImages


About Spring Boot and Tomcat APR

For the Spring Boot embedded container we use Tomcat. We did try Undertow for a while, and it does have a smaller memory footprint, but because its monitoring support is less complete we are staying with Tomcat for now. If you upgrade to Spring Boot 2.0, you may notice a WARN log about Tomcat APR at startup. For what APR is, see http://tomcat.apache.org/tomcat-9.0-doc/apr.html

For performance, we decided to switch to APR mode, so on top of the Java image above we build one more layer.

FROM xdatk/openjdk:8.181.13-r0 as native

ENV TOMCAT_VERSION="9.0.13" \
    APR_VERSION="1.6.3-r1" \
    OPEN_SSL_VERSION="1.0.2p-r0"
ENV TOMCAT_BIN="https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v${TOMCAT_VERSION}/bin/apache-tomcat-${TOMCAT_VERSION}.tar.gz"

RUN apk add --no-cache apr-dev=${APR_VERSION} openssl-dev=${OPEN_SSL_VERSION} openjdk8=${JAVA_VERSION_MAJOR}.${JAVA_VERSION_MINOR}.${JAVA_VERSION_BUILD}-${JAVA_VERSION_BUILD_STEP} wget unzip make g++ \
    && cd /tmp \
    && wget -O tomcat.tar.gz ${TOMCAT_BIN} \
    && tar -xvf tomcat.tar.gz \
    && cd apache-tomcat-*/bin \
    && tar -xvf tomcat-native.tar.gz \
    && cd tomcat-native-*/native \
    && ./configure --with-java-home=${JAVA_HOME} \
    && make \
    && make install

FROM xdatk/openjdk:8.181.13-r0

ENV TOMCAT_VERSION="9.0.13" \
    APR_VERSION="1.6.3-r1" \
    OPEN_SSL_VERSION="1.0.2p-r0" \
    APR_LIB=/usr/local/apr/lib

COPY --from=native ${APR_LIB} ${APR_LIB}

RUN apk add --no-cache apr=${APR_VERSION} openssl=${OPEN_SSL_VERSION}
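One caveat worth stating explicitly: Tomcat only loads the native library if its directory is on java.library.path, so the launch command has to point there. A sketch under that assumption (the path matches APR_LIB above; app.jar is hypothetical):

```dockerfile
# appended to the APR-enabled image above
COPY app.jar /app.jar
ENTRYPOINT exec java -Djava.library.path=/usr/local/apr/lib ${DEFAULT_JVM_OPTS} -jar /app.jar
```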

In our measurements there is indeed some performance improvement.

With the above, we have basically ensured that Spring Boot runs properly in containers. Next we need to get the code flowing smoothly into production, so stay tuned for the next part.