Introduction to itoken Project

The development environment

  • Operating system: Windows 10 Enterprise
  • Development tool: IntelliJ IDEA
  • Database: MySQL 5.7.22
  • Java SDK: Oracle JDK 1.8.0_152

The deployment environment

  • Operating system: Linux Ubuntu Server 16.04 X64
  • Virtualization technology: VMware + Docker

Project management tools

  • Project construction: Maven + Nexus
  • Code management: Git + GitLab
  • Image management: Docker Registry

Back-end main technology stack

  • Core framework: Spring Boot + Spring Cloud
  • View framework: Spring MVC
  • Page engine: Thymeleaf
  • ORM framework: tk.mybatis (simplifies MyBatis development)
  • Database connection pool: Alibaba Druid
  • Database cache: Redis Sentinel
  • Message middleware: RabbitMQ
  • Interface document engine: Swagger2 (RESTful API document generation)
  • Full-text search engine: Elasticsearch
  • Distributed link tracing: Zipkin
  • Distributed file system: Alibaba FastDFS
  • Distributed service monitoring: Spring Boot Admin
  • Distributed service registry: Spring Cloud Eureka
  • Distributed Configuration center: Spring Cloud Config
  • Distributed log system: ELK(ElasticSearch + Logstash + Kibana)
  • Reverse proxy load balancer: Nginx

Front-end main technology stack

  • Front-end framework: Bootstrap + jQuery
  • Front-end template: AdminLTE

Automated operation and maintenance

  • Continuous integration: GitLab
  • Continuous delivery: Jenkins
  • Container orchestration: Kubernetes

The project architecture

Create teams and projects

Spring Cloud base service deployment

  • A cluster must consist of at least three computers
  1. Pull the project into the Linux system

  2. Package the project

mvn clean package
  3. Edit the Dockerfile of the corresponding image

FROM openjdk:8-jre
RUN mkdir /app
COPY itoken-config-1.0.0-SNAPSHOT.jar /app/
CMD java -jar /app/itoken-config-1.0.0-SNAPSHOT.jar --spring.profiles.active=prod
EXPOSE 8888
  4. Build the project image and upload it to the Registry

docker build -t 47.112.214.6:5000/itoken-config .
docker push 47.112.214.6:5000/itoken-config
  5. Test whether the image is correct

docker run -it 47.112.214.6:5000/itoken-config /bin/bash
java -jar itoken-config-1.0.0-SNAPSHOT.jar --spring.profiles.active=prod

or

docker run -p 8888:8888 47.112.214.6:5000/itoken-config
  6. Create the docker-compose.yml file
  • vim docker-compose.yml
  • Edit the docker-compose.yml file content
Configure a single application:

version: '3.1'
services:
  itoken-config:
    restart: always
    image: 47.112.215.6:5000/itoken-config
    container_name: itoken-config
    ports:
      - 8888:8888

Configure the cluster:

version: '3.1'
services:
  itoken-eureka-1:
    restart: always
    image: 47.112.215.6:5000/itoken-eureka
    container_name: itoken-eureka-1
    ports:
      - 8761:8761

  itoken-eureka-2:
    restart: always
    image: 47.112.215.6:5000/itoken-eureka
    container_name: itoken-eureka-2
    ports:
      - 8861:8761

  itoken-eureka-3:
    restart: always
    image: 47.112.215.6:5000/itoken-eureka
    container_name: itoken-eureka-3
    ports:
      - 8961:8761
  • Track project container startup
docker logs -f ac70f745fbbc

Deploying continuous integration

  • Continuous integration refers to frequently (multiple times a day) integrating code into the trunk. Its benefits:
  1. Find errors quickly. Every update is integrated into the trunk, making it easy to find and locate errors.
  2. Prevent branches from deviating too far from the trunk. If integration is not frequent while the trunk is constantly updated, later integration becomes difficult or even impossible.
  • Pipeline (pipe)
A Pipeline is a build task, which can contain multiple processes, such as installing dependencies, running tests, compiling, deploying to the test server, deploying to the production server, and so on. Any commit or Merge Request can trigger a Pipeline.
  • Stages (stage)
Stages means build stages, i.e. the processes mentioned above. We can define multiple Stages in a Pipeline. These Stages have the following characteristics: 1. All Stages run in sequence; a Stage starts only after the previous one completes. 2. The build task succeeds only when all Stages complete. 3. If a Stage fails, subsequent Stages do not run and the build task fails.
  • The Jobs (task)
Jobs means build jobs, the work performed within a Stage. We can define multiple Jobs in a Stage. These Jobs have the following characteristics: 1. Jobs in the same Stage run in parallel. 2. A Stage succeeds only when all Jobs in it succeed. 3. If any Job fails, the Stage fails, i.e. the build task fails.
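For illustration, a minimal .gitlab-ci.yml with two Stages, where the two Jobs in the test Stage run in parallel (the job names and commands are illustrative):

```yaml
stages:            # Stages run one after another
  - build
  - test

build-jar:
  stage: build
  script:
    - mvn clean package

unit-test:         # unit-test and code-style share the test Stage,
  stage: test      # so they run in parallel
  script:
    - mvn test

code-style:
  stage: test
  script:
    - mvn checkstyle:check
```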

Continuous integration with GitLab Runner

  1. Create docker-compose.yml for the GitLab Runner continuous integration container
version: '3.1'
services:
  gitlab-runner:
    # build (build: + directory name)
    build: environment
    restart: always
    container_name: gitlab-runner
    # operate the container as the real root user (privileged mode)
    privileged: true
    volumes:
      - /usr/local/docker/runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
  2. Create a Dockerfile for the GitLab Runner continuous integration image
FROM gitlab/gitlab-runner:v11.0.2
MAINTAINER mrchen <[email protected]>

# Modify the software source
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse' > /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse' >> /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse' >> /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse' >> /etc/apt/sources.list && \
    apt-get update -y && \
    apt-get clean

# Install Docker
RUN apt-get -y install apt-transport-https ca-certificates curl software-properties-common && \
    apt-get update && apt-get install -y gnupg2 && \
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add - && \
    curl -fsSL get.docker.com -o get-docker.sh && \
    sh get-docker.sh --mirror Aliyun
COPY daemon.json /etc/docker/daemon.json

# Install Docker Compose
WORKDIR /usr/local/bin
RUN curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x docker-compose

# Install Java
RUN mkdir -p /usr/local/java
WORKDIR /usr/local/java
COPY jdk-8u251-linux-x64.tar.gz /usr/local/java
RUN tar -zxvf jdk-8u251-linux-x64.tar.gz && \
    rm -fr jdk-8u251-linux-x64.tar.gz

# install Maven
RUN mkdir -p /usr/local/maven
WORKDIR /usr/local/maven
# RUN wget https://raw.githubusercontent.com/topsale/resources/master/maven/apache-maven-3.5.3-bin.tar.gz
COPY apache-maven-3.6.3-bin.zip /usr/local/maven
RUN apt-get install -y unzip
RUN unzip apache-maven-3.6.3-bin.zip && \
    rm -fr apache-maven-3.6.3-bin.zip
# COPY settings.xml /usr/local/maven/apache-maven-3.6.3/conf/settings.xml

# Configure environment variables
ENV JAVA_HOME /usr/local/java/jdk1.8.0_251
ENV MAVEN_HOME /usr/local/maven/apache-maven-3.6.3
ENV PATH $PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin

WORKDIR /
  3. Register the Runner

docker exec -it gitlab-runner gitlab-runner register

# Enter the GitLab address
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://47.112.215.6/

# Enter the GitLab Token
Please enter the gitlab-ci token for this runner:
1Lxq_f1NRfCfeNbE5WRh

# Enter the description of the Runner
Please enter the gitlab-ci description for this runner:
(can be empty)

# Set tags, which can be used to trigger CI only for builds with the specified tags
Please enter the gitlab-ci tags for this runner (comma separated):

# Select the runner executor; here we choose shell
Please enter the executor: virtualbox, docker+machine, parallels, shell, ssh, docker-ssh+machine, kubernetes, docker, docker-ssh:
shell
  4. Create the .gitlab-ci.yml file in the project root
  • Add a Dockerfile file

FROM openjdk:8-jre
MAINTAINER mrchen <[email protected]>

ENV APP_VERSION 1.0.0-SNAPSHOT

# Wait for other apps to come online
# ENV DOCKERIZE_VERSION v0.6.1
# RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
# && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
# && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz

RUN mkdir /app

COPY itoken-config-$APP_VERSION.jar /app/app.jar
# Execute multiple initialization commands
# ENTRYPOINT ["dockerize", "-timeout", "5m", "-wait", "tcp://192.168.75.128:8888", "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/app.jar"]
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/app.jar", "--spring.profiles.active=prod"]

EXPOSE 8888
  • Add the docker-compose.yml file

version: '3.1'
services:
  itoken-config:
    restart: always
    image: 47.112.215.6:5000/itoken-config
    container_name: itoken-config
    ports:
      - 8888:8888
  • Add the .gitlab-ci.yml file

stages:
  - build
  - push
  - run
  - clean

build:
  stage: build
  script:
    - /usr/local/maven/apache-maven-3.6.3/bin/mvn clean package
    - cp target/itoken-config-1.0.0-SNAPSHOT.jar docker/
    - cd docker
    - docker build -t 47.112.215.6:5000/itoken-config .

push:
  stage: push
  script:
    - docker push 47.112.215.6:5000/itoken-config

run:
  stage: run
  script:
    - cd docker
    - docker-compose down
    - docker-compose up -d

clean:
  stage: clean
  script:
    - docker rmi $(docker images -q -f dangling=true)
  • Committing the above files to GitLab triggers the pipeline

  • When running multiple projects on the same Runner, the docker-compose files need a networks configuration

version: '3.1'
services:
  itoken-eureka:
    restart: always
    image: 47.112.215.6:5000/itoken-eureka
    container_name: itoken-eureka
    ports:
      - 8761:8761
    networks:
      - eureka-network

networks:
  eureka-network:

Rolling back production deployments in seconds

  • Use the Linux ln soft link command
  • Use the docker image to restart and roll back to the previous version
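Both rollback styles repoint what "current" means instead of redeploying. The ln approach can be sketched like this (the /tmp paths and version numbers are illustrative):

```shell
# Keep every release on disk; serve the one "current" points at.
mkdir -p /tmp/itoken-demo/releases/1.0.0 /tmp/itoken-demo/releases/1.0.1

# Deploy 1.0.1 by (re)pointing the symlink at it:
ln -sfn /tmp/itoken-demo/releases/1.0.1 /tmp/itoken-demo/current
readlink /tmp/itoken-demo/current    # -> /tmp/itoken-demo/releases/1.0.1

# Roll back in one atomic step by repointing the symlink at the old release:
ln -sfn /tmp/itoken-demo/releases/1.0.0 /tmp/itoken-demo/current
readlink /tmp/itoken-demo/current    # -> /tmp/itoken-demo/releases/1.0.0
```

The Docker variant is analogous: keep the previous image tag in the Registry and re-run docker-compose up -d against it.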

Develop administrator services using microservices

  1. Add druid connection pool and mysql database configuration
  • Add the POM.xml dependency
<spring-boot-alibaba-druid.version>1.1.10</spring-boot-alibaba-druid.version>
<mysql.version>5.1.46</mysql.version>

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>${spring-boot-alibaba-druid.version}</version>
</dependency>

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>${mysql.version}</version>
    <scope>runtime</scope>
</dependency>
  • Add datasource configuration for application.yml
datasource:
  druid:
    url: jdbc:mysql://ip:port/dbname?useUnicode=true&characterEncoding=utf-8&useSSL=false
    username: root
    password: 123456
    initial-size: 1
    min-idle: 1
    max-active: 20
    test-on-borrow: true
    driver-class-name: com.mysql.jdbc.Driver

  2. Configure tk.mybatis to simplify MyBatis development
  • Adding a dependency package
<dependency>
    <groupId>tk.mybatis</groupId>
    <artifactId>mapper-spring-boot-starter</artifactId>
    <version>2.0.2</version>
</dependency>
  3. Add the MyBatis configuration

  1. Add the MyMapper interface
  2. Add a PageHelper class method
  3. Configure the automatic code generation plugin
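The MyMapper interface mentioned above is conventionally a marker interface that combines tk.mybatis's generic mappers; a sketch under that assumption (shown as a fragment, since it depends on mapper-spring-boot-starter):

```java
import tk.mybatis.mapper.common.Mapper;
import tk.mybatis.mapper.common.MySqlMapper;

// Business mappers extend this interface to inherit generic CRUD methods.
// Note: this interface itself must NOT be placed in the scanned mapper package.
public interface MyMapper<T> extends Mapper<T>, MySqlMapper<T> {
}
```

A concrete mapper then only declares `public interface AdminMapper extends MyMapper<Admin> {}` (the entity name is illustrative).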

Agile development with Test-Driven Development (TDD)

  1. Writing test cases (login and registration test cases)
  2. Write the implementation according to the use case method

Static File Deployment (Nginx-CDN Content Distribution Network)

  • Port-based virtual host configuration
  1. Add the docker-compose.yml file
version: '3.1'
services:
  nginx:
    restart: always
    image: nginx
    container_name: nginx
    ports:
      - 81:80
      - 9000:9000
    volumes:
      - ./conf/nginx.conf:/etc/nginx/nginx.conf
      - ./wwwroot:/usr/share/nginx/wwwroot

  2. Add the nginx.conf configuration file
worker_processes 1;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;
    # configure virtual host 192.168.75.145
    # server {
    #     # listen on port 80
    #     listen 80;
    #     # set the IP address here
    #     server_name 192.168.75.145;
    #     # all requests start with /, so every request matches this location
    #     location / {
    #         # use the root directive to specify the virtual host directory
    #         # e.g. http://ip/index.html serves /usr/local/docker/nginx/html80/index.html
    #         # e.g. http://ip/item/index.html serves /usr/local/docker/nginx/html80/item/index.html
    #         root /usr/share/nginx/html80;
    #         # specify the welcome page, searched from left to right
    #         index index.html index.htm;
    #     }
    # }

    # configure virtual host 192.168.75.245
    server {
        listen 8080;
        server_name 192.168.75.145;

        location / {
            root /usr/share/nginx/wwwroot/html8080;
            index index.html index.htm;
        }
    }
}
  • Reverse proxy configuration for Nginx
server {
    listen 80;
    server_name itoken.mrchen.com;

    location / {
        proxy_pass http://192.168.75.128:9090;
        index index.html index.htm;
    }
}
  • Load balancing for Nginx
upstream myapp1 {
    server 192.168.94.132:9090 weight=10;
    server 192.168.94.132:9091 weight=10;
}

server {
    listen 80;
    server_name 192.168.94.132;

    location / {
        proxy_pass http://myapp1;
        index index.jsp index.html index.htm;
    }
}
  • Nginx reverse proxy load balancer implements pseudo CDN server

Use Redis for data caching

  • Caching (data queries, short links, news content, product content, and so on)
  • Deploy the Redis service in Redis HA (high availability) mode

Redis HA implementation technologies:
  1. Keepalived
  2. Zookeeper
  3. Sentinel (officially recommended)
  1. Set up the Redis cluster
    • Edit the docker-compose.yml file
version: '3.1'
services:
  master:
    image: redis
    container_name: redis-master
    ports:
      - 6379:6379

  slave1:
    image: redis
    container_name: redis-slave-1
    ports:
      - 6380:6379
    command: redis-server --slaveof redis-master 6379

  slave2:
    image: redis
    container_name: redis-slave-2
    ports:
      - 6381:6379
    command: redis-server --slaveof redis-master 6379
  2. Set up the Sentinel high-availability monitoring service
    • Edit the docker-compose.yml configuration
version: '3.1'
services:
  sentinel1:
    image: redis
    container_name: redis-sentinel-1
    ports:
      - 26379:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel1.conf:/usr/local/etc/redis/sentinel.conf

  sentinel2:
    image: redis
    container_name: redis-sentinel-2
    ports:
      - 26380:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel2.conf:/usr/local/etc/redis/sentinel.conf

  sentinel3:
    image: redis
    container_name: redis-sentinel-3
    ports:
      - 26381:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel3.conf:/usr/local/etc/redis/sentinel.conf
    • Add the sentinel.conf configuration
port 26379
dir /tmp
# 127.0.0.1 is the IP address of redis-master, 6379 is its port, and 2 is the minimum number of votes
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
  • Connect to the Redis service from the command line

Set up the Redis service provider (provides the cache service)

  1. Add the POM configuration for Redis
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
  2. Add the application.yml configuration for Redis
spring:
  redis:
    lettuce:
      pool:
        max-active: 8
        max-idle: 8
        max-wait: -1ms
        min-idle: 0
    sentinel:
      master: mymaster
      nodes: 47.112.215.6:26379,47.112.215.6:26380,47.112.215.6:26381

Single sign-on service with Redis (the same-origin policy needs to be addressed)

Sharing login state via Cookie:
  • First, the domain names of the application group must be unified
  • Second, all systems in the application group must use the same technology
  • Third, cookies themselves are not secure

Recording login information via Redis
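The Redis-based flow can be sketched with the JDK alone (a ConcurrentHashMap stands in for Redis, TTL handling is omitted, and the class and method names are illustrative):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the Redis token store: token -> user id.
// In production the put would be a SETEX with a TTL, and the map would be a
// Redis instance shared by every system under the unified domain.
public class TokenStore {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // On login: generate a token, record it, and hand it back to be set as a cookie.
    public String login(String userId) {
        String token = UUID.randomUUID().toString().replace("-", "");
        store.put(token, userId);
        return token;
    }

    // Every system validates the cookie token against the shared store.
    public String validate(String token) {
        return store.get(token);
    }

    // Logout anywhere invalidates the session everywhere.
    public void logout(String token) {
        store.remove(token);
    }
}
```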

Resolving cross-domain issues (cross-origin requests)

  1. Use CORS (Cross-Origin Resource Sharing) to solve cross-domain problems
  • The browser must support CORS (Internet Explorer 10 or later)
  • The server must implement the CORS interface: set Access-Control-Allow-Origin in the response header
  2. Use JSONP to solve cross-domain problems
  3. Use an Nginx reverse proxy to solve cross-domain problems
worker_processes 1;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;
    # configure virtual host 192.168.75.245
    server {
        listen 80;
        server_name 47.112.215.6;

        location / {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Headers X-Requested-With;
            add_header Access-Control-Allow-Methods GET,POST,OPTIONS;
            root /usr/share/nginx;
            index index.html index.htm;
        }
    }
}
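The same three headers can also be set in application code instead of Nginx; a minimal sketch using the JDK's built-in HTTP server (in the real project this would live in a Spring filter or in the Nginx config above; the port is arbitrary):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Serves every request with the CORS response headers described above.
public class CorsDemo {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            exchange.getResponseHeaders().add("Access-Control-Allow-Origin", "*");
            exchange.getResponseHeaders().add("Access-Control-Allow-Headers", "X-Requested-With");
            exchange.getResponseHeaders().add("Access-Control-Allow-Methods", "GET,POST,OPTIONS");
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```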

Spring Boot + MyBatis + Redis level 2 cache (for data that rarely changes, reducing database pressure)

  1. Introduction to the MyBatis cache

Level 1 cache: MyBatis maintains a simple cache in the SqlSession object; the result of each query is cached, and if an identical query occurs again, data is returned directly from the (memory-level) cache.

Level 2 cache (third-party cache):
  • Enables cache sharing
  2. Configure the MyBatis level 2 cache
  • Enable the MyBatis level 2 cache
mybatis:
  configuration:
    cache-enabled: true
  • Enable IDEA's serialVersionUID generation prompt

  • The entity class implements the Serializable interface and declares a serial version UID
private static final long serialVersionUID = 8461546412131L;
  • Create the relevant utility class: ApplicationContextHolder, which implements Spring's ApplicationContextAware interface so Beans can be obtained manually
  • MyBatis Cache interface, used to customize the Cache for Redis
  • Add annotations to the Mapper interface
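Customizing the cache usually means implementing org.apache.ibatis.cache.Cache and delegating to Redis. A sketch under that assumption (shown as a fragment; it relies on the ApplicationContextHolder utility above to fetch a RedisTemplate bean, and the bean name is an assumption):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.ibatis.cache.Cache;
import org.springframework.data.redis.core.RedisTemplate;

public class RedisCache implements Cache {
    private final String id;  // mapper namespace, used as the Redis hash key
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public RedisCache(String id) { this.id = id; }

    @SuppressWarnings("unchecked")
    private RedisTemplate<Object, Object> template() {
        // fetched lazily because MyBatis instantiates this class itself
        return ApplicationContextHolder.getBean("redisTemplate");
    }

    @Override public String getId() { return id; }
    @Override public void putObject(Object key, Object value) {
        template().opsForHash().put(id, key.toString(), value);
    }
    @Override public Object getObject(Object key) {
        return template().opsForHash().get(id, key.toString());
    }
    @Override public Object removeObject(Object key) {
        return template().opsForHash().delete(id, key.toString());
    }
    @Override public void clear() { template().delete(id); }
    @Override public int getSize() {
        return template().opsForHash().size(id).intValue();
    }
    @Override public ReadWriteLock getReadWriteLock() { return lock; }
}
```

The Mapper interface is then annotated with @CacheNamespace(implementation = RedisCache.class), which is what the cached entities must be Serializable for.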

Spring Boot configures the Swagger2 interface document engine

  1. Add the Maven dependency configuration

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.8.0</version>
</dependency>

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.8.0</version>
</dependency>
  2. Configure Swagger2 (Java configuration)
  3. Add the annotation to the startup class
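A typical Swagger2 Java configuration might look like this (springfox 2.8 API; the title and base package are assumptions for illustration):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class Swagger2Config {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(new ApiInfoBuilder()
                        .title("itoken API")   // title is illustrative
                        .version("1.0.0")
                        .build())
                .select()
                // scan only the controller package (package name is an assumption)
                .apis(RequestHandlerSelectors.basePackage("com.itoken.admin.controller"))
                .paths(PathSelectors.any())
                .build();
    }
}
```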

http://ip

Using FastDFS

  1. Create the docker-compose.yml file
version: '3.1'
services:
  fastdfs:
    build: environment
    restart: always
    container_name: fastdfs
    volumes:
      - ./storage:/fastdfs/storage
    network_mode: host
  2. Create the Dockerfile
FROM ubuntu:xenial
MAINTAINER [email protected]

# update data source
WORKDIR /etc/apt
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse' > /etc/apt/sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse' >> /etc/apt/sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse' >> /etc/apt/sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse' >> /etc/apt/sources.list
RUN apt-get update

# install dependencies
RUN apt-get install make gcc libpcre3-dev zlib1g-dev --assume-yes

# Copy toolkit
ADD fastdfs.tar.gz /usr/local/src
ADD fastdfs-nginx-module.tar.gz /usr/local/src
ADD libfastcommon.tar.gz /usr/local/src
ADD nginx-1.15.4.tar.gz /usr/local/src

# Install libfastcommon
WORKDIR /usr/local/src/libfastcommon
RUN ./make.sh && ./make.sh install

# Install FastDFS
WORKDIR /usr/local/src/fastdfs
RUN ./make.sh && ./make.sh install

# Install the FastDFS tracker
ADD tracker.conf /etc/fdfs
RUN mkdir -p /fastdfs/tracker

# Configure FastDFS storage
ADD storage.conf /etc/fdfs
RUN mkdir -p /fastdfs/storage

# Configure the FastDFS client
ADD client.conf /etc/fdfs

# Configure fastdfs-nginx-module
ADD config /usr/local/src/fastdfs-nginx-module/src

# FastDFS integrates with Nginx
WORKDIR /usr/local/src/nginx-1.15.4
RUN ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
RUN make && make install
ADD mod_fastdfs.conf /etc/fdfs

WORKDIR /usr/local/src/fastdfs/conf
RUN cp http.conf mime.types /etc/fdfs/

# configure Nginx
ADD nginx.conf /usr/local/nginx/conf

COPY entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

WORKDIR /
EXPOSE 8888
CMD ["/bin/bash"]


Build the fastDFS client

  1. Install the FastDFS client (add the fastdfs-client-java dependency)
<dependency>
    <groupId>org.csource</groupId>
    <artifactId>fastdfs-client-java</artifactId>
    <version>1.29-SNAPSHOT</version>
</dependency>
  2. Add the application.yml configuration

storage:
  type: fastdfs
  fastdfs:
    base.url: http://192.168.94.132:8887/
    tracker_server: 192.168.94.132:22122

Using RabbitMQ

  1. Install RabbitMQ
  • Write the docker-compose.yml file
version: '3.1'
services:
  rabbitmq:
    restart: always
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      TZ: Asia/Shanghai
      RABBITMQ_DEFAULT_USER: rabbit
      RABBITMQ_DEFAULT_PASS: 123456
    volumes:
      - ./data:/var/lib/rabbitmq
  2. Using RabbitMQ