1. Introduction

Some time ago I set up an Alibaba Cloud server. I wanted to tinker on it, but I didn't want my experiments to pollute the existing environment. After all, Alibaba Cloud no longer offers a free snapshot service, so the easiest way to restore the machine is to reinstall the system, and once you reinstall, everything you set up before is lost.

I had also been meaning to learn Docker, so I decided to deploy the front-end application in a Docker container.

2. Build front-end applications

Before packaging, you first need a working front-end application, which can be scaffolded with Umi or create-react-app.

3. Default configuration file for nginx

You then need to add a default Nginx configuration file, default.conf, to your project.

server {
    listen 80;
    server_name localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}

4. Write the local build script

4.1. Remove the previous build output and Dockerfile

#!/bin/bash
if [ -d "./dist" ]; then
    rm -rf ./dist
fi

if [ -f "./Dockerfile" ]; then
    rm -f ./Dockerfile
fi

Since the contents of dist change with every build, this step is not strictly necessary; running npm's build command also clears the directory automatically.

Removing the Dockerfile guards against the case where the Dockerfile generation is updated but a stale copy prevents this build from picking up the latest configuration.

4.2. Package front-end applications

Run the front-end build command to generate the static file directory.

yarn build

4.3. Generate the Dockerfile

echo "FROM nginx:latest" >> ./Dockerfile
echo "COPY ./dist /usr/share/nginx/html/" >> ./Dockerfile
echo "COPY ./default.conf /etc/nginx/conf.d/" >> ./Dockerfile
echo "EXPOSE 80" >> ./Dockerfile

FROM sets the base image of the custom image to nginx:latest; the two COPY lines copy the static file directory to /usr/share/nginx/html/ inside the image and put the Nginx configuration in the corresponding location; EXPOSE declares that the container listens on port 80.
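For reference, the file these four lines produce looks like this:

```dockerfile
FROM nginx:latest
COPY ./dist /usr/share/nginx/html/
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
```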

4.4. Build and push the custom image

docker build -t detectivehlh/mine .
docker login -u detectivehlh -p ********
docker push detectivehlh/mine

This part runs on the local development machine and packages everything with the docker CLI, so the script depends heavily on Docker. The build command packages the application into an image; the -t option specifies the image name and tag, with the tag defaulting to latest.

Then log in to Docker Hub and push the custom image to it. Here detectivehlh is the Docker Hub username and mine is the image name.

4.5. Delete useless images whose tag is &lt;none&gt;

The first build does not produce an image with tag &lt;none&gt;, but every subsequent build does, because the old image loses its tag to the new one. So each time a new image is built, the images no longer needed should be cleared out.

docker images | grep none | awk '{print $3}' | xargs docker rmi

awk is a powerful text-processing tool. '{print $3}' prints the third field of each matching line, which here is the Docker image ID; $0 refers to the whole line.
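As a quick illustration (using a made-up image ID), here is how awk splits a line shaped like `docker images` output into fields:

```shell
# A line in the shape of `docker images` output (hypothetical values)
line='<none>  <none>  1f2d3c4b5a69  2 days ago  23.5MB'

# $3 is the third whitespace-separated field, i.e. the image ID
echo "$line" | awk '{print $3}'
# prints: 1f2d3c4b5a69

# $0 is the entire line
echo "$line" | awk '{print $0}'
```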

xargs is a filter that converts standard input into command-line arguments for another command (here, docker rmi).

In summary, the command above finds the IDs of images whose tag is &lt;none&gt; and removes them with docker rmi.
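To see the whole pipeline at work without touching a real Docker daemon, here is a sketch with faked `docker images` output (hypothetical IDs), substituting `echo` for `docker rmi`:

```shell
# Fake `docker images` output: one tagged image and one dangling <none> image
fake_images() {
    printf 'myapp     latest    aaa111    2 days ago    50MB\n'
    printf '<none>    <none>    bbb222    3 days ago    48MB\n'
}

# grep keeps the <none> line, awk extracts its ID, xargs hands it to a command
fake_images | grep none | awk '{print $3}' | xargs echo would-remove
# prints: would-remove bbb222
```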

4.6. Perform the deployment

cmd="cd ~ && sh deploy.sh mine"
ssh -t USER_NAME@IP_ADDRESS "bash -c \"${cmd}\""

Log in to the remote server over SSH and run the command held in the cmd variable.

deploy.sh is the deployment script placed on the server, under the home directory of the login user. Note that it is followed by mine, which is the name of the Docker container to run on the server. Since this is a small project of my own, there is no need to add a hash to the container name.

The complete build script in the project is as follows.

#!/bin/bash
if [ -d "./dist" ]; then
    rm -rf ./dist
fi
if [ -f "./Dockerfile" ]; then
    rm -f ./Dockerfile
fi

yarn build

echo "FROM nginx:latest" >> ./Dockerfile
echo "COPY ./dist /usr/share/nginx/html/" >> ./Dockerfile
echo "COPY ./default.conf /etc/nginx/conf.d/" >> ./Dockerfile
echo "EXPOSE 80" >> ./Dockerfile

docker build -t detectivehlh/mine .
docker login -u detectivehlh -p ********
docker push detectivehlh/mine

docker images | grep none | awk '{print $3}' | xargs docker rmi

cmd="cd ~ && sh deploy.sh mine"
ssh -t USER_NAME@IP_ADDRESS "bash -c \"${cmd}\""

5. Write a server deployment script

From the steps above, we still need a server-side deployment script. You might say: doesn't the title claim a single script does it all? Well... one on the server, one locally... let's call that one script per side.

5.1. Receiving Parameters

The local build script passes in the name of the container to run, and the server-side script needs to receive it, then pull the image that was just pushed.

#!/bin/bash
name=$1
docker pull detectivehlh/$name
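As a minimal sketch of how the positional parameter works (simulating the call `sh deploy.sh mine` with `set --`):

```shell
set -- mine            # simulate the script being invoked with the argument "mine"
name=$1                # $1 is the first positional parameter
echo "detectivehlh/$name"
# prints: detectivehlh/mine
```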

5.2. Start the container

When starting the container, there are two situations to handle. If a container with the given name is already running, running docker run again will fail with a name conflict, so the latest image would not be used and the application would not actually be updated.

if docker ps | grep $name | awk '{print $NF}' | grep -Fx $name; then
	echo "Container $name is already running, restarting"
	docker stop $name
	docker rm $name
	docker run -d --name $name -p 3000:80 detectivehlh/$name
else
	echo "Container $name is not running, starting"
	docker run -d --name $name -p 3000:80 detectivehlh/$name
	echo "Finished starting"
fi
docker images | grep none | awk '{print $3}' | xargs docker rmi

The if checks whether a container with the passed-in name is already running: if so, stop and remove it, then start it again from the latest image; if not, just start the container directly.
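The name check deserves a closer look: `docker ps` prints the container name in the last column, so `$NF` (the last field) extracts it, and `grep -Fx` demands a literal, whole-line match. A sketch with mocked `docker ps` output (hypothetical containers) shows why the exact match matters:

```shell
# Mock `docker ps` output: the container name is the last column
mock_ps() {
    printf 'abc123  detectivehlh/mine  Up 2 days  mine\n'
    printf 'def456  detectivehlh/mine  Up 5 days  mine-backup\n'
}

name=mine
# Without -Fx, "mine-backup" would also match; with it, only the exact name does
mock_ps | grep "$name" | awk '{print $NF}' | grep -Fx "$name"
# prints: mine
```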

The run command needs little explanation: -d runs the container in the background and prints the container ID; --name sets the container's name; -p maps port 3000 on the Alibaba Cloud server to port 80 in the container; and the final argument is the image to start from.

The last line removes the useless &lt;none&gt;-tagged images left behind after repeated updates. The complete script is below.

#!/bin/bash
name=$1
docker pull detectivehlh/$name
if docker ps | grep $name | awk '{print $NF}' | grep -Fx $name; then
	echo "Container $name is already running, restarting"
	docker stop $name
	docker rm $name
	docker run -d --name $name -p 3000:80 detectivehlh/$name
else
	echo "Container $name is not running, starting"
	docker run -d --name $name -p 3000:80 detectivehlh/$name
	echo "Finished starting"
fi
docker images | grep none | awk '{print $3}' | xargs docker rmi

6. If you just want to pack

If you just want to build a Docker image, then all you need is the Dockerfile and the docker build command.

7. To summarize

This script was written mainly for convenience, so a few compromises were made in it to achieve exactly that. The result is a deployment script that is convenient and meets my needs.

The convenience is that once I finish updating the project code, I just run the script, wait a moment, and the project is automatically packaged into a Docker image and run as a container on my server.

However, this approach brings risks that would be unacceptable in a real production environment. For one, the script must never be committed, because it contains sensitive server information; if you push it by accident, your server is left wide open. For another, you have to be very confident in your code, since deploying without testing is risky.

If the service is only for your own use, none of this matters much. But if it is open to everyone and has a certain amount of traffic, like a blog, then this workflow is not very friendly to your users.

So my view is: it depends. Right now my project has only a handful of users and is still iterating, and the repository is private, so privacy is not a concern at all, and I am fine with a service going live without a formal test pass. I test locally first and deploy once I am sure everything is in order. At different stages, just pick the scheme that suits you best.