Docker is becoming more and more popular: it isolates environments, scales easily, and simplifies operations and maintenance, while also making it easier for developers to develop, test, and deploy.

Best of all, when you are faced with an unfamiliar project, you can quickly get it running locally just by following the Dockerfile, without even reading the documentation (which may not be reliable anyway).

The concept of DevOps gets a lot of emphasis these days; I have had the word "DevOps" sitting on my desktop for a whole day, and it suddenly struck me that, at its simplest, DevOps means writing a Dockerfile to get an app running.

Here is how to deploy a front-end application using Docker. A journey of a thousand miles begins with a single step: first, just get it running.

  • How to deploy the front end efficiently with Docker
  • Series: What did I do when I had a cloud server

If this article helps you, please give shfshanyue/op-note a star.

If you are a new Alibaba Cloud user, you can buy a server at a discount through the link below, then follow my server operation guide notes to get started with maintaining a server and setting up applications.

  • Join me in buying a cloud server for 86 yuan/year

Get it running first

First, a brief introduction to a typical front-end application deployment process

  1. npm install, install dependencies
  2. npm run build, compile, package, generate static resources
  3. Serve the static resources
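
Run by hand, those three steps come down to a few shell commands. A minimal sketch, assuming the build script emits its output into ./public as the Dockerfile below expects:

npm install                      # install dependencies
npm run build                    # compile and bundle, emitting static assets into ./public
npx http-server ./public -p 80   # serve the static assets on port 80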

With the deployment process covered, here is a simple Dockerfile:

FROM node:alpine

# marks the production environment
ENV PROJECT_ENV production
# Many packages behave differently depending on this environment variable.
# Webpack also optimizes its output for it, although create-react-app hard-codes
# this variable at build time.
ENV NODE_ENV production
WORKDIR /code
ADD . /code
RUN npm install && npm run build && npm install -g http-server
EXPOSE 80

CMD http-server ./public -p 80
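
With the Dockerfile in place, building and running the image locally should look roughly like this (the image name fe-app is just an example):

docker build -t fe-app .        # build the image from the Dockerfile above
docker run -d -p 80:80 fe-app   # run it in the background and publish port 80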

The front-end service is now up and running, and you can move on to the remaining phases of deployment. These are generally the job of operations, but it is always good to push your knowledge boundaries.

  • Use nginx or Traefik as a reverse proxy
  • Orchestrate with Kubernetes, docker-compose, and the like
  • Set up CI/CD with GitLab CI or Drone CI

As it stands, though, this image has two problems that make deployment slow and get in the way of fast delivery:

  • The image takes too long to build
  • The built image is too large, at 1 GB+

Start with dependencies and devDependencies

Lu Xiaofeng once said that if a front-end programmer works eight hours a day, at least two of them are wasted: one hour on npm install and another on npm run build.

Cutting down the number of unnecessary packages downloaded on each deployment saves a lot of image build time. Linting and testing modules such as ESLint, Mocha, and Chai can go into devDependencies, and in production the install can be done with npm install --production.

For the difference between the two, refer to the documentation: docs.npmjs.com/files/packa…
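
As a rough sketch of the idea (the package names here are only examples): anything needed only for linting or testing is recorded under devDependencies, and the production install simply skips it.

npm install --save-dev eslint mocha chai   # tooling recorded under devDependencies
npm install --production                   # installs dependencies only; devDependencies are skipped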

FROM node:alpine

ENV PROJECT_ENV production
ENV NODE_ENV production
WORKDIR /code
ADD . /code
RUN npm install --production && npm run build && npm install -g http-server
EXPOSE 80

CMD http-server ./public -p 80

Hmm, that feels only a little bit faster.

Notice that package.json changes far less often than the project's source files. If no new packages need to be installed, there is no need to re-run npm install when building the image, which can save roughly half of the time spent on npm install.

Leveraging image caching

With ADD, the cache can be reused as long as the added content has not changed, so it is a good idea to copy package.json into the image separately from the source files. As long as no new packages are added, the npm install layer comes straight from the cache, roughly halving the build time.

FROM node:alpine

ENV PROJECT_ENV production
ENV NODE_ENV production

# http-server rarely changes, so this layer can also come from the cache
RUN npm install -g http-server

WORKDIR /code

ADD package.json /code
RUN npm install --production

ADD . /code
RUN npm run build
EXPOSE 80

CMD http-server ./public -p 80
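
A quick way to confirm the layer split pays off is to rebuild after touching only a source file; the npm install layer should then be reused from the cache (fe-app and the file path are just examples):

docker build -t fe-app .   # first build: every step runs
touch src/index.js         # change a source file only; package.json stays untouched
docker build -t fe-app .   # rebuild: the ADD package.json and npm install layers come from the cache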

For more details on how the cache works, and caveats such as RUN git clone being cached, refer to the official documentation: docs.docker.com/develop/dev…

Multi-stage builds

Thanks to caching, image builds are now much faster. However, the image itself is still too large, which adds to the time of every deployment.

Consider the flow of each CI deployment

  1. Build the image on the build server
  2. Push the image to the image registry
  3. Pull the image on the production server and start the container

Obviously, a large image transfers inefficiently and adds latency to every deployment.

Even if the build server and the production server sit on the same node and there is no transfer latency, reducing the image size still saves disk space.

A big part of the image-size problem is the infamously huge node_modules.

In the end, all we need is the content of the public folder; the source files and everything under node_modules take up far too much space and are simply wasted in the final image.
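
This is easy to see on most projects with a quick size check:

du -sh node_modules   # often runs to hundreds of MB
du -sh public         # the compiled output we actually need is usually far smaller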

At this point, Docker’s multi-stage build can be used to extract only compiled files

Refer to the official documentation docs.docker.com/develop/dev…

FROM node:alpine as builder

ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code

ADD package.json /code
RUN npm install --production

ADD . /code
RUN npm run build

# Choose a smaller base image
FROM nginx:alpine
COPY --from=builder /code/public /usr/share/nginx/html

At this point, the image shrinks from 1 GB+ to around 50 MB.
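
The size difference is easy to check locally (again, fe-app is just an example image name):

docker build -t fe-app .   # only the nginx stage ends up in the final image
docker image ls fe-app     # the SIZE column should now read tens of MB rather than 1 GB+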

Use a CDN

Looking at the 50 MB+ image: the nginx:alpine base accounts for about 16 MB, and the remaining 40 MB or so is static resources.

If the static resources are uploaded to a CDN, they no longer need to go into the image, and the image size can be kept under 20 MB.

Static resources can be divided into two parts

  • /static: these files are referenced directly by root path in the project and are copied to /public as-is during the build
  • /build: files referenced via require/import are bundled and content-hashed by webpack, and their URLs can be pointed at the CDN via publicPath. These files can be uploaded to the CDN with permanent caching and do not need to go into the image

FROM node:alpine as builder

ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code

ADD package.json /code
RUN npm install --production

ADD . /code

# npm run uploadCdn is the script that uploads static resources to the CDN
RUN npm run build && npm run uploadCdn

# Choose a smaller base image
FROM nginx:alpine
COPY --from=builder /code/public/index.html /code/public/favicon.ico /usr/share/nginx/html/
COPY --from=builder /code/public/static /usr/share/nginx/html/static
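
As a final smoke test, the nginx-based image can be built and run locally; index.html is served from the image while the hashed bundles are expected to come from the CDN (names and ports are examples):

docker build -t fe-app .          # build the nginx-based image
docker run -d -p 8080:80 fe-app   # publish container port 80 on host port 8080
curl -I http://localhost:8080     # index.html should come back with HTTP 200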

Follow my WeChat official account, shanyuexixing, where I record my technical growth. You are welcome to get in touch and exchange ideas.