Preface
I believe many front-end developers are familiar with developing in Vue or React and know how to build for production, but some may know less about deploying to production. Small companies may lean on the back-end team to deploy, while large companies have dedicated operations engineers, so many front-end developers rarely touch production deployment themselves. This article shares practical experience with front-end deployment: from static site deployment, to Node project deployment, to load-balancing deployment, along with scripts and methods to improve deployment efficiency.
Preparation
- One or more servers or VMs.
- The build output of a Vue or React project.
- The source code of a Node project.
Deployment of static sites
Static site deployment means deploying the front-end HTML/CSS/JS resources, such as the HTML, CSS and JS files produced by a Vue or React build. We upload these files to the server and expose them to the public network through Nginx.
- Upload files to the server
Manually copying the packaged files to the server is easy, but copying by hand for every release makes deployment slow, so it is better to use scripts or tools. In large companies, server permissions are strictly controlled and may require jump servers or dynamic passwords; such companies usually have dedicated operations staff or CI/CD tools.
Small companies may be free enough to let individuals SSH directly to the server and use rsync or scp (on Linux or Mac) to upload files in one command, which greatly improves deployment efficiency. To share the deployment script we used before, create a file named deploy.sh in the root directory of the front-end project:
```bash
#!/bin/bash
function deploy() {
  # Test server
  test_host="root@test_server_ip"
  # Production server
  prod_host="root@prod_server_ip"
  project_path="/srv/YourProject"

  if [ "$1" == "prod" ]; then
    target="$prod_host:$project_path"
  else
    target="$test_host:$project_path"
  fi

  rsync -azcuP ./dist/ --exclude node_modules --exclude coverage --exclude .env --exclude .nyc_output --exclude .git "$target"
  echo "deploy to $target"
}
deploy "$@"
```
This uploads everything in ./dist to /srv/YourProject on the target server. To deploy the test environment, just run ./deploy.sh, which uploads the ./dist directory directly to the root@test_server_ip server.
For production, append the prod argument: ./deploy.sh prod. This gives us multiple deployment environments. A further step is to write the commands into package.json, for example:
"scripts": {
"build": "vue-cli-service build --mode staging",
"deploy": "npm run build && ./deploy.sh",
"deploy:prod": "npm run build && ./deploy.sh prod"
},
Now npm run deploy builds and deploys to the test environment in one command. If your company still deploys by manual copying or FTP tools, try the scripts above.
PS: rsync ships with Linux and macOS; it is not available on Windows out of the box.
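For reference, the equivalent one-off commands without the script look like this (a sketch using the example host and path from above):

```bash
# rsync: incremental sync of the build output to the server
rsync -azcuP ./dist/ root@test_server_ip:/srv/YourProject

# scp: a simpler full copy, available almost everywhere ssh is
scp -r ./dist/* root@test_server_ip:/srv/YourProject
```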
- Write the site's nginx conf
After uploading the files to the server, you are ready to configure Nginx. Create a new file /etc/nginx/conf.d/test.conf:
```nginx
server {
    listen 80;
    server_name your-domain.com;    # the domain name
    location / {
        root /srv/YourProject;      # site source code root directory
        index index.html;
    }
    location /api {
        proxy_pass http://localhost:8080;   # reverse proxy to the back-end service
    }
}
```
Generally speaking, a static site needs no more than this. `server_name` is the domain name, which must be resolved to the public IP of the server; `root` is the location of the code on the server; `index` makes `index.html` the default file; `location /api` reverse-proxies to the back-end service (assuming it is deployed on local port 8080), so that requests to `your-domain.com/api` are forwarded to `http://localhost:8080`. In general, this solves the cross-domain problem perfectly.
After modifying the nginx conf, reload the nginx service: `nginx -s reload`.
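A habit worth forming is validating the configuration before reloading, so a typo can't take the site down:

```bash
# Validate the config first; reload only if the syntax check passes
sudo nginx -t && sudo nginx -s reload
```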
- Test
If the domain name configured in the previous step already resolves to the server IP, you can access your site through the domain name on the public network. If not, you can add the domain to your hosts file so it resolves on your own machine, or access the site via the server IP.
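If the DNS record isn't set up yet, you can also verify the nginx config from any machine by sending the expected Host header straight to the server's IP (a sketch; substitute your real values):

```bash
# Pretend to be your-domain.com without touching DNS or the hosts file
curl -H "Host: your-domain.com" http://<server_ip>/
```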
Deployment of the Node project
During development, a Node project can be started with a command like node app.js, but on a server a process started this way stops as soon as you exit the SSH session, so you need a tool to keep the Node process alive. These days pm2 is the usual choice.
- Install pm2
```bash
npm install -g pm2
```
Some common pm2 commands:
```bash
pm2 start app.js                  # Launch the app.js app
pm2 start app.js -i 4             # Launch 4 instances of app.js in cluster mode; the 4 instances are load-balanced automatically
pm2 start app.js --name="api"     # Launch the application and name it "api"
pm2 start app.js --watch          # Automatically restart the application when a file changes
pm2 start script.sh               # Start a bash script

pm2 list                          # List all applications started by pm2
pm2 monit                         # Display CPU and memory usage for each application
pm2 show [app-name]               # Display all information about the application
pm2 logs                          # Display logs for all applications
pm2 logs [app-name]               # Display logs for the specified application
pm2 flush                         # Empty all log files

pm2 stop all                      # Stop all apps
pm2 stop 0                        # Stop the application with id 0
pm2 restart all                   # Restart all applications
pm2 reload all                    # Reload all applications in cluster mode
pm2 gracefulReload all            # Gracefully reload all apps in cluster mode
pm2 delete all                    # Stop and delete all apps
pm2 delete 0                      # Delete the app with id 0
pm2 scale api 10                  # Scale the application named "api" to 10 instances
pm2 reset [app-name]              # Reset the restart counter

pm2 startup                       # Create a boot auto-start command
pm2 save                          # Save the current list of apps
pm2 resurrect                     # Reload the saved list of apps
pm2 update                        # Save processes, kill pm2 and restore processes
pm2 generate                      # Generate a sample json configuration file

pm2 deploy app.json prod setup    # Set up the "prod" remote server
pm2 deploy app.json prod          # Update the "prod" remote server
pm2 deploy app.json prod revert 2 # Revert the "prod" remote server by 2 deployments

pm2 module:generate [name]        # Generate a sample module named [name]
pm2 install pm2-logrotate         # Install a module (here a log rotation system)
pm2 uninstall pm2-logrotate       # Uninstall a module
pm2 publish                       # Increment version, git push and npm publish
```
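To tie a few of these together, a typical first-run flow on a fresh server might look like this (a sketch built from the commands above):

```bash
pm2 start app.js --name="api"   # start and name the app
pm2 startup                     # print the command that registers pm2 at boot
pm2 save                        # persist the process list so it survives reboots
pm2 logs api                    # confirm the app came up cleanly
```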
- Start the project with pm2
You could run `pm2 start ./server/index.js --name="my-project"` to start the Node project, but long command-line parameters are hard to manage by hand. It is better to create a pm2.json file in the root directory of the Node project to specify the pm2 startup parameters, for example:
```json
{
  "name": "my-project",
  "script": "./server/index.js",
  "instances": 2,
  "cwd": ".",
  "exec_mode": "cluster"
}
```
name is the pm2 process name, script is the entry file to start, instances is the number of instances to launch (recommended to be no greater than the number of CPU cores on the server), and cwd is the directory the application runs in.
With this in place, we can start the Node project on the server with `pm2 start pm2.json`.
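Once started, a quick sanity check might look like this (a sketch; the name follows the example pm2.json above and assumes the app listens on port 3000):

```bash
pm2 start pm2.json     # launch both cluster instances
pm2 list               # both instances of "my-project" should show "online"
curl localhost:3000    # assuming the app listens on port 3000
```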
- Bind a domain name with an Nginx proxy
A Node project run by pm2 only listens on some port of the server, such as http://server_ip:3000. To make the service reachable directly through a domain name, you also need nginx to proxy it on port 80. Create a new my-project.conf file in /etc/nginx/conf.d:
```nginx
server {
    listen 80;
    server_name your-domain.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}
```
This is the simplest possible configuration; you can add parameters to it as your situation requires. This completes the deployment of a Node project.
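As an example of "adding some parameters", Node apps usually want to know the real client IP and host behind the proxy. A slightly fuller version of the conf might look like this (a sketch, not required for the demo):

```bash
# Write a fuller my-project.conf that forwards client information to Node
sudo tee /etc/nginx/conf.d/my-project.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name your-domain.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;                # original Host header
        proxy_set_header X-Real-IP $remote_addr;    # client IP for the app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo nginx -t && sudo nginx -s reload
```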
Front-end load balancing deployment
I believe few front-end developers get to touch load-balancing deployment. It is needed when a project receives a large volume of traffic, and there are usually dedicated operations engineers to handle it. Load balancing simply means spreading a large number of concurrent requests across multiple servers.
There are many load-balancing architectures, and a project's architecture is something that evolves continuously, so which one to adopt has to be analyzed case by case. This article will not discuss when to use which architecture (I couldn't tell you anyway, ha); instead it shares a practical, classic case: building load balancing from zero with Nginx.
[Architecture diagram: requests arrive at an Nginx load-balancing server, which distributes them to the application servers Centos2, Centos3, and Centos4.]
I don't have that many servers, so for a more complete demonstration I use VirtualBox to create four virtual machines to simulate four servers (of course, if resources are really limited, you can use four ports on one server instead).
The four virtual machines:

1. Centos1 (the Nginx load-balancing server)
2. Centos2 (application server)
3. Centos3 (application server)
4. Centos4 (application server)
1. Build the service site on the application servers
Great oaks from little acorns grow: first we need a service that can face the outside world. It can be a website or an API; for simplicity we'll start with a Koa hello-world. To verify the load balancing later, the code deployed on each machine differs slightly, returning Hello Centos2, Hello Centos3, and Hello Centos4 respectively, so we can tell which server handled a request.
The Koa demo site is ready for you: koa-loadbalance.
Take Centos2 (192.168.0.2; the IPs here are fictitious) as an example of deploying the Koa site with pm2.
- Upload the source code to the Centos2 server using scp or rsync
Remember the deploy.sh script above? Add it to the project and npm run deploy deploys straight to the server. The demo source already contains this script; change the IP inside to the real one, then run the command.
- SSH into the Centos2 server
```bash
ssh root@192.168.0.2
```
- Install the Node environment
```bash
curl -sL https://rpm.nodesource.com/setup_13.x | sudo bash -
sudo yum install nodejs
```
You can check which Node versions are currently available and pick the latest (13 at the time of writing).
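After installation, verify the environment (the version will match the setup script you chose):

```bash
node -v    # e.g. v13.x if you used setup_13.x above
npm -v
```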
- Install pm2
```bash
npm i pm2 -g
```
- Start the site with pm2
Execute in the project root directory:
```bash
pm2 start pm2.json
```
After it starts, run `curl localhost:3000` to verify; you should see Hello Centos2.
Similarly, follow the steps above to deploy the service to the Centos3 and Centos4 servers (remember to change the Hello XXX in index.js for later verification). If all goes well, our site now runs on port 3000 of three different servers:
- 192.168.0.2:3000 ==> Hello Centos2
- 192.168.0.3:3000 ==> Hello Centos3
- 192.168.0.4:3000 ==> Hello Centos4
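Before wiring up the load balancer, it's worth confirming that all three backends answer, for example from Centos1 (a sketch using the fictitious IPs above):

```bash
# Expect Hello Centos2 / Hello Centos3 / Hello Centos4 in turn
for ip in 192.168.0.2 192.168.0.3 192.168.0.4; do
  curl -s "http://$ip:3000"
  echo
done
```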
Note that Centos2, Centos3, and Centos4 do not have Nginx installed, and don't need it. In practice, application servers are not exposed to the public network at all; they talk to the load-balancing server over the intranet, which is more secure. Since our site is a Node project that serves HTTP itself, there is no need for Nginx to proxy or forward on these machines.
2. Build the Nginx server
This step is just installing Nginx, which is well documented elsewhere. Install nginx on Centos1.
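On CentOS the install itself is only a couple of commands (a sketch; on CentOS 7 the nginx package comes from the EPEL repository):

```bash
sudo yum install -y epel-release    # EPEL provides the nginx package on CentOS
sudo yum install -y nginx
sudo systemctl enable --now nginx   # start nginx and enable it at boot
curl -I localhost                   # expect the nginx welcome page headers
```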
3. Load balancing
Load balancing may seem hard to approach at first, but it turns out to be super simple to configure: nginx needs just one directive, upstream.
- Group all nodes into a cluster
Group Centos2, Centos3, and Centos4 into one cluster. The nginx configuration pattern looks like this:
```nginx
upstream APPNAME {
    server host1:port;
    server host2:port;
}
```
APPNAME can be anything, usually the project name. Create a new upstream.conf file in /etc/nginx/conf.d:
```nginx
upstream koa-loadbalance {
    server 192.168.0.2:3000;
    server 192.168.0.3:3000;
    server 192.168.0.4:3000;
}
```
With this, the three servers are combined into one cluster named koa-loadbalance, which we will use next.
- Configure the externally facing site
Next, configure the site that faces the public. Create a new conf file (e.g. www.a.com.conf) in /etc/nginx/conf.d:
```nginx
server {
    listen 80;
    server_name www.a.com;
    charset utf-8;
    location / {
        proxy_pass http://koa-loadbalance;   # the cluster name defined above
    }
}
```
Remember to run `nginx -s reload`. This completes the load-balancing configuration.
4. Test
If your domain name is real and already resolves to the nginx server, you can now access the site through the domain name directly. Since I'm using virtual machines with no public access, I configure the hosts file on my own machine instead, pointing www.a.com at the Centos1 server, and then open www.a.com in a browser to test the load-balanced site. Edit the hosts file with `sudo vi /etc/hosts` and add:
```
# the IP is the address of Centos1
192.168.0.1 www.a.com
```
Now visiting www.a.com returns Hello CentosX, and the X changes between requests, showing that different servers are answering.
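You can also watch the default round-robin behavior from the command line (assuming the hosts entry above is in place):

```bash
# Responses should cycle: Hello Centos2, Centos3, Centos4, Centos2, ...
for i in $(seq 1 6); do
  curl -s http://www.a.com
  echo
done
```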
5. Load-balancing strategies
There are a number of load-balancing policies that can be set up with nginx’s upstream. Here are some common policies.
- Round robin (the default): each request is assigned to a different backend server one by one in order; if a backend server goes down, it is automatically removed.
```nginx
upstream test {
    server 192.168.0.2:3000;
    server 192.168.0.3:3000;
    server 192.168.0.4:3000;
}
```
- Weighted: specifies the polling probability; weight is proportional to the share of requests, used when backend server performance is uneven.
```nginx
upstream test {
    server 192.168.0.2:3000 weight=5;
    server 192.168.0.3:3000 weight=10;
    server 192.168.0.4:3000 weight=20;
}
```
- Ip_hash: each request is allocated according to the hash of the client IP, so each visitor consistently reaches the same backend server, which can solve session problems.
```nginx
upstream test {
    ip_hash;
    server 192.168.0.2:3000;
    server 192.168.0.3:3000;
    server 192.168.0.4:3000;
}
```
- Fair (third party) : Requests are allocated based on the response time of the back-end server, with priority given to those with short response times.
```nginx
upstream test {
    server 192.168.0.2:3000;
    server 192.168.0.3:3000;
    server 192.168.0.4:3000;
    fair;
}
```
- Url_hash (third-party) : Allocates requests based on the hash result of the url, so that each URL is directed to the same (corresponding) backend server, which is effective when the backend server is used for caching.
```nginx
upstream test {
    server 192.168.0.2:3000;
    server 192.168.0.3:3000;
    server 192.168.0.4:3000;
    hash $request_uri;
    hash_method crc32;
}
```
For more information, see ngx_http_upstream_module. Use the above policies as required; without special requirements, the default round-robin method is fine.
Conclusion
From static sites to Node sites to load balancing, I hope this article gives you a fuller picture of front-end deployment. Load balancing in particular tends to feel complicated when you rarely touch it, but it turns out to be quite simple once you've seen it done. More advanced deployments might use Docker or a Kubernetes cluster, but that's a topic for another day.
On deployment efficiency, this article also shared how to use the rsync command, combined with package.json scripts, to complete deployment in a single command.
Of course, there is still plenty of room for improvement. The really good approach is continuous integration: with Jenkins or similar tools, a code push can trigger an automatic build and deployment. If your company still deploys the primitive way, it may be time to explore these smoother workflows.