- Nginx Concepts I wish I knew years ago
- Author: Aemie Jariwala
- The Nuggets translation Project
- Permanent link to this article: github.com/xitu/gold-m…
- Translator: joyking7
- Proofread: PassionPenguin, ningzhy3
Nginx is a web server built on a master-worker process architecture that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
Ah, wow! Complicated terminology and confusing definitions, right? Don't worry, I can help you understand the basic architecture and terminology of Nginx before we install it and create an Nginx configuration.
To keep things simple, just remember: Nginx is an amazing Web server.
Simply put, a web server acts as a middleman. For example, if you want to access dev.to, you type the address https://dev.to; your browser looks up the address of dev.to's web server, which directs the request to the backend server, and the backend server returns the response to the client.
Proxy vs reverse proxy
Nginx’s basic functionality is proxying, so it’s time to understand what proxying and reverse proxying are.
Proxy
Here we have one or more clients, an intermediate web server (which in this case we call the proxy), and a backend server. The key point is that the backend server does not know which client is making the request. Confused? Let me illustrate with a diagram.
In this case, clients 1 and 2 send request1 and request2 to the server through the proxy server; the backend server does not know whether request1 was sent by Client1 or Client2, and it simply performs the operation.
The reverse proxy
In the simplest terms, a reverse proxy works the other way around. Say you have a client, an intermediate Web server, and one or more backend servers. Let’s continue with a schematic illustration.
In this case, the client sends a request through the web server, which directs the request to one of the many backend servers using an algorithm; one such algorithm is round-robin (the cutest one!). The response then returns to the client via the web server. So here, the client does not know which backend server it is interacting with.
Load balancing
Darn, another new word, but one that’s easier to understand because it’s a practical application of reverse proxy itself.
Let's start with the basic difference: load balancing requires two or more backend servers, while a reverse proxy setup does not — it works even with a single backend server.
Looking behind the scenes: when a large number of requests arrive from clients, the load balancer checks the status of each backend server and distributes the requests across them, so responses reach clients more quickly.
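Nginx implements round-robin distribution internally; as a toy illustration (plain Python, with made-up server names — not Nginx's actual code), the algorithm itself is just a fixed rotation over the backends:

```python
from itertools import cycle

# Toy sketch of round-robin load balancing: each incoming request is
# handed to the next backend in a fixed rotation.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._backends = cycle(backends)

    def pick(self):
        """Return the backend that should handle the next request."""
        return next(self._backends)

balancer = RoundRobinBalancer(["server1", "server2", "server3"])
print([balancer.pick() for _ in range(4)])
# → ['server1', 'server2', 'server3', 'server1']
```

After the last backend, the rotation wraps around to the first — no backend is ever favored, which is why round-robin is the default choice when all backends are equally powerful.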
Stateful apps vs stateless apps
Ok guys, I promise I’ll get to Nginx code soon, so let’s get all the basics straight!
Stateful application
A stateful application stores extra information (state) that applies only to a single server instance.
What I mean is: if backend server Server1 stores some information, that information is not stored on Server2, so the interacting client (Bob, in this case) may not get the expected result, because his requests may land on either Server1 or Server2. Server1 will allow Bob to view the configuration file, but Server2 will not. So even though stateful applications avoid many API calls to the database and are therefore faster, they can cause these inconsistencies across servers.
Stateless application
Now, stateless means more API calls to the database, but fewer problems for clients interacting with different backend servers.
I know that's still abstract. In simple terms: if a client sends a request through the web server to, say, backend server Server1, Server1 issues the client a token to use for any further requests. The client includes the token in subsequent requests to the web server, which forwards the request along with the token to any backend server — and every server returns the same expected output.
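The token flow above can be sketched with a signed token (here an HMAC in plain Python — a hypothetical illustration, not what any particular framework does). The point is that any backend holding the same shared secret can verify the token, so no per-server session state is needed:

```python
import hashlib
import hmac

# Hypothetical shared secret: in a real deployment, every backend
# instance would load the same secret from configuration.
SECRET = b"shared-secret"

def issue_token(user: str) -> str:
    """Any backend can issue a token; no per-server session is stored."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token: str) -> bool:
    """Any other backend can verify the token using only the secret."""
    user, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("bob")
print(verify_token(token))  # → True, on any server instance
```

Because verification depends only on the shared secret and the token itself, Server1 and Server2 behave identically — which is exactly the property that makes the application stateless.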
What is Nginx?
Nginx is a web server — I've been using the term "web server" throughout this post, and to be honest, it's like a middleman.
This diagram is not hard to follow; it's just a combination of all the concepts explained so far. In this figure, we have three backend servers running on ports 3001, 3002, and 3003 respectively. These backend servers share the database running on port 5432.
Now, when a client sends a GET /employees request to https://localhost (port 443 by default), Nginx routes the request, according to its algorithm, to one of the backend servers, which fetches the information from the database; the JSON result is sent back to the Nginx web server, which in turn returns it to the client.
If we use an algorithm such as round-robin scheduling, Nginx does the following: if Client2 sends a request to https://localhost, Nginx first routes the request to port 3001 and returns the response to the client; for the next request, Nginx passes it to port 3002, and so on.
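The setup in the diagram can be sketched as an Nginx configuration using an `upstream` block (a hedged sketch — TLS certificate paths and real backend addresses are omitted/assumed):

```nginx
http {
    # The three backend servers from the diagram; round-robin is
    # Nginx's default behavior for an upstream group, so no extra
    # directive is needed.
    upstream backend {
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
    }

    server {
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key would go here in a real setup

        location / {
            proxy_pass http://backend;
        }
    }
}

events {}
```

Each incoming request is handed to the next server in the `upstream` group, exactly as described above.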
That’s a lot of concepts! But by now, you should have a good idea of what Nginx is and its terminology. Now, we’ll move on to Nginx installation and configuration.
The installation process
Finally! If you've made it through the Nginx concepts and reached the code part, that's awesome!
Well, to be honest, installing Nginx on any operating system requires only one command. I’m a Mac OSX user, so I write commands based on it. But there are similar commands for Ubuntu and Windows, as well as other Linux distributions.
```shell
$ brew install nginx
```
With just one command, you have Nginx installed on your system! So easy! 😛
To check that Nginx is up and running on your system, run the following command — another very simple step.
```shell
$ nginx
# OR
$ sudo nginx
```
After running the command, use your favorite browser to visit http://localhost:8080/ and you will see the following screen!
Basic configuration and examples
Ok, we’ll show you the magic of Nginx with an example. First, create the following directory structure on your local machine:
```
.
├── nginx-demo
│   ├── content
│   │   ├── first.txt
│   │   ├── index.html
│   │   └── index.md
│   └── main
│       └── index.html
└── temp-nginx
    └── outsider
        └── index.html
```
Also, put some basic placeholder content in the HTML and Markdown files.
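If you want to create the whole layout in one go, a few shell commands will do it (the placeholder text here is arbitrary — any content works):

```shell
# Recreate the sample layout from the article (run from a scratch directory).
mkdir -p nginx-demo/content nginx-demo/main temp-nginx/outsider
echo "first" > nginx-demo/content/first.txt
echo "<h1>content index</h1>" > nginx-demo/content/index.html
echo "# markdown index" > nginx-demo/content/index.md
echo "<h1>main index</h1>" > nginx-demo/main/index.html
echo "<h1>outsider index</h1>" > temp-nginx/outsider/index.html
```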
What are we trying to achieve?
Here, we have two separate folders, nginx-demo and temp-nginx, each containing static HTML files. We'll focus on serving both folders on a common port, with rules of our choosing.
Now back on track. We can change Nginx's default configuration by modifying the nginx.conf file in the /usr/local/etc/nginx directory. I have Vim on my system, so I'll use Vim to make the changes; feel free to use the editor of your choice.
```shell
$ cd /usr/local/etc/nginx
$ vim nginx.conf
```
This opens the default Nginx configuration file, but I really don't want to build on its default contents. So I usually make a copy of the file and then rewrite the original from scratch. We'll do the same here.
```shell
$ cp nginx.conf copy-nginx.conf
$ rm nginx.conf && vim nginx.conf
```
This opens an empty file, to which we will add our configuration.
-
Add the basic configuration. The events {} block is required: it is where Nginx's connection handling (such as the number of connections each worker may handle) is configured. The http block tells Nginx that we are working at layer 7 of the OSI model.
Here, we have Nginx listening on port 5000 and pointing to static files in the /nginx-demo/main folder.
```nginx
http {
    server {
        listen 5000;
        root /path/to/nginx-demo/main/;
    }
}

events {}
```
-
Next we will add additional rules for the /content and /outsider URLs, where /outsider points to a directory outside the root directory (nginx-demo) mentioned in the first step.
Here, location /content means that whatever root I define for it, the /content sub-URL is appended to that root on disk. So when I specify root /path/to/nginx-demo/, I'm telling Nginx that a request to http://localhost:5000/content/ should be served with the static files in the /path/to/nginx-demo/content/ folder.
```nginx
http {
    server {
        listen 5000;
        root /path/to/nginx-demo/main/;

        location /content {
            root /path/to/nginx-demo/;
        }

        location /outsider {
            root /path/temp-nginx/;
        }
    }
}

events {}
```
Cool! Nginx is not limited to mapping root URLs; it can also enforce rules, so I can prevent clients from accessing certain files.
-
We will add a rule to the main server block that blocks access to any .md file. Nginx supports regular expressions, so the rule looks like this:
```nginx
location ~ \.md$ {
    return 403;
}
```
-
Finally, let's look at the well-known proxy_pass directive. Now that we know what proxies and reverse proxies are, let's define a second server listening on port 8888, so we now have two servers running on ports 5000 and 8888 respectively.
What we want is this: when a client reaches port 8888 through Nginx, the request is passed on to port 5000, and the response is returned to the client!
```nginx
server {
    listen 8888;

    location / {
        proxy_pass http://localhost:5000/;
    }

    location /new {
        proxy_pass http://localhost:5000/outsider/;
    }
}
```
Finally, take a look at the complete code! 😁
```nginx
http {
    server {
        listen 5000;
        root /path/to/nginx-demo/main/;

        location /content {
            root /path/to/nginx-demo/;
        }

        location /outsider {
            root /path/temp-nginx/;
        }

        location ~ \.md$ {
            return 403;
        }
    }

    server {
        listen 8888;

        location / {
            proxy_pass http://localhost:5000/;
        }

        location /new {
            proxy_pass http://localhost:5000/outsider/;
        }
    }
}

events {}
```
Check the configuration with sudo nginx -t, then run it with sudo nginx (or reload with sudo nginx -s reload if Nginx is already running).
Additional Nginx commands!
-
Start the Nginx Web server for the first time.
```shell
$ nginx
# OR
$ sudo nginx
```
-
Reload the running Nginx Web server.
```shell
$ nginx -s reload
# OR
$ sudo nginx -s reload
```
-
Shut down the running Nginx Web server.
```shell
$ nginx -s stop
# OR
$ sudo nginx -s stop
```
-
Find which Nginx processes are running on the system.

```shell
$ ps -ef | grep nginx
```
The fourth command is important: if the first three fail, you can use it to find all running Nginx processes, kill them, and restart the Nginx service.
To kill a process, you need to know its PID first and then kill it with the following command:
```shell
$ kill -9 <PID>
# OR
$ sudo kill -9 <PID>
```
Before ending this article, I would like to note that the images and visuals used here are from Google Images and the YouTube tutorials by Hussein Nasser.
So much for the basics of Nginx and configuration. If you’re interested in Nginx’s advanced configuration, let me know in the comments. Until then, enjoy programming and explore the magic of Nginx! 👋
If you find any mistakes in the translation or other areas that need improvement, you are welcome to revise the translation and submit a PR to the Nuggets Translation Project, for which you can earn reward points. The permanent link at the beginning of this article is the MarkDown link to this article on GitHub.
The Nuggets Translation Project is a community that translates high-quality technical articles from English and shares them on Nuggets. The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and other fields. For more high-quality translations, please follow the Nuggets Translation Project and its official Weibo and Zhihu column.