Nginx Concepts I wish I knew years ago

Original article by Aemie Jariwala (translated with permission)

Translators: Dried Fish & Marinated Eggs

Nginx is a web server built on a master-worker architecture that can also act as a reverse proxy, load balancer, mail proxy, and HTTP cache.

Hmm, the introduction to Nginx above looks a bit complicated and full of confusing terminology. Relax: in this article I will take you through the architecture and terminology of Nginx, and finish with a hands-on look at installing and configuring it.

In short, just remember this: Nginx is an amazing Web server. (Note: The magic will be explained below.)

So what is a web server? In short, a web server is a middleman. For example, when you type https://hellogithub.com in the address bar, your browser looks up the web server for hellogithub.com and sends the request to it; the web server forwards the request to the back-end server, which in turn returns a response to the client.

Proxy vs reverse proxy

The basic feature of Nginx is proxying, so it’s important to understand what proxying and reverse proxying are.

Proxy

As a small example, suppose we have N clients (N >= 1), an intermediate web server (which we will call the proxy here), and a server. The key point of this scenario is that the server does not know which client is making the request. Hard to picture? Let me illustrate it with a diagram:

As shown, Client1 and Client2 send request1 and request2 to the server through the proxy. The back-end server does not know whether request1 was sent by Client1 or Client2; it just performs the operation and returns the response.

The reverse proxy

Simply put, a reverse proxy does the opposite of a proxy. Now we have one client, one intermediate web server, and N back-end servers (N >= 1). Again, look at the diagram:

As shown, the client sends its request through the web server. The web server uses an algorithm (the most interesting being round-robin) to direct the request to one of the many back-end servers, and the response travels back to the client through the same web server. So in the example above, the client never really knows which back-end server it is interacting with. A minimal configuration sketch follows below.
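As a rough illustration (this snippet is mine, not from the original article), here is what a minimal reverse proxy looks like in an Nginx configuration; the back-end address http://localhost:3000 is just a placeholder:

http {

    server {
        listen 80;

        location / {
            # Every request is forwarded to a single back-end server;
            # the client only ever talks to Nginx.
            proxy_pass http://localhost:3000;
        }
    }

}

events {}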

Load balancing

Another boring term: load balancing. It is easy to understand, though, because load balancing is itself a specific use of a reverse proxy.

Take a look at the essential difference between load balancing and a reverse proxy. For load balancing you must have two or more back-end servers, while a reverse proxy does not need multiple servers; even a single back-end server will do. Digging a little deeper: when many client requests come in, the load balancer checks the status of each back-end server, distributes the requests evenly among them, and gets responses back to clients more quickly.

Stateful vs. stateless applications

Okay, before we start practicing Nginx, let’s get all the basics straight!

Stateful application

A stateful application stores an extra piece of state that holds information available only to the single server instance that stored it.

As shown in the figure, the back-end server Server1 stores some information that Server2 does not, so the client's interaction (Bob in the figure above) may or may not yield the desired result, because the request may be routed to Server1 or to Server2. In this case, Server1 allows Bob to view the data files, but Server2 does not. So, while a stateful application avoids extra API calls to the database and is therefore faster, it can cause this problem when requests land on different servers.

Stateless application

Stateless applications have more database API calls, but fewer problems when clients interact with different back-end servers.

Don't follow? In simple terms, if I send a request from the client through the web server to back-end server Server1, it returns a token to the client to use for any further requests. The client then sends subsequent requests, together with the token, to the web server. The web server forwards the request along with the token to any back-end server, and each back-end server can deliver the same desired result.

What is Nginx?

Nginx is the web server that I’ve been using for my entire blog so far. To be honest, Nginx is like a middleman.

This diagram is not hard to understand, and it is a combination of all the concepts so far. In this case, we have three back-end servers running on ports 3001, 3002, and 3003, all of which have access to the same database running on port 5432.

When a client makes a GET /employees request to https://localhost (default port 443), Nginx sends the request to one of the back-end servers according to its algorithm. That server fetches the data from the database and sends the JSON back to Nginx, which passes it on to the client.

If we use an algorithm such as round-robin, then when Client2 sends a request to https://localhost, the Nginx server first passes the request to port 3001 and returns the response to the client. For the next request, Nginx passes the request on to port 3002, and so on. A rough sketch of this setup follows below.
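To make this concrete, here is a minimal sketch of how the setup in the diagram could be written as an Nginx configuration. The three back-end ports (3001, 3002, 3003) come from the example above; the upstream name and the listen port are assumptions for illustration, and serving HTTPS on 443 would additionally require certificate directives:

http {

    upstream backend {
        # Round-robin is Nginx's default load-balancing method,
        # so no extra directive is needed here.
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
    }

    server {
        # Plain HTTP on port 80 to keep the sketch simple.
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }

}

events {}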

Knowledge reserve complete! At this point, you have a clear understanding of what Nginx is and the terminology that Nginx involves. It’s time to learn about installation and configuration techniques.

Start installing Nginx

If you have grasped the concepts above, it's time to get started with Nginx.

Well, the Nginx installation process is simple on any system. I am a macOS user, so the commands in this example are based on macOS; Ubuntu, Windows, and other Linux distributions work similarly.

$ brew install nginx

Just do this and your system has Nginx! Isn’t it amazing?

Running Nginx is so easy πŸ˜›

It’s also easy to check that Nginx is running.

$ nginx 
# OR 
$ sudo nginx

Execute the above command, then open your browser, enter http://localhost:8080/ and press Enter, and you will see the following screen!
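If you prefer the terminal, a quick check works too (8080 is the default port for a Homebrew install; adjust if yours differs):

$ curl -I http://localhost:8080/
# Expect an "HTTP/1.1 200 OK" status line if Nginx is up and serving.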

Nginx Basic Configuration & Examples

Let’s take a look at the magic of Nginx in action.

First, create the following directory structure locally:

.
├── nginx-demo
│   ├── content
│   │   ├── first.txt
│   │   ├── index.html
│   │   └── index.md
│   └── main
│       └── index.html
└── temp-nginx
    └── outsider
        └── index.html

Of course, put some basic content in the .html and .md files (a quick way to create everything is sketched below).
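If you want to create the whole structure in one go, here is a rough shell sketch; the placeholder text written into each file is just an assumption, so use whatever content you like:

$ mkdir -p nginx-demo/content nginx-demo/main temp-nginx/outsider
$ echo "first" > nginx-demo/content/first.txt
$ echo "<h1>content index</h1>" > nginx-demo/content/index.html
$ echo "# content index" > nginx-demo/content/index.md
$ echo "<h1>main index</h1>" > nginx-demo/main/index.html
$ echo "<h1>outsider index</h1>" > temp-nginx/outsider/index.html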

What do we want to achieve?

Here, we have two separate folders, nginx-demo and temp-nginx, each containing static HTML files. We'll focus on serving both folders on a common port and setting up the rules we want.

If you want to change the Nginx default configuration, you need to edit the nginx.conf file in the /usr/local/etc/nginx directory. I have Vim on my system, so I use Vim here to change the Nginx configuration; you can use your own editor.

$ cd /usr/local/etc/nginx
$ vim nginx.conf

The above command opens Nginx's default configuration file, which I really don't want to work with directly. My usual practice is to keep a copy of the original configuration file and then rewrite the main file from scratch, and this time is no exception.

$ cp nginx.conf copy-nginx.conf
$ rm nginx.conf && vim nginx.conf 

The above command will open an empty file to which we will add the configuration.

  1. Add the basic settings for the configuration. Be sure to add the events {} block, which in the Nginx architecture configures how the workers handle connections (for example, the number of connections each worker can handle). Here we use the http block to tell Nginx that we will be working at layer 7 of the OSI model.

    Here, we tell Nginx to listen on port 5000 and point to static files in the main folder.

    http {
    
      server {
        listen 5000;
        root /path/to/nginx-demo/main/; 
        }
    
    }
    
    events {}
  2. Next we will add additional rules for the /content and /outsider URLs, where /outsider points to a directory outside the root directory mentioned in the first step.

    Here, location /content means that whatever root I define inside that location block, the /content part of the URL is appended to it when Nginx looks for files on disk. So when I specify root /path/to/nginx-demo/, I am simply telling Nginx to serve http://localhost:5000/content/ from the static files in the /path/to/nginx-demo/content/ folder. For example, http://localhost:5000/content/first.txt maps to /path/to/nginx-demo/content/first.txt.

    http {
    
      server {
          listen 5000;
          root /path/to/nginx-demo/main/; 
    
          location /content {
              root /path/to/nginx-demo/;
          }   
    
          location /outsider {
              root /path/temp-nginx/;
          }
      }
    
    }
    
    events {}

    Awesome!!! Now Nginx can do more than define the URL root path: we can also set rules to prevent clients from accessing certain files.

  3. Next, we add a rule on the primary server to prevent any .md file from being accessed. Nginx supports regular expressions, so we define the rule like this (a quick test follows the snippet):

      location ~ \.md {
          return 403;
      }
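    As a quick sanity check (assuming the port-5000 server from step 1 is running and index.md exists under content/), you can verify the rule with curl:

    $ curl -I http://localhost:5000/content/index.md
    # Expect "HTTP/1.1 403 Forbidden" once the rule is in place.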
  4. Finally, we finish this chapter by studying the proxy_pass directive. We've seen what proxies and reverse proxies are; here we define another server listening on port 8888, so we now have two servers running on ports 5000 and 8888.

    What we need to do is this: when the client accesses port 8888 through Nginx, pass the request on to port 5000 and send the response back to the client. A quick curl test follows the snippet below.

    server {
        listen 8888;

        location / {
            proxy_pass http://localhost:5000/;
        }

        location /new {
            proxy_pass http://localhost:5000/outsider/;
        }
    }
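    A quick way to see the proxying in action (assuming both servers are running and the files created earlier exist):

    $ curl http://localhost:8888/
    # Proxied to port 5000 and served from nginx-demo/main/index.html.
    $ curl http://localhost:8888/new
    # Proxied to http://localhost:5000/outsider/ and served from temp-nginx/outsider/index.html.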

Take a look, this is all the configuration information 😁

http {

    server {
        listen 5000;
        root /path/to/nginx-demo/main/; 

        location /content {
            root /path/to/nginx-demo/;
        }   

        location /outsider {
            root /path/temp-nginx/;
        }

        location ~ \.md {
            return 403;
        }
    }

    server {
        listen 8888;

        location / {
            proxy_pass http://localhost:5000/;
        }

        location /new {
            proxy_pass http://localhost:5000/outsider/;
        }
    }

}

events {}

Use sudo nginx to run this configuration (or sudo nginx -s reload if Nginx is already running).
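Before starting or reloading, it is worth validating the file first; nginx -t checks the syntax without applying anything (the paths in the output will vary with your install):

$ sudo nginx -t
# nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful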

Other Nginx commands

  1. Start the Nginx Web server for the first time.

    $ nginx 
    #OR 
    $ sudo nginx
  2. Reload the running Nginx Web server.

    $ nginx -s reload
    #OR 
    $ sudo nginx -s reload
  3. Stop the running Nginx Web server.

    $ nginx -s stop
    #OR 
    $ sudo nginx -s stop
  4. View the Nginx process running on the system.

    $ ps -ef | grep nginx

The fourth command is important: if the first three commands run into problems, you can usually use the fourth to find all running Nginx processes, kill them, and then start Nginx again.

To kill a process you need its PID; then terminate it with the following command:

$ kill -9 <PID>
#OR 
$ sudo kill -9 <PID>

To end this article, I want to note that I used some images from Google and a video tutorial posted by Hussein Nasser on YouTube.

Enjoy Coding and explore the magic of Nginx! πŸ‘‹

Finally, you are welcome to join the HelloGitHub "translation dance" series and let your talent shine by sharing excellent articles with more people. Requirements:

  • Regularly browse English-language content and articles about GitHub, open source, programming, and the developer world
  • Want to share excellent English articles with more people
  • Translate accurately, without stiff literal renderings or machine translation
  • Commit to translating or proofreading at least one high-quality article per month
  • Understand Markdown and layout conventions
  • Contact me at HelloGitHub