The day before yesterday, in a second-round interview at the goose factory (Tencent), the interviewer asked: "Do you understand Nginx?" Faced with such a broad, open-ended question, I couldn't find the point and fumbled for words.
After work I thought it over: I use plenty of Nginx capabilities unconsciously in day-to-day work, but in the interview I failed to articulate the corresponding concepts.
Nginx core capabilities for interviews
Nginx is one of the most popular web servers in the world.
- High concurrency: officially, a single node supports 50,000 concurrent connections; in real production environments, 20,000 to 30,000 is typical.
- Low memory consumption: at 30,000 concurrent connections, 10 Nginx processes consume only about 150 MB of memory (15 MB x 10 = 150 MB).
- Simple configuration and low cost: open source and free.
1. Forward and reverse proxies
A "proxy" is a piece of hardware or software deployed at the edge of an intranet to forward requests. Whether it is "forward" or "reverse" depends on whether it forwards outbound requests or inbound requests.
- Forward proxy: handles outbound requests from intranet clients, forwards them to the Internet, and returns the responses to the clients.
- Reverse proxy: handles inbound requests from the Internet, forwards them to back-end workers, and returns the responses to the Internet.
- Forward and reverse proxies differ only in the direction of proxying; both handle HTTP requests/responses.
- Proxy servers exist for:
- Bastion host / intranet isolation: when intranet clients cannot reach the Internet directly, a bastion server is configured as the gateway, which also hides the intranet working servers
- Additional proxy functions: operating on traffic, improving performance with caching or compression, defending against attacks, and filtering information
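As a hedged illustration of the "forward" direction (the listen port and resolver address are invented for this sketch, and open-source Nginx only forward-proxies plain HTTP this way; HTTPS CONNECT needs third-party modules):

```nginx
# Minimal forward-proxy sketch (illustrative values): intranet clients
# point their HTTP proxy setting at this node, which resolves and
# forwards their outbound requests to the Internet.
server {
    listen 3128;

    # A DNS resolver is required because the upstream host is dynamic.
    resolver 8.8.8.8;

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
    }
}
```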
2. Load balancing
Load balancing usually goes hand in hand with reverse proxying: it distributes traffic across servers, proxies transparently, and improves fault tolerance.
```nginx
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
```
In the early days, our core product was deployed on two Windows Server + IIS machines, with an Nginx in front doing load balancing.
Obviously, a load-balancing strategy is involved here.
- Round-robin: requests are distributed to the servers in turn (the default strategy)
- Least-connected: the next request goes to the server with the fewest active connections
- IP-hash: a hash function over the client's IP address determines which server receives the request
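The three strategies map directly onto `upstream` directives. A minimal sketch (host names and weights are placeholders):

```nginx
# Round-robin is the default; uncomment exactly one directive below to
# switch strategy (server names here are illustrative, not real hosts).
upstream myapp1 {
    # least_conn;   # least-connected
    # ip_hash;      # ip-hash: same client IP -> same server
    server srv1.example.com weight=3;  # weight skews round-robin traffic
    server srv2.example.com;
    server srv3.example.com;
}
```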
Nginx.org/en/docs/htt…
✨ Extended skill points:
- [Service discovery]: In container/K8S environments, service addresses are dynamically assigned by the cluster, which generally has service discovery built in. A service name defined in docker-compose/K8S represents the whole service. See "Using Nginx to load-balance multiple instances of a docker-compose service".
- [Session affinity]: Also known as "sticky sessions": in a stateful application, requests from the same client are routed to the same back-end server. Here's another example: "Uploading and previewing pictures with session affinity".
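In open-source Nginx, session affinity is commonly approximated with `ip_hash`, or with the `hash` directive keyed on a session cookie. A sketch (the cookie name and host names are assumptions for illustration):

```nginx
# Sticky-session sketch: requests carrying the same session cookie are
# consistently hashed to the same back end (cookie name is illustrative).
upstream stateful_app {
    hash $cookie_sessionid consistent;
    server app1.internal:8080;
    server app2.internal:8080;
}
```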
3. Dynamic/static separation
Dynamic/static separation is related to the currently popular concept of front-end/back-end separation: the front end can be developed and tested on its own, with Nginx serving as the static resource server, while the back-end service is consumed purely as an attached resource (an API).
The following example serves static resources from /usr/share/nginx/html and treats paths containing /api or /swagger as dynamic resources.
```nginx
upstream eap_website {
    server eapwebsite;
}

server {
    listen 80;

    # static resources
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
    }

    # dynamic resources
    location ^~ /api/ {
        proxy_pass http://eap_website/api/;
    }

    # dynamic resources
    location ^~ /swagger/ {
        proxy_pass http://eap_website/swagger/;
    }
}
```
✨ Extended skill points
- The above practice also reflects Factor IV of the Twelve-Factor App methodology (backing services: treat backing services as attached resources); it is a bit of a shame that the back end is reduced to API development under this architecture.
- Here's a handy tip: dynamically inject the API base address when the container starts, which is useful when containerizing a dynamic/static-separated deployment.
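One way to do that (a sketch under assumed names: the `__API_BASE_URL__` marker, file names, and paths are invented for illustration) is to render a tiny runtime-config script from a template in the container entrypoint:

```shell
#!/bin/sh
# Entrypoint sketch: render the front end's runtime config from a
# template so one image works in every environment. Demonstrated here
# against a temp directory instead of the real web root.
set -eu

HTML_DIR="$(mktemp -d)"
# The image would ship this template next to the built front end:
echo 'window.API_BASE_URL = "__API_BASE_URL__";' > "$HTML_DIR/env.template.js"

# The orchestrator passes the real base address via environment.
API_BASE_URL="${API_BASE_URL:-https://api.example.com}"
sed "s|__API_BASE_URL__|$API_BASE_URL|g" \
    "$HTML_DIR/env.template.js" > "$HTML_DIR/env.js"

cat "$HTML_DIR/env.js"
# A real entrypoint would finish by handing over to nginx:
#   exec nginx -g 'daemon off;'
```

The idea is that index.html loads env.js before the app bundle, so the bundle reads `window.API_BASE_URL` at runtime instead of baking it in at build time.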
4. Practical features
- Hosting multiple web apps under the same domain name, distinguished by port
- Binding an HTTPS certificate: one domain name serves two apps, on port 443 and on port 8080, each with TLS
```nginx
upstream receiver_server {
    server receiver:80;
}

upstream app_server {
    server app:80;
}

server {
    listen 443 ssl http2;
    server_name eqid.gridsum.com;
    ssl_certificate /conf.crt/live/gridsum.com.crt;
    ssl_certificate_key /conf.crt/live/gridsum.com.key;

    location / {
        proxy_pass http://receiver_server/;
    }
}

server {
    listen 8080 ssl http2;
    server_name eqid.gridsum.com;
    ssl_certificate /conf.crt/live/gridsum.com.crt;
    ssl_certificate_key /conf.crt/live/gridsum.com.key;

    location / {
        proxy_pass http://app_server/;
    }
}
```
- Rewrite rules: distributing HTTP requests to different back-end nodes based on the domain name and URL
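As a hedged sketch of such rules (domain names, paths, and upstream names are all invented for illustration):

```nginx
# Routing-by-host plus URL rewriting sketch (all names illustrative).
upstream blog_backend { server blog:80; }
upstream shop_backend { server shop:80; }

server {
    listen 80;
    server_name blog.example.com;

    # Permanently redirect legacy article URLs to the new scheme.
    rewrite ^/old-posts/(.*)$ /posts/$1 permanent;

    location / {
        proxy_pass http://blog_backend;
    }
}

server {
    listen 80;
    server_name shop.example.com;

    location / {
        proxy_pass http://shop_backend;
    }
}
```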
- Built-in (passive) health checks: if a back-end node goes down, requests are no longer forwarded to it, so the online service is not affected. Key directives: max_fails, fail_timeout
```nginx
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
    server backup1.example.com backup;
}
```
- Bandwidth saving: GZIP compression support
- Solving cross-origin problems: ① via reverse proxy ② by adding CORS response headers

Points 5 and 6 come together in the following snippet: a front-end/back-end separated project, with CORS response headers added for cross-origin requests and GZIP compression enabled for static resources:
```nginx
location / {
    gzip on;
    gzip_types application/javascript text/css image/jpeg;

    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /index.html;

    # Note: browsers reject Access-Control-Allow-Credentials: true when
    # the allowed origin is the wildcard '*'; echo a concrete origin
    # instead if credentialed requests are needed.
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
    add_header 'Access-Control-Allow-Headers' 'Content-Type';
    add_header 'Access-Control-Allow-Credentials' 'true';
}
```
To get into a big company you have to push your tech stack beyond your comfort zone; big-company engineers usually carry multiple skills they can plug in at any time.

With solid fundamentals everything connects, and you unlock hard topics faster.

I'll most likely fail Tencent's second round: I had the hands-on practice but couldn't turn it into concepts, so I'm writing it down and sharing it.

This article reviews the Nginx practices of a small-shop developer, which should be enough to talk through in your next interview. If there are any mistakes, please point them out in the comments.