This is a summary of Java interview topics, covering key Java knowledge as well as common open-source frameworks; you are welcome to read it. The article may contain mistakes, since personal knowledge is limited, and corrections from readers are welcome! The article is continuously being updated…

| ID | Title | Address |
| --- | --- | --- |
| 1 | Design Patterns Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 2 | Java Basics Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 3 | Java Collections Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 4 | Java IO, BIO, NIO, AIO, Netty Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 5 | Java Concurrent Programming Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 6 | Java Exceptions Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 7 | Java Virtual Machine (JVM) Interview Questions | juejin.cn/post/684490… |
| 8 | Spring Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 9 | Spring MVC Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 10 | Spring Boot Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 11 | Spring Cloud Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 12 | Redis Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 13 | MyBatis Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 14 | MySQL Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 15 | TCP, UDP, Socket, HTTP Interview Questions | juejin.cn/post/684490… |
| 16 | Nginx Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 17 | ElasticSearch Interview Questions | |
| 18 | Kafka Interview Questions | |
| 19 | RabbitMQ Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 20 | Dubbo Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 21 | ZooKeeper Interview Questions | juejin.cn/post/684490… |
| 22 | Netty Interview Questions (most comprehensive summary) | |
| 23 | Tomcat Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 24 | Linux Interview Questions (most comprehensive summary) | juejin.cn/post/684490… |
| 25 | Internet-related Interview Questions (most comprehensive summary) | |
| 26 | Internet Security Interview Questions (most comprehensive summary) | |

What is Nginx?

  • Nginx is a lightweight, high-performance web server and reverse proxy. It implements very efficient reverse proxying and load balancing, and can handle 20,000 to 30,000 concurrent connections (official tests report support for up to 50,000). Many large websites in China use Nginx, such as Sina, NetEase, and Tencent.

Why Nginx?

  • Cross-platform and simple to configure; reverse proxying; high concurrency: handles 20,000 to 30,000 concurrent connections (official tests report up to 50,000); small memory footprint: 10 Nginx processes take only about 150 MB of memory; Nginx serves static files well with low memory consumption.

  • Nginx also has built-in health checks: if a backend server goes down, the health check detects it, requests are no longer sent to that server, and they are resubmitted to another node instead.

  • Using Nginx you can also:

    1. Bandwidth savings: supports gzip compression and browser local caching
    2. High stability: very low probability of downtime
    3. Asynchronous reception of user requests
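As a sketch of point 1 (a minimal illustration; the compression types and cache duration are assumed values, not from the original article), gzip and browser caching are typically enabled in nginx.conf like this:

```nginx
# enable gzip compression to save bandwidth (sample values)
gzip on;
gzip_min_length 1k;            # do not compress tiny responses
gzip_types text/plain text/css application/javascript;

# let browsers cache static files locally
location ~* \.(jpg|png|gif|js|css)$ {
    expires 7d;                # send cache headers valid for 7 days
}
```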

Why is Nginx so high performance?

  • Because of its event-handling mechanism: asynchronous, non-blocking event handling using the epoll model, with a queue for pending requests.
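A minimal sketch of this pattern in Python (illustrative only, not Nginx source): the `selectors` module uses epoll on Linux, and a single thread multiplexes the listening socket and client connections without blocking.

```python
import selectors
import socket

# One thread, non-blocking sockets, readiness notification (epoll on Linux):
# the core of an event-driven server like Nginx, reduced to one echo exchange.
sel = selectors.DefaultSelector()

def serve_one_request(listener):
    """Accept one client and answer one message using the event loop."""
    sel.register(listener, selectors.EVENT_READ, data="accept")
    request = None
    while request is None:
        for key, _ in sel.select(timeout=5):   # block until some socket is ready
            if key.data == "accept":           # listener readable: new client
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="read")
            else:                              # client readable: data arrived
                request = key.fileobj.recv(4096)
                key.fileobj.sendall(b"ok")     # reply, then clean up
                sel.unregister(key.fileobj)
                key.fileobj.close()
    sel.unregister(listener)
    return request

listener = socket.create_server(("127.0.0.1", 0))
listener.setblocking(False)
client = socket.create_connection(listener.getsockname())
client.sendall(b"GET /")
request = serve_one_request(listener)
reply = client.recv(16)
client.close()
listener.close()
print(request, reply)
```

The same loop scales to thousands of connections because no socket ever blocks the thread; epoll just reports which ones are ready.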

How does Nginx handle requests?

  • When Nginx receives a request, it first matches a server block using the listen and server_name directives, and then matches a location block within that server; the location determines the actual address that is served
    server {                          # a server block: one virtual host site
        listen      80;
        server_name localhost;
        location / {                  # the first location block
            root  html;               # site root, relative to the Nginx installation directory
            index index.html index.htm;   # default home page files, separated by spaces
        }
    }

What are forward and reverse proxies?

  1. A forward proxy sits in front of the client: the client sends its request to the proxy, which forwards it to the target server on the client's behalf
  2. With a reverse proxy, requests are received uniformly by the Nginx reverse proxy server and then distributed to backend business servers for processing according to configured rules

What are the advantages of a reverse proxy server?

  • A reverse proxy server can hide the existence and features of the source server. It acts as an intermediate layer between the Internet cloud and the Web server. This is great for security, especially if you use web hosting services.

Pros and cons of Nginx?

  • Advantages:

    1. Small memory footprint, high concurrency, fast response
    2. Can act as an HTTP server, virtual host, reverse proxy, and load balancer
    3. Nginx is simple to configure
    4. The real IP addresses of the backend servers are not exposed
  • Disadvantages: weak dynamic processing. Nginx handles static files well and consumes little memory, but it is weak at processing dynamic pages, so front ends now generally use Nginx as a reverse proxy to absorb load.

Nginx application scenarios?

  1. HTTP server. Nginx can provide HTTP services independently and act as a static web server.
  2. Virtual hosting. Multiple websites can be hosted on one server, for example personal websites on a single virtual machine.
  3. Reverse proxy and load balancing. When traffic grows to the point where a single server cannot satisfy user requests, you need a server cluster, with Nginx as its reverse proxy. Multiple servers then share the load evenly, so no single server goes down from excessive load while others sit idle.
  4. Nginx can also provide security management; for example, you can use Nginx to build an API gateway and intercept requests to each interface service.

What is the Nginx directory structure?

    [root@localhost ~]# tree /usr/local/nginx
    /usr/local/nginx
    ├── client_body_temp
    ├── conf                        # configuration directory
    │   ├── fastcgi.conf            # fastcgi-related configuration
    │   ├── fastcgi.conf.default
    │   ├── fastcgi_params
    │   ├── fastcgi_params.default
    │   ├── koi-utf
    │   ├── koi-win
    │   ├── mime.types              # media types
    │   ├── mime.types.default
    │   ├── nginx.conf              # main Nginx configuration file
    │   ├── nginx.conf.default
    │   ├── scgi_params
    │   ├── scgi_params.default
    │   ├── uwsgi_params
    │   ├── uwsgi_params.default
    │   └── win-utf
    ├── fastcgi_temp                # temporary directory
    ├── html                        # default site directory
    │   ├── 50x.html                # error page, gracefully shown on failures
    │   └── index.html              # default home page
    ├── logs                        # log directory
    │   ├── access.log
    │   ├── error.log
    │   └── nginx.pid               # pid file: Nginx writes its master process ID here
    ├── proxy_temp                  # temporary directory
    ├── sbin                        # Nginx binary directory
    │   └── nginx                   # Nginx startup command
    ├── scgi_temp                   # temporary directory
    └── uwsgi_temp                  # temporary directory

What configuration blocks does the Nginx configuration file nginx.conf contain?

    worker_processes  1;                    # number of worker processes
    events {
        worker_connections  1024;           # maximum connections per worker
    }
    http {                                  # http block starts
        include       mime.types;
        default_type  application/octet-stream;   # default media type
        sendfile      on;
        keepalive_timeout  65;
        server {                            # a server block: one virtual host site
            listen       80;
            server_name  localhost;
            location / {                    # the first location block
                root   html;                # site root, relative to the Nginx installation directory
                index  index.html index.htm;    # default home page files
            }
            error_page  500 502 503 504  /50x.html;
            location = /50x.html {          # serves 50x.html on server errors
                root   html;                # the site directory is html
            }
        }
        ......
    }

Nginx static resources?

  • Static resource access means serving the HTML pages stored on the Nginx server, which we can write ourselves

How to solve front-end cross domain problems with Nginx?

  • Use Nginx to forward requests. Cross-domain interfaces are written as local-domain interfaces, and those interfaces are then forwarded to the real requesting address.
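A minimal sketch of this setup (the domain names and the /api prefix below are invented for illustration): the front end calls a path on its own origin, and Nginx forwards it to the real backend.

```nginx
server {
    listen       80;
    server_name  www.front-example.com;    # hypothetical front-end domain
    location /api/ {
        # The browser sees a same-origin request to /api/..., so no
        # cross-domain restriction applies; Nginx forwards it to the
        # real (cross-origin) backend.
        proxy_pass http://api.backend-example.com/;
    }
}
```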

How to configure Nginx virtual host?

  • 1. Domain-name-based virtual hosts, distinguished by domain name (application: public websites)

  • 2. Port-based virtual hosts, distinguished by port (application: internal company sites, admin backends of public websites)

  • 3. IP-based virtual hosts.

Configuring virtual hosts based on domain name

  • The /data/www and /data/bbs directories need to be created, the local Windows hosts file needs entries resolving each domain name to the VM's IP address, and an index.html file must be added to each site's directory.
    # When the client accesses www.lijie.com, serve pages from /data/www
    server {
        listen       80;
        server_name  www.lijie.com;
        location / {
            root   data/www;
            index  index.html index.htm;
        }
    }
    # When the client accesses bbs.lijie.com, serve pages from /data/bbs
    server {
        listen       80;
        server_name  bbs.lijie.com;
        location / {
            root   data/bbs;
            index  index.html index.htm;
        }
    }

Port-based virtual host

  • Sites are distinguished by port; the browser accesses them using the domain name or IP address plus the port number
    # Listen on port 8080 and serve pages from data/www
    server {
        listen       8080;
        server_name  8080.lijie.com;
        location / {
            root   data/www;
            index  index.html index.htm;
        }
    }
    # If the client accesses www.lijie.com on port 80, proxy straight to 127.0.0.1:8080
    server {
        listen       80;
        server_name  www.lijie.com;
        location / {
            proxy_pass http://127.0.0.1:8080;
            index  index.html index.htm;
        }
    }

What does location do?

  • The location directive routes requests to different handling according to the URI the user requested: it matches the requested URL, and once a match succeeds, the corresponding operations are carried out.

Location syntax?

Note: ~ indicates a regular-expression match, * makes the match case-insensitive, and ! negates it

| Matcher | Matching rule | Priority |
| --- | --- | --- |
| = | Exact match | 1 |
| ^~ | Starts with a given string | 2 |
| ~ | Case-sensitive regex match | 3 |
| ~* | Case-insensitive regex match | 4 |
| !~ | Case-sensitive regex non-match | 5 |
| !~* | Case-insensitive regex non-match | 6 |
| / | Universal match; any request matches | 7 |

Location regex examples

  • Example:
    # Priority 1: exact match of the root path
    location = / {
        return 400;
    }
    # Priority 2: prefix match for paths starting with /av
    location ^~ /av {
        root /data/av/;
    }
    # Priority 3: case-sensitive regex match for /media... paths
    location ~ /media {
        alias /data/static/;
    }
    # Priority 4: case-insensitive regex match; all .jpg/.png/.gif/.js/.css requests land here
    location ~* .*\.(jpg|png|gif|js|css)$ {
        root /data/av/;
    }
    # Priority 7: universal match for everything else
    location / {
        return 403;
    }

How is rate limiting done?

  • Nginx rate limiting means limiting the speed of user requests to prevent the server from being overwhelmed

  • There are three types of rate limiting

    1. Normal limiting of access frequency (normal traffic)
    2. Burst limiting of access frequency (burst traffic)
    3. Limiting the number of concurrent connections
  • Nginx's rate limiting is based on the leaky bucket algorithm

How the three rate-limiting approaches are implemented

1. Normal limiting of access frequency (normal traffic):
  • Nginx limits how many requests a user can send, i.e. how often Nginx accepts one request.

  • Nginx uses the ngx_http_limit_req_module module to limit access frequency, based on the principle of the leaky bucket algorithm. In the nginx.conf configuration file, the limit_req_zone and limit_req directives limit the request-processing frequency of a single IP address.

	# Define the rate-limiting zone: one request per user per minute; excess requests are dropped
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

	# Bind the zone to a location
	server {
	    location /seckill.html {
	        limit_req zone=one;
	        proxy_pass http://lj_seckill;
	    }
	}
  • 1r/s means one request per second and 1r/m one request per minute. If Nginx has not yet finished processing the current request, further requests from that user are refused.
2. Burst limiting of access frequency (burst traffic):
  • Limits how many requests a user can send and how often Nginx accepts one.

  • The configuration above limits access frequency to some extent, but there is a problem: what if burst traffic arrives during an activity, and every request beyond the limit is simply rejected, so the burst cannot be handled? For this, Nginx provides the burst parameter, combined with nodelay: burst sets how many requests can be handled beyond the configured rate. We can add burst and nodelay to the previous example:

    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;
    server {
        location /seckill.html {
            # allow up to 5 requests beyond the rate, processed immediately
            limit_req zone=one burst=5 nodelay;
            proxy_pass http://lj_seckill;
        }
    }
  • burst=5 nodelay means Nginx immediately processes up to five extra requests from a user beyond the configured rate; requests beyond that are dropped. If no other requests are pending, yours is processed; otherwise it is rejected.
3. Limiting the number of concurrent connections
  • The ngx_http_limit_conn_module module in Nginx provides limiting of concurrent connections, configured with the limit_conn_zone and limit_conn directives. Here is a simple example:
    http {
        limit_conn_zone $binary_remote_addr zone=myip:10m;
        limit_conn_zone $server_name zone=myServerName:10m;
    }

    server {
        location / {
            limit_conn myip 10;
            limit_conn myServerName 100;
            rewrite / http://www.lijie.net permanent;
        }
    }
  • This allows at most 10 concurrent connections per client IP and at most 100 concurrent connections for the whole virtual server. A connection is counted only after its request header has been fully processed by the server. As mentioned, Nginx's limiting is based on the leaky bucket algorithm; in practice, rate limiting is generally built on either the leaky bucket algorithm or the token bucket algorithm. Let's look at both:

Do you know the leaky bucket algorithm and the token bucket algorithm?

Leaky bucket algorithm

  • The leaky bucket algorithm is commonly used in networking for traffic shaping and rate limiting. Its main purpose is to control the rate at which data is injected into the network and to smooth out bursts: burst traffic enters a leaky bucket, which processes the queued requests in turn at the rate we define. If the flow is too large, the bucket overflows and the excess requests are rejected outright. The leaky bucket thus shapes bursty traffic into a steady flow and controls the data transmission rate.
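The mechanism above can be sketched in Python (an illustrative toy, not Nginx source; the capacity and rate values are made up):

```python
from collections import deque

class LeakyBucket:
    """Requests enter a fixed-size queue (the bucket) and leak out at a
    constant rate; when the bucket is full, the overflow is rejected."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # how many requests the bucket holds
        self.leak_rate = leak_rate    # requests drained per second
        self.queue = deque()
        self.last_leak = 0.0

    def allow(self, request_id, now):
        # Drain the requests that would have been processed since the last check.
        drained = int((now - self.last_leak) * self.leak_rate)
        if drained:
            for _ in range(min(drained, len(self.queue))):
                self.queue.popleft()
            self.last_leak = now
        if len(self.queue) >= self.capacity:
            return False              # bucket overflows: reject the request
        self.queue.append(request_id)
        return True

bucket = LeakyBucket(capacity=3, leak_rate=1.0)
burst = [bucket.allow(i, now=0.0) for i in range(5)]   # 5 requests at once
later = bucket.allow(99, now=2.0)                      # 2 seconds later
print(burst, later)
```

The first three requests of the burst are queued, the other two overflow; two seconds later the bucket has drained two slots, so a new request gets through.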

Token bucket algorithm

  • The token bucket algorithm is also among the most commonly used algorithms for network traffic shaping and rate limiting. It is typically used to control the amount of data sent to the network while still allowing bursts to pass. RateLimiter, in Google's open-source Guava library, uses the token bucket algorithm. It works as follows: a fixed-size bucket is filled with tokens at a constant rate; if tokens are consumed more slowly than they are produced, they accumulate until the bucket is full. Each request must take a token to proceed, so tokens saved up during quiet periods allow bursts through.
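A Python sketch of the idea (illustrative only, not Guava's RateLimiter; the capacity and refill rate are made up):

```python
class TokenBucket:
    """Tokens refill at a fixed rate up to the bucket size; each request
    consumes one token, so saved-up tokens let bursts pass."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity        # max tokens the bucket can hold
        self.refill_rate = refill_rate  # tokens generated per second
        self.tokens = float(capacity)   # start with a full bucket
        self.last_refill = 0.0

    def allow(self, now):
        # Add the tokens produced since the last call, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1            # consume one token for this request
            return True
        return False

bucket = TokenBucket(capacity=3, refill_rate=1.0)
burst = [bucket.allow(now=0.0) for _ in range(4)]  # burst served from saved tokens
half = bucket.allow(now=0.5)                       # only 0.5 tokens refilled yet
full = bucket.allow(now=1.5)                       # 1.5 tokens available now
print(burst, half, full)
```

Unlike the leaky bucket, which forces a constant output rate, the token bucket lets a burst go through immediately as long as enough tokens have accumulated.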

Why do we do dynamic/static separation?

  • Nginx is currently the most popular web server, and an important point of website optimization is making content static. The key to that is dynamic/static separation: splitting a dynamic website's resources, according to fixed rules, into resources that rarely change and resources that change often. Once the split is done, we can cache the static resources according to their characteristics.

  • Let static resources go only to the static resource server, and dynamic requests go to the dynamic server

  • Nginx is strong at serving static content but weak at dynamic processing, so dynamic/static separation is commonly used in enterprises.

  • Static resources such as images, JS, and CSS files are cached on the reverse proxy server, Nginx. When the browser requests a static resource, Nginx handles it directly, without forwarding the request to the backend Tomcat server. Requests for dynamic content, such as servlets or JSPs, are forwarded to Tomcat for processing. This achieves dynamic/static separation, which is also an important function of a reverse proxy server.

How does Nginx do static separation?

  • You only need to specify the directory corresponding to the path. location blocks can match with regular expressions and point at a directory on disk. For example (on Linux):
        location /image/ {
            root   /usr/local/static/;
            autoindex on;
        }
  1. Create a directory

    mkdir /usr/local/static/image
  2. Enter the directory

    cd  /usr/local/static/image
  3. Put a picture on it

    1.jpg
  4. Restart the nginx

    sudo nginx -s reload
  5. Open your browser and type server_name/image/1.jpg to access the static image

How does Nginx implement load balancing? What strategies are there?

  • To avoid servers crashing under load, load balancing is used to share the load. Users first reach a forwarding server, which distributes their requests to servers under less pressure.

  • Nginx implements the following five load balancing policies:

1 Polling (default)

  • Each request is allocated to a different back-end server one by one in chronological order. If a back-end server goes down, the failed system can be automatically removed.
    upstream backserver {
        server 192.168.0.12;
        server 192.168.0.13;
    }

2 Weight

  • The larger the weight value, the more requests are allocated to that server

  • This is used when backend server performance is uneven, giving stronger machines a higher access probability; it is also used in master/slave setups, where different weights make reasonable and effective use of host resources.

    upstream backserver {
        server 192.168.0.12 weight=2;
        server 192.168.0.13 weight=8;
    }
  • The higher the weight, the greater the probability of being visited, as shown above, 20% and 80% respectively.
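A Python sketch of weighted selection (an illustration of smooth weighted round-robin, the scheme commonly described for Nginx's weighted upstreams; the IPs and weights mirror the 2/8 example above):

```python
def smooth_weighted_rr(weights, n):
    """Each round, every server's current weight grows by its configured
    weight; the largest current weight wins the pick and is then reduced
    by the total, which interleaves picks smoothly instead of bunching
    the heavy server's turns together."""
    current = {server: 0 for server in weights}
    total = sum(weights.values())
    order = []
    for _ in range(n):
        for server, weight in weights.items():
            current[server] += weight
        chosen = max(current, key=current.get)
        current[chosen] -= total
        order.append(chosen)
    return order

# the same weights as the upstream block above: 2 vs 8
order = smooth_weighted_rr({"192.168.0.12": 2, "192.168.0.13": 8}, 10)
print(order.count("192.168.0.12"), order.count("192.168.0.13"))
```

Over 10 requests, 2 go to the lighter server and 8 to the heavier one, matching the 20%/80% split.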

3 IP_Hash (IP Binding)

  • Each request is allocated according to the hash of the client IP, so visitors from the same IP address always reach the same backend server; this effectively solves the session-sharing problem for dynamic websites
    upstream backserver {
        ip_hash;
        server 192.168.0.12:88;
        server 192.168.0.13:80;
    }
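The stickiness idea can be sketched in Python (a simplified illustration: real Nginx ip_hash considers only the first three octets of an IPv4 address, while this version hashes the whole string):

```python
import zlib

def pick_backend(client_ip, backends):
    """Hash the client address onto a fixed backend, so the same visitor
    always lands on the same server; this is why ip_hash helps with
    session sharing."""
    return backends[zlib.crc32(client_ip.encode()) % len(backends)]

backends = ["192.168.0.12:88", "192.168.0.13:80"]
first = pick_backend("10.0.0.7", backends)
second = pick_backend("10.0.0.7", backends)   # repeat visit, same client IP
print(first == second)
```

Because the mapping is a pure function of the IP, no shared session store is needed; the trade-off is uneven load if many clients share one address.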

4 Fair (Third-party plugin)

  • The upstream_fair module must be installed.

  • Compared with weight and ip_hash, fair is a smarter load-balancing algorithm: it balances load intelligently according to page size and load time, preferentially allocating requests to servers with short response times.

upstream backserver { 
 server server1; 
 server server2; 
 fair; 
} 

  • Requests are assigned to whichever server responds faster.

5 Url_hash (third-party plugin)

  • You must install the Nginx Hash software package

  • Requests are allocated based on the hash results of urls so that each URL is directed to the same backend server, which further improves the efficiency of the backend cache server.

upstream backserver { 
 server squid1:3128; 
 server squid2:3128; 
 hash $request_uri; 
 hash_method crc32; 
} 


How to configure Nginx high availability?

  • If the upstream server (the real server being accessed) fails or does not respond in time, the request should be rotated directly to the next server, keeping the service highly available

  • Nginx configuration code:

    server {
        listen       80;
        server_name  www.lijie.com;
        location / {
            ### point to the load-balanced upstream server group
            proxy_pass http://backServer;
            ### timeout for nginx connecting to the upstream (real) server
            proxy_connect_timeout 1s;
            ### timeout for nginx sending data to the upstream (real) server
            proxy_send_timeout 1s;
            ### timeout for nginx reading data from the upstream (real) server
            proxy_read_timeout 1s;
            index index.html index.htm;
        }
    }

How do I deny access from certain IP addresses in Nginx?

    if ($remote_addr = 192.168.9.115) {
        return 403;
    }

How do I limit browser access?

    if ($http_user_agent ~ Chrome) {
        return 500;
    }

What are the rewrite global variables?

| Variable | Meaning |
| --- | --- |
| $args | The arguments in the request line; same as $query_string |
| $content_length | Content-Length field in the request header |
| $content_type | Content-Type field in the request header |
| $document_root | The value specified in the root directive for the current request |
| $host | The request's Host header, or the server name if absent |
| $http_user_agent | Client agent information |
| $http_cookie | Client cookie information |
| $limit_rate | This variable can limit the connection rate |
| $request_method | The action requested by the client, usually GET or POST |
| $remote_addr | IP address of the client |
| $remote_port | Port of the client |
| $remote_user | User name authenticated by the Auth Basic module |
| $request_filename | The file path of the current request, generated from the root or alias directive and the request URI |
| $scheme | The request scheme (http or https) |
| $server_protocol | The protocol of the request, usually HTTP/1.0 or HTTP/1.1 |
| $server_addr | Server address, determined after completing a system call |
| $server_name | Server name |
| $server_port | The port on which the request arrived at the server |
| $request_uri | The original URI including request parameters, without the host name, e.g. "/foo/bar.php?arg=baz" |
| $uri | The current URI without request parameters or host name, e.g. "/foo/bar.html" |
| $document_uri | Same as $uri |