Project background
In back-end microservice architectures, a unified gateway is usually exposed to the outside world, so the whole system has a single entrance and exit that converges its services. On the front end, however, a unified gateway service is less common: applications are usually served independently, and the industry also uses micro-front-end frameworks for application scheduling and communication, of which gateway forwarding is one solution. Here we use Nginx as a gateway that converges the front-end applications. The projects we need to deploy keep growing while public ports are limited, so in order to connect more applications we borrowed the idea of the back-end gateway and implemented a front-end gateway proxy-forwarding solution. This article summarizes the thinking, steps, and pitfalls of this front-end gateway practice, and hopefully offers some ideas to readers facing similar application scenarios.
Architecture design
Layer | Role | Note |
---|---|---|
Gateway layer | Carries front-end traffic as the unified entrance | Either front-end or back-end routing can carry the traffic, mainly for traffic segmentation; a single application can also be deployed here to mix routing and scheduling |
Application layer | Hosts each front-end application, not limited to any framework; applications can communicate directly over HTTP or have the gateway distribute messages, provided the gateway layer can receive and schedule them | Not limited by front-end framework or version; each application is deployed independently. Inter-application communication goes over HTTP, or through the container platform (e.g. K8S) when deployed there |
Interface layer | Fetches data from the back end, which may be different microservice gateways, separate third-party interfaces, or a BFF layer such as Node.js, depending on how the back end is deployed | Unified shared interfaces can be handed to the gateway layer for proxy forwarding |
Scheme selection
At present, the business logic of the project's applications is relatively complex, so it is not convenient to unify them into a micro front end in the single-spa style. We therefore chose gateway-based segmentation, with Nginx as the main technology. In addition, multiple third-party applications need to be connected later, and the iframe approach raises the problem of bridging networks. Since public ports are limited in our service environment, we needed a design where one public port can serve N virtual applications. Nginx was therefore finally selected as the main gateway for traffic forwarding and application segmentation.
Layer | Plan | Note |
---|---|---|
Gateway layer | One Nginx serves as the public traffic entrance, and the URL path is used to split the different sub-applications | The parent Nginx application, as the front-end entrance, needs load balancing; here we use K8S load balancing with three replicas, so if one pod goes down K8S pulls a new one up |
Application layer | Multiple different Nginx applications; because of path segmentation, resource paths need special handling, see the pit case in the next section | A Docker-mounted directory is used for this |
Interface layer | After each child Nginx reverse-proxies its interfaces, requests sent by the browser cannot be forwarded by the parent, so the front-end code needs adjustment; see the pit case for details | CI/CD build scaffolding will be configured, along with plugin packages for common front-end scaffolds such as Vue CLI, CRA, and UMI |
Pit cases
Static resource 404 error
[Case Description] We found that after the path proxy, HTML resources could be located normally, but JS and CSS resources returned 404 not found errors
[Case Analysis] Most applications today are single-page applications, which mainly operate on the DOM through JS. MV* frameworks usually perform front-end routing and intercept operations on some data, so during template-engine processing they look up resources by relative path
[Solution] Our project is deployed via Docker + K8S, so we unify each application's resources under one path directory, and this directory name must be consistent with the forwarding path of the parent Nginx application. In other words, the child application registers its routing information with the parent application; later, changes can be located through service registration
Parent application Nginx configuration and service registry entry
```json
{
  "rj": {
    "name": "XXX application",
    "path": "/rj/"
  }
}
```

```nginx
server {
    location /rj/ {
        proxy_pass http://ip:port/rj/;
    }
}
```
Child application Dockerfile
```dockerfile
FROM xxx/nginx:1.20.1
# Copy the build output into a directory matching the gateway path /rj/
COPY ./dist /usr/share/nginx/html/rj/
```
Interface proxy 404 error
[Case Description] After fixing the static resources, we requested an interface through the parent application and found that the interface also returned a 404 error
[Case Analysis] Since the project separates front end and back end, back-end interfaces are usually reverse-proxied by the child application's Nginx. After the parent Nginx forwards the request, there is no matching proxy for the interface address in the parent, so the resource cannot be found
[Solution] There are two solutions. One is to let the parent application proxy the back-end interface addresses. The problem here is that child applications may proxy under the same name, interfaces may come from more than one microservice, and there may also be different static proxies and BFF forms, so the parent application's configuration grows uncontrollably complex. The other is to isolate interfaces by changing the front-end request path in each child application to an agreed path, for example one recorded in the service registry. We use both: the gateway unifies proxy forwarding for complex applications and static resources, and applications agree on path names, e.g. back-end interfaces are uniformly forwarded through the gateway under `/api/`. Newly connected projects need to change their application interface paths; later we will provide a plugin library with the API-change plan for common scaffolds such as vue-cli/cra/umi. Applications built with a third-party team's homegrown scaffold need manual changes, although in general such teams configure a unified front-end request path; old applications, such as projects built with jQuery, have to be changed by hand
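As a sketch of the agreed-path approach (the `/rj/` prefix and the `ip:port` upstream are placeholders, not the project's real addresses), the child Nginx can serve its static files and proxy its own interface prefix to the back end:

```nginx
# Illustrative only; ip:port is a placeholder back-end address.
server {
    listen 80;

    # Static resources of the child application, built under /rj/
    location /rj/ {
        root /usr/share/nginx/html;
    }

    # Agreed interface prefix: the front end requests /rj/api/...;
    # the trailing slash in proxy_pass strips the matched /rj/api/
    # prefix, so /rj/api/users is forwarded as /users.
    location /rj/api/ {
        proxy_pass http://ip:port/;
    }
}
```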
Here is a demonstration built with Vue CLI 3:
```javascript
// config
export const config = {
  data_url: '/rj/api'
};
```
```javascript
// Specific interface
// Usually there is some axios interceptor handling here
import request from '@/xxx';
// baseUrl has a unified entry; only this entry needs changing
import { config } from '@/config';

export const xxx = (params) =>
  request({
    url: config.data_url + '/xxx',
    params
  });
```
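The path concatenation above can be illustrated with a small helper; `buildUrl` is hypothetical and not part of the project code, it just makes explicit how the agreed `/rj/api` prefix composes with an endpoint path:

```javascript
// Hypothetical helper illustrating how the agreed interface prefix
// composes with endpoint paths; not part of the real project code.
const config = { data_url: '/rj/api' };

function buildUrl(base, path) {
  // Trim trailing slashes from base and leading slashes from path,
  // then join with a single slash, avoiding '/rj/api//xxx'.
  return base.replace(/\/+$/, '') + '/' + path.replace(/^\/+/, '');
}

// buildUrl(config.data_url, '/xxx') yields '/rj/api/xxx'
```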
Source analysis
As a lightweight, high-performance web server, Nginx's architecture and design are of great reference value, and offer guidance for the design of Node.js and other web frameworks
Nginx is written in C and assembles its whole architecture from modules, including common ones such as the HTTP module, event module, configuration module, and core module; the core module schedules and loads the other modules, realizing the interaction between them
Here we mainly forward applications through proxy_pass inside a location block, so let's look at how proxy_pass is handled in the proxy module
ngx_http_proxy_module
```c
static char *ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd,
    void *conf);

static ngx_command_t ngx_http_proxy_commands[] = {

    { ngx_string("proxy_pass"),
      NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,
      ngx_http_proxy_pass,
      NGX_HTTP_LOC_CONF_OFFSET,
      0,
      NULL }
};


static char *
ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_proxy_loc_conf_t *plcf = conf;

    size_t                      add;
    u_short                     port;
    ngx_str_t                  *value, *url;
    ngx_url_t                   u;
    ngx_uint_t                  n;
    ngx_http_core_loc_conf_t   *clcf;
    ngx_http_script_compile_t   sc;

    if (plcf->upstream.upstream || plcf->proxy_lengths) {
        return "is duplicate";
    }

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);

    /* proxy_pass registers the request handler for this location */
    clcf->handler = ngx_http_proxy_handler;

    if (clcf->name.len && clcf->name.data[clcf->name.len - 1] == '/') {
        clcf->auto_redirect = 1;
    }

    value = cf->args->elts;

    url = &value[1];

    n = ngx_http_script_variables_count(url);

    if (n) {

        ngx_memzero(&sc, sizeof(ngx_http_script_compile_t));

        sc.cf = cf;
        sc.source = url;
        sc.lengths = &plcf->proxy_lengths;
        sc.values = &plcf->proxy_values;
        sc.variables = n;
        sc.complete_lengths = 1;
        sc.complete_values = 1;

        if (ngx_http_script_compile(&sc) != NGX_OK) {
            return NGX_CONF_ERROR;
        }

#if (NGX_HTTP_SSL)
        plcf->ssl = 1;
#endif

        return NGX_CONF_OK;
    }

    if (ngx_strncasecmp(url->data, (u_char *) "http://", 7) == 0) {
        add = 7;
        port = 80;

    } else if (ngx_strncasecmp(url->data, (u_char *) "https://", 8) == 0) {

#if (NGX_HTTP_SSL)
        plcf->ssl = 1;

        add = 8;
        port = 443;
#else
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "https protocol requires SSL support");
        return NGX_CONF_ERROR;
#endif

    } else {
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid URL prefix");
        return NGX_CONF_ERROR;
    }

    ngx_memzero(&u, sizeof(ngx_url_t));

    u.url.len = url->len - add;
    u.url.data = url->data + add;
    u.default_port = port;
    u.uri_part = 1;
    u.no_resolve = 1;

    plcf->upstream.upstream = ngx_http_upstream_add(cf, &u, 0);
    if (plcf->upstream.upstream == NULL) {
        return NGX_CONF_ERROR;
    }

    plcf->vars.schema.len = add;
    plcf->vars.schema.data = url->data;
    plcf->vars.key_start = plcf->vars.schema;

    ngx_http_proxy_set_vars(&u, &plcf->vars);

    plcf->location = clcf->name;

    if (clcf->named
#if (NGX_PCRE)
        || clcf->regex
#endif
        || clcf->noname)
    {
        if (plcf->vars.uri.len) {
            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "\"proxy_pass\" cannot have URI part in "
                               "location given by regular expression, "
                               "or inside named location, "
                               "or inside \"if\" statement, "
                               "or inside \"limit_except\" block");
            return NGX_CONF_ERROR;
        }

        plcf->location.len = 0;
    }

    plcf->url = *url;

    return NGX_CONF_OK;
}
```
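The URI-part handling in this function is what makes the trailing slash in `proxy_pass` significant in everyday configuration. A sketch of the two behaviors (`ip:port` is a placeholder upstream):

```nginx
# Illustrative only; ip:port is a placeholder upstream address.
server {
    # proxy_pass WITH a URI part ("/rj/"): the matched location prefix
    # is replaced by the proxy_pass URI, so
    #   /rj/a.js  ->  http://ip:port/rj/a.js
    location /rj/ {
        proxy_pass http://ip:port/rj/;
    }

    # proxy_pass WITHOUT a URI part: the original request URI is passed
    # through unchanged, so
    #   /other/a.js  ->  http://ip:port/other/a.js
    location /other/ {
        proxy_pass http://ip:port;
    }
}
```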
ngx_http
```c
static ngx_int_t
ngx_http_add_addresses(ngx_conf_t *cf, ngx_http_core_srv_conf_t *cscf,
    ngx_http_conf_port_t *port, ngx_http_listen_opt_t *lsopt)
{
    ngx_uint_t             i, default_server, proxy_protocol;
    ngx_http_conf_addr_t  *addr;
#if (NGX_HTTP_SSL)
    ngx_uint_t             ssl;
#endif
#if (NGX_HTTP_V2)
    ngx_uint_t             http2;
#endif

    /*
     * we cannot compare whole sockaddr struct's as kernel
     * may fill some fields in inherited sockaddr struct's
     */

    addr = port->addrs.elts;

    for (i = 0; i < port->addrs.nelts; i++) {

        if (ngx_cmp_sockaddr(lsopt->sockaddr, lsopt->socklen,
                             addr[i].opt.sockaddr,
                             addr[i].opt.socklen, 0)
            != NGX_OK)
        {
            continue;
        }

        /* the address is already in the address list */

        if (ngx_http_add_server(cf, cscf, &addr[i]) != NGX_OK) {
            return NGX_ERROR;
        }

        /* preserve default_server bit during listen options overwriting */
        default_server = addr[i].opt.default_server;

        proxy_protocol = lsopt->proxy_protocol || addr[i].opt.proxy_protocol;

#if (NGX_HTTP_SSL)
        ssl = lsopt->ssl || addr[i].opt.ssl;
#endif
#if (NGX_HTTP_V2)
        http2 = lsopt->http2 || addr[i].opt.http2;
#endif

        if (lsopt->set) {

            if (addr[i].opt.set) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "duplicate listen options for %V",
                                   &addr[i].opt.addr_text);
                return NGX_ERROR;
            }

            addr[i].opt = *lsopt;
        }

        /* check the duplicate "default" server for this address:port */

        if (lsopt->default_server) {

            if (default_server) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "a duplicate default server for %V",
                                   &addr[i].opt.addr_text);
                return NGX_ERROR;
            }

            default_server = 1;
            addr[i].default_server = cscf;
        }

        addr[i].opt.default_server = default_server;
        addr[i].opt.proxy_protocol = proxy_protocol;
#if (NGX_HTTP_SSL)
        addr[i].opt.ssl = ssl;
#endif
#if (NGX_HTTP_V2)
        addr[i].opt.http2 = http2;
#endif

        return NGX_OK;
    }

    /* add the address to the addresses list that bound to this port */

    return ngx_http_add_address(cf, cscf, port, lsopt);
}


static ngx_int_t
ngx_http_add_addrs(ngx_conf_t *cf, ngx_http_port_t *hport,
    ngx_http_conf_addr_t *addr)
{
    ngx_uint_t                 i;
    ngx_http_in_addr_t        *addrs;
    struct sockaddr_in        *sin;
    ngx_http_virtual_names_t  *vn;

    hport->addrs = ngx_pcalloc(cf->pool,
                               hport->naddrs * sizeof(ngx_http_in_addr_t));
    if (hport->addrs == NULL) {
        return NGX_ERROR;
    }

    addrs = hport->addrs;

    for (i = 0; i < hport->naddrs; i++) {

        sin = (struct sockaddr_in *) addr[i].opt.sockaddr;
        addrs[i].addr = sin->sin_addr.s_addr;
        addrs[i].conf.default_server = addr[i].default_server;
#if (NGX_HTTP_SSL)
        addrs[i].conf.ssl = addr[i].opt.ssl;
#endif
#if (NGX_HTTP_V2)
        addrs[i].conf.http2 = addr[i].opt.http2;
#endif
        addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol;

        if (addr[i].hash.buckets == NULL
            && (addr[i].wc_head == NULL
                || addr[i].wc_head->hash.buckets == NULL)
            && (addr[i].wc_tail == NULL
                || addr[i].wc_tail->hash.buckets == NULL)
#if (NGX_PCRE)
            && addr[i].nregex == 0
#endif
            )
        {
            continue;
        }

        vn = ngx_palloc(cf->pool, sizeof(ngx_http_virtual_names_t));
        if (vn == NULL) {
            return NGX_ERROR;
        }

        addrs[i].conf.virtual_names = vn;

        vn->names.hash = addr[i].hash;
        vn->names.wc_head = addr[i].wc_head;
        vn->names.wc_tail = addr[i].wc_tail;
#if (NGX_PCRE)
        vn->nregex = addr[i].nregex;
        vn->regex = addr[i].regex;
#endif
    }

    return NGX_OK;
}
```
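The duplicate-default-server check in `ngx_http_add_addresses` corresponds to a constraint visible in plain configuration: at most one server block per address:port may be marked `default_server`. An illustrative sketch (the server names are placeholders):

```nginx
# Illustrative: a second default_server on the same address:port would
# fail at configuration load with "a duplicate default server for ...".
server {
    listen 80 default_server;
    server_name gateway.example.com;   # placeholder name
}

server {
    listen 80;                         # ok: not marked default_server
    server_name app.example.com;       # placeholder name
}
```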
Conclusion
For a front-end gateway, the gateway can not only be separated out as an independent layer, but a single-spa-like scheme can also use front-end routing for gateway handling and application scheduling, achieving single-page-application control while keeping each sub-application separate. The benefit is that applications can communicate with each other through the parent application or an event bus, share public resources, and keep private resources isolated. For this project, the current business model is better suited to a separate gateway layer; with Nginx, various applications can be connected with a small amount of configuration, converging the front-end entrance. We will later provide scaffolding for the CI/CD build process, making it easy for application developers to adopt the build plan and achieve an engineering effect. For work whose volume multiplies through copying, we should think of engineering methods rather than blindly adding manpower; after all, machines are better at handling the same batch of work with stable output.