Two months ago, I wrote an article called “From Encapsulating Nginx NJS Tool Images”, giving a brief introduction to NJS released by the Nginx official team and the Docker images I customized for it.

In this article, I’ll show you how to use Nginx NJS to write a set of API aggregators with minimal lines of code, and how to use Docker to encapsulate them as usable services.

Before we begin

This article touches on several topics; if you are not familiar with them, you can read my previous related articles to deepen your understanding:

  • Docker and container encapsulation, previous articles
  • Nginx and its modules, previous articles
  • Nginx NJS, past articles, NJS-learning-Materials

To simulate and demonstrate near-real aggregation functionality, I picked two endpoints from the official websites of open source projects I use frequently:

  • MySQL: https://www.mysql.com/common/chat/chat-translation-data.json
  • Redis: https://redislabs.com/wp-content/themes/wpx/proxy/signup_proxy.php

All right, we’re all set. Let’s get started.

Write Nginx NJS scripts

Great oaks grow from little acorns, so let's start with the simplest part.

Write the basic Nginx interface using NJS

Before we try to aggregate interfaces, let's write a basic version that lets Nginx emulate an interface returning something like {code: 200, desc: "This is the description"}.

If you're familiar with Node or any other back-end language, it should be obvious what the code below does: we first define a function called simple, then define the interface data we want to return, then set the Nginx response content type to UTF-8-encoded JSON and the HTTP status code to 200, and finally export simple from the module so it can be called publicly.

function simple(req) {
    var result = { code: 200, desc: "" };
    req.headersOut["Content-Type"] = "application/json; charset=UTF-8";
    req.return(200, JSON.stringify(result));
}

export default { simple };

Save the above as app.js and place it in a directory called script for later use. Next we declare a configuration file that allows Nginx to call NJS:

load_module modules/ngx_http_js_module.so;

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    js_import app from script/app.js;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        charset utf-8;
        gzip on;

        location / {
            js_content app.simple;
        }
    }
}

Save the above as nginx.conf, which we will also use later.

As you can see, this configuration file does not look very different from the previous one, but it does have some "differences". If you remove all the non-NJS content, you can clearly see how NJS interacts with Nginx.

load_module modules/ngx_http_js_module.so;
...
http {
    ...
    js_import app from script/app.js;

    server {
        ...
        location / {
            js_content app.simple;
        }
    }
}

The first step is to load the ngx_http_js_module.so module with an explicit global declaration, then to import the script we wrote into the scope of the Nginx http block, and finally to call the script's exported method to serve requests.

For easy validation of the service, we also need to write a simple compose orchestration file:

version: '3'

services:
  nginx-api-demo:
    image: nginx:1.19.8-alpine
    restart: always
    ports:
      - 8080:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./script:/etc/nginx/script

As mentioned in the previous article, NJS is now an official Nginx module and ships with the official Docker image by default, so we will use the latest official image nginx:1.19.8-alpine.
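
If you want to confirm for yourself that the module ships with the image, a quick check along these lines should list ngx_http_js_module.so (a sketch; the modules/ directory here is the same one the load_module directive above references, relative to /etc/nginx):

docker run --rm nginx:1.19.8-alpine ls /etc/nginx/modules/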

Start the service with docker-compose up, then visit localhost:8080 to see the result.
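
You can also verify it from the command line; a minimal check, assuming the 8080:80 port mapping from the compose file above:

curl -i http://localhost:8080/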

Unlike having Nginx call an external CGI program, you can see that the interface processing time here is only 1ms. Granted, this is partly because the code we implemented is trivially simple, but calling out to an external program usually adds far more overhead because of the network round trip. From this point of view, letting Nginx participate directly in computing the result has real performance potential whenever no "external program" computation is required.

Try writing an interface to get remote data

Next, following the same pattern as before, we write an interface that can fetch remote data: instead of returning the data we defined ourselves, we return the result of a request made with the subrequest method.

function fetchRemote(req) { req.subrequest("https://www.mysql.com/common/chat/chat-translation-data.json").then((response) => { req.headersOut["Content-Type"] = "application/json; charset=UTF-8"; req.return(200, JSON.stringify(response)); }) } export default { fetchRemote };Copy the code

For the sake of distinction, we will change the function name to the more appropriate “fetchRemote”, and then update the calling method in the nginx.conf file as well:

...
location / {
    js_content app.fetchRemote;
}
...

Then restart the service with docker-compose up and visit localhost:8080 again to verify that the results of the program are as expected.

However, the page returns something like the following:

{" status ": 404," args ": {}," httpVersion ":" 1.1 ", "remoteAddress" : "172.21.0.1", "headersOut" : {" content-type ":" text/HTML ", "Conte nt-Length":"555"},"method":"GET","uri":"https://www.mysql.com/common/chat/chat-translation-data.json","responseText":"<h tml>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found < / h1 > < / center > \ r \ n < hr > < center > nginx / 1.19.8 < / center > \ r \ n the < / body > < / HTML > \ r \ r \ n \ n <! -- a padding to disable MSIE and Chrome friendly error page -->\r\n<! -- a padding to disable MSIE and Chrome friendly error page -->\r\n<! -- a padding to disable MSIE and Chrome friendly error page -->\r\n<! -- a padding to disable MSIE and Chrome friendly error page -->\r\n<! -- a padding to disable MSIE and Chrome friendly error page -->\r\n<! -- a padding to disable MSIE and Chrome friendly error page -->\r\n","headersIn":{"Host":"localhost:8080","Connection":"keep-alive","Cache-Control":"max-age=0","sec-ch-ua":"\"Googl e Chrome\"; v=\"89\", \"Chromium\"; v=\"89\", \"; Not A Brand\"; v=\"99\"","sec-ch-ua-mobile":"? DNT 0 ", "" :" 1 ", "the Upgrade - Insecure - Requests" : "1", "the user-agent" : "Mozilla / 5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, Like Gecko) Chrome / 89.0.4389.90 Safari / 537.36 ", "Accept", "text/HTML, application/XHTML + XML, application/XML. Q = 0.9, image/avif, image/webp image/apng, * / *; Q = 0.8, application/signed - exchange; v=b3; Q = 0.9 ", "the Sec - Fetch - Site" : "none", "the Sec - Fetch - Mode" : "navigate", "the Sec - Fetch - User" : "? 1","Sec-Fetch-Dest":"document","Accept-Encoding":"gzip, deflate, br","Accept-Language":"zh-CN,zh; Q = 0.9, en. Q = 0.8, ja. Q = 0.7}}"Copy the code

The page returns data, but it’s obviously not what we want.

Check the Nginx logs to learn more about why this error occurred.

[error] 33#33: *1 open() "/etc/nginx/htmlhttps://www.mysql.com/common/chat/chat-translation-data.json" failed (2: No such file or directory), client: 172.21.0.1, server: localhost, request: "GET / HTTP/1.1", subrequest: "https://www.mysql.com/common/chat/chat-translation-data.json", host: "localhost:8080"
...

Now let's look at the right way to do it.

Get remote data correctly

The error occurs here because NJS's subrequest method only issues requests internally within Nginx, so to reach a remote address it has to go through a reverse-proxy location.

So we change the request target to an Nginx reverse-proxy location. Because this location is only used by NJS calls, there is no need to expose it publicly; we add the internal directive to restrict external access, preventing anything other than NJS subrequests from reaching our remote interface:

location /proxy/api-mysql {
    internal;
    proxy_pass https://www.mysql.com/;
    proxy_set_header Host www.mysql.com;
}

Then modify the request address in the previous code:

function fetchRemote(req) { req.subrequest("/proxy/api-mysql/common/chat/chat-translation-data.json").then((response) =>  { req.headersOut["Content-Type"] = "application/json; charset=UTF-8"; req.return(200, JSON.stringify(response)); }) } export default { fetchRemote };Copy the code

Starting the service again, we can see that we are now able to get remote data, but the result looks problematic:

{" status ": 200," args ": {}," httpVersion ":" 1.1 ", "remoteAddress" : "172.27.0.1", "headersOut" : {" content-type ":" application/json" ,"Content-Length":"1863","X-Frame-Options":"SAMEORIGIN","Strict-Transport-Security":"max-age=15768000","Last-Modified":" Tue, 27 Nov 2018 20:34:52 GMT","Accept-Ranges":"bytes","Vary":"Accept-Encoding","Content-Encoding":"gzip","X-XSS-Protection":"1; mode=block","X-Content-Type-Options":"nosniff"},"method":"GET","uri":"/proxy/api-mysql/common/chat/chat-translation-data . Json, "" the responseText" : "\ u001f � \ b \ u0000 \ u0000 \ u0000 \ u0000 \ u0000 \ u0000 \ u0003 � Z [o \ u0013G \ u0014 ~ G �? � � W (\ u0002 � � J � R \ u0014 � � � b K � JT} \ u0018 {� � $�] 3 � � 4 � � |! 4 j � � I � � & $� � P (� �; QA � � �} \ u001b \ u0016 \ u0007 '1 � _ � \ u0019 � \ u001d � � � c (� M \ "9 ^  9 � � � � sf, � � \ u0006 \ u0019 +! P \ \ u0003 � � � u0016}  � \ b � � � � \ u0017B \ rD � � �? ᄆ � � 98 � � � D B e \ u0010 � � q o \ u0003 � ؂ � � c [lh @ U \ u00022 � xk � � \ u0004Copy the code

The reason for this problem is that the remote server is returning gzip-compressed data to us. We have two choices here: tell the server that we don't support gzip, or let Nginx decompress the retrieved data.
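
For reference, the first option would look roughly like the sketch below: the same proxy location as before, but with the Accept-Encoding header cleared so the upstream is asked not to compress the response.

location /proxy/api-mysql {
    internal;
    proxy_pass https://www.mysql.com/;
    proxy_set_header Host www.mysql.com;
    # Ask the upstream for uncompressed data by clearing Accept-Encoding
    proxy_set_header Accept-Encoding "";
}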

However, even if we tell the remote server that we don't support gzip, it may still send compressed data anyway (common with CDNs), so we recommend the second option: modify the Nginx configuration again so that Nginx automatically decompresses the remote data.

location /proxy/api-mysql {
    internal;
    gunzip on;
    proxy_pass https://www.mysql.com/;
    proxy_set_header Host www.mysql.com;
}

But another problem occurs when we restart the service for testing:

[error] 32#33: *4 closing request, client: 172.28.0.1, server: 0.0.0.0:80
[error] 32#33: *8 too big subrequest response while sending to client, client: 172.28.0.1, server: localhost, request: "GET / HTTP/1.1", subrequest: "/proxy/api-mysql/common/chat/chat-translation-data.json", upstream: "https://137.254.60.6:443//common/chat/chat-translation-data.json", host: "localhost:8080"

This happens because the decompressed data is much larger than the buffer Nginx allocates for subrequest responses by default, so we need to make a further adjustment:

subrequest_output_buffer_size 200k;

location /proxy/api-mysql {
    internal;
    gunzip on;
    proxy_pass https://www.mysql.com/;
    proxy_set_header Host www.mysql.com;
}

The subrequest_output_buffer_size value can be adjusted according to your needs. Restart the service once more and you will see that we can now get the correct remote interface data.

Write programs with aggregation capabilities

Since we are aggregating multiple interfaces, we need to make some adjustments to both the NJS code and the Nginx configuration.

I won't demonstrate the clumsy sequential execution pattern here, because for interfaces that don't depend on each other, fetching them concurrently and asynchronously delivers results in as little time as possible. Of course, there are scenarios that call for serial requests, and I'll talk about how to use NJS flexibly to control the request flow in a later article.

// https://github.com/nginx/njs/issues/352#issuecomment-721126632
// A minimal Promise.all-style helper: resolves once every subrequest settles
function resolveAll(promises) {
  return new Promise((resolve, reject) => {
    var n = promises.length;
    var rs = Array(n);
    var done = () => {
      // Resolve only after the last promise has finished
      if (--n === 0) {
        resolve(rs);
      }
    };
    promises.forEach((p, i) => {
      p.then((x) => {
        // Keep results in the same order as the input promises
        rs[i] = x;
      }, reject).then(done);
    });
  });
}

function aggregation(req) {
  // Internal reverse-proxy locations defined in nginx.conf
  var apis = ["/proxy/api-mysql/common/chat/chat-translation-data.json", "/proxy/api-redis/wp-content/themes/wpx/proxy/signup_proxy.php"];
  // Fire all subrequests concurrently and wait for every response
  resolveAll(apis.map((api) => req.subrequest(api)))
    .then((responses) => {
      var result = responses.reduce((prev, response) => {
        var uri = response.uri;
        // Derive the result key from the location name, e.g. "mysql", "redis"
        var prop = uri.split("/proxy/api-")[1].split("/")[0];
        try {
          var parsed = JSON.parse(response.responseText);
          if (response.status === 200) {
            prev[prop] = parsed;
          }
        } catch (err) {
          req.error(`Parse ${uri} failed.`);
        }
        return prev;
      }, {});
      req.headersOut["Content-Type"] = "application/json;charset=UTF-8";
      req.return(200, JSON.stringify(result));
    })
    .catch((e) => req.return(501, e.message));
}

export default { aggregation };

Next, make some adjustments to the Nginx configuration file:

...
    location / {
        js_content app.aggregation;
    }

    subrequest_output_buffer_size 200k;

    location /proxy/api-mysql {
        internal;
        gunzip on;
        proxy_pass https://www.mysql.com/;
        proxy_set_header Host www.mysql.com;
    }

    location /proxy/api-redis {
        internal;
        gunzip on;
        proxy_pass https://redislabs.com/;
        proxy_set_header Host redislabs.com;
    }
...

Finally, we start the service again to verify that we can get the correct remote data and aggregate it.
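
Given how the aggregation code above derives its keys from the location names, the response should be a single JSON object keyed by upstream name, roughly of this shape (payloads abbreviated):

{
  "mysql": { ... },
  "redis": { ... }
}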

Now that we’ve got what we want, let’s talk a little bit about container encapsulation.

Use containers to encapsulate NJS applications

As mentioned earlier, official Nginx images include the NJS module by default, so we can use nginx:1.19.8-alpine directly as the base for building our image.

The Dockerfile is very simple, requiring only three lines:

FROM nginx:1.19.8-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY script/app.js /etc/nginx/script/app.js

Save the above content as a Dockerfile and use docker build -t njs-api . to build our image.

If you use docker images to inspect the result, you will find that the image we built is very small, almost the same size as the official Nginx image, which gives it a big advantage for distribution over the public network: thanks to Docker's incremental (layered) distribution, we actually only distribute the last two of the three lines (layers), which amount to just a few kilobytes.

njs-api   latest          f4b6de5dacb8   3 minutes ago   22.6MB
nginx     1.19.8-alpine   5fd75c905b52   7 days ago      22.6MB

After building the image, use docker run --rm -it -p 8090:80 njs-api to further verify that the service works properly; unsurprisingly, we get the same results as shown in the previous section.
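
As before, a quick command-line check, assuming the 8090:80 port mapping above:

curl -i http://localhost:8090/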

Finally

All right, so to sum up.

In this article, because we did not introduce any runtime outside the official Nginx image, the resulting image is very small and well suited to network distribution.

NJS and Nginx integrate simply and cleanly: the NJS program is released at the end of the request lifecycle, the NJS engine is relatively efficient and only implements a subset of ECMAScript (keeping overall complexity low), and subrequest lifecycles are very short. As a result, our service can deliver performance close to native Nginx while using resources at a level close to native Nginx.

If you write a lot of business code, you'll notice that this article leaves some obvious improvements unaddressed: how to improve the performance of the aggregated interface, how to work with third-party modules in custom Nginx images and environments, and what more complex tasks NJS can actually handle.

I’ll expand on these in the next NJS article.

–EOF


This article is published under an Attribution 4.0 International (CC BY 4.0) license.

Author: Su Yang

Created: March 18, 2021 | Word count: 9,759 words | Reading time: 20 minutes | Link to this article: soulteary.com/2021/03/18/…


A little self-promotion :)

If you think this article is good and want to learn more about NJS production practices, please like this article.

Readers who know me know that I have always been a fairly laid-back writer who generally writes as the mood takes me. Many "sequels" are already mostly drafted, but for lack of stimulation and motivation they sit silently in the draft box.

Visit counts across platforms, blog traffic statistics, and bookmark numbers all look fine, but without direct feedback I don't know whether it's worth spending time polishing drafts that are "about to get moldy".

Well, that's about it. The rest is up to you: whatever proves popular, I will write more about.

I hope my words help you avoid unnecessary pitfalls and wasted effort.