Translated from Levelup.gitConnected.com by Bitfish

In a front-end project, our web pages typically need to send multiple HTTP requests to the server.

Suppose our product has a feature where the client sends an HTTP request to the server every time the user clicks an <li> tag.

Here’s a simple Demo:

<html>

<body>
    <ul>
        <li>1</li>
        <li>2</li>
        <li>3</li>
        <li>4</li>
        <li>5</li>
        <li>6</li>
        <li>7</li>
        <li>8</li>
        <li>9</li>
    </ul>

    <script>
        // Suppose this function is used to make HTTP requests to the server
        var sendHTTPRequest = function(message) {
            console.log('Start sending HTTP message to the server: ', message)
            console.log('1000ms passed')
            console.log('HTTP Request is completed')
        }

        var ul = document.getElementsByTagName('ul')[0];

        ul.onclick = function(event) {
            if (event.target.nodeName === "LI") {

                // Executes this function every time the <li> tag is clicked.
                sendHTTPRequest(event.target.innerText)
            }
        }
    </script>
</body>

</html>

In the above code, we simulate sending an HTTP request with a simple sendHTTPRequest function. I simplified this part so we can focus on the core idea.

We then bind the click event to the UL element. Each time the user clicks an <li> tag, such as <li>5</li>, the client executes the sendHTTPRequest function to make an HTTP request to the server.

The procedure above looks like this:

To make it easier for you to try, I made a CodePen demo: codepen.io/bitfishxyz/…

Of course, in a real project, we might send a file to the server, push notifications, or send some logs. But for the sake of demonstration, we’ll skip over these details.
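For context, a real-world sendHTTPRequest might wrap fetch and post the message to your backend. The snippet below is only a rough sketch; the '/api/messages' endpoint is a placeholder, not part of this demo:

// A rough sketch of what a real sendHTTPRequest might look like.
// The endpoint '/api/messages' is only a placeholder.
var sendHTTPRequest = function (message) {
    return fetch('/api/messages', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: message })
    }).then(function (response) {
        console.log('HTTP Request is completed:', response.status)
    })
}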

Ok, this is a very simple demonstration, so what are the weaknesses of the above code?


If your project is very simple, you should have no problem writing code like this. However, if your project is complex and the client needs to send HTTP requests to the server frequently, this code is inefficient.

In the example above, what happens if a user clicks the <li> elements repeatedly and quickly? The client then makes frequent HTTP requests to the server, and each request costs time and server resources.

Each time a client establishes a new HTTP connection with the server, it consumes some time and server resources. Therefore, it is usually more efficient to transfer everything in one request than to transfer many small pieces across separate requests.

For example, suppose you need to transfer 5MB of data. You could send five HTTP requests with a 1MB payload each, or a single request carrying the whole 5MB. The latter is usually expected to perform better than the former.

A large number of HTTP requests on a web page can slow down the page and ultimately hurt the user experience; if the page doesn’t load fast enough, visitors may simply leave.

Therefore, in this case, we can consider merging HTTP requests.

In our current project, my thinking goes like this: we could set up a cache locally, collect all the messages that need to be sent to the server within a certain time window, and then send them together.

You can pause for a moment and try to figure it out yourself.

Tip: You need to create a local cache object to collect the messages that need to be sent. You then need a timer that sends the collected messages periodically.

Here is one implementation:

var messages = [];
var timer;
var sendHTTPRequest = function (message) {
  // Cache the message instead of sending it immediately
  messages.push(message);
  if (timer) {
    return;
  }
  // Send all cached messages together after 2 seconds
  timer = setTimeout(function () {
    console.log("Start sending messages: ", messages.join(","));
    console.log("1000ms passed");
    console.log("HTTP Request is completed.");

    clearTimeout(timer);
    timer = null;
    messages = [];
  }, 2000);
};

Whenever the client needs to send a message (that is, whenever the onclick event is triggered), sendHTTPRequest no longer sends the message to the server immediately. Instead, it caches the message in the messages array. A timer then fires after 2 seconds and sends all previously cached messages to the server at once. This change achieves the goal of merging HTTP requests.
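Note that the click handler from the first demo does not need to change at all; it still calls sendHTTPRequest exactly as before, and the batching now happens inside that function:

ul.onclick = function (event) {
    if (event.target.nodeName === "LI") {
        // Same call as before; the new sendHTTPRequest caches the message
        // and sends the batch 2 seconds later
        sendHTTPRequest(event.target.innerText)
    }
}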

The test results are as follows:

As you can see, even though we triggered the click event multiple times, we only sent one HTTP request within two seconds.

Of course, I set the wait time to 2 seconds for demonstration purposes. If you think that is too long, you can shorten it.

For projects that don’t require much real-time interaction, a 2-second delay isn’t a big side effect, but it can take a lot of stress off the server. In the right circumstances, it can be well worth it.


The above code does provide some performance improvements for the project. But in terms of code design, it is not good.

First, it violates the single responsibility principle. The sendHTTPRequest function not only sends HTTP requests to the server, but also merges them. This function takes on too many tasks, which makes the code look complicated.

If a function (or object) takes on too many responsibilities, it will often have to change significantly whenever our requirements change. Such a design does not respond well to change, which makes it a bad design.

Our ideal code would look like this:

Instead of making any changes to sendHTTPRequest, we put a proxy in front of it. This proxy function merges HTTP requests and passes the merged message to sendHTTPRequest for sending. From then on, we simply use proxySendHTTPRequest directly.

You can pause for a moment and try to figure it out yourself.

Here is one implementation:

var proxySendHTTPRequest = (function () {
  var messages = [];
  var timer;
  return function (message) {
    // Cache the message instead of passing it on immediately
    messages.push(message);
    if (timer) {
      return;
    }
    // Forward all cached messages to the real function after 2 seconds
    timer = setTimeout(function () {
      sendHTTPRequest(messages.join(","));
      clearTimeout(timer);
      timer = null;
      messages = [];
    }, 2000);
  };
})();

The basic idea is similar to the previous code: we use the messages array to cache all messages for a certain amount of time, and then send them together through a timer. In addition, this code uses a closure to keep the messages and timer variables in local scope and avoid polluting the global namespace.

The main difference from the previous code is that it does not change the sendHTTPRequest function, but hides it behind proxySendHTTPRequest. Instead of accessing sendHTTPRequest directly, we access it through proxySendHTTPRequest, which has the same parameter list and return value as sendHTTPRequest.

What are the benefits of this design?

• The tasks of sending and merging HTTP requests are given to two different functions, each focusing on one responsibility. This follows the single responsibility principle and makes the code easier to understand.
• Since both functions take the same arguments, we can simply use proxySendHTTPRequest in place of sendHTTPRequest at the call site without making any major changes, as shown below.
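In our demo, the only change needed is at the call site inside the click handler:

ul.onclick = function (event) {
    if (event.target.nodeName === "LI") {
        // Swap in the proxy; the parameters stay exactly the same
        proxySendHTTPRequest(event.target.innerText)
    }
}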

Imagine if, in the future, network performance improves, or for some other reason we no longer need to merge HTTP requests. With the previous design, we would have to modify the code drastically again. With the current design, we can simply switch back to the original function name.

In fact, this coding technique is often referred to as the proxy pattern in design patterns.

The proxy pattern is easy to understand through examples from real life.

• Let’s say you want to visit a website, but you don’t want to give away your IP address. You can use a VPN: you first access your proxy server, and the proxy server then accesses the target website for you. That way, the target site doesn’t know your IP address.
• Sometimes, you hide your real server behind an Nginx server and let the Nginx server handle trivial operations for it.

These are examples of the proxy pattern in real life.

We don’t need to worry about a formal definition of the proxy pattern (or any other design pattern). The point is this: when the client cannot (or should not) access the target function (or object) directly, we can provide a proxy function (or object) to control access to it. The client actually accesses the proxy, which does some processing of the request and then passes it on to the target.
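As a minimal, generic sketch (the names and the validation rule here are made up for illustration and are not part of the demo above), a proxy exposes the same interface as its target and decides how to forward calls:

// Hypothetical target function
var sendMessage = function (message) {
    console.log('Sending: ', message)
}

// The proxy has the same interface, but adds its own processing
// before handing the request to the target
var proxySendMessage = function (message) {
    if (typeof message !== 'string' || message.trim() === '') {
        return // filter out invalid requests before they reach the target
    }
    sendMessage(message.trim())
}

// The caller uses the proxy exactly as it would use the target
proxySendMessage('  hello  ')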


If you have any questions or think this article needs improvement, you are welcome to leave a comment. Your advice and criticism are very useful to me. Thank you.