Hot update

Of course, this is just my own one-sided and incomplete understanding; there are probably a bunch of mistakes in here. Corrections are welcome.

Front-end hot update

Since we are already talking about hot updates, we might as well cover how automatic updates are implemented on the front end. My knowledge here is shallow, but I have roughly seen two approaches:

  • Full refresh: basically the browser-sync approach. Just reload the page directly; simple and crude, and it sidesteps a lot of problems
  • Incremental update: webpack-dev-server's HMR

Let's talk briefly about how webpack-hot-middleware implements hot updates.

I am not going to cover how the new module replaces and overrides the results of the previous execution here.

Personal understanding: it is just a simple event mechanism.

The server side

Essentially, the server keeps a connection open and keeps pushing data down it.

// the server side
publish: function(payload) {
  /**
   * everyClient writes to every open connection.
   * The response headers are roughly:
   *   Content-Type: 'text/event-stream; charset=utf-8'
   *   Connection: 'keep-alive' (only needed on HTTP/1)
   */
  everyClient(function(client) {
    client.write('data: ' + JSON.stringify(payload) + '\n\n');
  });
},
// ...

The client

The client receives this data and processes it. Of course, the data here is already formatted. Since this part of the code runs in the browser, what is left is mostly ordinary DOM-side handling.

/**
* @see https://github.com/webpack-contrib/webpack-hot-middleware/blob/master/client.js
*/
function processMessage(obj) {
  switch (obj.action) {
    case 'building':
      if (options.log) {
        console.log(
          '[HMR] bundle ' +
            (obj.name ? "'" + obj.name + "' " : '') +
            'rebuilding'
        );
      }
      break;
    // ...
    default:
      if (customHandler) {
        customHandler(obj);
      }
  }
  if (subscribeAllHandler) {
    subscribeAllHandler(obj);
  }
}

Instead of socket.io (you can currently find a similar socket-based implementation in webpack-dev-server), webpack-hot-middleware switches to EventSource (server-sent events).
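To make the channel concrete, here is a minimal sketch of the client end of such an EventSource connection. This is not the library's actual code; '/__webpack_hmr' is only the middleware's default endpoint path, and the handling is simplified:

// Minimal sketch of the client end of the event channel (not the library's real code).
// '/__webpack_hmr' is webpack-hot-middleware's default SSE path; adjust if configured differently.
var source = new EventSource('/__webpack_hmr');

source.onmessage = function (event) {
  if (event.data === '\uD83D\uDC93') return;   // heartbeat payload (see below), just ignore it
  var payload = JSON.parse(event.data);        // e.g. { action: 'building' | 'built' | 'sync', ... }
  processMessage(payload);                     // hand off to a handler like the one below
};

source.onerror = function () {
  // EventSource reconnects on its own; log so dropped connections are visible
  console.log('[HMR] connection error, waiting for reconnect...');
};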

Liveness check mechanism

Incidentally, there is a set of mechanisms implemented to improve stability.

// client
// Every message refreshes the last-activity time; a timer periodically checks whether we have exceeded the timeout
function handleOnline() {
  if (options.log) console.log('[HMR] connected');
  lastActivity = new Date();
}

function handleMessage(event) {
  lastActivity = new Date();
  for (var i = 0; i < listeners.length; i++) {
    listeners[i](event);
  }
}

var timer = setInterval(function() {
  if (new Date() - lastActivity > options.timeout) {
    handleDisconnect();
  }
}, options.timeout / 2);

// server
// Periodically send a message so the client can refresh its last-activity time
var interval = setInterval(function heartbeatTick() {
  everyClient(function(client) {
    client.write('data: \uD83D\uDC93\n\n');
  });
}, heartbeat).unref();
// ...

Node hot update

After briefly covering the basics of front-end hot updates, let's look at hot updates at the lower level, in Node.

How do I update module code

First problem: how do we get new code into a running program?

  • First, define what the incoming code is: it is really just a string (a pile of bytes underneath; treating it as a string is fine). To compile and execute that string, I found the following ways
    • eval to execute the code
    • the Function constructor to create a function
    • the vm module to run the code
A little supplement

When I tried them out, I actually hit one small problem.

const vm = require("vm");
var a = 1;
var b = 1;
var c = 1;
d = 1;

vm.runInNewContext(
  `
  console.log('vm', d);
  console.log('vm', typeof a);
  a = 2;
  console.log('vm', a);
  `,
  global
);
console.log("a", a);
console.log("-- -- -- -- -- -- --");
eval(` console.log('eval',typeof b); b = 2; console.log('eval',b) `);
console.log("b", b);

console.log("-- -- -- -- -- -- --");
const test = Function(
  ` console.log('function',d) console.log('function',typeof c); c = 2; console.log('function',c)`
);
test();
console.log("c", c);

/*
vm 1
vm undefined
vm 2
a 1
-------
eval number
eval 2
b 2
-------
function 1
function undefined
function 2
c 1
*/

I found that the Function results above are different from what you get in a browser. Looking into why: a function created with the Function constructor can only see global variables plus its own internal variables. When Node loads a file it wraps it in a function, so `var a = 1` at the top of a module is a local variable of that wrapper. In a browser, by contrast, declaring a variable at the top level attaches a property with that value to the global object. The variable d (assigned without var, so it really does land on global) makes the difference easy to see.

summary

  1. Function cannot see variables from the enclosing scope, only globals, which makes it hard to run hot-loaded code as a separate, safe module (the only workaround I can think of is keeping shared state on global, which causes plenty of problems of its own). See the vm sketch after this list.
  2. Code run through eval or Function is hard to debug; debuggers have a hard time setting breakpoints inside it, so it is troublesome and not recommended.
  3. eval reportedly runs slower. I have not tested this; in any case all of the approaches above load code from strings, so there seems to be no way to benefit from precompilation.
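For point 1, the vm module at least lets hot-loaded code run against a sandbox object you control instead of leaking onto the real global. A minimal sketch, with a made-up sandbox shape and module snippet:

const vm = require('vm');

// A throwaway "global" for the hot-loaded snippet; nothing leaks onto the real global object.
const sandbox = { exports: {} };
vm.createContext(sandbox);

vm.runInContext(
  `exports.greet = function (name) { return 'hello ' + name; };`,
  sandbox
);

console.log(sandbox.exports.greet('hot update')); // hello hot update
console.log(typeof greet);                        // 'undefined' in the real module scope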

Why doesn’t JavaScript recommend eval?

Node module loading mode

To figure out how to swap code in, we first need to understand how Node.js itself loads modules.

Talk is cheap, show me the code. Let's pull up two pieces of source:

  • The require function
  • The module module

How Node modules are loaded and run

Now let's dig in and see how Node handles a JS module. I probably won't explain it very well, so you might first read Ruan Yifeng's write-up, although the Node version he covers seems a bit old TAT

Below is part of the module-loading source, along with a brief outline of how the whole program runs: Node starts by initializing a main module; each file pulled in via require becomes a new Module, which is cached and whose parent/child relations are recorded, eventually forming a tree. The whole process can be seen from there.

/**
 * Some debug output details removed; see the source for the full version.
 * @link https://github.com/nodejs/node/blob/0817840f775032169ddd70c85ac059f18ffcc81c/lib/internal/modules/cjs/loader.js#L874:33
 */
Module._load = function(request, parent, isMain) {
  const filename = Module._resolveFilename(request, parent, isMain);
  /**
   * The result is cached, so each module is executed only once.
   * The cache is also exposed on require; in the require definition:
   * `require.cache = Module._cache;`
   */
  const cachedModule = Module._cache[filename];
  if (cachedModule) {
    // Record the module reference relationship again; these references also affect what GC can collect.
    updateChildren(parent, cachedModule, true);
    return cachedModule.exports;
  }

  const mod = NativeModule.map.get(filename);
  if (mod && mod.canBeRequiredByUsers) {
    return mod.compileForPublicLoader(experimentalModules);
  }

  /* * Don't call updateChildren(), Module constructor already does. */ 
  const module = new Module(filename, parent);

  if (isMain) {
    // You can actually see the project's reference structure by recursively printing mainModule
    process.mainModule = module;
    module.id = '.';
  }
  Module._cache[filename] = module;
  
  /**
   * If you are interested, take a look at this function. Roughly it:
   * 1. Builds the list of candidate lookup folders (visible in module.paths):
   *    + the directory itself
   *    + ./node_modules
   *    + ../node_modules and so on up the tree
   * 2. Gets the file extension (the content after the last '.'; tips: mind files that start with '.')
   * 3. Calls the default resolution function for that extension
   *
   * The process involves a bunch of attempts (see tryPackage, tryFile, tryExtensions),
   * so for performance's sake write requires as explicitly as possible and spare the
   * program from guessing the path you mean; it may also have to read package.json and so on.
   *
   * emmm, there is also an experimentalModules branch in here
   * @link https://github.com/nodejs/node-eps/blob/master/002-es-modules.md
   */
  tryModuleLoad(module, filename);

  return module.exports;
};
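The comment above about process.mainModule hints at a quick way to see this tree for yourself. A small sketch (my own code; in recent Node versions process.mainModule is deprecated in favour of require.main):

// Print the module tree starting from the entry module.
// require.main (a.k.a. process.mainModule) is the Module instance of the entry file.
function printTree(mod, depth = 0, seen = new Set()) {
  if (seen.has(mod)) return;   // guard against circular requires
  seen.add(mod);
  console.log(' '.repeat(depth * 2) + mod.filename);
  for (const child of mod.children) {
    printTree(child, depth + 1, seen);
  }
}

printTree(require.main);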

Here is a quick supplement on how the JS file itself is parsed.

First of all, our file is really just text. Node wraps it in a function: `(function (exports, require, module, __filename, __dirname) { ... \n});`. It then calls either `compileFunction` or the vm module's `runInThisContext`, depending on whether the wrapping method has been overridden. Emmm, both are implemented in C++, which I do not really understand TAT. @link https://github.com/nodejs/node/blob/5f8ccecaa2e44c4a04db95ccd278a7078c14dd77/src/node_contextify.h
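Here is a rough sketch of that wrapping step, simplified and using made-up file paths; the real loader goes through an internal compileFunction with more options:

const vm = require('vm');

// What a module file conceptually becomes before it is run (simplified).
const source = `exports.answer = 42; console.log('loaded from', __filename);`;

const wrapped =
  '(function (exports, require, module, __filename, __dirname) {\n' +
  source +
  '\n});';

// Compile the wrapper in the current context; the completion value is the function itself.
const fn = vm.runInThisContext(wrapped, { filename: '/tmp/fake-module.js' });

// Named module_ so it does not clash with the real `module` variable of this file.
const module_ = { exports: {} };
fn(module_.exports, require, module_, '/tmp/fake-module.js', '/tmp');

console.log(module_.exports.answer); // 42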

At this point we can roughly see how modules are loaded in Node. I have probably drifted from the title, so to recap: what we are trying to figure out is how to load new content into a running program. Along the way we also learned where constants like __filename come from.

How do I delete old references

[FBI WARNING] This part is actually quite troublesome, and my approach is far from perfect, but skipping it can easily lead to higher memory consumption.

Now that we have a way to bring in the new, let's look at how to get rid of the old in Node, which means taking a quick look at the GC mechanism.

I am just a noob and do not understand the C++ source. I never got through WebKit Tech Insider; I have only read a few articles and skimmed the Node.js book. The following summary mainly comes from chapter 5 of that book.

A shameless plug for that book aside, here is the gist:

  • In V8, memory is mainly divided into the new generation and the old generation
  • Objects in the new generation are short-lived; objects in the old generation are long-lived or resident in memory
  • The total size of the V8 heap is the new-generation memory plus the old-generation memory. It is not extended dynamically! The default is about 1.4 GB on 64-bit systems and 0.7 GB on 32-bit systems
  • The maximum values can be set with --max-old-space-size and --max-new-space-size (see the snippet after this list)
  • The new generation is collected mainly with the Scavenge algorithm
    • Scavenge is a copying garbage collection algorithm based on Cheney's algorithm
    • It divides the space into two semispaces, one in use (From) and one idle (To)
    • Objects are allocated in From; during collection, live objects are copied to To and dead ones are released, and finally the two spaces are swapped
  • Objects that survive Scavenge several times in the new generation (twice, as some articles point out) are promoted to the old generation
  • The old generation uses a combination of Mark-Sweep and Mark-Compact for collection
    • Mark-Sweep traverses all objects in the heap during the marking phase, marks the live ones, and then sweeps the unmarked ones
    • Mark-Compact addresses the memory fragmentation left behind by Mark-Sweep: once dead objects are marked, live objects are moved toward one end while sweeping
  • Incremental Marking was introduced because GC has to pause the program (otherwise memory and object states would get out of sync and the process would blow up), so GC really does make Node stop everything else for a while
    • Incremental marking splits the work that used to be done in one go into smaller steps
    • V8 later also introduced lazy sweeping and incremental compaction
  • Another point that comes up in some articles: how to judge whether an object is alive
    • At first this was done by counting references, but that has problems: circular references break it, and it is prone to memory leaks caused by references held from outside, which could reportedly be reproduced in IE6 and 7
    • The improved approach is to trace from the roots: anything unreachable is dead. This neatly solves the problems above
  • One more point: large objects are not very GC-friendly; having to examine such an object on every pass is exhausting, so spare the poor GC
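As a quick way to see these limits on your own machine, here is a small snippet using Node's built-in v8 module (the exact numbers depend on your Node version and flags):

const v8 = require('v8');

const { heap_size_limit, total_heap_size, used_heap_size } = v8.getHeapStatistics();
const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1) + ' MB';

console.log('heap size limit:', mb(heap_size_limit));   // the ceiling discussed above (varies by version/flags)
console.log('total heap size:', mb(total_heap_size));
console.log('used heap size :', mb(used_heap_size));

// To raise the old-generation limit, start node with e.g.:
//   node --max-old-space-size=4096 app.js
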
summary

After all of the above, we roughly understand that deleting old code comes down to deleting the references to it. So we just need to do the following (a minimal sketch comes right after the list):

  • After every require, delete the corresponding entry from require.cache
  • Delete the references recorded by updateChildren that we saw in the require source above
  • And take care of the references you hold yourself
  • Done, scatter flowers
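A minimal sketch of that deletion. The helper name clearModule and the traversal are my own; packages such as decache do this more thoroughly:

// Remove a module from require's cache so the next require() re-reads and re-executes the file.
// Naive sketch: it does not walk the module's own children.
function clearModule(request) {
  const filename = require.resolve(request);
  const cached = require.cache[filename];
  if (!cached) return;

  // Drop the back-reference recorded by updateChildren on the parent
  if (cached.parent) {
    cached.parent.children = cached.parent.children.filter((m) => m !== cached);
  }

  // Drop the cache entry itself; any reference we still hold to the old exports
  // must also be released, otherwise the old module can never be GC'd
  delete require.cache[filename];
}

// Usage: reload a (hypothetical) config module on demand
clearModule('./config');
const freshConfig = require('./config');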

conclusion

So for Node hot updates (hot deployment), I would personally suggest the following approaches

  • Active
    • Listen for file changes (or expose an interface), find a way to push the update in, and trigger loading of the new content (see the watcher sketch after this list)
    • Defeat Node's cache: delete the entry in require.cache and so on, and above all your own references!
  • Passive
    • Defeat Node's cache: delete the entry in require.cache and so on, and above all your own references!
    • Only re-require when you are actually called
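A rough sketch of the active approach, reusing the clearModule helper from the previous sketch. fs.watch is used here for brevity; it fires duplicate events on some platforms, so real setups usually reach for something like chokidar:

const fs = require('fs');
const path = require('path');

const target = path.resolve(__dirname, 'handler.js');   // hypothetical hot-reloadable module
let handler = require(target);

fs.watch(target, () => {
  try {
    clearModule(target);           // helper from the previous sketch
    handler = require(target);     // load the new version
    console.log('[hot] reloaded', target);
  } catch (err) {
    console.error('[hot] reload failed, keeping the old version', err);
  }
});

// Callers always go through the latest reference
module.exports = (...args) => handler(...args);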

Done, scatter flowers