This article uses the runtime-isolation work in ByteDance’s micro frontend project to explain in depth how the sandbox is implemented and the problems we ran into along the way. We share the key details and the lessons learned from several years of practice, hoping they help and inspire you. For our overall thinking on micro frontends, please refer to “The Polishing and Application of Front-end Micro Service in Bytedance”; we assume readers are familiar with it before reading this article.
1. What should a sandbox do
Let’s start with what a sandbox means for micro frontends, and for the front end in general. The concept of a sandbox is not new to software engineering, and the need for isolation on the front end has been around for a long time. Depending on the business scenario, there have been many distinctive explorations of it.
The ancient iframe
It all starts with the iframe, which at first glance looks like a decent solution. That is what people who have never actually used it imagine. Some problems only surface once you actually try to aggregate a full site through iframes: a naive iframe-based aggregation is cumbersome and takes a lot of manual work to patch up.
The old iframe scheme does solve the coupling problem to some extent. Specifically, a page is split into N frames, and each frame runs under its own domain.
Its benefits are clear: each frame is developed, deployed, and run independently, and none interferes with the others. But is that the end of the sandbox story? It is hard to say whether this even counts as a sandbox, and there are many different opinions. Some would argue that a sandbox should not be a completely independent environment like this, but rather an environment that simulates independence while remaining isolated. We will revisit this idea later and share our thinking from an implementation perspective.
A complete project contains a large amount of shared functionality and code, such as login identity and in-site messaging; the business module is only one part of it. Implementing all of that through cross-window communication is time-consuming and laborious, and once single-page rendering with React or similar techniques became the norm, iframes lost much of their appeal. Breaking through these limits is much harder.
The age-old difficulty
The first one needs little explanation: deep linking. Any serious project needs it, and routing has been essential ever since the MVC era.
Then there are all kinds of shared concerns, such as how to share login state. None of this is impossible with iframes; like many of the problems mentioned before and after, it is just very troublesome, with many difficulties to work through. In the end you can arrive at a reasonably good iframe sandbox.
Another obvious difficulty is passing component libraries, component styles, and the underlying code and in-memory objects of rendering engines like React and Vue between parent and child frames. The initial approach was to add chunk splitting, extract the common chunk, deploy it independently on a CDN, and rely on the browser’s own cache to speed up loading. But runtime memory is not shared, and runtime changes to those packages are hard to reuse.
Then there is the data layer: the design of the data store, and so on. The data layer needs at least some shared event access; otherwise you might have to release four or five projects just to ship one requirement.
2. What should a sandbox look like
As mentioned earlier, we don’t want sandboxes to be completely separate runtime environments, but rather robust environments with sharing and collaboration. In this chapter we hope to clarify what functions we want sandboxes to provide and how to use them.
Virtualization, containerization, Docker
And here comes the good stuff: Docker. From the perspective of decoupling, server-side microservices rely on Docker for their underlying virtualization support, so that service developers never feel the differences between environments and runtime differences are erased. For microservices, Docker has been a cornerstone of development for years.
The concept of microservices itself is not new; service-oriented architecture was theorized long ago. But adoption stayed limited, because landing it in practice was hard: provisioning virtual machines was troublesome, and the development experience suffered. Does the image I package have to include an entire operating system just to be delivered consistently? That is bad for the development experience.
Before Docker was widely adopted, server-side microservices were mostly built on virtual machines, which by comparison are far more complex to use and more costly to maintain. We all know how much trouble virtual machines are: they are not in the same order of magnitude as containers, snapshots eat your disk, and coordinating and allocating resources effectively across multiple services is extremely difficult.
Many of these costs were not resolved until Docker’s sandbox system arrived, and microservices then became a trend. Unfortunately, no such container environment exists in the browser at runtime.
So you can already see that what we expect from a front-end sandbox is something like Docker (in this metaphor, the iframe is the virtual machine). The mechanism we built acts as a Docker-like runtime container for the front end, so that splitting the front end becomes easier, sharing becomes easier, and resources are saved. This is not, of course, a rejection of iframes.
3. How should sandboxes be made
So how do we build this kind of sandbox: lightweight, resource-efficient, with an emphasis on communication and collaboration between components? I will introduce it from three directions. (This chapter contains no concrete implementation yet; it only establishes that such a sandbox can be built in the browser.)
3.1 Single-process vs. multi-process
We emulate a process-switching strategy modeled on how a single-core operating system schedules processes. Our sandbox essentially lets one browser run multiple “standalone” applications, so imitating (and eventually converging toward) an operating system is unavoidable. Here JavaScript gives us a unique execution property compared with other languages: it is single-threaded. Everything we do runs in one thread, so our “operating system” is limited to a single core from the start.
How does an operating system run multiple processes in parallel? In the single-process case, simple rules such as top-level routing are enough: only one application is active at a time, so we can switch contexts at route boundaries. For multi-process parallelism, we take advantage of the fact that JavaScript lets us wrap every entry into the event loop: setTimeout, handlers for various event callbacks, and so on. We switch the context just outside the actual function, then execute the function that was originally bound. This keeps things “thread-safe”. It comes down to the following two points (see the sketch after the list):
- Wrap route switching to simulate a single-core, single-process model.
- Wrap every entry into the event loop to simulate a single-core, multi-process model.
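As a hedged sketch of the second point, wrapping setTimeout so a callback always runs inside the sandbox that scheduled it might look like this (`activateSandbox` and `getActiveSandbox` are hypothetical placeholders for the context switching described in the next section):

```js
// Minimal sketch: wrap setTimeout so that callbacks always run in the context of
// the sandbox that scheduled them. `activateSandbox` and `getActiveSandbox` are
// hypothetical helpers standing in for the context switching described below.
function createSandboxedSetTimeout(sandbox, originalSetTimeout = window.setTimeout.bind(window)) {
  return function sandboxedSetTimeout(callback, delay, ...args) {
    return originalSetTimeout(() => {
      const previous = getActiveSandbox(); // whoever is active when the timer fires
      activateSandbox(sandbox);            // switch the global context to the owner
      try {
        callback(...args);                 // run the callback inside its own context
      } finally {
        activateSandbox(previous);         // switch back even if the callback throws
      }
    }, delay);
  };
}
```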
3.2 Context switching
Context switching simulates thread safety like this: just before an isolated child-application “process” is activated, we find whichever child application is currently active, record the full live state of the “operating system” on its behalf, and save that as its context. Then we restore, or create, the context belonging to the new “process” about to be activated.
As mentioned above, we record the current state as a context so that each child application works against its own context without affecting or changing anyone else’s. All of this switching is driven by the parent application that hosts the children.
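A minimal sketch of this idea, looking only at enumerable properties on window (a real implementation also has to deal with the DOM, timers, event listeners, and the deleted-key problem discussed below):

```js
// Minimal sketch: each child application owns a context that snapshots window
// when it is deactivated and restores that snapshot when it is activated again.
// Only enumerable window properties are handled; DOM, timers, listeners, and
// keys deleted while another sandbox ran (see the two-pass diff below) are not.
class SandboxContext {
  constructor(name) {
    this.name = name;
    this.snapshot = null;
  }
  deactivate() {
    // Record the full live state of the "operating system" on this app's behalf.
    this.snapshot = {};
    for (const key of Object.keys(window)) {
      this.snapshot[key] = window[key];
    }
  }
  activate() {
    // Restore the context this app last saw (nothing to restore on first activation).
    if (!this.snapshot) return;
    for (const key of Object.keys(this.snapshot)) {
      window[key] = this.snapshot[key];
    }
  }
}
```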
This article focuses on hands-on practice and the pitfalls we stepped into; stay tuned, and we will share more practical experience on this part.
Deleted keys force you to traverse twice, so that each object gets walked once. The important point is that when you compare the old snapshot with the new state, it is not enough to iterate over the keys of one object and look them up in the other, because a key may have been deleted: a deleted key simply never shows up in that traversal. To capture deletions you have to walk both objects, old and new, and see which keys each side has that the other does not. It is especially easy to forget this detail when diffing the “idle” state against a fresh sandbox.
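A minimal sketch of such a two-pass diff (the names are illustrative):

```js
// Minimal sketch: diff two plain snapshots of window state.
// Walking only `before` would miss keys that exist only in `after`, and vice versa,
// so both objects are traversed once each.
function diffSnapshots(before, after) {
  const changed = {}; // key -> new value (added or modified)
  const deleted = []; // keys present before but gone now
  for (const key of Object.keys(after)) {
    if (!(key in before) || before[key] !== after[key]) {
      changed[key] = after[key];
    }
  }
  for (const key of Object.keys(before)) {
    if (!(key in after)) {
      deleted.push(key);
    }
  }
  return { changed, deleted };
}
```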
Is the performance of context switching good enough? Let’s start with the space cost of these snapshots. With N sandboxes, how many switch combinations do we need? Do we store the full context, or the full difference between every pair of sandboxes? Actually, no. We only need to record each sandbox’s changes, and only its difference from the “idle” state. Say the sandboxes are A, B, C, D, E, F, and G. Instead of recording every transition such as A→B and A→C, we introduce a virtual idle state O, treat every switch as A→O, O→B, and so on, and keep only each sandbox’s difference from O. The number of things to compare drops from a product over the child applications to a sum, and a simple loop can compute the changes quickly.
To sum up: every time a child application starts or ends, we first switch back to the virtual “initial state”, restore that scene, and only then enter the newly activated sandbox. Each switch records the information of a single sandbox, which avoids having to compare and store switching information for the full Cartesian product of sandbox pairs.
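Putting the pieces together, a sketch of switching through the virtual idle state O, using `diffSnapshots` from the earlier sketch; `takeSnapshot`, `restoreSnapshot`, `applyDiff` and `idleSnapshot` are hypothetical helpers:

```js
// Minimal sketch: every switch goes through the virtual idle state O,
// so each sandbox only ever stores its own diff against O.
let activeSandbox = null;

function switchSandbox(next) {
  if (activeSandbox) {
    // Remember the outgoing sandbox's changes relative to O, then roll window back to O.
    activeSandbox.diffFromIdle = diffSnapshots(idleSnapshot, takeSnapshot(window));
    restoreSnapshot(window, idleSnapshot);
  }
  if (next && next.diffFromIdle) {
    // Re-apply the incoming sandbox's recorded changes on top of O.
    applyDiff(window, next.diffFromIdle);
  }
  activeSandbox = next;
}
```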
4. Bytedance’s sandbox solution
Although this chapter is titled “Bytedance’s sandbox solution” and, as mentioned above, focuses on technical details at the implementation level, we will not limit ourselves to the solutions we adopted; we will also cover approaches we studied and compared. Where we think a technique is good and suits other scenarios, we try to share it as well.
4.1 CSS sandbox
Let’s start with the CSS sandbox. Web Components have done a lot here and evolved considerably. I have to say that one part of the web standards that fascinated me and that I found genuinely interesting was scoped CSS, where an attribute combined with a DOM subtree limits the scope of your CSS. That standard has since been dropped, giving way to the Shadow DOM system.
I never quite understood this decision: with scoped CSS, outside rules can reach in but inside rules do not leak out, whereas Shadow DOM is a complete split. That huge difference gives them very different engineering significance. We will talk about CSS Modules later, which behave exactly like scoped styles, not like Shadow DOM.
Both CSS Modules and CSS-in-JS write or compile styles into scripts and add a unique (nonce-like) attribute to the outermost layer of the DOM the script generates, then apply that attribute to all the CSS rules under their control. The downside is that this is a bit more cumbersome and requires complete control over all DOM creation; Angular, being a full front-end framework, does this naturally.
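Conceptually the transformation looks like the sketch below. This is illustrative only, in the spirit of Angular’s emulated view encapsulation or Vue’s scoped styles, not the real implementation of any particular library:

```js
// Illustrative sketch of attribute-based style scoping.
const SCOPE_ATTR = 'data-scope-a1b2c3'; // a unique, per-component attribute

// 1. Every element the component renders gets the attribute:
//    <div data-scope-a1b2c3 class="title">...</div>
function markElement(el) {
  el.setAttribute(SCOPE_ATTR, '');
}

// 2. Every controlled CSS rule is rewritten to require that attribute,
//    e.g. ".title { ... }" becomes ".title[data-scope-a1b2c3] { ... }".
function scopeSelector(selector) {
  return selector
    .split(',')
    .map((s) => `${s.trim()}[${SCOPE_ATTR}]`)
    .join(', ');
}
```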
As I will mention later, the most popular npm package in this space has a fun behavior that can cause surprising bugs.
We use the DOM sandbox to protect the tags inside head, so style and link tags themselves are sandboxed uniformly. In practice our child-application developers also use CSS Modules in their business components, which we do not need to care about; removing the tags is the safest option either way.
The DOM sandbox treats these as just DOM nodes: whatever a child application changes gets changed back when the sandbox is switched. Tracking the style and link tags it owns covers the vast majority of cases, but this only works in the single-process scenario.
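A minimal sketch of this bookkeeping for style and link tags in head (illustrative only):

```js
// Minimal sketch: record the style/link tags a child application adds to <head>,
// remove them when it is deactivated, and put them back when it is reactivated.
class HeadTagSandbox {
  constructor() {
    this.ownedTags = [];
  }
  // The child application adds styles through this method instead of touching head directly.
  appendTag(tag /* HTMLStyleElement | HTMLLinkElement */) {
    document.head.appendChild(tag);
    this.ownedTags.push(tag);
  }
  deactivate() {
    this.ownedTags.forEach((tag) => tag.remove());
  }
  activate() {
    this.ownedTags.forEach((tag) => document.head.appendChild(tag));
  }
}
```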
In the multi-process case mentioned earlier (N sandboxes running in parallel), CSS does not enjoy JavaScript’s single-threaded execution, so we use modularized, scoped CSS instead. This is not hard, and plenty of open-source libraries are available. There is no need to worry about teams referencing different versions of the same component library, hacking it, or accidentally producing “singleton-like” styles, because everything is compiled and scoped.
Be careful with the styled-components package on npm: it decides its behavior based on environment variables, and in production it enables a mode called “speedy”, which, instead of writing style rules via innerText, inserts them through the CSSOM insertRule API. The standard does not seem to clearly define how such a style tag should behave when it is removed from the document, perhaps because it seems obvious that its rules stop applying. But when we insert the tag back, that ambiguity bites: in practice the browser keeps no rules across removing and re-inserting the tag, so this is something we have to handle ourselves.
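One way to compensate, as a hedged sketch: before removing such a tag, serialize the rules out of its sheet, and write them back after re-inserting it (names are illustrative):

```js
// Minimal sketch: rules added via insertRule live only in the CSSStyleSheet,
// so they are lost when the <style> tag is removed and re-inserted.
// Serialize them before removal and re-add them after re-insertion.
function detachStyleTag(styleEl) {
  const rules = styleEl.sheet
    ? Array.from(styleEl.sheet.cssRules).map((rule) => rule.cssText)
    : [];
  styleEl.remove();
  return rules; // keep this alongside the tag in the sandbox context
}

function reattachStyleTag(styleEl, rules) {
  document.head.appendChild(styleEl);
  rules.forEach((cssText, i) => styleEl.sheet.insertRule(cssText, i));
}
```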
4.2 Global variable sandbox
Another important problem is interference through global variables. Polyfills, other environment-related global objects, and environment variables differ widely in their concrete implementations, but they all touch the global scope. From the point of view of a sub-application or a modular sub-component, they belong to the surrounding global environment, not to the module itself.
This is a major focus of any micro-frontend implementation; at least, I personally think so, and you can tell some people are not convinced: “Who doesn’t know not to write globals? Nobody is that sloppy.” In practice there are plenty of them once you actually look. Toutiao, for example, uses a plugin library for cropping images. It is a sophisticated, decent, classic package that supports both React and jQuery, and it registers a global singleton. Our teams in different lines of business really were using different versions of that package during development and debugging.
That one happened not to matter and caused no problem. A more serious example is regeneratorRuntime, the helper involved in compiling async syntax; under one Babel configuration this object gets deleted from the global scope. What exactly it is does not need explaining here, but it certainly conflicts and causes problems: at one point the polyfill rules of our Xigua (Watermelon) Video team clashed with another line of business in exactly this way. So the sandbox diffs and deletes it, restores it on the way back, and deletes it again when switching back to the Xigua app.
Identifiers are another concern. Do you know exactly what an identifier is? An identifier is the name a variable occupies within a scope; declarations made with function, let, class, and const all create one. Only names produced by var are special and do not occupy an identifier in the same way; the others do, and once a name is taken it cannot be declared again.
These bindings, first of all, cannot be walked: there is nothing to enumerate. Second, they are not members of any object, just names that exist at the language level. And once created, they cannot be deleted.
When var a is used in the global scope, it effectively produces a top-level identifier and additionally creates a key with the same name on the global object, pointing at the same value. That extra step is specific to the var statement, and it is what lets us handle such globals by iterating over window.
But with const there is simply no way in, and the same goes for class: nothing to enumerate and nothing to delete. The best you can do is shadow it with another declaration using the class keyword.
In short, don’t agonize over this: wrapping the child application’s code in new Function is all but unavoidable. It also lets you pass in parameters such as setTimeout, which is how we control the asynchronous side of “multi-process” parallelism.
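A minimal sketch, assuming we load the child application’s bundle as a string; the `sandbox.fakeWindow` object (for example a Proxy over window) and `sandbox.setTimeout` are hypothetical objects prepared by the sandbox host:

```js
// Minimal sketch: wrap a child application's code in new Function so that its
// top-level var/function declarations stay out of the real global scope, and
// hand it controlled versions of selected globals (e.g. setTimeout).
function runInSandbox(codeText, sandbox) {
  const run = new Function(
    'window',
    'setTimeout',
    `with (window) {\n${codeText}\n}`
  );
  run(sandbox.fakeWindow, sandbox.setTimeout);
}
```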
There is also location: don’t touch it, or the page will reload. Just put it on a blacklist.
Here is another fun fact: a function declaration, like var, adds an extra key to window. The resulting property has configurable: false, so it carries no extra configuration options, but it is still writable, so you can assign to it.
So with a plain var a you can delete window.a and a becomes undefined, but with function a, delete window.a does not work. And if you write both function a and var a = 1, you bind an undeletable number to window: it inherits function a’s undeletable nature and var a’s value.
Class is even more interesting. Declare class B {} and console.log(window.B): undefined. Now assign B = 1 and check window.B again: still nothing, the assignment never reaches window. This reveals a hidden mechanism behind the class keyword: the name B is bound at the global level, but it never shows up as an enumerable, accessible property of window. It behaves like a binding that lives outside the normal property attributes (writable, enumerable, configurable) we can inspect on an object.
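The differences are easy to observe in a plain script (the behavior shown reflects how these declarations interact with the global object; details can vary between a real script and the DevTools console):

```js
// Observing how different declarations show up on window (run as a normal script).
var a = 1;
function f() {}
class B {}

console.log('a' in window); // true  - var creates a window property
console.log('f' in window); // true  - function declarations do too
console.log('B' in window); // false - class creates only a lexical binding

// Property attributes of the function-created key:
console.log(Object.getOwnPropertyDescriptor(window, 'f'));
// -> { value: f, writable: true, enumerable: true, configurable: false }

// The class binding is reachable by name, but not through window:
console.log(typeof B);      // "function"
console.log(window.B);      // undefined
```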
4.3 Other
There are many other objects that need “process safety”, such as cookies, but these are less critical. For cookies it is enough to make every write specify a path: cookies can be scoped by path as well as domain, it is just that most people never set it (leaving it at the root, “/”).
localStorage can also be protected, depending on your business needs. Since it is just a global on window, it is fine to implement a wrapper class that mimics the original localStorage behavior: every method first prefixes the key and then delegates to the original method. The prefix can simply be the UUID of the current sandbox, and window.localStorage itself is already protected by the sandbox as a global variable.
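A minimal sketch of such a wrapper (how the sandbox UUID is produced and injected is illustrative):

```js
// Minimal sketch: a localStorage wrapper that namespaces keys per sandbox.
// `sandboxId` would be the UUID of the current sandbox.
class ScopedStorage {
  constructor(sandboxId, storage = window.localStorage) {
    this.prefix = `${sandboxId}:`;
    this.storage = storage;
  }
  setItem(key, value) {
    this.storage.setItem(this.prefix + key, value);
  }
  getItem(key) {
    return this.storage.getItem(this.prefix + key);
  }
  removeItem(key) {
    this.storage.removeItem(this.prefix + key);
  }
  clear() {
    // Only clear this sandbox's keys, not everyone else's.
    for (const key of Object.keys(this.storage)) {
      if (key.startsWith(this.prefix)) this.storage.removeItem(key);
    }
  }
}
```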
5. Additional features of the sandbox
Finally, this last chapter covers a few special things in the sandbox that need extra handling, with analytics (event tracking) as the main focus. In most micro-frontend projects a single page contains tracking events that belong to different projects, so the question is how to figure out which sub-application an event belongs to, which analytics code should report it, and at which cache level it should be handled.
5.1 The tracking cache system
As described earlier, all Storage caches are sandboxed, but that alone does not finish the job for a tracking system. The vast majority of tracking systems send events asynchronously, waiting for the network to go idle, and the code usually lives in an SDK that the parent project does not directly control, so there is little room for manoeuvre. The only option is to store the cached event data together with its project information, so that each cached, collected event stays matched to the sandbox that was active when the data was generated.
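As a hedged sketch (real tracking SDKs differ; the queue, the `getActiveSandboxId` helper, and `sendToAnalytics` reporter here are illustrative):

```js
// Minimal sketch: tag each tracking event with the sandbox that produced it,
// so asynchronous flushing still reports it against the right project.
const eventQueue = [];

function track(eventName, payload) {
  eventQueue.push({
    eventName,
    payload,
    sandboxId: getActiveSandboxId(), // hypothetical helper exposed by the sandbox host
    time: Date.now(),
  });
}

function flushWhenIdle() {
  // Group queued events by sandbox and send each group with its own project info.
  const bySandbox = new Map();
  for (const evt of eventQueue.splice(0)) {
    if (!bySandbox.has(evt.sandboxId)) bySandbox.set(evt.sandboxId, []);
    bySandbox.get(evt.sandboxId).push(evt);
  }
  for (const [sandboxId, events] of bySandbox) {
    sendToAnalytics(sandboxId, events); // hypothetical per-project reporter
  }
}
```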
5.2 The console
A sandbox may wrap the runtime in one or more extra layers, which can make the console tiring to read. During development we add extra handling to make life easier for developers. In today’s front-end projects the expectation online is that the console prints as little as possible and looks as normal as possible, and nothing is more annoying when debugging than noise left behind by someone else’s logging. A sandbox can help with all of that; we even built log uploading that feeds console content directly into our collection system.
Specifically, we inject the call stack into each log entry using a new Error(): you can read the stack through error.stack. The value is just a string, split by newlines, that can be rendered as linkable text or mapped back to the source shown in the debugger.
The same goes for real exceptions: in a catch block you can rewrite the error’s stack value before rethrowing it. Strip the frames nobody needs, such as the new Function wrapper layer added by the sandbox, by deleting that line; the message text can be adjusted as well.
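A minimal sketch of this kind of stack cleanup (the frame-matching pattern is illustrative and depends on how your wrapper shows up in stacks in a given browser):

```js
// Minimal sketch: drop the sandbox's own wrapper frames from a stack trace
// before rethrowing, so developers see only their application's frames.
function cleanStack(error) {
  if (typeof error.stack === 'string') {
    error.stack = error.stack
      .split('\n')
      // "<anonymous>" frames produced by the new Function wrapper; the exact
      // pattern depends on the browser and on your wrapper code.
      .filter((line) => !line.includes('<anonymous>'))
      .join('\n');
  }
  return error;
}

try {
  riskyChildAppCall(); // hypothetical call into sandboxed code
} catch (err) {
  throw cleanStack(err);
}
```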
5.3 Source maps
Source maps originated with Google’s Closure tooling and have since become a de facto standard. The principle is a mapping from character positions in the generated code to character positions in the source. Do they still work inside the sandbox, under new Function? Yes, they do.
Let’s look at how new Function behaves first. In Chrome the code is debugged as a new anonymous script (shown as “anonymous”), where line and column positions count from the first line of the function body string. If you take a bundle compiled with its sourceMap and feed it to new Function, those positions line up exactly, so no extra hack is required.
This also works because Chrome correctly recognizes the //# sourceMappingURL= comment at the end of the string passed to new Function, and applies it to everything in the call stack. Business teams often worried about this, assuming that since their .js was downloaded as text and repackaged, call-stack debugging and source maps would break. In practice there is no problem; both work fine.
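As a hedged sketch of what that looks like when loading a child application’s bundle (the URL, the sandbox-provided globals, and the helper name are illustrative):

```js
// Minimal sketch: fetch a child application's bundle as text and execute it via
// new Function. The trailing "//# sourceMappingURL=..." comment in the bundle is
// preserved, so DevTools can still map stack frames back to the original source.
async function loadChildApp(bundleUrl, sandbox) {
  const codeText = await fetch(bundleUrl).then((res) => res.text());
  // codeText typically ends with a line like: //# sourceMappingURL=app.js.map
  const run = new Function('window', 'setTimeout', codeText);
  run(sandbox.fakeWindow, sandbox.setTimeout); // hypothetical sandbox-provided globals
}
```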
In addition, as we have mentioned on other occasions, one of the many requirements of a micro frontend is a platform for service discovery, resource and version management, which handles independent releases, taking applications offline, and combined testing. As a side benefit, that platform also gives us a place to manage sourceMaps.
Conclusion
That concludes this article, which shared two years of experience using and implementing the micro frontend sandbox at Bytedance, along with the thinking behind how we faced these challenges. We were lucky to have supportive partners for the project; it was a great success and significantly improved the quality of our heavyweight products.
Like many cutting-edge, still-maturing concepts, the micro frontend itself is evolving and being validated rapidly, and our concrete practices keep changing as we discover weaknesses, correct them, and explore new possibilities. In this journey from imperfect to better, it is our privilege to share our results with readers, and we welcome comments, discussion and suggestions. We also welcome more talented people to join us; please contact us at [email protected] or follow the internal referral link.
Welcome to follow the Bytedance Technical Team.