Recently, the term “micro front end” came into my mind several times as I tried to improve the architecture of existing websites.
But the articles about micro frontends that I found online were always vague, so I translated this article, and it gave me a basic understanding of the concept. I have not yet decided whether to adopt a micro frontend architecture, as there does not seem to be an industry consensus on best practices yet.
The translation below is slightly abridged. (Original link)
Introduction
Getting the front end right is hard, and getting multiple teams to work on a large front-end application at the same time is even harder. There is a trend towards breaking front-end applications up into smaller, more manageable pieces. How does this architecture improve the efficiency of front-end teams?
This article elaborates on these questions. In addition to discussing the pros and cons, we'll cover some of the implementation options available and dive into a complete sample application.
Microservices have exploded in popularity in recent years, with many organizations using this architectural style to avoid the limitations of large monolithic applications. While there are many articles about microservices, many companies are limited to monolithic front-end applications.
Suppose you want to build a progressive web application, but it's hard to fit new functionality into your existing monolithic application. Or you want to start using new JS syntax (or TypeScript), but you can't fit the corresponding build tools into your existing build process. Or maybe you just want to scale development so that multiple teams can work on the same product at the same time, but the coupling and complexity of the existing application mean that developers keep stepping on each other's toes. These are real problems, and they greatly reduce the productivity of large teams.
Recently we have seen more and more attention being paid to the architecture of complex front-end applications; in particular, how to break the front end apart so that each piece can be developed, tested, and deployed independently while still appearing to users as a single whole. This technique is called micro frontends, and we define it as:
An architectural style that combines individual front-end applications into a larger whole
Of course, there is no free lunch when it comes to software architecture. Some micro front-end implementations can lead to very repetitive dependencies, which can increase downloads for users. Moreover, team autonomy can lead to team fragmentation. Nonetheless, we believe the risks are manageable and the benefits outweigh the costs.
Advantages of a micro front end
Incremental upgrades
For many teams, this is the first reason they embark on the micro frontend journey: technical debt has slowed development down to the point where a rewrite seems necessary. To avoid the risk of a big-bang rewrite, we would much rather replace the old modules one by one.
Simple, decoupled code base
Each individual miniature front-end application will have much less source code than the entire front-end application. These smaller code bases are easier for developers to maintain. In particular, we avoid the complexity of coupling between components.
Independent deployment
As with microservices, the independent deployment capability of the microfront end is key. With reduced deployment scope comes reduced risk. Every microfront-end application should have its own continuous delivery path, built, tested, and deployed.
Team autonomy
Each team needs to be organized vertically around business functions, not based on technical capabilities. This brings greater cohesion to the team.
In a nutshell
In short, micro frontends are about slicing big, scary things into smaller, more manageable pieces, and then being explicit about the dependencies between them. Our technology choices, our codebases, our teams, and our release processes should all be able to operate and evolve independently of each other, without excessive coordination.
Example
Imagine a food-ordering website. On the surface, this is a very simple concept, but there are a lot of surprising details if you want to do it well:
- There should be a landing page where customers can browse and search for restaurants. These restaurants should be searchable and filtered by a number of attributes, including price, dish name, or what customers previously ordered
- Each restaurant needs to have its own page that displays its menu and allows customers to place orders, select discounts and fill out additional requirements
- Customers should have a profile page where they can view order records and customize payment methods
Each page is complex enough that from a micro front end perspective we can assign each page to a dedicated team, and each team should be able to work independently of the others. They should be able to develop, test, deploy, and maintain their code without fear of conflict with other teams.
Integration
Given the fairly loose definition above, there are many ways to implement micro frontends. In this section we'll show some examples and discuss their trade-offs. In all of them, each page usually corresponds to one micro frontend, and there is a single container application that:
- Renders common page elements, such as headers and footers
- Addresses cross-cutting concerns such as authentication and navigation
- Brings the various micro frontends together onto the page, and tells each one when and where to render itself
Integration of back-end templates
We start in a very traditional way: rendering HTML on the server from multiple templates. We have an index.html that contains all the common page elements and uses server-side includes to pull in the page-specific template:
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8">
<title>Feed me</title>
</head>
<body>
<h1>🍽 Feed me</h1>
<!--#include file="$PAGE.html" -->
</body>
</html>
Then we configure Nginx, setting the $PAGE variable based on the requested URL:
server {
    listen 8080;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html;
    ssi on;

    # Redirect / to /browse
    rewrite ^/$ http://localhost:8080/browse redirect;

    # Decide which HTML fragment to include based on the path
    location /browse {
        set $PAGE 'browse';
    }
    location /order {
        set $PAGE 'order';
    }
    location /profile {
        set $PAGE 'profile';
    }

    # All other paths render /index.html
    error_page 404 /index.html;
}
This is a fairly standard server-side application. We can call it a micro front end because we make each page independent and deliverable by a separate team.
For greater independence, we could have a separate server rendering and serving each micro frontend, with one more server out in front that makes requests to the others. With careful caching of responses, the extra latency can be minimized.
This example shows that microfrontend doesn’t have to be a new technology, nor does it have to be complicated. As long as we maintain code isolation and team autonomy, we can achieve the same results no matter what technology stack we use.
Package integration
One approach we sometimes see is to publish each micro frontend as a package and have the container application declare all of them as dependencies, like this package.json:
{
  "name": "@feed-me/container",
  "version": "1.0.0",
  "description": "A food delivery web app",
  "dependencies": {
    "@feed-me/browse-restaurants": "^1.2.3",
    "@feed-me/order-food": "^4.5.6",
    "@feed-me/user-profile": "^7.8.9"
  }
}
At first glance this seems reasonable: it produces a single deployable bundle and lets us manage our dependencies in one place.
However, this approach means we have to recompile and release every single micro frontend in order to ship a change to any one of them. We strongly discourage this kind of micro frontend solution.
Integration through IFrame
Iframes are one of the simplest ways to integrate. In essence, pages loaded in iframes are completely independent and easy to build, and iframes also provide a good degree of isolation.
<html>
<head>
<title>Feed me!</title>
</head>
<body>
<h1>Welcome to Feed me!</h1>
<iframe id="micro-frontend-container"></iframe>
<script type="text/javascript">
const microFrontendsByRoute = {
  '/': 'https://browse.example.com/index.html',
  '/order-food': 'https://order.example.com/index.html',
  '/user-profile': 'https://profile.example.com/index.html',
};

const iframe = document.getElementById('micro-frontend-container');
iframe.src = microFrontendsByRoute[window.location.pathname];
</script>
</body>
</html>
Iframe is not a new technology, so the code above may not seem that exciting.
However, if we revisit the main benefits of micro frontends listed earlier, iframes can deliver them, as long as we are careful about how we slice up the application and how we structure our teams.
We often see people reluctant to choose iframes because they have a reputation for being a bit annoying, but iframes do have their advantages. The easy isolation mentioned above also tends to make them inflexible: it complicates routing, history, and deep linking, and makes fully responsive pages harder to build.
Integration with JS
This approach is probably the most flexible and the most frequently adopted one. Each micro frontend is included on the page with a script tag, and on load it exposes a global function as its entry point:
<html>
<head>
<title>Feed me!</title>
</head>
<body>
<h1>Welcome to Feed me!</h1>
<!-- These scripts don't render anything immediately -->
<!-- Instead they each expose a global render function -->
<script src="https://browse.example.com/bundle.js"></script>
<script src="https://order.example.com/bundle.js"></script>
<script src="https://profile.example.com/bundle.js"></script>
<div id="micro-frontend-root"></div>
<script type="text/javascript">
// These global functions are exposed by the script above
const microFrontendsByRoute = {
'/': window.renderBrowseRestaurants,
'/order-food': window.renderOrderFood,
'/user-profile': window.renderUserProfile,
};
const renderFunction = microFrontendsByRoute[window.location.pathname];
// Render the first microapplication
renderFunction('micro-frontend-root');
</script>
</body>
</html>
The above is a very basic example, demonstrating the general idea of JS integration.
Unlike package integration, we can deploy each application independently with different bundle.js.
Unlike iframe integration, we have complete flexibility: we can use JavaScript to control when each application's bundle is downloaded and pass extra parameters when rendering each application.
The flexibility and independence of this approach make it the most commonly used scheme. We’ll explore this in more detail when we present the full example.
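For instance, here is a small sketch (our own, not from the original article) of how a container could defer downloading each bundle until its route is first visited and pass an extra parameter at render time; the URLs and global function names follow the earlier example, and the locale parameter is purely hypothetical:
// Lazily load a micro frontend's bundle, then call the global render function it exposes.
const lazyMicroFrontendsByRoute = {
  '/': { src: 'https://browse.example.com/bundle.js', render: 'renderBrowseRestaurants' },
  '/order-food': { src: 'https://order.example.com/bundle.js', render: 'renderOrderFood' },
  '/user-profile': { src: 'https://profile.example.com/bundle.js', render: 'renderUserProfile' },
};

function loadScript(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

async function mountMicroFrontend(pathname, containerId) {
  const app = lazyMicroFrontendsByRoute[pathname];
  if (!window[app.render]) {
    await loadScript(app.src); // download the bundle only when it is first needed
  }
  // The second argument is a hypothetical extra parameter passed at render time.
  window[app.render](containerId, { locale: 'en' });
}

mountMicroFrontend(window.location.pathname, 'micro-frontend-root');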
Integration with Web Components
This is a variation of the previous approach, where each microapplication corresponds to one HTML custom element for the container to instantiate, rather than providing global functions.
<html>
<head>
<title>Feed me!</title>
</head>
<body>
<h1>Welcome to Feed me!</h1>
<!-- These scripts don't render anything immediately -->
<!-- Instead they each define a custom element type -->
<script src="https://browse.example.com/bundle.js"></script>
<script src="https://order.example.com/bundle.js"></script>
<script src="https://profile.example.com/bundle.js"></script>
<div id="micro-frontend-root"></div>
<script type="text/javascript">
// These tag names are defined by the code above
const webComponentsByRoute = {
  '/': 'micro-frontend-browse-restaurants',
  '/order-food': 'micro-frontend-order-food',
  '/user-profile': 'micro-frontend-user-profile',
};

const webComponentType = webComponentsByRoute[window.location.pathname];
// Render the first microapplication (custom tags)
const root = document.getElementById('micro-frontend-root');
const webComponent = document.createElement(webComponentType);
root.appendChild(webComponent);
</script>
</body>
</html>
The main difference is the use of Web Components instead of global functions. If you like the Web Components spec, this is a good option. If you would rather define your own interface between the container application and the micro applications, you may prefer the previous approach.
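For completeness, here is a rough sketch (assumed, not shown in the original) of what one of those bundles might do to define its custom element; the tag name matches what the container expects above:
// Inside e.g. https://browse.example.com/bundle.js
class MicroFrontendBrowseRestaurants extends HTMLElement {
  connectedCallback() {
    // Render when the container attaches the element to the page.
    // A real implementation would mount a framework app into `this`.
    this.innerHTML = '<h2>Browse restaurants</h2>';
  }

  disconnectedCallback() {
    // Clean up when the container removes the element.
    this.innerHTML = '';
  }
}

customElements.define('micro-frontend-browse-restaurants', MicroFrontendBrowseRestaurants);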
Styling
CSS as a language has no built-in module system, namespacing, or encapsulation, and the features that do provide these often lack browser support. In a micro frontend setting these problems get worse.
For example, if one team's micro frontend ships a stylesheet containing h2 { color: black; } and another team's contains h2 { color: blue; }, and both selectors end up on the same page, they will conflict!
This is not a new problem, but it is harder to avoid because these selectors are written by different teams at different times, and the code can be scattered across different libraries.
Over the years, many approaches have emerged to make CSS more manageable. Some teams use strict naming conventions, such as BEM, to keep the reach of each selector small. Others rely on preprocessors such as Sass, whose nested selectors can act as a form of namespacing. A newer approach is to write styles programmatically with CSS modules or one of the CSS-in-JS libraries. Some developers also use shadow DOM to isolate styles.
Whichever approach you pick, what matters is choosing a solution that guarantees each team's styles cannot interfere with the others'.
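As one concrete illustration (our own example; it assumes a team has chosen the CSS-in-JS route with the styled-components library), component-scoped styles avoid the h2 clash described above because the generated class names are unique:
import React from 'react';
import styled from 'styled-components';

// The generated class name is unique to this component, so this heading colour
// cannot clash with another team's global h2 rule.
const RestaurantHeading = styled.h2`
  color: blue;
`;

export const RestaurantTitle = ({ name }) => (
  <RestaurantHeading>{name}</RestaurantHeading>
);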
Shared component library
As mentioned above, visual consistency is important, and one solution is to share a library of reusable UI components across applications.
This is easier said than done. The main benefit of creating such a library is reduced effort through reuse. In addition, your component library can act as a living style guide and an important point of collaboration between developers and designers.
The first thing that can go wrong is creating too many components too soon. Let’s say you’re trying to create a component library that contains all the common UI components. However, experience has taught us that it is hard to guess what a component’s API should look like until it is actually used, and imposing a component can lead to early confusion. Therefore, we prefer to let the team create their own components based on requirements, even if this leads to some duplication initially.
Let the API come naturally, and once the component’s API becomes obvious, you can consolidate the repetitive code into a shared library.
As with any shared internal library, questions of ownership and governance are tricky. One model says that, as a shared asset, all developers own the library, which in practice means that no one owns it. Without clear conventions or technical direction, a shared component library can quickly become a hodgepodge of inconsistent code. At the other extreme, if development of the shared library is fully centralized, there will be a big disconnect between the people who create the components and the people who consume them.
The best collaboration we’ve seen is where anyone can contribute code to the library, but a custodian (a person or team) is responsible for ensuring the quality, consistency, and effectiveness of that code.
The people who maintain a shared library need a rare combination of strong technical skills and strong communication skills.
Communication across micro applications
One of the most common questions about microfrontends is how to get applications to talk to each other. We recommend communicating as little as possible, as this often introduces unnecessary coupling.
However, some cross-application communication is unavoidable. There are a few common options:
- Custom events are a good way to communicate while keeping coupling low, although they do make the contract between micro applications more implicit (see the sketch after this list).
- Alternatively, consider the mechanism familiar from React applications: passing callbacks and data down from the top (in this case, from the container application).
- The third option is to use the address bar as a communication bridge, which we’ll explore in more detail later.
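A minimal sketch of the custom-event option (our own example; the event name is just a convention we made up):
// In the 'order' micro frontend: announce that something happened.
window.dispatchEvent(
  new CustomEvent('basket:item-added', {
    detail: { restaurantId: 42, item: 'Pad Thai' },
  })
);

// In the container or another micro frontend: react to it, without either side
// importing anything from the other.
window.addEventListener('basket:item-added', event => {
  console.log('Basket updated:', event.detail);
});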
If you are using Redux, you will usually create a global state for the entire application. But if each microapplication is independent, then each microapplication should have its own Redux and global state.
Either way, we want our microapplications to communicate via messages or events and avoid any shared state to avoid coupling.
You should also consider how to automatically verify that the integration does not break. Functional tests are one solution, but we tend to write only a few of them because of their implementation and maintenance cost. Alternatively, you can implement consumer-driven contracts, where each micro application specifies what it requires of the other micro applications, so you don't have to actually integrate them all and test them together in a browser.
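For example, a lightweight contract check could look like the following sketch (our own, assuming Jest with a jsdom environment and a hypothetical path to the built bundle): the container's test suite simply asserts that each micro frontend exposes the globals it depends on.
describe('browse micro frontend contract', () => {
  beforeAll(() => {
    // Load the built bundle into the jsdom environment (hypothetical path).
    require('./build/static/js/main');
  });

  it('exposes the render and unmount entry points the container relies on', () => {
    expect(typeof window.renderBrowse).toBe('function');
    expect(typeof window.unmountBrowse).toBe('function');
  });
});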
Back-end communication
What about back-end development if we have separate teams working independently on front-end applications?
We believe in the value of a full-stack team, from interface code all the way through backend API development to database and website architecture.
Our recommended model is the Backends For Frontends model, where each front-end application has a corresponding back-end that serves only the needs of that front-end. The initial granularity of the BFF pattern may be one back-end application per front-end platform (PC page, mobile page, etc.), but eventually it will be one back-end application per micro application.
It should be noted here that a back-end application may have separate business logic and a database, or it may simply be an aggregator of downstream services. If a microfront-end application has only one API to communicate with, and that API is fairly stable, then building a separate backend for it may not be worth much at all. The guiding principle: Teams building microfront-end applications don’t have to wait for other teams to build something for them.
So if every new feature in a micro frontend also requires changes to its back-end API, that is a strong case for a BFF owned and developed by the same team.
Another common question is: how should users be authenticated and authorized?
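As a rough sketch of the BFF idea (our own, using Express and the global fetch available in Node 18+; the downstream service URLs are invented), a back end for the browse micro frontend might do nothing more than aggregate downstream services into the exact shape that front end needs:
const express = require('express');
const app = express();

// This BFF owns no data of its own; it only shapes downstream responses
// for the browse front end.
app.get('/api/browse/restaurants', async (req, res) => {
  const [restaurants, ratings] = await Promise.all([
    fetch('http://restaurant-service.internal/restaurants').then(r => r.json()),
    fetch('http://rating-service.internal/ratings').then(r => r.json()),
  ]);

  res.json(restaurants.map(r => ({ ...r, rating: ratings[r.id] ?? null })));
});

app.listen(3001);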
Obviously the user should only have to authenticate once, so authentication belongs in the container application. The container probably has some form of login, through which we obtain some kind of token. That token is owned by the container and can be injected into each micro frontend at initialization. Finally, the micro frontend can send the token along with its requests to the server, which validates it.
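A sketch of that token hand-off (our own; the extra token argument extends the render-function convention used later in this article, and loginAndGetToken is a hypothetical helper):
// In the container: authenticate once, then inject the token into each
// micro frontend at initialisation time.
loginAndGetToken().then(token => {
  window.renderBrowse('browse-container', history, token);
});

// In the micro frontend: accept the token and attach it to API requests,
// leaving the actual validation to the server.
window.renderBrowse = (containerId, history, token) => {
  const api = path =>
    fetch(path, { headers: { Authorization: `Bearer ${token}` } });
  // ...render the application here, giving it access to `api`...
};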
Testing
In terms of testing, we didn’t see much difference between the monolithic front end and the microfront end.
The obvious gap is integration testing of the container application together with each micro frontend.
Detailed examples
Let’s implement a detailed example.
The focus is on how container applications and microapplications integrate together in JavaScript, as this is probably the most interesting and complex part.
You can check out the final deployment at demo.microfrontends.com, and the full source code is available on GitHub.
This project is implemented with React, but it's worth noting that React is not the only option.
The container
We'll start with the container, since it is the entry point. Here is its package.json:
{
  "name": "@micro-frontends-demo/container",
  "description": "Entry point and container for a micro frontends demo",
  "scripts": {
    "start": "PORT=3000 react-app-rewired start",
    "build": "react-app-rewired build",
    "test": "react-app-rewired test"
  },
  "dependencies": {
    "react": "^16.4.0",
    "react-dom": "^16.4.0",
    "react-router-dom": "^4.2.2",
    "react-scripts": "^2.1.8"
  },
  "devDependencies": {
    "enzyme": "^3.3.0",
    "enzyme-adapter-react-16": "^1.1.1",
    "jest-enzyme": "^6.0.2",
    "react-app-rewire-micro-frontends": "^0.0.1",
    "react-app-rewired": "^2.1.1"
  },
  "config-overrides-path": "node_modules/react-app-rewire-micro-frontends"
}
As you can see, this is a React application created with create-react-app, using react-app-rewired to customize the webpack configuration.
Note that I didn’t include any other microapplications in the package.json dependencies.
If you want to know how to select and display microapps, take a look at app.js. We use the React Router to match the current URL with the predefined route list and render the corresponding component:
<Switch>
  <Route exact path="/" component={Browse} />
  <Route exact path="/restaurant/:id" component={Restaurant} />
  <Route exact path="/random" render={Random} />
</Switch>
The Browse and Restaurant components look like this:
const Browse = ({ history }) => (
  <MicroFrontend history={history} name="Browse" host={browseHost} />
);

const Restaurant = ({ history }) => (
  <MicroFrontend history={history} name="Restaurant" host={restaurantHost} />
);
Both components render a MicroFrontend component. In addition to the history object (which becomes important later), we give each application a unique name and the host from which its assets will be fetched. The value of host may be http://localhost:3001 or browse.demo.microfrontends.com.
MicroFrontend is just another React component:
class MicroFrontend extends React.Component {
  render() {
    return <main id={`${this.props.name}-container`} />;
  }
}
When rendering, all we do is place a container element on the page whose ID is unique to that micro frontend. We then use React's componentDidMount as the trigger for downloading and mounting the micro application:
// class MicroFrontend
componentDidMount() {
  const { name, host } = this.props;
  const scriptId = `micro-frontend-script-${name}`;

  if (document.getElementById(scriptId)) {
    this.renderMicroFrontend();
    return;
  }

  fetch(`${host}/asset-manifest.json`)
    .then(res => res.json())
    .then(manifest => {
      const script = document.createElement('script');
      script.id = scriptId;
      script.src = `${host}${manifest['main.js']}`;
      script.onload = this.renderMicroFrontend;
      document.head.appendChild(script);
    });
}
You must get the URL of the script from the manifest file, because the compiled JavaScript file that react-scripts outputs has a hash value in the file name to facilitate caching.
After setting the URL of the script, all that is left is to add it to the document and initialize it:
// class MicroFrontend
renderMicroFrontend = () => {
  const { name, history } = this.props;
  window[`render${name}`](`${name}-container`, history);
  // E.g.: window.renderBrowse('browse-container', history);
};
The last thing to cover is clean-up. When the MicroFrontend component is removed from the page, we should unmount the corresponding micro application as well:
componentWillUnmount() {
  const { name } = this.props;
  window[`unmount${name}`](`${name}-container`);
}
Micro frontend applications
Here’s how the window.renderBrowse method is implemented:
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

window.renderBrowse = (containerId, history) => {
  ReactDOM.render(<App history={history} />, document.getElementById(containerId));
  registerServiceWorker();
};

window.unmountBrowse = containerId => {
  ReactDOM.unmountComponentAtNode(document.getElementById(containerId));
};
The code above uses ReactDOM.render and ReactDOM.unmountComponentAtNode to mount and unmount the application.
To allow each micro frontend to be developed and run independently, each one also has its own index.html so that it can render outside of the container:
<html lang="en">
<head>
<title>Restaurant order</title>
</head>
<body>
<main id="container"></main>
<script type="text/javascript">
window.onload = () => {
window.renderRestaurant('container');
};
</script>
</body>
</html>
From this point on, the micro frontends are mostly just plain old React applications. The Browse application lists restaurants, provides search and filtering, and wraps each restaurant in a link so that clicking it navigates the user to that particular restaurant. At that point we switch over to the second, Order micro application, which renders a single restaurant page with its menu.
Cross-application communication via routing
As mentioned earlier, cross-application communication should be kept to a minimum. In this example, the only requirement is that the Browse page needs to tell the Order page which restaurant to load. We use routing to solve this.
There are three React applications involved, all of which use the React Router for routing, but are initialized in two slightly different ways.
In the container application we create a BrowserRouter, which internally instantiates a history object. We use it to handle client-side history, and it is also what lets us link multiple React Routers together. Inside the micro applications, the Router is initialized like this:
<Router history={this.props.history}>
Here the history object is provided by the container application and shared by all micro applications, which makes it easy to use the URL as a means of messaging. For example, we have a link like this:
<Link to={`/restaurant/${restaurant.id}`}>
After clicking this link, the path will be updated in the container, which will see the new URL and determine that the restaurant microapplication should be installed and rendered. The micro-application’s own routing logic will then extract the restaurant ID from the URL.
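Inside the Restaurant micro frontend, that routing logic presumably looks something like this (our reconstruction, matching the URL structure above; RestaurantPage is a placeholder component name):
// The micro frontend's own Router, built on the shared history object,
// extracts the restaurant id from the URL the container navigated to.
<Router history={this.props.history}>
  <Route
    exact
    path="/restaurant/:id"
    render={({ match }) => <RestaurantPage restaurantId={match.params.id} />}
  />
</Router>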
I hope this example demonstrates the flexibility and power of urls. Using urls for messaging should satisfy the following criteria:
- The URL structure is open and transparent
- Access to urls is global
- The limited size of a URL encourages sending only small amounts of data
- URLs are user-facing, which encourages structuring them in a way that models the domain clearly
- It is declarative, not imperative: the URL describes where the page is, not what the page should be doing
- It forces micro-front-end applications to communicate indirectly rather than rely on them directly
When routing is used as a means of communication between micro-front-end applications, the route we choose constitutes a contract. Once a contract is decided, it can’t be changed easily, so we should carry out automated tests to check whether the contract is adhered to.
Shared content
While we want each team and micro-application to be as independent as possible, some things will be shared.
The shared component library was mentioned above, but it would be too large for this small application. So, we have a small common content library that includes images, JSON data, and CSS, which is shared by all the other microapplications.
There is one more important thing to share: dependency libraries. Duplicated dependencies are a common downside of micro frontends, and sharing dependencies between applications is difficult, so let's look at how this demo shares its dependency libraries.
The first step is to select the dependencies to share. An analysis of our compiled code shows that about 50% of the code is contributed by React and react-DOM. These two libraries are our core dependencies, so it would be very effective to separate them out as shared libraries. Finally, they are very stable and mature libraries that have been upgraded prudently, so upgrading should not be too difficult.
As for how to extract them, all we need to do is mark those libraries as externals in the webpack configuration:
module.exports = (config, env) => {
  config.externals = {
    react: 'React',
    'react-dom': 'ReactDOM'
  };
  return config;
};
Then we add a couple of script tags to each index.html to fetch the two libraries from our shared content server:
<body>
  <div id="root"></div>
  <script src="%REACT_APP_CONTENT_HOST%/react.prod-16.8.6.min.js"></script>
  <script src="%REACT_APP_CONTENT_HOST%/react-dom.prod-16.8.6.min.js"></script>
</body>
Disadvantages
As with all architectures, there are trade-offs in the microfront-end architecture. We get the benefits, but we also get the costs.
Download size
JavaScript files built independently can lead to repeated common dependencies, which can increase downloads for users. For example, if each microapplication included its own copy of React, users would have to download React multiple times.
The problem is not easy to solve outright, but it can be mitigated. First, even without any optimization, each individual page may well load faster than it would with a single monolithic front end, because compiling each page independently effectively splits the code so that a page only loads its own dependencies. This makes initial page loads fast, but subsequent navigation slower, since the user is forced to re-download the same dependencies on each page. We can analyze which pages users visit most often and optimize their dependencies individually.
Every project is different and you have to tailor your analysis accordingly.
Environmental differences
As the number of micro applications grows, it becomes impractical to run every micro application and its corresponding back end locally, so you will usually run a simplified version of the environment during development.
This can cause problems when that simplified development environment differs from the real production environment. So make sure developers have a way to run an environment that closely resembles production when they need to, even though doing so takes more time and effort.
Governance complexity
As a more distributed architecture, the microfront will inevitably have to manage more things: more code bases, more tools, more build pipelines, more servers, more domain names, and so on. So before adopting such an architecture, you need to consider several issues:
- Do you have adequate automation measures to configure and manage the additional infrastructure required?
- Is your front-end development, testing, and release process scalable to multiple applications?
- Are you ready for decisions to become more diffuse and even difficult to control?
- How will you ensure the level of quality, consistency, or governance of multiple independent front-end codebase?
Conclusion
Over the years, as the front end has grown in complexity, we have seen an increasing need for a more scalable architecture. We should be able to develop software through independent, autonomous teams.
While micro frontends are not the only solution, we have seen many real-world cases where they achieve these goals, and the technique can be applied gradually to older sites over time. Whether or not micro frontends are the right approach for you and your team, they are part of a trend in which front-end engineering and front-end architecture keep growing in importance.
End of translation.
More in-depth reading:
- Probably the most complete microfront-end solution you’ve ever seen