Front-end system construction has evolved gradually ever since front-end development became a full-time discipline. In the past we tended to describe the technical details a front-end system needed without seeing the connections behind them. The way to build a system is to trace those connections upstream and downstream: upstream of front-end development sits interaction and visual design, downstream sits back-end development. The connections between these roles matter greatly to us.
There are two aspects to this. One is the system between teams, especially the connections with upstream and downstream teams. The other is the system within the front-end team itself, which can be understood as the connections among roles.
Connecting with designers
Upstream of the front end is interaction and visual design, and our ingrained understanding of design deliverables is that they are full of "uncontrollable" factors, even when we set interaction and visual standards.
Settling a component architecture standard
If we decompose a system's basic interaction logic, it must be made up of basic components, which we can completely solidify into a base component library. For each basic component we identify a "variable" range, including background color, border color, corner radius, font, and so on, and comb these into a set of "style variables." We can then tune multiple sets of basic interaction-and-visual templates within that range.
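As a sketch of what such "style variables" can look like in code (the token names and values here are illustrative, not taken from any particular design system):

```typescript
// Illustrative "style variables" for a base component library.
// Token names and values are assumptions, not a real design system's.
interface StyleVariables {
  primaryColor: string;
  borderColor: string;
  borderRadius: string;
  fontFamily: string;
}

const defaultTheme: StyleVariables = {
  primaryColor: "#1677ff",
  borderColor: "#d9d9d9",
  borderRadius: "4px",
  fontFamily: "sans-serif",
};

// The base button itself never changes; only the variables do, so
// swapping the token set swaps the whole visual template without
// touching the component.
function buttonCss(vars: StyleVariables): string {
  return `.btn { background: ${vars.primaryColor}; ` +
    `border: 1px solid ${vars.borderColor}; ` +
    `border-radius: ${vars.borderRadius}; ` +
    `font-family: ${vars.fontFamily}; }`;
}
```

A second visual template is then just a second `StyleVariables` object fed to the same components.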
Different product forms have different user experience flows, and therefore different interaction and visual designs. For the front end to connect with them, they must fall under a single interaction-and-visual specification, whose scope can be defined by user-experience-flow consistency: operation controls, data presentation, and so on. Today, designers tend to condense product design concepts into a design language, such as Ant Design (antd) for back-office applications.
At this point we must consider an underlying question: how do we abstract components that can serve different interaction and visual designs? rc-component, the layer behind antd, highly abstracts component behavior, leaving it to antd to choose which interactions to expose. With a good component architecture in place, it is easy for other front-end teams to build a new design language on top of it.
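A minimal sketch of that behavior/skin split, in the spirit of rc-component underneath antd (all names here are made up for illustration):

```typescript
// Headless behavior: interaction state only, no visual opinion.
class SwitchBehavior {
  private checked: boolean;
  constructor(initial = false) { this.checked = initial; }
  toggle(): boolean { this.checked = !this.checked; return this.checked; }
  isChecked(): boolean { return this.checked; }
}

// Two "design languages" render the same behavior differently.
function renderPlain(s: SwitchBehavior): string {
  return s.isChecked() ? "[x] on" : "[ ] off";
}
function renderFancy(s: SwitchBehavior): string {
  return s.isChecked() ? "● ON" : "○ OFF";
}
```

Because the behavior is shared, a new design language only has to supply new render functions, never new interaction logic.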
Therefore, an abstract interaction component system combined with different visual specifications can form a series of component libraries, which together constitute the front-end component system. Through abstraction, the style portion becomes the elements of a theme configuration, whose final form is a theme configuration platform. When a business chooses to generate a set of components, it can also generate the component templates interaction needs, making it very easy for designers to work at both the page and the application level.
Material Design is a good demonstration. The Material-Design-based Sketch plugin developed by Google designers can generate complex Material-Design-style interactive controls, and going further, Sketch files can be used to generate template code. It's a chain reaction.
Better mutual understanding
In the connection with design, the front end works much like a "translator." Engineers are generally not good at design, or even aesthetics, yet we often run into designs that are unconstrained and expensive for the front end to implement. In addition, many designers try out their own interactive demos in a "technology-first" way.
In professional fields such as data visualization, which come with their own data R&D processes, we must look not only from the perspective of interaction and vision, but also from the perspective of the data itself combined with visual interaction, helping the designer understand the business so that a suitable visualization form can be designed.
Therefore, not every connection flows in only one direction for the roles involved. The front end is also responsible for helping designers better understand how pages are built, how visualizations are used, and so on.
Connecting with back-end development
Downstream of the front end is back-end R&D. Applications today, in pursuit of an efficient interactive experience, rarely use synchronous requests, so our connection to the back end lies mainly in two parts: asynchronous interface definitions, and template rendering for page initialization.
An interface definition is a contract between the front and back ends that lets people in the two role teams work together efficiently. In traditional front/back-end collaboration, for the convenience of parallel development, interfaces are agreed before the project starts, the two sides develop separately, and the formal interfaces are finally joint-debugged in the pre-release environment. This process has several problems.
The data model
The front end's understanding of the data is based not on the underlying data but on interface elements. Although the interface conventions are agreed at the start of the project, the back end still faces uncertainty about the source of the final data, and there are always some tweaks to fields and formats, which puts the project at considerable risk.
For generic businesses, the front and back ends tend to abstract a set of entity models; as long as both sides maintain this data model, the system can go online without adjustments. Combined with GraphQL, this feels like a "complete" separation of the front and back ends, which is even better for data adaptation.
Consider the data adaptation layer on its own. Either side could own this layer, and there are trade-offs. Comparing the publishing costs of data, back end, and front end, the front end's must be the lowest, and when a data or back-end problem occurs the front end can adapt quickly for emergency handling. In reality, therefore, the front end often does this layer of adaptation.
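A one-function sketch of that front-end adaptation layer; the field names are hypothetical:

```typescript
// Raw shape as the back end happens to return it today.
interface BackendUser { user_name: string; created_at: number }

// Stable view model the pages are written against.
interface ViewUser { name: string; createdAt: string }

// If the back end renames or reformats a field, only this adapter
// changes, and the front end can ship the fix without waiting on a
// back-end release.
function adaptUser(raw: BackendUser): ViewUser {
  return {
    name: raw.user_name,
    createdAt: new Date(raw.created_at).toISOString(),
  };
}
```

The cheap-to-publish side owning this layer is exactly what makes emergency handling fast.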
For personalized services, interfaces can only be agreed case by case. Here we need a mock service that both serves the mocked interfaces and proxies to the online interfaces, and we keep this process in sync by switching seamlessly between the platform and local tools. The goal is to minimize joint-debugging time. The Easy Mock platform is a good open-source option.
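The core idea of such a service can be sketched in a few lines (this is not Easy Mock's API, just an illustration of mock-with-proxy-fallback):

```typescript
type Handler = () => unknown;

// Serves the agreed mock for paths that have one; anything else would
// be proxied to the online interface (stubbed here as a marker object).
class MockService {
  private routes = new Map<string, Handler>();
  mock(path: string, handler: Handler): void { this.routes.set(path, handler); }
  handle(path: string): unknown {
    const h = this.routes.get(path);
    return h ? h() : { proxiedTo: path }; // placeholder for a real HTTP proxy
  }
}
```

Switching a path from mock to online then means nothing more than removing its registered route.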
Adapting across clients
Because products on different clients are developed from different perspectives, their developers understand the data differently. For example, data requirements are leaner on mobile, which is generally not the case on the Web.
So we believe the underlying data is similar while the upper application interfaces differ. We sink the originally application-oriented interfaces into microservices to improve abstraction and stability, and introduce a BFF (Backend for Frontend) layer in front of them. Instead of creating one common interface for all clients, we can have multiple BFFs, one each for Web, mobile, and so on.
The benefit of this layer is that the main back-end focus sinks into the microservice architecture: if one service is to be migrated, the affected BFF calls the new service while the others remain unchanged. The system's decoupling improves further.
The question with this pattern, of course, is who maintains the BFF layer. If the back end maintains it, choosing a lightweight language such as PHP speeds up development. If the front end maintains it, Node is a good fit: interface services are mostly high-I/O scenarios, which match Node's strengths. The choice depends on the team's mix of people.
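A toy Node/TypeScript sketch of two BFFs over the same stubbed microservices (the service shapes are invented for illustration):

```typescript
// Stubbed microservices; in reality these would be RPC/HTTP calls.
async function userService(id: number) {
  return { id, name: "Ada", bio: "a long profile text" };
}
async function orderService(_id: number) {
  return [{ orderId: 1 }, { orderId: 2 }];
}

// Web BFF: aggregates a full profile with orders for a rich page.
async function webProfile(id: number) {
  const [user, orders] = await Promise.all([userService(id), orderService(id)]);
  return { ...user, orders };
}

// Mobile BFF: trims the payload for smaller screens and slower networks.
async function mobileProfile(id: number) {
  const user = await userService(id);
  return { id: user.id, name: user.name };
}
```

Migrating `userService` now only touches the BFFs that call it; the other clients' BFFs stay untouched.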
Template management
In the past, when there were few interface services, the main connection between the front and back ends was in templates.
For the back end, system variables are written directly into templates, which makes front-end maintenance difficult. For the front end, the script and style configuration lives in templates, so releasing a new version often requires changing asset addresses.
For this tightly coupled part, today's front end has slowly found a way out: Node manages the templates as a generic service, the back end sends system variables to the service, and the service does the subsequent template rendering.
Another benefit is that when we do SSR (server-side rendering), the logic only needs to live in this layer.
Ideally, then, we abstract a user-experience adaptation layer that does two things: template rendering and data adaptation. This is one manifestation of how far front-end work reaches today. The layer is quite light for the back end, and it makes the connection smoother for the front end.
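A minimal stand-in for that template service: the back end posts its system variables, and the service fills in a template the front end owns (the `{{var}}` syntax is just for illustration):

```typescript
// Render a template by substituting {{name}} placeholders with the
// system variables the back end sends over.
function renderTemplate(tpl: string, vars: Record<string, string>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_m, key: string) => vars[key] ?? "");
}

const html = renderTemplate(
  '<title>{{appName}}</title><script src="{{bundleUrl}}"></script>',
  { appName: "console", bundleUrl: "/static/app.js" }
);
```

Because the front end owns `tpl`, releasing a new bundle no longer requires a back-end change, and SSR output would plug into the same rendering point.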
Stability guarantee
We are all familiar with the digitalized operation of products, generally reflected in breakthroughs in operational efficiency, innovation in business models, and improvement in customer value.
Behind that commercial value lies product stability. From the product's perspective, the stability chain runs through the chain of interactions: a series of interactions leads to a series of requests to the back end, and from the back end to the database. We call this end-to-end stability assurance.
End-to-end monitoring, beyond the detailed specifications we must establish, focuses on the performance of each link, so that we can find where a link went wrong in the shortest time. Each request the front end sends carries a traceId, so the corresponding back-end call can be traced immediately and associated with the SQL it issued and with which machine rooms the related tables sit in.
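Attaching the traceId can be as simple as a request wrapper; the header name `x-trace-id` and the id format here are assumptions (real systems often use a standard format such as W3C Trace Context):

```typescript
interface RequestOptions { url: string; headers: Record<string, string> }

// Generate a reasonably unique id per request.
function makeTraceId(): string {
  return Date.now().toString(36) + "-" +
    Math.floor(Math.random() * 1e9).toString(36);
}

// Every outgoing request gets a traceId so back-end logs, SQL calls,
// and machine-room routing can all be correlated to one interaction.
function withTrace(opts: RequestOptions): RequestOptions {
  return { ...opts, headers: { ...opts.headers, "x-trace-id": makeTraceId() } };
}
```

The front end's request library applies `withTrace` uniformly, so no page code has to remember to do it.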
With this full-link guarantee, our different roles stay independent of each other yet connected to each other, becoming the cornerstone of product assurance.
Better mutual understanding
As with designers, working with back-end developers is also about helping the back end understand the front end better.
The difference in understanding between the front end and the back end often comes from the data interface. The back end lives in the server environment and the front end in the browser environment; standing in different positions, neither can deeply feel the other's concerns. The front end cares about improving user experience and therefore has its own considerations about the size and number of data transfers, which ties directly into back-end work.
As a result, the modern front end tends to move the job of smoothing out this difference to the "middle layer," in order to push front/back-end decoupling and performance optimization further.
Connecting with ourselves
Back to the last connection: the front end itself. From the Industrial Revolution of the steam age to today's information age, tools have evolved and lifted work efficiency and stability by several orders. In today's front-end engineering revolution, the half-life of knowledge is about two or three years, and we have likewise lived through innovations in standards, in tools, and more.
How do we connect ourselves in this age?
Selection of the architecture
As the front end moves from the page level of the past to the application level of today, we watch system complexity grow. Technical architecture is closely tied to the team. I define two key words for technical architecture: future-oriented and scenario-oriented.
If today's business scenarios were almost all static pages, my architecture choices would instead center on a templating language designed to facilitate automated builds.
Today, our products are display-heavy modular interfaces, so React is the foundation of my technical architecture. React represents today's application-level front-end development, which requires a complete concept of composing components. On top of it, however, there are two sets of templates: one based on Flux, which guarantees controlled data flow under complex state, and one based on Observables, which is easy to use and convenient for back-end engineers. They serve different scenarios and scales.
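As a reminder of what the Flux side buys you, here is a minimal store with one-way data flow (a sketch of the pattern, not any specific library's API):

```typescript
// Actions are the only way to change state; the reducer makes every
// transition explicit, which keeps complex state under control.
type Action = { type: "increment" } | { type: "decrement" };

function reducer(state: number, action: Action): number {
  switch (action.type) {
    case "increment": return state + 1;
    case "decrement": return state - 1;
  }
}

class Store {
  private listeners: Array<(s: number) => void> = [];
  constructor(private state: number) {}
  getState(): number { return this.state; }
  dispatch(action: Action): void {
    this.state = reducer(this.state, action);
    this.listeners.forEach((l) => l(this.state));
  }
  subscribe(l: (s: number) => void): void { this.listeners.push(l); }
}
```

An Observable-based template replaces the explicit dispatch with subscriptions to mutable streams, which is less ceremony but also less control.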
Of course, on the front end an architecture shift is only ever a few years away, as we deal with the rapid development of the Internet and of its forms of human-computer interaction. So we make sure that different technical architectures can coexist and iterate within one system. Technology is inherently fluid, and its development does not stagnate within the team.
So where frameworks have similar capabilities, I don't mix them; I use one or the other, without ruling out future migrations, and I think about how to control their iteration.
Process standardization
For front-end engineers who lived through that tool-poor era, front-end projects often had no build, debug, or test tooling, only simple scaffolding and compress-and-package steps. The most common thing one could provide the front end was a Makefile in the project defining a few commands, with the scripts usually written in shell or Python.
It wasn't until Node came along that the door to front-end engineering really opened. We have watched process tools based on the Node environment rise, from Grunt to today's popular Webpack. We keep changing process tools to keep up with the times, but the links of the process itself (scaffolding, build, debug, test, package, release) have never changed.
These links are the capabilities I will always need from today's local process tools, but behind them there must also be extensibility, a connection between local and online, and data monitoring.
1. Extensibility
Command-line tools are by nature very easy to wrap; Git, for example, has the GitFlow extension. The extensibility of a process tool lies in the ability to rewrite every link. For example, the Web process and the wireless (mobile) process have different tool architectures, but the process itself is the same and can be extended with different plugin capabilities.
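The "every link can be rewritten" idea can be sketched as a pipeline where plugins override named steps (a toy model, not Webpack's or tapable's actual API):

```typescript
type Step = (log: string[]) => void;

// Each link (scaffold, build, test, ...) is a named, replaceable step;
// a Web plugin and a wireless plugin register different implementations
// of the same links, so the overall process stays identical.
class ProcessTool {
  private steps = new Map<string, Step>();
  register(name: string, step: Step): void { this.steps.set(name, step); }
  run(names: string[]): string[] {
    const log: string[] = [];
    for (const name of names) {
      const step = this.steps.get(name);
      if (step) step(log);
    }
    return log;
  }
}

const tool = new ProcessTool();
tool.register("build", (log) => log.push("build: webpack"));
tool.register("test", (log) => log.push("test: jest"));
```

A wireless plugin would simply re-register "build" with its own bundler, leaving the process sequence untouched.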
2. Decentralization
Decentralization means the ability to work locally and focus on individual work. Online, by contrast, is the endpoint of multi-person collaboration, where different people's results converge and become visible.
Local Git, for example, gives every repo full management capability; we can keep managing it locally without ever pushing to a remote server. Online, platforms such as GitHub manage issues and projects, which are done by a team.
For local process tools, packaging and publishing need to connect to the online side, because publishing itself is part of team collaboration and needs unified management. The local tool gives us convenience; the online platform ensures team collaboration.
3. Data monitoring
Git is itself a database, while a local process tool is just a command-line tool. The significance of data reporting is to unify, on the server side, the efficiency data of every link, so that we can ask: how can these links be optimized?
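Unifying efficiency data can start with something as small as timing each link and queueing a report (the report shape and its destination are assumptions; a real tool would flush the queue to a server over HTTP):

```typescript
interface LinkReport { step: string; ms: number }

const reportQueue: LinkReport[] = [];

// Wrap a link of the local tool so its duration is measured and queued
// for server-side reporting, without changing the link's behavior.
function timed<T>(step: string, fn: () => T): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    reportQueue.push({ step, ms: Date.now() - start });
  }
}
```

Usage is just `timed("build", runBuild)`; aggregated server-side, these reports show which link of which team's process is the bottleneck.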
Understanding ourselves
We have interface logic, which is the settling of the non-logical; we have data logic, which is the settling of the logical. We deal with the gap between understanding these and understanding the product. The final human-computer interface is what displays the front end's work, and that work carries great uncertainty; settling this uncertainty as much as possible is our job.
Conclusion
The most basic principle is to reduce the cost of connection between different roles and strengthen the building of standards. Within the front end itself, the priority is improving process management while continually abstracting the application architecture out of its scenarios. The R&D process is a balance between efficiency and stability, and the trade-offs in the overall structure are reflected in team size and the speed of business development:
1. Keep every link at the smallest possible granularity, in accordance with Unix principles
2. Give each link the ability to be customized and to grow its own ecosystem
3. Have each link provide standardized inputs and outputs so standards can settle
4. Let the system complete the connections between links
5. Support every link with data, and adjust the trade-offs continuously
Each role's stage will have its own services, such as a demo management platform, PRD management, an automated test system, and so on. But they are scattered and lack a holistic view of the project. Therefore, in building the overall R&D system, full-link R&D management is concentrated in a process management system, which extracts the core process to form one coherent system.
In large enterprises there are many functional teams, so there is a balance to strike between functional teams, and between functional teams and the teams above and below them. We must weigh both people's creativity within the environment and the cost of major construction, and every structure should be designed to maximize the benefit of the moment.