This article contains a lot of information, most of it reflecting the current state of the field and demonstrating existing techniques. Interested readers may want to bookmark it first and read it slowly. It condenses all my thinking and exploration in the mid-platform field, and I believe that after reading it you will have a new understanding of the common business scenarios and solutions in this domain.

Please credit the source when reproducing this article.

On the weekend of May 11, 2019, I gave a talk on the mid-platform domain at FDCon 2019, titled “The Exploration of Business Standardization in the Middle Platform Domain”. At the talk I released the RCRE library and introduced how to use RCRE to solve the various problems faced in mid-platform business development.

After the conference, I saw some students making fun of the talk. Perhaps it was because of the way I presented it: I did not elaborate on the background and motivation of RCRE or the pain points I was actually facing at the time, but only introduced how to use RCRE, which understandably came across as advertising.

RCRE was not born overnight; it is the distillation of my years of experience in this field. Every line of code reflects a lesson learned from a pit I climbed out of, and a solution to the problems and scenarios described below.

A first public talk inevitably suffers from inexperience and an unclear grasp of the audience's needs, and live demo code cannot fully convey the context and rationale behind these APIs. So, to make up for that, this article skips the code and focuses on thinking and demonstration: the problems I was facing in the mid-platform field at the time, my views and thoughts on those problems, and finally why I designed such features in RCRE to solve them.

A better state management scheme

In the past few years, a number of excellent UI component libraries have appeared, such as Ant Design and Element-UI. These component libraries solve problems front-end engineers have long faced: by adopting UI components with a unified design style, front-end engineers can spend less time slicing designs and writing CSS, and more time implementing page logic.

At the level of page logic, state management for UI components has also made great progress. With the introduction of Flux, Redux, Mobx, and others, using a state management library to manage application state has become mainstream on the front end. Even the latest React includes a useReducer API derived from Redux.

The community has developed two very different approaches to state management.

Polarization of state management

One school is the immutable, unidirectional data flow approach represented by Redux: the whole application shares a single global Store, every update is required to keep the State completely immutable, and updating state by assignment through object references is avoided entirely. This discipline makes every data operation on the page traceable, rollback-able, and debuggable. Such properties are an extraordinary advantage in large, complex applications with mountains of code, where problems must be located and solved quickly.

However, a pattern like Redux also has drawbacks. First, following the official guidance requires developers to write large numbers of actions and reducers, and the boilerplate greatly inflates the line count, making even a small feature complicated to develop. Managing everything in a single State requires developers to design the State structure themselves. At the same time, immutable State management is only an idea and a requirement that Redux emphasizes: since it provides no effective mechanism to prevent assignment through object references, developers must comply with the pattern at every moment to avoid breaking immutability.

Therefore, Redux's design pattern is effective, but its complexity and its heavy emphasis on patterns are also its drawbacks.
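A minimal sketch of the boilerplate described above: even a single field update needs an action type, an action creator, and a reducer case, and nothing enforces the immutability beyond developer discipline. The names here are illustrative.

```javascript
// One field update in classic Redux style: type constant + action creator + reducer.
const SET_USERNAME = 'SET_USERNAME';

function setUsername(username) {
  return { type: SET_USERNAME, payload: username };
}

function userReducer(state = { username: '' }, action) {
  switch (action.type) {
    case SET_USERNAME:
      // Spread to keep State immutable. Nothing stops a developer from
      // writing `state.username = ...` instead and silently breaking it.
      return { ...state, username: action.payload };
    default:
      return state;
  }
}

const next = userReducer(undefined, setUsername('alice'));
console.log(next.username); // 'alice'
```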

The other school is the exact opposite of Redux, for example Mobx. It encourages developers to change state through direct object assignment. By wrapping objects in a Proxy, Mobx learns which properties each React component depends on, giving it a mapping between object properties and components, so that it can automatically update components based on these dependencies. With Mobx, most details of State are managed by Mobx itself: there is no Redux-style State design and no Redux-style boilerplate, and you can modify State data directly to achieve the desired effect.
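A toy sketch of the Proxy-based tracking described above (not MobX's real implementation): a Proxy intercepts writes so that plain assignment still notifies an observer, which in a real library would re-render the dependent components.

```javascript
// Wrap an object so every property write is observed.
function observable(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;   // direct mutation is allowed...
      onChange(key);      // ...but the library still hears about it
      return true;
    }
  });
}

const log = [];
const state = observable({ count: 0 }, key => log.push(`changed: ${key}`));

state.count = 1;  // plain assignment, yet the change is tracked
state.count = 2;
console.log(log); // [ 'changed: count', 'changed: count' ]
```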

Mobx's idea is very similar to Vue's, and it shares the same drawback: there is no copy of the state, so you cannot roll state back. The relationships between data are elusive, and the update implementation is hidden entirely inside Mobx, invisible to developers. When the state becomes complicated, this leads to problems such as difficult debugging and hard-to-reproduce bugs.

Mobx's design can greatly improve development efficiency early on, but in the later stages of a project it causes maintenance and debugging difficulties that reduce efficiency.

As you can see, both immutable data and mutable data have their own advantages and disadvantages in state management. It seems that you cannot have your cake and eat it too. So the question is, is there a new technology solution that combines the best of Redux and Mobx?

For more details on the comparison between Redux and Mobx, follow this article: www.educba.com/mobx-vs-red…

Simple and reliable state management

In the case of large complex application development, where Redux is reliable but not simple and Mobx is simple and unreliable, a simple and reliable approach to state management is needed.

Redux's reliability lies in making state traceable and monitorable, and in using a single state to reduce the complexity of having too many modules. Mobx's simplicity lies in its ease of use: implementing a feature does not require much code.

For large, complex applications, traceability and monitorability of state are critical to keeping the whole application from becoming too complex and spinning out of control. So the direction of optimization is to see whether Mobx's simplicity can be used to reduce the cost of Redux.

In the context of unidirectional immutable data flow, reducing the cost of Redux requires three efforts:

  1. Using combineReducer makes development tedious, so we need to avoid designing the State structure by hand for every feature
  2. Writing actions and reducers for every data operation also complicates development, so we need to avoid writing large numbers of actions and reducers
  3. Not every hand-written reducer guarantees immutable State updates, so an alternative way of modifying State is needed

In view of the above three aspects, I think the following methods can be adopted to solve the problem:

  1. Map the structural relationships between components onto the State, so that the State structure can be inferred up front, which in turn automatically does for developers what combineReducer used to do.
  2. Merge the many actions into one generic Action provided directly to developers, using parameters to distinguish operations, solving the problem of too many actions.
  3. Encapsulate State-manipulation APIs for developers, implementing immutable data operations internally and preventing developers from ever touching the State directly.
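Ideas 2 and 3 above can be sketched as a single generic action, parameterized by key and value, whose reducer performs the immutable update in one place so developers never touch State directly. The action type and helper names are illustrative, not RCRE's real API.

```javascript
// One generic action replaces the many hand-written action types.
const SET_DATA = '@generic/SET_DATA';

function setData(model, key, value) {
  return { type: SET_DATA, payload: { model, key, value } };
}

function rootReducer(state = {}, action) {
  if (action.type === SET_DATA) {
    const { model, key, value } = action.payload;
    // Immutability is implemented here, once, instead of in every reducer.
    return {
      ...state,
      [model]: { ...(state[model] || {}), [key]: value }
    };
  }
  return state;
}

let store = rootReducer(undefined, setData('userForm', 'name', 'alice'));
store = rootReducer(store, setData('userForm', 'age', 30));
console.log(store.userForm); // { name: 'alice', age: 30 }
```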

With these three basic ideas in mind, the next step is to figure out how to integrate with the existing Redux architecture.

Use Redux like Mobx

First, the second and third methods can be combined into a single API: a generic State-modification API that guarantees immutability. This is very much like Mobx, where changing the data and triggering the update are one step: calling the API takes care of both.

For the first point, those familiar with react-redux know that Redux maps State to a component's props through a hand-written mapStateToProps function. The parameter to mapStateToProps is the entire Redux State, from which values are read and mapped onto the component. But when we design the State in the first place, we also have to think of a name to complete the structural design of the whole State. Comparing the two carefully, we find that a Key is needed at both ends. So why not use the same Key? Such a Key can both partition each reducer's slice within the Redux State and drive the property reads in mapStateToProps.

Therefore, we only need to put such a Key on a component property: mounting the component then accomplishes the State partitioning that combineReducer used to do, and mapStateToProps can complete the State-to-props mapping according to the same property. In this way, the entire State structure is controlled entirely from the components, and there is no need for an API like combineReducer.
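The shared-Key idea can be sketched like this: the same `model` property both names the component's slice of the global State and drives mapStateToProps, so no hand-written combineReducer is needed. The `model` name is a hypothetical stand-in for whatever key attribute the component exposes.

```javascript
// The Key on the component (ownProps.model) selects its slice of State.
function mapStateToProps(globalState, ownProps) {
  return { data: globalState[ownProps.model] || {} };
}

const globalState = { userForm: { name: 'alice' } };

const props = mapStateToProps(globalState, { model: 'userForm' });
console.log(props.data.name); // 'alice'

// A component whose slice does not exist yet just sees an empty object.
console.log(mapStateToProps(globalState, { model: 'settings' }).data); // {}
```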

We can encapsulate them into a React component that helps us manage state and modify state through its API. This is the idea behind the Container component in RCRE.

The simplicity of Mobx is that you don't have to think about how to update and manage state, and a big advantage is that you can change state anywhere. This makes Mobx much more convenient than Redux, which passes values and functions down through props.

So, even now that we have a component like Container that helps us automatically manage state, we need a design like Mobx that bypasses props and passes data and methods.

React introduced the new Context API in version 16, officially recommended as a solution for passing data across levels of props. With this API, state can be read and modified from anywhere inside the Container component. That is the idea behind the ES component in RCRE.

In summary, the key to solving Redux's high cost is to find the parts that can be reused and that do not vary much, and then encapsulate them so they work well everywhere.

In summary, the entire model can be summarized using the figure below.

Address complexity caused by component linkage

Anyone who has built a mid-platform system knows that whenever component linkage is required, the project schedule gets long. Once components in a page are related to each other, a lot of time goes into handling, behind every linkage, the updates of each component and the creation and destruction of component data state. One slip and you have a bug where a component fails to update after a linkage, or where its data is not destroyed.

When daily maintenance involves such a large system containing countless linkage relationships, the loss caused by every bug is immeasurable, and the difficulty behind it can be imagined.

The nature of component linkage

Component linkage itself is not complicated. We can describe it simply as: when one component updates and changes the global state, other components respond according to that state. A component changing the state is not necessarily a synchronous operation; it can also be asynchronous, such as calling an interface. Modifying component state is easy to understand as a one-way operation; what makes linkage features hard to develop is that determining which components a state update will affect is very complicated.

The value of the idea of one-way data flow

In older MVC-architecture applications, such scenarios were very difficult to handle, because communication between components was done through a publish/subscribe model. Once the relationships between components grew complex, they formed a mesh-like dependency structure. Under such a structure no one could untangle the relationships any more, and at worst circular dependencies could cause infinite update loops, leaving developers at a loss.

React's idea of one-way data flow is, in my opinion, the best solution to this problem. In a one-way data flow architecture, the relationships between components change from a network structure into a tree structure. In the tree model there are only parent-child and sibling relationships between components, and no circular dependencies. This greatly simplifies the problems associated with complex relationships and keeps the entire component structure stable.

Each component has to take care of itself

The next step is to think about how to update other components when one component is updated.

When the scenario is complex and it is difficult to know which components are triggered by a component update, it is best to let each component react to the current situation on its own initiative.

React provides lifecycle functions for each component. When components start to interact, we don't need to figure out which components one component must influence; instead, each component minds its own business, just as parents and teachers often tell children: handle your own affairs, you are grown up now.

Component linkage can then be realized in a sustainable way: a triggered component drives the update of its parent, which in turn drives the update of all its child components, and each child component, on update, checks the data and responds accordingly.

Improve efficiency by combining life cycle and component state

When a component is affected by another component, there are roughly three different states:

  1. Component mounting
  2. Component updating
  3. Component destruction

If a complete business component wants to support being triggered by other components' linkage, then in addition to its basic rendering structure it still needs extra implementation for the three phases above. But when the system contains many, many components, repeatedly implementing these three aspects for each one becomes repetitive.

Instead of writing this logic individually for each component, we needed to find a more general approach. First, we need a more detailed analysis of the functions of these three aspects.

When a component is mounted, in addition to initializing some private data and state, the component's default value may affect other components. So on initialization, the component's default value must be written into the state immediately, to satisfy the initial defaults that specific business requirements demand.

When a component is updated, there is normally no processing required if the entire component rendering data is completely controlled from props.

When a component is destroyed, the business may require the component's data to be automatically removed from the state as well.

From the above analysis, all state-related operations implemented in the lifecycle revolve around adding or deleting a specified Key. To improve efficiency, this Key can be made a component attribute, with common mount and destruction logic implemented underneath, so that simple configuration completes the integration of the lifecycle with the component's state.

These reflections can be found in the ES component of RCRE:

  1. The Key used for state operations: the name attribute
  2. The component's initial value: the defaultValue attribute
  3. Whether the component automatically clears its data when destroyed: the clearWhenDestory attribute
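The lifecycle/state integration above can be sketched with a small factory: a wrapper writes a default value under `name` on mount and removes the key on destroy. Plain functions stand in for React lifecycle hooks, and the property names only loosely mirror the RCRE attributes listed above; the implementation is purely illustrative.

```javascript
// Common mount/destroy logic driven by a key, a default value, and a flag.
function createStateField(state, { name, defaultValue, clearWhenDestroy }) {
  return {
    mount() {
      // analogous to componentDidMount: write the default into state once
      if (!(name in state)) state[name] = defaultValue;
    },
    destroy() {
      // analogous to componentWillUnmount: optionally clean up the key
      if (clearWhenDestroy) delete state[name];
    }
  };
}

const state = {};
const field = createStateField(state, {
  name: 'city',
  defaultValue: 'Beijing',
  clearWhenDestroy: true
});

field.mount();
console.log(state.city);      // 'Beijing'
field.destroy();
console.log('city' in state); // false
```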

A common pattern for interface invocation

Interface calls are everywhere in typical mid-platform applications; any application involving create, read, update, or delete operations relies on back-end interfaces.

In some simple scenarios, you might just call fetch inside a callback function to get the data for the interface.

However, for more complex scenarios and medium-to-large applications, interface calls need to be standardized, hence the use of actions to invoke interfaces. As scenarios grow more complex, for example one Action calling multiple interfaces, simple solutions like redux-thunk become inadequate, so more advanced libraries such as redux-saga emerged to support calling multiple interfaces in parallel. But redux-saga is not cheap to learn; there is even an academic paper on what a saga is. After spending so much effort learning the various libraries and concepts, many people still have no idea how to apply them in real business. For students without development experience, how to call an interface remains a hard problem.

So for asynchronous data retrieval, I believe a simpler, more foolproof design is needed: one that provides a unified approach across multiple business scenarios and helps developers quickly understand and accomplish what they need.

Common business scenarios related to interfaces

For such problems, thinking from the business perspective is a very good direction; along it, we can quickly solve the functional requirements of the common business scenarios.

First, we need to analyze some common functions related to asynchronous data retrieval in mid-platform systems:

  1. Queries triggered by various parameters and conditions
  2. The required data is initialized at the beginning of the page
  3. Data required for component linkage
  4. Parallel calls to interfaces with no dependencies on each other
  5. Serial calls to interdependent interfaces

These scenarios cover almost all interface needs in ordinary mid-platform business requirements, apart from form validation. The next step is to find what they have in common, so that a more general design can cope with the uncertainty of changing requirements.

Relationship between interface parameters and interface triggers

Across different business functions, the interface parameters and the components that trigger requests vary, depending on the fields the current business requires and the callbacks each UI component fires. What does not change is that each interface request is accompanied by a component update: after all, once the interface data is received, a component must update before that data can be passed on to other components.

Therefore, for the first type of function, no matter how the components on the page change, as long as a component can trigger the interface it will inevitably affect the interface request's parameters; a request whose parameters have not changed does not meet the current business requirement. So the key is the relationship between parameter changes and interface requests:

If the parameters change, trigger the interface; if the parameters remain unchanged, do not trigger the interface

As it happens, any state update triggers an update of the container component, which in turn updates the whole application's components. So we can attach a hook to the container component that automatically triggers the interface: before the request, read the latest state to dynamically compute the interface parameters, then decide whether the interface needs to fire.

Therefore, we can design a trigger process like this very cleverly:

Various operations update the state --> container component updates --> recalculate interface parameters --> decide and trigger the interface
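The parameter-diffing step of this flow can be sketched as follows: on every state update, recompute the request parameters from the latest state and fire the request only when they actually changed. `computeParams` and `fetchList` are hypothetical names, and JSON serialization is just one simple way to compare parameter objects.

```javascript
// Returns a hook to run on every container update: it diffs the computed
// params against the last request and only fires when they differ.
function createApiTrigger(computeParams, fetchList) {
  let lastParams = null;
  return function onStateUpdate(state) {
    const params = computeParams(state);
    const serialized = JSON.stringify(params);
    if (serialized === lastParams) return false; // params unchanged: skip
    lastParams = serialized;
    fetchList(params);                           // params changed: request
    return true;
  };
}

const calls = [];
const trigger = createApiTrigger(
  state => ({ keyword: state.keyword }),  // derive params from latest state
  params => calls.push(params)            // stand-in for the real request
);

trigger({ keyword: 'a' }); // fires
trigger({ keyword: 'a' }); // same params, skipped
trigger({ keyword: 'b' }); // fires again
console.log(calls.length); // 2
```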

Problems with interface initialization diversity

For the second type of functionality, in the simplest case, the interfaces a page depends on fire unconditionally when the page initializes. But this is not always the case, because the initialization of some interfaces is conditional, depending on the data of some component or some other interface.

In daily business development, only in the simplest scenario are interface calls placed inside lifecycle methods like componentDidMount. Conditional interface initialization calls end up scattered elsewhere: at worst inside a component's callback, waiting for the interface to return before the next operation; at best wrapped in a Redux middleware and invoked through a global interception point.

If you think about it, this approach has several drawbacks. First, interface calls are not centralized but scattered, which creates a huge barrier to code management in large applications. Second, interface calls require specific preconditions, which may depend on where the code is called from or come from a pile of if/else judgments, making interfaces difficult to manage and organize.

However, if we broaden our horizons from focusing on how to invoke an interface to the state of components and the relationships between interfaces, we see that such problems can be solved using the trigger flow derived above.

By putting all the data that can trigger an interface request into the State, and attaching a trigger condition to each interface, the trigger process model above can be reused:

Component mounts --> component initializes data --> state updates --> container component updates --> check whether the interface meets its request condition --> recalculate interface parameters --> decide and trigger the interface

In this way, the same mechanism and model fulfill the interface requirements of both the first and second scenarios.

The development cost of complex component linkage increases dramatically

Component linkage is one of the more complex scenarios in the mid-platform world, because it involves the impact of one component's data change on another component's state.

When a component's data changes on a page, every component linked to it must react. This behavior usually includes mounting new components, updating existing components, and destroying components. The linkage between components is not fixed but depends entirely on the current business logic. If a new interface must be invoked amid such complex component relationships, for example requesting data for a newly appeared drop-down component, then where to invoke that interface is another question worth considering.

It is not as simple as writing an interface call in the new component's componentDidMount, because the call does not necessarily satisfy the requirement right after the component mounts: the new interface call may require two or more components to have mounted and initialized their data before the request can be made. In that case the interface call can only be moved into state updates, with the conditions written out separately.

As you can see, component linkage combined with specific interface trigger conditions can dramatically increase the difficulty of fulfilling requirements. But if we compare the mechanism described above with this scenario, interface triggering caused by component linkage turns out to be nothing more than a paper tiger.

Component linkage inevitably involves state. Whether it is one-to-one or one-to-many, component state must change behind every linkage; the state is at all times a reflection of the current components.

Because component linkage is nothing more than changes in the state of several components, we can still use the model described above to solve this class of problems:

Component A is triggered --> state updates --> components B and C respond --> state updates --> container component updates --> check whether the interface meets its request condition --> recalculate interface parameters --> decide and trigger the interface

In this way, the same mechanism and model meet the interface requirements of the first, second, and third scenarios.

How to handle the relationship between interfaces

As applications grow more complex, relationships exist not only between components but also between interfaces, and each interface request costs a certain amount of network time. Whether interfaces are related depends entirely on the current business requirements and data state: when the condition that triggers an interface does not come from data returned by another interface, we can assume the interfaces are unrelated.

Without tools like async/await or redux-saga, calling multiple interfaces within one function easily produces callback hell and burdens interface management.

But if we look closely, every interface ultimately writes its returned data, or part of it, back into the state. So if we give each interface a name and, when it returns, write the returned data into the state under that name, then whether an interface has returned successfully can be checked simply by testing whether the name exists in the state, no different from checking whether any component's value is in the state.

With that in mind, deciding whether an interface has returned is as simple as checking a component's value, and can therefore be folded into the check of whether an interface meets its request condition.
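The dependency check described above can be sketched in a few lines: each interface writes its response into the state under its name, and a dependent interface is allowed to fire only once every name it depends on exists. `interfaceReady` is a hypothetical helper, not RCRE's API.

```javascript
// An interface may fire only when all the state keys it depends on exist,
// whether those keys come from components or from other interfaces.
function interfaceReady(state, deps) {
  return deps.every(key => key in state);
}

const state = {};

// userList depends on userInfo having returned first.
console.log(interfaceReady(state, ['userInfo'])); // false: dependency missing

state.userInfo = { id: 1 }; // the userInfo interface returned and wrote back
console.log(interfaceReady(state, ['userInfo'])); // true: userList may fire
```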

Conclusion

To make something as complicated as calling interfaces foolproof, you must find the common ground of these operations across different scenarios. Once found, a common model can be designed to solve the whole series of problems and realize the interface requirements in different scenarios. This is also the idea behind the DataProvider functionality of the Container component in RCRE.

Process task management

There is also a special class of business functions in mid-platform systems that is difficult to generalize into a single model: linear interaction logic triggered by user behavior.

This type of business function has some obvious characteristics:

  1. It is not complicated and is usually a series of operations
  2. It is triggered by user behavior and may involve some continuous interactive functionality.
  3. It’s completely dominated by business logic and doesn’t have much in common

Typically, this logic is scattered throughout the components of the system and looks like a callback function for some event. However, as requirements are iterated over and over, components become so bloated that the maintainability of the entire component is affected.

Because each function is fully customized to its requirements, even when common business functions are highly encapsulated, engineers still need to write code to connect the pieces together.

This leads to a problem — functionality is not particularly reusable because a significant portion of the code is glue code that can’t be reused. So if you want to improve the overall code reuse, you need to think about how to reduce the development of glue code.

Analyze the internal details of the interaction logic

Careful analysis reveals that the glue code combining these pieces of logic, whether it performs synchronous or asynchronous operations, executes in a linear fashion. Compared with the relationships among components, the structure of this kind of interaction logic is relatively simple: each operation runs only after the previous one completes, and when an error or exception occurs, execution terminates.

task1 --> task2 --> task3 --> task4

So the problem becomes: find a mechanism that can run synchronous and asynchronous operations in a uniform, linear way.

Multiple asynchronous operations can be called serially using promises, and synchronous operations can be wrapped as asynchronous operations that return immediately. So you can use Promise to even out the difference between asynchronous and synchronous.

Serial calls are very common in the programming world; the reduce function is a good example. If each operation is placed in an array, then one call can perform the whole batch.
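The serial execution described above can be sketched with Promise plus reduce: each task starts only after the previous one resolves, its input is the previous task's result, and a rejection aborts the rest of the chain. `runTasks` is an illustrative helper, not RCRE's task API.

```javascript
// Chain an array of (sync or async) tasks serially with reduce.
function runTasks(tasks, initial) {
  return tasks.reduce(
    (prev, task) => prev.then(task),  // next task waits for the previous
    Promise.resolve(initial)          // sync values are lifted into Promises
  );
}

const double = async x => x * 2; // async task
const addOne = x => x + 1;       // plain sync function works too

runTasks([double, addOne, double], 5).then(result => {
  console.log(result); // ((5 * 2) + 1) * 2 = 22
});
```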

Batch data sources

For each interaction logic, it needs to read some parameters to do its job. For example, parameters are required to initiate a request, a prompt message is required when a confirmation box is displayed, and data is required for data verification. The data source for these operations may be the event object when the user fires the event, the current state of the entire application, or the return value from the previous operation.

Therefore, it is necessary to encapsulate all sources of data if you want to create a batch mechanism that allows each operation to run smoothly.

Therefore, before calling the function encapsulated by each operation, we need to collect all the current data information, assemble an object and pass it into the function to meet the data required by different business requirements.

During each operation, it is possible to read data from the following sources:

  1. The return value of the previous operation
  2. The value passed when the event is triggered
  3. Status of the global application

Of course, batch processing also requires error handling: when any operation returns an exception, the entire batch is aborted.
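Putting the three data sources and the abort-on-error rule together, the batch runner might look like the sketch below: each task receives one assembled object holding the triggering event, the global state, and the previous task's return value, and a thrown error stops the remaining tasks. All names here are illustrative, not RCRE's actual task API.

```javascript
// Run tasks serially; every task sees the same assembled data sources.
async function runTaskGroup(tasks, { event, state }) {
  let prev;
  for (const task of tasks) {
    // 1. prev: return value of the previous operation
    // 2. event: value passed when the event was triggered
    // 3. state: current global application state
    prev = await task({ event, state, prev });
    // any thrown/rejected error propagates out and aborts the rest
  }
  return prev;
}

const tasks = [
  ({ event }) => event.value,               // read the event payload
  ({ state, prev }) => prev + state.suffix, // combine prev result with state
];

runTaskGroup(tasks, { event: { value: 'hello' }, state: { suffix: '!' } })
  .then(result => console.log(result)); // 'hello!'
```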

Configure the aggregation and control center

For anything discrete to work in an organized way, there must be a control center.

In the past, interaction logic was handled in a very decentralized way, and even with batch operations like reduce, if they still live in obscure corners, the confusion caused by decentralization remains. So we also need to aggregate the batch configuration and put it in the most visible, fixed location, so that everyone knows: to find out how this logic works, just look here.

So there is a need to think about where to place such a control center that contains information about all operations.

Components on a page are organized in a tree structure, so no matter how many components there are, they must have a single topmost parent. This component, standing at the top of the pyramid, is the perfect place for the control center, much as it works in the real world.

In React, the container component that communicates directly with the state is the component that aggregates configuration information. That’s why in RCRE, the Task function exists as a property of the Container component.

Process task management is the idea behind the task group function of RCRE. Through such a mechanism, decentralized interaction logic can be traced and easily adjusted.

Easier form validation

Forms have always been the epitome of high development cost in the mid-platform space. They contain countless interaction scenarios and are the hardest-hit area for frequently changing business requirements.

Implementing a single form validation is not too difficult. The purpose of form validation is to check the component data entered by the user and give feedback on its validity. Thus form validation has only two jobs: first, a change in component data triggers validation; second, the validation result is fed back to the user.

But the data in a page is variable, and hooking only a component's onChange event is not enough to trigger validation, because the component's data may come from other components as well as from itself. In addition, special components such as input fields have special interactions, like the onBlur event, that also trigger form validation.

Validation therefore needs to be triggered from three directions: first, the onChange event; second, a change in the data the component reads; and third, special cases such as the onBlur event.

As for feedback in the page, because it involves rendering, the result needs to be kept in a unified state so that a component can render it and show the user a hint.

To sum up, implementing validation for a component is not just a simple piece of data-checking logic; it requires all of the following:

  1. Validation logic for the data
  2. An onChange event hook
  3. An onBlur event hook
  4. Detection of data changes when the component updates
  5. State that stores the validation result
  6. A component that displays the error message
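
The six pieces above can be sketched as a small, framework-agnostic model. All names here (`FieldValidation`, `onChange`, `onBlur`, `onUpdate`) are illustrative assumptions rather than RCRE's API:

```typescript
// Illustrative sketch of the six pieces: a validation rule, event
// hooks, change detection, a state slot, and an error message.
type Rule = (value: string) => string | null; // null means valid

interface FieldValidation {
    value: string;
    error: string | null; // 5. state that stores the validation result
}

// 1. the validation logic itself
const required: Rule = v => (v.trim() === '' ? 'This field is required' : null);

function validate(field: FieldValidation, rule: Rule): FieldValidation {
    return {...field, error: rule(field.value)};
}

// 2. onChange hook: validate whenever the user edits the value
function onChange(field: FieldValidation, next: string, rule: Rule): FieldValidation {
    return validate({...field, value: next}, rule);
}

// 3. onBlur hook: re-validate when focus leaves the field
function onBlur(field: FieldValidation, rule: Rule): FieldValidation {
    return validate(field, rule);
}

// 4. change detection for data injected by other components
function onUpdate(prev: FieldValidation, next: string, rule: Rule): FieldValidation {
    return prev.value === next ? prev : validate({...prev, value: next}, rule);
}

// 6. displaying the error is just rendering `field.error`
let field: FieldValidation = {value: '', error: null};
field = onBlur(field, required);
console.log(field.error); // "This field is required"
field = onChange(field, 'hello', required);
console.log(field.error); // null
```

Even in this toy form, one field already needs four functions and a state slot, which is exactly the repetition the article complains about.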

That is everything needed to validate one component, but it is not even the most annoying part. The real pain is the amount of code you have to write when every component that needs validation has to repeat it.

Use state to drive form validation

If you look closely at the scenarios that trigger validation, you can see that 2, 3 and 4 are the most common in business code, and all three line up perfectly with state updates. Since the onChange event, the onBlur event and the data-change check all involve a component state change before validation runs, exploiting this property to save work is the breakthrough that eliminates the effort of 2, 3 and 4.

Form validation is synchronized with component state changes, so simply adding hooks that trigger the validation logic inside the component’s lifecycle and callback functions makes validation follow the component state automatically.
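
The principle can be sketched with a tiny state container where validation piggybacks on every write. This is an illustrative sketch of the idea, not RCRE's implementation; `createValidatedState` and its methods are assumed names:

```typescript
// Sketch: route every state write through one entry point so
// validation runs automatically after each change, no matter
// which event (onChange, onBlur, linkage) caused it.
type Listener = (value: string, error: string | null) => void;

function createValidatedState(rule: (v: string) => string | null) {
    let value = '';
    let error: string | null = null;
    const listeners: Listener[] = [];
    return {
        // onChange, onBlur and component linkage all call set()
        set(next: string) {
            value = next;
            error = rule(value); // validation piggybacks on the state update
            listeners.forEach(l => l(value, error));
        },
        get: () => ({value, error}),
        subscribe: (l: Listener) => listeners.push(l),
    };
}

const state = createValidatedState(v => (v.length < 3 ? 'too short' : null));
state.set('ab');
console.log(state.get().error); // "too short"
state.set('abc');
console.log(state.get().error); // null
```

Once every write goes through one door, the three trigger scenarios collapse into a single code path.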

Common business scenarios for form validation

Since form validation can be triggered automatically by state, we can list all the scenarios in which state can trigger it:

  1. The component fires the onBlur event to trigger validation
  2. The component fires the onChange event to trigger validation
  3. Data is validated through an API call
  4. The component is linked by other components, which triggers validation
  5. Special validation scenarios, such as custom validation logic
  6. When a component is deleted, its validation state must be cleared at the same time

In addition to this relationship with state, forms have some scenarios of their own:

  1. Clicking the submit button triggers validation of every component before the request is sent
  2. Disabled components skip validation
  3. Validation between multiple components is mutually exclusive

At the same time, the disabled feature of a form also interacts with component linkage: one component can control another component’s disabled property, and thereby operate on its validation state.

Provides support for form-specific scenarios

Based on the above analysis, forms have three unique scenarios that need support. For the first, when the user clicks the submit button, the outermost Form component fires the onSubmit event and provides the developer with a wrapped callback. Inside that callback, each component’s validation function is triggered in turn to perform global validation, ensuring that every item is verified on submit.

In a form, disabled components do not need validation: the user cannot change their input, so validating them is meaningless. The disabled property therefore needs to be watched specifically, so that the moment a component becomes disabled, its validation state is cleared immediately.
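
These first two scenarios can be sketched together: on submit, run every field's validator in turn, but skip any field that is currently disabled and clear its stale validation state. The code is an illustrative sketch under assumed names (`Field`, `validateAll`), not RCRE's API:

```typescript
// Sketch: submit-time global validation that skips disabled fields.
interface Field {
    name: string;
    value: string;
    disabled: boolean;
    error: string | null;
}

const rules: Record<string, (v: string) => string | null> = {
    email: v => (v.includes('@') ? null : 'invalid email'),
    name: v => (v !== '' ? null : 'name required'),
};

// Called from the Form's onSubmit callback: validate every field;
// disabled fields are skipped and their validation state is cleared.
function validateAll(fields: Field[]): {fields: Field[]; valid: boolean} {
    const next = fields.map(f =>
        f.disabled
            ? {...f, error: null} // disabled => clear validation state
            : {...f, error: rules[f.name](f.value)}
    );
    return {fields: next, valid: next.every(f => f.error === null)};
}

const result = validateAll([
    {name: 'email', value: 'nope', disabled: false, error: null},
    {name: 'name', value: '', disabled: true, error: 'name required'},
]);
console.log(result.valid); // false (email is invalid; disabled name is skipped)
console.log(result.fields[1].error); // null
```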

Mutually exclusive validation between components can be thought of as a function that combines component state with validation state. To achieve mutual exclusion, a component must read the validation state of other components and invert its own result, so it is enough to expose a hook that lets the developer customize and extend validation, leaving the concrete logic to them. Note, however, that the ability to read global state must be provided along with custom validation, because this kind of rule is implemented not only by reading a component’s own data but also by reading data from other components, which is the point to watch out for.
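
A minimal sketch of such a hook: the custom validator receives the whole form state, so it can implement cross-field rules such as "fill in exactly one of phone or email". The names (`CustomRule`, `check`, the field names) are illustrative assumptions, not RCRE's API:

```typescript
// Sketch: a custom validator that reads global form state to
// implement mutual exclusion between components.
type FormState = Record<string, string>;
type CustomRule = (value: string, form: FormState) => string | null;

// Valid only when exactly one of the two contact fields is filled.
const exactlyOneContact: CustomRule = (_value, form) => {
    const filled = ['phone', 'email'].filter(k => form[k] !== '');
    return filled.length === 1 ? null : 'fill in exactly one of phone or email';
};

// The framework passes both the field's own value and the global
// state into the developer-supplied rule.
function check(field: string, form: FormState, rule: CustomRule): string | null {
    return rule(form[field], form);
}

console.log(check('phone', {phone: '123', email: ''}, exactlyOneContact)); // null
console.log(check('phone', {phone: '123', email: 'a@b.c'}, exactlyOneContact)); // the error message
```

The framework's only job is to pass global state into the rule; the mutual-exclusion logic itself stays in the developer's hands.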

In RCRE, components already come fully equipped to handle the form validation scenarios triggered by the various state changes automatically for the developer.

The private state of the form itself

Because a form also needs to store the current validation and error information, a form, like a component, needs to hold some state of its own.

Therefore, to save developers the effort of wiring up validation and error messages, a common state store needs to be provided for forms. At the same time, form state is not quite like component state: there is no linkage between items; each component’s validation is independent and responsible only for its own component.

React state is a lightweight state-management feature that can be used directly to hold component validation state, and by encapsulating it in a React component, it becomes available to developers. This is the idea behind the `<RCREForm />` component in RCRE.

Besides a single component that stores the state of the whole form, each component’s validation state needs to be synchronized to this unified store in real time. The React Context API is used to synchronize the form validation state. This is the idea behind the corresponding form item components in RCRE.

With these two mechanisms, the developer no longer needs to hand-write code to maintain the form’s validation state, which eliminates the repetitive work of points 5 and 6 above.
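
Conceptually, the two mechanisms amount to a single form store that each field reports into, the way a child component would through React Context. The sketch below illustrates that idea only; `FormStore` and its methods are assumed names, not RCRE's API:

```typescript
// Conceptual sketch: one store holds all fields' validation state;
// fields sync into it (as via Context) and remove themselves on
// unmount, so points 5 and 6 need no hand-written code per field.
type FieldState = {value: string; error: string | null};

class FormStore {
    private fields = new Map<string, FieldState>();

    // A field reports its state upward, as a child would via Context.
    sync(name: string, state: FieldState) {
        this.fields.set(name, state);
    }

    // A field unmounting removes its validation state (point 6).
    unregister(name: string) {
        this.fields.delete(name);
    }

    // The Form component reads one place to know if submit is allowed.
    isValid(): boolean {
        return [...this.fields.values()].every(f => f.error === null);
    }
}

const form = new FormStore();
form.sync('email', {value: 'a@b.c', error: null});
form.sync('age', {value: '-1', error: 'must be positive'});
console.log(form.isValid()); // false
form.unregister('age');
console.log(form.isValid()); // true
```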

In closing

Everything in this article reflects the design ideas and thinking behind the RCRE library, and I hope it makes clear why a library like RCRE exists. If you are interested in learning more about the project, you can click the link below:

Github.com/andycall/RC…

If you have any questions, feel free to leave a comment below, and I will answer them as well as I can.