What is the nature of front-end development? What advantages does reactive programming offer over MVVM or Redux? Can the ideas of reactive programming be applied to back-end development? This article uses a news website as an example to explain how to apply reactive programming in front-end development, and then uses the calculation of an e-commerce platform's hourly Double 11 transaction volume to show how the same ideas carry over to real-time computing.
## 1. What is front-end development developing?
In the process of front-end development, you may have asked yourself: what exactly is front-end development developing? In my opinion, the essence of front-end development is to make web views respond correctly to related events. There are three key phrases in this sentence: “web view”, “respond correctly”, and “related events”.
“Related events” include page clicks, mouse movements, timers, server requests, and so on. “Responding correctly” means modifying some state based on those events, and the “web view” is the part front-end developers know best.
From this point of view, we can write the formula view = response function(event) as:
View = reactionFn(Event)
In front-end development, events that need to be handled fall into three categories:
● User page actions, such as click, mousemove, etc.
● Interaction between remote servers and local data, such as fetch and WebSocket.
● Local asynchronous events, such as setTimeout and setInterval.
In this way, our formula can be further derived as:
View = reactionFn(UserEvent | Timer | Remote API)
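As a purely illustrative restatement (the type names below are made up and not part of any sample code), the formula can be read as a function that turns a stream of events into a stream of views:

```ts
import { Observable } from 'rxjs';

// Hypothetical event and view types, only to make the formula concrete.
type UserEvent = Event;           // click, mousemove, touch events, ...
type TimerEvent = number;         // ticks from interval() / timer()
type RemoteApiEvent = Response;   // fetch responses, WebSocket messages, ...

type AppEvent = UserEvent | TimerEvent | RemoteApiEvent;
interface View { users: string[] }

// View = reactionFn(UserEvent | Timer | Remote API), read as a stream transformation.
type ReactionFn = (event$: Observable<AppEvent>) => Observable<View>;
```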
## 2. Logical processing in applications
To further understand the relationship between this formula and front-end development, let’s take a news website as an example, which has the following three requirements:
● Click refresh: clicking a button refreshes the data.
● Check refresh: while a checkbox is checked the data refreshes automatically; unchecking it stops the automatic refresh.
● Pull-down refresh: the data refreshes when the user pulls down from the top of the screen.
If analyzed from the front end, these three requirements correspond to:
● Click refresh: click -> fetch
● Check refresh: change -> (setInterval + clearInterval) -> fetch
● Pull-down refresh: (touchstart + touchmove + touchend) -> fetch
### 1. MVVM
In the MVVM pattern, the response function described above is split across the Model-ViewModel and View-ViewModel boundaries, and events are handled between the View and the ViewModel.
MVVM abstracts the view and data layers well, but its response functions are fragmented across these transitions, making it hard to trace exactly where state is assigned and collected. In addition, because event handling is tightly coupled to the View, it is difficult to reuse event-handling logic between the View and the ViewModel.
### 2. Redux
In Redux's simplest model, the reducer function corresponds to the response function described above, with a combination of events corresponding to one Action (a minimal reducer for the news example is sketched after the list below).
But in Redux:
● State can only be used to describe intermediate states, not intermediate processes.
● Actions and Events are not in one-to-one correspondence, so it is difficult to trace a State change back to the event that actually caused it.
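To make this concrete, here is a minimal, hypothetical reducer for the news example; the action and state shapes below are illustrative only, not taken from any sample code.

```ts
// A Redux-style reducer playing the role of the "response function".
interface NewsState { users: string[] }

type NewsAction =
  | { type: 'REFRESH_SUCCESS'; users: string[] }
  | { type: 'REFRESH_FAILURE' };

function newsReducer(state: NewsState = { users: [] }, action: NewsAction): NewsState {
  switch (action.type) {
    case 'REFRESH_SUCCESS':
      // A single REFRESH_SUCCESS may originate from a click, a timer tick, or a
      // pull-down gesture, so the resulting State cannot tell which event caused it.
      return { ...state, users: action.users };
    default:
      return state;
  }
}
```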
## 3. Reactive programming and RxJS
Wikipedia defines reactive programming as follows:
In computing, reactive programming is a declarative programming paradigm concerned with data streams and the propagation of change. This means that static or dynamic data streams can be expressed easily in the programming language, and the associated computation model automatically propagates changed values through the data streams.
Let's rethink how a user interacts with the application in terms of data flow:
● Click the button -> trigger a refresh event -> send a request -> update the view
● Check auto refresh
● Touch the screen with a finger
● Auto refresh interval fires -> trigger a refresh event -> send a request -> update the view
● Slide the finger across the screen
● Auto refresh interval fires -> trigger a refresh event -> send a request -> update the view
● Finger stops sliding on the screen -> trigger a pull-down refresh event -> send a request -> update the view
● Auto refresh interval fires -> trigger a refresh event -> send a request -> update the view
● Turn off auto refresh
The marble diagram for this sequence is omitted here.
Breaking down the logic above gives three steps for developing the news application with reactive programming:
● Define the source data streams
● Compose/transform the data streams
● Consume the data streams and update the view
Let’s describe them in detail.
### Define the source data streams
Using RxJS, we can easily define various Event data streams.
1) Click
This involves the click data stream.
```ts
click$ = fromEvent<MouseEvent>(document.querySelector('button'), 'click');
```
2) Check the operation
The change data flow is involved.
```ts
change$ = fromEvent(document.querySelector('input'), 'change');
```
3) Pull down operation
The touchstart, touchmove, and touchend data streams are involved.
```ts
touchstart$ = fromEvent<TouchEvent>(document, 'touchstart');
touchend$ = fromEvent<TouchEvent>(document, 'touchend');
touchmove$ = fromEvent<TouchEvent>(document, 'touchmove');
```
4) Refresh periodically
```ts
interval$ = interval(5000);
```
5) Server request
```ts
fetch$ = fromFetch('https://randomapi.azurewebsites.net/api/users');
```
### Compose/transform the data streams
1) Click refresh event stream
When we click refresh, we want multiple clicks within a short period to trigger only one refresh, the last one, which is achieved with RxJS's debounceTime operator.
```ts
clickRefresh$ = this.click$.pipe(debounceTime(300));
```
2) Auto refresh stream
RxJS's switchMap operator works with the previously defined interval$ data stream; note that the raw change event has to be mapped to the checkbox's checked state.
```ts
autoRefresh$ = change$.pipe(
  switchMap(event =>
    (event.target as HTMLInputElement).checked ? interval$ : EMPTY
  )
);
```
3) Pull-down refresh stream
This combines the previously defined touchstart$, touchmove$, and touchend$ data streams.
```ts
pullRefresh$ = touchstart$.pipe(
  switchMap(touchStartEvent =>
    touchmove$.pipe(
      map(touchMoveEvent =>
        touchMoveEvent.touches[0].pageY - touchStartEvent.touches[0].pageY
      ),
      takeUntil(touchend$)
    )
  ),
  filter(position => position >= 300),
  take(1),
  repeat()
);
```
Finally, we merge the previously defined clickRefresh$, autoRefresh$, and pullRefresh$ with the merge function to get the refresh$ data stream.
```ts
refresh$ = merge(clickRefresh$, autoRefresh$, pullRefresh$);
```
### Consume the data stream and update the view
By flattening the refresh$ data stream through switchMap into the fetch$ stream defined in the first step, we get the view data stream.
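A minimal sketch of this step, continuing the same component (it assumes fetch$ is the fromFetch stream defined earlier and that the API returns a JSON array of users):

```ts
// Each refresh starts (or restarts) a request, and the Response body is
// parsed into the data the view needs.
view$ = refresh$.pipe(
  switchMap(() => fetch$),
  switchMap(response => response.json())
);
```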
The view stream can be mapped directly to the view with the async pipe in Angular:
```html
<div *ngFor="let user of view$ | async">
</div>
```
In other frameworks, you can subscribe to the stream to get the actual data and then update the view, as sketched below.
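A framework-agnostic sketch of that idea, assuming a hypothetical renderUsers helper that writes the list into the DOM:

```ts
// Subscribe manually and re-render on every emission.
const subscription = view$.subscribe(users => {
  renderUsers(users); // hypothetical helper, not part of the sample code
});

// Remember to clean up when the view is destroyed to avoid leaks:
// subscription.unsubscribe();
```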
At this point, we have completed the news application using reactive programming; the sample code [1], built with Angular, is no more than 160 lines long.
Let's summarize how the three steps of developing front-end applications with reactive programming correspond to the formula in Section 1:
View = reactionFn(UserEvent | Timer | Remote API)
1) Define the source data streams
Corresponding to the events UserEvent | Timer | Remote API, the matching RxJS functions are:
● UserEvent: fromEvent
● Timer: interval, timer
● Remote API: fromFetch, webSocket
2) Compose/transform the data streams
Corresponding to the response function, some of the matching RxJS operators are:
● Combining: merge, combineLatest, zip
● Mapping: map
● Filtering: filter
● Reducing: reduce, max, count, scan
● Taking: take, takeUntil
● Skipping: skip, skipUntil
● Time: delay, debounceTime, throttleTime
3) Consume data streams to update views
Corresponding to the View, RxJS and Angular provide:
● subscribe
● async pipe
What are the advantages of reactive programming over MVVM or Redux?
● It describes the events themselves, not the computation or intermediate state.
● It provides a way to compose and transform data streams, which gives us a way to reuse the logic that handles constantly changing data.
● Because all data streams are composed and transformed layer by layer, we can accurately trace the source of events and data changes.
If we blur the timeline of the RxJS marble diagram and add a vertical slice every time the view is updated, we see two interesting things:
● Action is a simplification of the event stream.
● State is a snapshot of the stream at a certain point in time.
No wonder the Redux website says: If you already use RxJS, you probably don’t need Redux anymore.
The question is: do you really need Redux if you already use Rx? Maybe not. It’s not hard to re-implement Redux in Rx. Some say it’s a two-liner using Rx.scan() method. It may very well be!
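As an illustration of that claim (a hedged sketch of the idea, not the Redux library's actual API), a Redux-like store can indeed be expressed as a scan over an action stream:

```ts
import { Subject } from 'rxjs';
import { scan, startWith } from 'rxjs/operators';

// Hypothetical action and state shapes for the news example.
type Action = { type: 'REFRESH_SUCCESS'; users: string[] };
interface State { users: string[] }

const initialState: State = { users: [] };
const action$ = new Subject<Action>();

// The "two-liner": a Redux-like store is just a scan over the action stream.
const state$ = action$.pipe(
  scan(
    (state: State, action: Action) =>
      action.type === 'REFRESH_SUCCESS' ? { ...state, users: action.users } : state,
    initialState
  ),
  startWith(initialState)
);

state$.subscribe(state => console.log('users in state:', state.users.length));
action$.next({ type: 'REFRESH_SUCCESS', users: ['Alice', 'Bob'] });
```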
At this point, can we further abstract the statement that web views respond correctly to related events?
All events -- find --> related events -- make --> correct responses
Events ordered in time are essentially data streams, so the statement can be extended to:
Source data stream -- Transform --> Intermediate data stream -- Subscribe --> Data consumption
This is the basic idea behind reactive programming, and it works very well on the front end. But does this idea only apply to front-end development?
The answer is no. This idea can be applied not only to front-end development, but also to back-end development and real-time computing.
## 4. Breaking down the wall of information
Front-end and back-end developers are often separated by an information wall called the REST API. It isolates the responsibilities of the two sides and improves development efficiency, but it also blocks each side's view of the other. Let's try to tear down that wall and see how the same ideas apply to real-time computing.
### 1. Real-time computing with Apache Flink
Before moving on, let's briefly introduce Flink. Apache Flink is an open-source stream processing framework developed by the Apache Software Foundation for stateful computation over unbounded and bounded data streams. Its dataflow programming model provides event-at-a-time processing over finite and infinite data sets.
In practical applications, Flink is usually used to develop the following three applications:
● Event-driven applications: extract data from one or more event streams and trigger computations, state updates, or other external actions based on incoming events. Scenarios include rule-based alerting, anomaly detection, fraud detection, and so on.
● Data analytics applications: extract valuable information and metrics from raw data, for example calculating Double 11 transaction volume or monitoring network quality.
● Data pipeline (ETL) applications: Extract-Transform-Load (ETL) is a common way to convert and move data between storage systems. ETL jobs are typically triggered periodically to copy data from a transactional database into an analytical database or data warehouse.
Here, we take the calculation of an e-commerce platform's hourly Double 11 transaction volume as an example to see whether the model from the previous section still applies.
In this scenario, we first obtain users' order data, then compute the hourly transaction totals, write them to a database (cached by Redis), and finally fetch them through an API and display them on a page.
The data flow processing logic in this link is as follows:
User order data stream -- Transform --> Hourly transaction data stream -- Subscribe --> Write to the database
As described in the previous chapter:
Source data stream -- Transform --> Intermediate data stream -- Subscribe --> Data consumption
The idea is exactly the same.
If we described the process with a marble diagram, it would look simple enough to implement with RxJS's window operators, but is that really the case?
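To illustrate that intuition, here is a hedged sketch of the naive approach; order$ and its amount field are made-up names, and fixed one-hour windows are formed by arrival time, which is exactly the assumption questioned below.

```ts
import { Subject } from 'rxjs';
import { bufferTime, map } from 'rxjs/operators';

interface Order { amount: number }

const order$ = new Subject<Order>();

// Naive approach: group orders into fixed one-hour windows by *arrival* time
// and sum the amounts within each window.
const hourlyTotal$ = order$.pipe(
  bufferTime(60 * 60 * 1000),
  map(orders => orders.reduce((sum, order) => sum + order.amount, 0))
);

hourlyTotal$.subscribe(total => console.log('hourly total:', total));
```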
### 2. Hidden complexity
Real-world real-time computing is much more complex than reactive programming on the front end. Here are a few examples.
#### Event order
In front-end development we also encounter out-of-order events; the classic case is requests being sent in one order and their responses arriving in another. There are several ways to handle this on the front end, but we will skip them here.
What we want to introduce here is out-of-order time in data processing. Front-end development rests on an important premise that greatly reduces its complexity: the time an event occurs is essentially the same as the time it is processed.
Imagine the complexity of front-end development if user page actions such as click and mousemove all became asynchronous events with unknown processing delays.
In real-time computing, however, an important premise is that event time differs from processing time. Returning to the hourly transaction volume example: after the raw data stream passes through multiple layers of transport, the data arriving at the compute node is very likely to be out of order.
If we still divide windows by the arrival time of the data, the final result will be wrong.
To make the result of Window 2 correct, we need to wait for late events to arrive before computing, but then we face a dilemma:
● Wait forever: late events may be lost in transit, and Window 2 never produces a result.
● Wait too briefly: late events have not yet arrived, and the result is incorrect.
To solve this problem, Flink introduces the watermark mechanism, which defines when to stop waiting for late events; essentially, it is a trade-off between the accuracy and the latency of real-time computation.
An analogy for the watermark is a teacher closing the classroom door and announcing that anyone who arrives after this moment counts as late. In Flink, the watermark plays the role of the teacher closing the door.
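To make the idea concrete, here is a deliberately simplified sketch in TypeScript rather than Flink's actual API; the window size and allowed lateness are assumptions. Events carry an event time, the watermark trails the maximum event time seen, and a one-hour window is emitted only once the watermark passes its end.

```ts
interface Order { amount: number; eventTime: number } // eventTime in milliseconds

const HOUR = 60 * 60 * 1000;
const ALLOWED_LATENESS = 5 * 60 * 1000; // assumption: wait at most 5 minutes for stragglers

const windows = new Map<number, number>(); // window start -> running total
let maxEventTime = 0;

// Feed one (possibly out-of-order) event; returns any windows closed by the new watermark.
function onOrder(order: Order): Array<{ windowStart: number; total: number }> {
  maxEventTime = Math.max(maxEventTime, order.eventTime);
  const watermark = maxEventTime - ALLOWED_LATENESS; // "the door closes here"

  const windowStart = Math.floor(order.eventTime / HOUR) * HOUR;
  if (windowStart + HOUR > watermark) {
    // Only count events whose window is still open; a real system would route
    // truly late events to a side output instead of silently ignoring them.
    windows.set(windowStart, (windows.get(windowStart) ?? 0) + order.amount);
  }

  const closed: Array<{ windowStart: number; total: number }> = [];
  for (const [start, total] of windows) {
    if (start + HOUR <= watermark) { // the window end is behind the watermark: stop waiting
      closed.push({ windowStart: start, total });
      windows.delete(start);
    }
  }
  return closed;
}
```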
#### Data backpressure
When using RxJS in a browser, an Observable may produce data faster than an operator or observer can consume it, so a large amount of unconsumed data gets buffered in memory. This condition is known as backpressure. Fortunately, backpressure on the front end mostly just increases browser memory usage and not much more; the sketch below shows a typical case.
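A small illustration of that buffering (a hedged sketch; the intervals are arbitrary):

```ts
import { interval, zip } from 'rxjs';

// fast$ emits every 10 ms, slow$ every 1000 ms. zip pairs them one-to-one, so
// values from fast$ accumulate in zip's internal buffer faster than they are
// consumed, and memory usage keeps growing: a front-end form of backpressure.
const fast$ = interval(10);
const slow$ = interval(1000);

zip(fast$, slow$).subscribe(([fastValue, slowValue]) => {
  console.log(`paired ${fastValue} with ${slowValue}`);
});
```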
But in real-time computing, what should be done when data is produced faster than an intermediate node can process it, or faster than downstream consumers can absorb it?
Because data loss is unacceptable for many streaming applications, Flink uses a mechanism that works as follows:
● Ideally, data is buffered in a persistent channel.
● When data is produced faster than an intermediate node can process it, or faster than downstream can consume it, the slower receiver slows down the sender as soon as the queue's buffering capacity is exhausted. More figuratively, when the flow slows down, back pressure propagates through the whole pipe from the sink back to the source, throttling the flow so that its speed adjusts to the slowest part and the system reaches a stable state.
#### Checkpoints
In real-time computing, billions of records may be processed every second, far more than a single machine can handle. In Flink, operator logic is executed by different subtasks on different TaskManagers. This raises another problem: when one machine fails, how do we recover the overall computation and its state so that the final result is still correct?
Flink introduces the checkpoint mechanism so that a job's state and processing positions can be recovered, which gives Flink good fault tolerance. Flink uses a variant of the Chandy-Lamport algorithm called asynchronous barrier snapshotting.
When a checkpoint starts, all sources record their offsets and insert numbered checkpoint barriers into their streams. As these barriers flow through each operator, they mark the portions of the stream that belong before and after each checkpoint.
When a failure occurs, Flink restores the job from the latest checkpoint state to ensure the correctness of the final result.
## 5. The tip of the iceberg
Due to space constraints, this introduction covers only the tip of the iceberg, but the model
Source data stream -- Transform --> Intermediate data stream -- Subscribe --> Data consumption
is universal in both reactive programming and real-time computing. I hope this article prompts you to think further about the idea of data flow.
Related links: [1]github.com/vthinkxie/n…