As mentioned above, useReducer’s dispatch can be regarded as a “Maxwell’s demon” in a closed system: where two subsystems are fused together, it keeps one of them in a low-entropy environment. That may be a bit of a mouthful, but the focus should be on maintaining low entropy (a low number of states and a low number of state changes). Don’t worry too much about the other concepts.

So what do a low number of states and a low number of state changes actually mean? Let’s start with an example:

Given two states a and b, output the value of whichever of a/b changed most recently, once 1000ms have passed with no further state change.

With stream-based thinking plus one-way data flow, you focus on “1000ms with no changes” and “the most recent change”, i.e. you combine debounce and merge, and you get the desired result quickly, in barely a handful of characters:

// a$ and b$ are the observables of a and b; emit the latest value after 1000ms of silence
const result$ = merge(a$, b$).pipe(debounceTime(1000))

Seems like an effortless job, right? Not necessarily, for the simple reason that too much is hidden: the scheduling is hidden, the data relationships are hidden, the initiator/recipient relationships are abstracted away, and so is the entire logical structure of the system.

This style is great for personal code or for wrapping mature, stable code, but for an engineering project in the middle of ongoing iteration, this kind of “showing off” makes no sense: are you sure you’re not going to screw it up?

Let’s take a look at how many levels of abstraction the stream code above contains (purists would go further and say even the assignment sign shouldn’t appear):

  1. Reactive data abstraction: data and all of its changes are abstracted into a single structure (similar to Vue’s ref and React state)

  2. Scheduling abstraction: schedules are abstracted into streams, and the relationships between schedules are abstracted into logic between streams (i.e. marble diagrams)

There are two levels of abstraction here. Is that a good thing? Each layer of abstraction raises the cost of communication, and two layers clearly don’t fit ordinary engineering requirements. Of course, the example above is simple; if your technology stack is already built around this kind of scheduling (for example Angular with its zone-based scheduling), then streams are the best practice, provided the team can accept this level of abstraction. Even so, putting complex logic into streams is still not recommended, because a project is never just one person’s.
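To make the two layers visible (a minimal sketch of my own, assuming RxJS Subjects stand in for a and b), here is the same snippet with each abstraction called out:

import { Subject, merge } from 'rxjs'
import { debounceTime } from 'rxjs/operators'

// Layer 1: reactive data abstraction. The value of a (and every future change to it)
// is wrapped into a single structure, the Subject.
const a$ = new Subject<string>()
const b$ = new Subject<string>()

// Layer 2: scheduling abstraction. "When either changes" and "after 1000ms of silence"
// become relationships between streams rather than explicit control flow.
const result$ = merge(a$, b$).pipe(debounceTime(1000))

result$.subscribe(result => console.log('latest stable value:', result))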

OK, let’s implement this logic with data-driven state instead, using only the reactive data abstraction that hooks give us.

const [a, setA] = useState('')
const [b, setB] = useState('')
const [result, setResult] = useState('')
useEffect(() => {
  setTimeout(() => {
      // ... output the most recently changed of a/b here
  }, 1000)
}, [a, b])

Wait a minute! Suddenly a few problems appear that make it hard to move forward!

  1. setTimeout is an event source outside the React scheduler. Its callback also needs the right dependencies passed in, otherwise it cannot read the latest values

  2. We need the value that changed most recently, so we need to know the change times of a and b. Clearly a, b and result alone are not enough

  3. Although the effect is initiated by a and b, setTimeout also implies one more piece of data, the time of the change, and without it we cannot determine which of a and b changed last

const [a, setA] = useState('')
const [b, setB] = useState('')
const [result, setResult] = useState('')
// The time at which a or b last changed
const [abEffect, setAbEffect] = useState<Date | undefined>()

useEffect(() => {
  setAbEffect(new Date())
}, [a, b])

// Schedule data: [time the timeout fired, value at that moment]
const [aTimeout, setATimeout] = useState<[Date, string] | undefined>()
const [bTimeout, setBTimeout] = useState<[Date, string] | undefined>()

// Record the time of a's change
const handleATimeout = useCallback(() => {
  // Write down the value of a
  setATimeout([new Date(), a])
}, [a])
useEffect(() => {
  setTimeout(handleATimeout, 1000)
}, [handleATimeout])

// Record the time of b's change
const handleBTimeout = useCallback(() => {
  setBTimeout([new Date(), b])
}, [b])
useEffect(() => {
  setTimeout(handleBTimeout, 1000)
}, [handleBTimeout])

// When both change times are known, find the most recent change of a/b

useEffect(() => {
    if (aTimeout && bTimeout && abEffect) {
      const [aT, aValue] = aTimeout
      const [bT, bValue] = bTimeout
      let abT = aT
      let latest = aValue
      if (aT.getTime() - bT.getTime() > 0) {
          abT = aT
          latest = aValue
      } else {
          abT = bT
          latest = bValue
      }
      if (abT.getTime() - abEffect.getTime() > 1000) {
          setResult(latest)
      }
    }
}, [aTimeout, bTimeout, abEffect])

Looking at the code above, you’ll find it is a mess and, at the same time, its performance is poor!

Why is that? The reason is that React itself cannot sense anything other than its synthetic events; timeout, WebSocket, Web Worker, fetch, MediaStream and so on are all invisible to it. To guarantee data consistency, they must be invoked inside useEffect and their dependencies must be passed along correctly.

For this reason, handling these events has to be made data-driven as well: no visible async, no visible events, everything expressed purely in terms of data state. That brings a huge performance cost plus lengthy code.

Note that this is only true for events React is not aware of, i.e. events that cannot trigger React’s own event dispatch. In other words, the purely data-driven style only becomes a huge burden outside the React-managed SyntheticEvent -> effect -> setState -> useMemo -> JSX pipeline. It does not mean the normal React MVI model is this bad; hooks code without async in it is astonishingly efficient.
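For instance (a hypothetical sketch of my own, not from the original), even a plain WebSocket listener has to be folded into this data-driven pipeline: the handler must declare the state it reads as dependencies, and the effect has to tear down and rebuild the listener every time those dependencies change.

import { useCallback, useEffect, useState } from 'react'

function PriceAlert({ url }: { url: string }) {
  const [threshold, setThreshold] = useState(100)
  const [warning, setWarning] = useState('')

  // The handler must list threshold as a dependency, otherwise it keeps
  // reading the stale value captured when the socket was first opened
  const handleMessage = useCallback((event: MessageEvent) => {
    const price = Number(event.data)
    if (price > threshold) setWarning(`price ${price} is above ${threshold}`)
  }, [threshold])

  // Every change of handleMessage re-runs the effect: listener removed,
  // socket closed, socket reopened, purely to keep the data flow consistent
  useEffect(() => {
    const socket = new WebSocket(url)
    socket.addEventListener('message', handleMessage)
    return () => {
      socket.removeEventListener('message', handleMessage)
      socket.close()
    }
  }, [url, handleMessage])

  return (
    <>
      <input value={threshold} onChange={e => setThreshold(Number(e.target.value))} />
      <p>{warning}</p>
    </>
  )
}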

We find that once uncertain events and uncertain external systems are involved, the huge increase in entropy inevitably leads to data inflation (the three quantities a, b and result are nowhere near enough; the rise in entropy is precisely this inflation of data and the growth of possible data states).

Data such as aTimeout, bTimeout and abEffect belong to the entropy of the timeout system, not to the current component’s own code.

So this is the moment to bring out the Maxwell’s demon inside useReducer, and lock aTimeout, bTimeout, abEffect and friends inside the useReducer structure, guaranteeing a low-entropy environment for everything outside it.

const [state, dispatch] = useReducer((state, [type, payload]) => {
    // callback from the timeout
    if (type === 'change') {
      const now = new Date()
      if (now.getTime() - state.triggerTime.getTime() > 1000) {
        return { ...state, result: payload }
      }
    }
    // API call: a or b just changed
    if (type === 'trigger') {
      return { ...state, triggerTime: new Date() }
    }
    return state
}, undefined, () => ({ result: '', triggerTime: new Date() }))


const handleA = useCallback(() => {
  dispatch(['change', a])
}, [a])
useEffect(() => {
    dispatch(['trigger'])
    setTimeout(handleA, 1000)
}, [handleA])

const handleB = useCallback(() => {
  dispatch(['change', b])
}, [b])
useEffect(() => {
    dispatch(['trigger'])
    setTimeout(handleB, 1000)
}, [handleB])
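Putting the pieces together, a complete (hypothetical, slightly typed-up) component might look like the sketch below; note that the only thing the rest of the component ever reads is state.result, while triggerTime stays sealed inside the reducer:

import { useCallback, useEffect, useReducer, useState } from 'react'

type AbState = { result: string; triggerTime: Date }
type AbAction = [type: 'change' | 'trigger', payload?: string]

function abReducer(state: AbState, [type, payload]: AbAction): AbState {
  // callback from the timeout
  if (type === 'change' && payload !== undefined) {
    const now = new Date()
    if (now.getTime() - state.triggerTime.getTime() > 1000) {
      return { ...state, result: payload }
    }
  }
  // a or b just changed
  if (type === 'trigger') {
    return { ...state, triggerTime: new Date() }
  }
  return state
}

function LatestAB() {
  const [a, setA] = useState('')
  const [b, setB] = useState('')
  const [state, dispatch] = useReducer(abReducer, undefined, () => ({ result: '', triggerTime: new Date() }))

  const handleA = useCallback(() => dispatch(['change', a]), [a])
  useEffect(() => {
    dispatch(['trigger'])
    const id = setTimeout(handleA, 1000)
    return () => clearTimeout(id)
  }, [handleA])

  const handleB = useCallback(() => dispatch(['change', b]), [b])
  useEffect(() => {
    dispatch(['trigger'])
    const id = setTimeout(handleB, 1000)
    return () => clearTimeout(id)
  }, [handleB])

  return (
    <>
      <input value={a} onChange={e => setA(e.target.value)} />
      <input value={b} onChange={e => setB(e.target.value)} />
      <p>latest stable value: {state.result}</p>
    </>
  )
}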

Note that useReducer is a state machine and is not designed to handle asynchrony.

Maybe give it async power with a thunk? There’s no need: useEffect is already the reactive dispatch processing center, so useReducer doesn’t have to handle async itself. This is different from Redux (bare Redux has nowhere else to put async).

By the way, you can even hand-roll a simple thunk-style reducer (a pseudo-reducer) yourself:

// action is itself a function (a thunk); setAction plays the role of dispatch
const [action, setAction] = useState(() => () => ['default', undefined])

useEffect(() => {
   const [type, payload] = action()
   if (type === 'init') {
    // reducer branch for 'init' goes here
   }
}, [action])

This is a bare-bones reducer scheme, with setAction playing the role of dispatch, but its biggest problem is that it distributes events according to React’s own scheduling, which means:

  1. Events are scheduled with a delay

  2. Synchronous events are merged (batched)

Neither of these is an issue for the vast majority of event calls, but in the rare case of immediate, concurrent events (synchronous concurrency), useReducer is the better choice.
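A rough, hypothetical illustration of the difference: with the useState-based dispatch, two synchronous “events” are batched into a single state update, so the effect only ever processes the last one; with useReducer, every dispatched action goes through the reducer, even when fired synchronously.

import { useEffect, useReducer, useState } from 'react'

type Action = [string, number]

// useState-based dispatch: the two synchronous setAction calls are batched,
// so the effect runs once and never processes ['add', 1]
function StateDrivenCounter() {
  const [count, setCount] = useState(0)
  const [action, setAction] = useState<() => Action>(() => () => ['default', 0] as Action)

  useEffect(() => {
    const [type, payload] = action()
    if (type === 'add') setCount(c => c + payload)
  }, [action])

  const fireTwice = () => {
    setAction(() => () => ['add', 1] as Action) // overwritten before the effect ever runs
    setAction(() => () => ['add', 2] as Action)
  }
  return <button onClick={fireTwice}>count: {count}</button> // each click adds 2, not 3
}

// useReducer dispatch: both synchronous actions are pushed through the reducer
function ReducerCounter() {
  const [count, dispatch] = useReducer(
    (c: number, [type, payload]: Action) => (type === 'add' ? c + payload : c),
    0
  )
  const fireTwice = () => {
    dispatch(['add', 1])
    dispatch(['add', 2])
  }
  return <button onClick={fireTwice}>count: {count}</button> // each click adds 3
}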

useMemo is the state memo, and useReducer is the event memo.

Yes, to sum up: if React development is regarded as a process of entropy control (development itself is a process of entropy control), then useMemo performs entropy control for React’s main process, that is, it reduces the number of possible states.

useReducer, by contrast, is more like the event memo, which reduces the entropy contributed by events.
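As a closing, hypothetical sketch of the two memos side by side: useMemo collapses many values into one derived value, so the rest of the component has fewer states to consider; useReducer collapses many event sources into one reducer, so the rest of the component has fewer event flows to consider.

import { useMemo, useReducer } from 'react'

function Cart({ prices }: { prices: number[] }) {
  // State memo: the derived total is the only value consumers need to read
  const total = useMemo(() => prices.reduce((sum, p) => sum + p, 0), [prices])

  // Event memo: every coupon event is funnelled through one reducer
  const [discount, dispatch] = useReducer(
    (current: number, [type, payload]: [string, number]) =>
      type === 'coupon' ? Math.max(current, payload) : current,
    0
  )

  return (
    <button onClick={() => dispatch(['coupon', 5])}>
      pay {Math.max(total - discount, 0)}
    </button>
  )
}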