What are immutable data structures
The concept of immutable data comes from functional programming, where a program processes data through a series of pure functions. A pure function returns a new data object each time rather than modifying its input, which keeps data immutable and makes programs more stable and readable.
JavaScript has no built-in concept of immutable data; apart from the primitive types, all reference types are mutable.
var test = {
a: {
b: 1
}
};
var copy = test;
test.a.b = 2;
console.log(copy); // {a: {b: 2}}
As the example shows, reference types in JS share the same memory address, so a change to the data is visible through every variable pointing to that address.
Immutable data structures in React
In React and Redux, immutable data is required to manage state.
In React, when you change the state of a component, that component and all of its children re-render. When an application is very large, this full-subtree update can be expensive and even cause the page to lag. React provides shouldComponentUpdate to optimize rendering: this lifecycle method determines whether the current component needs to re-render.
In shouldComponentUpdate, you check whether the component's props and state have changed and tell the component whether to re-render by returning true or false.
Because of this optimization, if a component is wrapped in PureComponent or React.memo, or optimized with shouldComponentUpdate, its state must be treated as immutable. Otherwise, the old and new data are the same reference, the shallow comparison always concludes that nothing has changed, and the component never re-renders.
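To see why, here is a minimal sketch of the shallow comparison that PureComponent performs (a simplification; React's real shallowEqual handles more edge cases):

```javascript
// Simplified shallow comparison, roughly what React.PureComponent does.
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => Object.is(a[key], b[key]));
}

const prevState = { words: ['hello'] };

// Mutating in place: prev and next are the same reference,
// so the component is (wrongly) considered unchanged.
const mutated = prevState;
mutated.words.push('music');
console.log(shallowEqual(prevState, mutated)); // true -- no re-render

// Returning a new object with a new array: the change is detected.
const next = { words: [...prevState.words, 'music'] };
console.log(shallowEqual(prevState, next)); // false -- re-render
```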
Regular writing
Methods such as Object.assign can quickly produce a new object, but they only perform a shallow copy and are powerless against deeply nested data.
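For example, a copy produced by Object.assign or the spread operator still shares its nested objects with the original:

```javascript
const state = { user: { name: 'music', age: 18 }, count: 0 };

// Both of these create only a shallow copy.
const copy1 = Object.assign({}, state);
const copy2 = { ...state };

console.log(copy1.user === state.user); // true -- nested object is shared
console.log(copy2.user === state.user); // true

// Mutating the nested object through the copy also changes the original.
copy1.user.age = 19;
console.log(state.user.age); // 19
```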
It is possible to deep-copy the state on every operation so that the state is a brand-new object each time, but this is problematic. On one hand, deep copying is expensive when the data is large and operations are frequent. On the other hand, a deep copy creates entirely new objects on every update, so shouldComponentUpdate can never detect an unchanged state, and every change in a parent component still re-renders its children.
In a React component, to return a new state on each operation, we typically use the spread operator.
A simple example:
handleAdd = () => {
  this.setState((state) => ({
    words: [...state.words, 'music']
  }));
}
But when dealing with complex nested data, this approach becomes unreadable, and the code is hard to maintain once complex data manipulation is required:
handleClick = () => {
  this.setState((state) => {
    return {
      address: {
        ...state.address,
        province: {
          ...state.address.province,
          city: 'hangzhou',
        }
      }
    };
  });
}
As with component state, this kind of nested spreading also appears when writing reducer functions in Redux.
For this cumbersome spreading of nested data, the community offers two mainstream immutable-data libraries that can help.
ImmutableJS
ImmutableJS is an open-source library from Facebook. It implements Persistent Data Structures and uses Structural Sharing to share data efficiently: when a node in an object tree changes, only that node and its ancestors are recreated, while all other nodes are shared with the new object tree.
var test = { name: 'music', list: [1, 2, 3, 4] };
var a = Immutable.fromJS(test);
var b = a.set('name', 'netease');
console.log(a.get('list') === b.get('list')); // true
a is an Immutable object converted from test. b is generated from a with the name property modified; the list property is untouched, so a and b share the same list.
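The idea of structural sharing can be sketched in a few lines of plain JS: an update helper copies only the nodes along the modified path and reuses everything else (an illustrative setIn, not ImmutableJS's actual trie-based implementation):

```javascript
// Return a new object with `path` set to `value`, copying only the
// nodes along the path and sharing everything else.
function setIn(obj, path, value) {
  if (path.length === 0) return value;
  const [head, ...rest] = path;
  return { ...obj, [head]: setIn(obj[head], rest, value) };
}

const state = {
  user: { name: 'music', age: 18 },
  settings: { theme: 'dark' },
};

const next = setIn(state, ['user', 'age'], 19);

console.log(next.user.age);                    // 19
console.log(next.user === state.user);         // false -- node on the path is copied
console.log(next.settings === state.settings); // true  -- untouched node is shared
```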
Although the ImmutableJS implementation is an immutable data structure, there are some obvious problems:
- ImmutableJS implements its own set of data structures internally and is not interchangeable with plain JS. It requires fromJS to convert JS objects into its own structures, and toJS to convert Immutable data back for plain JS code.
- ImmutableJS has many concepts and APIs of its own, so the learning cost is high, and the library itself is fairly large.
Immer.js
Immer is an immutable-data library written by the author of MobX. It uses ES6 Proxy to intercept operations on the data and shares the unmodified parts of an object, falling back to an Object.defineProperty-based implementation for browsers that do not support Proxy.
Immer has a simple design and no complex API; it works through JS's built-in syntax, so there is essentially no learning cost, and it covers most immutable-data needs.
import produce from 'immer';

/**
 * Classic React.setState with a deep merge
 */
onBirthDayClick1 = () => {
  this.setState(prevState => ({
    user: {
      ...prevState.user,
      age: prevState.user.age + 1
    }
  }))
}

/**
 * ...But, since setState accepts functions,
 * we can just create a curried producer and further simplify!
 */
onBirthDayClick2 = () => {
  this.setState(
    produce(draft => {
      draft.user.age += 1
    })
  )
}
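To make the mechanism concrete, here is a toy copy-on-write produce built on Proxy (a single-level sketch; real Immer also proxies nested objects, tracks drafts, and supports arrays and patches):

```javascript
// Toy single-level produce: copy-on-write via Proxy.
function produce(base, recipe) {
  let copy = null;
  const draft = new Proxy(base, {
    get(target, prop) {
      return (copy ?? target)[prop];
    },
    set(target, prop, value) {
      if (copy === null) copy = { ...target }; // copy on first write
      copy[prop] = value;
      return true;
    },
  });
  recipe(draft);
  return copy ?? base; // unchanged input is returned as-is
}

const user = { name: 'music', age: 18 };
const older = produce(user, (draft) => { draft.age += 1; });

console.log(older.age); // 19
console.log(user.age);  // 18 -- original untouched
console.log(produce(user, () => {}) === user); // true -- no write, same reference
```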
The use-immer library is also available to support hooks:
import React from "react";
import { useImmer } from "use-immer";
function App() {
const [person, updatePerson] = useImmer({
name: "Michel",
age: 33
});
function updateName(name) {
updatePerson(draft => {
draft.name = name;
});
}
function becomeOlder() {
updatePerson(draft => {
draft.age++;
});
}
return (
<div className="App">
<h1>
Hello {person.name} ({person.age})
</h1>
<input
onChange={e => {
updateName(e.target.value);
}}
value={person.name}
/>
<br />
<button onClick={becomeOlder}>Older</button>
</div>
);
}
New ECMAScript proposal: Record & Tuple
The Record and Tuple proposals are currently in stage 2 and subject to change.
While third-party libraries can provide immutable data, they do not work at the JavaScript language level.
In contrast, the proposed Record and Tuple are built-in, deeply immutable data structures.
- Record and Tuple are easier to debug
- Records and tuples are close to objects and arrays in usage and writing, without the special operations that some third-party libraries require to convert different data structures
- Avoiding costly conversions between regular JS objects and immutable structures makes it easier for developers to keep using immutable data
Records and Tuples are deeply immutable because the specification enforces that they may only contain primitive values and other Records and Tuples.
Simple example
Record
const proposal = #{
  id: 1234,
  title: "Record & Tuple proposal",
  contents: `...`,
  // tuples are primitive types so you can put them in records:
  keywords: #["ecma", "tc39", "proposal", "record", "tuple"],
};

// Accessing keys like you would with objects!
console.log(proposal.title); // Record & Tuple proposal
console.log(proposal.keywords[1]); // tc39

// Spread like objects!
const proposal2 = #{
  ...proposal,
  title: "Stage 2: Record & Tuple",
};
console.log(proposal2.title); // Stage 2: Record & Tuple
console.log(proposal2.keywords[1]); // tc39

// Object functions work on Records:
console.log(Object.keys(proposal)); // ["contents", "id", "keywords", "title"]
Tuple
const measures = #[42, 12, 67, "measure error: foo happened"];
// Accessing indices like you would with arrays!
console.log(measures[0]); // 42
console.log(measures[3]); // measure error: foo happened
// Slice and spread like arrays!
const correctedMeasures = #[
...measures.slice(0, measures.length - 1),
-1
];
console.log(correctedMeasures[0]); // 42
console.log(correctedMeasures[3]); // -1
// or use the .with() shorthand for the same result:
const correctedMeasures2 = measures.with(3, -1);
console.log(correctedMeasures2[0]); // 42
console.log(correctedMeasures2[3]); // -1
// Tuples support methods similar to Arrays
console.log(correctedMeasures2.map(x => x + 1)); // #[43, 13, 68, 0]
Like Records, Tuples are compared by value, and we can treat them as array-like structures:
const ship1 = #[1, 2];
// ship2 is an array:
const ship2 = [-1, 3];
function move(start, deltaX, deltaY) {
// we always return a tuple after moving
return #[
start[0] + deltaX,
start[1] + deltaY,
];
}
const ship1Moved = move(ship1, 1, 0);
// passing an array to move() still works:
const ship2Moved = move(ship2, 3, -1);
console.log(ship1Moved === ship2Moved); // true
// ship1 and ship2 have the same coordinates after moving
Prohibited operations
As mentioned earlier, Records and Tuples are deeply immutable, so inserting an object into them raises a TypeError:
const instance = new MyClass();
const constContainer = #{
instance: instance
};
// TypeError: Record literals may only contain primitives, Records and Tuples
const tuple = #[1, 2, 3];
tuple.map(x => new MyClass(x));
// TypeError: Callback to Tuple.prototype.map may only return primitives, Records or Tuples
// The following should work:
Array.from(tuple).map(x => new MyClass(x))
Syntax
In this proposal, new syntax fragments are defined that will be added to the JavaScript language.
We denote a Record or Tuple by adding the # prefix to an ordinary object or array literal.
#{}
#{ a: 1, b: 2 }
#{ a: 1, b: #[2, 3, #{ c: 4 }] }
#[]
#[1, 2]
#[1, 2, #{ a: 3 }]
Syntax errors
Unlike arrays, Tuples do not allow holes:
const x = #[,]; // SyntaxError, holes are disallowed by syntax
The __proto__ identifier cannot be used as a property key in a Record literal:
const x = #{ __proto__: foo }; // SyntaxError, __proto__ identifier prevented by syntax
const y = #{ "__proto__": foo }; // valid, creates a record with a "__proto__" property.
Method shorthand is not allowed in Records:
#{ method() { } } // SyntaxError
Runtime errors
Records only allow strings as keys; Symbols are not allowed:
const record = #{ [Symbol()]: #{} };
// TypeError: Record may only have string as keys
Records and Tuples can only contain primitives and other Records and Tuples. Attempting to add a value of any type other than Record, Tuple, String, Number, Symbol, Boolean, BigInt, undefined, or null raises a TypeError.
Equality
Like Boolean and string primitives, Records and Tuples compare values for equality, not references.
assert(#{ a: 1 } === #{ a: 1 });
assert(#[1, 2] === #[1, 2]);
Regular JS objects behave differently:
assert({ a: 1 } !== { a: 1 });
assert(Object(#{ a: 1 }) !== Object(#{ a: 1 }));
assert(Object(#[1, 2]) !== Object(#[1, 2]));
The order of keys in a Record does not affect comparison, because keys are implicitly sorted:
assert(#{ a: 1, b: 2 } === #{ b: 2, a: 1 });
Object.keys(#{ a: 1, b: 2 }) // ["a", "b"]
Object.keys(#{ b: 2, a: 1 }) // ["a", "b"]
If they have the same structure and contents, Records and Tuples are equal under each of the following: Object.is(), ==, ===, and the SameValueZero algorithm (used to compare the keys of Maps and Sets). The only difference between them is how they handle -0:
- Object.is() treats -0 and +0 as unequal
- ==, ===, and SameValueZero treat -0 and +0 as equal
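No engine ships Record and Tuple yet, but the same three equality semantics can be observed today with plain numbers, since Map and Set already use SameValueZero:

```javascript
// Object.is distinguishes -0 from +0, but treats NaN as equal to itself.
console.log(Object.is(-0, +0));   // false
console.log(Object.is(NaN, NaN)); // true

// === treats -0 and +0 as equal, but NaN as unequal to itself.
console.log(-0 === +0);   // true
console.log(NaN === NaN); // false

// SameValueZero (used by Map/Set): -0 equals +0, and NaN equals NaN.
const set = new Set([-0, NaN]);
console.log(set.has(+0));  // true
console.log(set.has(NaN)); // true
```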
For values nested in Records and Tuples, == and === are more direct than for objects: they return true if and only if the contents are identical (apart from +0/-0). This directness also affects NaN comparisons and comparisons with other types.
assert(#{ a: 1 } === #{ a: 1 });
assert(#[1] === #[1]);

assert(#{ a: -0 } === #{ a: +0 });
assert(#[-0] === #[+0]);
assert(#{ a: NaN } === #{ a: NaN });
assert(#[NaN] === #[NaN]);

assert(#{ a: -0 } == #{ a: +0 });
assert(#[-0] == #[+0]);
assert(#{ a: NaN } == #{ a: NaN });
assert(#[NaN] == #[NaN]);
assert(#[1] != #["1"]);

assert(!Object.is(#{ a: -0 }, #{ a: +0 }));
assert(!Object.is(#[-0], #[+0]));
assert(Object.is(#{ a: NaN }, #{ a: NaN }));
assert(Object.is(#[NaN], #[NaN]));

// Map keys are compared with the SameValueZero algorithm
assert(new Map().set(#{ a: 1 }, true).get(#{ a: 1 }));
assert(new Map().set(#[1], true).get(#[1]));
assert(new Map().set(#[-0], true).get(#[0]));
Standard library support
A Tuple works in much the same way as an Array, and a Record can be manipulated with the Object static methods:
Object.keys(#{ a: 1, b: 2 }); // returns a regular array: ["a", "b"]
assert(#[1, 2, 3].map(x => x * 2) === #[2, 4, 6]);
Converting Objects and Arrays
Conversion can be done with Record() and Tuple.from():
const record = Record({ a: 1, b: 2, c: 3 });
const record2 = Record.fromEntries([#["a", 1], #["b", 2], #["c", 3]]); // note that an iterable will also work
const tuple = Tuple.from([1, 2, 3]); // note that an iterable will also work
assert(record === #{ a: 1, b: 2, c: 3 });
assert(tuple === #[1, 2, 3]);
Record({ a: {} }); // TypeError: Can't convert Object with a non-const value to Record
Tuple.from([{}, {}, {}]); // TypeError: Can't convert Iterable with a non-const value to Tuple
Note that Record() and Tuple.from() expect collections of primitives, Records, or Tuples; nested object references raise a TypeError.
Iteration protocol
Tuples, like arrays, are iterable.
const tuple = #[1, 2];
for (const o of tuple) { console.log(o); }
// output is:
// 1
// 2
Like objects, a Record is not directly iterable; it can be traversed with APIs such as Object.entries:
const record = #{ a: 1, b: 2 };
// TypeError: record is not iterable
for (const o of record) { console.log(o); }
// Object.entries can be used to iterate over Records, just like for Objects
for (const [key, value] of Object.entries(record)) { console.log(key) }
// output is:
// a
// b
JSON.stringify
- JSON.stringify(record) behaves the same as JSON.stringify of the equivalent object
- JSON.stringify(tuple) behaves the same as JSON.stringify of the equivalent array
JSON.parseImmutable
The proposal recommends adding a JSON.parseImmutable method so that Record/Tuple data can be extracted directly from a JSON string instead of Object/Array.
JSON.parseImmutable has the same signature as JSON.parse; the only difference is that it returns Records/Tuples.
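Until the proposal lands, a rough approximation can be built on JSON.parse's reviver, which visits values bottom-up; this sketch returns deeply frozen plain objects and arrays rather than true Records and Tuples:

```javascript
// Approximate JSON.parseImmutable with deeply frozen plain objects/arrays.
// The reviver runs bottom-up, so every nested value is frozen before its parent.
function parseImmutable(text) {
  return JSON.parse(text, (key, value) =>
    typeof value === 'object' && value !== null ? Object.freeze(value) : value
  );
}

const data = parseImmutable('{"a": {"b": [1, 2]}}');

console.log(Object.isFrozen(data));     // true
console.log(Object.isFrozen(data.a.b)); // true
```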
Tuple.prototype
Tuples support instance methods similar to those of Arrays, with some changes; the proposal lists all methods available on Tuple.prototype.
typeof
Records and Tuples will be identified as different types
assert(typeof #{ a: 1 } === "record");
assert(typeof #[1, 2] === "tuple");
Usage in Map, Set, WeakMap, and WeakSet
Record and Tuple can be used as Map keys or as Set values. When they are used, they are compared by value.
Records and Tuples cannot be used as WeakMap keys or WeakSet values, because they are not reference types and their lifecycles cannot be observed.
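This mirrors how primitives behave today: values without observable identity are rejected by weak collections, which is exactly the reasoning the proposal applies to Records and Tuples:

```javascript
const weakSet = new WeakSet();

// Primitives have no identity the GC can track, so they are rejected.
let threw = false;
try {
  weakSet.add(42);
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```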
Map
const record1 = #{ a: 1, b: 2 };
const record2 = #{ a: 1, b: 2 };
const map = new Map();
map.set(record1, true);
assert(map.get(record2));
Set
const record1 = #{ a: 1, b: 2 };
const record2 = #{ a: 1, b: 2 };
const set = new Set();
set.add(record1);
set.add(record2);
assert(set.size === 1);
WeakMap and WeakSet
const record = #{ a: 1, b: 2 };
const weakMap = new WeakMap();
const weakSet = new WeakSet();
// TypeError: Can't use a Record as the key in a WeakMap
weakMap.set(record, true);
// TypeError: Can't add a Record to a WeakSet
weakSet.add(record);
Why deep immutability?
Record and Tuple are defined as compound primitive types, so nothing inside them can be a reference type. This has some downsides (referencing objects becomes harder, though still possible), but it guarantees immutability and avoids several common errors.
const object = {
a: {
foo: "bar",
},
};
Object.freeze(object);
func(object);
// func is able to mutate object’s keys even if object is frozen
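A recursive deepFreeze helper is the usual workaround for plain objects (a common sketch; it does not handle Maps, Sets, or non-enumerable properties):

```javascript
// Recursively freeze an object and everything reachable from it.
// Checking isFrozen first also stops recursion on cyclic references.
function deepFreeze(obj) {
  if (obj === null || typeof obj !== 'object' || Object.isFrozen(obj)) {
    return obj;
  }
  Object.freeze(obj);
  for (const value of Object.values(obj)) {
    deepFreeze(value);
  }
  return obj;
}

const object = deepFreeze({ a: { foo: 'bar' } });

console.log(Object.isFrozen(object));   // true
console.log(Object.isFrozen(object.a)); // true -- nested objects frozen too
```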
In the example above, we tried to use Object.freeze for a stronger immutability guarantee, but since freeze is shallow, the a property can still be mutated. With Record and Tuple, this constraint is inherent:
const record = #{
a: #{
foo: "bar",
},
};
func(record);
// runtime guarantees that record is entirely unchanged
assert(record.a.foo === "bar");
Finally, deeply immutable data structures reduce the need for deep-copying objects to achieve immutability:
const clonedObject = JSON.parse(JSON.stringify(object));
func(clonedObject);
// now func can have side effects on clonedObject, object is untouched
// but at what cost?
assert(object.a.foo === "bar");
Deep Path Properties in Record Literals
Records sometimes contain deeply nested structures, and reusing and extending such data with object spreading is cumbersome and verbose. This proposal introduces new syntax for describing deeply nested structures more concisely and readably.
Example
const state1 = #{
  counters: #[
    #{ name: "Counter 1", value: 1 },
    #{ name: "Counter 2", value: 0 },
    #{ name: "Counter 3", value: 123 },
  ],
  metadata: #{
    lastUpdate: 1584382969000,
  },
};

const state2 = #{
  ...state1,
  counters[0].value: 2,
  counters[1].value: 1,
  metadata.lastUpdate: 1584383011300,
};

assert(state2.counters[0].value === 2);
assert(state2.counters[1].value === 1);
assert(state2.metadata.lastUpdate === 1584383011300);

// As expected, the unmodified values from "spreading" state1 remain in state2.
assert(state2.counters[2].value === 123);
Without this proposal's syntax, state2 could be created in other ways:
// With records/tuples and recursive usage of spread syntax
const state2 = #{
  ...state1,
  counters: #[
    #{
      ...state1.counters[0],
      value: 2,
    },
    #{
      ...state1.counters[1],
      value: 1,
    },
    ...state1.counters.slice(2),
  ],
  metadata: #{
    ...state1.metadata,
    lastUpdate: 1584383011300,
  },
};

// With Immer (and regular objects)
const state2 = Immer.produce(state1, draft => {
  draft.counters[0].value = 2;
  draft.counters[1].value = 1;
  draft.metadata.lastUpdate = 1584383011300;
});

// With Immutable.js (and regular objects)
const immutableState = Immutable.fromJS(state1);
const state2 = immutableState
  .setIn(["counters", 0, "value"], 2)
  .setIn(["counters", 1, "value"], 1)
  .setIn(["metadata", "lastUpdate"], 1584383011300);
A simple example
const rec = #{ a.b.c: 123 };
assert(rec === #{ a: #{ b: #{ c: 123 }}});
Deep paths with computed keys
const rec = #{ ["a"]["b"]["c"]: 123 }
assert(rec === #{ a: #{ b: #{ c: 123 }}});
The . operator can be mixed with computed properties:
const b = "b";
const rec = #{ ["a"][b].c: 123 }
assert(rec === #{ a: #{ b: #{ c: 123 }}});
Combining deep path properties with spreads:
const one = #{
  a: 1,
  b: #{
    c: #{
      d: 2,
      e: 3,
    }
  }
};

const two = #{
  b.c.d: 4,
  ...one,
};

assert(one.b.c.d === 2);
assert(two.b.c.d === 4);
Deep paths can also index into Tuples:
const one = #{
  a: 1,
  b: #{ c: #[2, 3, 4, #[5, 6]] },
};

const two = #{
  b.c[3][1]: 7,
  ...one,
};

assert(two.b.c === #[2, 3, 4, #[5, 7]]);
Points to be aware of
A TypeError is raised when the object being spread does not already contain the specified deep path:
const one = #{ a: #{} };

#{ ...one, a.b.c: "foo" }; // throws TypeError
#{ ...one, a.b[0]: "foo" }; // also throws TypeError
A TypeError is also raised if a deep path property attempts to set a non-numeric key on a Tuple:
const one = #{ a: #[1, 2, 3] };

#{ ...one, a.foo: 4 }; // throws TypeError
Proposal links
proposal-record-tuple
Deep Path Properties in Record Literals