Noun explanation

Web Worker: creates a multithreaded environment for JavaScript, allowing the main thread to create Worker threads and hand some tasks off to them. While the main thread runs, Worker threads run in the background without interfering with it; when a Worker thread finishes its computation, it returns the result to the main thread. The benefit is that when computationally intensive or high-latency tasks are taken on by Worker threads, the main thread (usually responsible for UI interaction) stays smooth and is not blocked or slowed down.

IndexedDB: as browser capabilities keep growing, more and more sites consider storing large amounts of data on the client side, reducing round trips to the server by reading data directly from local storage. Existing browser storage schemes are not suitable for large amounts of data: cookies are capped at about 4KB and are sent back to the server with every request; LocalStorage holds between 2.5MB and 10MB (depending on the browser), provides no search, and cannot create custom indexes. A new solution was needed, and this is the context in which IndexedDB was born.

Web Worker

We should pay attention to the following points when using a Web Worker:

Origin restriction: The script files assigned to Worker threads must be of the same origin as the main thread script files.

DOM restriction: unlike the main thread, the Worker thread's global object cannot read the DOM of the page the main thread belongs to, nor can it use the document, window, or parent objects. However, Worker threads do have access to the navigator and location objects.

Communication: the Worker thread and the main thread are not in the same context; they cannot communicate directly and must exchange data via messages.

Scripting limitation: Worker threads cannot execute the alert() and confirm() methods, but they can make AJAX requests using the XMLHttpRequest object.

File restriction: the Worker thread cannot read local files, that is, it cannot open the native file system (file://); the scripts it loads must come from the network.
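To make the messaging model above concrete, here is a minimal sketch; the file names and the heavySum function are illustrative, not from the original article:

```typescript
// worker.js (must be served from the same origin) would contain:
//   self.onmessage = (e) => { self.postMessage(heavySum(e.data)); };

// A CPU-heavy task the main thread should hand off to a worker:
export function heavySum(n: number): number {
  let total = 0;
  for (let i = 1; i <= n; i += 1) total += i;
  return total;
}

// Main thread: spawn the worker and communicate purely via messages.
// (Guarded so the sketch is a no-op outside the browser.)
const WorkerCtor = (globalThis as any).Worker;
if (WorkerCtor) {
  const worker = new WorkerCtor('worker.js'); // same-origin script only
  worker.postMessage(10_000_000);             // hand the task to the worker
  worker.onmessage = (e: any) => {            // receive the result
    console.log('sum from worker:', e.data);  // main thread stayed responsive
    worker.terminate();
  };
}
```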

IndexedDB

Key-value pair storage: IndexedDB uses object stores internally to hold data. All types of data can be stored directly, including JavaScript objects. Inside an object store, data is stored as key-value pairs. Each data record has a corresponding primary key; the primary key is unique and cannot be duplicated, otherwise an error is thrown.

Asynchronous: IndexedDB operations do not lock the browser, so the user can keep doing other things, in contrast to LocalStorage, which operates synchronously. The asynchronous design prevents massive reads and writes from slowing down the page.

Support for transactions: IndexedDB supports transactions, which means that if one of the steps fails, the entire transaction is cancelled and the database is rolled back to the state before the transaction occurred, without overwriting only a portion of the data.

Origin restriction: IndexedDB is subject to origin restriction, with each database corresponding to the domain name that created it. Web pages can only access databases under their own domain names, but not cross-domain databases.

Large storage space: IndexedDB has much more storage space than LocalStorage, usually no less than 250MB, and in some browsers effectively no upper limit.

Binary storage supported: IndexedDB can store binary data (ArrayBuffer objects and Blob objects) as well as strings.
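For contrast with the wrapper-based code later in the article, here is roughly what one put/get round trip looks like against the raw API — a sketch, assuming a database named demo_db and a common store keyed by stamp (mirroring the setup used later); the requestToPromise helper is my own addition:

```typescript
// Promise wrapper for an IDBRequest-like object (anything with
// onsuccess/onerror handlers and a `result` field), so requests can be awaited.
export function requestToPromise<T>(req: {
  result?: T;
  onsuccess: ((ev: unknown) => void) | null;
  onerror: ((ev: unknown) => void) | null;
}): Promise<T> {
  return new Promise((resolve, reject) => {
    req.onsuccess = () => resolve(req.result as T);
    req.onerror = () => reject(new Error('IndexedDB request failed'));
  });
}

// Browser-only portion (a no-op elsewhere):
const idb = (globalThis as any).indexedDB;
if (idb) {
  const openReq = idb.open('demo_db', 1);
  // Schema changes are only allowed inside onupgradeneeded.
  openReq.onupgradeneeded = () => {
    openReq.result.createObjectStore('common', { keyPath: 'stamp' });
  };
  openReq.onsuccess = async () => {
    const db = openReq.result;
    // Every read and write goes through an explicit transaction.
    const tx = db.transaction('common', 'readwrite');
    const store = tx.objectStore('common');
    store.put({ stamp: 'product', data: { name: 'pencil', price: 100 } });
    const record = await requestToPromise(store.get('product'));
    console.log(record);
    db.close();
  };
}
```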

How to improve application performance?

Here I recommend Dexie, a wrapper library for working with IndexedDB.

Why not use the native APIs? Here's a quote from MDN:

Note: The IndexedDB API is powerful, but can seem too complex for simple cases. If you prefer a simpler API, try libraries such as localForage, dexie.js, PouchDB, IDB, IDB-KeyVal, JsStore, or LoveField that make IndexedDB more developer-friendly.

When using IndexedDB as a local cache database, the native API offers far more functionality than we need; features like transactions are unnecessary for caching scenarios. So I prefer to reuse an existing wheel for the create, read, update, and delete operations, which greatly reduces both the amount of code and the complexity of the code logic.

Q: How to ensure data consistency between local and remote databases?

A: Personally, I prefer to cache only APIs whose data rarely changes in the local database.

If the API changes frequently, caching it into a local database is not recommended.

Q: What about data that does not change frequently, but does change occasionally?

A: You can render from the local database first, and then fetch the API and perform a comparison. If the comparison shows no diff, nothing is done.

If there is diff, we update the local database and rerender the page at the same time.
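The compare step can be as simple as a serialization check — a sketch; the hasDiff helper and the commented-out helper names are illustrative, not from the original code:

```typescript
// Cheap structural diff via JSON serialization. Assumes both sides were
// produced from the same API shape, so key order matches.
export function hasDiff<T>(local: T, remote: T): boolean {
  return JSON.stringify(local) !== JSON.stringify(remote);
}

// Sketch of the flow (updateCache/rerender are hypothetical helpers):
//   const remote = await fetch('/api/entities').then((r) => r.json());
//   if (hasDiff(cachedData, remote.data[0])) {
//     updateCache(remote.data[0]); // refresh the local database
//     rerender(remote.data[0]);    // update the page
//   }
```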

Q: But even though the page is displayed, doesn't the fetch still occupy the main thread and cause it to hang?

A: This is where our other protagonist, the Web Worker, strides in. We can have a Web Worker perform the fetch and the diff for us, keeping the main thread out of it entirely; the worker carries the bricks on our behalf. Wow…

Create a database with IndexedDB

// Instantiate a database named "dbname"
const db = new Dexie('dbname') as ILocalIndexDB;

// With Dexie we do not need to manage schema versions by hand; by default the
// framework handles them internally.
// Add a table named "common" to the database, with "stamp" as its primary key.
db.version(1).stores({
  common: 'stamp',
});

// Next, encapsulate our operations on the common table.
// Why? A cache database is not like a server-side database: we do not need data
// in a fixed format, and I prefer to cache data in a single table.
// This way we can fetch an entire cached data object by primary key, which
// performs better.
export const dbPut = <T>(data: T, stamp: string) => {
  db.common.put({ data, stamp });
};

export const dbGet = (stamp: string) => {
  return db.common.get(stamp);
};

// The database has been created.

const enum IndexedDBStampEnum {
  PRODUCT = 'product',
}

// Insert a record with primary key "product" into the table:
// created if the record does not exist, overwritten if it does.
await dbPut({ name: 'pencil', price: 100 }, IndexedDBStampEnum.PRODUCT);
// Equivalent to: select * from common where stamp = 'product'
const { data, stamp } = await dbGet(IndexedDBStampEnum.PRODUCT);

A database named “dbname” has been successfully created above.

A data table named "common" is created in the "dbname" database.

A record {name: “pencil”, price: 100} is inserted.

Open the console to see if the record exists in the database.

Create a Web Worker

// worker.js: export a worker function
export function entitiesWorker() {
  // What to do when a message arrives from the main thread
  self.onmessage = e => {
    // Request the API
    fetch(self.location.origin + '/api/entities', {
      method: 'GET',
    })
      .then(r => r.json())
      .then(r => {
        // Compare the cached data with the fresh response
        // (a deep comparison would also work)
        if (JSON.stringify(e.data) === JSON.stringify(r.data[0])) {
          // No update required
          self.postMessage({ isUpdate: false });
        } else {
          // Update required
          self.postMessage({ isUpdate: true, data: r.data });
        }
      });
  };
}

// Main thread:
// Since a Web Worker can only be loaded from a URL, and is subject to the
// same-origin policy, we need to process the worker.js file ourselves.
// transWorker is our tool for converting a function into a worker:
// it turns the function into an IIFE, wraps it in a Blob, and hands the
// Blob URL to the Worker constructor.
export const transWorker = (worker) => {
  const blob = new Blob(['(' + worker.toString() + ')()'], { type: 'application/javascript' });
  return new Worker(URL.createObjectURL(blob));
};

const apiWorker = transWorker(entitiesWorker);

// After the simple work above, our worker is ready.

The Web Worker communicates with IndexedDB

Having written the Web Worker and the IndexedDB layer above, we now combine the two: local data caching, with the data diff and update handled off the main thread.

const { data } = await dbGet(IndexedDBStampEnum.PRODUCT);
// If there is data in the local database
if (data) {
  // Hand the locally cached data to state
  this.setState({ data });
  // Start the worker thread
  const apiWorker = transWorker(entitiesWorker);
  // Our worker does not read IndexedDB itself, so the cached data is passed
  // to the worker thread via a message
  apiWorker.postMessage(data);
  // Wait for the worker to respond
  apiWorker.onmessage = e => {
    // If the data has a diff
    if (e.data.isUpdate) {
      // Update the database and rerender the page
      dbPut(e.data.data, IndexedDBStampEnum.PRODUCT);
      this.setState({ data: e.data.data });
    }
    // Close the thread
    apiWorker.terminate();
  };
} else {
  // If there is no local data, pull from the API, and don't forget to
  // update the IndexedDB cache at the API layer.
  await this.getState();
}

Now that the API is cached locally, let's see how much of a performance gain we get.

It can be seen that the loading time before optimization is about 3s, and most of that time is spent on the API request. Now let's look at the locally cached version.

Not only has the first screen accelerated, but the API loading time has shrunk to around 300-400ms, roughly a 10x performance improvement. In addition, because our API is fetched and compared asynchronously in a Web Worker, page performance is not hurt even when the data does get updated.

However, this method is still only suitable for requests where the data does not change frequently. Caching some requests locally can greatly improve performance and provide a better user experience.

All of the above was written by myself; if you quote it, please credit the source. Thank you.