For developers working in 3D WebGL, loading a large number of HDR, GLB, GLTF and other asset files is often a headache: the files are large and take a long time to download over the network, which hurts the user experience. For files of this size, the cache capacity of localStorage and sessionStorage is nowhere near enough, so this is where IndexedDB comes in.
IndexedDB is a way to store data persistently in the user's browser: it lets you store large amounts of data, provides a lookup interface, and supports indexes. Browser support for IndexedDB is also quite good; apart from some very old browsers, it is available essentially everywhere.
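If you need to guard against those older browsers, a quick runtime check is enough; this is just a minimal sketch:

```js
// Feature-detect IndexedDB before relying on it; very old browsers
// (and some locked-down environments) may not expose it.
if (!('indexedDB' in window)) {
  console.warn('IndexedDB is not available, falling back to network-only loading')
}
```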
IndexedDB is not a relational database (it does not support SQL queries). It is closer to a NoSQL database and can be thought of as a transactional key-value database on the front end, with a mostly asynchronous API. The raw IndexedDB API is fairly low level, so you can use one of the wrapper libraries built on top of it to simplify operations (a short comparison sketch follows the list below):
- localForage: A client-side data storage shim (polyfill) that provides a simple key-value syntax. It is built on IndexedDB and automatically falls back to WebSQL and localStorage in browsers that do not support IndexedDB.
- Dexie.js: A wrapper around IndexedDB that provides a friendlier, more concise syntax for faster development.
- ZangoDB: A MongoDB-like interface to IndexedDB that provides most of the familiar MongoDB features, such as filtering, projection, sorting, updating, and aggregation.
- JsStore: A wrapper around IndexedDB with a simple, SQL-like syntax.
For a general introduction and basic usage of IndexedDB, see MDN, the "Introduction to the Browser Database IndexedDB" tutorial, and the "HTML5 IndexedDB Front-End Local Storage Database" example tutorial; I won't cover that here.
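To give a feel for what these wrappers save you, here is a minimal sketch that stores and reads back one value, comparing the raw IndexedDB API with localForage. The database name, store name, and key are placeholders for illustration:

```js
// Raw IndexedDB: open a database, create an object store, then write inside a transaction.
const request = indexedDB.open('asset-cache', 1)
request.onupgradeneeded = () => request.result.createObjectStore('files')
request.onsuccess = () => {
  const db = request.result
  const tx = db.transaction('files', 'readwrite')
  tx.objectStore('files').put('hello', 'greeting') // value first, then key
  tx.oncomplete = () => console.log('saved with raw IndexedDB')
}

// The same thing with localForage: a single promise-based call.
localforage.setItem('greeting', 'hello').then(() => console.log('saved with localForage'))
```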
IndexedDB capacity
What is the capacity of IndexedDB? Take Chrome as an example: before Chrome 67 it was 50% of the disk space, and since Chrome 67 the rules are as follows.
- In Chrome normal mode
If the "should remain available" threshold is hit, the quota for an origin (site) will be zero. "Should remain available" refers to the amount of free space the browser tries to keep on the disk; starting with Chrome 67 it is the lesser of 2 GB and 10% of the total disk capacity. Once this limit is reached, further writes to temporary storage will fail, but existing data in temporary storage will not be deleted.
If the "should remain available" threshold has not been reached, the quota for an origin is 20% of the shared pool. The shared pool is the size of all data Chrome has already saved in temporary storage, plus the amount of additional data Chrome could still save without hitting the "should remain available" threshold.
For example, if I have a 256 GB hard drive, the "should remain available" value is min(2 GB, 25.6 GB) = 2 GB, so the browser's temporary storage pool is 254 GB. If 4 GB of temporary storage has already been used at this point, an origin's IndexedDB quota is roughly 20% × 254 GB ≈ 50 GB.
We can see the corresponding description in the Chromium source code and the Chrome developer documentation:
// The amount of the device's storage the browser attempts to
// keep free. If there is less than this amount of storage free
// on the device, Chrome will grant 0 quota to origins.
//
// Prior to M66, this was 10% of total storage instead of a fixed value on
// all devices. Now the minimum of a fixed value (2GB) and 10% is used to
// limit the reserve on devices with plenty of storage, but scale down for
// devices with extremely limited storage.
// * 1TB storage -- min(100GB,2GB) = 2GB
// * 500GB storage -- min(50GB,2GB) = 2GB
// * 64GB storage -- min(6GB,2GB) = 2GB
// * 16GB storage -- min(1.6GB,2GB) = 1.6GB
// * 8GB storage -- min(800MB,2GB) = 800MB
const int64_t kShouldRemainAvailableFixed = 2048 * kMBytes; // 2GB
const double kShouldRemainAvailableRatio = 0.1;  // 10%
// The amount of the device's storage the browser attempts to
// keep free at all costs. Data will be aggressively evicted.
//
// Prior to M66, this was 1% of total storage instead of a fixed value on
// all devices. Now the minimum of a fixed value (1GB) and 1% is used to
// limit the reserve on devices with plenty of storage, but scale down for
// devices with extremely limited storage.
// * 1TB storage -- min(10GB,1GB) = 1GB
// * 500GB storage -- min(5GB,1GB) = 1GB
// * 64GB storage -- min(640MB,1GB) = 640MB
// * 16GB storage -- min(160MB,1GB) = 160MB
// * 8GB storage -- min(80MB,1GB) = 80MB
const int64_t kMustRemainAvailableFixed = 1024 * kMBytes; // 1GB
const double kMustRemainAvailableRatio = 0.01;  // 1%
- In Chrome incognito mode
Fixed size of 100MB
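If you want to know what quota the browser actually grants your origin (rather than computing it by hand), modern Chrome exposes it through the Storage API. A minimal sketch:

```js
// Ask the browser for this origin's quota and current usage, in bytes.
// The numbers reflect the rules described above (and are much smaller in incognito mode).
if (navigator.storage && navigator.storage.estimate) {
  navigator.storage.estimate().then(({ usage, quota }) => {
    console.log(`used ${(usage / 2 ** 30).toFixed(2)} GB of ${(quota / 2 ** 30).toFixed(2)} GB`)
  })
}
```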
Large file storage in IndexedDB
IndexedDB can store not only strings but also binary data (ArrayBuffer objects and Blobs), so you can store images or 3D model files in IndexedDB as Blobs and eliminate the network request time on the second load.
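For a 3D scene this typically takes the form of a cache-first loader: check IndexedDB (via localForage here) and only go to the network on a miss. This is just a sketch; the function name and the file name 'scene.glb' are placeholders:

```js
// Cache-first loader: return the Blob from IndexedDB if present,
// otherwise fetch it once, store it, and return it.
async function loadAssetBlob(url) {
  const cached = await localforage.getItem(url)
  if (cached instanceof Blob) {
    return cached // second visit: no network request at all
  }
  const response = await fetch(url)
  const blob = await response.blob()
  await localforage.setItem(url, blob) // persist for the next visit
  return blob
}

// Usage: hand the Blob (or an object URL created from it) to your GLTF loader.
loadAssetBlob('scene.glb').then(blob => console.log('model size:', blob.size))
```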
For example, I have a 20 MB image on a static server. I request the image in the browser and store it in IndexedDB 500 times under different keys, which amounts to cramming about 10 GB of Blobs into IndexedDB. localForage is used to simplify the IndexedDB operations, and axios's responseType is set to 'blob' so that the requested image arrives as a Blob object:
// assumes `axios` and `localforage` are already imported in this component
axios({
  url: 'DSC06753-HDR-2.jpg',
  method: 'get',
  responseType: 'blob'
}).then(response => {
  const result = response.data // the image as a Blob
  this.start = new Date().getTime()
  console.log(this.start, 'start save')
  console.log('is Blob', result instanceof Blob, 'result')
  const number = 500 // number of copies to store
  const setItemArray = []
  const getItemArray = []
  for (let i = 0; i < number; i++) {
    setItemArray.push(localforage.setItem(`img${i}`, result))
  }
  // Store 500 times
  Promise.all(setItemArray).then(() => {
    this.save = new Date().getTime()
    console.log(this.save - this.start, 'total save time (save time - start time)')
    // Read the 500 images back
    for (let j = 0; j < number; j++) {
      getItemArray.push(localforage.getItem(`img${j}`))
    }
    Promise.all(getItemArray).then(value => {
      console.log(new Date().getTime() - this.start, 'read time (read time - start time)')
      console.log(new Date().getTime() - this.save, 'read time (read time - save time)')
      console.log(value[value.length - 1] instanceof Blob, 'get')
      // Turn the Blob into an object URL so the image can be shown on the page
      const URL = window.URL || window.webkitURL
      this.src = URL.createObjectURL(value[value.length - 1])
    })
  })
})
In Chrome DevTools, we can clearly see that all the images are stored in IndexedDB as Blob objects:
Take another look at the data read and store speeds:
It took 59.345 s to store the 10 GB of data and 152 ms to read it back (measured on an i7 MacBook Pro with 16 GB of RAM and a 256 GB SSD).
To sum up, IndexedDB is fully capable of storing large files. IndexedDB can also be used in workers, including Web Workers and Service Workers: when a 3D scene needs to perform heavy computation, you can have a Service Worker store data in IndexedDB, or have a Web Worker read data out of IndexedDB for multithreaded computation. Note that IndexedDB follows the same-origin policy, so you can only access data stored under the same origin, not data stored by other origins.
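As an illustration of the worker point, here is a minimal sketch of a Web Worker that reads a cached entry out of IndexedDB and processes it off the main thread. The file name worker.js, the CDN URL, and the key 'img0' are placeholders for this example:

```js
// worker.js -- runs in a Web Worker; the same-origin IndexedDB database
// (and therefore localForage) is accessible here as well.
importScripts('https://cdn.jsdelivr.net/npm/localforage/dist/localforage.min.js')

self.onmessage = async ({ data: key }) => {
  const blob = await localforage.getItem(key) // read from IndexedDB
  // ...heavy processing of the Blob would go here...
  self.postMessage({ key, size: blob ? blob.size : 0 })
}

// On the main thread:
//   const worker = new Worker('worker.js')
//   worker.onmessage = e => console.log('processed', e.data)
//   worker.postMessage('img0')
```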