Background

For lightweight storage on Android there is SharedPreferences, which most people are familiar with; there is also MMKV, a high-performance component based on mmap whose underlying serialization/deserialization is implemented with Protobuf, giving it high performance and strong stability; and there is Jetpack DataStore, a data storage solution that allows you to store key-value pairs or typed objects using protocol buffers. DataStore uses Kotlin coroutines and Flow to store data in an asynchronous, consistent, transactional manner. This article analyzes the ins and outs of these three solutions one by one, with in-depth source code analysis. (The source code in this article is based on API level 29.)

SharedPreferences

SharedPreferences is an easy-to-use, lightweight storage solution for storing app information. It is essentially an XML file that stores data as key-value pairs. The file path is /data/data/<package name>/shared_prefs, and the file content looks like this:


      
<map>
    <string name="pref.device.id">8207e635-bd88-4220-9fc6-59c5e367ad82</string>
    <string name="pref.contact.chat">Let's chat(test)</string>
    <boolean name="pref.user.birthday.modifiable" value="true" />
    <int name="pref.user.birthday.day" value="0" />
    <string name="pref.contact.phone">123-456-8888</string>
    <string name="pref.user.phone"></string>
    <int name="pref.user.birthday.year" value="0" />
    <boolean name="pref.is.login" value="true" />
    <boolean name="pref.first_launch" value="false" />
</map>

Each time data is read, the value corresponding to the specified key is obtained by parsing the XML file.

SharedPreferences are designed for lightweight storage. What happens to memory if we store a lot of data?

Preliminary source code analysis

Now let's look at the design of the SharedPreferences source code. A typical call to Context.getSharedPreferences(name, mode) eventually ends up here:

//ContextImpl.java
@Override
public SharedPreferences getSharedPreferences(String name, int mode) {
    ...
    File file;
    synchronized (ContextImpl.class) {
        if (mSharedPrefsPaths == null) { // Records all SP files: file name as key, File as value
            mSharedPrefsPaths = new ArrayMap<>();
        }
        file = mSharedPrefsPaths.get(name);
        if (file == null) {
            file = getSharedPreferencesPath(name);
            mSharedPrefsPaths.put(name, file);
        }
    }
    return getSharedPreferences(file, mode);
}

Now look at the getSharedPreferences(File, int) overload:

// ContextImpl#getSharedPreferences(File, int)
@Override
public SharedPreferences getSharedPreferences(File file, int mode) {
    SharedPreferencesImpl sp;
    synchronized (ContextImpl.class) {
        final ArrayMap<File, SharedPreferencesImpl> cache = getSharedPreferencesCacheLocked();
        sp = cache.get(file);
        if (sp == null) {
            checkMode(mode);
            ...
            // The real implementation class of SharedPreferences is SharedPreferencesImpl
            sp = new SharedPreferencesImpl(file, mode);
            cache.put(file, sp);
            return sp;
        }
    }
    ...
    return sp;
}

SharedPreferences is an interface; its real implementation class is SharedPreferencesImpl:

final class SharedPreferencesImpl implements SharedPreferences {
    @UnsupportedAppUsage
    private final File mFile;        // The corresponding XML file
    private final File mBackupFile;
    private Map<String, Object> mMap; // Caches all key/value pairs in the XML file
    ...
    @UnsupportedAppUsage
    SharedPreferencesImpl(File file, int mode) {
        mFile = file;
        mBackupFile = makeBackupFile(file);
        mMode = mode;
        mLoaded = false;
        mMap = null;
        mThrowable = null;
        startLoadFromDisk(); // Start a thread to load the XML file content
    }
}

Each time the SharedPreferencesImpl constructor runs, startLoadFromDisk is called, which starts a child thread to load the XML file; eventually the entire content of the XML is loaded into mMap:

map = (Map<String, Object>) XmlUtils.readMapXml(str);
Memory footprint

From the above analysis it is clear that when the XML data is large, the memory footprint is bound to be high: Context.getSharedPreferences(name, mode) loads all of the XML data into mMap, trading space for time. At the same time, look at ContextImpl.getSharedPreferencesCacheLocked:

private static ArrayMap<String, ArrayMap<File, SharedPreferencesImpl>> sSharedPrefsCache; // static

@GuardedBy("ContextImpl.class")
private ArrayMap<File, SharedPreferencesImpl> getSharedPreferencesCacheLocked() {
    if (sSharedPrefsCache == null) {
        sSharedPrefsCache = new ArrayMap<>();
    }

    final String packageName = getPackageName();
    ArrayMap<File, SharedPreferencesImpl> packagePrefs = sSharedPrefsCache.get(packageName);
    if (packagePrefs == null) {
        packagePrefs = new ArrayMap<>();
        sSharedPrefsCache.put(packageName, packagePrefs);
    }

    return packagePrefs;
}

As you can see, the static sSharedPrefsCache holds every SP that has been opened, and each SharedPreferencesImpl in turn holds all of its key-value pairs. In other words, once an SP has been used it stays in memory.

Developers should also keep the size of each SP (XML) file under control; after all, file size affects read and write operations as well. How files are split can be decided based on the corresponding business.

But SharedPreferences is designed for lightweight data storage, so there is nothing wrong with the design itself; developers just need to pay attention to their own scenarios. After all, no single design can handle every scenario.

The first call may block the main thread

Let's take a look at the SharedPreferencesImpl.getString() method:

@Override
@Nullable
public String getString(String key, @Nullable String defValue) {
    synchronized (mLock) {
        awaitLoadedLocked();
        String v = (String) mMap.get(key);
        return v != null ? v : defValue;
    }
}

@GuardedBy("mLock")
private void awaitLoadedLocked() {
    ...
    while (!mLoaded) {
        try {
            mLock.wait();
        } catch (InterruptedException unused) {
        }
    }
    ...
}

As you can see, if the SP has not finished loading, the calling thread (often the main thread) blocks in awaitLoadedLocked until the child thread that loads the SP completes. To mitigate this, we can call getSharedPreferences ahead of time so that the child thread loads the SP content in advance.
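A minimal sketch of this preloading idea, assuming a custom Application class and an SP file named "config" (both names are hypothetical, for illustration only):

import android.app.Application
import android.content.Context

// Hypothetical example: warm up the SP early so the first real read does not
// have to wait for the loading thread.
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // getSharedPreferences() only kicks off the background load of the XML file;
        // it returns immediately and does not block here.
        getSharedPreferences("config", Context.MODE_PRIVATE)
    }
}

// Later (e.g. in the first Activity) the data is most likely already in mMap,
// so a getString()/getInt() call no longer has to wait in awaitLoadedLocked().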

Prevent multiple edit/commit/apply sequences
SharedPreferences sp = getSharedPreferences("jackie", MODE_PRIVATE);
sp.edit().putString("a", "ljc").commit();
sp.edit().putString("b", "cxy").commit();
sp.edit().putString("c", "lsm").apply();
sp.edit().putString("c", "dmn").apply();

Each call to edit() creates a new Editor object, causing additional memory overhead. Many developers wrap SharedPreferences and hide the edit() and commit()/apply() calls, ignoring the design philosophy and intended usage of edit()/commit()/apply(). When several values change together, call commit()/apply() once after multiple putXxx() calls, i.e. update multiple key-value pairs in a single transaction so that only one IO operation is performed.
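A minimal sketch of the batched style (the file and key names are made up for illustration):

import android.content.Context

// Hypothetical example: update several keys in one transaction instead of
// calling edit()...commit()/apply() once per key.
fun saveUserProfile(context: Context, name: String, age: Int, vip: Boolean) {
    val sp = context.getSharedPreferences("user_profile", Context.MODE_PRIVATE)
    sp.edit()
        .putString("name", name)
        .putInt("age", age)
        .putBoolean("vip", vip)
        .apply() // one Editor and one write task instead of three
}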

ANR problems caused by commit/apply

commit synchronously writes to disk and has a return value indicating whether the modification succeeded; if it is called on the main thread, the thread is blocked, which affects subsequent operations and may cause an ANR. apply commits the modification to memory and then asynchronously writes it to disk, with no return value. Let's take a closer look at why apply can cause ANR problems, starting with its source code:

@Override
public void apply() {
    final long startTime = System.currentTimeMillis();

    final MemoryCommitResult mcr = commitToMemory();
    final Runnable awaitCommit = new Runnable() {
            @Override
            public void run() {
                try {
                    mcr.writtenToDiskLatch.await(); // Wait for the disk write to finish
                } catch (InterruptedException ignored) {
                }
                ...
            }
        };

    QueuedWork.addFinisher(awaitCommit); // Add to the finisher queue

    Runnable postWriteRunnable = new Runnable() {
            @Override
            public void run() {
                awaitCommit.run();
                QueuedWork.removeFinisher(awaitCommit);
            }
        };

    SharedPreferencesImpl.this.enqueueDiskWrite(mcr, postWriteRunnable);
    notifyListeners(mcr);
}

First, the awaitCommit runnable that waits on the latch is added to the QueuedWork finisher queue. The postWriteRunnable write task is then handed via enqueueDiskWrite to a HandlerThread (Handler + Thread), where pending tasks are queued and executed one by one. Now go to ActivityThread.handleStopActivity and you will see the following code:

// Make sure any pending writes are now committed.
if (!r.isPreHoneycomb()) {
    QueuedWork.waitToFinish();
}

Let’s take a look at the source code for waitToFinish

/*
 * Is called from the Activity base class's onPause(), after BroadcastReceiver's onReceive,
 * after Service command handling, etc. (so async work is never lost)
 */ // This comment is important
public static void waitToFinish() {
    ...
    try {
        while (true) {
            Runnable finisher;

            synchronized (sLock) {
                finisher = sFinishers.poll();
            }

            if (finisher == null) {
                break;
            }

            finisher.run(); // Key: equivalent to calling mcr.writtenToDiskLatch.await()
        }
    } finally {
        sCanDelay = true;
    }
}

Remember QueuedWork.addFinisher(awaitCommit): awaitCommit is the runnable that waits for the write thread. If the app calls apply too often, there will be many pending writes in the queue, and with only a single thread doing the writing, heavy read/write load easily leads to ANR. (Prior to Android 8.0, the implementation of QueuedWork.waitToFinish was flawed: in several lifecycle methods the main thread waits for the whole task queue to finish, and because of CPU scheduling the thread that drains the queue is not necessarily running; when the apply tasks take long, waiting for all of them to complete costs a lot of time and is very likely to cause an ANR.) The source code in this article is based on API 29, and on Android 8.0 and later this problem is largely gone: 8.0 optimized waitToFinish heavily so that it actively triggers processPendingWork to take the list of pending write tasks and execute them in sequence, instead of just waiting. There is an even more important optimization:

We know that when apply is called, the change is first committed synchronously to the in-memory map and a disk write task is added to the queue; the worker thread takes write tasks from the queue and executes them in sequence. Note that both the memory write and the disk write are full writes of the SP's entire XML file, so there is room for optimization: for the same SP file, calling apply n times in a row schedules n full disk writes, when in fact only the last one needs to run. This is exactly the optimization Android 8.0 added, gated by a version check.

The solution

One workaround is to reflect into ActivityThread's H (Handler) field and set a Callback on it; Handler.dispatchMessage processes the callback first, and in that callback the waiting queue in QueuedWork is cleared, again via reflection. The reason Google calls waitToFinish before onStop of an Activity/Service is to ensure the SP data is persisted as far as possible; the referenced article also compares the write failure rate with and without clearing the queue, and the difference is small.

Another solution: since SharedPreferences is an interface, you can implement apply yourself (asynchronously calling the system commit instead of blocking like the system apply) and override getSharedPreferences in both your Activity and Application to return your implementation directly. However, this scheme has more obvious side effects than clearing the waiting queue: the system apply updates the cache synchronously first and then writes the file asynchronously, so a caller that reads and writes on the same thread never has to worry about the consistency of its own data; a commit issued from a child thread updates the cache and writes the file on that thread, so the caller has to handle the thread switch itself, and the asynchrony may make reads and writes inconsistent. So the first option is still recommended.
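A rough sketch of that second idea using Kotlin interface delegation; it only illustrates the shape of the workaround (and inherits the caveats above about readers racing with the background commit), and is not the system's or any library's implementation:

import android.content.SharedPreferences
import java.util.concurrent.Executors

// Hypothetical wrapper: apply() is turned into a commit() on a background
// thread, so QueuedWork never receives a finisher for the main thread to wait on.
class NoWaitSharedPreferences(
    private val delegate: SharedPreferences
) : SharedPreferences by delegate {

    private val executor = Executors.newSingleThreadExecutor()

    override fun edit(): SharedPreferences.Editor = NoWaitEditor(delegate.edit())

    private inner class NoWaitEditor(
        private val editor: SharedPreferences.Editor
    ) : SharedPreferences.Editor by editor {
        override fun apply() {
            executor.execute { editor.commit() } // asynchronous commit instead of the system apply()
        }
    }
}

// In Application (and Activity) you would then override getSharedPreferences
// to return NoWaitSharedPreferences(super.getSharedPreferences(name, mode)).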

Security mechanism

Security mechanisms can be divided into thread safety, process safety and file backup mechanism.

SharedPreferences is thread-safe by locking, which I won’t go into here. The SharedPreferences class does not support process security.

 *
 * <p><em>Note: This class does not support use across multiple processes.</em>
 *

SharedPreferences does provide the MODE_MULTI_PROCESS flag for cross-process use, but all it guarantees (as on systems prior to API 11) is that when an SP that has already been loaded into memory is fetched again with this flag set, the file is re-read from disk. That's all!

@Override
public SharedPreferences getSharedPreferences(File file, int mode) {
    SharedPreferencesImpl sp;
    ...
    if ((mode & Context.MODE_MULTI_PROCESS) != 0 ||
        getApplicationInfo().targetSdkVersion < android.os.Build.VERSION_CODES.HONEYCOMB) {
        // If somebody else (some other process) changed the prefs
        // file behind our back, we reload it. This has been the
        // historical (if undocumented) behavior.
        sp.startReloadIfChangedUnexpectedly();
    }
    return sp;
}

So cross-process communication in SharedPreferences is not reliable at all! To ensure process security, you can use a ContentProvider for unified access, or use file locks.
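For the file-lock approach, the idea is roughly the following (a simplified sketch of using java.nio file locks around your own data file, not how SharedPreferences itself works):

import java.io.File
import java.io.RandomAccessFile

// Hypothetical helper: take an exclusive cross-process lock on a companion
// ".lock" file around a read-modify-write of the real data file.
fun <T> withFileLock(dataFile: File, block: () -> T): T {
    val lockFile = File(dataFile.parent, dataFile.name + ".lock")
    return RandomAccessFile(lockFile, "rw").use { raf ->
        val lock = raf.channel.lock() // blocks until no other process holds the lock
        try {
            block()
        } finally {
            lock.release()
        }
    }
}

Every process that touches the file must go through the same helper, otherwise the lock protects nothing.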

Finally, let's look at the file backup mechanism. While a program is running, unexpected situations such as a crash or power failure may occur, so keeping the file intact and valid is important. The Android file system itself has some protection, but data loss and file corruption can still happen, so backing up the file is crucial. Follow SharedPreferencesImpl commit() -> enqueueDiskWrite() -> writeToFile():

@GuardedBy("mWritingToDiskLock")
    private void writeToFile(MemoryCommitResult mcr, boolean isFromSyncCommit) {...// Try to write to the file
            if(! backupFileExists) {if(! mFile.renameTo(mBackupFile)) {// Name the original file as the backup file
                    Log.e(TAG, "Couldn't rename file " + mFile
                          + " to backup file " + mBackupFile);
                    mcr.setDiskWriteResult(false.false);
                    return; }}else {
                mFile.delete();
            }
            // Writing was successful, delete the backup file if there is one.
          	// The backup file is deletedmBackupFile.delete(); ...}Copy the code

Before writing, the original file is renamed to the backup file; the backup is only deleted after the new file has been written successfully. Now look back at the loadFromDisk method mentioned earlier:

private void loadFromDisk() {
    synchronized (mLock) {
        if (mLoaded) {
            return;
        }
        if (mBackupFile.exists()) {
            mFile.delete();
            mBackupFile.renameTo(mFile);
        }
        ...
    }
    ...
}

If a write fails because of an exception (for example, the process is killed), the backup file still exists; on the next startup, when the backup file is found, it is renamed back to the source file and the incomplete file is discarded.

Summary

Above we analyzed SharedPreferences' memory usage and how it can block the main thread, the correct application scenarios and appropriate calling patterns, the ANR problem, and finally its safety mechanisms: thread safety, process safety (none), and the file backup mechanism.

To sum up, used incorrectly SharedPreferences can lead to a high memory footprint, ANR, and a lack of process safety; used correctly, in the right scenarios, it remains a perfectly serviceable lightweight store.

MMKV

MMKV is a key-value component developed by Tencent based on mmap memory mapping, with serialization/deserialization implemented with Protobuf; it offers high performance, strong stability, and multi-process support. It has been used in WeChat since mid-2015, and its performance and stability have been verified over time. It has since been ported to Android / macOS / Win32 / POSIX platforms and open-sourced.

MMKV was originally created to fix crashes caused by special characters in WeChat text. While solving that problem, some counters had to be persisted (because a crash could happen at any moment), so a general-purpose key-value component with very high performance was needed. SharedPreferences, NSUserDefaults, SQLite and other common components could not meet those requirements. The main requirement of the anti-crash solution was real-time writing, and mmap memory-mapped files happen to meet exactly that requirement.

Usage

First, import the dependency:

dependencies {
    implementation 'com.tencent:mmkv-static:1.2.7'
    // replace "1.2.7" with any available version
}

MMKV is very simple to use, and all changes take effect immediately without calling sync or apply. Initialize MMKV when the app starts and set MMKV's root directory (files/mmkv/), for example in the Application:

public void onCreate() {
    super.onCreate();

    String rootDir = MMKV.initialize(this);
    System.out.println("mmkv root: " + rootDir);
    // ...
}

If different business modules need separate storage, they can create their own instances:

MMKV kv = MMKV.mmkvWithID("MyID");
kv.encode("bool", true);

If the data needs to be accessed from multiple processes, add the flag MMKV.MULTI_PROCESS_MODE when creating the instance:

MMKV kv = MMKV.mmkvWithID("InterProcessKV", MMKV.MULTI_PROCESS_MODE);
kv.encode("bool", true);

MMKV provides a global instance that can be used directly:

import com.tencent.mmkv.MMKV;
// ...

MMKV kv = MMKV.defaultMMKV();

kv.encode("bool", true);
boolean bValue = kv.decodeBool("bool");

kv.encode("int", Integer.MIN_VALUE);
int iValue = kv.decodeInt("int");

kv.encode("string", "Hello from mmkv");
String str = kv.decodeString("string");
Supported data types
  • The following Java primitive types are supported:
    • boolean, int, long, float, double, byte[]
  • The following Java classes and containers are supported:
    • String, Set<String>
    • Any type that implements Parcelable

SharedPreferences migration

  • MMKV provides an importFromSharedPreferences() function to easily migrate data over.
  • MMKV additionally implements the SharedPreferences and SharedPreferences.Editor interfaces, so migration takes only two or three lines of code and the rest of the CRUD calls stay unchanged.
private void testImportSharedPreferences() {
    //SharedPreferences preferences = getSharedPreferences("myData", MODE_PRIVATE);
    MMKV preferences = MMKV.mmkvWithID("myData");
    // Migrate old data
    {
        SharedPreferences old_man = getSharedPreferences("myData", MODE_PRIVATE);
        preferences.importFromSharedPreferences(old_man);
        old_man.edit().clear().commit();
    }
    // The same usage as before
    SharedPreferences.Editor editor = preferences.edit(); // note: this is preferences.edit()
    editor.putBoolean("bool", true);
    editor.putInt("int", Integer.MIN_VALUE);
    editor.putLong("long", Long.MAX_VALUE);
    editor.putFloat("float", -3.14f);
    editor.putString("string", "hello, imported");
    HashSet<String> set = new HashSet<String>();
    set.add("W"); set.add("e"); set.add("C"); set.add("h"); set.add("a"); set.add("t");
    editor.putStringSet("string-set", set);
    // No need to call commit()
    //editor.commit();
}

You can see that the code still calls preferences.edit(); MMKV has been very considerate here, and the migration cost is very low. What are you waiting for?

Mmap principle

mmap is a method of memory-mapping files. A file (or other object) is mapped into a process's address space, establishing a one-to-one mapping between the file's location on disk and a range of virtual addresses in the process's virtual address space. Once the mapping is established, the process can read and write that memory through ordinary pointers, and the system automatically writes dirty pages back to the corresponding file on disk; file operations are completed without calling read/write system calls. Conversely, changes made by kernel space to this region are directly reflected in user space, which also allows file sharing between different processes.

On virtual (address) space versus virtual memory: forget the term virtual memory, which is mostly a marketing concept and not useful in development. In development there is only the concept of a virtual address space: the space made up of all the addresses a process can see. A virtual space is the process's remapping of all the physical addresses assigned (or to be assigned) to it. What mmap does, at the application level, is let you access a section of a file as if it were memory.

The mmap memory-mapped file provides a block of memory that can be written at any time. The app just writes data into it, and the operating system is responsible for flushing that memory back to the file; there is no need to worry about data loss caused by a crash.
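At the Java/Kotlin level the same mechanism is exposed through java.nio; here is a minimal sketch of what "writing a file through memory" looks like (an illustration of mmap, not MMKV's actual code; the path and size are arbitrary):

import java.io.RandomAccessFile
import java.nio.channels.FileChannel

// Hypothetical example: map a 4 KB region of a file and write bytes into it.
// The bytes land in the page cache immediately and the kernel flushes the dirty
// pages back to disk, so the data survives even if the process is killed right after.
fun mmapWriteDemo(path: String) {
    RandomAccessFile(path, "rw").use { raf ->
        val buffer = raf.channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096)
        buffer.put("hello mmap".toByteArray()) // a plain memory write, no write() syscall
        // buffer.force() would flush synchronously; usually unnecessary
    }
}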

Why Protobuf

For data serialization MMKV uses the Protobuf protocol, which performs well in both speed and space. Protocol Buffers, often referred to as Protobuf, is a protocol developed by Google for serializing and deserializing structured data. It is not just a message format but a set of rules and tools for defining and exchanging these messages. Google developed it to provide a better way to communicate between systems than XML; it even surpasses JSON with better performance, better maintainability, and smaller size.

But it also has disadvantages: the binary format is not human-readable, and maintenance costs are higher. For choosing a serialization format, see the articles referenced at the end.

Incremental update mechanism

Standard Protobuf does not provide incremental update capability; every write must be a full write. Considering that the main usage scenario is frequent writes and updates, we need incremental updates: serialize the changed key-value pair and append it directly to the end of the memory region. This means there may be several old and new copies of the same key, with the latest data at the end, so when the program starts and opens MMKV for the first time, reading from beginning to end and letting later values replace earlier ones guarantees the data is up to date and valid.

Using append-only incremental updates brings a new problem: the file size can grow out of control. For example, constantly updating the same key may consume hundreds of MB or even GB of space, even though the whole KV file holds only one key that would fit in less than 1 KB. This is clearly undesirable, so a compromise between performance and space is needed: space is allocated in units of the memory page size, and writes are appended until the space runs out; when the append reaches the end of the file, the file is reorganized, keys are deduplicated, and the rewritten result is serialized and saved; if there is still not enough space after the rewrite, the file size is doubled until there is enough space. (A rough sketch of this policy follows.)
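The space-management policy can be summarized with a sketch like this (my own pseudocode of the strategy just described, not MMKV source):

// Pseudocode of the append / rewrite / grow strategy described above.
class AppendLog(private var capacity: Int /* starts at one memory page */) {
    private val latest = LinkedHashMap<String, ByteArray>() // newest value per key
    private var usedBytes = 0

    fun put(key: String, value: ByteArray) {
        val entrySize = key.length + value.size
        if (usedBytes + entrySize > capacity) {
            // 1. Full rewrite: drop the stale appended copies, keep only the newest value per key
            usedBytes = latest.entries.sumOf { it.key.length + it.value.size }
            // 2. Still not enough room? Double the capacity until the new entry fits
            while (usedBytes + entrySize > capacity) {
                capacity *= 2
            }
        }
        latest[key] = value      // the in-memory view always holds the newest value
        usedBytes += entrySize   // the appended copy consumes space in the file
    }
}

The real implementation of course also has to re-serialize and rewrite the mmap'ed file in step 1; the sketch only tracks the bookkeeping.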

Multi-process design and implementation

Let's first recall the problem MMKV was designed to solve: the main requirement is real-time writing that is fast enough, with high performance. For cross-process communication, let's look at what's available. For a C/S architecture there is the ContentProvider, but the problem is obvious: slow startup and slow access are the pain points of Binder-based C/S components on Android. Sockets, pipes, and message queues are slower still because they require at least two memory copies.

MMKV is after the ultimate access speed, so interprocess communication should be avoided as much as possible, and a C/S architecture is undesirable. Since MMKV's underlying implementation uses mmap, a decentralized architecture is a natural choice: each accessing process simply mmaps the file into its own address space, and with appropriate interprocess locks and data synchronization, concurrent multi-process access is achieved.

The performance comparison
  • Single-process performance: MMKV far exceeds SharedPreferences and SQLite in write performance, and is similar or better in read performance.

  • Multi-process performance: MMKV far exceeds MultiProcessSharedPreferences & SQLite in both write and read performance; MMKV is the best choice for an Android multi-process key-value storage component.

(The test machine was Huawei Mate 20 Pro 128G and Android 10. The operation was repeated 1K times for each group, and the time unit was ms.)

Summary

MMKV solves the problem that SharedPreferences cannot be used directly across processes, although for SharedPreferences that can also be worked around with a ContentProvider or file locks. Personally, I see two main advantages of MMKV: SharedPreferences can make Activity/Service lifecycle methods wait in waitToFinish() and cause ANR, whereas MMKV cannot; the other is real-time writing with high performance and speed (its original design goal).

Although cross-process and ANR problems in SharedPreferences can also be solved with technical solutions, MMKV does not have these two problems naturally, and the component also supports migration from SharedPreferences to MMKV with minimal cost. So MMKV is really a better lightweight storage solution.

DataStore

DataStore is part of Android Jetpack. Jetpack DataStore is a data storage solution that allows you to store key-value pairs or typed objects using protocol buffers. DataStore uses Kotlin coroutines and flows to store data in an asynchronous, consistent transactional manner. If you are currently using SharedPreferences to store data, consider migrating to DataStore.

DataStore provides two different implementations: Preferences DataStore and Proto DataStore.

  • The Preferences DataStore is stored locally as key-value pairs similar to SharedPreferences, and this implementation does not require a predefined schema or ensure type safety.
  • Proto DataStore stores data as instances of custom data types. This implementation requires you to define schemas using protocol buffers, but ensures type safety.
Preferences DataStore usage

Import the dependencies

dependencies {
    // Preferences DataStore (SharedPreferences like APIs)
    implementation "androidx.datastore:datastore-preferences:1.0.0-alpha06"
    // Typed DataStore (Typed API surface, such as Proto)
    implementation "androidx.datastore:datastore-core:1.0.0-alpha06"
}

The Preferences DataStore can be used as follows

// 1. Construct the DataStore
val dataStore: DataStore<Preferences> = context.createDataStore(name = PREFERENCE_NAME)

// 2. The Preferences DataStore stores data locally as key-value pairs, so a key must be defined (for example KEY_JACKIE);
//    keys in the Preferences DataStore are of type Preferences.Key<T>
val KEY_JACKIE = stringPreferencesKey("username")

GlobalScope.launch {
    // 3. Store data
    dataStore.edit {
        it[KEY_JACKIE] = "jackie"
    }
    // 4. Read data
    dataStore.data.map {
        it[KEY_JACKIE]
    }.collect { // collect on the Flow to consume the data
        Log.i(TAG, "onCreate: $it") // Prints "jackie"
    }
}

Note that reads and writes happen inside a coroutine, because DataStore is implemented on top of Flow. You can also see that there are no commit()/apply() methods, and you can observe whether an operation succeeded or failed.
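For example, read failures surface as exceptions on the Flow and can be handled with the catch operator. A sketch following the official recommendation, reusing dataStore and KEY_JACKIE from the snippet above (note that emptyPreferences() and the exact package names have shifted between alpha versions, so treat the imports as indicative):

import androidx.datastore.preferences.core.emptyPreferences
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.map
import java.io.IOException

// Sketch: fall back to an empty Preferences object when the file cannot be read,
// and rethrow anything else.
val nameFlow = dataStore.data
    .catch { exception ->
        if (exception is IOException) emit(emptyPreferences()) else throw exception
    }
    .map { prefs -> prefs[KEY_JACKIE] }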

The Preferences DataStore only supports Int, Long, Boolean, Float and String key-value pairs. It is suitable for simple, small data and does not support partial updates: if one value is modified, the entire file is reserialized.

Migrating SharedPreferences to Preferences DataStore

Next, let's look at migrating SharedPreferences to DataStore. Pass a SharedPreferencesMigration when building the DataStore; after the DataStore has been constructed, you need to perform at least one read or write to complete the migration. Once the migration succeeds, the SharedPreferences file is deleted automatically.

val dataStoreFromPref = this.createDataStore(name = PREFERENCE_NAME_PREF,
    migrations = listOf(SharedPreferencesMigration(this, OLD_PREF_NANE)))

Our original SharedPreferences data is as follows


      
<map>
    <string name="name">lsm</string>
    <boolean name="male" value="false" />
    <int name="age" value="30" />
    <float name="height" value="175.0" />
</map>

Comparing the file directories before and after the migration, the original SharedPreferences file is deleted after the migration and the DataStore file is smaller. One interesting detail came up during the migration: if I do not read any value after migrating, the migrated file does not appear in the corresponding directory; only after reading some value does the file show up. I am not sure whether this is a bug or by design. The complete code is as follows:

val dataStoreFromPref = this.createDataStore(name = PREFERENCE_NAME_PREF,
    migrations = listOf(SharedPreferencesMigration(this, OLD_PREF_NANE)))
// After the migration you need to read manually before the migrated file shows up
val KEY_NAME = stringPreferencesKey("name")
GlobalScope.launch {
    dataStoreFromPref.data.map {
        it[KEY_NAME]
    }.collect {
        Log.i(TAG, "onCreate: ===============$it")
    }
}

Let's move on to Proto DataStore, which is more flexible and supports more types than Preferences DataStore:

  • Preferences DataStore supports only Int, Long, Boolean, Float and String, while Proto DataStore supports any type that can be described with Protocol Buffers
  • Proto DataStore uses binary encoding, which is smaller and faster than XML
Proto DataStore usage

The dependencies have been shown above; some plugins are also needed. The specific steps can be found in hi-dhl's articles:

  • Protobuf | installing the Gradle plugin and compiling proto files
  • Protobuf | how to install Protobuf and compile proto files on Ubuntu
  • Protobuf | how to install Protobuf and compile proto files on macOS

Because Proto DataStore stores typed objects, Protocol Buffers are used to serialize the objects before storing them locally. Serialization converts an object into a storable or transmittable form, and can be divided into object serialization and data serialization. In Android, object serialization can be implemented with Serializable or Parcelable. However, Serializable uses a large amount of reflection and many temporary variables during serialization and deserialization, which triggers GC frequently; its performance is poor, but it is simple to use. Parcelable is much faster than Serializable because reads and writes are custom serialized, but it is more complicated to use (though plugins already solve this).

Common data serialization formats include JSON, Protocol Buffers, and FlatBuffers. Protocol Buffers has two versions, proto2 and proto3; most projects use proto2, and their syntax differs. proto3 simplifies the proto2 syntax and improves development efficiency. Proto DataStore supports both; here we use proto3.

Create a new person.proto file and add the following:

syntax = "proto3"; option java_package = "com.hi.dhl.datastore.protobuf"; option java_outer_classname = "PersonProtos"; Message Person {// format: field type + field name + field number string name = 1; }Copy the code

syntax: specifies the protobuf version; if omitted it defaults to proto2. It must be the first line of the .proto file, apart from empty lines and comments.

option: a file-level option; here it sets the Java package and outer class name of the generated code.

message: the Person message contains one string field (name). Note that what follows the = sign is the field number, not a default value.

Each field consists of three parts: field type + field name + field number. After compilation, the message becomes a corresponding Java class.

This is a brief introduction to the syntax, and then Build to see the generated file, as described in the previous article.

And then we’ll look at how it’s used

        //1. Build Proto DataStore
        val protoDataStore: DataStore<PersonProtos.Person> = this
            .createDataStore(fileName = "protos_file",serializer = PersonSerializer)

        GlobalScope.launch(Dispatchers.IO) {
            protoDataStore.updateData { person ->
                //2. Write data
                person.toBuilder().setName("jackie").build()
            }

            //3. Read data
            protoDataStore.data.collect {
                Log.i(TAG, "onCreate: ============"+it.name)
            }

        }

The PersonSerializer class is implemented as follows:

object PersonSerializer: Serializer<PersonProtos.Person> {
    override val defaultValue: PersonProtos.Person
        get() {
            return PersonProtos.Person.getDefaultInstance()
        }


    override fun readFrom(input: InputStream): PersonProtos.Person {
        try {
            return PersonProtos.Person.parseFrom(input) // Is automatically generated by the compiler to read and parse the input message
        } catch (exception: Exception) {
            throw CorruptionException("Cannot read proto.", exception)
        }
    }

    override fun writeTo(t: PersonProtos.Person, output: OutputStream) =
        t.writeTo(output) // writeTo(output) is generated by the compiler; it writes the serialized message
}

Reads and writes also run inside a coroutine, and the file is created in the corresponding directory.

Migrating SharedPreferences to Proto DataStore

How do SharedPreferences migrate to Proto DataStore?

// 1. Create a mapping
val sharedPrefsMigration =
    androidx.datastore.migrations.SharedPreferencesMigration<PersonProtos.Person>(this, OLD_PREF_NANE) {
        sharedPreferencesView, person ->
        // Read the SharedPreferences data
        val follow = sharedPreferencesView.getString(NAME, "")
        // Write the data, i.e. map it onto the properties of the corresponding class
        person.toBuilder().setName(follow).build()
    }

// 2. Construct the Proto DataStore and pass in sharedPrefsMigration
val protoDataStoreFromPref = this.createDataStore(fileName = "protoDataStoreFile",
    serializer = PersonSerializer, migrations = listOf(sharedPrefsMigration))

GlobalScope.launch(Dispatchers.IO) {
    protoDataStoreFromPref.data.map {
        it.name
    }.collect {

    }
}

As you can see, the migration first creates the mapping, then builds the Proto DataStore and passes in sharedPrefsMigration. In the end the SharedPreferences file is deleted: even if only one value was migrated, the whole SharedPreferences file is removed, so make sure the migration carries over all the data you still need.

SharedPreferences vs DataStore function comparison

SharedPreferences vs DataStore performance comparison

The test device is a HUAWEI YAL-AL10 running Android 10, with 8 GB of RAM and an 8-core Kirin 980 CPU. First, the Preferences DataStore test code:

// 1. Construct the DataStore
val dataStore: DataStore<Preferences> = this.createDataStore(name = PREFERENCE_NAME)

GlobalScope.launch(Dispatchers.IO) { // Note: run on Dispatchers.IO
    val time = System.currentTimeMillis()
    for (index in 1..1000) {
        val key = index.toString()
        val KEY_JACKIE = intPreferencesKey(key)
        // 3. Store data
        dataStore.edit {
            it[KEY_JACKIE] = index
        }
    }
    Log.i(TAG, "onCreate: ==========" + (System.currentTimeMillis() - time))
}

I then tested SharedPreferences on the phone, and here is my code

GlobalScope.launch(Dispatchers.IO) {
    val time = System.currentTimeMillis()
    for (index in 1..1000) {
        val key = index.toString()
        edit.putInt(key, index).commit()
    }
    Log.i(TAG, "onCreate: ==========" + (System.currentTimeMillis() - time))
}

The statistics from three runs of each are as follows (the first three lines are the Preferences DataStore, the last three SharedPreferences):

com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========1969
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========1981
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========1884
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========750
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========719
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========698

The Preferences DataStore takes a little more than twice as long as SharedPreferences. A test on another phone, a Xiaomi Note 3, shows a similar ratio:

com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========17373
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========17310
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========7757
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========7793

The absolute numbers from two emulators differ from the real devices by a few times, but overall the ratio is not much different:

// Vm 1 Pixel_3a API 30
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========4751
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========4692
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========2883
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========2736
// Vm 2 nexus-4 API 29
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========3512
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========3503
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========1934
com.jackie.datastoredemo2 I/MainActivity: onCreate: ==========1965

After this series of tests we can roughly estimate that the Preferences DataStore takes about twice as long as SharedPreferences. Note that when testing you should specify Dispatchers.IO and run both tests in the same kind of coroutine; otherwise the results can differ a lot.

The comparison between Proto DataStore and SharedPreferences is meaningless, because the former tends to store only one object and is not comparable.

Application scenarios
  • The Preferences DataStore stores data locally as key-value pairs, similar to SharedPreferences, and suits simple fields.
  • Proto DataStore stores data as instances of a custom data type; it can hold more complex objects and suits saving important structured data.

Conclusion

SharedPreferences has a friendly API and lets you listen for data changes, but before 8.0 it could cause ANR (this was optimized in 8.0 and later) and it cannot be used across processes. DataStore comes in two forms, Preferences DataStore and Proto DataStore: the former suits key-value data but is not as fast as SharedPreferences (it takes roughly twice as long), while the latter suits custom data types. DataStore can also notify you when data changes, and it uses Flow to store data asynchronously and consistently, which is much more powerful, but it does not work across processes.

Proto DataStore feels like it may have an advantage for storing complex data: when some cached data object is needed locally (for example, the home page cache), using Proto DataStore to quickly fetch the whole object and then render it could be convenient. I have not compared its speed with other approaches yet, so interested readers can try it themselves.

Although MMKV is not an official Google library, it beats the two official storage options in performance, speed, and cross-process support. For simple data storage that needs cross-process access, MMKV is the first choice.

Finally, this article draws on many excellent articles; I am standing on the shoulders of giants, and I thank their authors!

References

mp.weixin.qq.com/s?__biz=MzI…

github.com/Tencent/MMK…

www.cnblogs.com/huxiao-tee/…

weishu.me/2016/10/13/…

qingmei2.blog.csdn.net/article/det…

github.com/Tencent/MMK…

zhuanlan.zhihu.com/p/53339153

tech.meituan.com/2015/02/26/…

juejin.cn/post/688144…

www.jianshu.com/p/3f64caa56…