Preface
SharedPreferences is a lightweight storage solution provided by Google that is quite convenient to use: it can read and write data directly without managing another thread. But it also brings many problems, especially ANR issues caused by SP, which are very common. Because of this, alternative solutions to SP have emerged, such as MMKV.
This article mainly covers the following: 1. Problems with SharedPreferences 2. Basic usage and introduction of MMKV 3. How MMKV works
Problems with SharedPreferences
SP is relatively inefficient
1. Read/write mode: Direct I/O
2. Data format: XML
3. Write mode: Full update
Since SP stores its data in XML format, every update must rewrite the data in full. This means that if we have 100 entries and update only one of them, all 100 must still be serialized back to XML and written to the file through I/O, which makes SP writes inefficient.
ANR caused by commit
public boolean commit() {
    // Save the data to mMap on the current thread
    MemoryCommitResult mcr = commitToMemory();
    SharedPreferencesImpl.this.enqueueDiskWrite(mcr, null);
    try {
        // The write runs in singleThreadPool; await() suspends the calling thread
        // until the write completes. This is where commit synchronizes.
        mcr.writtenToDiskLatch.await();
    } catch (InterruptedException e) {
        return false;
    }
    /*
     * Callback timing:
     * 1. commit calls back after both the memory and disk operations finish
     * 2. apply calls back after the memory operation finishes
     */
    notifyListeners(mcr);
    return mcr.writeToDiskResult;
}
As shown above: 1. commit has a return value indicating whether the modification was committed successfully. 2. commit is synchronous and does not return until the disk write completes.
So using commit is likely to cause ANR when the data volume is large
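To make the blocking concrete, here is a minimal self-contained sketch of the commit() pattern (class, method, and variable names are illustrative, not the framework's): the calling thread enqueues a simulated disk write on a single-thread pool, then parks on a CountDownLatch until the write counts it down.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy model of commit(): the caller blocks on writtenToDiskLatch until the
// background "disk write" finishes. This is not the framework code.
public class CommitLatchDemo {
    static String commitLikeWrite() {
        final CountDownLatch writtenToDiskLatch = new CountDownLatch(1);
        final StringBuilder log = new StringBuilder();
        ExecutorService singleThreadPool = Executors.newSingleThreadExecutor();
        singleThreadPool.execute(() -> {
            log.append("write;");            // the simulated disk write
            writtenToDiskLatch.countDown();  // signal completion
        });
        try {
            writtenToDiskLatch.await();      // commit() blocks here; on the main thread this risks ANR
        } catch (InterruptedException e) {
            return "interrupted";
        }
        log.append("returned");              // only reached after the write has "hit disk"
        singleThreadPool.shutdown();
        return log.toString();
    }
}
```

If the simulated write took seconds instead of microseconds, the caller would be stuck on `await()` the whole time, which is exactly how a main-thread commit turns into an ANR.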
ANR caused by Apply
commit is synchronous, so SP also provides the asynchronous apply. apply atomically commits the modified data to memory and then writes it to disk asynchronously, while commit writes to disk synchronously: each commit must wait for the in-flight commit to reach the disk before proceeding, which reduces efficiency. Because apply only commits atomically to memory, a later apply call simply overwrites the earlier in-memory data, which improves efficiency to some extent.
But apply also causes ANR problems
public void apply() {
    final long startTime = System.currentTimeMillis();
    final MemoryCommitResult mcr = commitToMemory();
    final Runnable awaitCommit = new Runnable() {
        @Override
        public void run() {
            mcr.writtenToDiskLatch.await(); // wait
        }
    };
    // Add awaitCommit to QueuedWork
    QueuedWork.addFinisher(awaitCommit);
    Runnable postWriteRunnable = new Runnable() {
        @Override
        public void run() {
            awaitCommit.run();
            QueuedWork.removeFinisher(awaitCommit);
        }
    };
    SharedPreferencesImpl.this.enqueueDiskWrite(mcr, postWriteRunnable);
}
- An `awaitCommit` Runnable task is added to the `QueuedWork` queue. Inside `awaitCommit`, `await()` is called to wait; lifecycle handlers such as `handleStopService` and `handleStopActivity` use this queue to decide whether they must wait for pending tasks to complete.
- A `postWriteRunnable` write task is queued through the `enqueueDiskWrite` method, and the write task is executed on a separate thread.
To ensure asynchronous tasks complete in time, `QueuedWork.waitToFinish()` is called in lifecycle handlers such as handleStopService(), handlePauseActivity(), and handleStopActivity() to wait for the write tasks to finish.
private static final ConcurrentLinkedQueue<Runnable> sPendingWorkFinishers =
        new ConcurrentLinkedQueue<Runnable>();

public static void waitToFinish() {
    Runnable toFinish;
    while ((toFinish = sPendingWorkFinishers.poll()) != null) {
        toFinish.run(); // equivalent to calling mcr.writtenToDiskLatch.await()
    }
}
`sPendingWorkFinishers` is a `ConcurrentLinkedQueue` instance. apply adds its write task to the `sPendingWorkFinishers` queue, and the write tasks are executed in a single-thread pool. Thread scheduling is not controlled by the program, so when the lifecycle switches, a task is not necessarily executing yet. `toFinish.run()` is equivalent to calling `mcr.writtenToDiskLatch.await()`, which waits until the write completes. `waitToFinish()` does exactly one thing: wait for the write tasks to finish, and nothing else. When a pile of write tasks has accumulated, they execute one after another; with large files this is slow, so it is not surprising that an ANR eventually occurs.
Therefore, when the amount of data is relatively large, apply can also cause ANR.
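The queue-draining behavior above can be sketched with a toy model (illustrative names, not the real QueuedWork class): finishers pile up in a `ConcurrentLinkedQueue`, and `waitToFinish()` drains and runs them one by one on the caller's thread.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy model of QueuedWork: apply() adds finishers, and waitToFinish()
// runs every pending finisher serially on the calling (main) thread.
public class QueuedWorkDemo {
    static final ConcurrentLinkedQueue<Runnable> sPendingWorkFinishers =
            new ConcurrentLinkedQueue<>();

    static int waitToFinish() {
        int drained = 0;
        Runnable toFinish;
        while ((toFinish = sPendingWorkFinishers.poll()) != null) {
            toFinish.run(); // each run() may block, like writtenToDiskLatch.await()
            drained++;
        }
        return drained;
    }
}
```

If each finisher blocks on a slow disk write, the main thread pays for all of them serially at a lifecycle switch, which is the ANR scenario described above.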
ANR caused by getXXX()
Not only the writes: all getXXX() methods are synchronous too. Calling getXXX() on the main thread must wait for SP to finish loading, which can also cause ANR. Calling getSharedPreferences() eventually calls SharedPreferencesImpl#startLoadFromDisk(), which starts a thread to read the data asynchronously.
private final Object mLock = new Object();
private boolean mLoaded = false;
private void startLoadFromDisk() {
synchronized (mLock) {
mLoaded = false;
}
new Thread("SharedPreferencesImpl-load") {
public void run() {
loadFromDisk();
}
}.start();
}
As you can see, a thread is started to read the data asynchronously. If getXXX() is called while a large amount of data is still being loaded:
public String getString(String key, @Nullable String defValue) {
    synchronized (mLock) {
        awaitLoadedLocked();
        String v = (String) mMap.get(key);
        return v != null ? v : defValue;
    }
}

private void awaitLoadedLocked() {
    ......
    while (!mLoaded) {
        try {
            mLock.wait();
        } catch (InterruptedException unused) {
        }
    }
    ......
}
mLock.wait() waits until the thread started by getSharedPreferences() finishes reading the data before returning. If a large amount of data is being read, this blocks the main thread.
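The loading handshake above can be condensed into a small runnable sketch (names borrowed from the excerpt, but this is a simplified model, not the framework class): the loader thread sets `mLoaded` and calls `notifyAll()`, while `getString()` spins on `mLock.wait()` until then.

```java
// Simplified model of the SP load handshake: getString() blocks on mLock
// until the loader thread finishes and wakes it up.
public class LoadWaitDemo {
    private final Object mLock = new Object();
    private boolean mLoaded = false;
    private String value;

    void startLoadFromDisk() {
        new Thread("SharedPreferencesImpl-load") {
            public void run() {
                String v = "from-disk";      // simulated slow file read
                synchronized (mLock) {
                    value = v;
                    mLoaded = true;
                    mLock.notifyAll();       // wake any waiting getString() callers
                }
            }
        }.start();
    }

    String getString(String defValue) {
        synchronized (mLock) {
            while (!mLoaded) {               // blocks the caller until loading ends
                try {
                    mLock.wait();
                } catch (InterruptedException ignored) {
                }
            }
            return value != null ? value : defValue;
        }
    }
}
```

Note that calling `getString()` before `startLoadFromDisk()` would block forever in this sketch, which mirrors why a main-thread get right after a cold getSharedPreferences() can stall.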
The use of MMKV
MMKV is a key-value component based on mmap memory mapping. Its underlying serialization/deserialization is implemented with protobuf, and it offers high performance and strong stability. It has been used in WeChat since mid-2015, and its performance and stability have been verified over time. It has since been ported to the Android/macOS/Win32/POSIX platforms and open-sourced.
MMKV advantages
1. MMKV implements the SharedPreferences interface, so it can be switched in seamlessly. 2. The mmap memory-mapped file provides a block of memory that can be written at any time: the app just writes data into it, and the operating system is responsible for flushing that memory back to the file, so there is no need to worry about data loss on crash. 3. MMKV serializes data with the protobuf protocol, which performs well in both speed and space usage.
Detailed usage details can be found in the documentation: github.com/Tencent/MMK…
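For orientation, here is a minimal usage sketch in the spirit of the project's README (Android Java; assumes the `com.tencent:mmkv` dependency is added, and uses the library's `initialize`, `defaultMMKV`, `encode`, and `decodeString` APIs):

```java
// In Application.onCreate(): initialize MMKV once for the process
String rootDir = MMKV.initialize(this);

// Get the default instance and read/write values
MMKV kv = MMKV.defaultMMKV();
kv.encode("username", "Alice");                  // write a string
String name = kv.decodeString("username", "");   // read with a default
kv.removeValueForKey("username");                // delete a key
```

Since MMKV implements the SharedPreferences interface, existing SP call sites can largely keep their shape during migration; consult the linked documentation for the full API.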
MMKV principle
Why is MMKV writing faster
IO operations
As we know, SP writes are based on I/O operations. To understand I/O, we first need to understand user space and kernel space.
Virtual memory is divided by the operating system into two parts: user space, where user program code runs, and kernel space, where kernel code runs. For security, they are isolated so that even if a user program crashes, the kernel is not affected. The procedure for writing a file is:
1. Call write to tell the kernel the starting address and length of the data to be written
2. The kernel copies the data to the kernel cache
3. At some later point, the operating system copies the data from the kernel cache to the disk, completing the write
MMAP
Linux initializes the contents of a virtual memory area by associating it with an object on disk, a process called memory mapping.
mmap maps a file into address space allocated in the process's virtual memory, establishing the mapping relationship.
Once this mapping is established, the memory can be read and written through pointers, and the system automatically writes the changes back to the corresponding file on disk.
MMAP advantages
- mmap copies file data between the disk and user memory only once, reducing the number of data copies and improving file read/write efficiency.
- mmap maps the disk file through logical memory, so operating on the memory is equivalent to operating on the file, with no extra thread needed; mmap can be operated on at nearly memory speed.
- mmap provides a block of memory that can be written at any time. The app just writes data into it, and the operating system writes the memory back to the file when memory is tight or the process exits, so there is no need to worry about data loss on crash.
As you can see, mmap writes at essentially memory speed, much faster than SP, which is why MMKV writes faster.
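The mapping idea can be demonstrated with Java NIO's `MappedByteBuffer` (a plain-JVM sketch of the same OS mechanism; MMKV itself does this natively): writing through the mapped buffer is a memory operation, yet the bytes end up in the file.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal mmap round trip: map a temp file into memory, write bytes through
// the mapped buffer (no write() syscall in our code), then read the file back
// to show the OS persisted the pages.
public class MmapDemo {
    static String roundTrip(String msg) {
        try {
            Path path = Files.createTempFile("mmap-demo", ".bin");
            byte[] data = msg.getBytes(StandardCharsets.UTF_8);
            try (FileChannel ch = FileChannel.open(path,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                // Mapping READ_WRITE grows the empty file to the requested length
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, data.length);
                buf.put(data);   // "operating memory is operating the file"
                buf.force();     // ask the OS to flush the dirty pages now
            }
            return new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
        } catch (Exception e) {
            return "error: " + e;
        }
    }
}
```

Even without the explicit `force()`, the OS would eventually write the dirty pages back, which is the crash-safety property the article attributes to mmap.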
MMKV writing mode
SP data structure
SP stores data in XML format.
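The original post illustrates this with a screenshot; for reference, a typical `shared_prefs` XML file has roughly this shape (a representative sketch with made-up keys, not the exact file from the post):

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <string name="username">Alice</string>
    <int name="launch_count" value="42" />
    <boolean name="dark_mode" value="true" />
</map>
```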
However, this also causes SP to update data only in full if it wants to
MMKV data structure
MMKV uses protobuf to store data, which carries less redundancy, saves space, and makes it easy to append data at the end.
Write mode
Incremental write: new data is appended directly after the existing data, regardless of whether the key already exists. This is more efficient, since updating a value only requires appending one record.
Of course, this raises a problem: what happens as the file keeps growing through incremental appends? When the file runs out of space, a full write is performed. If the live data fits within the current file size, the full write updates the file in place; otherwise, the file must be expanded. (During expansion, the file size likely to be needed in the future is estimated from the average key-value size, to avoid frequent full writes.)
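The append-then-compact scheme can be modeled with a toy in-memory log (illustrative only; real MMKV appends protobuf records to an mmap'd file): `set()` just appends, reading replays the log so the last write wins, and `compact()` plays the role of the full write that drops stale entries.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy append-only key-value log in the spirit of MMKV's incremental write.
public class AppendLogDemo {
    static class Entry {
        final String k, v;
        Entry(String k, String v) { this.k = k; this.v = v; }
    }

    final List<Entry> log = new ArrayList<>();

    void set(String k, String v) {
        log.add(new Entry(k, v));            // incremental write: O(1) append, duplicates allowed
    }

    Map<String, String> replay() {
        Map<String, String> m = new LinkedHashMap<>();
        for (Entry e : log) m.put(e.k, e.v); // later entries overwrite earlier ones
        return m;
    }

    void compact() {                          // "full write": rewrite only the live data
        Map<String, String> live = replay();
        log.clear();
        for (Map.Entry<String, String> e : live.entrySet()) set(e.getKey(), e.getValue());
    }
}
```

Updating a key costs one append until the log fills up, at which point a single compaction reclaims the space, matching the trade-off described above.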
Three advantages of MMKV
- Mmap prevents data loss and improves read and write efficiency.
- Compact data: the most information is represented with the least data, reducing data size;
- Incremental update, avoiding full write of a large amount of data each time.
References
[Google] SharedPreferences