- I/O optimization is about much more than just avoiding reads and writes of large files on the main thread.
I/O basics
- I/O flow: applications issue logical I/O requests to the file system, and the file system issues physical I/O requests to the storage device (disk)
The file system
- File read flow: the application calls read(); the call traps from user space into the kernel and passes through the VFS, the concrete file system, and the page cache; if the data is not in the page cache, an actual I/O request must be issued to the disk
- File system: a way of storing and organizing data, e.g. HFS+ and APFS (Apple File System, iOS 10.3+) on iOS, and Ext4 and F2FS (Flash-Friendly File System) on Android (Linux).
You can see a list of all the filesystems recognized by the kernel:

```
cat /proc/filesystems
```
- Virtual file system (VFS): abstracts over the concrete file systems and provides a unified interface for application operations;
- Page Cache: the file system caches file data in memory to improve the hit ratio.
- Buffer Cache: a cache of raw disk data whose purpose is to merge file-system I/O requests and reduce the number of disk I/Os.
You can view the cache's memory usage:

```
cat /proc/meminfo
```
- When a phone runs low on memory, the system reclaims this cache memory, which lowers overall I/O performance.
Disk
- Disk: the system's storage device, such as CDs, mechanical hard disks, and solid-state drives (SSDs);
- Disk I/O flow: a request first passes through the kernel's generic block layer, then the I/O scheduling layer and the device driver layer, and finally reaches the specific hardware device;
- Block device: a device that supports random access to fixed-size blocks of data. CDs, hard disks, and SSDs are all block devices;
- Generic block layer: receives disk requests from the upper layers and issues the final I/O requests, so that the upper layers do not need to care about how the underlying hardware is implemented;
- I/O scheduling layer: Merges and sorts requests based on the set scheduling algorithm
```
/sys/block/[disk]/queue/nr_requests   # queue length, usually 128
/sys/block/[disk]/queue/scheduler     # scheduling algorithm
```
- Block device driver layer: selects the driver for the specific physical device and controls the hardware to complete the final I/O request, e.g. burning a CD with a laser or electrically erasing flash memory;
Android I/O
Android Flash Memory (ROM)
- eMMC was the standard for Android a few years ago; UFS 2.0/2.1 has been the standard in recent years; iOS and macOS use the NVMe protocol
- Flash performance is not only determined by hardware, but also by the standard adopted and the implementation of the file system
Why are files corrupted?
- Corruption shows up as format errors or missing content; for example, SQLite corrupts at a rate of roughly one in ten thousand, and SharedPreferences corrupts at a similar rate under frequent cross-process reads and writes.
- From the perspective of applications, file systems, and disks:
1. Disk. The flash memory used in phones is an electronic storage device, so errors such as charge leakage can occur during data transfer; flash also uses ECC, multi-level encoding, and similar techniques to increase reliability, so in general this is unlikely. Flash lifetime can also cause data errors: because of flash's internal structure, an address must be erased before it can be written again, and each block supports only a limited number of erase cycles, from a few thousand to about a hundred thousand depending on the memory cells used (SLC > MLC > TLC).
2. File system. A kernel crash or sudden power loss can corrupt the file system, because the file system first writes data to the page cache and only flushes it to disk at a suitable moment. But file systems are also heavily protected: the system partition is mounted read-only, and there are consistency-check and recovery mechanisms such as fsck for ext4 and the checkpoint mechanism in F2FS.
3. Application. Most I/O methods are not atomic. Writing a file from multiple processes or threads, or operating on a file through an already-closed file descriptor (fd), can cause data to be overwritten or deleted. In practice, most file corruption comes from poorly designed application code, not from the file system or the disk.
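Since most application-level corruption comes from non-atomic writes, one defensive pattern is to checksum the payload, write it to a temporary file, and atomically rename it into place, so readers never see a half-written file. A minimal sketch using only the JDK (the SafeFile class and its 8-byte CRC32 header are illustrative, not from any library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import java.util.zip.CRC32;

public class SafeFile {
    // Write: CRC32 header + payload into a temp file, then atomically rename it over the target.
    public static void write(Path p, byte[] data) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(data);
        long c = crc.getValue();
        byte[] out = new byte[data.length + 8];
        for (int i = 0; i < 8; i++) {
            out[i] = (byte) (c >>> (56 - 8 * i)); // big-endian checksum header
        }
        System.arraycopy(data, 0, out, 8, data.length);
        Path tmp = p.resolveSibling(p.getFileName() + ".tmp");
        Files.write(tmp, out);
        // A rename within one directory is atomic: readers see the old file or the new one, never a mix.
        Files.move(tmp, p, StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    }

    // Read: recompute the CRC and fail loudly if the payload was corrupted.
    public static byte[] read(Path p) throws IOException {
        byte[] raw = Files.readAllBytes(p);
        if (raw.length < 8) throw new IOException("truncated file");
        long stored = 0;
        for (int i = 0; i < 8; i++) {
            stored = (stored << 8) | (raw[i] & 0xFFL);
        }
        byte[] data = Arrays.copyOfRange(raw, 8, raw.length);
        CRC32 crc = new CRC32();
        crc.update(data);
        if (crc.getValue() != stored) throw new IOException("checksum mismatch: file corrupted");
        return data;
    }
}
```

SQLite's journal and SharedPreferences' backup file are variations of the same temp-file idea; for critical plain files the pattern is worth applying directly.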
Why does I/O sometimes suddenly become slow?
- Memory pressure: when memory runs low, the system reclaims the Page Cache and Buffer Cache, and most writes then go straight to disk, so performance drops sharply.
- Write amplification: overwriting flash requires an erase first, and erases operate on whole blocks, so rewriting a single page can force migration of an entire block's data. This is the classic write-amplification phenomenon. On low-end or long-used devices, heavy disk fragmentation and little free space make write amplification very likely.
- Insufficient configuration: The performance of the CPU and flash memory of the low-end machine is relatively poor, and it is prone to bottleneck in the case of high load.
I/O performance evaluation
- The whole I/O path: application -> system call -> virtual file system (VFS) -> file system -> generic block layer -> device driver -> disk
- I/O performance indicators: throughput and IOPS
- Disk throughput: the volume of disk I/O per second, i.e. the amount of data written to and read from the disk.
- IOPS: the number of read/write I/O operations the disk performs in one second.
- I/O measurement:
- Use proc to track I/O wait counts and wait times
```
proc/self/schedstat:
  se.statistics.iowait_count   # number of I/O waits
  se.statistics.iowait_sum     # total time spent waiting on I/O

# On a rooted device we can enable kernel I/O monitoring and dump all block
# reads and writes to the kernel log, then view them with dmesg:
echo 1 > /proc/sys/vm/block_dump
dmesg -c | grep pid
.sample.io.test(7540): READ block 29262592 on dm-1 (256 sectors)
.sample.io.test(7540): READ block 29262848 on dm-1 (256 sectors)
```
- Use strace to track the number and duration of I/O-related system calls
```
strace -ttT -f -p [pid]
read(53, "**************"..., 1024) = 1024 <0.000447>
read(53, "**************"..., 1024) = 1024 <0.000084>
read(53, "**************"..., 1024) = 1024 <0.000059>

# strace can also summarize the time spent in all system calls over a period.
# Note that strace itself is expensive and distorts execution time.
strace -c -f -p [pid]
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.56    0.041002          21      1987           read
  1.44    0.000605          55        11           write
```
- Use vmstat

```
# The buff and cache fields under memory, bi and bo under io, cs under system,
# and sy and wa under cpu all relate to I/O behavior.
# We can use dd to generate load and watch how vmstat's output changes.

# Clear the page cache, dentries and inodes first:
echo 3 > /proc/sys/vm/drop_caches
# Refresh the output every second:
vmstat 1
# Test write speed: write to /data/data/test with a 4K block size, 1000 blocks:
dd if=/dev/zero of=/data/data/test bs=4k count=1000
```
Three ways of doing I/O
1. Standard I/O
- Standard I/O is buffered (cached) I/O: reads and writes go through the page cache rather than hitting the disk directly.
- Buffered I/O can greatly reduce the number of actual disk reads and writes and thus improve performance; the trade-off is that its delayed writes can lose data.
- The modified memory in the Page Cache is called “dirty pages,” and the kernel periodically writes data to disk through the Flush thread.
The exact writeback policy can be read from /proc/sys/vm/ or via `sysctl -a | grep vm`:

```
vm.dirty_writeback_centisecs = 500   # the flush threads wake up every 5 seconds
vm.dirty_expire_centisecs = 3000     # dirty pages older than 30 seconds are written back
vm.dirty_background_ratio = 10       # background writeback starts once dirty pages exceed 10% of physical memory
vm.dirty_ratio = 20                  # hard limit on the system's dirty page cache: 20% of physical memory
```
- In practice, data that is too important to lose should be written synchronously.
- When system calls such as sync, fsync, and msync are used in an application, the kernel immediately writes the corresponding data back to disk.
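In Java, fsync is exposed as FileDescriptor.sync(). A minimal sketch of a synchronous write for data that must not be lost (the class, method, and path names are illustrative):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class SyncWrite {
    // Write data and block until the kernel has flushed it to the storage device.
    public static void writeDurably(String path, byte[] data) throws IOException {
        try (FileOutputStream fos = new FileOutputStream(path)) {
            fos.write(data);      // data lands in the page cache (delayed write)
            fos.getFD().sync();   // fsync: force the dirty pages for this file to disk
        }
    }
}
```

sync() is expensive because it waits for the device, so reserve it for data that genuinely cannot be lost.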
2. Direct I/O
- Many databases manage their own data and index caches and rely little on the page cache; they prefer to bypass the page-caching mechanism, which saves one copy of the data and avoids polluting the page cache.
- Both reads and writes execute synchronously against the disk, which may leave the program waiting;
- Direct I/O should be considered only when the overhead of buffered I/O has been shown to be significant
3. mmap
- When Android loads a dex file at startup, it does not read the whole file into memory at once; it uses mmap().
- mmap() maps the file into the process's address space, so the user-space buffer shares physical pages (the page cache) with the kernel;
- Advantages:
- Fewer system calls: a single mmap() system call is needed; all subsequent accesses behave like plain memory operations, with no flood of read/write system calls
- Fewer data copies: a normal read needs two copies (disk to page cache, then page cache to user buffer); mmap only needs the copy from disk into the page cache;
- High reliability: after mmap writes data into the page cache, kernel threads periodically write it back to disk, just like the delayed-write mechanism of buffered I/O.
- It is suitable for frequent read and write operations in the same area, such as user logs and data reporting
- Mmap is also a good choice for cross-process synchronization, and the Binder mechanism in Android uses Mmap internally
- Disadvantages:
- Like buffered I/O, mmap can lose data if the kernel crashes or power is suddenly lost; msync can be used to force a synchronous write.
- Increased virtual memory use: an application's address space is limited, and mmapping a large file can easily run it out of virtual memory.
- Disk latency: mmap triggers real disk I/O through page faults, so if the current problem is high disk I/O latency, the small system-call overhead that mmap removes is a drop in the bucket. The class-reordering technique mentioned earlier exists precisely to reduce the disk I/O latency caused by page faults;
- On low-end devices, or when system resources are severely constrained, mmap can cause frequent flushing to disk and its performance degrades rapidly.
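In Java, mmap is exposed through FileChannel.map(), which returns a MappedByteBuffer; force() plays the role of msync. A small sketch (the MmapDemo class and the 4-byte mapping size are illustrative):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    public static void writeInt(String path, int value) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel ch = raf.getChannel()) {
            // Map 4 bytes read-write; the file is grown to that size if needed.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4);
            buf.putInt(value); // a plain memory write, no per-access syscall
            buf.force();       // msync: flush the dirty pages, like fsync for mmap
        }
    }

    public static int readInt(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4);
            return buf.getInt();
        }
    }
}
```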
Multithreaded blocking I/O and NIO
Multithreaded blocking I/O
- I/O operations can be slow, so keep them off the main thread whenever possible.
- File read/write is ultimately bounded by the disk's I/O performance; once that bottleneck is reached, adding threads no longer helps, and too many threads can noticeably degrade the application's overall performance.
- Used sensibly, multiple threads reduce I/O waiting; too many blocked threads cause frequent thread switching and increase the system's context-switch overhead;
- In day-to-day development we mostly read relatively small files, so a shared dedicated I/O thread and a freshly created thread make little practical difference;
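A minimal sketch of a bounded I/O thread pool (the pool size of 4 and the IoPool name are assumptions for illustration, not a universal recommendation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IoPool {
    // A small, fixed-size pool: enough concurrency to hide disk latency,
    // not enough to drown the scheduler in blocked threads.
    private static final ExecutorService IO = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r, "io-pool");
        t.setDaemon(true);
        t.setPriority(Thread.NORM_PRIORITY - 1); // keep the main/UI thread responsive
        return t;
    });

    // Run all tasks on the pool and collect results in submission order.
    public static List<Integer> runAll(List<Callable<Integer>> tasks) throws Exception {
        List<Integer> out = new ArrayList<>();
        for (Future<Integer> f : IO.invokeAll(tasks)) {
            out.add(f.get());
        }
        return out;
    }
}
```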
NIO
- Asynchronous I/O hands the I/O request to the system and continues executing; completion is delivered as an event, which reduces thread-switching overhead.
- Drawback: the application becomes more complex, and retrofitting asynchrony is sometimes not easy
- Effect: the biggest win is not shorter file-read times but higher overall CPU utilization (the time a thread would spend waiting on disk I/O is used for other CPU work)
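Java's NIO.2 exposes this model through AsynchronousFileChannel: the read is submitted to the channel and the calling thread stays free until the result is needed. A small sketch (the AsyncRead class is illustrative; a CompletionHandler could be used instead of the Future):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutionException;

public class AsyncRead {
    public static String readAll(Path path)
            throws IOException, ExecutionException, InterruptedException {
        try (AsynchronousFileChannel ch =
                     AsynchronousFileChannel.open(path, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            // Submit the read; the thread could do other work before calling get().
            ch.read(buf, 0).get();
            buf.flip();
            return new String(buf.array(), 0, buf.limit());
        }
    }
}
```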
- Square's Okio is recommended; it supports both synchronous and asynchronous I/O. A demo follows:
```java
// Okio has two key interfaces, Sink and Source, both of which extend Closeable;
// Sink can roughly be thought of as an OutputStream, and Source as an InputStream.
// Both interfaces support read/write timeouts.

// 1. BufferedSink defines a family of methods that write into a buffer:
BufferedSink write(byte[] source)                            // write the byte array source
BufferedSink write(byte[] source, int offset, int byteCount) // write byteCount bytes of source, starting at offset
BufferedSink write(ByteString byteString)                    // write a ByteString
BufferedSink write(Source source, long byteCount)            // write byteCount bytes read from source
long writeAll(Source source)                                 // write all data from source
BufferedSink writeByte(int b)                                // write a single byte
BufferedSink writeDecimalLong(long v)                        // write a long as a decimal string
BufferedSink writeHexadecimalUnsignedLong(long v)            // write an unsigned long as a hexadecimal string
BufferedSink writeInt(int i)                                 // write an int
BufferedSink writeIntLe(int i)                               // write an int, little-endian
BufferedSink writeLong(long v)                               // write a long
BufferedSink writeLongLe(long v)                             // write a long, little-endian
BufferedSink writeShort(int s)                               // write a short
BufferedSink writeShortLe(int s)                             // write a short, little-endian
BufferedSink writeString(String string, Charset charset)     // write a String encoded with charset
BufferedSink writeString(String string, int beginIndex, int endIndex, Charset charset) // write string[beginIndex..endIndex) encoded with charset
BufferedSink writeUtf8(String string)                        // write a String encoded as UTF-8
BufferedSink writeUtf8(String string, int beginIndex, int endIndex) // write string[beginIndex..endIndex) encoded as UTF-8
BufferedSink writeUtf8CodePoint(int codePoint)               // write a single code point, UTF-8 encoded (1-4 bytes)
```
```java
// 2. BufferedSource defines methods that closely mirror BufferedSink,
//    except that they read instead of write:
int read(byte[] sink)                            // read up to sink.length bytes into sink
int read(byte[] sink, int offset, int byteCount) // read up to byteCount bytes into sink, starting at offset
long readAll(Sink sink)                          // read everything remaining into sink
byte readByte()                                  // read a single byte
byte[] readByteArray()                           // read the rest of the source as a byte array
byte[] readByteArray(long byteCount)             // read a byte array of length byteCount
ByteString readByteString()                      // read the rest of the source as a ByteString
ByteString readByteString(long byteCount)        // read a ByteString of length byteCount
long readDecimalLong()                           // read a long from its decimal string form
void readFully(Buffer sink, long byteCount)      // read exactly byteCount bytes into sink
void readFully(byte[] sink)                      // fill sink completely
long readHexadecimalUnsignedLong()               // read an unsigned long from its hexadecimal form
int readInt()                                    // read an int
int readIntLe()                                  // read an int, little-endian
long readLong()                                  // read a long
long readLongLe()                                // read a long, little-endian
short readShort()                                // read a short
short readShortLe()                              // read a short, little-endian
String readString(Charset charset)               // read the rest of the source as a String decoded with charset
String readString(long byteCount, Charset charset) // read byteCount bytes as a String decoded with charset
String readUtf8()                                // read the rest of the source as UTF-8
String readUtf8(long byteCount)                  // read byteCount bytes as UTF-8
int readUtf8CodePoint()                          // read a single UTF-8 code point (1-4 bytes)
String readUtf8Line()                            // read a line of UTF-8, stopping at a newline
String readUtf8LineStrict()                      // like readUtf8Line, but throws if EOF is hit before a newline
```
```java
// 3. ByteString is a powerful utility class: it can render bytes as a UTF-8
//    string, as Base64, or as an MD5/SHA-256 hash value.
String base64()
String base64Url()
String utf8()
ByteString sha1()
ByteString sha256()
static ByteString decodeBase64(String base64)
static ByteString decodeHex(String hex)
static ByteString encodeUtf8(String s)
```
```java
// 4. Reading and writing:

/**
 * @Author: LiuJinYang
 * @CreateDate: 2020/12/23
 */
public class OkioDemo {
    public static void main(String[] args) {
        testWrite();
        testRead();
        testGzip();
    }

    private static void testWrite() {
        String fileName = "tea.txt";
        boolean isCreate;
        Sink sink;
        BufferedSink bufferedSink = null;
        String path = Environment.getExternalStorageDirectory().getPath();
        try {
            File file = new File(path, fileName);
            if (!file.exists()) {
                isCreate = file.createNewFile();
            } else {
                isCreate = true;
            }
            if (isCreate) {
                sink = Okio.sink(file);
                bufferedSink = Okio.buffer(sink);
                bufferedSink.writeInt(90002);
                bufferedSink.writeString("asdfasdf", Charset.forName("GBK"));
                bufferedSink.flush();
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (null != bufferedSink) {
                    bufferedSink.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private static void testRead() {
        String fileName = "tea.txt";
        Source source;
        BufferedSource bufferedSource = null;
        try {
            String path = Environment.getExternalStorageDirectory().getPath();
            File file = new File(path, fileName);
            source = Okio.source(file);
            bufferedSource = Okio.buffer(source);
            String read = bufferedSource.readString(Charset.forName("GBK"));
            LjyLogUtil.d(read);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (null != bufferedSource) {
                    bufferedSource.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Sometimes, for example in network requests, we need Gzip.
     */
    private static void testGzip() {
        Sink sink;
        BufferedSink bufferedSink = null;
        GzipSink gzipSink;
        try {
            File dest = new File("resources/gzip.txt");
            sink = Okio.sink(dest);
            gzipSink = new GzipSink(sink);
            bufferedSink = Okio.buffer(gzipSink);
            bufferedSink.writeUtf8("android vs ios");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            closeQuietly(bufferedSink);
        }

        Source source;
        BufferedSource bufferedSource = null;
        GzipSource gzipSource;
        try {
            File file = new File("resources/gzip.txt");
            source = Okio.source(file);
            gzipSource = new GzipSource(source);
            bufferedSource = Okio.buffer(gzipSource);
            String content = bufferedSource.readUtf8();
            System.out.println(content);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            closeQuietly(bufferedSource);
        }
    }

    public static void closeQuietly(Closeable closeable) {
        if (closeable != null) {
            try {
                closeable.close();
            } catch (RuntimeException rethrown) {
                throw rethrown;
            } catch (Exception ignored) {
            }
        }
    }
}
```
Small file system
- For a file system, directory lookup performance is very important
- File read time = time to find the file's inode + time to read the file data via the inode. If we frequently read and write tens of thousands of small files, the inode-lookup time becomes very significant;
- Google's GFS, Taobao's open-source TFS, Facebook's Haystack, and WeChat's SFS are all file systems designed specifically for storing and retrieving huge numbers of small files. Because they expose VFS-compatible interfaces, upper-layer I/O code does not need to change.
- Besides merging many small files into large files, small files that are accessed together can be stored adjacently, turning random access across small files into sequential access and greatly improving performance. Merged storage also reduces the disk fragmentation caused by small-file storage and improves disk utilization.
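The merge idea behind these systems can be sketched as a toy: append small blobs into one big file and keep an in-memory index of (offset, length), so a lookup costs one read at a known offset instead of a per-file directory walk plus inode lookup. All names here are illustrative; a real store like Haystack persists the index and reads only the needed byte range:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class BlobPack {
    private final Path file;
    private final Map<String, long[]> index = new HashMap<>(); // name -> {offset, length}

    public BlobPack(Path file) {
        this.file = file;
    }

    // Append the blob to the single pack file and remember where it landed.
    public void put(String name, byte[] data) throws IOException {
        long offset = Files.exists(file) ? Files.size(file) : 0;
        Files.write(file, data, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        index.put(name, new long[]{offset, data.length});
    }

    // Look up a blob by name: one index hit plus one ranged read.
    public byte[] get(String name) throws IOException {
        long[] e = index.get(name);
        byte[] all = Files.readAllBytes(file); // a real store would pread only [offset, offset+length)
        return Arrays.copyOfRange(all, (int) e[0], (int) (e[0] + e[1]));
    }
}
```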
I/O tracking
1. Java Hook
- The full call chain of FileInputStream:

```java
// java: FileInputStream -> IoBridge.open -> Libcore.os.open -> BlockGuardOs.open -> Posix.open

// 1. Libcore exposes its Os implementation as a static field:
//    public static Os os = new BlockGuardOs(new Posix());
// Reflect on that static field:
Class<?> clibcore = Class.forName("libcore.io.Libcore");
Field fos = clibcore.getDeclaredField("os");

// 2. Use a dynamic proxy to insert instrumentation before and after every I/O-related method:
Proxy.newProxyInstance(cPosix.getClassLoader(), getAllInterfaces(cPosix), this);

beforeInvoke(method, args, throwable);
result = method.invoke(mPosixOs, args);
afterInvoke(method, args, result);
```
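The dynamic-proxy technique can be shown self-contained with a toy interface (the Fs interface here is hypothetical; Matrix proxies libcore's Os interface in the same way):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class HookDemo {
    public interface Fs {
        int open(String path);
    }

    public static final AtomicInteger CALLS = new AtomicInteger();

    // Wrap the real implementation in a proxy that records every invocation.
    public static Fs instrument(Fs real) {
        InvocationHandler h = (proxy, method, args) -> {
            CALLS.incrementAndGet();                 // "beforeInvoke": record the call
            Object result = method.invoke(real, args);
            return result;                           // "afterInvoke": could inspect the result here
        };
        return (Fs) Proxy.newProxyInstance(Fs.class.getClassLoader(),
                new Class<?>[]{Fs.class}, h);
    }
}
```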
- Disadvantages:
- Poor performance: dynamic proxies plus Java's heavy string manipulation are costly
- Cannot monitor native code
- Poor compatibility: in particular, Android P added restrictions on non-public APIs
2. Native Hook
- Profilo uses a PLT Hook scheme, whose performance is slightly better than a GOT Hook scheme, though the GOT Hook scheme has better compatibility
- Finally, these functions in libc.so are chosen as the hook targets:

```c
int open(const char *pathname, int flags, mode_t mode);
ssize_t read(int fd, void *buf, size_t size);
ssize_t write(int fd, const void *buf, size_t size);
int close(int fd);
```
- You also need to choose which of the libraries calling these functions to hook. WeChat's Matrix picks libjavacore.so, libopenjdkjvm.so, and libopenjdk.so, which covers all Java-layer I/O calls; see io_canary_jni.cc for details.
- Even so, the approach of atrace.cpp in Profilo is preferable: it simply walks every loaded library and replaces the target symbols in all of them.
```cpp
void hookLoadedLibs() {
  auto& functionHooks = getFunctionHooks();
  auto& seenLibs = getSeenLibs();
  facebook::profilo::hooks::hookLoadedLibs(functionHooks, seenLibs);
}
```
Using Matrix
- matrix-android currently monitors application package size, frame-rate changes, startup time, jank, slow methods, SQLite operations, file reads and writes, memory leaks, and more.
```groovy
// 1. In gradle.properties:
MATRIX_VERSION=0.6.6

// 2. In the root build.gradle, add the Matrix plugin dependency:
classpath("com.tencent.matrix:matrix-gradle-plugin:${MATRIX_VERSION}") { changing = true }

// 3.1 In app/build.gradle, apply the Matrix plugin:
apply plugin: 'com.tencent.matrix-plugin'

// 3.2
matrix {
    trace {
        enable = true    // if you don't want to use trace canary, set false
        baseMethodMapFile = "${project.buildDir}/matrix_output/Debug.methodmap"
        blackListFile = "${project.projectDir}/matrixTrace/blackMethodList.txt"
    }
}

// 3.3 In app/build.gradle, add the Matrix module dependencies:
implementation group: "com.tencent.matrix", name: "matrix-android-lib", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-android-commons", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-trace-canary", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-resource-canary-android", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-resource-canary-common", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-io-canary", version: MATRIX_VERSION, changing: true
implementation group: "com.tencent.matrix", name: "matrix-sqlite-lint-android-sdk", version: MATRIX_VERSION, changing: true
```

```java
/**
 * 4. Implement PluginListener to receive the data reported by Matrix.
 */
public class TestPluginListener extends DefaultPluginListener {
    public static final String TAG = "Matrix.TestPluginListener";

    public TestPluginListener(Context context) {
        super(context);
    }

    @Override
    public void onReportIssue(Issue issue) {
        super.onReportIssue(issue);
        MatrixLog.e(TAG, issue.toString());
        // add your code to process data
    }
}

/**
 * 5. Implement IDynamicConfig, the dynamic configuration interface through
 * which Matrix's internal parameters can be modified.
 */
public class DynamicConfigImplDemo implements IDynamicConfig {
    private static final String TAG = "Matrix.DynamicConfigImplDemo";

    public DynamicConfigImplDemo() { }

    public boolean isFPSEnable() { return true; }
    public boolean isTraceEnable() { return true; }
    public boolean isMatrixEnable() { return true; }

    @Override
    public String get(String key, String defStr) {
        // here return the default value which is inside the sdk; change it as you wish.
        // matrix-sdk-key in class MatrixEnum.
        return defStr;
    }

    @Override
    public int get(String key, int defInt) {
        // matrix-sdk-key in class MatrixEnum.
        if (MatrixEnum.clicfg_matrix_resource_max_detect_times.name().equals(key)) {
            MatrixLog.i(TAG, "key:" + key + ", before change:" + defInt + ", after change, value:" + 2);
            return 2; // new value
        }
        if (MatrixEnum.clicfg_matrix_trace_fps_report_threshold.name().equals(key)) {
            return 10000;
        }
        if (MatrixEnum.clicfg_matrix_trace_fps_time_slice.name().equals(key)) {
            return 12000;
        }
        return defInt;
    }

    @Override
    public long get(String key, long defLong) {
        // matrix-sdk-key in class MatrixEnum.
        if (MatrixEnum.clicfg_matrix_trace_fps_report_threshold.name().equals(key)) {
            return 10000L;
        }
        if (MatrixEnum.clicfg_matrix_resource_detect_interval_millis.name().equals(key)) {
            MatrixLog.i(TAG, key + ", before change:" + defLong + ", after change, value:" + 2000);
            return 2000;
        }
        return defLong;
    }

    @Override
    public boolean get(String key, boolean defBool) { return defBool; }

    @Override
    public float get(String key, float defFloat) { return defFloat; }
}

/**
 * 6. Initialize Matrix wherever the application starts.
 */
private void initMatrix() {
    // build matrix
    Matrix.Builder builder = new Matrix.Builder(this);
    // add general pluginListener
    builder.patchListener(new TestPluginListener(this));
    // dynamic config
    DynamicConfigImplDemo dynamicConfig = new DynamicConfigImplDemo();
    // init plugin
    IOCanaryPlugin ioCanaryPlugin = new IOCanaryPlugin(new IOConfig.Builder()
            .dynamicConfig(dynamicConfig)
            .build());
    // add to matrix
    builder.plugin(ioCanaryPlugin);
    // init matrix
    Matrix.init(builder.build());
    // start plugin
    ioCanaryPlugin.start();
}
```

At this point, Matrix is integrated into the project and has begun collecting and analyzing performance data. See the sample at https://github.com/Tencent/Matrix/tree/dev/samples/sample-android/.
Monitoring content
- For each access: the file name, its original size, the stack that opened it, the thread used, how long the operation took, the buffer size used, and whether reads were sequential or random;
- Main-thread I/O: I/O write times can suddenly balloon, so even a few hundred KB of data should preferably not be handled on the main thread;
- Read/write buffer too small: an undersized buffer causes many wasted system calls and memory copies, increasing the number of reads and writes and hurting performance.
- Repeat reads: If a file is read frequently and has not been written to or updated, we can improve performance by caching. (Adding a layer of memory cache is the most direct and efficient way)
```java
public String readConfig() {
    if (cache != null) {
        return cache;
    }
    cache = read("configFile");
    return cache;
}
```
- Resource leak: Indicates that open resources, including files and cursors, are not closed in time, resulting in leakage. This is a very low-level coding error, but it is very common.
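The buffer-size effect can be demonstrated with a stream that counts how often the underlying source is actually hit: a larger BufferedInputStream buffer turns many small reads into a few large ones (the class names here are illustrative):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedReadDemo {
    // Counts how many times the underlying stream is actually read.
    static class CountingStream extends ByteArrayInputStream {
        int reads = 0;

        CountingStream(byte[] buf) {
            super(buf);
        }

        @Override
        public synchronized int read(byte[] b, int off, int len) {
            reads++;
            return super.read(b, off, len);
        }
    }

    // Consume the data byte by byte through a buffer of the given size,
    // returning how many underlying reads that cost.
    public static int readsWithBuffer(byte[] data, int bufSize) throws IOException {
        CountingStream raw = new CountingStream(data);
        try (InputStream in = new BufferedInputStream(raw, bufSize)) {
            while (in.read() != -1) {
                // each call is a cheap in-memory read; the buffer refills in bulk
            }
        }
        return raw.reads;
    }
}
```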
I/O and startup optimization
- Use mmap or NIO for large files: MappedByteBuffer is the mmap buffer in Java NIO and performs well for frequent reads and writes of large files.
- Store startup files uncompressed: files needed during startup can be left uncompressed in the installation package, which speeds up startup at the cost of a larger package.
- Buffer reuse: We can take advantage of the Okio open source library, which has internal ByteString and Buffer reuse techniques to greatly reduce CPU and memory consumption.
- Optimize data structures and algorithms: reduce or even eliminate I/O through better algorithms and data structures; for example, instead of fully parsing a configuration file at startup, parse only the entries actually needed, and replace redundant, slow-to-parse formats such as XML and JSON with better-suited data structures;
References
- I/O optimization (part 1): essential I/O knowledge for developers
- Disk I/O stuff
- This section describes the file Cache management mechanism of the Linux kernel
- Vmstat monitors memory usage
- eMMC, UFS or NVMe? Parsing phone ROM storage and transfer protocols
- Talk about the Linux IO
- What are the challenges of designing storage devices with NAND Flash?
- Linux command — disk command dd
- I/O optimization: What are the different I/O usage scenarios?
- Introduction to the direct I/O mechanism in Linux
- WeChat cross-platform component Mars series (I): the high-performance log module xlog
- Okio
- How to monitor online I/O operations?
- Matrix