Main contents: Kotlin, Java, RxJava, Multithreading/concurrency, Collections
1. Java related
1.1 Cache Related
- The principle of LruCache
- The principle of DiskLruCache
LruCache implements an in-memory cache. LRU stands for Least Recently Used, and LruCache is built on LinkedHashMap. LinkedHashMap is a wrapper on top of HashMap: in addition to hashing, it threads every entry onto a doubly linked list. With access order enabled, each read moves the entry to the tail of the list, and when the cache exceeds its maximum capacity the entry at the head (the least recently used one) is evicted. The main pitfall when using LruCache is the unit of measurement: the API cannot know how large each stored item is, so it lets you override sizeOf() to report each entry's size. (See "Android memory caching framework LruCache source analysis".)
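The LinkedHashMap mechanism described above can be sketched in a few lines. This is a minimal illustration of the LRU principle, not Android's LruCache itself (which adds sizeOf() accounting and synchronization on top of the same idea); the class name is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// accessOrder=true moves each read entry to the tail of the internal linked
// list, and removeEldestEntry() evicts the head once capacity is exceeded.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public SimpleLruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder=true: reads reorder the linked list
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the least recently used entry
    }

    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // touch "a" so it becomes most recently used
        cache.put("c", 3);   // capacity exceeded: evicts "b", the LRU entry
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```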
DiskLruCache is similar to LruCache in that it implements a cache and is based on LinkedHashMap. The difference is that it caches on disk, whereas LruCache caches in memory; as a result it can store more data, but reads and writes more slowly. To use DiskLruCache, you need to download it from GitHub. Both OkHttp's and Glide's disk caches are built on DiskLruCache. Internally, DiskLruCache maintains a journal file recording read and write operations; everything else is basically ordinary disk IO.
- Glide cache implementation principle
1.2 List Related
- How do ArrayList and LinkedList differ?
- ArrayList is based on a dynamic array; the underlying implementation uses System.arraycopy() to grow the array. Lookup by index is O(1), while add and delete may trigger element copying (and possibly a resize), so they are more expensive than in LinkedList. If you can estimate the list's length in advance, specify the initial capacity when creating the instance to reduce the number of resizes. ArrayList suits cases where lookups outnumber insertions and deletions, such as the data container behind an Adapter.
- LinkedList is based on a doubly linked list; adding and deleting at a known node is O(1), while lookup is O(n). It suits cases with frequent insertions and deletions.
- Neither list is thread-safe. Vector is thread-safe, but it achieves this by locking every method, so its performance is poor.
If you need to use these list classes safely across threads, see the next question.
- How can I safely manipulate lists between threads?
There are several ways to operate on a list safely across threads; choose one based on your business logic. The usual options are:
- First, guard the list operations yourself with synchronized. Locking your own business methods keeps them thread-safe.
- Second, wrap the list with Collections.synchronizedList(). The wrapper synchronizes every list method on a private lock object, which reduces lock granularity compared with locking a larger shared object. Note that iterating over the wrapped list still requires manually synchronizing on the wrapper.
- Third, use the concurrent collections in the java.util.concurrent package. For read-heavy, write-light workloads, use CopyOnWriteArrayList instead of ArrayList: reads take no lock, while writes copy the backing array under a lock. Use ConcurrentLinkedQueue instead of LinkedList: it is based on the CAS idea, comparing-and-swapping node links when adding and deleting data.
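The second and third approaches above can be sketched side by side. This is a minimal illustration; the class name is made up:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Two threads append concurrently to a synchronizedList wrapper and to a
// CopyOnWriteArrayList; both end up with every element, unlike a bare ArrayList.
public class SafeListDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> syncList = Collections.synchronizedList(new ArrayList<>());
        List<Integer> cowList = new CopyOnWriteArrayList<>();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                syncList.add(i); // every method locks the wrapper's internal mutex
                cowList.add(i);  // each write copies the backing array under a lock
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(syncList.size()); // prints 2000
        System.out.println(cowList.size());  // prints 2000

        // Iterating a synchronizedList still requires manual locking
        synchronized (syncList) {
            for (int ignored : syncList) { /* safe iteration */ }
        }
    }
}
```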
1.3 Map Related
- The principle of SparseArray
SparseArray is primarily used to replace Java's HashMap, which boxes int keys into Integer objects (which is inefficient). SparseArray maps keys to values by maintaining two parallel arrays internally and uses binary search to locate a key, so its key array does not need to hold wrapper types. SparseArray is used when a HashMap's key would be an Integer; it internally maintains an int array to store the keys. Similarly, LongSparseArray, SparseBooleanArray, and the other variants save memory by avoiding boxing. However, because it uses binary search internally to find keys, it is not as fast as HashMap, so consider using HashMap when you have a large number of key-value pairs to store.
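The two-parallel-arrays idea can be sketched roughly as follows. This is an illustrative mini-implementation, not the Android source; growth is included but deletion is omitted for brevity:

```java
import java.util.Arrays;

// Sorted int keys plus a parallel Object[] of values; binary search locates a
// key in O(log n) with no Integer boxing anywhere on the lookup path.
public class MiniSparseArray<E> {
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;

    public void put(int key, E value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {            // key exists: overwrite the value
            values[i] = value;
            return;
        }
        i = ~i;                  // recover the insertion point from binarySearch
        if (size == keys.length) {
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        // shift the tail right to keep the key array sorted
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    @SuppressWarnings("unchecked")
    public E get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? (E) values[i] : null; // O(log n) lookup, no boxing
    }

    public static void main(String[] args) {
        MiniSparseArray<String> map = new MiniSparseArray<>();
        map.put(30, "c");
        map.put(10, "a");
        map.put(20, "b");
        System.out.println(map.get(20)); // prints b
        System.out.println(map.get(99)); // prints null
    }
}
```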
- HashMap, ConcurrentHashMap, and Hashtable
- How does a HashMap put data? (Understand the logic of putting an element)
HashMap (HM) and ConcurrentHashMap (CHM) are both hash tables; the difference is that HM is not thread-safe while CHM is thread-safe with optimized, fine-grained locking. The thread-safe counterpart of HM is Hashtable (HT), which locks every internal method and is therefore inefficient.
How HashMap works: HashMap resolves hash conflicts with separate chaining, so two elements land in the same bucket when their hashes map to the same index. When a bucket accumulates a lot of data, the HashMap either resizes the table or converts that bucket's data structure from a linked list to a red-black tree, so a few constants determine HashMap's behavior. By default, the table is resized when the number of stored entries exceeds three-quarters of its capacity (the default load factor is 0.75). When a bucket's list reaches 8 elements and the table has at least 64 buckets, the bucket's structure is converted from a linked list to a red-black tree; if the table has not yet reached 64 buckets, the HashMap is resized instead of treeified.
In terms of data structure, the elements in a HashMap bucket can be converted from a linked list to a red-black tree while the linked-list links are preserved: TreeNode in HashMap inherits from LinkedHashMap.Entry, so each node participates in both structures at once.
HashMap is implemented with a number of performance optimizations, such as computing an element's index in the array with (n - 1) & hash instead of the modulo operator (valid because the table length n is always a power of two), and increasing the randomness of the hash by XOR-ing its high 16 bits into its low 16 bits.
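These two optimizations can be demonstrated directly. spread() and indexFor() below mirror HashMap's internal hash() and index computation; the method names themselves are illustrative:

```java
public class HashMapIndexDemo {
    // Mirrors HashMap.hash(): XOR the high 16 bits into the low 16 bits
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // Mirrors the index computation: because the table length n is a power of
    // two, (n - 1) is an all-ones bit mask and & is equivalent to mod n
    static int indexFor(int hash, int n) {
        return (n - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16; // default HashMap capacity
        int h = spread("foo".hashCode());
        // The mask trick agrees with floor-mod even for negative hashes
        System.out.println(indexFor(h, n) == ((h % n + n) % n)); // prints true
        System.out.println(indexFor(-1, n));                     // prints 15
    }
}
```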
Because each bucket's elements can use either of two data structures, HashMap's add and delete operations are handled in two cases depending on the node type. When the structure is a linked list, handling is straightforward: a loop traverses the list. When the structure is a red-black tree, handling is more involved; searching the tree follows ordinary binary search tree logic.
Below is the insert logic of the HashMap, where all insert operations are eventually called to the internal putVal() method.
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0) {
        // The table does not exist yet; create it
        n = (tab = resize()).length;
    }
    i = (n - 1) & hash; // Mask the hash with (n - 1) to get the bucket index
    p = tab[i]; // Find the bucket
    if (p == null) {
        // The bucket is empty; create a new node
        tab[i] = newNode(hash, key, value, null);
    } else {
        // The bucket already has entries
        Node<K,V> e; K k;
        if (p.hash == hash && // The hash code is the same
            ((k = p.key) == key || (key != null && key.equals(k)))) { // The key is equal
            // The first node matches the key of the pair we are inserting
            e = p;
        } else if (p instanceof TreeNode) {
            // The bucket's data structure is a red-black tree; delegate the insert to it
            e = ((TreeNode<K,V>) p).putTreeVal(this, tab, hash, key, value);
        } else {
            // The bucket's data structure is a linked list
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    // Reached the end of the list without a match; append a new node
                    p.next = newNode(hash, key, value, null);
                    // If the list length reaches 8 after the insert, convert it to a red-black tree
                    if (binCount >= TREEIFY_THRESHOLD - 1) {
                        treeifyBin(tab, hash);
                    }
                    break;
                }
                if (e.hash == hash && // The hash code is the same
                    ((k = e.key) == key || (key != null && key.equals(k)))) { // The key is equal
                    // The key to be inserted already exists; fall through to update its value
                    break;
                }
                p = e;
            }
        }
        if (e != null) {
            // The specified key exists; update the node's value
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null) {
                e.value = value;
            }
            return oldValue;
        }
    }
    ++modCount;
    // If a new node was inserted and the size exceeds threshold, resize the table
    if (++size > threshold) {
        resize();
    }
    return null;
}
HashMap's insert splits into a red-black-tree case and a linked-list case, depending on the type of the bucket's head node. For linked lists, the insertion logic is given above: in addition to the basic insert, the bucket's data structure is converted from a linked list to a red-black tree when the list length reaches 8. For the red-black-tree case, it calls TreeNode's putTreeVal() to insert the node into the tree. (The code is below; take your time with it.)
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab, int h, K k, V v) {
    Class<?> kc = null;
    boolean searched = false;
    // Find the root node; the head node of the bucket is not necessarily the root of the red-black tree
    TreeNode<K,V> root = (parent != null) ? root() : this;
    for (TreeNode<K,V> p = root;;) { // Start traversal from the root
        int dir, ph; K pk;
        if ((ph = p.hash) > h)
            // The incoming hash is less than p's hash: search p's left subtree
            dir = -1;
        else if (ph < h)
            // The incoming hash is greater than p's hash: search p's right subtree
            dir = 1;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            // The incoming hash and key both match p: p is the target node, return it
            return p;
        // The hashes tie but the keys differ, and k's class does not implement
        // Comparable (or the comparison also ties)
        else if ((kc == null &&
                  (kc = comparableClassFor(k)) == null) ||
                 (dir = compareComparables(kc, k, pk)) == 0) {
            if (!searched) { // This block runs only on the first tie
                TreeNode<K,V> q, ch;
                searched = true;
                // Call find() on both subtrees of p; return the target node if found
                if (((ch = p.left) != null &&
                     (q = ch.find(h, k, kc)) != null) ||
                    ((ch = p.right) != null &&
                     (q = ch.find(h, k, kc)) != null))
                    return q;
            }
            // Use a fixed tie-breaking rule on the keys of k and p to decide
            // whether to go left or right
            dir = tieBreakOrder(k, pk);
        }
        TreeNode<K,V> xp = p; // xp holds x's future parent node
        // dir <= 0 goes left, otherwise right; null means this is x's insertion point
        if ((p = (dir <= 0) ? p.left : p.right) == null) {
            Node<K,V> xpn = xp.next; // xp's next node in the list view
            // Create the new node x with xpn as its successor, splicing x between xp and xpn
            TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
            if (dir <= 0) // x becomes xp's left child
                xp.left = x;
            else // x becomes xp's right child
                xp.right = x;
            xp.next = x; // Link x after xp in the list view
            x.parent = x.prev = xp; // x's parent and prev both point to xp
            // If xpn is not null, point its prev back at x, matching x.next = xpn above
            if (xpn != null)
                ((TreeNode<K,V>) xpn).prev = x;
            // Rebalance the red-black tree after insertion and move the root to the bucket head
            moveRootToFront(tab, balanceInsertion(root, x));
            return null;
        }
    }
}
- How does a Set use hashing and handle collisions?
A HashSet is internally backed by a HashMap, which resolves hash collisions with separate chaining: colliding elements are placed in a linked list, and when a list grows beyond 8 elements while the table has at least 64 buckets, that bucket's structure is converted from a linked list to a red-black tree. When computing each node's index in the array, HashMap XORs the high 16 bits of the object's hash code into the low 16 bits to increase the randomness of the hash.
- How do HashSet and HashMap decide that two elements are identical?
A HashSet uses a HashMap internally. When a key-value pair is added to the hash table via put(), both the hash value and the key are compared; only when both are equal is the element considered a duplicate.
- How is HashMap implemented? How does it differ from HashSet?
A HashSet does not allow duplicate elements and is implemented internally with a HashMap: when we add an element to a HashSet, it is inserted into the internal HashMap as a key, with a shared default object as the value. (See the implementation of HashMap above.)
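This delegation can be sketched as follows. MiniHashSet is an illustrative name; the JDK's HashSet likewise uses a single shared dummy object (called PRESENT) as the value for every key:

```java
import java.util.HashMap;

// Every added element becomes a HashMap key; one shared dummy object serves
// as the value, so uniqueness falls out of HashMap's key semantics.
public class MiniHashSet<E> {
    private static final Object PRESENT = new Object(); // shared dummy value
    private final HashMap<E, Object> map = new HashMap<>();

    // Returns true if the element was not already present, mirroring
    // HashMap.put() returning null for a brand-new key
    public boolean add(E e) { return map.put(e, PRESENT) == null; }
    public boolean contains(E e) { return map.containsKey(e); }
    public int size() { return map.size(); }

    public static void main(String[] args) {
        MiniHashSet<String> set = new MiniHashSet<>();
        System.out.println(set.add("a")); // prints true (newly added)
        System.out.println(set.add("a")); // prints false (duplicate)
        System.out.println(set.size());   // prints 1
    }
}
```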
- TreeMap concrete implementation
TreeMap is implemented based on a red-black tree, which keeps its entries sorted by key. (To be refined further.)
1.4 Annotation Related
- Understanding of Java annotations
Java annotations are commonly used in Android in three ways:
- The first is reflection-based. Because reflection has performance costs, this approach is usually reserved for simple tasks, such as attaching extra information to classes, fields, and methods and reading it back through reflection at runtime.
- The second is based on an AnnotationProcessor: boilerplate code is generated at compile time, and the generated code is then wired up with a small amount of reflection. ButterKnife, for example, finds the annotated views at compile time and generates the binding code for them; calling bind() then reflectively locates and invokes the generated class. Room likewise generates database methods for annotated methods at compile time. When developing such third-party libraries, JavaPoet can help generate the Java files.
- A final, more common approach is to use annotations in place of enums. Enums carry extra memory overhead compared with constants, so constants are often used instead in Android development. Plain constants, however, cannot restrict the range of values passed in, so annotations are used to constrain them. For integers, define an annotation with @IntDef({/* allowed values */}) to specify the legal range, then annotate the method's parameters with it; the IDE will then warn when a value outside the specified range is used. (Java Annotations and Their Use in Android)
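A minimal sketch of the first, reflection-based approach: a hypothetical @Default annotation attaches extra information to fields, which is then read back through reflection at runtime (all names here are illustrative, not from any library):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME) // keep the annotation available at runtime
    @Target(ElementType.FIELD)
    @interface Default {
        String value();
    }

    static class Config {
        @Default("8080") String port;
        @Default("localhost") String host;
    }

    public static void main(String[] args) throws Exception {
        Config config = new Config();
        // Fill each annotated field with its declared default value
        for (Field f : Config.class.getDeclaredFields()) {
            Default d = f.getAnnotation(Default.class);
            if (d != null) {
                f.setAccessible(true);
                f.set(config, d.value());
            }
        }
        System.out.println(config.host + ":" + config.port); // prints localhost:8080
    }
}
```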
Related libraries: ButterKnife, ARouter
1.5 Object Related
- How should Object's equals() and hashCode() methods be overridden?
Both methods determine an object's identity, so their behavior must be consistent, and overriding them must follow certain rules. From a business point of view, you can use a unique characteristic of the object, such as an ID, or combine all of its fields into an integer hash value. In general, I don't override these methods unless the business identity is obvious, because once overridden, the change applies everywhere the object is compared. IDEA's Generate action can also produce both methods for you.
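For example, a sketch of an ID-based equals()/hashCode() pair following the rules above (the User class and its fields are hypothetical):

```java
import java.util.Objects;

// Identity is defined by the unique business field id; equals() and
// hashCode() must agree so hashed collections behave correctly.
public class User {
    private final long id;
    private final String name;

    public User(long id, String name) { this.id = id; this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        return id == ((User) o).id; // identity is defined by id alone
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals(): equal objects get equal hash codes
        return Objects.hash(id);
    }

    public static void main(String[] args) {
        User a = new User(1, "Alice");
        User b = new User(1, "Alicia"); // same id, different name
        System.out.println(a.equals(b));                  // prints true
        System.out.println(a.hashCode() == b.hashCode()); // prints true
    }
}
```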
- What methods does Object have?
- wait() & notify()/notifyAll(): used for thread coordination; the current thread waits until another thread calls notify()/notifyAll(). wait() may only be called while holding the object's lock (monitor); after calling it, the current thread releases the lock, yields the CPU, and enters the waiting state. notify()/notifyAll() only wakes up waiting threads; it does not release the lock immediately, which instead depends on when the synchronized block finishes.
- clone(): related to object cloning (deep copy vs. shallow copy).
- finalize(): called before the object is reclaimed (deprecated in newer JDKs).
- toString()
- equals() & hashCode(): as discussed above.
1.6 String Related
- What’s the difference between StringBuffer and StringBuilder?
StringBuffer is thread-safe, with every method locked using the synchronized keyword; StringBuilder is not thread-safe. StringBuilder is usually the right choice, because locking outside a multithreaded context is unnecessary overhead.
- Understanding strings in Java
- String is not a basic data type.
- Strings are immutable, and the JVM uses String pools to store all String objects.
- Strings can be created with new. String objects created this way are not stored in the string pool. Calling intern() stores the string in the pool, or returns a reference to an existing pool string if one with the same value is already there. When a string is created directly with double quotes, the JVM first searches the string pool for an equal value and returns a reference to it if found; otherwise it creates a new string object and stores it in the pool.
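The pooling behavior above can be observed directly:

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "hello";           // literal goes into the string pool
        String built = new String("hello"); // a new heap object, not the pool reference

        System.out.println(literal == built);          // prints false (different objects)
        System.out.println(literal.equals(built));     // prints true (same value)
        System.out.println(literal == built.intern()); // prints true (pool reference returned)
    }
}
```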
- Why is String designed to be immutable?
- Thread-safe: Since String is an immutable class, it is safe to use in multithreading and we do not need to do any other synchronization.
- Security: because a String's value cannot be changed after creation, it is safer for storing sensitive data such as passwords.
- Reuse/save heap space: String is actually one of the most frequently used classes in Java development, and as you can see from the heap dump, it often takes up a lot of heap memory. Because Java strings are immutable, you can save a lot of Java heap space at Java runtime. Different string variables can refer to the same string in the pool. If strings were mutable, any change in the value of one variable would be reflected to other variables, and the string pool would be meaningless.
- What are the common character encodings? UTF-8, Unicode, ASCII
- How many bytes are Chinese in UTF-8 encoding?
UTF-8 encodes a Unicode character into 1 to 4 bytes, depending on its code point (the original design allowed up to 6 bytes, but the current standard caps it at 4). Common English letters are encoded as 1 byte, Chinese characters are usually 3 bytes, and only rarer characters such as emoji take 4 bytes.
See article for the origin of string encoding: character encoding ASCII UNICODE UTF-8
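The byte counts above can be verified in a few lines:

```java
import java.nio.charset.StandardCharsets;

// UTF-8 byte lengths for an ASCII letter, a Chinese character, and an emoji.
public class Utf8Demo {
    public static void main(String[] args) {
        System.out.println("a".getBytes(StandardCharsets.UTF_8).length);  // prints 1
        System.out.println("中".getBytes(StandardCharsets.UTF_8).length); // prints 3
        System.out.println("😀".getBytes(StandardCharsets.UTF_8).length); // prints 4
    }
}
```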
1.7 Thread Control
- There are three ways to start a thread; how do the run() and start() methods differ?
- Multithreading: how to use it and what to watch out for; does Android cap the number of threads, and what limits apply to thread pools?
- Java thread pool, the meaning of several core parameters of the thread pool
- How can threads be shut down, and how can memory leaks be prevented
How to start a thread, thread pool parameters; Issues to watch: number of threads, memory leaks
// Method 1: subclass Thread and override run()
private class MyThread extends Thread {
    @Override
    public void run() {
        // Business logic
    }
}

// Method 2: Thread + Runnable
new Thread(new Runnable() {
    public void run() {
        // Business logic
    }
}).start();

// Method 3: ExecutorService + Callable
ExecutorService executor = Executors.newFixedThreadPool(5);
List<Future<Integer>> results = new ArrayList<Future<Integer>>();
for (int i = 0; i < 5; i++) {
    results.add(executor.submit(new CallableTask(i, i)));
}
Number of threads:
Android does not specify a hard limit on the number of threads that can be created, but each process's resources are limited and every thread consumes some of them, so memory size effectively caps the thread count. In practice we avoid creating too many threads when using threads or thread pools. A common rule of thumb for thread pool size (where N is the number of CPU cores):
- For CPU-intensive workloads (mostly computation), set the pool size to N + 1.
- For IO-intensive workloads (mostly reading and writing, common on Android), set the pool size to 2N + 1.
Below is the code that creates the thread pool in Android's AsyncTask (the core thread pool parameters are described in the comments).
// Number of CPU cores
private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();
// Core pool size: threads are created lazily as tasks are submitted; while the
// current thread count is below this value, a new thread is created per task
private static final int CORE_POOL_SIZE = Math.max(2, Math.min(CPU_COUNT - 1, 4));
// Maximum pool size: when the task queue is full and the current thread count
// is below this value, a new (non-core) thread is created to run the task
private static final int MAXIMUM_POOL_SIZE = CPU_COUNT * 2 + 1;
// Idle timeout for non-core threads: after this long idle, a thread is
// reclaimed. With many short tasks, a higher value should be set
private static final int KEEP_ALIVE_SECONDS = 30;
// Thread factory: customizes how threads are created, e.g. to give them names
private static final ThreadFactory sThreadFactory = new ThreadFactory() {
    private final AtomicInteger mCount = new AtomicInteger(1);
    public Thread newThread(Runnable r) {
        return new Thread(r, "AsyncTask #" + mCount.getAndIncrement());
    }
};
// Task queue: when the current thread count exceeds the core pool size,
// new tasks are added to this queue
private static final BlockingQueue<Runnable> sPoolWorkQueue =
        new LinkedBlockingQueue<Runnable>(128);

public static final Executor THREAD_POOL_EXECUTOR;

static {
    ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
            /*corePoolSize=*/ CORE_POOL_SIZE,
            /*maximumPoolSize=*/ MAXIMUM_POOL_SIZE,
            /*keepAliveTime=*/ KEEP_ALIVE_SECONDS, TimeUnit.SECONDS,
            /*workQueue=*/ sPoolWorkQueue,
            /*threadFactory=*/ sThreadFactory);
            // A seventh parameter, the saturation policy (RejectedExecutionHandler),
            // exists on ThreadPoolExecutor, but AsyncTask does not pass one
    threadPoolExecutor.allowCoreThreadTimeOut(true);
    THREAD_POOL_EXECUTOR = threadPoolExecutor;
}
Saturation policy: the logic executed when both the task queue and the thread pool are full. Java provides four built-in implementations (AbortPolicy, CallerRunsPolicy, DiscardPolicy, DiscardOldestPolicy). Other notes:
- Calling prestartAllCoreThreads() on a pool starts it early, creating all core threads so they are already waiting for tasks.
- Calling allowCoreThreadTimeOut(true) lets idle core threads be reclaimed after the keep-alive timeout expires.
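A minimal sketch of a saturation policy in action, using CallerRunsPolicy (one of the four built-in handlers): with a one-thread pool and a one-slot queue, overflow tasks run on the submitting thread instead of being rejected. The class name is illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SaturationDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                                  // one core thread, no extras
                0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1),   // queue holds a single task
                new ThreadPoolExecutor.CallerRunsPolicy()); // saturation policy

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            // When the pool and queue are full, the submitting thread runs the task
            pool.execute(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(completed.get()); // prints 4: no task was dropped
    }
}
```

With AbortPolicy (the default) the overflow submissions would instead throw RejectedExecutionException.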
The difference between run() and start(): start() calls the native start0() method, which creates a new thread that then calls back into run(), so run() executes asynchronously on the new thread. If run() is called directly, no new thread is created: the default implementation (below, unless overridden) simply runs in the current thread, as if Thread were an ordinary class.

private Runnable target;

public void run() {
    if (target != null) target.run();
}
There are two ways to close a thread. The first uses the interrupt flag: when a thread needs to stop, simply call its interrupt() method. The caveat is that if interrupt() is called while the thread is blocked, an InterruptedException is thrown and the interrupt flag is reset, so checking the flag alone will not exit the thread; we must handle both the normal case and the case where the thread is interrupted while blocking.
The second option is to use a volatile boolean variable to decide whether the thread should terminate.
Method 1: use the interrupt flag

private static class MyThread1 extends Thread {
    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                // do something, including blocking calls such as sleep()
                Thread.sleep(100);
            }
        } catch (InterruptedException ie) {
            // Interrupted while blocking: the exception ends the loop here
        }
    }
}

Method 2: use a volatile flag

private static class MyRunnable2 implements Runnable {
    // Note the volatile modifier, which makes the write visible to the worker thread
    private volatile boolean canceled = false;

    @Override
    public void run() {
        while (!canceled) {
            // do something
        }
    }

    public void cancel() {
        canceled = true;
    }
}
Preventing thread-related memory leaks:
- When threads are used inside an Activity, define them as static inner classes; non-static inner classes hold an implicit reference to the outer class.
- When the thread needs to call methods on the Activity, hold the Activity through a WeakReference.
- Alternatively, terminate the thread in the Activity's onDestroy() method when the Activity is finishing.
- wait(), notify(), and sleep()
wait()/notify():
1. wait(), notify(), and notifyAll() are final native methods of Object and cannot be overridden.
2. wait() blocks the current thread until it is notified or interrupted. The precondition is that the lock (monitor) must already be held, so these methods are generally used with the synchronized keyword, inside a synchronized block. Calling wait() or notify() without holding the lock throws IllegalMonitorStateException. wait() returns only after the current thread has re-acquired the lock.
3. Since wait(), notify(), and notifyAll() execute inside a synchronized block, the current thread must hold the lock. When a thread executes wait(), it releases the lock and the CPU and enters the waiting state. When notify()/notifyAll() executes, one or more waiting threads are woken, but they resume only after the notifying thread finishes its synchronized block (or itself calls wait()) and releases the lock. In other words, notify()/notifyAll() only wakes sleeping threads; it does not release the lock immediately, which instead depends on the execution of the block. Therefore, in programming, try to exit the critical section promptly after using notify()/notifyAll() to wake other threads.
4. wait() must be wrapped in a try/catch, because an interrupt can wake the waiting thread.
5. notify() and wait() must happen in the correct order: if thread A executes notify() first and thread B executes wait() afterwards, B will never be woken.
6. The difference between notify() and notifyAll(): notify() wakes only one waiting thread and lets it start executing; if multiple threads are waiting on an object, which one is woken depends on the operating system's thread scheduling. notifyAll() wakes all waiting threads, although which runs first again depends on the OS implementation. The notifyAll() method is recommended when multiple threads may need to be woken; in producer-consumer scenarios, for example, all consumers or producers are woken each time so that each can re-check whether it can proceed.
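The wait()/notifyAll() rules above can be put together into a classic bounded-buffer sketch (the class and method names are illustrative): both sides re-check their condition in a while loop after being woken.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A producer-consumer bounded buffer built on synchronized + wait()/notifyAll().
public class BoundedBuffer {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final int capacity = 2;

    public synchronized void put(int value) throws InterruptedException {
        while (buffer.size() == capacity) {
            wait(); // buffer full: release the lock and wait for a consumer
        }
        buffer.addLast(value);
        notifyAll(); // wake all waiters; each re-checks its own condition
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait(); // buffer empty: release the lock and wait for a producer
        }
        int value = buffer.removeFirst();
        notifyAll();
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer b = new BoundedBuffer();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) b.put(i);
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += b.take();
        producer.join();
        System.out.println(sum); // prints 10 (0+1+2+3+4)
    }
}
```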
The differences between sleep() and wait() can be summarized as follows:
- Different classes: sleep() is a static method of Thread, while wait() is an instance method of Object.
- Different scope: wait() must be called inside a synchronized method or block, i.e. the object lock must already be held, while sleep() can be used anywhere without this restriction.
- Different lock behavior: wait() releases the held object lock and moves the thread into the wait set until it can acquire the lock again; sleep() only gives up the CPU and does not release any lock.
- Different wake-up conditions: sleep() resumes once the sleep time elapses and the thread gets a CPU slice again, while wait() must first be woken by Object.notify()/Object.notifyAll(), leave the wait set, and re-acquire the lock and a CPU slice before it continues.
- Thread states
- NEW: the thread object has been created.
- RUNNABLE: after the thread object is created, another thread (such as the main thread) calls its start() method. Threads in this state sit in the runnable pool, waiting to be selected by the thread scheduler and given CPU time.
- RUNNING: a RUNNABLE thread has received a time slice and is executing program code.
- BLOCKED: the thread has given up its CPU time slice for some reason and temporarily stops running; it cannot be scheduled again until it returns to the RUNNABLE state. There are three kinds of blocking:
- Waiting: a RUNNING thread executes o.wait(), and the JVM places it in the wait set.
- Synchronization blocking: a RUNNING thread tries to acquire an object's synchronization lock while the lock is held by another thread, and the JVM places the thread in the lock pool.
- Other blocking: a RUNNING thread executes Thread.sleep(long) or t.join(), or issues an I/O request, and the JVM puts the thread into the blocked state. When sleep() times out, join() terminates or times out, or the I/O completes, the thread returns to the RUNNABLE state.
- DEAD: the thread's run() or main() method finishes execution, or exits due to an uncaught exception; the thread's life cycle ends. Dead threads cannot be restarted.
- Deadlocks, four conditions for thread deadlocks?
- The concept of deadlocks, how do you avoid deadlocks?
A deadlock occurs when two threads hold resources needed by each other, but are unable to release their own resources. The following four conditions must be met for a deadlock to occur,
- Mutually exclusive: Only one process can access a resource at a time. That is, once the resource is allocated to a process, other processes cannot access the resource until the process finishes accessing it. (One chopstick can only be held by one person)
- Possession and wait: a process has a resource (one or more) and is waiting for another process to release it. (Each person takes a chopstick and waits for others to give up chopsticks)
- Don’t preempt: Someone else already has a resource. You can’t take it from someone else just because you need it. (You can’t take chopsticks from others.)
- Circular waiting: There is a chain of processes such that each process occupies at least one resource required by the next process. (Everyone is waiting for the next person to give up their chopsticks)
All four conditions are required for a deadlock to occur; as long as at least one of them is broken, deadlock cannot happen. Mutual exclusion is inherent to non-shareable resources and cannot be changed, so prevention focuses on breaking the other three conditions.
Breaking hold-and-wait: allow a process to acquire only the resources it needs initially, start running, gradually release the allocated resources as it runs, and only then request new ones.
Breaking non-preemption: when a process that already holds resources makes a new request that cannot be satisfied, it must release everything it holds and re-apply later when it needs them. Releasing held resources may discard work already done, and repeated release and re-acquisition can delay a process indefinitely, lengthening its turnaround time and reducing system throughput.
Breaking circular wait: impose a linear order on resource types. Number every resource; a process occupying resource i may only request resources numbered greater than i.
“Four Prerequisites for Deadlocks and solutions”
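Breaking circular wait through lock ordering can be sketched as follows (the names are illustrative): every thread acquires lockA before lockB, so a cycle of waits can never form.

```java
public class LockOrderingDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    static int counter = 0;

    static void transfer() {
        synchronized (lockA) {      // always acquire lockA first...
            synchronized (lockB) {  // ...then lockB, in every thread
                counter++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // prints 2000: no deadlock, no lost updates
    }
}
```

If one thread instead took lockB before lockA, the circular-wait condition would hold and the two threads could block each other forever.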
- The difference between synchronized and Lock
- The implementation principle of Lock
- The realization principle of synchronized
- An internal implementation of ReentrantLock
- Introduce CAS (compare-and-swap)
- How to implement thread synchronization? synchronized, Lock, volatile, concurrent collections, synchronized collections
synchronized principle (bytecode level):
Synchronization in the Java VIRTUAL machine is implemented based on incoming and outgoing Monitor objects, either explicitly (with explicit Monitorenter and Monitorexit directives, which are synchronized code blocks) or implicitly. When the monitor enters the Monitorenter, the count in the monitor increases by 1, releasing the currently held monitor, and the count decreases by 1. It is common to see two Monitorexit directives for a Monitorenter after decomcompiling code, which is used to prevent exceptions during program execution. The virtual machine needs to ensure that the lock can be released even if an exception occurs midway through the program’s permit (execute second Monitorexit).
For synchronized methods, the JVM can distinguish whether a method is synchronized from the ACC_SYNCHRONIZED access flag in the method_info Structure in the method constant pool. When a method is invoked, the calling instruction checks to see if the ACC_SYNCHRONIZED access flag of the method is set. If so, the thread of execution holds Monitor, then executes the method, and finally releases Monitor when the method completes (either normally or abnormally). During method execution, no other thread can obtain the same monitor. If an exception is thrown during the execution of a synchronous method and cannot be handled within the method, the monitor held by the synchronous method is automatically released when the exception is thrown outside the synchronous method.
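The two forms can be seen in a minimal class. This is only a sketch: compiling it and running `javap -c` on the result shows explicit monitorenter/monitorexit instructions for the block form, while the method form relies on the ACC_SYNCHRONIZED flag instead:

```java
public class SyncForms {
    private int count = 0;

    // Synchronized method: no monitorenter in the bytecode; the JVM sees
    // the ACC_SYNCHRONIZED flag and acquires the receiver's monitor implicitly.
    public synchronized void incMethod() {
        count++;
    }

    // Synchronized block: compiles to explicit monitorenter/monitorexit,
    // plus a second monitorexit on the exception-handler path.
    public void incBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int get() {
        synchronized (this) {
            return count;
        }
    }
}
```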
Synchronized principle (underlying):
In the object header of a Java object there is an area called the Mark Word which, in the heavyweight-lock state of synchronized, stores a pointer to a Monitor object. Every object has a Monitor associated with it. The monitor's data structure defines two queues, _WaitSet and _EntryList. When multiple threads access a piece of synchronized code at the same time, they first enter the _EntryList set. When a thread obtains the object's monitor, it enters the _Owner area, the owner variable in the monitor is set to the current thread, and the monitor's count increases by 1. If the thread calls wait(), it releases the monitor it currently holds: the owner variable is reset to null, count decreases by 1, and the thread enters the _WaitSet set to wait to be woken up. If the current thread finishes its work, it likewise releases the monitor (lock) and resets the variables so that another thread can enter and acquire it.
Seen this way, the object header of every Java object holds (in the form of a stored pointer) an associated Monitor object, and synchronized acquires its lock through that monitor, which is why any object in Java can be used as a lock. It is also why the notify()/notifyAll()/wait() methods live on the top-level Object class.
The Mark Word structure also reveals Java's synchronized optimizations: after Java 6, lightweight locks and biased locks were introduced to reduce the performance cost of acquiring and releasing locks.
(For the underlying implementation of synchronized, see the author's article: Concurrent Programming Topic 3: Synchronized.)
The differences between synchronized and Lock show up in the following aspects:
- Interruptible waiting: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting (Lock provides lockInterruptibly() for this; a thread blocked on a synchronized monitor cannot be interrupted).
- Fair locking: with a fair lock, when multiple threads wait on the same lock, they acquire it in the order in which they requested it. With an unfair lock, any waiting thread may grab the lock when it is released, regardless of when it started waiting. synchronized supports only unfair locking, while the Lock constructor lets you specify a fair or unfair lock.
- One lock can be bound to multiple conditions: a ReentrantLock can create several Condition objects simply by calling newCondition() multiple times, whereas with synchronized you would have to introduce an extra lock object for each additional condition.
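These differences are visible directly in the API. A minimal sketch (the condition names are illustrative, in the style of a bounded buffer):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeatures {
    // true -> fair lock: waiting threads acquire it in request order.
    static final ReentrantLock lock = new ReentrantLock(true);

    // One lock, several wait queues -- impossible with a single
    // synchronized monitor, which has only one wait set.
    static final Condition notFull  = lock.newCondition();
    static final Condition notEmpty = lock.newCondition();

    static boolean tryWork() throws InterruptedException {
        // lockInterruptibly() is the "interruptible wait": a thread
        // blocked here can give up when it is interrupted.
        lock.lockInterruptibly();
        try {
            return lock.isFair();
        } finally {
            lock.unlock();   // unlike synchronized, release is manual
        }
    }
}
```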
ReentrantLock is implemented as follows:
The implementation of ReentrantLock is based on AQS (AbstractQueuedSynchronizer), which is designed around the idea of CAS. The synchronizer maintains a linked queue and uses CAS to add and remove nodes. The underlying CAS operations use methods from the sun.misc.Unsafe class. ReentrantLock contains two subclasses of AQS, NonfairSync and FairSync, which are the key to implementing fair and unfair locks; when we obtain a ReentrantLock instance through the constructor, a boolean parameter specifies which one is used. In the implementation, the only difference between NonfairSync and FairSync is that, before the current thread acquires the lock, the fair version checks the queue for a thread that requested the lock earlier than it did; if such a thread exists, the current thread fails to acquire the lock. When the current thread acquires the lock, it also uses a CAS operation to increase the acquisition count by 1. When a thread requests the lock again, the lock checks the requesting thread: if the thread currently holding the lock is the one applying for it, it is allowed to acquire it again, which is how the lock's reentrancy is realized.
CAS stands for compare-and-swap, which is similar to optimistic locking. Unlike the familiar optimistic lock, it involves three values: the "new value", the "old value" and the "in-memory value". The implementation is an endless loop: on each pass the "old value" is compared with the "in-memory value". If the two are the same, the "in-memory value" has not been modified by another thread; otherwise it has, and the value in memory must be re-read as the new "old value" and compared again. Only when the old value matches the in-memory value is the new value written into memory.
Note that the CAS operation above consists of three steps, but those steps must happen as one atomic action; otherwise, after the "in-memory value" is found equal to the "old value", another thread might modify memory before the "new value" is written, producing an incorrect result. In the JDK, native methods such as Unsafe.compareAndSwapInt provide this atomicity. Also note that the CAS operation above has some problems:
- The typical ABA problem: a value in memory is changed by one thread and then changed back, so the current thread sees the value it expected even though it was actually modified by another thread in between. To solve the ABA problem you can fall back to a traditional mutual-exclusion strategy, or attach a version stamp to the value (as AtomicStampedReference does).
- Another problem with CAS is spinning for too long. Because CAS is non-blocking synchronization, the thread is not suspended; it spins (in effect, loops) to retry, and spinning for a long time is costly in performance.
As the description above shows, CAS can only guarantee the atomicity of a single shared variable; it cannot do so when several variables are involved. One solution is to package multiple shared variables into one, that is, define them together in an object and use CAS to guarantee the atomicity of the whole, as AtomicReference allows.
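The loop described above is what the Atomic* classes do internally; a hand-rolled sketch of it using `compareAndSet` (which wraps the native compareAndSwapInt) looks like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasAdd {
    static final AtomicInteger counter = new AtomicInteger(0);

    // Classic CAS retry loop: read the "old value", compute the "new
    // value", and write it only if the "in-memory value" is still the
    // old one; otherwise re-read and try again.
    static int addAndGetCas(int delta) {
        while (true) {
            int oldValue = counter.get();
            int newValue = oldValue + delta;
            if (counter.compareAndSet(oldValue, newValue)) {
                return newValue;
            }
            // another thread won the race: spin and retry
        }
    }

    public static void main(String[] args) {
        System.out.println(addAndGetCas(5));
    }
}
```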
- The principle and usage of volatile
The volatile keyword has two functions:
- Ensuring variable visibility: when a variable modified by the volatile keyword is changed by one thread, other threads immediately see the result of the change. When a volatile variable is written, the JMM flushes the shared variable's value from the thread's working memory to main memory. When a volatile variable is read, the JMM invalidates the thread's working-memory copy, so the thread must re-read the shared variable from main memory.
- Preventing instruction reordering: instruction reordering is a technique compilers and processors use to optimize programs. It only guarantees that the result of execution is correct; it does not guarantee that operations run in the same order as the code. That is not a problem in a single thread, but it can be in multiple threads. The classic example is marking the instance field volatile in a double-checked-locking singleton to prevent reordering.
Volatile implements its semantics in the JMM through memory barriers. A memory barrier, also called a memory fence, is a CPU instruction that enforces the order of certain operations and the memory visibility of certain variables. Inserting a barrier between instructions tells the compiler and CPU that no instruction may be reordered across it. Another effect of a memory barrier is to force the CPUs' caches to be flushed, so that a thread on any CPU reads the latest version of the data.
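A minimal sketch of the visibility guarantee: without `volatile` on the flag, the reader thread may cache the value and spin forever (the exact behavior is JIT-dependent); with it, the write is flushed to main memory and the reader re-reads it.

```java
public class VolatileFlag {
    // volatile guarantees the writer's update becomes visible to the
    // reader thread; it does NOT make compound actions like i++ atomic.
    private static volatile boolean stop = false;

    // Returns true once the worker thread has observed the volatile write.
    static boolean runDemo() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait: without volatile this read could be hoisted
                // out of the loop and never see the update
            }
        });
        worker.start();
        Thread.sleep(50);     // let the worker enter the loop
        stop = true;          // volatile write: flushed to main memory
        worker.join(2000);    // returns promptly because the write is visible
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + runDemo());
    }
}
```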
- Handwritten producer/consumer model
See Concurrent Programming Topic 5: Producer and Consumer Patterns for three ways to write it.
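One of the classic ways is a bounded buffer guarded by the monitor's wait()/notifyAll(). A minimal sketch (production code would usually just use a BlockingQueue):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedBuffer {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(int item) throws InterruptedException {
        while (buffer.size() == capacity) {
            wait();          // buffer full: the producer waits
        }
        buffer.addLast(item);
        notifyAll();         // wake consumers waiting on "empty"
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();          // buffer empty: the consumer waits
        }
        int item = buffer.removeFirst();
        notifyAll();         // wake producers waiting on "full"
        return item;
    }
}
```

Note the `while` (not `if`) around wait(): a woken thread must re-check the condition, because another thread may have consumed the slot first.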
1.8 Concurrency utilities (java.util.concurrent)
- How does ThreadLocal work?
ThreadLocal achieves thread safety by giving each thread its own copy of a variable, stored inside the thread itself. A ThreadLocal is usually defined as a static variable. Each thread appears to fetch data from the ThreadLocal, but in reality the ThreadLocal only acts as the key of a key-value pair; the actual data is stored in a hash table (the ThreadLocalMap) held in a field of the Thread instance. When the ThreadLocal's get() method is called, Thread.currentThread() obtains the current Thread instance, and the ThreadLocal is used as the key to look up the stored value in that instance's map. The map inside Thread resolves hash conflicts with linear probing over an array. (See the article on using ThreadLocal and its source code.)
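A minimal sketch: each thread that touches the same ThreadLocal sees its own copy, because the value actually lives in that thread's ThreadLocalMap with the ThreadLocal instance as the key.

```java
public class ThreadLocalDemo {
    // The static field is only the key; the values live inside each
    // Thread's own ThreadLocalMap, so every thread gets its own counter.
    static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    static int bumpTwice() {
        LOCAL.set(LOCAL.get() + 1);
        LOCAL.set(LOCAL.get() + 1);
        return LOCAL.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() ->
                System.out.println("worker sees " + bumpTwice()));
        t.start();
        t.join();
        // The main thread starts from its own initial value, unaffected
        // by the worker's updates.
        System.out.println("main sees " + bumpTwice());
    }
}
```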
- Concurrent classes: which concurrent collections do you know?
- ConcurrentHashMap: a thread-safe HashMap that locks at bucket granularity to reduce lock contention and improve performance.
- ConcurrentSkipListMap: a thread-safe sorted map implemented with a skip list.
- ConcurrentSkipListSet: implemented on top of ConcurrentSkipListMap.
- CopyOnWriteArrayList: an ArrayList for read-heavy, write-light scenarios; it locks while writing and copies the underlying array.
- CopyOnWriteArraySet: implemented on top of CopyOnWriteArrayList.
- ConcurrentLinkedQueue: an unbounded, thread-safe queue whose poll(), add() and other methods are built on the CAS idea, making it relatively lightweight compared with locking.
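A quick tour of the two most commonly used ones (a sketch; the semantics are as documented in java.util.concurrent):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ConcurrentCollections {
    public static void main(String[] args) {
        // ConcurrentHashMap: fine-grained (per-bucket) locking, plus
        // atomic compound operations like putIfAbsent/compute.
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.putIfAbsent("hits", 0);
        map.compute("hits", (k, v) -> v + 1); // atomic read-modify-write

        // CopyOnWriteArrayList: every write copies the backing array, so
        // iterators never throw ConcurrentModificationException. Good for
        // read-heavy listener lists, expensive for frequent writes.
        List<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("onClick");

        System.out.println(map.get("hits") + " listener(s): " + listeners.size());
    }
}
```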
1.9 Input and Output
- NIO
- The principle of multi-threaded resumable transfer
Both resumable upload and resumable download rely on RandomAccessFile, which can read and write data starting from a specified position. For a resumable upload, the server tells the client the position up to which data has already been uploaded; the client then moves the file pointer to that position, reads the rest of the file through an input stream, and transmits it to the server.
To use multiple threads for resumable transfer, assign each thread a fixed byte range of the file; each thread reads its own range and uploads it to the server separately.
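A minimal sketch of the seek-and-write primitive that multi-threaded resumable transfer is built on (the method name is illustrative; each worker would call it with the bytes for its own assigned range):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RangedWrite {
    // Write `chunk` into `file` starting at byte offset `start`. In a
    // real downloader each thread handles its own range
    // [start, start + chunk.length) of the same file.
    static void writeChunk(Path file, long start, byte[] chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.seek(start);   // move the file pointer to this range
            raf.write(chunk);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("ranged", ".bin");
        writeChunk(tmp, 0, "hello ".getBytes());
        writeChunk(tmp, 6, "world".getBytes());
        System.out.println(new String(Files.readAllBytes(tmp)));
        Files.delete(tmp);
    }
}
```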
2. Kotlin related
- Knowledge of Kotlin coroutines
Coroutines essentially reuse threads: by keeping threads fully loaded they maximize CPU utilization and improve application performance. Compared with threads, coroutines do not require thread switching, and the more concurrent tasks there are, the greater their performance advantage. The second advantage is that no multi-threaded locking mechanism is needed: because there is only one thread, there are no conflicting simultaneous writes to variables, so shared resources can be coordinated inside coroutines by checking state rather than locking, which makes execution much more efficient than multithreading.
Both coroutines and threads can be used to implement asynchronous calls, but they differ fundamentally:
- Coroutines are a compiler-level (language-level) construct, while threads are system-level. Coroutine switching is controlled by the program; thread switching is controlled by the operating system.
- Coroutines are cooperative, while threads are preemptive. With coroutines, the program decides when to switch; with threads, the operating system decides.
- One thread can contain multiple coroutines. In Java, multithreading can take full advantage of multi-core CPUs, whereas coroutines of this kind execute within a single thread.
- Coroutines suit IO-intensive programs, while multithreading suits computation-intensive programs (on multi-core CPUs). If your application mostly reads and writes files or makes network requests, coroutines should be your first choice: most of that time is spent waiting for data rather than computing, and coroutines execute more efficiently because switching is controlled by the program itself with no thread-switch overhead, an advantage that grows with the number of concurrent tasks.
- Coroutines let you write asynchronous code in a sequential style, avoiding callback hell.
Reference: "Should we continue with RxJava, or should we try Kotlin's coroutines" – Android architecture article – Zhihu
- What are Kotlin's advantages over Java?
Kotlin is a JVM-based language that provides many convenient syntax features. If Kotlin is good at anything, it is because it stands on the shoulders of Java. After studying it for a while, you will find that many of its syntax designs fit our actual development habits very well.
For example, we usually don't override a class. In particular, in the Java Web world many classes are used as Java beans, and they don't involve much inheritance. Classes in Kotlin are not inheritable by default; to allow a class to be inherited, you must mark it explicitly with the open keyword.
A Java bean, as a business object, has many fields. In Java we declare a set of setter and getter methods for them, and those methods must then be used to access each property, which fills our code with parentheses. With Kotlin, properties can be read and assigned directly, which is much more elegant.
Likewise, when using switch in Java we usually put a break at the end of each case; Kotlin's when does not fall through, so the break is implicit, which saves a lot of code.
Another area where Kotlin excels is null (NPE) control. In Android development we can use the @NonNull and @Nullable annotations to indicate whether a field may be null. In Java, fields are nullable by default and nothing warns you, so carelessness easily causes an NPE. Kotlin's variables are non-null by default, and you must declare a type nullable explicitly if you want it to hold null. This tells us that a variable might be null, and knowing that, we can handle it accordingly. For potentially null values, Kotlin defines the following rules, which make NPEs easy to deal with:
- A `?` after a type makes the variable nullable;
- The safe-call operator `?.`: taking `a?.method()` as an example, if a is not null the result of the whole expression is `a.method()`, otherwise null;
- The Elvis operator `?:`: taking `a ?: "A"` as an example, if a is not null the result of the whole expression is a, otherwise "A";
- The safe-cast operator `as?`: taking `foo as? Type` as an example, foo is converted to an instance of Type if it is of type Type, otherwise the expression returns null;
- The not-null assertion `!!`, used after a variable to assert that it is not null, as in `a!!`;
- `let` performs some operation on the instance that calls it, as in `val b = "AA".let { it + "A" }`, which returns "AAA";
And so on. A lot of the time I feel that Java's design rules mislead people and don't match our habits, whereas Kotlin builds on people's years of experience with Java, simplifying many calls to fit the way we actually work. Kotlin is powerful because it stands on the shoulders of Java.
3. Design patterns
- Talk about your understanding of Android design patterns
- What are some common design patterns used in projects?
- Factory + Strategy: used to create various instances, for example one implementation for the United States and one for China;
- Observer: pages that listen for, register, unregister, and notify events;
- Singleton: used in many places; prefer lazy initialization;
- Builder: for classes with too many parameters, to make calls easier;
- Adapter: the RecyclerView adapter;
- Template method: create a top-level template class, such as an abstract Fragment or Activity, but remember to favor composition over inheritance and avoid over-design.
- Facade: the camera modules Camera1 and Camera2 encapsulate their internal implementations and expose functionality externally through a CameraManager.
- Handwritten observer mode?
The Observer design pattern is similar to the interface callbacks we often use. In the code below, the subject is subscribed to in the observer's constructor, which is not the important part. The core is the list maintained in the subject: you simply call the notify method when needed. In a multi-threaded environment, also consider thread safety, for example by using a thread-safe collection. The following is only a very basic example program; understand the design idea and apply it flexibly rather than following it rigidly.
```java
public class ConcreteSubject implements Subject {
    // Maintain the list of observers
    private List<Observer> observers = new LinkedList<>();

    @Override
    public void registerObserver(Observer o) { // Register an observer
        observers.add(o);
    }

    @Override
    public void removeObserver(Observer o) { // Remove an observer
        int i = observers.indexOf(o);
        if (i >= 0) {
            observers.remove(o);
        }
    }

    @Override
    public void notifyObservers() { // Notify all observers of updates to the subject
        for (Observer o : observers) {
            o.method();
        }
    }
}

public class ConcreteObserver implements Observer {
    private Subject subject; // The subject the observer subscribed to

    public ConcreteObserver(Subject subject) {
        this.subject = subject;
        subject.registerObserver(this); // Add the current observer to the subject's list
    }

    // When the subject changes, it iterates over its observer list and
    // notifies each observer by calling this method
    @Override
    public void method() {
        // ...
    }
}
```
(To learn more about the Observer design pattern, see the article: Design Pattern Analysis: Observer Pattern.)
- Handwritten singleton pattern, lazy and eager

```java
// Eager: the instance is created when the class is loaded
public class Singleton {
    private static Singleton singleton = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return singleton;
    }
}

// Lazy: the instance is created on the first call, using double-checked locking
public class Singleton {
    private volatile static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}
```
In addition, two things about singletons need attention: 1. What if the user initializes it via reflection? You can throw an exception when a second instance is created. 2. What if a user creates the singleton repeatedly through Java's serialization mechanism? Make all instance fields transient, then override the readResolve() method and return the singleton.
Also, when there are too many singletons, you can store them in a Map and look each one up by a key, so there is no need to declare a large number of separate singletons.
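That registry idea can be sketched with a map and computeIfAbsent (the class and key names here are illustrative, not from any particular framework):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class SingletonRegistry {
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    // Return the one instance registered under `key`, creating it on
    // first use. computeIfAbsent runs the factory at most once per key
    // even under concurrent access.
    @SuppressWarnings("unchecked")
    public static <T> T get(String key, Supplier<T> factory) {
        return (T) INSTANCES.computeIfAbsent(key, k -> factory.get());
    }
}
```

Callers then write something like `SingletonRegistry.get("imageLoader", ImageLoader::new)` (a hypothetical class) instead of declaring a separate singleton class for each service.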
(To learn more about the singleton design pattern, see the article: Design Pattern 4: Singleton Pattern.)
- What are the similarities and differences between the adapter, decorator, facade, and proxy patterns? (These design patterns are easy to mix up.)
What the four patterns have in common is that each takes in a class and then uses that class internally to complete the business logic.
We use the letters A, B, and C to represent three different classes (things).
Facade: to hide internal differences and provide a consistent external interface X, we define three classes AX, BX, and CX that all implement interface X, each delegating to A, B, or C to implement X's methods in its own way. Taking camera development as an example, Camera1 and Camera2 each have their own implementations; we define one unified interface and two implementation classes.
Decorator: suppose a class X references AX, an implementation of interface A. AX's logic has a shortcoming we want to improve, and we offer three schemes, A1, A2, and A3. Each of them implements interface A and references AX to complete the business, enhancing interface A's methods with its own scheme. Because they implement A, A1, A2, and A3 can be plugged into X directly in place of AX.
Adapter: suppose a class X references interface A, but now we must use B to complete A's logic. Because A and B are two unrelated classes, we need an adapter: A's implementation AX references B's implementation BX to carry out the methods of interface A.
The purpose of the facade pattern is to hide the differences between classes and provide a consistent external interface. The decorator pattern keeps the same external interface while the internally referenced instance implements that same interface; its purpose is to extend an instance with additional functionality. So the former is many-to-one and the latter one-to-many. The adapter pattern works between two different classes, using one class to implement the functionality of another, one-to-one. The proxy pattern also uses one class to perform a function and is one-to-one, but it sits between classes of the same type to enhance or control access to the target class, whereas an adapter bridges different classes. Both decorators and proxies enhance a class's functionality, but a decorated object keeps the original type and can seamlessly replace the original class, while a proxy is a different class standing in for the original and cannot simply replace it.
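The decorator/proxy contrast can be made concrete (all class names here are illustrative): the decorator wraps whatever instance it is handed and can be stacked or substituted anywhere the interface is expected, while the proxy owns a fixed delegate and stands in for it as a distinct class.

```java
interface DataSource {
    String read();
}

class FileSource implements DataSource {
    public String read() {
        return "raw";
    }
}

// Decorator: same interface, wraps ANY DataSource handed to it, and can
// be stacked; the caller composes the chain and can use the result
// anywhere a DataSource is expected.
class CompressingSource implements DataSource {
    private final DataSource inner;

    CompressingSource(DataSource inner) {
        this.inner = inner;
    }

    public String read() {
        return "unzip(" + inner.read() + ")";
    }
}

// Proxy: also same interface, but it creates and owns its delegate
// itself and controls access to it; callers deal with the proxy as a
// separate class standing in for the original.
class LoggingProxy implements DataSource {
    private final DataSource target = new FileSource(); // fixed delegate

    public String read() {
        System.out.println("read() called"); // access control / logging
        return target.read();
    }
}
```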