Java basics

Instruction rearrangement

as-if-serial

No matter how the instructions are reordered, the result of execution in a single thread must not change.

happens-before

If the result of one operation needs to be visible to another, there must be a happens-before relationship between the two operations; this matters especially in multithreaded code

public class ControlDep {
    int a = 0;
    boolean flag = false;

    public void init() {
        a = 1;        // 1
        flag = true;  // 2
    }

    public void use() {
        if (flag) {            // 3
            int i = a * a;     // 4
        }
    }
}

There are two threads A and B. Thread A executes init(): with reordering, step 2 can run before step 1. If thread B then runs use(), it sees flag == true while a is still 0, so i = 0, whereas the correct result would be i = 1

There are two solutions to the above problem:

  1. Memory barriers (volatile), which prohibit the instruction reordering in thread A (see the sketch after this list)
  2. Synchronized locks the object or class
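A minimal sketch of the volatile fix, assuming the same two-thread scenario as above (class name invented):

    public class ControlDepFixed {
        int a = 0;
        volatile boolean flag = false;   // volatile forbids reordering 1 past 2 and makes the write visible

        public void init() {
            a = 1;        // 1: ordinary write
            flag = true;  // 2: volatile write; 1 cannot be reordered after it
        }

        public void use() {
            if (flag) {          // 3: volatile read
                int i = a * a;   // 4: guaranteed to see a == 1, so i == 1
            }
        }
    }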

JVM memory model

The native method stack, program counter and virtual machine stack are thread-private, so there are no thread-safety issues there; the method area and the heap are shared by all threads and need locking to ensure thread safety

  • Program counter: small memory footprint, thread-private, same life cycle as its thread; roughly a bytecode line-number indicator
  • Virtual machine stack: the memory model of Java method execution, containing local variables, the operand stack, dynamic linking, method exits, etc.; it manages Java method calls and uses contiguous memory
  • Native method stack: used to manage calls to native methods

  • Heap area: Stores all object instances (including arrays), same as the JVM life cycle
  • Method area: stores loaded class information, the constant pool, static variables, and code compiled by the JIT compiler

Static variables are created in the method area and recycled at the end of the program, independent of the heap

The stack size is 1 MB by default; roughly 800-odd levels of recursive calls are supported before it overflows

Three major features of the JVM memory model

Atomicity: in a multithreaded program, once an operation has started it runs to completion without interference from other threads

Visibility: when a thread modifies a variable, the update is flushed to main memory and visible to other threads

Orderliness: When the processor performs an operation, it optimizes the out-of-order execution of the program code, also called reordering optimization

Garbage collection mechanism

How can I tell if an object is garbage?

  1. Reference counting. Every operation on an object goes through a reference, so a per-object reference count can decide whether the object should be reclaimed. It is not used in Java (Python does use it) because it cannot resolve circular references.
  2. Reachability analysis. To solve the circular-reference problem, reachability analysis is used instead: a search is conducted starting from a set of "GC Root" objects, and if there is no reachable path between the GC Roots and an object, the object is considered unreachable and is marked.

    GC Roots include:
    • Objects referenced in the virtual machine stack (local variables in the stack frame);
    • Objects referenced by constants in the method area;
    • The object referenced by the class static attribute in the method area;
    • JNI (Native method) reference objects in the Native method stack.
    • Active thread object

The garbage collection mechanism is for the collection of heap areas

Common situations in which an object becomes collectable:

  1. A reference is set to null
Object obj = new Object();
obj = null;
  2. A reference that pointed to one object is redirected to another object
Object obj1 = new Object();
Object obj2 = new Object();
obj1 = obj2;
  3. An object referenced only by a local variable
void fun() {
    ...
    for (int i = 0; i < 10; i++) {
        Object obj = new Object();
        System.out.println(obj.getClass());
    }
}

Each time the loop completes, the generated Object becomes a recyclable Object.

  4. An object reachable only through a weak reference
WeakReference<String> wr = new WeakReference<String>(new String("world"));

Garbage collection algorithm

  1. Mark-sweep algorithm: mark the objects to be reclaimed, then delete the marked objects. Disadvantage: produces a large amount of memory fragmentation
  2. Copying algorithm: proposed to solve the fragmentation problem. Memory is divided into two halves of equal capacity; when one half is used up, the surviving objects are copied to the other half and the used half is cleaned in one go. Disadvantage: twice the space is consumed, so the usable memory is halved
  3. Mark-compact algorithm: to make full use of memory, after marking, the surviving objects are moved to one end and the memory beyond the end boundary is cleaned up
  4. Generational collection divides memory into the new generation, the old generation and the permanent generation. New generation: uses the copying algorithm to reclaim the large number of objects that die, but the space is not split 1:1; it is divided into a larger Eden and two smaller Survivor spaces (ratio 8:1:1), and each time Eden plus one Survivor space are used, with surviving objects from Eden and that Survivor copied into the other Survivor. Old generation: uses the mark-sweep or mark-compact algorithm (depending on the garbage collector) to reclaim the small number of objects that die. Permanent generation: lives in the method area, not the heap, and stores Class objects, constants, method descriptions, etc.; permanent-generation collection mainly reclaims two things: discarded constants and unused classes

Note: in Java 8 the permanent generation has been removed and replaced by an area called the Metaspace.

New generation = 1/3 of the heap, old generation = 2/3 of the heap

All newly created objects go into Eden. Large objects are created directly in the old generation, because copying them around in the new generation would hurt performance

Each time an object survives a copy between Survivor spaces its age increases by 1; when the age exceeds 15 it is promoted to the old generation

JVM memory structure changes on JDK7 and JDK8?

jdk7:

  1. Physically, the heap and the method area (permanent generation) are stored contiguously although they are logically separate; because they are stored together, a Full GC also collects the heap's permanent generation

jdk8:

  1. The permanent generation is removed; class structure information is moved into native memory, while the constant pool and static/global variables are stored in the heap
  2. The method area now lives in the Metaspace, and the Metaspace is allocated from native memory

When native memory runs short, it does not trigger a GC

Why use meta-space instead of permanent generation?

To avoid OOM in the permanent generation: it is hard to predict how many classes and methods will be loaded and how much space to allocate for them. The Metaspace can, in theory, use all available native memory, so that limit is far harder to hit

Where does the character constant pool exist?

1.6: stored in the method area (permanent generation). 1.7: moved to the heap; String objects live in the heap and the string constant pool holds references to them, so everything is in the heap. 1.8: still stored in the heap
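A small illustrative sketch of the pool vs. heap distinction (behaviour as described above):

    public class InternDemo {
        public static void main(String[] args) {
            String a = new String("hi");         // creates a String object in the heap
            String b = "hi";                     // literal: resolved through the string constant pool
            System.out.println(a == b);          // false: two different references
            System.out.println(a.intern() == b); // true: intern() returns the pooled reference
        }
    }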

Where is the runtime constant pool?

In 1.8 it moved to the Metaspace; before that it lived in the method area

Garbage collector

Java commonly uses the HotSpot virtual machine. HotSpot has seven garbage collectors, roughly divided into three categories:

New-generation collectors: Serial, ParNew, Parallel Scavenge

Old-generation collectors: Serial Old, CMS, Parallel Old

Whole-heap collector: G1

  • Serial: the new-generation single-threaded collector; both marking and sweeping are single-threaded. Its advantage is simplicity and efficiency, its disadvantage is a long pause time.
  • ParNew (copying): a new-generation parallel collector, the multi-threaded version of Serial; it performs better than Serial on multi-core CPUs (and it is the only new-generation collector that can work with CMS)
  • Parallel Scavenge: a new-generation parallel collector that pursues high throughput and efficient CPU usage, finishing the program's work as soon as possible; suited to background tasks that do not require much interaction. Throughput = user-thread time / (user-thread time + GC-thread time); the goal is to shorten the time worker threads spend waiting
  • Serial Old: the old-generation single-threaded collector, the old-generation counterpart of Serial
  • Parallel Old: the old-generation counterpart of Parallel Scavenge
  • CMS (Concurrent Mark Sweep): an old-generation concurrent collector that aims for the shortest collection pause, characterised by high concurrency and low pauses. Pursuing the smallest GC pause means shorter GC stop times

    Disadvantages:
    1. Excessively sensitive to CPU resources, applications slow down, and throughput drops
    2. Unable to handle floating garbage. Because the worker thread is running at the time of marking and cleaning, new garbage is generated, but it cannot be collected this time.
    3. A large amount of memory fragmentation is generated, triggering the Full GC prematurely
  • G1 (Garbage First): a parallel and concurrent collector whose collection scope covers both the new generation and the old generation. It is positioned as a next-generation collector: it keeps the concepts of new and old generations, but internally divides the Java heap into independent Regions of equal size

    Advantages:
    1. Parallelism and concurrency. It uses multiple CPUs to shorten pause times and can execute concurrently with user threads
    2. Generational collection. Although G1 manages the whole heap on its own, it still treats newly created objects differently from old objects that have survived several GCs, which gives better collection results
    3. Uses a mark-compact approach, so no memory fragmentation is produced.
    4. Predictable pauses. Developers can specify a time window within which garbage collection should be completed.

For applications that prioritise throughput and are sensitive to CPU resources, Parallel Scavenge + Parallel Old is the preferred combination

Types and methods of GC

  1. Minor GC: New generation GC
    • Minor GC is triggered when Eden is full
  2. Major GC: Old GC
  3. Full GC: Global GC (Young + Old)
    • The System.gc() method may trigger a Full GC
    • The old generation is full
    • The permanent generation is full: a Full GC is triggered, constants are collected and classes are unloaded
    • After a Minor GC, the size of the objects entering the old generation is larger than the old generation's available memory
    • After a Minor GC, if a Survivor space cannot hold the surviving objects, the overflow is put into the old generation; if the old generation cannot hold it either, a Full GC is triggered

GC triggers "stop-the-world": all worker threads are paused while the GC runs, and they resume their tasks once the collection completes

HashMap

Reference: Meituan interview question: the structure of HashMap and the differences between 1.7 and 1.8 (an in-depth analysis)

Overview

Two parameters that affect performance:

  • Initial capacity: a power of 2, 16 by default

  • Load factor: determines when the HashMap is resized; the default is 0.75, i.e. it resizes at 16 * 0.75 = 12 entries

  • Maximum capacity: 2^30; anything larger is clamped to 2^30

  • A null key and null values are allowed; the entry with the null key is stored at table[0]

  • The essence of deleting an element is "deleting a node of a singly linked list"

  • Entry nodes form a singly linked list

The hash of the key is computed and the node is added to the corresponding linked list; if the key already exists, the value is updated

    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;   // the computed hash value
        final K key;      // key
        V value;          // value
        Node<K,V> next;   // reference to the next node in the list
    }

Compared with Hashtable

  • HashMap is faster because it is not synchronized (and therefore not thread-safe)
  • A HashMap can accept null keys and null values

Array index calculation process

// (array length - 1) & hash value
(n - 1) & hash

When the length is a power of two, this is equivalent to hash % array length
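A quick check of the equivalence (the hash value is arbitrary, chosen for illustration):

    public class IndexDemo {
        public static void main(String[] args) {
            int n = 16;                    // table length, a power of two
            int hash = 0x7A35;
            int byMask = (n - 1) & hash;   // 0x7A35 & 0x0F = 5
            int byMod = hash % n;          // also 5 for a non-negative hash
            System.out.println(byMask + " " + byMod);  // prints "5 5"
        }
    }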

Describe the put process

  1. Compute the key's hash value, then compute the array index
  2. If there is no collision at that index, place the Node directly in the array
  3. If there is a collision, link the Node onto the existing nodes as a linked list
  4. If the list length exceeds the threshold (8), the list is converted to a red-black tree; if it drops below 6, the red-black tree is converted back to a list
  5. If the key already exists, the old value is replaced
  6. If the array is nearly full (size exceeds capacity 16 * load factor 0.75 = 12), it must be resized

Why 6 and 8?

Leaving the gap at 7 prevents the structure from flipping back and forth between list and tree when the length fluctuates around the threshold, which would hurt performance

The get method

  1. Computes the hash of the key and the index
  2. Find index in array, compare key, fetch value, best O(1), worst O(n)

Why not just use red black trees?

It is a space-time trade-off: while the chain is short, lookups are fast enough and the space cost is small; converting to a red-black tree makes lookups fast on long chains, but consumes more space.

The following methods can be used to handle hash conflicts:

  1. Open addressing (linear probing: after a collision, move to another slot offset by a fixed step x, with x positive; quadratic probing: offset by x squared, with x positive or negative)
  2. Rehashing (use several hash functions; when one collides, switch to another until the hash no longer collides)
  3. Linked address method (linked list)
  4. Create a public overflow zone (create an overflow table to store conflicting data)

Why can HashMap be slow?

  • Autoboxing of primitive keys
  • Resizing: in 1.7 the hash and index are recomputed and every entry is reassigned; in 1.8 the hash is ANDed with the old capacity: if the result is 0 the index stays the same, otherwise the entry moves to old index + old capacity

What happens when HashMap is used by multiple threads?

Circular linked lists, caused by head insertion during resize (version 1.7)

Data loss; version 1.8 fixes the circular-list problem by using tail insertion

Why is the default capacity in HashMap a power of 2?

Because a capacity that is not a power of two causes more hash collisions. Suppose n is 17: n - 1 is 10000 in binary, so hashes 01001 and 01101 both give index 0. Suppose n is 16: n - 1 is 01111, and 01001 and 01101 give different indexes

Hashcode calculation principle

For an Integer, the hashCode is the int value itself, e.g. for i = 1 the hashCode is 1. For a general object, the default hashCode is a mapping derived from the object's internal address

Hash () algorithm principle

    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

Take the key's hashCode() and XOR it with its own high 16 bits (h >>> 16 is an unsigned right shift; XOR yields 0 for equal bits and 1 for different bits), which mixes the high bits into the index calculation

Understanding Hashtable

The put and get methods use the synchronized modifier to lock the entire map, so only one thread can operate at a time

Null values and null keys cannot be stored

Understanding SparseArray

Principle

Boxing: int -> Integer object; unboxing: Integer object -> int

The default capacity is 10

  • The key is a primitive int (avoiding boxing problems). Keys are kept sorted in ascending order; both lookup and insertion use binary search
  • Two arrays: keys (int[]) and values (Object[])
mKeys[i] = key;
mValues[i] = value;
  • If a key already exists, its value is replaced
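A minimal usage sketch of android.util.SparseArray (the keys and values here are invented for illustration):

    import android.util.SparseArray;

    public class SparseArrayDemo {
        static void demo() {
            SparseArray<String> users = new SparseArray<>();
            users.put(3, "Tom");
            users.put(1, "Jerry");                  // keys are kept sorted ascending: 1, 3
            String name = users.get(3);             // "Tom", found by binary search on the int keys
            String fallback = users.get(9, "none"); // default value when the key is absent
            users.delete(1);                        // removes key 1 and its value
        }
    }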

Binary insertion:

    while (lo <= hi) {
        // midpoint index of the current search range
        final int mid = (lo + hi) >>> 1;
        // value at the midpoint
        final int midVal = array[mid];

        if (midVal < value) {
            // the value being searched for lies in the upper half, so move the lower bound up
            lo = mid + 1;
        } else if (midVal > value) {
            // the value being searched for lies in the lower half, so move the upper bound down
            hi = mid - 1;
        } else {
            // the middle value matches: return its index
            return mid;  // value found
        }
    }

Each new key is compared against the middle of the current range, and the search narrows to the left or right half until the insertion point is found.

If the key already exists, its value is replaced directly. If it does not, the binary search yields the insertion index, the elements from that index onwards are shifted back, and the new key is placed at the index

Advantages over HashMap
  • Save memory
  • Better performance, avoiding boxing problems
  • When the key is an int, a HashMap can be replaced with a SparseArray

SparseArray and HashMap comparison, application scenario?

  1. SparseArray is not hashed, HashMap is hashed
  2. SparseArray uses two one-dimensional arrays to store keys and values, and HashMap uses a one-dimensional array plus a one-way list/red-black tree
  3. SparseArray keys can only be of type int, whereas hashMaps can be of any type
  4. SparseArray Key is ordered storage (ascending), while HashMap is not
  5. SparseArray defaults to 10, while HashMap defaults to 16
  6. SparseArray memory usage is superior to HashMap because:
    • SparseArray Key is an int and HashMap is an Object
    • The SparseArray Value store is not wrapped with an entity class (Node) like HashMap
  7. SparseArray lookups are generally slower than HashMap, because SparseArray uses binary search while HashMap, when there is no collision, can jump straight to the slot computed from the hash

When choosing between SparseArray and HashMap, SparseArray is recommended if:

  1. Memory matters more than lookup efficiency
  2. The data size is on the order of hundreds, where SparseArray has the advantage over HashMap
  3. The key must be an int, because HashMap would autobox an int key into an Integer
  4. The keys need to be kept in ascending order

Understanding ArrayMap

Internally it also uses binary search for storage and lookup; its design puts more weight on memory optimisation

  • An int[] stores the hash values; in the Object[] array, array[index] stores the key and array[index + 1] stores the value

Best suited to data sizes up to roughly a thousand entries

How to select ArrayMap and SparseArray?

  1. If the key is an int, choose SparseArray: there is no boxing/unboxing problem
  2. If the key is not int, ArrayMap is used

Understanding TreeMap

TreeMap is based on a binary tree, specifically a red-black tree

Duplicate keys are not allowed

TreeMap has no tuning options because its red-black tree is always kept balanced

What is the difference between TreeMap and HashMap?

  1. TreeMap consists of red-black tree, and HashMap consists of array + linked list/red-black tree
  2. HashMap elements are unordered, while TreeMap keys are kept sorted in ascending order, by natural ordering or by a supplied Comparator
  3. HashMap is best for insert, find, and delete, while TreeMap is best for natural or custom ordering
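A small sketch of TreeMap ordering (the names and scores are invented for illustration):

    import java.util.Comparator;
    import java.util.TreeMap;

    public class TreeMapDemo {
        public static void main(String[] args) {
            // natural ordering of the String keys: ascending
            TreeMap<String, Integer> natural = new TreeMap<>();
            natural.put("bob", 80);
            natural.put("alice", 90);
            System.out.println(natural.firstKey());    // "alice"

            // custom ordering via a Comparator: descending
            TreeMap<String, Integer> reversed = new TreeMap<>(Comparator.reverseOrder());
            reversed.putAll(natural);
            System.out.println(reversed.firstKey());   // "bob"
        }
    }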

Understanding ThreadLocal

Reference: "Interviewer: young man, I heard you've read the ThreadLocal source code?" (a long-form deep dive into ThreadLocal)

Data is isolated per thread and does not cross threads

  • Each Thread holds a ThreadLocalMap field named threadLocals
  • ThreadLocalMap stores Entry objects, and an Entry holds a weak reference to its ThreadLocal
  • In ThreadLocalMap, the key of an Entry is the weak reference to the ThreadLocal and the value is the object being stored
  • The table's initial capacity is 16 and the resize threshold is 16 * 2/3
  • threadLocals is created lazily by ThreadLocal's get() and set() methods
  • The get, set and remove methods clean up entries whose key is null
  • The table is treated as a circular array

Linear probing avoids hash collisions and incrementally searches for places that are not occupied

The index is computed from the hash code. If the key at that slot is the same, the value is replaced; if it is different, nextIndex moves on to the next slot

ThreadLocal manages a ThreadLocalMap per thread, which is why the data is thread-isolated.
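A minimal sketch of the per-thread isolation (class and variable names invented):

    public class ThreadLocalDemo {
        // each thread gets its own copy initialised to 0; updates never cross threads
        private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

        public static void main(String[] args) {
            Runnable task = () -> {
                COUNTER.set(COUNTER.get() + 1);
                System.out.println(Thread.currentThread().getName() + " -> " + COUNTER.get()); // both print 1
                COUNTER.remove();   // good practice: avoid stale entries, especially in thread pools
            };
            new Thread(task, "t1").start();
            new Thread(task, "t2").start();
        }
    }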

Understanding ThreadLocalMap

When a new ThreadLocal is created, a ThreadLocalMap is created for the thread, and the value 0x61C88647 (the golden-ratio constant) is used to compute the hash, so the computed indexes are spread out fairly evenly, greatly reducing hash conflicts. Internally, linear probing is used to resolve the conflicts that remain:

  1. Calculates the array index based on key
  2. Probe from that index: if the slot is empty, store the value directly; if the key matches, update the value; if the key differs, move to the next index (linear probing) and try again.

Why ThreadLocal uses weak references

The key is a weak reference. If it were a strong reference, then even after the ThreadLocal object is no longer referenced elsewhere, the ThreadLocalMap would still hold a strong reference to it, so the ThreadLocal could never be reclaimed, causing a memory leak

Memory leak in ThreadLocal

  • Avoid declaring ThreadLocal as static: it prolongs the life cycle and may cause memory leaks
  • When the ThreadLocal weak reference is collected by the GC, the key becomes null but the value object is not collected; the stale entry is only discovered the next time set, get or remove is called

How ThreadLocalMap cleans expired keys

  1. Probe cleaning: when a stale entry is removed, the entries after it are rehashed; for example, an entry that belongs at slot 4 but was pushed to slot 7 by collisions moves forward to slot 5 once the stale entry at slot 5 is cleaned
  2. Heuristic cleaning: scans a limited number of slots in the array and cleans the stale entries it finds along the way

ConcurrentHashMap and HashMap

JDK 1.7: ReentrantLock + Segment + HashEntry

  • Thread-safe via segment locks; Hashtable locks the whole table, so ConcurrentHashMap performs better
  • By default 16 locks (Segments) are allocated, which in theory gives 16 times the concurrency of Hashtable
  • HashEntry is final and cannot be modified; once a node changes, the chain in front of it is recreated using head insertion, so the order is reversed
  • When computing size(), the Segment counts are first summed without locking; if consecutive attempts disagree, the Segments are locked and the value recomputed

JDK 1.8: synchronized + Node + volatile + red-black tree

Put:

  1. Compute the position in the Node array from the key's hash value
  2. If the Node at that position is not empty and is not being moved (no resize in progress), a synchronized lock is taken on that Node and the list is traversed for insertion
  3. If the node is a red-black tree node, insert into the red-black tree
  4. If the list grows beyond 8 nodes, it is converted to a red-black tree

The get:

  1. Compute the hash value, locate the table index position, return if the first node matches
  2. If a resize is in progress, the slot holds a ForwardingNode; its find method is called so the lookup continues in the new table, and the match is returned
  3. If none of the above is true, the node is traversed and returns if it matches, otherwise null is returned

Differences between 1.7 and 1.8:

  1. 1.7: ReentrantLock + Segment + HashEntry (immutable)

     1.8: synchronized + Node + volatile + red-black tree
  2. The lock in 1.8 has finer granularity: it locks a single bucket (table[i]), while the lock in 1.7 covers a whole Segment (a small HashMap).

  3. ReentrantLock performs worse than synchronized

Resizing:

In 1.7, each Segment (a small HashMap) performs its own resize

In 1.8, synchronized locks individual Nodes, so resizing can be done by multiple threads: one thread creates the new table and sets the size, and multiple threads help move the old contents into the new map; buckets that have already been transferred are marked so other threads skip them

Why can only HashMap store null keys and null values?

Because HashMap is thread-unsafe and meant for single-threaded use; in the thread-safe maps, during concurrent access there is no way to tell whether a null returned by get() means the key is absent or the key is mapped to null

Locks

Common locks

Classification of locks

  1. Fair lock/unfair lock

    • Fair lock: multiple threads acquire the lock in the order in which they requested it.
    • Unfair lock: threads do not acquire the lock in request order; a later requester may get it first. (synchronized)

    ReentrantLock is an unfair lock by default; it can be made fair by passing true to the constructor. The advantage of an unfair lock is higher throughput than a fair lock

  2. Reentrant lock: also known as recursive lock, this means that after the outer method acquires the lock, the inner method also acquires the lock automatically.

synchronized void setA() throws Exception {
    Thread.sleep(1000);
    setB();
}

synchronized void setB() throws Exception {
    Thread.sleep(1000);
}

If the lock were not reentrant, the current thread could not enter setB while still holding the lock in setA, which easily leads to deadlock

Synchronized is a reentrant lock

  3. Exclusive lock/shared lock

    • Exclusive lock: the lock can be held by only one thread at a time (synchronized, ReentrantLock)
    • Shared lock: the lock can be held by multiple threads at once. (ReadWriteLock)
  4. Mutex/read-write lock. The exclusive/shared locks above are broad terms; mutexes and read-write locks are the concrete implementations: in Java the mutex is ReentrantLock and the read-write lock is ReadWriteLock

  5. Optimistic lock/pessimistic lock

    • Pessimistic locking: assumes that concurrent operations on the same data will conflict, so it always locks. (Implemented with actual locks)
    • Optimistic locking: assumes that concurrent operations on the same data will not conflict, so no lock is taken and conflicts are detected at update time. (Lock-free programming, the CAS algorithm, spin-based atomic updates)
  6. Segment locking is a lock design rather than a specific lock. ConcurrentHashMap in version 1.7 uses segment locks, called Segments; each Segment in the map extends ReentrantLock

  7. Biased/lightweight/heavyweight locks are the three states that describe synchronized.

    • Biased lock: if a piece of synchronized code is repeatedly accessed by the same thread, that thread acquires the lock automatically, reducing the cost of acquisition
    • Lightweight lock: when a biased lock is accessed by another thread, it is upgraded to a lightweight lock; the other thread spins to acquire it instead of blocking, which improves performance
    • Heavyweight lock: when spinning on a lightweight lock exceeds its limit, the lock is upgraded to a heavyweight lock, which blocks other threads and hurts performance.

    Locks can be upgraded but not downgraded: a biased lock upgraded to a lightweight lock cannot go back. The purpose of this strategy is to make acquiring and releasing locks more efficient.

  8. Spin lock: while acquiring the lock the thread does not block immediately; it retries in a loop, which avoids the cost of thread context switches, at the price of burning CPU while looping

Common lock types in Java

  1. synchronized: unfair, pessimistic, exclusive, mutually exclusive, reentrant, heavyweight lock
  2. ReentrantLock: unfair by default (can be made fair), pessimistic, exclusive, mutually exclusive, reentrant, heavyweight lock
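A minimal sketch of choosing fairness when constructing a ReentrantLock:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockFairness {
        // default: unfair lock, higher throughput
        static final ReentrantLock UNFAIR = new ReentrantLock();
        // pass true to get a fair lock: threads acquire it in request order
        static final ReentrantLock FAIR = new ReentrantLock(true);
    }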

CAS, short for Compare-And-Swap, is an atomic CPU instruction: the CPU compares the value at a memory location and updates it atomically only if it matches the expected value. The implementation rests on the hardware platform's assembly instructions, i.e. CAS is done in hardware and the JVM merely wraps the assembly call; atomic classes such as AtomicInteger use these wrappers.
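A small sketch of CAS through AtomicInteger (the values are arbitrary):

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasDemo {
        public static void main(String[] args) {
            AtomicInteger count = new AtomicInteger(0);
            count.incrementAndGet();                     // internally a CAS retry loop; value is now 1
            boolean ok = count.compareAndSet(1, 10);     // true: expected 1, swapped to 10
            boolean stale = count.compareAndSet(1, 20);  // false: current value is 10, not 1
            System.out.println(count.get() + " " + ok + " " + stale);  // 10 true false
        }
    }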

Synchronized and volatile

Briefly describe the principles of synchronized

Visibility: the value modified by thread A can be seen when thread B executes

  • Internally the monitorenter instruction is used; only one thread can acquire the monitor at a time
  • Threads that fail to acquire the monitor are blocked, waiting for it
  • Thread A reads the value from main memory and takes the lock; after updating the value in its working memory it flushes it to main memory, and through synchronized it implicitly causes thread B to read the fresh value from main memory. Thread B then flushes its own update back to main memory, and the cycle repeats.

Synchronized methods and code blocks are implemented with a monitor object: monitorenter and monitorexit instructions are inserted into the bytecode, and a thread that cannot acquire the monitor blocks on it

The difference between synchronized on static and non-static methods

Static methods: the lock is the Class object, so all instances created with new share one lock and concurrent threads must wait for one another

Non-static methods: the lock is the instance object

volatile

Applied to a member variable, volatile guarantees visibility and ordering (a later operation sees the result of the earlier one), but compound operations such as ++ are not atomic.

When the local (CPU) cache is written back to main memory, other local caches are invalidated; each core finds out that its own cached copy has expired by sniffing the bus. (Main memory does not actively notify anyone; the check happens on the next access.)

volatile does not guarantee atomicity; atomic updates can be achieved instead with the optimistic-lock (CAS retry) mechanism
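A classic sketch of what volatile is good for, a visibility flag (class and method names invented):

    public class Worker implements Runnable {
        // volatile makes the write in stop() immediately visible to the worker thread
        private volatile boolean running = true;

        @Override
        public void run() {
            while (running) {
                // do work
            }
        }

        public void stop() {
            running = false;   // without volatile the loop might never observe this write
        }
    }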

Difference between synchronized and volatile

  • Synchronized causes threads to block, while volatile does not

  • The difference between synchronized and volatile is that synchronized implicitly notifies B to obtain a value from main memory, while volatile means that B actively detects its memory expiration and synchronizes with main memory

  • Synchronized: clear the working memory → copy the copy of the latest variable in the main memory to the working memory → finish executing the code → refresh the value of the changed shared variable to the main memory → release the mutex.

  • Both have visibility, but volatile is not atomic, so it does not block threads

    Suppose i = 10 at some moment. Thread A reads 10 into its working memory and performs the increment, but before it assigns 11 back to i, thread B (since the value has not changed yet) also reads 10 from main memory into its working memory and performs its increment, also about to assign 11 to i. A then assigns 11 to i; because of volatile this is immediately synchronized to main memory, so main memory holds 11 and B's cached copy of i is invalidated. But B's remaining step is simply to assign its already-computed 11 to i: it is a plain write that never reads i again, so the invalidation does not help. B assigns 11 to i and this is immediately synchronized to main memory, where the value stays 11. Although both A and B performed an increment, i only advanced by one, which is why the final result of such a counter is not the expected 10000.

  • Synchronized modifies methods, classes, variables, code blocks, and volatile modifies only variables

What synchronized locks when applied to different targets

  1. On a class: the lock applies to all objects of that class
  2. On an instance method: the lock is the instance the method is called on
  3. On a static method: the lock is the Class object of the class
  4. On a code block: the lock is the object specified for that block

Pessimistic and Optimistic Locks (CAS)

Pessimistic lock: the thread holding the lock blocks other threads (synchronized)

Optimistic lock: no lock is taken. Three values are involved: the current value in memory, the expected old value, and the new value. If the current memory value equals the expected old value, no other thread has modified it, so the new value is written directly to memory; otherwise the update is retried.

Disadvantages of CAS:

  1. The ABA problem: a value changes A -> B -> A; the optimistic lock sees A both times, assumes nothing changed, and writes the new value anyway
  2. If the CAS keeps failing, the retry spin can run for a long time and waste CPU
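A sketch of how AtomicStampedReference exposes the ABA problem by pairing the value with a version stamp (the values are invented):

    import java.util.concurrent.atomic.AtomicStampedReference;

    public class AbaDemo {
        public static void main(String[] args) {
            AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
            int stamp = ref.getStamp();                         // 0
            ref.compareAndSet(1, 2, stamp, stamp + 1);          // A -> B
            ref.compareAndSet(2, 1, stamp + 1, stamp + 2);      // B -> A, but the stamp has advanced
            boolean ok = ref.compareAndSet(1, 3, stamp, stamp + 1); // false: the stale stamp reveals ABA
            System.out.println(ok + " " + ref.getReference() + " " + ref.getStamp()); // false 1 2
        }
    }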

ReentrantLock

CAS+AQS implementation, optimistic lock

AQS maintains a FIFO wait queue: threads that fail to acquire the lock are placed on the queue to wait, and when the owning thread finishes, a waiting thread is dequeued. A volatile int member variable (state) records the synchronization state

Locking is done with ReentrantLock's lock() method

Unlocking is done with ReentrantLock's unlock() method
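A minimal lock/unlock sketch (the counter is just an illustrative example):

    import java.util.concurrent.locks.ReentrantLock;

    public class LockedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private int value;

        public void increment() {
            lock.lock();          // blocks until the lock is acquired
            try {
                value++;
            } finally {
                lock.unlock();    // always release in finally
            }
        }
    }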

Threads

How many ways can I create a thread?

  1. Subclass Thread (new Thread)
  2. Implement Runnable and pass it to a Thread
  3. Implement Callable and wrap it in a FutureTask / Future
  4. Use a thread pool
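A sketch of the four approaches above (the task bodies are placeholders):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.FutureTask;

    public class CreateThreads {
        public static void main(String[] args) throws Exception {
            // 1. Subclass Thread
            new Thread() {
                @Override
                public void run() { System.out.println("thread subclass"); }
            }.start();

            // 2. Pass a Runnable to a Thread
            new Thread(() -> System.out.println("runnable")).start();

            // 3. Callable wrapped in a FutureTask: has a return value
            FutureTask<Integer> task = new FutureTask<>(() -> 42);
            new Thread(task).start();
            System.out.println(task.get());

            // 4. Submit the task to a thread pool
            ExecutorService pool = Executors.newFixedThreadPool(2);
            pool.submit(() -> System.out.println("from pool"));
            pool.shutdown();
        }
    }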

Disadvantages of new Thread

To run an asynchronous task, is a bare new Thread really all you need? Its disadvantages are as follows:

  1. Each new Thread creates an object with poor performance.
  2. Lack of unified management of threads, unlimited new threads may compete with each other, and may occupy too many system resources, resulting in a crash or OOM.
  3. Lack of more features, such as timed execution, periodic execution, thread interrupts.

The benefits of Java's four thread pools over new Thread are as follows (a sketch follows the list):

  1. Reuse existing threads, reduce the overhead of object creation and death, and improve performance.
  2. It can effectively control the maximum number of concurrent threads, improve system resource usage, and avoid excessive resource competition and congestion.
  3. Provides scheduled execution, periodic execution, single thread, concurrency control and other functions.
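A sketch of the four Executors factory methods usually meant by "Java's four thread pools" (the parameters are examples):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PoolsDemo {
        public static void main(String[] args) {
            ExecutorService fixed = Executors.newFixedThreadPool(4);      // fixed number of reusable threads
            ExecutorService cached = Executors.newCachedThreadPool();     // grows on demand, reuses idle threads
            ExecutorService single = Executors.newSingleThreadExecutor(); // tasks run one by one on a single thread
            ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // timed / periodic execution

            sched.scheduleAtFixedRate(() -> System.out.println("tick"), 0, 1, TimeUnit.SECONDS);
        }
    }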

Thread pools

Brief Description of thread pools

The states of a thread

  • NEW: the thread has been created
  • RUNNABLE: ready to run or running
  • BLOCKED: blocked waiting for a monitor lock
  • WAITING: waiting indefinitely
  • TIMED_WAITING: waiting with a time limit
  • TERMINATED: finished
  • RUNNING: currently executing (a sub-state of RUNNABLE at the OS level)
  • READY: ready to be scheduled (a sub-state of RUNNABLE at the OS level)

Generally speaking, there are five states:

  1. New: Creates a thread object and enters the New state. Eg: Thread Thread = new Thread();
  2. Runnable: The thread.start() method is called, ready to be executed by the CPU
  3. Running: the CPU is executing the thread
  4. Blocked: For some reason, the CPU aborts execution of a thread and the thread enters a suspended state
    • Wait to block: The wait method is called to block and the thread waits for some work to complete
    • Synchronization blocking: Waits for a Synchronized lock to be acquired
    • Other blocking: A thread is blocked by calling its sleep() or join() or by making an I/O request. When the sleep() state times out, when the join() wait thread terminates or times out, or when I/O processing is complete, the thread goes back to the ready state.
  5. Dead: the thread is Dead and recycled

The difference between start and run? What’s the difference between “sleep” and “wait”? Join, yield, interrupt

  • Start is to start a thread
  • run() is just an ordinary method of Thread; its main job is to call back the Runnable's run(), and invoking it directly runs on the current thread rather than starting a new one
  • Sleep does not release the object lock, but suspends the thread and resumes the running state when the specified time is up
  • The wait method abandons the object lock, and only when notify() is called will the lock be reacquired and run
  • The join method defines execution order between threads. If A.join() is called inside thread B, B waits until A has finished before continuing. Internally join() calls wait(), keeping B in the waiting state and letting it proceed once A completes

Note: it is the wait() call that releases the object lock and suspends the caller, so calling A.join() inside thread B leaves thread B waiting.

  • Yield method, which tells the CPU that the thread task is not urgent and can be paused to allow another thread to run
  • The interrupt method notifies a thread via its interrupt flag; the thread reacts with different logic depending on its state

How can threads T1, T2, and T3 execute sequentially?

t2.join() is called at the start of t3's run(), and t1.join() is called at the start of t2's run().

Once t1 has finished, t1.join() inside t2 no longer blocks, so t2's work runs after t1; the same applies to t3 after t2. A CountDownLatch could also be used to achieve this with a count

public static void main(String[] args) {

    final Thread t1 = new Thread(new Runnable() {

        @Override
        public void run() {
            System.out.println("t1");
        }
    });

    final Thread t2 = new Thread(new Runnable() {

        @Override
        public void run() {
            try {
                // reference the t1 thread and wait for it to finish executing
                t1.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("t2");
        }
    });

    Thread t3 = new Thread(new Runnable() {

        @Override
        public void run() {
            try {
                // reference the t2 thread and wait for it to finish
                t2.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("t3");
        }
    });

    t3.start();
    t2.start();
    t1.start();
}

What is a deadlock

Resources compete and wait for each other

Let’s say thread A, thread B, resource A, resource B

Thread A accesses resource A and holds the lock of resource A. Thread B accesses resource B and holds the lock of resource B. Then thread A wants to access resource B, but thread B holds the lock of resource B. Thread A waits. Thread B wants to access resource A, but thread A holds the lock of resource A. So B waits.

As A result, A and B wait for each other to release resources, resulting in A deadlock.
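A minimal sketch of the situation described above (lock objects and timing are invented for illustration):

    public class DeadlockDemo {
        private static final Object A = new Object();
        private static final Object B = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (A) {
                    pause();                 // give the other thread time to lock B
                    synchronized (B) { }     // waits forever: B is held by the other thread
                }
            }).start();
            new Thread(() -> {
                synchronized (B) {
                    pause();
                    synchronized (A) { }     // waits forever: A is held by the first thread
                }
            }).start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        }
    }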

Does the crash of one thread affect other threads?

Not necessarily. If the crash occurs in the heap (the thread-shared area), it causes other threads to crash. If a crash occurs in the stack (thread private area), it will not cause other threads to crash

Java reflection

  1. Obtaining a class or a method by reflection searches through a list for a match, so lookup performance varies with the number of methods the class has;
  2. Each class has a single corresponding Class instance, so a Method obtained from that Class by reflection can be applied to any instance of the class;
  3. Reflection is considered thread-safe and is safe to use;
  4. Reflection caches class information in the soft-referenced reflectionData, avoiding the overhead of fetching it from the JVM every time;
  5. After many reflective calls, a new bytecode-generated accessor is produced; the generated bytecode must remain unloadable, so a separate class loader is used for it;
  6. When a Method is found, a copy is returned rather than the original instance, to ensure data isolation;
  7. Dispatching a reflective call ultimately ends up in the JVM's native invoke0();

Reflection reads, inside the JVM, the class data that was loaded from the binary .class file

Reflection principle

.java -> .class -> java.lang.Class object

Compilation process:

  • The .java file is compiled into a binary .class file that the machine can read
  • The .class file stores all kinds of information about the class: version number, class name, field descriptions and descriptors, method names and descriptions, whether it is public, the class index, the field table collection, the method collection, and so on
  • The JVM takes the .class binary file and loads it into memory for parsing
  • The class loader takes the binary information of the class and generates a java.lang.Class object in memory
  • Finally the class's life cycle starts and it is initialized (static members first, then non-static members and constructors; parent class before subclass)

Reflection operates on java.lang.Class objects in memory.

In summary, the .class file has a well-defined sequential structure, and the Class object is its in-memory representation; all information about a class can therefore be obtained from its Class object, and that is how reflection works.

Why does reflection take time?

  1. Access checks take a long time
  2. Boxing and unboxing of primitive types
  3. Method inlining is prevented

What is an inline function?

A frequently called method can be inlined: the call site is replaced with the method body, reducing the nesting level of calls, speeding up execution and reducing stack space

Can reflection modify a member variable of final type?

A final field is known not to change, so reads of the variable are replaced with its constant value directly at compile time

The compiler inserts the specified function body and replaces every place (context) where the function is called, saving extra time each time the function is called.

So the above getName method, after JVM compilation and inline optimization, will become:

public String getName() {
    return "Bob";
}

// prints Bob
System.out.println(user.getName());
// after inline optimization
System.out.println("Bob");

Reflection can modify final variables, but for primitive data types or strings, the modified value cannot be retrieved from the object because the JVM optimizes it inline.

Can reflection change the static value?

Yes. For a static field, field.get(null) reads the value and field.set(null, newValue) modifies it.
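A small sketch of reading and modifying a static field via reflection (Config and env are invented names):

    import java.lang.reflect.Field;

    class Config {
        static String env = "dev";
    }

    public class ReflectStatic {
        public static void main(String[] args) throws Exception {
            Field f = Config.class.getDeclaredField("env");
            f.setAccessible(true);
            System.out.println(f.get(null));   // "dev"; null because the field is static
            f.set(null, "prod");               // modify the static field
            System.out.println(Config.env);    // "prod"
        }
    }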

Java exceptions

Introduction

Throwables in Java fall into two broad categories: Error and Exception. Errors include StackOverflowError and OutOfMemoryError; Exception is divided into checked exceptions such as IOException and unchecked ones such as RuntimeException.

What is the difference between checked and non-checked exceptions in Java?

Checked exception (extends Exception, a compile-time exception): it must be caught with try/catch or declared, otherwise a compile error occurs; it inherits from Exception

Non-checked exception (extends RuntimeException): no catch is required; it is thrown at run time when something goes wrong.

try-catch-finally-return execution order?

  1. The code in the finally block executes whether or not an exception is raised
  2. The finally block is still executed when there are return statements in the try and catch blocks
  3. finally executes after the expression following return has been evaluated, so the function's return value is determined before finally runs; whatever the code in finally does, the returned value stays the same as the value saved by the earlier return statement (see the sketch below)
  4. It is best not to put a return in finally, or the method will exit from there and the returned value will not be the one saved in the try or catch
throw and throws

A throw is used inside a method to throw an exception

throws is declared on the method signature and passes the exception out to the caller
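A minimal sketch of the two keywords (the method name is invented):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class ThrowDemo {
        // throws: declared on the signature, passing the checked exception on to the caller
        static FileInputStream open(String path) throws IOException {
            if (path == null) {
                // throw: actually raises an exception inside the method body
                throw new IOException("path is null");
            }
            return new FileInputStream(path);
        }
    }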

When does StackOverflowError occur?

Recursion that fills up the stack memory, or a function call stack that is too deep

Common Java exceptions

java.lang.IllegalAccessError: illegal access error. Thrown when an application attempts to access or modify a field, or to call a method, in violation of that field's or method's visibility declaration.

java.lang.InstantiationError: instantiation error. Thrown when an application attempts to construct an abstract class or an interface with the new operator.

java.lang.OutOfMemoryError: out-of-memory error. Thrown when the available memory is insufficient for the Java virtual machine to allocate an object.

java.lang.StackOverflowError: stack overflow error. Thrown when an application's recursive calls are too deep, or an endless loop of calls makes the stack overflow.

java.lang.ClassCastException: class cast exception. Given classes A and B (A is neither a parent nor a subclass of B) and an instance o of A, this exception is thrown when o is cast to type B. It is often called a forced-conversion exception.

java.lang.ClassNotFoundException: class not found exception. Thrown when an application tries to construct a class from its string name and no class file with that name can be found after traversing the CLASSPATH.

java.lang.ArithmeticException: arithmetic exception, e.g. an integer divided by zero.

java.lang.ArrayIndexOutOfBoundsException: array index out of bounds. Thrown if an array index is negative, or greater than or equal to the array length.

java.lang.IndexOutOfBoundsException: index out of bounds. Thrown when the index used to access a sequence is less than 0, or greater than or equal to the sequence size.

java.lang.InstantiationException: instantiation exception. Thrown when newInstance() is used to create an instance of a class that is abstract or an interface.

java.lang.NoSuchFieldException: field does not exist. Thrown when a nonexistent field of a class is accessed.

java.lang.NoSuchMethodException: method does not exist. Thrown when a nonexistent method of a class is accessed.

java.lang.NullPointerException: null pointer exception. Thrown when an application uses null where an object is required, e.g. calling an instance method of a null object, accessing a field of a null object, taking the length of a null array, or throwing null.

java.lang.NumberFormatException: number format exception. Thrown when an attempt is made to convert a String to a numeric type but the String does not match the required format.

java.lang.StringIndexOutOfBoundsException: string index out of bounds. Thrown when a character of a string is accessed with an index that is less than 0 or greater than or equal to the string's length.

Linux processes communicate in several ways

What is interprocess communication in Linux? Explain why Binder communication is efficient? What are the limitations of Binder communication?

Interprocess communication in Linux has the following types:

  • Signals
  • Message queues
  • Shared memory: two or more processes communicate by sharing a block of memory that is mapped into each of their independent address spaces.
  • Pipes: the name describes the behaviour of the two communicating parties, process A and process B; a pipe has a read end and a write end, so if process A writes to the write end, process B can read from the read end.
  • Local (Unix domain) sockets: the client and the server each maintain a socket file; after the connection is established and opened, data written by one side can be read by the peer

Binder communication is a unique IPC mechanism for Android. Binder communication has the following advantages:

  1. Performance: Binder is efficient and requires only one memory copy, while pipes, message queues and sockets in Linux all need two copies. Shared memory needs no copy at all, but has multi-process synchronization problems.
  2. Stability: Binder architecture is based on C/S structure, with Client’s requirements left to the Server. The structure is clear, responsibilities are clear and independent from each other, which naturally provides better stability. Shared memory does not require copying, but is controlled and difficult to use. Binder mechanisms are superior to memory sharing from a stability perspective.
  3. Security: Traditional IPC recipients cannot obtain the reliable process user ID/ process ID (UID/PID) of the other party to authenticate the other party. Android assigns its own UID to each installed APP, so a process’s UID is an important indicator of the process’s identity. In the Android system, only the Client is exposed. The Client sends tasks to the Server. The Server checks whether the UID/PID meets the access permission based on the permission control policy. Binder is more secure from a safety perspective.

Binder communication also has limitations: a process has at most 16 Binder threads, and a single transaction can carry at most about 1 MB of data, otherwise a TransactionTooLargeException is thrown.

CountDownLatch principle

Scenario: there are four threads, and another thread should run only after all four have finished.

CountDownLatch is a counter-based latch with two key methods:

countDown(): decrements the count by 1

await(): the calling thread blocks; when the count reaches 0, it continues with the subsequent logic
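A minimal sketch of the four-worker scenario above:

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch latch = new CountDownLatch(4);
            for (int i = 0; i < 4; i++) {
                new Thread(() -> {
                    System.out.println(Thread.currentThread().getName() + " done");
                    latch.countDown();      // count - 1
                }).start();
            }
            latch.await();                  // blocks until the count reaches 0
            System.out.println("all four workers finished, continue");
        }
    }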

Java generics

Generic description

In Java, a generic type is a “parameterized type,” meaning that the generic type is passed in as a parameter

Generics exist only in the program's source code; in the compiled bytecode they have been replaced by raw types. This is why Java generics are called pseudo-generics.

Generics in Java are only effective at compile time: once the generic code has been verified, the generic-related information is erased, and type-check and cast instructions are inserted at the method boundaries where objects enter and leave.

        List<String> stringArrayList = new ArrayList<String>();
        List<Integer> integerArrayList = new ArrayList<Integer>();

        Class classStringArrayList = stringArrayList.getClass();
        Class classIntegerArrayList = integerArrayList.getClass();

        if (classStringArrayList == classIntegerArrayList) {   // returns true
            System.out.println("Same type");
        }

Generics have generic classes, generic methods, and generic interfaces

A generic class:

// T can be any identifier; common parameters such as T, E, K, V are used to represent generics.
// When instantiating a generic class, the concrete type of T must be specified.
public class Generic<T> {
    // key is a member variable of type T; its type is specified externally
    private T key;

    public Generic(T key) { // the generic constructor parameter key is also of type T, specified externally
        this.key = key;
    }

    public T getKey() {     // the generic method getKey returns a value of type T, specified externally
        return key;
    }
}

Generic interfaces:

// Define a generic interface
public interface Generator<T> {
    public T next();
}

/**
 * When the type argument is NOT passed in:
 * the implementing class must also declare <T>, i.e.
 * class FruitGenerator<T> implements Generator<T>,
 * otherwise the compiler reports T as an "unknown class".
 */
class FruitGenerator<T> implements Generator<T> {
    @Override
    public T next() {
        return null;
    }
}

/**
 * When the type argument IS passed in:
 * define a producer that implements the interface. Although only one generic interface
 * Generator<T> was created, an unlimited number of arguments can be passed for T,
 * forming an unlimited number of concrete Generator types.
 * When an implementation class fixes the type argument, every use of the generic type
 * is replaced by the passed argument: in Generator<String>, "public T next();"
 * has its T replaced by the String that was passed in.
 */
public class FruitGenerator implements Generator<String> {

    private String[] fruits = new String[]{"Apple", "Banana", "Pear"};

    @Override
    public String next() {
        Random rand = new Random();
        return fruits[rand.nextInt(3)];
    }
}

Generic methods:

/**
 * A basic introduction to generic methods
 * @param tClass the Class object of the generic type passed in
 * @return an instance of T
 * 1) The <T> between public and the return type declares this as a generic method.
 * 2) Only methods that declare <T> are generic methods; member methods that merely use the
 *    type parameter of a generic class are not generic methods themselves.
 * 3) <T> states that the method will use type T, so T can then be used inside the method.
 * 4) As with generic classes, T can be any identifier; T, E, K, V are the usual choices.
 */
public <T> T genericMethod(Class<T> tClass) throws InstantiationException, IllegalAccessException {
    T instance = tClass.newInstance();
    return instance;
}

How does generics affect method overloading?

The two methods cannot be overloaded together; compilation fails because generic erasure at compile time gives both the same erased signature

public class MyMethod {
    public void listMethod(List<String> list1) { }
    public void listMethod(List<Integer> list2) { }
}

Class loading

Initialization process for Java classes

Superclass before subclass; static before (non-static, constructor); variables before code blocks

Parent static variables -> parent static code blocks -> subclass static variables -> subclass static code blocks -> parent non-static members/blocks -> parent constructor -> subclass non-static members/blocks -> subclass constructor

The seven phases of the JVM class loading mechanism

Loading -> Verification -> Preparation -> Resolution -> Initialization -> Use -> Unloading. The JVM loads the binary .class file compiled from the .java source. Class loading:

  1. Obtain the binary byte stream of the class
  2. Convert its static storage structure into the runtime data structures of the method area and store them there
  3. Generate a java.lang.Class object in the heap as the access entry to the class data in the method area

In short: take the .class file, generate a Class object in the heap, and store the loaded class structure information in the method area


Verification: JVM specification verification, code logic verification

Preparation: allocates memory for class (static) variables and sets them to their default values. If a variable is final, it is assigned its constant value directly from the constant pool

Parsing: Constant pool symbolic references are replaced with direct references to memory

(These three phases together are called linking)


Initialization: Executes code logic to initialize static variables, static code blocks, and class objects

Use: Use the initialized class object

Unloading: the Class object created for the class is destroyed and the JVM removes the class from memory

The difference between global and local variables
  1. Global (member) variables apply to the entire class; local variables exist only while the method executes and are then reclaimed; static variables stay visible for as long as the class is loaded
  2. Global variables, global static variables and static variables live in static storage (the method area); local variables are allocated space on the stack (the virtual machine stack)
  3. Global (member) variables get default values without explicit initialization, while local variables must be assigned before use
  4. Two global variables with the same name cannot be declared in one class, and two local variables with the same name cannot be declared in one method; if a global variable and a local variable share a name, the global variable is shadowed inside the method.
Flowchart (described below)

When the JVM encounters a new bytecode, it checks whether the class has been initialized. If not (the class may not even be loaded yet; with implicit loading it is loaded, verified, prepared and resolved first), the class is initialized. If it has already been initialized, instantiation of the object starts directly and the object's methods can then be called.

The timing of class initialization
  1. The main class containing the main() method is initialized when the program starts
  2. The new keyword triggers initialization if the class has not yet been initialized
  3. When a static method or static field of a class is accessed and that class has not been initialized, it is initialized first
  4. When a subclass is being initialized and its parent class has not been, the parent class is initialized first
  5. When a class is called through the reflection API and has not been initialized, it is initialized
  6. On the first invocation of a java.lang.invoke.MethodHandle instance, the class containing the method the handle points to is initialized if necessary.
When class instantiation is triggered
  1. new triggers instantiation and creates the object
  2. Reflection: the Class.newInstance() and Constructor.newInstance() methods create the object
  3. The clone() method creates the object
  4. Objects are created through the serialization and deserialization mechanism
Class initialization and class instantiation

Class initialization: assigns values to static members and executes static code blocks. Class instantiation: executes non-static (instance) initializers and constructors

  1. Class initialization is performed only once, and static code blocks only once
  2. Class instantiation is performed multiple times, once per instantiation
Is it possible to instantiate an object directly before the class is initialized?

Normally class initialization completes before class instantiation. In unusual cases, such as a static variable that instantiates the class itself, instantiation can start before the class has finished initializing:

public class Run {
    public static void main(String[] args) {
        new Person2();
    }
}

public class Person2 {
    public static int value1 = 100;
    public static final int value2 = 200;

    public static Person2 p = new Person2();
    public int value4 = 400;

    static {
        value1 = 101;
        System.out.println("1");
    }

    {
        value1 = 102;
        System.out.println("2");
    }

    public Person2() {
        value1 = 103;
        System.out.println("3");
    }
}

public static Person2 p = new Person2(); runs inside the class initializer before the static block, so the instance block ("2") and the constructor ("3") execute first, then the static block prints "1". The new Person2() in main then prints "2" and "3" again, so the output is:

23123
Is there a problem with multithreading class initialization?

No. Class initialization is performed under a lock and blocks: if several threads trigger it at the same time, only one thread executes the initializer and the others wait until it finishes.
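
This blocking guarantee is what the classic static-holder singleton relies on; a minimal sketch (names are hypothetical):

public class Singleton {
    private Singleton() { }

    // Holder is only initialized when getInstance() is first called,
    // and the JVM guarantees its <clinit> runs exactly once,
    // even if many threads call getInstance() at the same time.
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}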

How many times can an instance variable be assigned during object initialization?

4 times

  1. When an object is created, memory allocation assigns the instance variable to its default value, which is guaranteed to happen.
  2. When the field initializer runs, it is assigned once (e.g. int value1 = 100).
  3. When the code block is initialized, it is also assigned once.
  4. Constructor, perform the assignment once.
public class Person3 {
    public int value1 = 100;

    {
        value1 = 102;
        System.out.println("2");
    }

    public Person3() {
        value1 = 103;
        System.out.println("3");
    }
}

The screen

What do 60Hz and 120Hz mean on a high-refresh-rate phone?

They are the screen refresh rate, i.e. the number of times the screen refreshes per second, determined by the phone's display hardware. Rates above 60Hz are generally considered high refresh rate; the higher frequency keeps the display relatively smooth even when frames are dropped or the UI stutters.

Screen refresh process

Pixels are displayed sequentially from left to right and top to bottom. After the whole screen has been refreshed, i.e. one vertical refresh cycle, the next refresh starts about 16ms later (1000ms / 60 ≈ 16.7ms at 60Hz). In general, drawing a graphical interface requires the CPU to prepare the data and the GPU to draw it; the result is written into a buffer, and the screen then fetches the image from that buffer for display at its refresh rate.

So the whole refresh process is a cooperation between the CPU, the GPU and the Display.

Frame rate, what is VSYNC

Frame rate: the number of frames the GPU renders per second (unit: FPS). Ideally the frame rate matches the screen refresh rate, so neither side does wasted work.

VSYNC: vertical synchronization, which keeps frame production aligned with the screen refresh rate to prevent stutter and frame skipping. Because CPU and GPU drawing times are not constant, the next frame may not be ready when the screen needs it, which shows up as stutter. With VSYNC, the CPU and GPU are required to produce the next frame within one refresh interval (about 16ms at 60Hz), so when the screen refreshes, the next frame can be fetched directly from the buffer and displayed.

Single, double and triple buffering on screen
  1. Single buffer: the CPU computes the data and hands it to the GPU; the GPU draws the image into the buffer; the Display reads the buffer and refreshes the screen
  2. Double buffering: the CPU computes the data and hands it to the GPU; the GPU draws into the BackBuffer; when the VSYNC signal arrives, the data is swapped into the FrameBuffer. If CPU + GPU rendering does not finish within one VSYNC interval (for example, drawing starts close to the next VSYNC, so only a small part of the interval is actually used), a whole VSYNC interval is wasted: when the next VSYNC arrives the GPU is still busy with the current frame, work on the following frame cannot start, and the result is visible stutter
  3. Triple buffering: when processing does not finish within one VSYNC interval, a third buffer is added so the next frame can already be drawn into it during the second interval. The two back buffers are used alternately, which keeps the FrameBuffer supplied with fresh data and keeps the display smooth
How does the screen refresh when the UI is modified in the code?

When invalidate()/requestLayout() is called to trigger a redraw, a request is sent to the VSYNC service; on the next VSYNC signal the interface is drawn and refreshed along the CPU → GPU → Display path
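
A rough Android-flavoured sketch: invalidate() schedules a redraw for the next VSYNC, and Choreographer exposes the VSYNC-driven frame callback (someView is assumed to be an existing View in the current window):

import android.view.Choreographer;
import android.view.View;

public class RefreshDemo {
    static void requestRedraw(View someView) {
        // Mark the view dirty; the framework requests the next VSYNC and redraws it then.
        someView.invalidate();

        // Observe the VSYNC-driven frame callback directly (fires once for the next frame).
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            @Override
            public void doFrame(long frameTimeNanos) {
                // frameTimeNanos is the VSYNC timestamp of the frame being drawn.
            }
        });
    }
}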

Will the screen refresh if the interface remains static? Will the image be redrawn?

The image will not be redrawn. If the interface stays unchanged, the app does not request or receive VSYNC signals and simply skips the refresh work; only when the interface changes does it request the VSYNC service and trigger a redraw on the next VSYNC.

JVM garbage collection mechanism

Let's start with the four reference types

Strong reference: never collected while it is still reachable

Soft reference: reclaimed only when the system is running out of memory

Weak reference: collected at the next GC

Phantom (virtual) reference: can be reclaimed at any time; it cannot be used to access the object and mainly serves to receive a notification when the object is collected
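
A small sketch of the four reference types using java.lang.ref (GC behaviour is not strictly deterministic, so the printed results are the typical ones):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();                                      // strong: never collected while reachable

        SoftReference<Object> soft = new SoftReference<>(new Object());    // collected only when memory is low
        WeakReference<Object> weak = new WeakReference<>(new Object());    // collected at the next GC

        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);               // get() always returns null; used for cleanup notification

        System.gc();
        System.out.println(soft.get());     // usually still non-null
        System.out.println(weak.get());     // usually null after the GC
        System.out.println(phantom.get());  // always null
    }
}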

Miscellaneous notes

The difference between abstract classes and interfaces

  1. An abstract class can contain ordinary methods with implementations; an interface traditionally contains only abstract methods with no implementation (since Java 8, default and static methods are also allowed)
  2. Fields in an abstract class can have any type and modifiers; fields in an interface are implicitly public static final
  3. A class can extend only one abstract class but can implement multiple interfaces
  4. An abstract class has constructors; an interface does not
  5. An abstract class can contain initializer blocks; an interface cannot
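
A small sketch of these differences (Animal, Runnable2 and Dog are hypothetical names):

abstract class Animal {
    protected String name;              // fields of any type and visibility

    Animal(String name) {               // abstract classes have constructors
        this.name = name;
    }

    abstract void makeSound();          // abstract method

    void breathe() {                    // concrete method with an implementation
        System.out.println(name + " breathes");
    }
}

interface Runnable2 {
    int MAX_SPEED = 100;                // implicitly public static final
    void run();                         // implicitly public abstract
}

class Dog extends Animal implements Runnable2 {   // one superclass, many interfaces
    Dog(String name) { super(name); }
    @Override void makeSound() { System.out.println("woof"); }
    @Override public void run() { System.out.println("runs at most " + MAX_SPEED); }
}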

Static and final

static

Can be accessed directly through the class name (ClassName.method / ClassName.field)

Can modify fields, methods, code blocks and inner classes

All instances share a single copy of a static member

final

Can modify fields, methods, classes and local variables

A final variable's value cannot be changed, a final method cannot be overridden, and a final class cannot be inherited

If final modifies a collection, the reference cannot be reassigned, but the contents of the collection can still change freely
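
A minimal sketch of these points (names are hypothetical):

import java.util.ArrayList;
import java.util.List;

public class StaticFinalDemo {
    static int counter = 0;                               // one copy shared by all instances
    static final List<String> NAMES = new ArrayList<>();  // final reference to a collection

    public static void main(String[] args) {
        StaticFinalDemo.counter++;        // static member accessed via the class name

        NAMES.add("a");                   // contents of a final collection can still change
        // NAMES = new ArrayList<>();     // compile error: a final reference cannot be reassigned

        final int x = 1;
        // x = 2;                         // compile error: a final local variable cannot be changed
    }
}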

Is Java passed by value or by reference

Java is always pass by value.

For primitive types, the value itself is copied into the parameter.

For reference types, a copy of the reference is passed: reassigning the parameter inside the method does not affect the caller's variable, but mutating the object through that reference does.

String therefore behaves like pass by value: reassigning the parameter just makes the local copy of the reference point to a new String object, and the caller's reference is unchanged.

public void test() {
    String str = "123";
    changeValue(str);
    System.out.println("str value is:" + str);  // str is not changed, str = "123"
}

public void changeValue(String str) {
    str = "abc";   // only the local copy of the reference now points to "abc"
}
// Minimal Student class assumed by this example
class Student {
    String name;
    int age;
    Student(String name, int age) { this.name = name; this.age = age; }
}

public void test() {
    Student student = new Student("Bobo", 15);
    changeValue1(student);    // student is not null! It is still name:Bobo, age:15
    // changeValue2(student); // would change it to name:Lily, age:20
    System.out.println("student = name:" + student.name + ", age:" + student.age);
}

public void changeValue1(Student student) {
    student = null;           // only the local copy of the reference is cleared
}

public static void changeValue2(Student student) {
    student.name = "Lily";    // mutating the object through the reference affects the caller
    student.age = 20;
}

The difference between String, StringBuilder, StringBuffer

String is immutable; every concatenation or modification creates a new object, which costs memory and performance

StringBuilder is not thread-safe; it stores characters in a resizable char[] array

When it grows, the new capacity is (old capacity × 2 + 2); if the required length is larger than that, the required length is used instead. The default capacity is 16; the String-argument constructor uses 16 + the argument's length

StringBuffer is thread-safe (its methods are synchronized)

Efficiency from fast to slow: StringBuilder > StringBuffer > String
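
A small sketch of the capacity rule above; the exact numbers follow typical OpenJDK behaviour (the rule is an implementation detail, not part of the API contract):

public class SbDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();          // default capacity
        System.out.println(sb.capacity());               // 16

        StringBuilder sb2 = new StringBuilder("hello");  // 16 + "hello".length()
        System.out.println(sb2.capacity());              // 21

        sb.append("aaaaaaaaaaaaaaaaa");                  // 17 chars > 16, so the array grows
        System.out.println(sb.capacity());               // 16 * 2 + 2 = 34
    }
}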

Why is String final?

final (on the class) plus private (on the internal character array) guarantee that a String cannot be modified

  1. Immutability ensures thread safety
  2. Immutability avoids deep copies and allows String values to be placed in the string constant pool (in the heap) and shared by other code, improving efficiency and saving memory

What is the difference between hashCode, equals and ==?

hashCode:

  1. For primitive wrapper types, the hash code is computed from the value
  2. For ordinary reference types, the default hash code is derived from the object's identity (roughly, a mapping of its memory address)

equals:

  1. The equals method in Object is equivalent to ==
  2. Other classes can override equals so that it determines whether the values are equal

==:

  1. For primitive types, == compares the values
  2. For reference types, == compares whether the two references point to the same object (the same address)

For String and Integer, equals is overridden, so equals checks whether the values are equal, while == checks whether the references are the same
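
A small sketch of a class that overrides equals and hashCode for value equality (Point is a made-up class):

import java.util.Objects;

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                 // same reference
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;                // value equality
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);                  // equal objects must have equal hash codes
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a == b);                        // false: different objects
        System.out.println(a.equals(b));                   // true: same values
        System.out.println(a.hashCode() == b.hashCode());  // true
    }
}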

What’s the difference between process, thread, coroutine? What’s the difference between blocking and non-blocking?

process

A process is the smallest unit of resource allocation by the operating system

thread

A thread can contain coroutines. A thread is the basic unit of independent scheduling and execution (what actually runs on the CPU is a thread); threads share the resources of their process, and switching between threads is faster than switching between processes

coroutines

Coroutines live on top of threads; a single thread can run many coroutines, typically by handling I/O asynchronously. Switching between coroutines is faster than switching between threads

Blocking and non-blocking

Blocking means a thread is suspended and does not execute its logic until the condition it is waiting for is met; non-blocking means the call returns immediately and the thread can keep running

Concurrency and parallelism

Concurrency means tasks take turns (you do a bit, I do a bit, alternating); parallelism means tasks actually run at the same time

Coroutines versus threads
  1. Coroutines run on top of threads
  2. Thread execution is controlled by the kernel (kernel-mode execution), controlling thread switching consumes resources (preemptive), and coroutines are executed by programs (i.e., in user-mode execution)
  3. Coroutines are much lighter than threads
  4. In the case of multi-core processors, multiple threads can be parallel, but only one function of the coroutine is running, and all other coroutines are suspended. That is, coroutines are concurrent, but not parallel.
  5. Coroutines improve performance for I/O-intensive workloads
  6. Switching between coroutines does not need to involve any system calls or any blocking calls

IO

  1. Multiplexed IO

    A single thread monitors several I/O channels at once (e.g. via select/epoll) and is told which ones are ready, so events A and B can be handled together without one thread per event

  2. Signal-driven IO

    The kernel sends the process a signal when the data is ready; the process then performs the read

  3. Asynchronous IO

    Event A is handed off for asynchronous processing, and the main process/thread is notified when A has completed

  4. Blocking IO

    The caller waits for event A to finish before doing anything else; event B only starts after A is done

  5. Non-blocking IO

    The call returns immediately; the caller keeps polling to check the progress of A and B while continuing its own work
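
A minimal sketch of multiplexed IO using Java NIO's Selector (the port number and the accept-only handling are just for illustration):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

public class SelectorDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                    // non-blocking channel
        server.register(selector, SelectionKey.OP_ACCEPT);  // register interest in "accept" events

        while (true) {
            selector.select();                              // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    server.accept();                        // handle the ready event
                }
            }
        }
    }
}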