Without further ado, straight to the good stuff.

1. HashMap

Every Java developer knows HashMap. It has the following features:

1. Iteration over a HashMap is not in insertion order.
2. Both keys and values are allowed to be null.
3. HashMap is not thread-safe; in a multi-threaded scenario, use ConcurrentHashMap (or the legacy Hashtable) instead.
4. In JDK 8 the underlying structure is an array + linked lists + red-black trees.
5. The initial capacity and load factor are the key parameters that determine the performance of the whole class.
6. HashMap is lazily initialized: the table is only built on the first put.
7. When a singly linked list is converted to a red-black tree, it is first converted to a doubly linked list and then to a red-black tree; the doubly linked list and the red-black tree coexist.
8. For two keys that land in the same tree, a tie-breaking comparison is forced to decide whether the new node goes to the left or the right.
9. After a list is converted to a red-black tree, the root of the tree, the head of the list, and table[i] are kept pointing at the same node.
10. When deleting a node from a red-black tree, it first checks whether the tree has become small enough to convert back to a list; if not, it finds a suitable replacement node for the deleted one.
11. The root of the red-black tree is not necessarily the same as the head node of table[i]; moveRootToFront keeps the three in sync, and HashIterator.remove() calls removeNode with movable = false.
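As a quick illustration of the null tolerance mentioned above, a minimal sketch (the class and method names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapFeatures {
    // Shows that HashMap tolerates one null key and any number of null values.
    static Map<String, String> build() {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value-for-null-key"); // one null key is allowed
        map.put("k1", null);                 // null values are allowed
        map.put("k2", "v2");
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> map = build();
        System.out.println(map.get(null));
        System.out.println(map.containsKey("k1")); // key present even though value is null
    }
}
```

Note that because a value may legitimately be null, `containsKey` rather than `get(...) != null` is the correct presence check.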

Common HashMap interview questions:

1. How HashMap works internally and its data structure.
2. The general flow of put, get, and remove.
3. How the hash function is implemented.
4. How HashMap resizes.
5. Why the important parameters of HashMap are set the way they are.
6. Why HashMap is not thread-safe, and what to use instead.
7. Differences between HashMap in JDK 7 and JDK 8.
8. How HashMap switches between linked lists and red-black trees.
9. How a linked-list cycle can form in the JDK 7 HashMap.
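For the hash-function question, the JDK 8 HashMap spreads the high bits of hashCode into the low bits and then masks with the table length; a sketch of that idea (the class name is illustrative):

```java
public class HashDemo {
    // JDK 8 style hash spreading: XOR the high 16 bits into the low 16 bits,
    // so small tables (which only use the low bits) still see the high bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Bucket index for a table whose length n is a power of two.
    static int index(Object key, int n) {
        return (n - 1) & hash(key);
    }

    public static void main(String[] args) {
        System.out.println(index("hello", 16));
        System.out.println(index(null, 16)); // the null key always maps to bucket 0
    }
}
```

The `(n - 1) & hash` trick is why HashMap keeps its capacity a power of two: the mask then selects the low bits uniformly.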

2. ConcurrentHashMap

ConcurrentHashMap is the thread-safe version of HashMap.

2.1 JDK7

ConcurrentHashMap in JDK 7 uses Segment + HashEntry segment locking to implement concurrency. Its weakness is that the concurrency level is determined by the size of the Segment array, which is fixed at initialization and cannot be expanded afterwards.

Segment extends ReentrantLock and acts as the lock here. Each Segment is essentially a small hash table guarded by that lock; multiple Segments form a Segment array, and operations on different Segments can proceed concurrently.

The put process is as follows:

1. General implementation of ConcurrentHashMap?

ConcurrentHashMap allows multiple modifications to proceed concurrently; the key is lock striping. It uses multiple locks to guard different parts of the hash table. Internally these parts are the Segments, each of which is essentially a small hash table; as long as concurrent modifications land on different Segments, they can proceed in parallel.

2. How does ConcurrentHashMap ensure that the elements obtained are up to date in concurrent cases?

Key-value pairs are stored in HashEntry objects, whose value and next fields are deliberately declared volatile. This guarantees that a change made to a value by one thread is immediately visible to get in other threads, without any locking.

3. The weak consistency of ConcurrentHashMap shows up in methods such as clear, get, and iteration, because they do not lock.

For example, iterators traverse the data Segment by Segment. If a thread inserts data into a Segment that has already been iterated over, the iterator misses it, which is an inconsistency; the same goes for clear. The get and containsKey methods traverse the nodes at the corresponding index without locking: if an existing node is modified, volatile visibility ensures the latest value is seen, but a newly added node may be missed.

4. size statistics may be inaccurate

The size method is interesting. It first sums the counts of all Segments without locking, more than once, and returns the result if consecutive sums agree; only if they keep differing does it lock every Segment and count again. Either way, size only returns a statistical snapshot.

2.2 JDK8

In JDK 8, ConcurrentHashMap drops segment locking in favor of CAS + synchronized, renames HashEntry to Node, and adds a red-black tree implementation.

How does ConcurrentHashMap achieve efficient, thread-safe concurrency?

1. Read operations

The get method takes no lock at all; it relies on volatile reads (via Unsafe's volatile array access), so read operations can proceed fully concurrently.

2. Write operations

The basic idea is similar to HashMap's put, except that CAS + synchronized is used for locking, and resizing is also involved. In JDK 8 the lock granularity is refined to a single bucket, table[i]: different array positions can be written concurrently, and a thread that hits a bucket being moved helps with the resize.
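The CAS-then-synchronized idea can be sketched as follows. This is a deliberate simplification, not the real JDK source: no resizing, no trees, and the class, fields, and fixed table size are all illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch of the JDK 8 put idea: CAS into an empty bucket,
// synchronize on the head node for a non-empty bucket.
public class CasPutSketch {
    static class Node {
        final String key; volatile String value; volatile Node next;
        Node(String k, String v) { key = k; value = v; }
    }

    private final AtomicReferenceArray<Node> table = new AtomicReferenceArray<>(16);

    public void put(String key, String value) {
        int i = (table.length() - 1) & key.hashCode();
        for (;;) {
            Node head = table.get(i);
            if (head == null) {
                // Empty bucket: lock-free CAS insert; retry the loop on contention.
                if (table.compareAndSet(i, null, new Node(key, value))) return;
            } else {
                // Non-empty bucket: lock only this bucket's head node.
                synchronized (head) {
                    if (table.get(i) == head) { // the head did not change under us
                        for (Node n = head; ; n = n.next) {
                            if (n.key.equals(key)) { n.value = value; return; }
                            if (n.next == null) { n.next = new Node(key, value); return; }
                        }
                    }
                }
            }
        }
    }

    public String get(String key) { // lock-free read, like the real get
        int i = (table.length() - 1) & key.hashCode();
        for (Node n = table.get(i); n != null; n = n.next)
            if (n.key.equals(key)) return n.value;
        return null;
    }

    public static void main(String[] args) {
        CasPutSketch m = new CasPutSketch();
        m.put("a", "1");
        m.put("a", "2"); // overwrite under the bucket lock
        System.out.println(m.get("a"));
    }
}
```

The point of the sketch is the two paths: an empty bucket never needs a lock, and a contended bucket locks only its own head, never the whole table.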

3. Synchronization uses synchronized plus Unsafe for hardware-level atomicity

1. A bucket head table[i] is locked with synchronized.
2. Unsafe's volatile access (hardware-level instructions) is used to read table[i], fetching the latest value directly from main memory.

3. Concurrency basics

The starting point of concurrent programming is to make full use of CPU computing resources. Multithreading is not necessarily faster than a single thread; otherwise, why do the core operation commands in Redis 6.0 still run on a single thread?

Whether multithreading or a single thread performs better depends on the specific task and must be analyzed case by case.

3.1 Processes and Threads

Process:

A process is an independent unit of system resource allocation and scheduling; it owns its own resources and runs independently.

Thread:

1. Because creating, destroying, and switching processes incurs large time and space overhead, the number of processes cannot be too large. A thread is a smaller basic unit that can run independently inside a process; it is an entity of the process and the smallest unit of CPU scheduling. Threads reduce the time and space overhead of concurrent execution and give the operating system better concurrency.
2. A thread owns almost no system resources of its own, only what is essential at run time, such as a program counter, registers, and a stack, while the process owns the heap and the address space. By default a Java program runs at least two threads, main and GC. Java itself has no privilege to start an OS thread or operate the hardware; Thread.start() calls the native start0 method, which is implemented in C++.

3.2 Parallelism and Concurrency

Concurrency:

Concurrency means a single CPU runs multiple tasks by interleaving them on the same resource over time.

Parallelism:

Parallelism means multiple CPUs (or cores) run multiple tasks at the same time.

3.3 Several thread states

There are six states of threads in Java:

1. New:

A thread object has been created, but start() has not yet been called.

2. Runnable (ready):

1. After start() is called, the thread enters the ready state. Ready only means it is qualified to run; until the scheduler grants CPU time, it stays ready.
2. A thread also becomes ready when its sleep() finishes, when a join() it was waiting on completes, when the input it was waiting for arrives, or when it acquires an object lock.
3. When the current thread's time slice expires, or it calls yield(), it returns to the ready state.
4. A thread in the lock pool enters the ready state once it obtains the object lock.

3. Running:

A thread in the ready state becomes running after obtaining a CPU time slice. This is the only way a thread enters the running state.

4. Blocked:

A thread is blocked while it waits to enter a method or code block guarded by synchronized, i.e., while it waits to acquire the monitor lock.

5. Waiting and Timed_Waiting:

1. Waiting: the thread is not allocated CPU time and waits indefinitely until it is explicitly woken (notified or interrupted).
2. Timed waiting: the thread is not allocated CPU time either, but it does not wait indefinitely; it wakes automatically after the given time if not woken earlier.

6. Terminated:

A thread is terminated when its run() completes normally or ends with an uncaught exception. Once terminated, a thread cannot be restarted.

PS:

1. A thread calling obj.wait() must first hold obj's monitor; wait() releases the monitor and enters the waiting state. wait() and notify() are used together with synchronized.
2. Transitions from the blocked/waiting states back to runnable all involve the synchronization queue and the wait queue, which come up again with AQS.
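Several of these states can be observed directly through Thread.getState(); a small sketch (the 200 ms / 50 ms timings are illustrative and only need to keep their relative order):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);                 // give it time to reach sleep(200)
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep()
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() finished
    }
}
```

Mapping back to the list above: sleep() produces TIMED_WAITING, a bare wait() or join() produces WAITING, and contending for a synchronized monitor produces BLOCKED.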

3.4 Difference between blocking and waiting

Blocking:

When a thread tries to acquire an object lock held by another thread (an intrinsic lock, i.e., synchronized, not a JUC lock), it becomes blocked. Blocking is simple to use, and the JVM scheduler decides when the thread wakes; no other thread needs to wake it explicitly, but it does not respond to interrupts.

Waiting:

When a thread waits for another thread to signal some condition, it enters the waiting state. Waiting requires another thread to wake it explicitly; it is flexible and semantically rich, and it can respond to interrupts. Examples: Object.wait(), Thread.join(), and waiting on a Lock or Condition.

Both synchronized and JUC locks provide mutual exclusion, but a thread waiting for each enters a different state: synchronized blocks the thread, whereas JUC locks park and unpark threads via park()/unpark(), which puts them into the waiting state. Although the states differ while waiting for the lock, both return to the runnable state when woken, and the observable behavior is the same.

3.5 Differences between yield and sleep

1. Both yield and sleep pause the current thread without releasing any lock it holds. sleep pauses for a specified time, while yield only gives up the remainder of the current time slice.

2. When sleep gives other threads a chance to run, it ignores thread priority, so lower-priority threads also get a chance; yield only yields to threads of the same or higher priority.

3. After calling sleep, the thread enters the timed-waiting state until the sleep time elapses; after calling yield, the thread goes directly back to the ready state.

4. sleep declares InterruptedException, while yield declares no exceptions.

5. yield cannot be interrupted, while sleep responds to interruption.

6. sleep is more portable than yield (yield depends on the operating system's CPU scheduling).

3.6 Differences between wait and sleep

1. Different origins

wait comes from Object; sleep comes from Thread.

2. Whether the lock is released

wait releases the lock; sleep does not.

3. Scope of use

wait must be called inside a synchronized block; sleep can be used anywhere.

4. Exception handling

Both declare InterruptedException, which the caller must catch or propagate; interrupting a sleeping or waiting thread raises it.
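One consequence of point 3 is directly testable: calling wait() without holding the object's monitor fails fast, while sleep() has no such requirement. A minimal sketch (the class name is illustrative):

```java
public class WaitNeedsMonitor {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        // wait() outside synchronized: the thread does not own the monitor.
        try {
            lock.wait(10);
            System.out.println("unreachable");
        } catch (IllegalMonitorStateException e) {
            System.out.println("wait() outside synchronized throws " + e.getClass().getSimpleName());
        }
        // Inside synchronized it is legal; a timed wait releases the lock while waiting.
        synchronized (lock) {
            lock.wait(10);
            System.out.println("timed wait returned");
        }
        // sleep() needs no monitor and keeps any lock it happens to hold.
        Thread.sleep(10);
    }
}
```

The same monitor rule applies to notify() and notifyAll().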

3.7 Multi-threaded implementation

1. Inherit Thread and implement the run method

2. Implement the Runnable interface's run method and wrap the Runnable in a Thread. The Thread is the thread object, the Runnable is the task, and starting always requires a Thread object.

3. Implement the Callable interface, wrap it in a FutureTask, and wrap the FutureTask in a Thread. Callable differs from Runnable in that call has a return value and can throw exceptions, and FutureTask caches the result.

4. Through thread pool calls.

5. Implemented via Spring’s annotation @async.
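The first three approaches above can be sketched in one place (class names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CreateThreads {
    static class MyThread extends Thread {             // 1. subclass Thread
        @Override public void run() { System.out.println("from Thread subclass"); }
    }

    public static void main(String[] args) throws Exception {
        new MyThread().start();

        Runnable task = () -> System.out.println("from Runnable"); // 2. Runnable wrapped by Thread
        new Thread(task).start();

        Callable<Integer> callable = () -> 40 + 2;     // 3. Callable: returns a value, may throw
        FutureTask<Integer> future = new FutureTask<>(callable);
        new Thread(future).start();
        System.out.println("callable result: " + future.get()); // get() blocks until done
    }
}
```

FutureTask is where the "cached result" property lives: after the first run completes, repeated get() calls return the same stored value without re-running the task.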

3.8 Deadlock

A deadlock occurs when two or more threads each hold a resource the other needs. With locks such as synchronized, once a thread holds a lock, other threads cannot acquire it until it is released; if each thread waits forever for a lock held by the other, none of them can proceed, and the result is a deadlock.

Interviewer: You explain to me what a deadlock is and then I'll hire you. Applicant: Send the offer first, and then I will explain to you what a deadlock is.

Generation conditions:

1. Mutual exclusion: a resource or lock can be held by only one thread at a time; once a thread acquires it, other threads must wait until it is released.
2. Hold and wait: a thread that has acquired one lock does not release it while it fails to acquire another.
3. No preemption: no thread can forcibly take a lock away from another thread.
4. Circular wait: thread A waits for the lock held by thread B while thread B waits for the lock held by thread A.

Check:

1. Use jps -l to locate the process ID. 2. Use jstack with that ID to locate the deadlock.

Avoid:

1. Lock ordering: all threads acquire locks in the same order.
2. Timed locks: a thread tries to acquire a lock within a time limit (e.g., the tryLock API) and gives up, releasing what it holds, if it cannot.
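The timed-lock idea can be sketched with ReentrantLock.tryLock; the transfer method and the 100 ms timeout are illustrative assumptions:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    // Instead of blocking forever on the second lock (hold-and-wait),
    // back off and release the first lock when the second cannot be acquired in time.
    static boolean transfer(ReentrantLock from, ReentrantLock to) throws InterruptedException {
        if (from.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (to.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        return true; // both locks held: do the real work here
                    } finally { to.unlock(); }
                }
                return false; // second lock unavailable: give up instead of deadlocking
            } finally { from.unlock(); }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock a = new ReentrantLock(), b = new ReentrantLock();
        System.out.println(transfer(a, b)); // both free: succeeds

        Thread holder = new Thread(() -> {  // another thread holds 'a' for a while
            a.lock();
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
            finally { a.unlock(); }
        });
        holder.start();
        Thread.sleep(50);                   // let the holder actually grab the lock
        System.out.println(transfer(b, a)); // backs off after ~100 ms instead of deadlocking
        holder.join();
    }
}
```

Breaking hold-and-wait this way removes one of the four deadlock conditions, which is enough to make deadlock impossible on this pair of locks.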

4. The JMM

4.1 Origin of the JMM

With the rapid development of CPUs, the access speeds of CPU, memory, and disk diverge enormously. To bridge the gap, the L1, L2, and L3 caches were introduced; a running program fetches data through these layers.

This speeds things up but introduces cache coherence and memory visibility problems. For speed, the compiler and CPU also perform instruction reordering: the code appears to run in the order you wrote it and produces the results you expect, but internally the system is free to reorder instructions to accelerate execution. There are three kinds:

1. Compiler reordering: the compiler may rearrange statement execution order as long as single-threaded semantics are preserved.
2. Instruction-level parallel reordering: modern processors overlap and reorder instructions that have no data dependence.
3. Memory-system reordering: caches and read/write buffers make loads and stores appear to happen out of order.

Instruction reordering causes ordering problems. Concurrent programming involves communication and synchronization between threads, usually summarized as visibility, atomicity, and ordering; these three map to the underlying issues of cache coherence, memory visibility, and ordering.

Atomicity: an operation is indivisible. Whether on a single core or multiple cores, an atomic operation can be performed by only one thread at a time and is never interrupted mid-way by the scheduler; for example, a = 1.
Visibility: when multiple threads access the same variable and one thread modifies its value, the other threads see the new value immediately. Java provides visibility through volatile, synchronized, and final.
Ordering: the program executes in the order the code is written; in Java this is guaranteed with volatile and synchronized.

To ensure the correctness of shared memory (visibility, ordering, atomicity), the memory model defines a specification for the read and write behavior of multithreaded programs over shared memory: the JMM. Note that the JMM is only a convention; it guarantees consistent semantics by governing how and when data is synchronized between each thread's working memory and main memory.

In the JMM, there are two rules:

1. All operations a thread performs on a shared variable must happen in its own working memory; it cannot read or write main memory directly.
2. Threads cannot access the variables in each other's working memory; variable values are passed between threads through main memory.

To make a shared variable visible, two steps are required:

1. The writing thread flushes the updated shared variable from its working memory to main memory.
2. The reading thread refreshes its working memory with the latest value of the shared variable from main memory.

At the same time, three concepts are introduced to guarantee visibility, atomicity, and ordering: memory barriers, happens-before, and as-if-serial.

4.2 Memory Barrier

Memory barriers are CPU instructions that control reordering and memory visibility under specific conditions. The Java compiler also forbids particular reorderings according to memory-barrier rules: it inserts barrier instructions at appropriate points in the generated instruction sequence, so the program runs as expected. Barriers have two functions:

1. Guarantee the execution order of specific operations.
2. Guarantee the memory visibility of certain data (or of an instruction's result).

Memory barriers are used by volatile, discussed in more detail below.

4.3 happens-before

Because instruction reordering makes the CPU's internal execution rules hard to reason about, the JDK uses the happens-before concept to describe memory visibility between operations. In the JMM, if the result of one operation must be visible to another, there must be a happens-before relationship between the two. The rules below are guaranteed without any extra synchronization.

1. Program order rule: each action in a thread happens-before every subsequent action in that thread.
2. Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock.
3. Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field.
4. Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
5. start() rule: if thread A executes ThreadB.start(), that call happens-before every action in ThreadB.
6. join() rule: if thread A executes ThreadB.join() and returns successfully, every action in ThreadB happens-before A's successful return from join().
7. Thread interruption rule: a call to Thread.interrupt() happens-before the interrupted thread's code detects the interruption.

4.4 as-if-serial

The meaning of as-if-serial is that no matter how much the compiler and processor reorder instructions to improve parallelism, the result of a program in a single-threaded environment must not change. This semantics frees programmers in a single-threaded environment from worrying about reordering or memory visibility.

5. volatile

The volatile keyword guarantees visibility but not atomicity (for example, a++ is still unsafe). Java threads interact through shared memory, and reads and writes of volatile variables follow these rules:

1. When a volatile variable is written, the JMM flushes the thread's local copy of the shared variable to main memory.
2. When a volatile variable is read, the JMM invalidates the thread's local copy, so the next read fetches the shared variable from main memory.

Volatile is implemented with the memory barriers described above. There are four types:

1. StoreStore barrier: forbids reordering of ordinary writes above with the volatile write below.
2. StoreLoad barrier: forbids reordering of the volatile write above with possible volatile reads and writes below.
3. LoadLoad barrier: forbids reordering of the volatile read above with ordinary reads below.
4. LoadStore barrier: forbids reordering of the volatile read above with ordinary writes below.

At the hardware level, writes to shared volatile variables use the CPU's Lock-prefixed instructions, which do two things:

1. Write the data in the current processor's cache line back to system memory.
2. The write-back invalidates that cache line in the other CPUs, so they re-fetch the variable from shared memory the next time they use it.
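The two read/write rules above give the classic stop-flag idiom; a minimal visibility sketch (the demo method and the 100 ms/1 s timings are illustrative):

```java
public class VolatileFlag {
    private static volatile boolean running;

    // Returns true if the worker thread observed the flag change and exited.
    static boolean demo() throws InterruptedException {
        running = true;
        Thread worker = new Thread(() -> {
            while (running) { /* spin; each iteration re-reads from main memory */ }
        });
        worker.start();
        Thread.sleep(100);
        running = false;   // volatile write: flushed and made visible to the worker
        worker.join(1000); // completes quickly because the worker loop exits
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker exited: " + demo());
    }
}
```

Without volatile, the JIT may hoist the read of running out of the loop, and the worker can spin forever; volatile forbids that caching, which is exactly rule 2.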

6. Singleton DCL + volatile

6.1 Standard Singleton Mode

The singleton pattern privatizes a class's constructor and exposes only one static instance to external callers. The common standard is DCL (double-checked locking) + volatile:

    public class SingleDcl {
        // volatile guarantees visibility and forbids reordering of
        // "allocate memory -> initialize -> assign reference"
        private volatile static SingleDcl singleDcl;

        private SingleDcl() {}

        public static SingleDcl getInstance() {
            if (singleDcl == null) {              // first check without the lock: fast path
                synchronized (SingleDcl.class) {  // lock only when the instance may be missing
                    if (singleDcl == null) {      // second check with the lock held
                        singleDcl = new SingleDcl();
                    }
                }
            }
            return singleDcl;
        }
    }

6.2 Why Volatile

Without volatile, instruction reordering can happen at run time. Creating an object is roughly (1) allocate memory, (2) run the constructor, (3) assign the reference, and the CPU may execute them as 1 -> 3 -> 2. If thread one has assigned the reference but not finished the constructor when thread two checks instance != null, thread two receives a half-initialized object. The memory barrier behind volatile forbids this reordering, so the code behaves as written.

The singleton pattern is almost a compulsory interview point and generally comes in the following flavors:

Lazy: the object is instantiated only when it is needed; the correct implementation is double check + lock + volatile.
Eager (hungry): the singleton is created when the class is loaded and simply returned on request; it spends a little memory up front and has no thread-safety issues.
Books also list the enum approach: the code is compact, there are no thread-safety issues, and the Enum machinery prevents reflection and deserialization from breaking the singleton.

7. Thread pools

7.1 Learn about Thread Pools in 5 Minutes

Lao Wang is a programmer grinding away in the capital. After a hard year he has saved some money and goes to the bank to deposit it onto his card, where he runs into the following situations:

1. Lao Wang takes a number at the door and finds a counter free, so his business is handled immediately.

2. Lao Wang takes a number, finds every counter busy, but there is an empty seat in the waiting area, so he sits down and waits.

3. Lao Wang takes a number; all counters are busy and the waiting seats are full. The manager, seeing that Lao Wang is an honest man, kindly opens a temporary window for him.

4. Lao Wang takes a number; the counters, waiting seats, and temporary windows are all full. The manager now has several options:

1. Tell the newcomers directly that there are too many people and refuse to serve them.
2. Give them the cold shoulder: neither serve them nor send them away.
3. Ask Lao Wang to negotiate with the person at the head of the queue for the seat: take it if he can, be turned away if he cannot.
4. Tell Lao Wang: I cannot handle this, go back and deal with it yourself.

This process closely mirrors the general flow of a JDK thread pool, which has 7 parameters:

1. The counters open for business correspond to the number of core threads: corePoolSize.
2. The total number of windows, including temporary ones, corresponds to the maximum number of threads: maximumPoolSize.
3. How long an idle temporary window stays open before it closes corresponds to: keepAliveTime.
4. The unit of that idle time corresponds to: TimeUnit.
5. The waiting seats correspond to the task queue: BlockingQueue.
6. The bank manager corresponds to the thread factory that creates thread objects in the JDK: ThreadFactory.
7. What to do when nothing more can be accepted corresponds to the rejection policy: RejectedExecutionHandler.

When the thread pool's task queue is full and the number of threads has reached maximumPoolSize, newly arriving tasks are handled by the rejection policy. There are four built-in policies:

1. ThreadPoolExecutor.AbortPolicy: discard the task and throw RejectedExecutionException (the default).
2. ThreadPoolExecutor.CallerRunsPolicy: the rejected task is executed by the thread that called execute.
3. ThreadPoolExecutor.DiscardOldestPolicy: discard the task at the head of the queue, then retry submitting the new task.
4. ThreadPoolExecutor.DiscardPolicy: discard the task silently without throwing an exception.
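The default AbortPolicy can be forced to fire with a deliberately tiny pool; a sketch (pool sizes and sleep times are chosen purely to trigger rejection):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // 1 core/1 max thread and a queue of 1: the third task must be rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default policy: throw
        Runnable sleepy = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        };
        pool.execute(sleepy); // runs on the single worker
        pool.execute(sleepy); // waits in the queue
        try {
            pool.execute(sleepy); // no free worker, no queue space -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```

Swapping in CallerRunsPolicy instead would make the third task run on the submitting thread, which also acts as natural back-pressure.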

7.2 Correct Creation Mode

Creating a thread pool through the Executors factory methods may cause OOM. The reason lies in the two main BlockingQueue implementations involved, ArrayBlockingQueue and LinkedBlockingQueue:

1. ArrayBlockingQueue is a bounded blocking queue backed by an array; its capacity must be specified.
2. LinkedBlockingQueue, as used by Executors, is effectively an unbounded blocking queue with a maximum length of Integer.MAX_VALUE, so queued tasks can easily exhaust memory and cause OOM.

The proper way to create a thread pool is to call the ThreadPoolExecutor constructor directly, specifying the capacity of the BlockingQueue as you create it:

    private static ExecutorService executor = new ThreadPoolExecutor(
            10, 10, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));

7.3 Common Thread Pools

List several common ways to create a thread pool.

1. Executors.newFixedThreadPool

A fixed-size thread pool: it has only core threads, and the core count equals the maximum, so there are no non-core threads. It uses the effectively unbounded LinkedBlockingQueue, so beware of tasks piling up in the queue.

2. Executors.newSingleThreadExecutor

Creates a single-threaded pool that guarantees tasks execute one at a time, first in, first out.

3. Executors.newCachedThreadPool

Creates a cacheable thread pool: when there are more threads than the workload needs, idle threads are reclaimed flexibly; when none is available, a new thread is created.

4. Executors.newScheduledThreadPool

Creates a fixed-size thread pool that supports scheduled and periodic task execution.

5. ThreadPoolExecutor

The most primitive and general way: the constructor with the seven parameters and the four rejection policies described above.

7.4 Thread pool key points

Thread pools are used constantly at work and are a must-ask in interviews. The details were covered in the bank-queue example above; a thread pool is literally a pool of reusable threads. Common questions:

1. Why use a thread pool?
2. The role of a thread pool.
3. The seven parameters.
4. The four rejection policies.
5. Common task queues, and how to understand bounded vs. unbounded.
6. Common thread pool templates.
7. How to size a thread pool: IO-intensive vs. CPU-intensive workloads.
8. Giving a thread pool a priority queue: make the tasks comparable and use a priority queue as the task queue.
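Point 8 can be sketched as follows; PTask, its fields, and the delay used to hold the worker are all illustrative assumptions:

```java
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolDemo {
    // A task that carries a priority; with a PriorityBlockingQueue as the work
    // queue, the pool hands the smallest (highest-priority) queued task out first.
    static class PTask implements Runnable, Comparable<PTask> {
        final int priority; final String name; final long delayMs;
        final CopyOnWriteArrayList<String> log;
        PTask(int priority, String name, long delayMs, CopyOnWriteArrayList<String> log) {
            this.priority = priority; this.name = name; this.delayMs = delayMs; this.log = log;
        }
        @Override public void run() {
            try { Thread.sleep(delayMs); } catch (InterruptedException ignored) {}
            log.add(name);
        }
        @Override public int compareTo(PTask o) { return Integer.compare(priority, o.priority); }
    }

    public static void main(String[] args) throws InterruptedException {
        CopyOnWriteArrayList<String> log = new CopyOnWriteArrayList<>();
        // One worker, so completion order reflects the queue's priority order.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new PriorityBlockingQueue<>());
        pool.execute(new PTask(5, "first", 100, log)); // grabs the worker immediately
        pool.execute(new PTask(3, "low", 0, log));     // queued
        pool.execute(new PTask(1, "high", 0, log));    // queued later, but higher priority
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
        System.out.println(log); // "high" completes before "low"
    }
}
```

One caveat of this setup: every submitted Runnable must be Comparable, or the queue throws a ClassCastException at insertion time.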

8. ThreadLocal

ThreadLocal can be understood simply as a thread-local variable. Compared with synchronized, it trades space for time: it creates a copy of the variable for each thread, and threads are isolated from one another because each accesses only its own copy. Weak references are used internally:

If an object is reachable only through weak references, the GC reclaims it during the next collection that scans it, regardless of whether memory is tight.

8.1 Key points

Each Thread maintains a dictionary-like structure called ThreadLocalMap whose keys are ThreadLocal objects. So when a ThreadLocal object is no longer used (and not referenced anywhere else), how does each thread that was associated with it clear the entry from its own ThreadLocalMap? ThreadLocalMap does not extend java.util.Map; it implements its own dictionary structure, designed to periodically clean up stale resources. Its entry type, Entry, extends java.lang.ref.WeakReference, so once a ThreadLocal is no longer strongly referenced, the weak-reference mechanism lets the JVM reclaim the ThreadLocal object when memory runs short; the ThreadLocalMap's reference does not keep it alive. Note, however, that only the key (the ThreadLocal) is reclaimed, not the whole Entry: the value object still sits in memory, so the memory-leak problem is not completely solved.

Analysis of the underlying code shows that calls to ThreadLocal.get() and ThreadLocal.set() opportunistically clean up stale entries.
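A usage sketch showing per-thread isolation and the remove() hygiene that follows from the leak discussion above (the class and method names are illustrative):

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy; withInitial supplies the per-thread default.
    private static final ThreadLocal<StringBuilder> CONTEXT =
            ThreadLocal.withInitial(StringBuilder::new);

    static String work(String name) {
        try {
            CONTEXT.get().append(name);   // mutates only this thread's copy
            return CONTEXT.get().toString();
        } finally {
            // Always remove in pooled threads: clears the whole Entry (key and
            // value), so neither leaks even though the worker thread lives on.
            CONTEXT.remove();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> System.out.println(work("t1")));
        Thread t2 = new Thread(() -> System.out.println(work("t2")));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("main: " + work("main")); // main's copy is independent
    }
}
```

Because remove() runs in finally, each call starts from a fresh builder, which is exactly the behavior you want when the same pool thread serves many requests.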

9. CAS

CAS, Compare-And-Swap, guarantees the atomicity of an operation through a processor instruction. It involves three operands:

V: the memory address of the variable. A: the expected old value. B: the new value to be set.

When the CAS instruction executes, the value at V is updated to B only if it currently equals A; otherwise no update is performed. CAS brings the ABA problem, excessive spin overhead under contention, and the limitation of operating atomically on only a single shared variable. The solutions were written up earlier and are not repeated here.
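The V/A/B semantics and the stamp-based fix for ABA can be sketched with the java.util.concurrent.atomic classes:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {
    public static void main(String[] args) {
        // compareAndSet(expected A, new B): succeeds only if the current value is A.
        AtomicInteger v = new AtomicInteger(1);
        System.out.println(v.compareAndSet(1, 2)); // true: value becomes 2
        System.out.println(v.compareAndSet(1, 3)); // false: value stays 2

        // ABA: a plain CAS cannot tell 1 -> 2 -> 1 apart from "nothing happened".
        // AtomicStampedReference pairs the value with a version stamp to detect it.
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stamp = ref.getStamp();
        ref.compareAndSet(1, 2, stamp, stamp + 1);     // someone changes 1 -> 2
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2); // ...and back to 1 (ABA)
        // A stale CAS still fails: the stamp moved on even though the value is 1 again.
        System.out.println(ref.compareAndSet(1, 9, stamp, stamp + 1)); // false
    }
}
```

This is the standard library answer to ABA; the spin-overhead and single-variable limitations need different tools (e.g., LongAdder, or locking several variables together).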

10. Synchronized

10.1 Synchronized explained

synchronized is the JDK's built-in thread-safety keyword; it can modify instance methods, static methods, and code blocks. It guarantees mutual exclusion and visibility, and it guarantees ordering of results without actually preventing reordering inside the critical section.

Under the hood, synchronized is implemented in C++. Before JDK 6 it was a heavyweight lock whose acquisition involved switching between user mode and kernel mode, which is quite time-consuming. Doug Lea's JUC package showed that locks could be implemented conveniently in user mode, which inspired the JVM developers to make the various performance upgrades to synchronized from JDK 6 on.

10.2 The underlying implementation of Synchronized

synchronized locks the object, whose memory layout consists of the object header, instance data, and padding. Here is a Meituan interview question:

How many bytes does new Object() occupy?

1. With compressed class pointers enabled (the default): mark word 8 bytes + class pointer 4 bytes + padding 4 bytes = 16 bytes.
2. With compressed class pointers disabled: mark word 8 bytes + class pointer 8 bytes = 16 bytes.

And for User(int id, String name), how much does User u = new User(1, "四") occupy?

Mark word 8 bytes + compressed class pointer 4 bytes + instance data int 4 bytes + compressed ordinary object pointer to the String 4 bytes + padding 4 bytes = 24 bytes.

10.3 Synchronized Lock upgrade

Since JDK 6, synchronized locks have four states: no lock, biased lock, lightweight lock, and heavyweight lock. They escalate as contention increases. Locks can be upgraded but not downgraded, although a biased lock can be reset to the no-lock state. The general upgrade process is as follows:

A comparison of the lock states:

Instruction reordering is a built-in acceleration technique of the compiler and CPU. It may make the actual execution order of statements differ from the written order, but any reordering must obey as-if-serial.

The simplest way to avoid reordering problems is to stop the processor optimizations from reordering the relevant instructions, for example with the memory barriers behind volatile. synchronized takes another route: it is an exclusive, reentrant lock at the keyword level. When a thread executes a block modified by synchronized, it locks before entering and unlocks after leaving.

While that block is locked, no other thread can acquire the lock; only the locking thread can re-acquire it. The code in the block therefore executes as if single-threaded, which satisfies as-if-serial semantics; and because of as-if-serial, the single-threaded order naturally appears correct.

10.5 Spurious Wakeups with wait()

Definition of a spurious wakeup:

When a condition becomes true, many threads may be woken up, but only some of them can usefully proceed; for the rest the wakeup is wasted. Take producers and consumers: while there are no goods, every consumer thread blocks in wait(). The moment a producer adds one item, all of the pending consumers wake up, yet only one of them can take it. If the others simply continue executing the code below wait(), they operate on an empty shelf by mistake.

Cause of spurious wakeups:

An if tests the condition only once and then falls through to the code below it. A while re-tests the condition after every wakeup and does not proceed until the condition actually holds.

The fix for spurious wakeups:

Always call wait() inside a while loop, never an if.
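A minimal sketch of the while-not-if rule (the class and method names are my own, for illustration): every consumer re-checks the condition after waking, so a spuriously woken or losing thread simply goes back to waiting.

```java
// One-slot shelf: consumers must re-test the condition in a loop after wait().
public class Shelf {
    private int items = 0;

    public synchronized void put() {
        items++;
        notifyAll();               // wakes every waiting consumer...
    }

    public synchronized void take() throws InterruptedException {
        while (items == 0) {       // ...but only those that re-pass this check proceed
            wait();                // an "if" here would let a woken thread fall through
        }
        items--;
    }

    public synchronized int count() {
        return items;
    }
}
```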

10.6 How notify() Works Underneath

1. Why wait and notify must be called under a synchronized lock

Running javap over a synchronized block shows monitorenter and monitorexit instructions: a thread executes monitorenter to acquire the object's monitor. wait is implemented by the native method wait(0), and its contract requires that the current thread own this object's monitor.
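This requirement is easy to verify: calling notify() (or wait()) while not holding the object's monitor throws IllegalMonitorStateException. A small sketch:

```java
// wait()/notify() demand ownership of the object's monitor.
public class MonitorDemo {
    public static boolean notifyWithoutLock(Object o) {
        try {
            o.notify();            // not inside synchronized(o) -> throws
            return true;
        } catch (IllegalMonitorStateException e) {
            return false;
        }
    }

    public static boolean notifyWithLock(Object o) {
        synchronized (o) {         // monitorenter on o: we now own the monitor
            o.notify();            // legal, even with no thread waiting
            return true;
        }
    }

    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(notifyWithoutLock(o)); // false
        System.out.println(notifyWithLock(o));    // true
    }
}
```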

2. Does notify wake the thread immediately?

notify/notifyAll does not release the lock; it only moves waiting threads into the object's lock (entry) pool. The threads in the lock pool still have to re-acquire the lock, so none of them runs immediately:
```java
public void test() {
    Object object = new Object();
    synchronized (object) {
        object.notifyAll();   // waiters move to the lock pool, but the lock is still held
        while (true) {
            // TODO: an infinite loop here means the lock is never released,
            // so the notified threads never get to run
        }
    }
}
```

11, AQS

11.1 High-Frequency Question: Two Threads Printing Alternately

The goal is to have two threads print alternately, a letter first, then a digit. You can implement this with semaphores, the synchronized keyword, or a Lock. Here is a simple ReentrantLock version:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    private static Lock lock = new ReentrantLock();
    private static Condition c1 = lock.newCondition();
    private static Condition c2 = lock.newCondition();
    private static CountDownLatch count = new CountDownLatch(1);

    public static void main(String[] args) {
        String c = "ABCDEFGHI";
        char[] ca = c.toCharArray();
        String n = "123456789";
        char[] na = n.toCharArray();

        Thread t1 = new Thread(() -> {
            try {
                lock.lock();
                count.countDown();          // let t2 proceed once t1 holds the lock
                for (char caa : ca) {
                    c1.signal();
                    System.out.print(caa);
                    c2.await();             // release the lock, wait for the digit thread
                }
                c1.signal();                // wake t2 one last time so it can finish
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            try {
                count.await();              // make sure t1 goes first
                lock.lock();
                for (char naa : na) {
                    c2.signal();
                    System.out.print(naa);
                    c1.await();             // release the lock, wait for the letter thread
                }
                c2.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        });

        t1.start();
        t2.start();
    }
}
```
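Since semaphores were also mentioned, here is a sketch of the same alternation with two Semaphores (the class and variable names are my own): each thread hands the baton to the other by releasing the other's permit, and the release/acquire pair also gives the necessary happens-before ordering for the shared buffer.

```java
import java.util.concurrent.Semaphore;

// Alternate printing with two semaphores: letters first, then digits.
public class AlternateSemaphore {
    public static String run() throws InterruptedException {
        Semaphore letters = new Semaphore(1);    // the letter thread may start immediately
        Semaphore digits  = new Semaphore(0);    // the digit thread waits for the first letter
        StringBuilder out = new StringBuilder(); // access serialized by the semaphore handoff

        Thread t1 = new Thread(() -> {
            for (char c = 'A'; c <= 'C'; c++) {
                try { letters.acquire(); } catch (InterruptedException e) { return; }
                out.append(c);
                digits.release();                // pass the baton to the digit thread
            }
        });
        Thread t2 = new Thread(() -> {
            for (char c = '1'; c <= '3'; c++) {
                try { digits.acquire(); } catch (InterruptedException e) { return; }
                out.append(c);
                letters.release();               // pass the baton back
            }
        });

        t1.start(); t2.start();
        t1.join();  t2.join();
        return out.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());               // A1B2C3
    }
}
```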

11.2 AQS Internals

We have used ReentrantLock and Condition, but how are they implemented underneath? In fact, they are both built on AQS: a synchronization queue plus condition (wait) queues.

11.2.1 AQS Synchronization queue

Before studying AQS you should be comfortable with CAS, spinning, LockSupport, and the template-method pattern; that makes the source much easier to follow. It actually feels simpler than the synchronized implementation, because it is pure Java code. My personal understanding of AQS boils down to the following points:

1. In the AQS synchronization queue, a waitStatus of -1 (SIGNAL) means the successor thread is asleep and will need waking.
2. A node sets its predecessor's waitStatus to -1 before parking. Can you tell that you are asleep yourself? No; it is the others who see that you are sleeping.
3. The thread that holds the lock is never in the queue.
4. The second node in the AQS queue is the first thread actually waiting; the head is a placeholder for the lock holder.
5. If threads happen to alternate, or there is only one thread, the queue is never even built, even though a Lock is used.
6. Do not park a thread until absolutely necessary; parking is expensive. That is why the thread at the front of the queue spins a few times trying to acquire the lock first.
7. This does not mean CAS beats synchronized in general: synchronized blocks via wait without burning CPU, while CAS spins. CAS is the better choice when contention is light, so the spinning stays short.
8. In AQS, entering the CLH queue should be avoided where possible, because a queued thread may end up calling park, which is relatively time-consuming.
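The template-method pattern mentioned above is easiest to see by writing a tiny lock yourself: AQS supplies the CAS + spin + park queue machinery, and the subclass only fills in the try* hooks. A minimal non-reentrant mutex sketch (the standard textbook example, not code from this article):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal non-reentrant mutex built on AQS: state 0 = free, 1 = held.
public class Mutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // one CAS on state; on failure AQS handles spinning, queuing and parking
            return compareAndSetState(0, 1);
        }
        @Override
        protected boolean tryRelease(int unused) {
            setState(0);        // volatile write: happens-before for the next owner
            return true;
        }
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();
    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```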

The underlying structure of ReentrantLock:

11.2.2 AQS Waiting Queue

This covers what happens underneath when we call await and signal on a Condition.

12. Thoughts on Threading

12.1 Prefer Stack Confinement for Variables

Declare variables inside the method so they are stack-confined. Each method call gets its own stack frame, an independent space, so creating and using variables there is inherently thread-safe. Just do not return a reference to that variable!

12.2 Prevent Thread Starvation

Starvation means low-priority threads never get a chance to execute. In general: provision enough resources, allocate them fairly, and prevent a lock-holding thread from running for too long.
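One concrete JDK knob for fair allocation is a fair ReentrantLock: waiting threads acquire the lock roughly in FIFO order, so none is starved indefinitely, at the cost of throughput. A quick sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

// Fair vs. non-fair lock: fairness trades throughput for FIFO handoff.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fair   = new ReentrantLock(true);  // FIFO handoff, prevents starvation
        ReentrantLock barged = new ReentrantLock();      // default: barging allowed, faster
        System.out.println(fair.isFair());    // true
        System.out.println(barged.isFair());  // false
    }
}
```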

12.3 Development Procedure

Don’t use multithreading just for the sake of using it. The usual application performance measures are service time, latency, throughput, and scalability. When building an application, you can generally follow these steps:

1. First ensure the program is correct and robust; only then think about how to speed it up to meet performance requirements.
2. Always benchmark against real measurements.
3. Remember Amdahl's law: S = 1 / ((1 - a) + a/n).

In Amdahl's law, a is the fraction of the computation that can be parallelized, and n is the number of parallel processing nodes:

1. When 1 - a = 0 (no serial part, fully parallel), the maximum speedup is s = n.
2. When a = 0 (fully serial, no parallel part), the minimum speedup is s = 1.
3. As n approaches infinity, the limiting speedup s → 1 / (1 - a); this is the upper bound. For example, if serial code makes up 25% of the total, the overall speedup from parallel processing can never exceed 4.
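The three cases above reduce to plugging numbers into S = 1 / ((1 - a) + a/n); a quick sketch:

```java
// Amdahl's law: a = parallel fraction, n = number of processors.
public class Amdahl {
    static double speedup(double a, int n) {
        return 1.0 / ((1.0 - a) + a / n);
    }

    public static void main(String[] args) {
        System.out.println(speedup(1.0, 8));  // fully parallel: speedup = n = 8.0
        System.out.println(speedup(0.0, 8));  // fully serial: speedup = 1.0
        // 25% serial (a = 0.75): the limit as n grows is 1 / 0.25 = 4
        System.out.println(speedup(0.75, 1_000_000));
    }
}
```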

12.4 Factors Affecting Performance

1. Narrow the scope of each lock; try not to lock a whole method.

2. Reduce lock granularity by splitting locks, as in the segmented design of ConcurrentHashMap.

3. When reads vastly outnumber writes, use a read/write lock; in read-heavy workloads this can improve performance dramatically, even by an order of magnitude.

4. Replace heavy locks with CAS operations.

5. Prefer the concurrent containers that ship with the JDK; their internals are already well optimized.
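Point 3 in practice: a read-mostly cache guarded by ReentrantReadWriteLock (the cache class is my own sketch), where any number of readers proceed in parallel and only writers are exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-mostly cache: shared read lock, exclusive write lock.
public class RwCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public V get(K key) {
        rw.readLock().lock();      // many readers can hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rw.writeLock().lock();     // writers exclude readers and other writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```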

13, End

Now that you’ve read this far, let me leave you with some high-frequency interview questions.

1. The differences between synchronized and ReentrantLock, and their underlying implementations.
2. Describe the four lock states and the upgrade process.
3. What is CAS? What are its drawbacks?
4. Your understanding of volatile, visibility, and instruction reordering.
5. What the object-creation process looks like.
6. How objects are laid out in memory (this depends on the JVM).
7. How many bytes Object o = new Object() occupies in memory.
8. The difference between as-if-serial and happens-before.
9. Is a spin lock necessarily more efficient than a heavyweight lock?
10. Is biased locking always an efficiency win?
11. Thread pools: how to use them, the details to watch, and how they are implemented.
12. Do you understand the JMM?
13. Can instructions be reordered inside synchronized? How does it guarantee ordering?
14. Talk about AQS. Why is the bottom of AQS CAS + volatile?

This is just my personal take; feel free to leave a comment!!