Preface
Welcome to star our GitHub repository: github.com/bin39232820… The best time to plant a tree was ten years ago; the second-best time is now.
Tips
This interview guide series often does not dig into the details. It is meant to let you review your knowledge in the role of the interviewee, so I assume you already know most of the underlying material.
Here is the mind map: www.processon.com/view/link/6…
You might think this is a cliché, and it is: a quick search turns up more interview questions and "interview bibles" than you could ever finish reading. So what is the point of my writing this? Actually, I wrote it for the following purposes:
- The first is a systematic review: going back over the articles I wrote before, summarizing and refining them.
- The second is to help you build a review system through these articles. I will introduce most of the knowledge points that get asked, point by point.
Below is a list of the previous articles in this series:
- 2021-Java Backend Engineer Interview Guide (Introduction)
- 2021-Java Backend Engineer Interview Guide
Today let's look at multithreading. It is a key topic, and a difficult one; this article is a bit more hardcore, haha.
Concurrency
I remember the first question in my Alibaba interview was about concurrency, haha. Let Xiao 66 summarize it.
Let’s talk about Java’s concurrency model
There must be a reason why this is the first question: what use is a pile of locks and j.u.c classes if you don't understand the Java concurrency model?
- Java concurrency uses a shared-memory model, in which communication between Java threads is always implicit.
- Java thread communication is governed by the Java Memory Model (JMM), which determines when a write to a shared variable by one thread becomes visible to another thread. Abstractly, the JMM defines the relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory holding its own copies of the shared variables it reads.
Do you know about multithreading? Talk about how you usually control access to critical resources.
This one is a fascinating little question.
- If the critical resource lives inside a single JVM process, synchronized and Lock will do.
- For multithreading across a distributed environment, use a distributed lock (implemented with Redis or ZooKeeper, for example).
So tell me what you know about synchronized
- The synchronized keyword addresses synchronization of access to resources between multiple threads: it guarantees that at any moment only one thread can execute the method or code block it modifies.
- synchronized has three main uses (see the sketch after this list):
- Modifying an instance method acquires the lock of the current object instance.
- Modifying a static method acquires the lock of the current class object.
- Modifying a code block: synchronized (this | object) means the given object's lock must be acquired before entering the block, while synchronized (SomeClass.class) means the class object's lock must be acquired before entering it.
- The two main low-level implementations of the synchronized keyword:
- Synchronized blocks are implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. The wait/notify methods also depend on the monitor object, which is why they can only be called inside a synchronized block or method; otherwise a java.lang.IllegalMonitorStateException is thrown.
- A synchronized method carries no monitorenter/monitorexit instructions; instead it has the ACC_SYNCHRONIZED flag, which marks it as a synchronized method. The JVM checks this access flag to tell whether a method is declared synchronized and performs the corresponding synchronized call.
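A minimal sketch of the three usages; the class and member names are made up for illustration:

```java
public class SyncUsage {

    private static int staticCount;
    private int count;
    private final Object mutex = new Object();

    // 1. Instance method: the lock is the current instance (this)
    public synchronized void incrInstance() {
        count++;
    }

    // 2. Static method: the lock is the class object (SyncUsage.class)
    public static synchronized void incrStatic() {
        staticCount++;
    }

    // 3. Code block: the lock is the given object
    //    (synchronized (SyncUsage.class) would lock the class object instead)
    public void incrBlock() {
        synchronized (mutex) {
            count++;
        }
    }
}
```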
Talk about the layout of Java objects
- A Java object's layout consists of three parts: the object header, the instance data, and alignment padding.
- The object header consists of a Mark Word, a Klass Pointer to the class metadata, and (for arrays only) the array length. On a 64-bit JVM the Mark Word holds, among other things, the hashcode, the generational GC age, and the lock flag bits.
- Not every variable lives in the instance data section; it stores all member variables of the object and of its parent classes.
- The JVM requires a Java object's size to be a multiple of 8 bytes, so the padding's job is to pad the object's size up to the next multiple of 8.
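As an aside, OpenJDK's JOL tool can print this layout. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    public static void main(String[] args) {
        // Prints the Mark Word, Klass Pointer, field layout, and padding
        System.out.println(ClassLayout.parseInstance(new Object()).toPrintable());
    }
}
```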
Tell me about the process of upgrading a Synchronized lock
The lock levels from lowest to highest are:
- Unlocked state
- Biased lock state
- Lightweight lock state
- Heavyweight lock state
Synchronized locks the object, not the code it wraps around.
- When an object is newly created, no thread holds its lock; the Mark Word lock flag is 01 (unlocked).
- If and only if a single thread A acquires the object's lock, the object is upgraded from the unlocked state to a biased lock, and the Mark Word records this thread's identity (the lock flag stays 01). Thread A now holds the object's lock.
- If thread A acquires the lock again and finds a biased lock carrying its own thread id in the object header, it simply keeps using the lock, with no CAS needed to acquire it; synchronized is reentrant.
- If thread B grabs for the lock while thread A holds it, B finds a biased lock whose thread identifier is not its own. The biased lock is first revoked and then upgraded to a lightweight lock; at this point the Mark Word lock flag is 00.
- Threads contend for the lightweight lock by CAS spinning. To avoid long spins eating CPU, the lightweight lock is upgraded to a heavyweight lock once the spin count exceeds a threshold (commonly quoted as 10); other threads then block instead of consuming CPU. The Mark Word lock flag is 10.
Can you talk about CAS? Is there anything wrong with it?
CAS, short for compare-and-swap, is a way to ensure atomicity. A CAS operation has three operands: a memory location (V), the expected original value (A), and the new value (B). If the value at the memory location matches the expected original value, the processor atomically updates the location to the new value; otherwise it does nothing. Java's non-blocking algorithms are built on the CPU's CAS instruction via JNI, and other atomic operations use similar primitives. Since the whole of j.u.c is built on CAS, it performs far better than synchronized's blocking algorithms.
The problems:
- The ABA problem. CAS checks whether the value has changed and updates it if it has not. But if a value goes from A to B and back to A, CAS sees no change even though one happened. The fix is a version number: append a version to the variable and increment it on every update, so A-B-A becomes 1A-2B-3A (see the sketch after this list).
- Long spin time, high overhead. If a spinning CAS keeps failing for a long period, it imposes a significant execution cost on the CPU.
- Only atomic operations on a single shared variable are guaranteed. We can loop CAS to make an operation on one shared variable atomic, but looping CAS cannot make operations across multiple shared variables atomic; for that we fall back to locks.
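A small sketch of both plain CAS and the version-number fix, using the JDK's AtomicInteger and AtomicStampedReference:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {
    public static void main(String[] args) {
        // Plain CAS: succeeds only if the current value matches the expected one
        AtomicInteger counter = new AtomicInteger(0);
        boolean swapped = counter.compareAndSet(0, 1); // true, counter is now 1
        System.out.println(swapped + " -> " + counter.get());

        // ABA fix: pair the value with a stamp (version number)
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();
        Integer current = ref.getReference();
        // Even if another thread did 100 -> 200 -> 100, its stamp changed,
        // so a CAS carrying the stale stamp would fail
        ref.compareAndSet(current, 101, stamp, stamp + 1);
        System.out.println(ref.getReference() + " @ stamp " + ref.getStamp());
    }
}
```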
Talk about the difference between Synchronized and Lock
- synchronized is a built-in Java keyword implemented at the JVM level; Lock is a Java interface at the API level.
- synchronized releases the lock automatically (a thread releases it once the synchronized code finishes), while a Lock must be released manually with unlock() in a finally block; otherwise threads can easily deadlock.
- synchronized locks are reentrant, non-interruptible, and non-fair, whereas a Lock can be reentrant, interruptible, and either fair or non-fair.
- synchronized is in essence a pessimistic locking mechanism: a thread takes an exclusive lock, and an exclusive lock means other threads must block while waiting for it to be released. Blocking a thread forces a context switch, and when many threads compete for the lock the CPU switches contexts constantly, which is inefficient.
- Lock uses optimistic locking: each operation is performed without locking, on the assumption that there is no conflict, and is retried until it succeeds if a conflict makes it fail. The mechanism behind optimistic locking is the CAS operation (compare-and-swap). Digging into the ReentrantLock source, one of the key methods for acquiring the lock is compareAndSetState, which is essentially a special instruction provided by the CPU. (See the usage sketch after this list.)
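A minimal sketch of the manual-release discipline and the extra flexibility (tryLock) that Lock offers over synchronized; the class is made up for illustration:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();            // unlike synchronized, acquisition is explicit...
        try {
            count++;
        } finally {
            lock.unlock();      // ...and release MUST happen in finally
        }
    }

    public boolean tryIncrement() {
        if (lock.tryLock()) {   // non-blocking attempt, impossible with synchronized
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}
```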
ReentrantLock is the most common implementation of Lock. Talk about its fair and non-fair modes and its reentrancy.
- First, fairness. Fair and non-fair refer to the order in which threads acquire the lock: a fair lock grants it in request order, while a non-fair lock gives no such guarantee. But non-fair does not mean random; it is unfair within a certain order. An example: suppose A holds the lock, B tries to acquire it, fails, and queues up; C then tries, fails, and queues behind B. Now D arrives just as A releases the lock. Under strict fairness the lock should go to B, but in the source code D gets a chance to grab it right away. A non-fair lock is more efficient than a fair one because it reduces the probability of threads being suspended: a newly arriving thread has some chance to skip the suspend/resume overhead entirely. That is why ReentrantLock is non-fair by default. synchronized is non-fair as well.
- Reentrancy means that a thread holding the lock can successfully acquire it again before releasing it; such a lock is called a reentrant lock. The implementation checks whether the current thread already owns the lock and, if so, simply bumps the lock state by 1. synchronized is also reentrant (see the sketch after this list).
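A small sketch of fairness and reentrancy with ReentrantLock (the fair flag and hold counts shown are just for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    // true = fair lock (FIFO order); the default constructor builds a non-fair lock
    private final ReentrantLock lock = new ReentrantLock(true);

    public void outer() {
        lock.lock();                 // hold count becomes 1
        try {
            inner();                 // re-acquiring our own lock succeeds
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();                 // hold count becomes 2
        try {
            System.out.println("hold count = " + lock.getHoldCount()); // 2
        } finally {
            lock.unlock();           // each lock() needs a matching unlock()
        }
    }
}
```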
The core of ReentrantLock is AQS. How is it implemented, through inheritance? Talk about the internal structure of the class, and walk through its locking process.
Not many people may understand this one, but I am asking from the interviewer's seat, so I take a lot of prerequisite knowledge for granted. Hehe, I suggest you go read my AQS article.
- First, a few concepts. AQS is short for AbstractQueuedSynchronizer, and it is what we call the cornerstone of the concurrency classes: every thread that fails to get the lock goes into a queue. The fields to focus on are Node (used to wrap a thread) and state (the lock state).
- The design of the Node class: it has several key fields and is essentially a node in a doubly linked list, with a predecessor, a successor, and the wrapped thread.
- Now let's talk about the lock process (a simplified sketch follows this list):
- The ReentrantLock first calls the lock method and tries to acquire the lock directly with a CAS on the AQS state (state 0 means the lock is free). If that fast path fails, it checks whether the wait queue is empty, creates the queue with CAS if needed, and so on, until the lock is finally acquired.
- If acquiring the lock fails, the current thread is wrapped into a Node and enqueued. Even after the Node is built, in the non-fair case the current thread still gets one more chance to try for the lock; only if that attempt also fails does it obediently join the queue. This is done to reduce the overhead of suspending threads.
- Of course there is a great deal more inside AQS; this is just what I know.
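A minimal sketch of the AQS template: the subclass only defines how state maps to "locked", and AQS handles the queueing and parking. The class is a toy non-reentrant mutex, not production code:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // state 0 -> 1 means "lock grabbed"; losers are queued by AQS
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0);          // on release, AQS unparks a queued successor
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```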
Talk about volatile
- Visibility: a write to a volatile variable is always visible to any thread that subsequently reads it.
- Atomicity: reads and writes of any single volatile variable are atomic, but compound operations such as volatile++ are not.
- Memory semantics: when a volatile variable is read, the JMM invalidates the thread's local memory, so the thread next reads the shared variable from main memory. When a volatile variable is written, the JMM flushes the shared variable's value from the thread's local memory to main memory.
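The classic illustration is a volatile stop flag; a minimal sketch (without volatile, the worker might never observe the write):

```java
public class VolatileFlagDemo {
    // Without volatile, the worker thread might never see the update
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker saw the flag flip and stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // volatile write: flushed to main memory, visible to worker
        worker.join();
    }
}
```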
Let's talk about threads: what states do they have?
- NEW: initial state
- RUNNABLE: runnable/running state
- BLOCKED: blocked state
- WAITING: waiting state
- TIMED_WAITING: timed waiting state
- TERMINATED: terminated state
Talk about the difference between blocking and waiting
- Blocking: a thread becomes blocked when it tries to acquire an object lock held by another thread (a synchronized monitor, as opposed to the locks in java.util.concurrent). Blocking is simple to use, and the JVM scheduler decides when to wake the thread: no other thread needs to wake it explicitly, and it does not respond to interrupts.
- Waiting: a thread enters the waiting state when it waits for another thread to notify it of some condition. It must be woken explicitly by another thread; the mechanism is flexible, semantically rich, and responds to interrupts. Examples include calling Object.wait(), Thread.join(), or waiting on a Lock or Condition. A blocked thread is waiting to enter the lock; a waiting thread is waiting for other threads to do something.
- Note that although synchronized and a j.u.c Lock both perform locking, the thread enters different states: synchronized blocks the thread, whereas j.u.c locks suspend/resume it with LockSupport.park()/unpark(), which puts it into the waiting state. Either way, the thread returns to RUNNABLE when woken, and the observable behavior is the same.
What are the differences and similarities between sleep() and wait()?
- The main difference is that sleep() does not release the lock, while wait() does.
- After wait() is called, the thread does not wake up on its own: other threads must call notify() or notifyAll() on the same object (or you can use wait(long timeout), which wakes up automatically after the timeout). After sleep() completes, the thread wakes up automatically.
- wait() must be used together with synchronized: the caller has to hold the object's monitor, while sleep() can be called anywhere. (A minimal sketch follows.)
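A minimal wait/notify sketch; note the monitor must be held and the condition re-checked in a loop to guard against spurious wakeups:

```java
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    public void awaitReady() throws InterruptedException {
        synchronized (monitor) {           // wait() requires holding the monitor
            while (!ready) {               // loop guards against spurious wakeups
                monitor.wait();            // releases the monitor while waiting
            }
        }
    }

    public void signalReady() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();           // wakes the waiting thread(s)
        }
    }
}
```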
Why does calling start() end up executing the run() method, and why can't we just call run() directly?
Calling start() actually starts a new thread and puts it into the ready state; the new thread then executes run(). Calling run() directly just runs it as an ordinary method on the current thread, so nothing executes in a multithreaded fashion.
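A tiny sketch that makes the difference visible:

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("running on: " + Thread.currentThread().getName()));

        t.run();    // prints "running on: main"     (ordinary method call, no new thread)
        t.start();  // prints "running on: Thread-0" (a real new thread)
    }
}
```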
Talk about multiple threads accessing a resource simultaneously, as with our Semaphore
- synchronized and ReentrantLock allow only one thread to access a resource at a time; Semaphore lets a configurable number of threads access it concurrently.
- For example: final Semaphore semaphore = new Semaphore(20); semaphore.acquire(1) takes one permit, and semaphore.release(1) releases one permit back.
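A runnable sketch of Semaphore capping concurrency (the permit count, pool size, and sleep are arbitrary illustration values):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);          // at most 3 threads at once
        ExecutorService pool = Executors.newFixedThreadPool(10);

        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                try {
                    semaphore.acquire();                 // blocks if all 3 permits are out
                    System.out.println(Thread.currentThread().getName() + " got a permit");
                    Thread.sleep(500);                   // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();                 // always return the permit
                }
            });
        }
        pool.shutdown();
    }
}
```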
Talk about CountDownLatch
- CountDownLatch lets one or more threads block in one place until a count of other threads has finished its tasks; it is implemented on top of AQS.
- CountDownLatch's constructor sets the AQS state to the count; await() blocks while the state is nonzero, and every countDown() decrements the state by 1, releasing the waiters once it reaches 0. Of course the source code is not quite that simple, but that is roughly Xiao 66's plain-language understanding (see the sketch after this list).
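A minimal sketch of the latch in action:

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);  // AQS state = 3

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();                           // state - 1
            }).start();
        }

        latch.await();   // main blocks here until the state hits 0
        System.out.println("all workers finished");
    }
}
```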
Talk about CyclicBarrier
- CyclicBarrier is very similar to CountDownLatch: it can also implement counted waiting between threads, but it is more complex and more powerful than CountDownLatch. Its main application scenarios are similar to CountDownLatch's.
- CyclicBarrier uses a count variable as its internal counter, initialized from the parties property and decremented by one each time a thread reaches the barrier. When count reaches zero, the last thread of this generation has reached the barrier, and the barrier tries to execute the action passed to the constructor. It is like everyone gathering before the morning subway doors open.
- CountDownLatch is a counter: each thread records completion once, and the count only ever goes down. CyclicBarrier is more like a valve that requires all threads to arrive before it opens and lets everyone continue.
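A minimal sketch of the "valve" behavior, including the barrier action:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierDemo {
    public static void main(String[] args) {
        // 3 parties; the barrier action runs once when the last one arrives
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("all arrived, the valve opens"));

        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    System.out.println(Thread.currentThread().getName() + " waiting");
                    barrier.await();   // blocks until all 3 threads reach this point
                    System.out.println(Thread.currentThread().getName() + " passed");
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```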
Talk about thread-safe containers
- ConcurrentHashMap: Thread-safe HashMap
- CopyOnWriteArrayList: a thread-safe List that performs much better than Vector when reads far outnumber writes.
- All mutating operations of CopyOnWriteArrayList (add, set, and so on) are implemented by creating a fresh copy of the underlying array. A modification does not touch the original contents: the original data is copied, the change is written into the copy, and once the write completes the copy replaces the original. Writes therefore never interfere with reads. (A small sketch follows.)
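A small sketch of the snapshot semantics this buys you:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");

        // Iterators see a snapshot of the array taken when they were created,
        // so concurrent writes never throw ConcurrentModificationException
        for (String s : list) {
            list.add("b");        // copies the array; the iteration still sees only "a"
            System.out.println(s);
        }
        System.out.println(list); // [a, b]
    }
}
```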
Talk about the Atomic class
- There is honestly not much to say here: these are atomic classes implemented on top of CAS, and they are very easy to use.
- AtomicInteger, AtomicLongArray, AtomicReferenceArray, AtomicReference, and so on.
Talk about ThreadLocal
- It is designed to give each thread that uses the variable its own copy of its value. Each thread changes only its own copy, which never conflicts with other threads' copies, so from the thread's point of view it is as if each thread owned the variable outright.
- When set() is called, it fetches the current thread, retrieves that thread's ThreadLocalMap, and stores the value using the ThreadLocal instance itself as the key. This is what achieves per-thread data isolation. (A sketch follows.)
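A minimal sketch of per-thread isolation (the counter is made up for illustration):

```java
public class ThreadLocalDemo {
    // Each thread gets its own independent counter, initialized to 0
    private static final ThreadLocal<Integer> COUNTER =
            ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            COUNTER.set(COUNTER.get() + 1);    // touches only this thread's copy
            System.out.println(Thread.currentThread().getName()
                    + " -> " + COUNTER.get()); // always prints 1
            COUNTER.remove();                  // avoid leaks with pooled threads
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}
```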
Tell me why it is designed in such a roundabout way. Why not use the current thread as the key of one big map, with the value as the value?
If it worked that way, a single map would grow with the number of threads and could become huge. With the actual design, each Thread carries its own ThreadLocalMap, whose size depends on how much that thread needs to store rather than on how many threads exist.
Since ThreadLocalMap is a map, how does it handle hash collisions?
- HashMap resolves collisions by building a linked-list structure on top of the array: conflicting entries are mounted on the list, and the list is converted to a red-black tree once its length exceeds a certain threshold.
- ThreadLocalMap has no list structure, only an array, and its implementation is plain hashing: on a collision it walks the array looking for an empty slot (or a slot whose key is equal, which it overwrites, since equality still has to be checked). Haha, that sounds simple, but the source code is still plenty complex.
- And when you are done with an entry, remember to remove() it.
Talk about Callable, Runnable, and Future
- java.lang.Runnable is an interface declaring a single run() method. run() returns void, so no result comes back after the task executes.
- Callable, located in the java.util.concurrent package, is also an interface with a single declared method, call(). It is a generic interface, and call() returns a value of the type parameter V.
- So where does Future come in? A Future is the way a task running on another thread hands a result back. You can get one by submitting a Callable to a thread pool, or by wrapping the Callable in a FutureTask. (A sketch of both follows.)
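A sketch showing both ways of getting a result back:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Callable<Integer> task = () -> 1 + 1;     // unlike Runnable, returns a value

        // Way 1: submit to an executor and get a Future back
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(task);
        System.out.println("executor result: " + future.get());  // blocks until done
        pool.shutdown();

        // Way 2: wrap in a FutureTask and run it on a plain Thread
        FutureTask<Integer> futureTask = new FutureTask<>(task);
        new Thread(futureTask).start();
        System.out.println("futureTask result: " + futureTask.get());
    }
}
```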
Do you understand CompletableFuture? Talk about how it is used.
- This is a Java 8 feature that compensates for Future's weakness: when an asynchronous task completes, we often need to continue working with its result without blocking to wait. thenAccept, thenApply, thenCompose, and friends hand the result of the previous asynchronous stage directly to another processing step. This is exactly the asynchronous style we want: multiple asynchronous handlers in a control flow can be chained together seamlessly.
- In Java 8, CompletableFuture offers a very powerful extension of Future that simplifies asynchronous programming: it supports functional-style callbacks on computed results and provides methods to transform and compose CompletableFutures.
- Typical uses include consuming a thread's result, transforming it, combining several results, and so on (see the sketch below).
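A minimal sketch chaining transform, combine, and consume (values are arbitrary):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> priceFuture =
                CompletableFuture.supplyAsync(() -> 100);          // async stage 1

        CompletableFuture<String> labelFuture = priceFuture
                .thenApply(p -> p * 2)                             // transform: 200
                .thenCombine(                                      // merge with another async result
                        CompletableFuture.supplyAsync(() -> "CNY"),
                        (amount, currency) -> amount + " " + currency);

        labelFuture
                .thenAccept(System.out::println)                   // consume: "200 CNY"
                .join();                                           // wait, for demo purposes
    }
}
```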
Talk about the benefits of thread pools
- The idea behind pooling is to reduce the cost of each acquisition of a resource and to improve resource utilization.
- Reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.
- Improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.
- Improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.
Talk about thread pool ThreadPoolExecutor and the meaning of its parameters
- corePoolSize: defines the minimum number of threads that can run at the same time.
- maximumPoolSize: once the number of tasks in the queue reaches the queue's capacity, the number of threads that can run simultaneously grows up to this maximum.
- workQueue: when a new task arrives, the pool checks whether the number of running threads has reached the core size; if so, the new task is placed in this queue.
- keepAliveTime: when the pool holds more threads than corePoolSize and no new tasks are being submitted, the threads beyond the core are not destroyed immediately; they wait until keepAliveTime has elapsed before being recycled.
- unit: the time unit for keepAliveTime.
- threadFactory: used when the executor creates a new thread.
- handler: the saturation policy, e.g. reject and throw an exception, silently discard the task, discard the oldest task in the queue and execute the current one, or run the task on the caller's thread. (A construction sketch follows.)
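Putting the parameters together; a construction sketch with illustrative values, not recommended settings:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),       // workQueue (bounded!)
                r -> {                               // threadFactory: name your threads
                    Thread t = new Thread(r);
                    t.setName("biz-pool-" + t.getId());
                    return t;
                },
                new ThreadPoolExecutor.CallerRunsPolicy()  // handler: saturation policy
        );

        pool.execute(() ->
                System.out.println(Thread.currentThread().getName() + " running"));
        pool.shutdown();
    }
}
```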
How does ThreadPoolExecutor work, maintaining threads and executing tasks at the same time?
- Thread pooling actually builds a producer-consumer model internally, decoupling threads and tasks from each other and not directly related to each other, so as to buffer tasks well and reuse threads.
- The operation of thread pool is mainly divided into two parts: task management and thread management. The task management section acts as a producer, and when a task is submitted, the thread pool determines the subsequent flow of that task.
- The thread management part is the consumer, which is uniformly maintained in the thread pool. According to the task request, threads are allocated. When the thread completes the task, it will continue to acquire new tasks to execute.
- Inside the thread pool, a single variable ctl maintains two values: the run state (runState) and the number of threads (workerCount). In the implementation these two key parameters are packed together, with runState stored in the high 3 bits and workerCount in the low 29 bits.
Task execution mechanism of thread pool (process when a task is added to the thread pool)
- First, check the RUNNING status of the thread pool. If it is not RUNNING, reject it directly. Ensure that the thread pool executes tasks in the RUNNING state.
- If workerCount < corePoolSize, a thread is created and started to execute the newly submitted task.
- If workerCount >= corePoolSize and the blocking queue in the thread pool is not full, the task is added to the blocking queue.
- If workerCount >= corePoolSize && workerCount < maximumPoolSize and the blocking queue in the thread pool is full, a thread is created and started to execute the newly submitted task.
- If workerCount >= maximumPoolSize and the blocking queue in the thread pool is full, the task is processed according to the reject policy. The default is to throw an exception directly.
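The same flow restated as a hedged sketch; this mirrors the list above and is not the real JDK implementation, which packs state into ctl and handles races:

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;

public class DispatchSketch {
    // A simplified restatement of the decision flow, NOT the actual JDK source
    static String decide(ThreadPoolExecutor pool) {
        if (pool.isShutdown()) {
            throw new RejectedExecutionException("pool is not RUNNING");
        }
        if (pool.getPoolSize() < pool.getCorePoolSize()) {
            return "start a new core worker for the task";
        }
        if (pool.getQueue().remainingCapacity() > 0) {
            return "enqueue the task into the blocking queue";
        }
        if (pool.getPoolSize() < pool.getMaximumPoolSize()) {
            return "start a non-core worker for the task";
        }
        return "apply the rejection policy";
    }
}
```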
Describe the use of thread pools in various business scenarios.
- Respond quickly to user requests
This comes straight from the business. My company does online education, and we have online operations that work much like e-commerce, except what we sell is online courses: live classes, recorded classes, question banks, and so on. A course detail page involves orders, goods, courses, several services at once, and the data has to be assembled before being shown to the front end. The user naturally wants the interface to respond as fast as possible; if it takes too long, I may not have the patience to keep looking. So we can request each service through a thread pool to shorten the overall request time. What does that demand of the pool? In this scenario the priority is maximum response speed to satisfy the user, so keep the queue small and raise corePoolSize and maxPoolSize to create as many threads as possible and execute tasks quickly.
- Quickly process batch tasks
Take our question bank: after students finish their exercises, the backend has to compute statistics, all kinds of completion rates and score rates, involving the students and possibly the teacher's classes and so on. This is batch business, and it should likewise use a multithreaded, parallel-computing strategy. Unlike the response-first scenario, though, these scenarios have a huge volume of tasks and no need for instantaneous completion; the concern is how to use limited resources to process as many tasks as possible per unit of time, which is a throughput-first problem. So set up a queue to buffer the concurrent tasks, and tune corePoolSize to an appropriate number of worker threads. Setting too many threads here can itself cause frequent thread context switches, slow down task processing, and reduce throughput.
How should we design a thread pool for our system? What are the best practices?
- When designing thread pools, use ThreadPoolExecutor and avoid using Executors’ newFixedThreadPool and newCachedThreadPool because of OOM risks.
- ThreadPoolExecutor allows you to check the status of the current thread in real time. You can write an interface that connects it to your system monitor
- Remember to name the thread
- Meituan's slick trick. This one really works; Xiao 66 rebuilt it following the ideas of the Meituan team, and it smells great. It is: dynamic thread pool parameters.
- What does that mean? When we start out, we do not know the best-practice pool parameters for our system. Say one group of business tasks needs a pool, so I configure plenty of core threads and a large maximum; but while handling that business I also have to call a downstream service, and if the downstream stalls, won't my pool make a mess of their service? All kinds of problems like this arise. To cope with such extreme business scenarios, the Meituan tech team designed this dynamic thread pool parameter solution.
- The JDK already lets users of a pool adjust its core policy at runtime through the ThreadPoolExecutor instance. Take setCorePoolSize as an example: when it is called at runtime, the pool overwrites the original corePoolSize value and reacts according to how the new value compares with the current state. If the new value is smaller than the current number of worker threads, there are surplus workers; idle workers are sent interrupt requests and recycled, and the remaining surplus is recycled the next time those workers go idle. If the new value is larger than the original and tasks are waiting in the queue, the pool creates new worker threads to execute them. You can then use a distributed configuration center such as Nacos to push the desired settings, dynamically updating parameters like the core thread count and the maximum thread count from configuration. (A sketch follows this list.)
- One more thing: how do we resize the queue? The capacity field in the queue cannot be set through the source API, because it is actually final. Haha, you think that can stop us? We are smart: copy the queue's source into our own class, drop the final, add a getter and setter for the field, and have it read from configuration. Hey hey, isn't that clever?
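A minimal sketch of the runtime-adjustment part using the real JDK setters (the new values stand in for ones pushed from a config center such as Nacos):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DynamicPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

        // Imagine these values arriving from a config center at runtime
        int newCore = 8, newMax = 16;
        pool.setMaximumPoolSize(newMax);   // raise the ceiling first so core <= max holds
        pool.setCorePoolSize(newCore);     // existing JDK API, takes effect immediately
        pool.setKeepAliveTime(30, TimeUnit.SECONDS);

        System.out.println("core=" + pool.getCorePoolSize()
                + ", max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```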
The End
That about wraps up concurrency and multithreading. Barring surprises, the next article will be on the JVM, which is also a rather important topic for us Java developers.
A daily plea for likes
Ok everybody, that's all for this article. Those of you who made it all the way here are true fans.
Creation is not easy; your support and recognition are the greatest motivation for my writing. See you in the next article!
Search WeChat for "six pulse Excalibur program life" and reply 888, and I'll send you a pile of materials I've collected.