Hello everyone, I'm xiao CAI — a rookie in the Internet industry who dreams of one day being no rookie at all. I can be soft or tough: like the post and I'm soft; read and run and I'm tough! "Hey you ~ when you finish reading, remember to give me the triple combo: like, comment, follow!"
❝
This article focuses on getting started with Java parallelism
Refer to it if necessary
If it is helpful, do not forget to give it a "like"
❞
Forget the damn parallelism
In the "Avoiding Ping Pong" forum thread in late 2014, Linus Torvalds took a very different view, saying, "Forget the damn parallelism!"
The whole "parallel computing is the future" thing is a bunch of crock.
Seeing this, my heart skipped a beat — I hadn't even learned parallelism yet, let alone gotten "elated" about it, and here I was being told to forget it.
But xiao CAI, who refuses to stay a rookie, dug a little deeper and found things aren't so simple: in everyday development we all reach for multithreading precisely to make programs run faster. Well, this is awkward!
What is parallelism?
“Parallel programs are easier to adapt to business requirements than serial programs.”
Simply put: in a family of three, you go to school, mom does the housework at home, and dad goes to work to earn money — three people doing different things at the same time to make their life better. In the serial case, one person would have to wear all the hats and do the work of three. How do you think that would go?
The Java virtual machine itself is very busy. Besides executing the main thread of the main function, it also needs to do JIT compilation, garbage collection, and so on. Each of these tasks is handled by a separate thread inside the virtual machine, and keeping the tasks independent of one another makes the system easier to understand and maintain.
"Forgetting it is impossible — I haven't even learned it yet, so let's 'remember' it all the harder." Come on, take xiao CAI's hand and let's conquer it together!
A few important concepts
Synchronous and Asynchronous
Synchronous and asynchronous are commonly used to describe a method call.
Synchronization: Once a synchronous method call is initiated, the caller must wait until the method finishes executing before continuing with subsequent behavior.
Asynchronous: An asynchronous method call is more like sending a message — once initiated, the call returns immediately and the caller can continue with subsequent operations. The method itself usually executes in another thread without hindering the caller's work.
To put it simply: synchronous is buying a ticket at the station — you queue, wait your turn, buy the ticket, and only then go do other things. Asynchronous is buying online — once you pay, the order is placed, and you can do other things while the ticket is issued.
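The ticket-buying analogy can be sketched in code. This is a minimal illustration, not from the original article: `buyTicketSync` is a stand-in name, and the async variant uses the JDK's `CompletableFuture`, which runs the task on another thread while the caller keeps working.

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    // The "buy a ticket" work; the caller of a synchronous call waits for this to return.
    static String buyTicketSync() {
        return "ticket";
    }

    // Asynchronous variant: supplyAsync hands the work to another thread and
    // returns a future immediately; join() waits only when the result is needed.
    static String buyTicketAsync() {
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(SyncVsAsync::buyTicketSync);
        // ... the caller is free to do other things here ...
        return future.join();
    }

    public static void main(String[] args) {
        // Synchronous: nothing below runs until the call completes.
        System.out.println("sync got: " + buyTicketSync());
        // Asynchronous: the call itself returned immediately inside buyTicketAsync().
        System.out.println("async got: " + buyTicketAsync());
    }
}
```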
Concurrency and Parallelism
Concurrency and parallelism are two particularly confusing concepts.
Parallelism: a true “simultaneous execution” of multiple tasks.
Concurrency: Multiple tasks are executed "alternately" — possibly even one after another — so they only appear simultaneous.
In actual development: if the system has only one CPU and uses multiple processes or threads to perform tasks, those tasks cannot be truly parallel. They are concurrent, with the CPU switching between them using time-slice round-robin scheduling.
A critical region
A critical section represents a common resource — shared data that can be used by multiple threads. But only one thread can use it at a time, and once the critical-section resource is occupied, other threads that want to use the resource must wait.
To put it simply: there is one printer, and it can only run one job at a time. If students A and B both need the printer and A gets there first, student B can print their own materials only after student A finishes using it.
"In parallel programs, critical section resources are the objects to be protected."
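The printer idea maps directly onto Java's `synchronized` keyword. The sketch below (my own example, not from the original) protects a shared counter as the critical-section resource: with two threads each incrementing 10,000 times, mutual exclusion guarantees the final count is exactly 20,000.

```java
public class CriticalSection {
    private int count = 0;

    // synchronized marks this method as a critical section: only one
    // thread at a time may execute it on the same object.
    public synchronized void increment() {
        count++;
    }

    public int get() {
        return count;
    }

    // Two threads contend for the critical section; without synchronized,
    // lost updates would make the result unpredictable.
    public static int run() {
        CriticalSection cs = new CriticalSection();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) cs.increment();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cs.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 20000
    }
}
```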
Blocking and non-blocking
Blocking and non-blocking are used to describe the interaction between multiple threads.
Blocking: Student A is occupying the printer, so student B has to wait for student A to finish. If student A keeps hogging the printer and refuses to let others use it, the other students cannot get on with their work.
Non-blocking: Student A occupies the printer, but that does not stop student B from working normally — student B can go do other things first.
DeadLock, Starvation, and LiveLock
Deadlock: Several threads wait on each other in a circle, none of them willing to release the resources it already holds. The standoff lasts forever, and no one breaks out of the loop.
Starvation: Student A is at the canteen window and student B queues behind. Then C, D, and others cut in line in front of B, and if students keep cutting in, B will never get a meal — that is starvation. Likewise, if one thread holds on to a critical resource indefinitely so that other threads needing it can never execute properly, that is also starvation.
Livelock: Student A wants to pass through a corridor and meets student B head-on. A steps to the right, B steps to the right too; A steps to the left, B steps to the left too. After a few rounds, humans will eventually yield and let each other pass. Threads in this situation are not that smart: they keep "politely" backing off, the resource bounces back and forth between the two threads, and neither ever actually gets it. That is a livelock.
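The deadlock case can be made concrete. Below is a minimal sketch (class and method names are mine): the classic recipe is two threads acquiring two locks in opposite orders, and the standard fix — shown in the code that actually runs — is to impose one global acquisition order so no circular wait can form.

```java
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    static final ReentrantLock LOCK_A = new ReentrantLock();
    static final ReentrantLock LOCK_B = new ReentrantLock();

    // Deadlock recipe: thread 1 takes A then B while thread 2 takes B then A.
    // If each grabs its first lock, both wait forever for the other's lock.
    // The fix used below: EVERY thread acquires LOCK_A before LOCK_B,
    // so a circular wait is impossible.
    static void useBoth(String who) {
        LOCK_A.lock();
        try {
            LOCK_B.lock();
            try {
                System.out.println(who + " holds both locks");
            } finally {
                LOCK_B.unlock();
            }
        } finally {
            LOCK_A.unlock();
        }
    }

    static void runBoth() {
        Thread t1 = new Thread(() -> useBoth("t1"));
        Thread t2 = new Thread(() -> useBoth("t2"));
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join(); // completes because both threads use the same lock order
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        runBoth();
    }
}
```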
Concurrency level
Concurrency levels can be divided into:
- Blocking
When a thread is blocked, it cannot continue executing until another thread releases the resource. For example, while contending for a "synchronized" block or a "reentrant lock," threads can be blocked.
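A blocked thread can actually be observed via `Thread.getState()`. In this sketch (my own example; the sleep durations are arbitrary values chosen to make the timing reliable), one thread holds a monitor while a second tries to enter it — the second lands in the `BLOCKED` state until the lock is released.

```java
import java.util.concurrent.TimeUnit;

public class BlockedDemo {
    static final Object MONITOR = new Object();

    // Start one thread that holds the monitor, then a second that tries
    // to enter it; the second thread sits in state BLOCKED until the
    // first releases the lock.
    static Thread.State observe() {
        Thread holder = new Thread(() -> {
            synchronized (MONITOR) {
                sleep(500); // hold the lock long enough to observe blocking
            }
        });
        holder.start();
        sleep(100); // let the holder acquire the monitor first

        Thread waiter = new Thread(() -> {
            synchronized (MONITOR) { /* must wait for the holder */ }
        });
        waiter.start();
        sleep(100); // give the waiter time to hit the contended monitor

        Thread.State state = waiter.getState(); // expected: BLOCKED
        try {
            holder.join();
            waiter.join();
        } catch (InterruptedException ignored) {
        }
        return state;
    }

    static void sleep(long ms) {
        try {
            TimeUnit.MILLISECONDS.sleep(ms);
        } catch (InterruptedException ignored) {
        }
    }

    public static void main(String[] args) {
        System.out.println("waiter state: " + observe());
    }
}
```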
- Starvation-free
If threads have priorities, thread scheduling will always favor the higher-priority thread. Whether starvation can occur depends on whether the lock is fair:
Unfair lock: The system allows high-priority threads to jump the queue, potentially starving low-priority threads.
Fair lock: First come, first served — no matter how high its priority, a new arrival must queue, so starvation will not occur.
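In the JDK, `ReentrantLock` exposes exactly this choice: the no-argument constructor gives an unfair (barging) lock, while `new ReentrantLock(true)` requests a fair, first-come-first-served lock. A small sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // ReentrantLock defaults to an unfair lock (threads may "barge in"
    // ahead of waiters); passing true requests a fair FIFO lock, at the
    // cost of lower throughput.
    static boolean isFairLock(boolean fair) {
        return new ReentrantLock(fair).isFair();
    }

    public static void main(String[] args) {
        System.out.println("new ReentrantLock()     fair? " + isFairLock(false));
        System.out.println("new ReentrantLock(true) fair? " + isFairLock(true));
    }
}
```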
- Obstruction-free
Obstruction freedom is the weakest level of non-blocking scheduling. If two threads execute without obstructing each other, neither will be suspended because of problems in the critical section.
If blocking is a "pessimistic strategy," then non-blocking is an "optimistic strategy." An obstruction-free multithreaded program will not necessarily run smoothly: if critical-section resources conflict severely, all threads may keep rolling back their operations, and no thread ever makes it out of the critical section.
Obstruction freedom can be achieved with a CAS (Compare And Swap) style policy built on a "consistency flag." Before operating, a thread reads and records this flag; after the operation completes, it reads the flag again to check whether it changed. If the two reads differ, the resource conflicted with another thread during the operation, and the operation must be retried.
Accordingly, any thread that operates on the resource should update this consistency flag, indicating that the data is no longer safe.
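The read-check-retry pattern described above is the classic CAS loop. A minimal sketch using the JDK's `AtomicInteger` (my own example, not from the original):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Classic CAS retry loop: read the current value (the "flag"),
    // compute the result, and publish it only if nobody changed the
    // value in between; otherwise retry the whole operation.
    public int increment() {
        while (true) {
            int seen = value.get();            // read and remember
            int next = seen + 1;               // do the work
            if (value.compareAndSet(seen, next)) {
                return next;                   // unchanged since we read it: success
            }
            // another thread modified the value mid-operation: retry
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) {
        CasCounter counter = new CasCounter();
        for (int i = 0; i < 5; i++) counter.increment();
        System.out.println(counter.get()); // 5
    }
}
```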
- Lock-free
Lock-free parallelism is obstruction-free. With no lock, every thread can attempt to enter the critical section; in addition, "lock-freedom guarantees that some thread can leave the critical section in a finite number of steps."
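Lock-free progress is what the JDK's atomic classes deliver out of the box. In this sketch (my own example), two threads bump a shared counter with no lock anywhere — `incrementAndGet()` is a CAS retry loop internally, and some thread always completes its step in a finite number of attempts, so the final total is exact.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    // Two threads increment a shared counter with no lock at all:
    // any thread may enter the "critical section", and CAS guarantees
    // that at least one of the contenders succeeds on each round.
    static int run() {
        AtomicInteger count = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count.incrementAndGet();
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 20000, with no lock anywhere
    }
}
```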
- Wait-free
Lock-freedom requires only that "some thread can leave the critical section in a finite number of steps"; wait-freedom goes further, requiring that "all threads complete in a finite number of steps."
A typical wait-free structure is "RCU (Read-Copy-Update)". The basic idea: reads proceed on the original data without any control; writes are made to a copy of the data, and the modified copy is written back at an appropriate time.
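The closest thing to this idea in the JDK is `CopyOnWriteArrayList` (my choice of illustration, not named in the original): readers traverse an immutable snapshot with no control at all, while each write copies the underlying array, modifies the copy, and swaps it in.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class RcuStyle {
    // Readers iterate over a snapshot with no locking; each write copies
    // the backing array, modifies the copy, then publishes it.
    static final List<String> LIST = new CopyOnWriteArrayList<>();

    public static void main(String[] args) {
        LIST.add("a");
        // The iterator sees the snapshot taken when the loop started,
        // so writing during iteration never disturbs the reader
        // (an ArrayList here would throw ConcurrentModificationException).
        for (String s : LIST) {
            LIST.add(s + "!");
        }
        System.out.println(LIST); // [a, a!]
    }
}
```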
JMM (Java Memory Model)
The key technical points of the "JMM" are built around the atomicity, visibility, and ordering of operations across multiple threads.
Atomicity
Atomicity means that an operation is uninterruptible. Even when multiple threads execute together, once an operation is started, it cannot be disturbed by other threads.
To put it simply: suppose there is a static global variable i, and thread A assigns it the value 1 while thread B assigns it the value 2. Afterwards, the value of i is either 1 or 2 — whatever the interleaving, the two threads cannot interfere with each other to produce a corrupted value.
Note: Problems can occur with a "long" where they would not with an "int", because on a 32-bit virtual machine reads and writes of a "long" (which is 64 bits) are not guaranteed to be atomic.
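The standard fixes for the 64-bit problem are declaring the `long` as `volatile` (the JLS guarantees atomic reads/writes of volatile `long` and `double`) or using `AtomicLong`. A minimal sketch of the three variants (my own example):

```java
import java.util.concurrent.atomic.AtomicLong;

public class LongAtomicity {
    static long plain;             // a 64-bit write MAY tear into two
                                   // 32-bit halves on a 32-bit JVM
    static volatile long safe;     // volatile guarantees atomic reads
                                   // and writes even for long/double
    static final AtomicLong counter = new AtomicLong(); // atomic, plus
                                   // CAS-style read-modify-write ops

    public static void main(String[] args) {
        safe = 0x1234_5678_9ABC_DEF0L; // written as one indivisible unit
        counter.addAndGet(42);
        System.out.println(counter.get());
    }
}
```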
Visibility
Visibility means that when one thread changes the value of a shared variable, other threads immediately know that the value has changed. The visibility problem does not exist in serial programs: if you modify a variable in one step, subsequent steps will read the modified value.
"Two threads share a variable. Due to compiler or hardware optimization, thread B may cache the variable in a CPU cache or register. If thread A then modifies the variable, thread B will not be aware of the change and will keep reading the old value from its cache."
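Java's answer to the stale-cache problem is the `volatile` keyword. In this sketch (my own example; the field names are arbitrary), the reader thread spins on a flag — declared `volatile`, the writer's update is guaranteed to become visible, so the loop terminates:

```java
public class VisibilityDemo {
    // Without volatile, the reader thread could keep re-using a cached
    // value of 'ready' and spin forever; volatile forces every read to
    // observe the latest write.
    static volatile boolean ready = false;
    static int payload = 0;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write is visible */ }
            System.out.println("saw payload = " + payload);
        });
        reader.start();

        payload = 42;   // ordinary write...
        ready = true;   // ...published by the volatile write that follows it
        try {
            reader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```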
Orderliness
Within a single thread of executing code, we take it for granted that statements run from front to back — and for the "single-thread view" of the program, that holds. With multithreading, however, the program can appear to run "out of order": code written earlier may execute later. This is because the compiler and CPU may rearrange instructions as the program runs, and the rearranged order need not match the order of the original instructions.
If thread A executes writer() first and thread B then executes reader(), thread B may still not see a == 1 when it executes i = a + 1.
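The writer()/reader() pair referred to above does not appear in the text, so here is a hedged reconstruction of the classic reordering example it describes (field names `a` and `flag` are my assumption):

```java
public class Reorder {
    int a = 0;
    boolean flag = false;

    public void writer() {
        a = 1;           // (1)
        flag = true;     // (2) the compiler/CPU may reorder (2) before (1)
    }

    public int reader() {
        if (flag) {
            // If (2) was reordered before (1), another thread can arrive
            // here seeing flag == true but a == 0, and compute 1, not 2.
            return a + 1;
        }
        return -1;       // flag not yet observed
    }
}
```

Within a single thread the reordering is invisible: calling writer() then reader() on the same thread always yields 2.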
One thing to note here: to a single thread, its own instructions always appear to execute in order (otherwise the program would not work at all). The premise of instruction rearrangement is "to ensure the consistency of serial semantics."
Which instructions cannot be reordered (Happen-Before rule)
- Program sequence principle: guarantee semantic serialization within a thread
- Volatile rule: A write to a volatile variable happens before subsequent reads of it, which guarantees the visibility of volatile variables
- Lock rule: An unlock must occur before a subsequent lock
- Transitivity: A precedes B, and B precedes C, so A must precede C
- Thread start rule: A thread's start() method precedes every action of that thread
- Thread interruption rule: A call to interrupt() precedes the point where the interrupted thread's code detects the interruption
- Object finalization rule: An object's constructor must finish executing before its finalize() method runs
❝
Work a little harder today, and tomorrow you'll have one less favor to beg for!
I'm xiao CAI, someone who learns together with you. 💋
❞