This is my NTH day on the Challenge. To understand concurrent programming, you need to know how CPU multithreading works.
Context switch mechanism: The CPU implements multithreading by allocating a time slice (typically a few tens of milliseconds) to each thread. The time slice is the amount of CPU time allotted to each thread; because it is so short, the CPU keeps switching between threads, which makes it feel as if multiple threads are executing simultaneously. The CPU cycles through tasks according to a scheduling algorithm (for example, round-robin time slicing or priority scheduling). When a time slice expires, the CPU switches to the next task; before switching, it saves the state of the current task so that this state can be restored when the CPU switches back to that task. The process of saving and then reloading this state is a context switch.
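The interleaving that time slicing produces is easy to see in a small sketch. The class name and thread names below are my own; the point is only that the scheduler decides when each thread runs, so the two threads' output lines interleave nondeterministically:

```java
public class TimeSliceDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                // Each println may land in a different time slice,
                // so lines from the two threads can interleave.
                System.out.println(Thread.currentThread().getName() + " slice " + i);
            }
        };
        Thread a = new Thread(task, "thread-A");
        Thread b = new Thread(task, "thread-B");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```

Run it a few times: both threads finish, but the order of their lines can differ from run to run.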
The underlying implementation of the concurrency mechanism
Java code is compiled into bytecode files, which are loaded by the classloader into the JVM. The JVM executes the bytecode, which ultimately needs to be converted into assembly instructions for execution on the CPU. The concurrency mechanism used in Java depends on the JVM implementation and CPU instructions.
Synchronized and Volatile
In Java, there are four lock states, from lowest to highest: no lock, biased lock, lightweight lock, and heavyweight lock. Synchronized has long been the elder statesman of multithreaded concurrent programming, and is known to many as the heavyweight lock.
The basis for synchronization with Synchronized: every object in Java can be used as a lock. Concretely, this takes the following three forms:
- For normal synchronized methods, the lock is the current instance object.
- For static synchronized methods, the lock is the Class object of the current class.
- For synchronized blocks, the lock is the object specified in the synchronized parentheses.
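The three forms above can be sketched in one class (the class and method names here are illustrative, not from the original):

```java
public class SyncForms {
    // 1. Normal synchronized method: the lock is the current instance (this).
    public synchronized void instanceMethod() { }

    // 2. Static synchronized method: the lock is the Class object (SyncForms.class).
    public static synchronized void staticMethod() { }

    // 3. Synchronized block: the lock is the object named in the parentheses.
    private final Object monitor = new Object();
    public void blockMethod() {
        synchronized (monitor) {
            // critical section guarded by `monitor`
        }
    }

    public static void main(String[] args) {
        SyncForms s = new SyncForms();
        s.instanceMethod();
        SyncForms.staticMethod();
        s.blockMethod();
        System.out.println("all three lock forms ran");
    }
}
```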
When a thread attempts to enter synchronized code, it must first acquire the lock, and it must release the lock when it exits the code or throws an exception.
The implementation of Synchronized in the JVM
What is a Monitor? A Monitor is a synchronization tool, or synchronization mechanism. It is usually described as an object with the following features:
- All of the object's methods execute mutually exclusively. A Monitor holds only one "permission" to run: any thread entering any method must obtain this "permission" and return it when it leaves.
- A signal mechanism is usually provided that lets the thread holding the "permission" temporarily give it up while waiting for a predicate (a condition variable) to become true; when the condition holds, another thread can "notify" the thread waiting on the condition variable so that it can obtain the permission again.
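The signal mechanism above maps directly onto Java's built-in `wait`/`notify`. A minimal sketch (class and field names are my own): the waiting thread gives up the monitor inside `wait()`, and the signaling thread sets the predicate and notifies.

```java
public class MonitorSignal {
    private final Object permit = new Object();
    private boolean ready = false;   // the predicate (condition variable)

    public void await() throws InterruptedException {
        synchronized (permit) {
            while (!ready) {      // re-check the predicate after every wakeup
                permit.wait();    // give up the "permission" while waiting
            }
        }
    }

    public void signal() {
        synchronized (permit) {
            ready = true;
            permit.notify();      // wake a thread waiting on the condition
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorSignal m = new MonitorSignal();
        Thread waiter = new Thread(() -> {
            try {
                m.await();
                System.out.println("condition met");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(50);         // let the waiter block first
        m.signal();
        waiter.join();
    }
}
```

Note the `while` loop rather than an `if`: `wait()` can return spuriously, so the predicate must be re-checked before proceeding.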
The JVM implements method synchronization and code-block synchronization by entering and exiting Monitor objects, via the monitorenter and monitorexit instructions. A monitorenter instruction is inserted at the start of a synchronized block after compilation, while monitorexit instructions are inserted at the end of the block and at exception exit points. The JVM guarantees that every monitorenter is paired with a monitorexit. Every object has a Monitor associated with it, and while that Monitor is held, the object is locked. When a thread executes the monitorenter instruction, it attempts to acquire ownership of the object's Monitor, that is, the lock on the object.
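The exception-path monitorexit is what makes the lock safe to use with exceptions. A small sketch (names are mine): a synchronized block throws, yet the lock is still released, so another thread can acquire it immediately afterward.

```java
public class MonitorExitOnException {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Disassembling this class with `javap -c` would show one
        // monitorenter and two monitorexit instructions for the block
        // below: one for normal exit, one on the exception path.
        try {
            synchronized (lock) {
                throw new IllegalStateException("boom");
            }
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }

        // The exception-path monitorexit already released the lock,
        // so this thread is not blocked.
        Thread t = new Thread(() -> {
            synchronized (lock) {
                System.out.println("lock reacquired");
            }
        });
        t.start();
        t.join();
    }
}
```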
Definitions and principles of Volatile
Volatile is the lightweight counterpart of Synchronized: it guarantees the "visibility" of shared variables in multiprocessor development. Visibility means that when one thread modifies a shared variable, another thread can read the changed value. When used properly, the volatile modifier is cheaper to use and implement than Synchronized because it causes no thread context switching or scheduling. Checking volatile in assembly code: if we compile code that writes a volatile variable down to assembly, we find that the write is translated into an instruction with a Lock prefix. Instructions prefixed with Lock cause two things on multi-core processors:
- The data in the current processor's cache line is written back to system memory.
- The write-back invalidates copies of that memory address cached by other CPUs.
First, understand that to improve processing speed, the processor does not communicate with memory directly; it first reads data from system memory into its internal cache and then operates on it, and it is not known when that data will be written back to memory. If you write to a volatile variable, however, the JVM sends the processor a Lock-prefixed instruction that writes the variable's cache line back to system memory. Moreover, on a multiprocessor, a cache coherence protocol keeps each processor's cache consistent: each processor snoops the data propagated on the bus to check whether its own cached values have become stale. When a processor discovers that the memory address behind one of its cache lines has been modified, it marks that cache line invalid; the next time it operates on that data, it re-reads it from system memory into its cache.
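The visibility guarantee described above is the classic use case for a volatile stop flag. A minimal sketch (names are mine): without `volatile`, the worker thread is allowed to keep reading a cached copy of the flag and may spin forever; with it, the write is pushed to main memory and other caches are invalidated, so the worker sees the change.

```java
public class VolatileVisibility {
    // volatile makes the write to `running` visible to the worker thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);   // let the worker enter its loop
        running = false;     // volatile write: flushed to system memory,
                             // other CPUs' cached copies are invalidated
        worker.join();
    }
}
```

Dropping the `volatile` modifier here would make termination JIT- and platform-dependent, which is exactly the staleness problem the Lock-prefixed write avoids.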