More than a month before Double Eleven, every system related to the promotion goes through load testing and continuous optimization, and our e-commerce ERP system was no exception: it underwent over a month of load testing and tuning. During this process we found a large number of timeout alarms. Profiling tools showed that the context-switch (cs) count was abnormally high, and when we analyzed the logs we found a large number of exceptions related to wait(). At the time, we suspected that under heavy concurrency many threads were not being scheduled in time. We then reduced the maximum number of threads in the thread pool, ran the load test again, and found that system performance improved significantly.
As we all know, in concurrent programming more threads does not mean more efficiency. Too few threads may leave resources under-utilized; too many threads lead to fierce competition for resources, and the resulting frequent context switching imposes extra overhead on the system.
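To make this concrete, here is a minimal sketch of bounding concurrency with a fixed-size pool instead of spawning unbounded threads. The class name and the busy-loop workload are illustrative, not taken from the original system; sizing the pool near the core count is a common rule of thumb for CPU-bound work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizingDemo {
    public static void main(String[] args) throws InterruptedException {
        // For CPU-bound work, a pool sized near the number of cores keeps
        // every core busy without piling up runnable threads that would
        // only compete for time slices and cause extra context switches.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                long sum = 0;
                for (int j = 0; j < 1_000_000; j++) {
                    sum += j; // stand-in for real CPU-bound work
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("done with " + cores + " worker threads");
    }
}
```

With a pool like this, 100 tasks share a handful of threads instead of 100 threads fighting over the scheduler.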
What is a context switch
We all know that when handling multi-threaded concurrent tasks, the processor assigns each thread a CPU time slice, and each thread runs its task only within the slice it has been allotted. A time slice is typically a few milliseconds, so threads may switch between one another dozens or even hundreds of times per second, which feels to us as if they were running at the same time.
A thread occupies the processor only within its allocated time slice. When a thread exhausts its slice, is forcibly suspended, or stops for reasons of its own, another thread takes over the processor. This process, in which one thread cedes the right to use the processor and another thread acquires it, is called a context switch.
When a thread cedes control of the processor, it is "switched out"; the thread that gains the processor is "switched in". During this process the operating system saves and restores the relevant execution state, which is commonly referred to as the "context". The context generally includes the contents of the registers and the instruction address held in the program counter.
Causes of context switches
In multithreaded programming, we know that context switching between threads can cause performance problems. So what causes threads to switch contexts? Let's look at the thread lifecycle to find out.
The five classic states of a thread are well known: NEW, RUNNABLE, RUNNING, BLOCKED, and DEAD. The corresponding six states in Java are NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.
In the figure, the transition of a thread from RUNNABLE to RUNNING is a context switch, as is the path from RUNNING to BLOCKED, back to RUNNABLE, and then to RUNNING again. When a thread goes from RUNNING to BLOCKED, we say the thread is suspended. While it is suspended, the processor is taken over by another thread, and the operating system saves the suspended thread's context so that it can resume its previous execution once it returns to the RUNNABLE state. When a thread is awakened from BLOCKED to RUNNABLE, it retrieves the context that was saved for it.
As we can see, context switching between threads is really caused by threads moving back and forth between these running states.
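The Java-level states above can be observed directly with Thread.getState(). A small sketch (the 100 ms pause is a simplifying assumption to give the worker time to reach wait(); the class name is illustrative):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(); // the thread cedes the processor here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(100);                // let the worker reach wait()
        System.out.println(t.getState()); // typically WAITING by now
        synchronized (lock) {
            lock.notify();                // back to RUNNABLE, then RUNNING
        }
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}
```

Each printed transition (NEW, WAITING, TERMINATED) corresponds to a point where the scheduler saved or restored this thread's context.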
We know that there are two situations that cause context switches: one is triggered by the program itself, commonly called a spontaneous context switch; the other is induced by the system or the virtual machine, called a non-spontaneous context switch.
A spontaneous context switch happens when a thread is switched out by a call made in the Java program itself, usually through one of the following methods or keywords:
sleep()
wait()
yield()
join()
park()
synchronized
lock
Common causes of non-spontaneous context switches are: a thread exhausting its allocated time slice, virtual machine garbage collection, or preemption by a higher-priority thread.
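Among the spontaneous triggers listed above, sleep() is the easiest to observe: the calling thread voluntarily gives up the processor for a fixed period. A minimal sketch (class name and timings are illustrative):

```java
public class SleepSwitchDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                // sleep() is a spontaneous cede: the thread gives up the
                // processor and the scheduler runs other threads until
                // the timeout expires.
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sleeper.start();
        Thread.sleep(100); // let the sleeper reach its sleep() call
        // TIMED_WAITING is the Java-level state behind this voluntary switch
        System.out.println(sleeper.getState());
        sleeper.join();
    }
}
```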
A small test to observe context switching
Let's compare the speed of concurrent versus serial execution with an example:
public class DemoApplication {
    public static void main(String[] args) {
        // Run the multi-threaded test
        MultiThreadTester test1 = new MultiThreadTester();
        test1.start();
        // Run the single-threaded test
        SerialTester test2 = new SerialTester();
        test2.start();
    }

    static class MultiThreadTester extends ThreadContextSwitchTester {
        @Override
        public void start() {
            long start = System.currentTimeMillis();
            MyRunnable myRunnable1 = new MyRunnable();
            Thread[] threads = new Thread[4];
            // Create multiple threads sharing one Runnable
            for (int i = 0; i < 4; i++) {
                threads[i] = new Thread(myRunnable1);
                threads[i].start();
            }
            for (int i = 0; i < 4; i++) {
                try {
                    // Wait for every thread to finish
                    threads[i].join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            long end = System.currentTimeMillis();
            System.out.println("multi thread exec time: " + (end - start) + "ms");
            System.out.println("counter: " + counter);
        }

        // The task run by all four threads
        class MyRunnable implements Runnable {
            public void run() {
                while (counter < count) {
                    synchronized (this) {
                        if (counter < count) {
                            increaseCounter();
                        }
                    }
                }
            }
        }
    }

    // Single-threaded version of the same work
    static class SerialTester extends ThreadContextSwitchTester {
        @Override
        public void start() {
            long start = System.currentTimeMillis();
            for (long i = 0; i < count; i++) {
                increaseCounter();
            }
            long end = System.currentTimeMillis();
            System.out.println("serial exec time: " + (end - start) + "ms");
            System.out.println("counter: " + counter);
        }
    }

    // Common parent class
    static abstract class ThreadContextSwitchTester {
        public static final int count = 100000000;
        public volatile int counter = 0;

        public int getCount() {
            return this.counter;
        }

        public void increaseCounter() {
            this.counter += 1;
        }

        public abstract void start();
    }
}
Execution results:
multi thread exec time: 5149ms
counter: 100000000
serial exec time: 956ms
counter: 100000000
Comparing the results, serial execution is faster than concurrent execution. This is due to the extra overhead the system incurs from multi-thread context switching, and because the synchronized keyword introduces lock contention, which itself causes thread context switches. Even without the synchronized keyword, concurrent execution can still be slower than serial execution, because multi-thread context switching exists even without lock contention.
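As a sketch of removing the lock contention (not part of the original article's code), the synchronized block can be replaced with an AtomicLong CAS loop. Two caveats: the final count can slightly overshoot the target, because the check and the increment are not one atomic step, and the target here is reduced from the article's 100,000,000 for a quicker run.

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {
    static final long COUNT = 10_000_000L;
    static final AtomicLong counter = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                // A CAS loop instead of a monitor lock: no thread is ever
                // BLOCKED, so lock-induced context switches disappear.
                // The final value may exceed COUNT by a few increments.
                while (counter.get() < COUNT) {
                    counter.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        long end = System.currentTimeMillis();
        System.out.println("atomic exec time: " + (end - start) + "ms");
        System.out.println("counter: " + counter.get());
    }
}
```

Even with the lock gone, the four threads still pay for CAS retries and scheduler-driven switches, which is why serial execution can remain competitive for such simple work.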
Where the system overhead of a context switch comes from:
- The operating system saves and restores context
- Processor cache loading
- The scheduler performs scheduling
- The cache may be flushed due to a context switch
Conclusion
A context switch is the process in which one thread releases the processor and another thread acquires it. Both spontaneous and non-spontaneous switches incur overhead on system resources. More threads does not mean faster execution. When the logic is simple and runs quickly, we recommend using a single thread; when the logic is complex, or a lot of computation is required, multithreading can improve the performance of the system.