Basic methods of thread coordination include wait, notify, notifyAll, sleep, join, yield, etc.
- Thread wait
A thread that calls wait() enters the WAITING state and stays there until it is notified by another thread or interrupted. Because wait() must be called while holding the object's monitor, it is typically used inside synchronized methods or synchronized blocks.
- Thread sleep
sleep() causes the current thread to sleep. Unlike wait(), sleep() does not release the lock the thread holds. sleep(long) puts the current thread into the TIMED_WAITING state, while wait() with no timeout puts it into the WAITING state.
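The state difference can be observed directly with Thread.getState(). A minimal sketch (the class name `SleepStateDemo` is illustrative): a sleeping thread reports TIMED_WAITING when inspected from another thread.

```java
// Minimal sketch: a thread blocked in sleep(long) is in the TIMED_WAITING
// state; wait() with no timeout would put it in WAITING instead.
public class SleepStateDemo {
    public static Thread.State observeSleepingState() throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(5_000);          // TIMED_WAITING while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sleeper.start();
        Thread.sleep(200);                    // give the sleeper time to block
        Thread.State state = sleeper.getState();
        sleeper.interrupt();                  // wake it so the demo exits promptly
        sleeper.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeSleepingState()); // TIMED_WAITING
    }
}
```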
- Thread yield
yield() causes the current thread to give up its CPU time slice and compete with other threads for the CPU again. In general, threads with higher priority have a better chance of winning a time slice, but this is not guaranteed; some operating systems are insensitive to thread priority.
- Thread interrupts
Interrupting a thread is intended to send it a notification signal: it sets an interrupt flag maintained inside the thread. The thread's state (blocked, terminated, etc.) does not change by itself.
(1). Calling interrupt() does not stop a running thread. In other words, threads in the RUNNING state are not terminated by being interrupted; only the internally maintained interrupt flag is set.
(2). If a thread is in the TIMED_WAITING state because sleep() was called, interrupt() makes it throw InterruptedException and leave the TIMED_WAITING state early.
(3). Many methods that declare InterruptedException (such as Thread.sleep(long millis)) clear the interrupt flag before throwing the exception, so calling isInterrupted() afterwards returns false.
(4). The interrupt flag is an inherent identification bit of the thread and can be used to terminate the thread safely. For example, to stop a thread by calling thread.interrupt(), check the value of Thread.currentThread().isInterrupted() inside the thread's run() method and exit gracefully based on it.
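Point (4) can be sketched as follows (the class name `GracefulStop` is illustrative): the worker loops until its interrupt flag is set, and restores the flag if sleep() clears it by throwing InterruptedException.

```java
// Minimal sketch of cooperative termination via the interrupt flag.
public class GracefulStop {
    public static int runUntilInterrupted() throws InterruptedException {
        final int[] loops = {0};
        Thread worker = new Thread(() -> {
            // Exit cooperatively when the interrupt flag is set.
            while (!Thread.currentThread().isInterrupted()) {
                loops[0]++;
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    // sleep() clears the flag before throwing, so restore it
                    // to make the loop condition see the interrupt.
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();
        Thread.sleep(100);
        worker.interrupt();   // request termination; does not kill the thread
        worker.join();
        return loops[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped after " + runUntilInterrupted() + " iterations");
    }
}
```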
- Thread join
When the current thread calls another thread's join() method, the current thread blocks until that other thread terminates. It then moves from the blocked state back to the ready state and waits to be scheduled on the CPU again.
- Why the join() method?
In many cases, the main thread creates and starts a child thread and needs the result the child thread produces; that is, the main thread should terminate only after the child thread has terminated. join() is used in exactly this situation.
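This usage can be sketched as follows (the class name `JoinDemo` is illustrative): the main thread joins on the child before reading the child's result, which is exactly the "main finishes after child" pattern described above.

```java
// Minimal sketch: main blocks in join() until the child thread has
// terminated, so reading the child's result afterwards is safe.
public class JoinDemo {
    public static int computeWithJoin() throws InterruptedException {
        final int[] result = new int[1];
        Thread child = new Thread(() -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            result[0] = sum;
        });
        child.start();
        child.join();        // current thread blocks here until child terminates
        return result[0];    // safe to read: child has finished
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(computeWithJoin()); // 5050
    }
}
```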
The notify() method in the Object class wakes up a single thread that is waiting on this object's monitor. If multiple threads are waiting on the object, one of them is chosen to be woken; the choice is arbitrary and left to the implementation. The awakened thread cannot proceed until the current thread releases the lock on the object, and it then competes in the usual way with all other threads synchronizing on the object. The related notifyAll() method wakes up all threads waiting on the monitor.
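The wait/notify handshake can be sketched as follows (the class name `WaitNotifyDemo` is illustrative): both sides synchronize on the same monitor, the waiter loops to guard against spurious wakeups, and wait() releases the lock so the notifier can enter.

```java
// Minimal sketch of a one-shot handoff using wait()/notify() on a shared monitor.
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static String message;

    public static String exchange() throws InterruptedException {
        Thread producer = new Thread(() -> {
            synchronized (lock) {       // wait/notify require holding the monitor
                message = "ready";
                lock.notify();          // wake one thread waiting on this monitor
            }
        });
        synchronized (lock) {
            producer.start();           // producer blocks on the monitor until we wait()
            while (message == null) {   // loop guards against spurious wakeups
                lock.wait();            // releases the lock while waiting
            }
        }
        producer.join();
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange()); // ready
    }
}
```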
- Other methods:
(1). sleep(): forces a thread to sleep for n milliseconds.
(2). isAlive(): checks whether a thread is alive.
(3). join(): waits for a thread to terminate.
(4). activeCount(): the number of active threads in the program.
(5). enumerate(): enumerates the threads in the program.
(6). currentThread(): gets the current thread.
(7). isDaemon(): whether a thread is a daemon thread.
(8). setDaemon(): sets a thread as a daemon thread. (The difference between a user thread and a daemon thread is whether the JVM waits for it: the JVM exits once all user threads have terminated, without waiting for daemon threads.)
(9). setName(): sets a name for the thread.
(10). wait(): forces a thread to wait.
(11). notify(): notifies a waiting thread to continue running.
(12). setPriority(): sets the priority of a thread.
(13). getPriority(): gets the priority of a thread.
Thread context switch
It makes clever use of time-slice round-robin scheduling: the CPU serves each task for a fixed slice of time, then saves the state of the current task, loads the state of the next task, and continues serving that one. By repeatedly saving and reloading task state in this way, time-slice rotation makes it possible to execute multiple tasks on a single CPU.
- Process
A process (sometimes called a task) is an instance of a running program. In Linux, threads are lightweight processes that can run in parallel and share the same address space (an area of memory) and other resources with their parent process (the process that created them).
- context
The contents of CPU registers and program counters at a point in time.
- Register
A register is small but fast memory inside the CPU (as opposed to the relatively slow RAM main memory outside the CPU). Registers speed up programs by providing fast access to commonly used values, usually intermediate results of computations.
- Program counter
The program counter is a special register that indicates where the CPU is in its instruction sequence, holding either the current instruction or the next instruction to be executed, depending on the system.
- PCB, the "switch frame"
A context switch can be seen as the kernel (the core of the operating system) switching processes (including threads) on the CPU. The information saved during a context switch is stored in the process control block (PCB), which is also often referred to as a switch frame. This information is kept in kernel memory until it is needed again.
- Context switch activities:
(1). Suspend a process and store its state (context) somewhere in memory.
(2). Retrieve the context of the next process from memory and restore it into the CPU registers.
(3). Jump to the position pointed to by the program counter (i.e., the line of code where the process was interrupted) to resume the process.
- The cause of the thread context switch
(1). The current task's time slice is used up, and the system scheduler normally switches to the next task;
(2). The current task hits an I/O block, so the scheduler suspends it and continues with the next task;
(3). Multiple tasks compete for a lock; the current task fails to acquire it and is suspended by the scheduler, which continues with the next task;
(4). User code suspends the current task to free up CPU time;
(5). A hardware interrupt occurs.
Synchronization locks and deadlocks
- Synchronization lock
Problems can easily arise when multiple threads access the same data at the same time. To avoid this, we need thread synchronization, i.e., mutual exclusion: among multiple concurrently executing threads, only one thread at a time is allowed to access the shared data. In Java, you can use the synchronized keyword to acquire a lock on an object.
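A minimal sketch of mutual exclusion with synchronized (the class name `SyncCounter` is illustrative): without the synchronized keyword, concurrent `count++` operations could interleave and lose updates; with it, only one thread at a time enters the method.

```java
// Minimal sketch: synchronized methods lock on `this`, so concurrent
// increments cannot interleave and no updates are lost.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; }  // one thread at a time
    public synchronized int get() { return count; }

    public static int countWithThreads(int threads, int perThread) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return c.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithThreads(4, 10_000)); // 40000, no lost updates
    }
}
```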
- A deadlock
A deadlock occurs when multiple threads are blocked at the same time, each waiting for a resource held by another, so that none of them can ever proceed.
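The classic cause is acquiring two locks in opposite orders. A minimal sketch (the class name `DeadlockDemo` is illustrative; the threads are daemons so the JVM can still exit), using the JDK's ThreadMXBean to confirm the cycle:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Minimal sketch: two threads take locks a and b in opposite orders,
// producing a deadlock that the JVM's ThreadMXBean can detect.
public class DeadlockDemo {
    public static boolean provokeDeadlock() throws InterruptedException {
        final Object a = new Object(), b = new Object();
        Thread t1 = new Thread(() -> {
            synchronized (a) {
                sleepQuietly(100);
                synchronized (b) { }   // blocks forever: t2 holds b
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (b) {
                sleepQuietly(100);
                synchronized (a) { }   // blocks forever: t1 holds a
            }
        });
        t1.setDaemon(true);            // daemons: JVM can exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500);             // let both threads reach the deadlock
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.findDeadlockedThreads() != null;   // JVM reports the cycle
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlocked: " + provokeDeadlock());
    }
}
```

The standard fix is to impose a global lock ordering so every thread acquires a before b.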
Principle of thread pool
The main job of the thread pool is to control the number of running threads: tasks are placed in a queue during processing and started once threads are available for them. If there are more tasks than threads, the excess tasks wait in the queue until other threads finish executing, and the freed threads then pull tasks from the queue to execute. Its main characteristics are: thread reuse; control of the maximum number of concurrent threads; thread management.
- Thread the reuse
Every Thread has a start() method. When start() is called, the Java virtual machine invokes the thread's run() method, and run() in turn calls the Runnable object's run() method. The trick behind thread pools is to keep each worker thread alive in a loop that repeatedly fetches Runnable tasks and calls their run() methods directly, instead of starting a new thread per task. The loop is typically implemented over a blocking queue, which can block until the next Runnable is available.
- Composition of a thread pool
A typical thread pool consists of the following four components:
(1). Thread pool manager: Used to create and manage thread pools
(2). Worker thread: a thread in the thread pool
(3). Task interface: the interface that each task must implement to schedule its running by the worker thread
(4). Task queue: it is used to store tasks to be processed and provide a buffer mechanism
Thread pooling in Java is implemented by the Executor framework, which uses the Executor, Executors, ExecutorService, ThreadPoolExecutor, Callable, Future, and FutureTask classes.
- corePoolSize: specifies the number of core threads in the thread pool.
- maximumPoolSize: specifies the maximum number of threads in the thread pool.
- keepAliveTime: the survival time of extra idle threads; when the pool contains more threads than corePoolSize, idle threads beyond that number are destroyed after this time.
- unit: the time unit of keepAliveTime.
- workQueue: the queue of tasks that have been submitted but not yet executed.
- threadFactory: a thread factory used to create threads; usually the default is used.
- handler: the rejection policy applied when there are too many tasks to handle.
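The parameters map one-to-one onto the ThreadPoolExecutor constructor. A minimal sketch (the class name `PoolConfig` and the chosen values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    // Each constructor argument corresponds to one parameter described above.
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),         // workQueue (bounded)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getCorePoolSize() + " / " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```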
- Rejection policies
When the thread pool has run out of threads to serve new tasks and the waiting queue is also full, we need a rejection policy to handle the problem reasonably. The rejection policies built into the JDK are as follows:
(1). AbortPolicy: directly throws a RejectedExecutionException, preventing the system from continuing as normal.
(2). CallerRunsPolicy: This policy runs the currently discarded task directly in the caller thread as long as the thread pool is not closed. Obviously, this will not actually drop the task, but it is highly likely that the performance of the task submission thread will drop dramatically.
(3). DiscardOldestPolicy: discards the oldest request (i.e., the task about to be executed) and attempts to submit the current task again.
(4). DiscardPolicy: This policy silently discards unprocessed tasks without any processing. This is the best solution if you allow task loss. All the preceding built-in rejection policies implement the RejectedExecutionHandler interface. If the RejectedExecutionHandler interface still cannot meet actual requirements, you can extend the RejectedExecutionHandler interface.
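A rejection can be provoked deterministically by saturating a small pool. A minimal sketch using AbortPolicy (the class name `RejectionDemo` is illustrative): with one worker and a one-slot queue, the third task has nowhere to go and triggers the handler.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Saturate a 1-thread pool with a 1-slot queue: the third task is rejected.
    public static boolean saturate() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        boolean rejected = false;
        try {
            pool.execute(blocker);      // occupies the single worker thread
            pool.execute(() -> {});     // fills the one queue slot
            pool.execute(() -> {});     // no thread, no queue space -> AbortPolicy
        } catch (RejectedExecutionException e) {
            rejected = true;            // AbortPolicy threw as documented
        } finally {
            release.countDown();        // unblock the worker and clean up
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third task rejected: " + saturate());
    }
}
```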
- Java thread pool working process
(1). When a thread pool is created, there are no threads in it. The task queue is passed in as a parameter. However, even if there are tasks in the queue, the thread pool will not execute them immediately.
(2). When the execute() method is called to add a task, the thread pool will make the following judgment:
A) If the number of running threads is smaller than corePoolSize, create a thread to run the task immediately;
B) Queue the task if the number of running threads is greater than or equal to corePoolSize;
C) If the queue is full and the number of running threads is smaller than maximumPoolSize, create a non-core thread to run the task immediately;
D) If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool applies the rejection policy (by default, throwing RejectedExecutionException).
(3). When a thread completes a task, it takes the next task from the queue to execute it.
(4). When a thread has been idle for longer than keepAliveTime, the thread pool checks whether the number of currently running threads is greater than corePoolSize; if so, the thread is stopped. So after all the tasks in the thread pool are complete, the pool eventually shrinks to corePoolSize threads.
You are welcome to share and discuss with us. We will bring you one or two knowledge points a day; let's grow together.