This is the fifth day of my participation in the August Text Challenge. For details, see: August Text Challenge.

This article is from chaoCode. When reposting, please credit the author and include a link to the original article at the beginning. The author reserves the right to pursue violators.

Preface

We rely on concurrent programming constantly in day-to-day development. Most of the time, the underlying code has already handled concurrency safely for us, but there are still many cases we need to control ourselves, so we need to learn concurrent programming. Let's start step by step.

This article is conceptual and somewhat lengthy, so please be patient with it. These are the basic concepts we need to understand before learning concurrent programming.

Basic concepts

1. What are processes and threads

A process is the smallest unit to which an operating system allocates resources, including CPU time, memory space, and disk I/O. Multiple threads in a process share all of the process's system resources, while processes are independent of each other. A process is an instance of a program with certain independent functions running on a data set; it is the system's independent unit of resource allocation and scheduling, and it is one execution of a program on a computer. When you run a program, you start a process. Clearly, a program is static while a process is dynamic. Processes can be divided into system processes and user processes: system processes perform the various functions of the operating system (they are the operating system itself in its running state), while user processes are all the processes you start yourself.

A thread is the smallest unit of CPU scheduling and must belong to a process. A thread is an entity within a process and the basic unit of CPU scheduling and dispatch; it is smaller than a process and can run independently. A thread owns virtually no system resources of its own, only the few resources essential to running (such as a program counter, a set of registers, and a stack), but it shares all the resources owned by its process with the other threads belonging to that same process.

Threads are everywhere. Any program must create threads, especially in Java: every program starts a main thread that runs the main method. In Java Web development, scheduled tasks, timers, JSP and Servlets, asynchronous message processing, remote access interfaces such as RMI, and any listener or onClick event handler are all inseparable from threads and concurrency.
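As a minimal sketch of the point above: the JVM starts a main thread for us, and we can create additional threads ourselves. The class and thread names here are just for illustration.

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // The JVM has already started the "main" thread to run main()
        System.out.println("main thread: " + Thread.currentThread().getName());

        // Create and start a second thread with an explicit name
        Thread worker = new Thread(() ->
                System.out.println("worker thread: " + Thread.currentThread().getName()),
                "worker-1");
        worker.start();
        worker.join(); // wait for the worker to finish before main exits
    }
}
```

Even this trivial program is already multithreaded: the main thread and worker-1 are scheduled independently by the operating system.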

2. Relationship between the number of CPU cores and the number of threads

Multi-core: also called single-chip multiprocessing (CMP), an idea proposed at Stanford University in the United States: integrate the large-scale parallel processors of an SMP (symmetric multiprocessor) system onto a single chip, with each processor executing different processes in parallel. Relying on multiple CPUs to run programs simultaneously in parallel is an important direction for achieving ultra-fast computing, and is known as parallel processing.

Simultaneous Multithreading (SMT): allows multiple threads on the same processor core to execute simultaneously, sharing the core's execution resources.

Number of cores and number of threads: mainstream CPUs today are multi-core. The reason for increasing the number of cores is to increase the number of hardware threads, because the operating system schedules work onto threads. They generally correspond 1:1, meaning a quad-core CPU typically has four hardware threads. But with Intel's Hyper-Threading technology, the cores and threads form a 1:2 relationship.
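We can query this from Java. Note that `Runtime.availableProcessors()` reports logical processors (hardware threads), so on a quad-core chip with Hyper-Threading it typically returns 8, not 4.

```java
public class CpuInfo {
    public static void main(String[] args) {
        // Reports logical processors (hardware threads) visible to the JVM,
        // not physical cores: Hyper-Threading doubles this number
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("logical processors: " + logical);
    }
}
```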

3. CPU time slice rotation mechanism

When developing, we don't feel limited by the number of CPU cores: we can start threads whenever we want, even on a single-core CPU. Why? Because the operating system provides a CPU time slice rotation mechanism.

Time slice rotation scheduling, also known as RR scheduling, is one of the oldest, simplest, fairest and most widely used algorithms. Each process is assigned a period of time, called its time slice, which is the amount of time the process is allowed to run.

Baidu Encyclopedia explains the rotation mechanism of CPU time slices as follows:

If a process is still running when its time slice ends, the CPU is taken away from it and allocated to another process. If the process blocks or finishes before its time slice ends, the CPU switches immediately. All the scheduler does is maintain a list of ready processes; when a process uses up its time slice, it is moved to the end of the queue.

The only interesting design question in time slice rotation scheduling is the length of the time slice. Switching from one process to another takes time: saving and loading register values and memory images, updating various tables and queues, and so on. If a process switch, sometimes called a context switch, takes 5 ms and the time slice is set to 20 ms, then after every 20 ms of useful work the CPU spends 5 ms switching processes, so 20% of CPU time is wasted on administrative overhead.

To improve CPU efficiency, we could set the time slice to 5000 ms; then only about 0.1% of the time is wasted. But consider what happens in a time-sharing system if ten interactive users press Enter almost simultaneously. Assuming all the other processes use up their full time slices, the last unfortunate process has to wait 5 seconds before it gets a chance to run. Most users cannot tolerate a five-second response to a short command, and the same problem can occur on a PC running multiple programs.

The conclusion: setting the time slice too short causes too many process switches and reduces CPU efficiency; setting it too long leads to poor response to short interactive requests. A time slice of around 100 ms is usually a reasonable compromise.
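The trade-off above is simple arithmetic: the wasted fraction is switch time divided by (slice + switch time). Reproducing the article's numbers with an assumed 5 ms switch cost:

```java
import java.util.Locale;

public class SliceOverhead {
    public static void main(String[] args) {
        double switchMs = 5; // assumed context-switch cost from the example above
        for (double sliceMs : new double[]{20, 100, 5000}) {
            // Fraction of CPU time spent switching rather than doing useful work
            double overheadPct = switchMs / (sliceMs + switchMs) * 100;
            System.out.printf(Locale.ROOT,
                    "slice %.0fms -> %.1f%% of CPU time spent switching%n",
                    sliceMs, overheadPct);
        }
    }
}
```

This prints 20.0% for the 20 ms slice and roughly 0.1% for the 5000 ms slice, matching the figures in the text, with the 100 ms compromise sitting at about 4.8%.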

This also explains something familiar: when a runaway program pegs the CPU at 100%, you can usually still kill it without restarting the computer. I think that is because of this mechanism — time slicing ensures other processes still get a chance to run.

Thank you for reading, and feel free to point out any mistakes in the comments. If this post helped you, please like 👍 and follow.