Multithreaded learning methods
Multithreading is a notorious disaster area in interviews, but understanding it deeply and using it well pays off handsomely, especially in an interview: answer the multithreading questions well and you earn real points, and you can withstand even the interviewer's killer questions!
Whatever we learn, we first need an overall picture: survey the whole landscape before digging into the details. If possible, draw a mind map and record every branch and knowledge point on it. That makes it easy, as we study, to decompose the subject and learn it step by step, moving from point to plane, and from plane to solid.
For multithreading in particular, the recommended path is to understand the theoretical basis and then verify it with small DIY experiments. If we have enough time, we can read the classic books on multithreading, such as Core Java, Java Concurrency in Practice (often called the bible of Java concurrency), and The Art of Multiprocessor Programming. Students who lack the time will have fewer chances to read books and practice on their own; in that case, studying by working through interview questions is a workable fallback.
Squeeze CPU performance
A while ago, this little farmer saw a set of job requirements that he found interesting enough to write down. What were they?
- Solid Java foundation: familiar with the JVM, multithreading, collections and other fundamentals; familiar with distributed systems, caching, messaging, search and similar mechanisms
- More than 3 years of Java development experience; familiar with Spring, MyBatis and other frameworks
- A keen interest in squeezing performance out of the CPU!
- Some project planning and decision-making ability; good at capturing business requirements, spotting problems in system architecture design, and providing effective solutions
- Strong domain design and business analysis skills; able to analyze and solve problems independently
Look at the third item: a keen interest in squeezing performance out of the CPU. The history of threads is a history of squeezing the computer's CPU: the more efficient our multithreaded code is, the harder we are squeezing the CPU. But note that while we squeeze, keeping the program running correctly is something we must also consider.
History of threads
The history of threads tracks the repeated upgrading of server CPUs. When we learn a new technology, understanding the background it grew out of helps us grasp and master it better. The detour may seem useless and time-consuming, but it pays off in the later stages of learning. From the perspective of thread development, the history can be divided into five stages:
1. Single process
The earliest machines ran a single process with manual switching: only one program could run at a time, and to switch to another application you had to stop the current process by hand and then run the other program. CPU utilization was low, because most of the time the CPU sat waiting for a human to intervene, as shown in the figure below:
2. Batch processing
Gradually people found this too slow and too inefficient, so multi-process batch processing appeared. Think of it this way: if we have five programs, we can submit all five at once instead of switching between them by hand. But if the first program blocks, the other four still have to wait for it, as shown below:
3. Parallel processing
Programs are written into different memory locations and the CPU switches back and forth between them. Suppose we have three programs A, B and C. Program A is running on the CPU, but network latency or some other cause makes A block; then program B can move onto the CPU and execute, and if B blocks in turn, program C can take the CPU.
4. Multi-threading
Within one program, different tasks can run on threads that switch back and forth. Take an IDE such as IntelliJ IDEA: while we use it, some tasks may be waiting on network transfers, some are rendering the code on screen, some are saving our code and recording it in the local history, and so on; these tasks execute in parallel. This is where the concept of a thread arose: a thread is one of the parallel execution paths within a process, the unit by which different tasks inside one program switch. Making threads efficient is actually quite complicated internally and involves knowledge of networking and IO.
5. Fiber/coroutine
This is the lightest-weight thread, also called a green thread. It is a user-level thread, managed by the application rather than by the operating system, so the application can decide for itself how its "threads" behave. The operating system kernel cannot see fibers and does not schedule them; each fiber has its own stack space. An application can create multiple fibers inside one thread and then run them manually: a fiber does not run automatically, the application itself must tell it to run or switch to the next fiber.
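Java itself only gained a fiber-like construct recently: virtual threads (Project Loom, standard since JDK 21). A minimal sketch of the idea, assuming a JDK 21+ runtime (the class name is illustrative):

```java
// Virtual threads: lightweight user-mode threads scheduled by the JVM,
// not mapped one-to-one onto OS kernel threads (requires JDK 21+).
public class T02_VirtualThread {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() ->
                System.out.println("running in a virtual thread"));
        vt.join(); // wait for the virtual thread to finish
    }
}
```

Unlike the fibers described above, virtual threads are scheduled automatically by the JVM, so the application does not have to switch them by hand.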
Process/thread/fiber
1. What is a process
A process is a running application in the system: once a program starts running, it becomes a process, such as the QQ or WeChat we use every day. A process is the independent unit by which the system allocates resources; each process has its own separate address space, a process can contain multiple threads, and each thread uses its own stack space within the process.
```java
// Process: the basic unit of operating-system resource allocation
// (memory, open files, network IO); each process gets its own address space.
public class T00_Process {
    public static void main(String[] args) {
        System.out.println("hello world");
    }
}
```
2. What is a thread
A thread is the smallest unit of program execution. It is contained within a process and is the actual working unit of that process. A thread is a single sequential flow of control; a process can contain multiple concurrent threads, each performing a different task in parallel, which is why a thread is often called a lightweight process.
Single thread: one execution path within a process.
Multithreading: multiple execution paths running in parallel within a single process.
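To make the "multiple execution paths" idea concrete, here is a minimal sketch (the class and variable names are illustrative):

```java
// Two threads as two independent execution paths inside one process.
public class T01_TwoPaths {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker path"));
        worker.start();                  // starts a second execution path
        System.out.println("main path"); // the main thread continues on its own path
        worker.join();                   // wait for the second path to finish
    }
}
```

Both lines are printed, but their order is not guaranteed: the two paths run independently once `start()` is called.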
3. What is a fiber
A fiber contains its own independent stack and the control information for its register state. Controlling fiber switching yourself requires relatively deep programming experience. Because a fiber belongs to the thread that runs it, blocking a fiber blocks that whole thread. The defining characteristic of fibers is very fast switching.
Characteristics of fibers
- Threads are implemented in the kernel (for example, the Windows kernel); the operating system schedules them according to its own scheduling algorithm.
- Fibers are implemented in user mode; the kernel knows nothing about them.
- Fibers are even more lightweight than threads, and one thread can contain one or more fibers.
What is thread switching
What is context switching between threads? In multithreading, a context switch is the process of taking CPU control away from a thread that is currently running and handing it to another thread that is ready and waiting for the CPU.
Does it make sense to use multithreading on a single CPU
Usually a task spends time not only on the CPU but also on IO (querying a database, fetching web pages, reading and writing files, and so on). While one thread is waiting for IO, the CPU is idle and another thread can use it to compute; running several threads together can keep both the IO channel and the CPU busy. Moreover, today's machines generally expose virtualized, elastic resources, so running multithreaded code is usually worthwhile even on a "single" CPU.
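A small sketch of that overlap: one thread "waits on IO" (simulated here with a sleep, an illustrative stand-in for a real database or network call) while the main thread keeps the CPU busy, so useful work gets done during the wait even on one core:

```java
// Overlapping IO waiting with CPU work using two threads.
public class T03_IoOverlap {
    public static void main(String[] args) throws InterruptedException {
        Thread io = new Thread(() -> {
            try {
                Thread.sleep(100); // simulate waiting on IO (disk, network, DB)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("io done");
        });
        io.start();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i; // CPU work while IO waits
        System.out.println("cpu sum = " + sum);
        io.join();
    }
}
```

The CPU-bound loop finishes long before the simulated IO does; single-threaded code would instead sit idle for the whole wait.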
The larger the number of worker threads is, the better
Of course not: switching between threads also consumes resources, and more threads means more back-and-forth switching. Past a certain point, the cost of the extra context switches outweighs the gain from the extra threads.
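A common starting point for sizing a pool (the formula popularized by Java Concurrency in Practice) is threads ≈ cores × (1 + wait time / compute time). The wait-to-compute ratio below is an assumed illustrative value, not something measured:

```java
// Rough thread-pool sizing: cores * (1 + W/C), where W/C is the ratio of
// time a task spends waiting (IO) to time it spends computing.
public class T04_PoolSize {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        double waitToCompute = 1.0; // assumed W/C ratio; measure it in practice
        int poolSize = (int) (cores * (1 + waitToCompute));
        System.out.println("suggested pool size = " + poolSize);
    }
}
```

For purely CPU-bound work W/C is near zero and the formula degenerates to roughly one thread per core; heavily IO-bound work justifies many more threads than cores.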
I am the little farmer. Why fear that truth is endless? Every inch of progress brings its inch of joy. Keep going, everyone!