A brief history of concurrency

Early computers had no operating system: a single serial program ran from beginning to end, and that program had access to all of the computer’s resources. As technology developed, operating systems emerged. They gave programs the concepts of processes and threads, and the ability to run multiple programs at a time, with different programs running in separate processes. The operating system allocates resources, including memory, file handles, and security credentials, to independently executing processes. Processes exchange data through the system’s communication mechanisms, such as sockets, signal handlers, shared memory, semaphores, and files. Operating systems support the simultaneous execution of multiple programs for the following reasons:

  1. Resource utilization. If a program is waiting for a time-consuming operation to complete, another program can run in the meantime, which increases resource utilization.
  2. Fairness. By means of time slices, for example, programs take turns occupying the computer’s resources, rather than one program running from beginning to end before the next starts.
  3. Convenience. It is often easier to write several programs that each perform one task and communicate with each other as necessary than to write a single program that performs all the tasks.

The serial programming model has the advantage of being intuitive and simple: do one thing at a time until it is done, then do the next. In many cases, however, this serial model is not ideal. Suppose we want to boil water to make tea and then read a book. Done serially, we have to wait until the water has boiled and the tea is ready before we can start reading. In real life, we would start boiling the water, read the book while waiting, and make the tea once the water boils. This is the intuition behind synchronous and asynchronous execution in computer programs, and it is for these reasons that processes and threads came into being.
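The tea example can be sketched with a background thread; the class and method names here are illustrative, not from the original article:

```java
// A minimal sketch of the tea example: boiling the water is started on a
// background thread, so "reading" can proceed while the kettle heats up.
public class TeaTime {
    static volatile boolean waterBoiled = false;

    public static void main(String[] args) throws InterruptedException {
        Thread kettle = new Thread(() -> {
            try {
                Thread.sleep(100);           // simulate waiting for the water to boil
                waterBoiled = true;
                System.out.println("Water boiled, making tea");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        kettle.start();                      // asynchronous: start the kettle...
        System.out.println("Reading a book while the water heats");
        kettle.join();                       // ...and only wait when we need the water
    }
}
```

The main thread is free to do other work between `start()` and `join()`, which is exactly the asynchrony the text describes.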

Threads are sometimes called lightweight processes. In most operating systems today, scheduling is based on threads, not processes. A process can create multiple threads that share its process-wide resources, and those threads run independently of each other unless some explicit mechanism coordinates them. Because all of a process’s threads can access its variables, a variable being used by one thread may be accessed by another thread at the same time, with unpredictable results, unless an explicit synchronization mechanism coordinates access to the shared data. Each thread does, however, have its own program counter, stack, local variables, and so on.
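The distinction can be made concrete: in the sketch below (names are illustrative), the field `shared` lives on the heap and is visible to both threads, while each thread’s loop counter `i` lives on that thread’s own stack.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads share the process-wide field `shared`; each thread's local
// variable `i` is private to that thread's stack.
public class SharedVsLocal {
    static final AtomicInteger shared = new AtomicInteger(); // visible to all threads

    public static int run() throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {   // `i` is per-thread (stack-local)
                shared.incrementAndGet();       // shared state, so access is coordinated
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared.get();                    // both threads updated the same variable
    }
}
```

`AtomicInteger` is used here precisely because the field is shared; a plain `int` would exhibit the race condition discussed later.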

Advantages of threads

If used properly, threads can reduce development and maintenance costs and improve performance. They can also reduce code complexity, making code easier to write, read, and maintain. In GUI programs, threads can improve the responsiveness of the interface; in server programs, they can improve resource utilization and throughput.

  • Leverage the power of multi-core processors.
  • Simplicity of modeling. A complex, asynchronous workflow can be broken down into simpler pieces, each running in its own thread and interacting only at specific synchronization points.
  • Simplified handling of asynchronous events. A blocked thread does not hold up the processing done by other threads.
  • A more responsive user interface. Time-consuming operations run in dedicated threads rather than in the main UI thread. In an Android app, for example, time-consuming work must not be done in the UI thread, because it would hurt the responsiveness of the UI.
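The last point can be sketched with a worker thread; the class and method names are illustrative assumptions, not from the original article:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Instead of blocking the main ("UI") thread, a time-consuming task is
// handed to a worker thread; the caller stays free until it actually
// needs the result.
public class Responsive {
    public static String loadSlowly() throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        Future<String> result = worker.submit(() -> {
            Thread.sleep(100);          // simulate a slow operation (e.g. I/O)
            return "data";
        });
        // ...the calling thread could keep handling user events here...
        String data = result.get();     // block only at the point the result is needed
        worker.shutdown();
        return data;
    }
}
```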

Risk of threads

Java’s use of threads is a double-edged sword. The advantages of threads are well known, provided that we can write safe concurrent code. But given the subtle risks of concurrency and gaps in developers’ skills, our programs may not behave as intended. It is therefore important to understand the risks of concurrency.

Safety issues

The order in which unsynchronized threads execute is unpredictable and can produce strange results. In the sequence-generator class below, multiple threads may obtain the same value at the same time.

	public class UnsafeSequence {
	    private int value;
	    // Return a unique value
	    public int getNext() {
	        return value++;
	    }
	}

The increment operation value++ actually comprises three separate operations:

  1. Read the value of value
  2. Add 1 to it
  3. Write the new value back

Because the operations of multiple threads interleave, it is possible for two threads to read the same value at the same time, as threads A and B do below:

The figure above illustrates a common concurrency safety problem called a race condition. Because multiple threads share the same memory address space and run concurrently, they can access or modify variables that other threads are using. Sharing data this way is more convenient than other inter-thread communication mechanisms, but it also brings a significant risk: threads can fail because they cannot anticipate changes to the data.
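The lost update can be shown deterministically by writing out the three steps by hand and interleaving them the way threads A and B do (this single-threaded simulation is illustrative; real thread interleavings are nondeterministic):

```java
// Deterministic illustration of the lost update: both "threads" read the
// same value before either writes back, so one increment disappears.
public class LostUpdate {
    static int value;

    public static int interleave() {
        value = 0;          // reset so the simulation is repeatable
        int a = value;      // thread A: read value (0)
        int b = value;      // thread B: read value (also 0!)
        a = a + 1;          // thread A: add 1
        b = b + 1;          // thread B: add 1
        value = a;          // thread A: write back 1
        value = b;          // thread B: write back 1 — A's update is lost
        return value;       // 1, not the expected 2
    }
}
```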

Fortunately, Java provides various synchronization mechanisms to coordinate such access. Changing the method in the example above into a synchronized method prevents this error.

	public class Sequence {
	    private int value;
	    // Return a unique value
	    public synchronized int getNext() {
	        return value++;
	    }
	}
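That the synchronized version really hands out unique values can be checked under contention; the harness class below is illustrative:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// With getNext() synchronized, read-add-write is atomic, so every caller
// observes a unique value even under heavy contention.
public class SequenceDemo {
    private int value;
    public synchronized int getNext() { return value++; }

    public static boolean allUnique() throws InterruptedException {
        SequenceDemo seq = new SequenceDemo();
        Set<Integer> seen = Collections.synchronizedSet(new HashSet<>());
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) seen.add(seq.getNext());
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        return seen.size() == 4000; // no two calls returned the same value
    }
}
```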

Liveness problems

Safety means that nothing bad ever happens; liveness means that something good eventually happens. A liveness problem occurs when an operation cannot make progress, for example when code enters an infinite loop. Deadlock between threads is also a liveness problem: if thread A is waiting for thread B to release a resource, and thread B never releases it, thread A will wait forever.
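The classic deadlock arises when two threads acquire the same two locks in opposite orders. One common remedy, sketched below with illustrative names, is to always acquire the locks in a single fixed order, which removes the cycle so both threads always finish:

```java
// Both threads take lockA first and lockB second, so neither can end up
// holding one lock while waiting for the other: no deadlock cycle.
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int work = 0;

    static void doWork() {
        synchronized (lockA) {       // every thread takes lockA first...
            synchronized (lockB) {   // ...then lockB, so no cycle can form
                work++;
            }
        }
    }

    public static int run() throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::doWork);
        Thread t2 = new Thread(LockOrdering::doWork);
        t1.start(); t2.start();
        t1.join(); t2.join();       // both joins return: no thread is stuck
        return work;
    }
}
```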

Performance issues

Liveness means that something good eventually happens, but that alone is not good enough: we usually want the right thing to happen as quickly as possible. Performance problems include long service times, poor responsiveness, low throughput, and high resource consumption.

In a well-designed concurrent program, threads improve performance, but they nevertheless incur some runtime overhead:

  1. Context switching. When the scheduler temporarily suspends an active thread and switches to another, the overhead is significant: the execution context must be saved and restored, and frequent switches mean more CPU time is spent on scheduling than on running the threads.
  2. Synchronization. When synchronization is used, some compiler optimizations tend to be suppressed.

Threads are everywhere

Even if a program never explicitly creates a thread, frameworks may create threads on its behalf, and code invoked from those threads must also be thread-safe. A framework introduces concurrency into an application by calling application code from the framework’s own threads. That code will inevitably access application state, so every code path that touches that state must be thread-safe.

The following facilities all invoke application code from threads outside the application’s control:

  1. Timer
  2. Servlets and JavaServer Pages (JSPs)
  3. Remote Method Invocation (RMI)
  4. Swing and AWT
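Timer is the simplest case to demonstrate: java.util.Timer runs each TimerTask on its own background thread, so any application state the task touches must be thread-safe. The class below (names illustrative) uses an AtomicInteger rather than a plain int for exactly that reason:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// The TimerTask runs on the Timer's thread, not the caller's, so the
// shared counter must be thread-safe.
public class TimerThreadDemo {
    static final AtomicInteger hits = new AtomicInteger();

    public static int runOnce() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override public void run() {   // executes on the Timer's own thread
                hits.incrementAndGet();
                done.countDown();
            }
        }, 10);                             // fire once after 10 ms
        done.await();                       // caller waits for the background task
        timer.cancel();
        return hits.get();
    }
}
```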

This article was originally published on the WeChat public account [Linli Teenager].