Small knowledge, big challenge! This article is participating in the creation activity of “Essential Tips for Programmers”.

In a multithreaded environment, programs often run into thread-safety problems, so Java provides thread synchronization mechanisms to solve them, such as the synchronized keyword and Lock implementations like ReentrantLock.

Pessimistic locks and optimistic locks

Locks can be divided into two general categories:

  • Pessimistic locking
  • Optimistic locking

As the name implies, pessimistic locking assumes the worst case: every time it reads data, it assumes another thread will modify it, so it locks the data on every access, and any other thread that wants to modify the data blocks until it can acquire the lock. Examples include table locks, row locks, read locks, and write locks in MySQL, and synchronized and ReentrantLock in Java.

Optimistic locking, by contrast, assumes the best case: every time it reads data, it assumes no other thread will modify it, so it takes no lock. Instead, when updating the data it checks whether another thread has modified it in the meantime; if not, the update proceeds normally, and if so, the update fails. Common optimistic-locking techniques include version-number control and the CAS algorithm.

Pessimistic locking applications

Examples are as follows:

import java.util.ArrayList;
import java.util.List;

public class LockDemo {

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                for (int j = 0; j < 1000; ++j) {
                    count++;
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

This program starts 50 threads, and each thread increments the shared variable count 1000 times. If there were no thread-safety problem, the final result would be 50000, but count++ is not atomic, so there certainly is one. A typical run prints:

48634

To address thread-safety issues, use the synchronized keyword:

import java.util.ArrayList;
import java.util.List;

public class LockDemo {

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                // Use the synchronized keyword to address thread safety issues
                synchronized (LockDemo.class) {
                    for (int j = 0; j < 1000; ++j) {
                        count++;
                    }
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

The code that modifies count is wrapped in a synchronized block, so while one thread is performing the ++ operations, no other thread can enter the block; each thread must wait for the previous one to finish its 1000 increments before it can continue. This guarantees a final result of 50000.

Using ReentrantLock can also address thread-safety issues:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        Lock lock = new ReentrantLock();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                // Use ReentrantLock to resolve thread safety issues
                lock.lock();
                try {
                    for (int j = 0; j < 1000; ++j) {
                        count++;
                    }
                } finally {
                    lock.unlock();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

Both locking mechanisms are concrete implementations of pessimistic locking: the lock is always taken, whether or not another thread actually tries to modify the data at the same time, and this is what guarantees the operations are atomic.

Optimistic locking applications

Since blocking and waking threads is very costly for the operating system, we should try to avoid switching threads between those states; this need gave rise to optimistic locking.

In a database table, we often add a version field; this is an application of optimistic locking. Suppose a table contains the following data:

+----+------+----------+---------+
| id | name | password | version |
+----+------+----------+---------+
|  1 | zs   | 123456   |       1 |
+----+------+----------+---------+

How does it avoid thread safety issues?

Suppose there are two threads A and B that want to modify this data. They execute the following SQL statement:

select version from e_user where name = 'zs';

update e_user set password = 'admin',version = version + 1 where name = 'zs' and version = 1;

Both threads first query the zs user and read version 1. Suppose thread A performs the update first: it changes the password to admin and increments the version to 2. When thread B then performs its update, the WHERE clause still requires version = 1, so the statement matches no rows and the update fails. Thread B can only re-read the version number and try again. This is optimistic locking: neither the program nor the database takes a lock, yet the data stays thread-safe.
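The retry flow described above can be sketched in plain Java as an in-memory stand-in for the SQL; the VersionedUser class and updatePassword method are illustrative names, not part of any real schema:

```java
// Hypothetical in-memory sketch of version-number optimistic locking;
// a real application would run the equivalent SQL against the database.
public class VersionDemo {

    static class VersionedUser {
        String password = "123456";
        int version = 1;

        // Mimics: UPDATE e_user SET password = ?, version = version + 1
        //         WHERE name = 'zs' AND version = ?
        synchronized boolean updatePassword(String newPassword, int expectedVersion) {
            if (version != expectedVersion) {
                return false; // another update got there first; caller must re-read and retry
            }
            password = newPassword;
            version++;
            return true;
        }
    }

    public static void main(String[] args) {
        VersionedUser user = new VersionedUser();
        int v = user.version; // both "threads" read version 1

        boolean a = user.updatePassword("admin", v); // thread A succeeds, version becomes 2
        boolean b = user.updatePassword("guest", v); // thread B fails: version is no longer 1

        System.out.println(a + " " + b + " " + user.version); // true false 2
    }
}
```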

CAS

In Java, we can apply the same idea with the CAS-based atomic classes:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LockDemo {

    static AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                for (int j = 0; j < 1000; ++j) {
                    // Use AtomicInteger to solve thread safety problems
                    count.incrementAndGet();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for all threads to complete
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

Why is it possible to solve thread-safety problems using the AtomicInteger class?

Let’s look at the source code:

public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

When count calls the incrementAndGet() method, it actually calls the getAndAddInt() method of the Unsafe class:

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while(!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

The key code is the loop inside getAndAddInt(). Suppose thread A enters the method: var1 is the AtomicInteger object (initial value 0), var2 is 12 (a memory offset, whose exact value we need not worry about), and var4 is 1 (the amount by which count is about to be incremented).

The AtomicInteger object and the memory offset are used to read the current value from main memory:

var5 = this.getIntVolatile(var1, var2);

So var5 is 0, and the program then evaluates:

!this.compareAndSwapInt(var1, var2, var5, var5 + var4)

compareAndSwapInt() is a native method that performs a compare-and-swap: if the value at var1's memory offset still equals var5 (the value just read from main memory), it writes var5 + var4 back and returns true. Here nothing has changed, so the swap succeeds; !true is false, the loop ends, getAndAddInt() returns the old value 0, and incrementAndGet() adds 1 and returns 1.
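The same compare-and-swap semantics are exposed through AtomicInteger's public compareAndSet() method; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);

        // Expected value matches the current value (0), so the swap succeeds
        boolean first = value.compareAndSet(0, 1);

        // Expected value no longer matches (the value is now 1), so this swap fails
        boolean second = value.compareAndSet(0, 2);

        System.out.println(first + " " + second + " " + value.get()); // true false 1
    }
}
```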

This is the uncontended case, but under concurrency things are quite different. Suppose thread A executes the getAndAddInt() method:

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while(!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

Thread A reads var5 = 0 (var1 is the shared AtomicInteger). Before A can execute its compare-and-swap, thread B runs: B also reads 0, its comparison succeeds, and the value of var1 becomes 1. Now it is thread A's turn: the current value of var1 (1) no longer equals A's var5 (0), so A's swap fails and A re-enters the loop. This time A reads var5 = 1; the comparison succeeds, so A increments var1 to 2. If yet another thread had modified var1 in main memory before A re-read var5, A's swap would fail again and the loop would repeat.
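This retry loop can be reproduced with the public API instead of Unsafe; a sketch using AtomicInteger.compareAndSet():

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SpinAddDemo {
    // Re-implementation of the getAndAddInt() retry loop using the public
    // compareAndSet() API instead of Unsafe
    static int getAndAdd(AtomicInteger atomic, int delta) {
        int current;
        do {
            current = atomic.get();                  // read the latest value
        } while (!atomic.compareAndSet(current, current + delta)); // retry on conflict
        return current;                              // return the old value, like getAndAddInt()
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Thread[] threads = new Thread[50];
        for (int i = 0; i < 50; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    getAndAdd(count, 1);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(count.get()); // 50000
    }
}
```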

This is optimistic locking implemented with spinning: no lock is taken, which saves the cost of thread scheduling, but under heavy contention a thread may spin for a long time before its swap succeeds.

Hand-writing a spin lock

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class LockDemo {

    private AtomicReference<Thread> atomicReference = new AtomicReference<>();

    public void lock() {
        // Get the current thread object
        Thread thread = Thread.currentThread();
        // Spin wait until the reference is free, then install this thread as the owner
        while (!atomicReference.compareAndSet(null, thread)) {
        }
    }

    public void unlock() {
        // Get the current thread object; only the owning thread can release the lock
        Thread thread = Thread.currentThread();
        atomicReference.compareAndSet(thread, null);
    }

    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        LockDemo lockDemo = new LockDemo();
        List<Thread> threadList = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Thread thread = new Thread(() -> {
                lockDemo.lock();
                try {
                    for (int j = 0; j < 1000; j++) {
                        count++;
                    }
                } finally {
                    lockDemo.unlock();
                }
            });
            thread.start();
            threadList.add(thread);
        }
        // Wait for the threads to complete
        for (Thread thread : threadList) {
            thread.join();
        }
        System.out.println(count);
    }
}

The AtomicReference starts out holding null, so the first thread to call lock() successfully installs its own Thread object as the owner. Any other thread that calls lock() is stuck spinning in the loop, because compareAndSet(null, thread) keeps failing while the reference is non-null. Only when the owning thread finishes its ++ operations and calls unlock() is the reference reset to null, at which point one of the waiting threads can exit the loop and take the lock.

The CAS mechanism lets us simulate locking without actually taking a lock, but its drawbacks are obvious:

  • Spin-waiting consumes CPU resources
  • It can only guarantee atomicity for a single variable
  • It is subject to the ABA problem
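On the last point: the JDK's AtomicStampedReference pairs the value with a version stamp so that an A -> B -> A change is still detected. A minimal sketch (small Integer values are used deliberately so that autoboxing reuses cached objects, since the comparison is by reference):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value 1 at stamp 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);

        int stamp = ref.getStamp(); // this thread observes value 1 at stamp 0

        // Meanwhile another thread changes 1 -> 2 -> 1, bumping the stamp each time
        ref.compareAndSet(1, 2, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(2, 1, ref.getStamp(), ref.getStamp() + 1);

        // A plain CAS on the value alone would succeed (the value is 1 again),
        // but the stale stamp (0) makes this stamped CAS fail, exposing the ABA change
        boolean swapped = ref.compareAndSet(1, 3, stamp, stamp + 1);
        System.out.println(swapped + " " + ref.getReference()); // false 1
    }
}
```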