This is the second article on ThreadLocal.

In the last article, Yasin introduced what ThreadLocal is and covered its basics.

So what does ThreadLocal do in practice? Today we will use a simple application scenario to show how ThreadLocal can help solve thread-safety problems in multi-threaded code.

It’s a simple counting scenario. Let’s say we want to count how many times an interface is called over a period of time: every time the interface is accessed, the counter is incremented by 1. Let’s start with the simplest, thread-unsafe implementation:

@RestController
@RequestMapping("orders")
public class OrderController {

    private Integer count = 0;
  @GetMapping("/visit")  public Integer visit(a) throws InterruptedException {  count++;  Thread.sleep(100);  return 0;  }   @GetMapping("/stat")  public Integer stat(a) {  return count;  } } Copy the code

Here we assume that each call to this interface takes 100 milliseconds (simulating a synchronous IO operation). Anyone with a little multithreading knowledge can tell that this code is not thread-safe: if multiple threads access the interface at the same time, the data can become inconsistent. Let’s verify that with ab.

10,000 requests in total, 100 concurrent:
$ ab -n 10000 -c 100 localhost:8080/orders/visit

$ curl localhost:8080/orders/stat
9953

We expected the stat call to return 10000, but it returned 9953. Why did this happen? Because count++ is not atomic. The Java memory model is involved here: for this operation, a thread first reads the current value from main memory into its own working memory, then performs the +1, and then writes the result back to main memory.

Java memory model

With multiple threads doing this at once, thread B may read the old value before thread A has written its result back. One of the increments is lost, so the final count ends up smaller than expected.
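To see the lost update without Spring and ab, here is a minimal standalone sketch (the class name and thread counts are made up for illustration) that hammers an unsynchronized counter from many threads:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LostUpdateDemo {

    private static int count = 0; // shared, unsynchronized counter

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        CountDownLatch done = new CountDownLatch(10_000);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> {
                count++;          // read-modify-write: increments can be lost
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        // Usually prints a value less than 10000; the exact number varies per run.
        System.out.println("count = " + count);
    }
}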

So how do we solve this thread-safety problem? There are many options. Let’s try one of the easiest: “locking”. In the last article we talked about several ways to deal with multithreading problems, one of which was “queuing”. Locking follows the queuing idea and is guaranteed to be thread-safe. Let’s look at the effect of adding a lock.

@GetMapping("/visit")
public Integer visit(a) throws InterruptedException {
    Thread.sleep(100);
    this.add();
    return 0;
}  private synchronized void add(a) {  count++; } Copy the code

Run the same test again. This time the result is correct and matches our expectation of 10000.

$ ab -n 10000 -c 100 localhost:8080/orders/visit

$ curl localhost:8080/orders/stat
10000

Is there any other way to be thread-safe?

Earlier we said that count++ is not thread-safe here because multiple threads are competing for the same resource, count. Instead, we can use the idea of “avoidance”: each thread uses only its own resources and never touches anyone else’s, so the thread-safety problem simply disappears.

Let’s use ThreadLocal and modify the code:

@RestController
@RequestMapping("orders")
public class OrderController {

    private static final ThreadLocal<Integer> TL = ThreadLocal.withInitial(() -> 0);
  @GetMapping("/visit")  public Integer visit(a) throws InterruptedException {  Thread.sleep(100);  TL.set(TL.get() + 1);  return 0;  }   @GetMapping("/stat")  public Integer stat(a) {  return TL.get();  } } Copy the code

Let’s do the same thing with ab.

$ ab -n 10000 -c 100 localhost:8080/orders/visit

$ curl localhost:8080/orders/stat
99

When we call the statistics interface, we only get the count held by the thread that happens to handle that request. So how do we get the sum across all the threads?

ThreadLocal doesn’t provide this out of the box; we have to write it ourselves. The idea is simple: put a reference to each thread’s value into a shared container, and iterate over that container when we need the total.

First, let’s try using a HashSet to hold the values. The important thing to note is that we need to lock when adding a value during initialization, because HashSet is not thread-safe.

@RestController
@RequestMapping("orders")
public class OrderController {

    private static final Set<Integer> SET = new HashSet<>();
    private static final ThreadLocal<Integer> TL = ThreadLocal.withInitial(() -> {
        Integer value = 0;
        addSet(value);
        return value;
    });

    private static synchronized void addSet(Integer value) {
        SET.add(value);
    }

    @GetMapping("/visit")
    public Integer visit() throws InterruptedException {
        Thread.sleep(100);
        TL.set(TL.get() + 1);
        return 0;
    }

    @GetMapping("/stat")
    public Integer stat() {
        return SET.stream().reduce(Integer::sum).orElse(-1);
    }
}
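As an aside, a thread-safe set would let us drop the explicit synchronized method. A sketch of that alternative (using ConcurrentHashMap.newKeySet(); the article keeps the synchronized version):

// Alternative: a concurrent set, so addSet no longer needs synchronized.
// Requires: import java.util.concurrent.ConcurrentHashMap;
private static final Set<Integer> SET = ConcurrentHashMap.newKeySet();

private static void addSet(Integer value) {
    SET.add(value);
}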

But when we test the HashSet version above, it doesn’t work: stat always returns 0. Why is that?

Because Integer is special: it is the immutable wrapper class for the primitive int, and it caches instances for small values (-128 to 127). The +1 operation does not modify the existing object; autoboxing creates a brand-new Integer for the ThreadLocal to point at, while the set still holds the original 0. So an Integer cannot be used as a mutable shared reference here.
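A quick sketch of what is going on (plain JDK behavior; the class name is made up for illustration):

public class IntegerBoxingDemo {
    public static void main(String[] args) {
        Integer a = 0;
        Integer b = a;              // a and b point at the same cached Integer object
        a = a + 1;                  // autoboxing creates a NEW Integer; b still points at 0
        System.out.println(b);      // 0
        System.out.println(a == b); // false: they are now different objects

        // The small-value cache (-128 to 127) mentioned above:
        System.out.println(Integer.valueOf(127) == Integer.valueOf(127)); // true, cached
        System.out.println(Integer.valueOf(128) == Integer.valueOf(128)); // false, not cached
    }
}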

So how do we solve this? Easy: wrap the value in a simple mutable holder class.

public class Val<T> {
    T v;

    public T getV() {
        return v;
    }

    public void setV(T v) {
        this.v = v;
    }
}

@RestController
@RequestMapping("orders")
public class OrderController {

    private static final Set<Val<Integer>> SET = new HashSet<>();

    private static final ThreadLocal<Val<Integer>> TL = ThreadLocal.withInitial(() -> {
        Val<Integer> val = new Val<>();
        val.setV(0);
        addSet(val);
        return val;
    });

    private static synchronized void addSet(Val<Integer> val) {
        SET.add(val);
    }

    @GetMapping("/visit")
    public Integer visit() throws InterruptedException {
        Thread.sleep(100);
        Val<Integer> val = TL.get();
        val.setV(val.getV() + 1);
        return 0;
    }

    @GetMapping("/stat")
    public Integer stat() {
        return SET.stream().map(Val::getV).reduce(Integer::sum).orElse(-1);
    }
}

Then we tested it again and found that we got what we expected.

$ ab -n 10000 -c 100 localhost:8080/orders/visit

$ curl localhost:8080/orders/stat
10000

Some of you might be wondering: how is this any better than just using synchronized or an atomic class directly?
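For comparison, here is roughly what the atomic-class version would look like (a sketch using java.util.concurrent.atomic.AtomicInteger, not code from the project above):

import java.util.concurrent.atomic.AtomicInteger;

@RestController
@RequestMapping("orders")
public class OrderController {

    private final AtomicInteger count = new AtomicInteger(0);

    @GetMapping("/visit")
    public Integer visit() throws InterruptedException {
        Thread.sleep(100);
        count.incrementAndGet(); // CAS-based atomic increment, no explicit lock
        return 0;
    }

    @GetMapping("/stat")
    public Integer stat() {
        return count.get();
    }
}

Note that under heavy contention incrementAndGet still makes threads retry against each other (CAS spinning), whereas ThreadLocal avoids the contention altogether.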

Locks and atomic classes follow the queuing idea, while ThreadLocal follows the avoidance idea: it sidesteps thread contention by design, so it can be more efficient. Queuing can also be dangerous: if your critical section is time-consuming, there is a good chance it will block a large number of threads and make the system unusable.

Critical section: a section of code that accesses a resource shared by multiple threads and that only one thread may execute at a time.

In this example we don’t really see ThreadLocal’s advantage, because the contention is trivial: the critical section only touches a single Integer variable. But when the critical section is expensive, ThreadLocal’s advantage shows. Try moving the 100 ms sleep inside the synchronized method above (sketched below) and watch the throughput collapse.
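A sketch of that experiment (a deliberately bad variant for illustration, not the recommended code): with the sleep inside the lock, every request has to wait its turn for the full 100 ms.

@GetMapping("/visit")
public Integer visit() throws InterruptedException {
    this.add();
    return 0;
}

private synchronized void add() throws InterruptedException {
    Thread.sleep(100); // the slow "IO" now sits inside the critical section
    count++;           // requests are fully serialized: roughly 10 per second, no matter the concurrency
}

With 100 concurrent clients, almost all of them just sit blocked waiting for the lock.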

ThreadLocal doesn’t necessarily eliminate every thread-safety issue: in this case we still need the synchronized lock in addSet during initialization. But it narrows the scope of the thread-safety problem and improves performance.

So do you get the hang of using ThreadLocal? What other scenarios can you use ThreadLocal for? In the next article, we’ll take a look at the source code of the major frameworks to see how ThreadLocal is used.

About the author

I’m Yasin, a good-looking and interesting programmer.

WeChat official account: made up a process

Personal website: https://yasinshaw.com

Follow my official account and grow with me ~
