This is the 5th day of my participation in the August More Text Challenge

Three characteristics of concurrency: visibility, ordering, atomicity

Visibility

volatile

```java
package com.mashibing.juc.c_001_00_Visibility;

import com.mashibing.util.SleepHelper;

/**
 * volatile: in the code below, running is stored in the object on the heap.
 * When thread t1 starts, it copies the value of running into its own working
 * memory; t1 has no idea when the main thread changes running, so it never
 * stops. Marking running as volatile forces every thread to read it from the heap.
 *
 * volatile does NOT make concurrent modification of running safe;
 * volatile is not a replacement for synchronized.
 */
public class T01_HelloVolatile {
    private static /*volatile*/ boolean running = true;

    private static void m() {
        System.out.println("m start");
        while (running) {
            //System.out.println("hello");
        }
        System.out.println("m end!");
    }

    public static void main(String[] args) {

        new Thread(T01_HelloVolatile::m, "t1").start();

        SleepHelper.sleepSeconds(1);

        running = false;
    }
}
```

Let's look at the underlying code of `System.out.println("hello")`:

```java
public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}
```

Because println() uses synchronized, it also provides visibility to a certain extent: entering and exiting the monitor resynchronizes working memory with main memory. But it lacks volatile's immediacy; it may take several loop iterations before working memory and main memory become consistent.

A volatile reference type (including an array) only guarantees the visibility of the reference itself, not the visibility of the fields of the object it points to.

```java
package com.mashibing.juc.c_001_00_Visibility;

import com.mashibing.util.SleepHelper;

public class T02_VolatileReference {
    private static class A {
        boolean running = true;

        void m() {
            System.out.println("m start");
            while (running) {
            }
            System.out.println("m end!");
        }
    }

    private volatile static A a = new A();

    public static void main(String[] args) {
        new Thread(a::m, "t1").start();
        SleepHelper.sleepSeconds(1);
        a.running = false;
    }
}
```
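When the field itself must be visible across threads, the usual fix is to mark the field volatile inside `A`, instead of (or in addition to) the reference. A minimal self-contained sketch (class name hypothetical; `TimeUnit.sleep` stands in for the course's `SleepHelper`):

```java
import java.util.concurrent.TimeUnit;

class T02_VolatileFieldFix {
    private static class A {
        volatile boolean running = true;  // volatile on the field itself, not only the reference

        void m() {
            System.out.println("m start");
            while (running) {
            }
            System.out.println("m end!");
        }
    }

    /** Returns true when t1 actually terminates after the flag is flipped. */
    static boolean demo() throws InterruptedException {
        A a = new A();
        Thread t1 = new Thread(a::m, "t1");
        t1.start();
        TimeUnit.MILLISECONDS.sleep(500);
        a.running = false;    // this write is now guaranteed visible to t1
        t1.join(5000);        // t1 exits its loop promptly
        return !t1.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("terminated: " + demo());
    }
}
```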

CPU caches

From the CPU's compute unit (ALU) outward to main memory, data passes through a multi-level cache hierarchy: registers → L1 → L2 → L3 → main memory.

Cache line

Memory is read in blocks of 64 bytes (one cache line).

This exploits the principle of program locality to improve efficiency, making full use of the bus and CPU pins' ability to fetch more data in one transfer.

Why is a cache line 64 bytes?

The larger the cache line, the better it exploits spatial locality, but the slower each transfer.

The smaller the cache line, the less it exploits spatial locality, but the faster each transfer.

Industry practice settled on a compromise value, currently 64 bytes.

Cache line alignment

  • Cache line alignment

    A 64-byte cache line is the basic unit the CPU keeps coherent; isolating a hot variable on its own cache line avoids false sharing and can be noticeably more efficient

    The Disruptor framework is a well-known user of this trick

  • A programming trick for cache line alignment:

```java
package com.mashibing.juc.c_001_02_FalseSharing;

import java.util.concurrent.CountDownLatch;

public class T01_CacheLinePadding {
    public static long COUNT = 10_0000_0000L;

    private static class T {
        //private long p1, p2, p3, p4, p5, p6, p7;
        public long x = 0L;
        //private long p9, p10, p11, p12, p13, p14, p15;
    }

    public static T[] arr = new T[2];

    static {
        arr[0] = new T();
        arr[1] = new T();
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(2);

        Thread t1 = new Thread(() -> {
            for (long i = 0; i < COUNT; i++) {
                arr[0].x = i;
            }
            latch.countDown();
        });

        Thread t2 = new Thread(() -> {
            for (long i = 0; i < COUNT; i++) {
                arr[1].x = i;
            }
            latch.countDown();
        });

        final long start = System.nanoTime();
        t1.start();
        t2.start();
        latch.await();
        System.out.println((System.nanoTime() - start) / 100_0000); // milliseconds
    }
}
```

Uncomment the padding fields and run again: with 7 longs on each side of x, the two hot fields land on different cache lines and the program finishes noticeably faster.
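To see the padding pay off without editing and re-running the class above, both variants can be timed in one program. A hedged, self-contained sketch (class names hypothetical; `x` is made volatile here so the JIT cannot cache it in a register, and the count is reduced to 10^7 to keep the run short; actual timings depend on the CPU and the JVM's field layout, which is not guaranteed):

```java
import java.util.concurrent.CountDownLatch;

class T02_PaddingCompare {
    static final long COUNT = 1000_0000L;   // 10^7 writes per thread

    static class Plain {                    // two adjacent Plain objects tend to share a cache line
        volatile long x;
    }

    static class Padded {                   // 7 longs on each side keep x on its own 64-byte line
        long p1, p2, p3, p4, p5, p6, p7;
        volatile long x;
        long q1, q2, q3, q4, q5, q6, q7;
    }

    // Runs the two bodies on two threads and returns elapsed milliseconds.
    static long time(Runnable a, Runnable b) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);
        Thread t1 = new Thread(() -> { a.run(); latch.countDown(); });
        Thread t2 = new Thread(() -> { b.run(); latch.countDown(); });
        long start = System.nanoTime();
        t1.start();
        t2.start();
        latch.await();
        return (System.nanoTime() - start) / 100_0000;
    }

    public static void main(String[] args) throws InterruptedException {
        Plain[] plain = { new Plain(), new Plain() };
        Padded[] padded = { new Padded(), new Padded() };

        long unpadded = time(() -> { for (long i = 0; i < COUNT; i++) plain[0].x = i; },
                             () -> { for (long i = 0; i < COUNT; i++) plain[1].x = i; });
        long withPad  = time(() -> { for (long i = 0; i < COUNT; i++) padded[0].x = i; },
                             () -> { for (long i = 0; i < COUNT; i++) padded[1].x = i; });
        System.out.println("unpadded: " + unpadded + " ms, padded: " + withPad + " ms");
    }
}
```

On most hardware the padded version is several times faster; JDK 8+ also offers a `@Contended` annotation for the same purpose, though it is JDK-internal and gated behind a VM flag.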

Cache coherence protocol: MESI

False sharing

Variables A and B sit in the same cache line. When one core modifies A, the coherence protocol invalidates the whole line, so the core using B must re-read it from memory even though B itself never changed. This needless synchronization is what the padding above eliminates.

Ordering

CPU out-of-order execution

Why is it out of order?

In short, it’s about efficiency.

Single-threaded as-if-serial

Within a single thread, two statements with no dependency between them may not execute in program order.

Single-threaded reordering must still guarantee that the final result is consistent.

As-if-serial: the execution looks serial (from that single thread's point of view).
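A minimal sketch of what as-if-serial means in code (names hypothetical):

```java
class AsIfSerial {
    static int demo() {
        int a = 1 + 1;   // independent of b: the CPU/JIT is free to execute
        int b = 2 + 2;   // these two statements in either order
        return a + b;    // depends on both, so it must come last;
                         // a single thread always observes 6
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```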

For example:

Object creation can be reordered, letting the `this` reference escape half-initialized

```java
public class T03_ThisEscape {

    private int num = 8;

    public T03_ThisEscape() {
        new Thread(() -> System.out.println(this.num)).start();
    }

    public static void main(String[] args) throws Exception {
        new T03_ThisEscape();
        System.in.read();
    }
}
```

If the new thread dereferences `this` while the T03_ThisEscape object is only half-initialized, num may still hold its default value, so the program can print 0 instead of 8.

The fix: do not start a new thread inside a constructor; start it from another method after construction completes.
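A sketch of that advice (class and method names hypothetical): construct the object fully, then start the thread from a factory method:

```java
class T03_SafePublish {
    private final int num = 8;

    private T03_SafePublish() {
        // no thread started here, so this cannot escape half-initialized
    }

    int getNum() { return num; }

    static T03_SafePublish create() {
        T03_SafePublish obj = new T03_SafePublish();  // fully constructed first
        new Thread(() -> System.out.println(obj.getNum()), "t1").start();
        return obj;                                   // the thread always sees num == 8
    }

    public static void main(String[] args) {
        create();
    }
}
```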

Atomicity

Atomicity of thread operations

Let’s start with a simple little program:

```java
import java.util.concurrent.CountDownLatch;

public class T00_IPlusPlus {
    private static long n = 0L;

    public static void main(String[] args) throws Exception {

        Thread[] threads = new Thread[100];
        CountDownLatch latch = new CountDownLatch(threads.length);

        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    //synchronized (T00_IPlusPlus.class) {
                    n++;
                    //}
                }
                latch.countDown();
            });
        }

        for (Thread t : threads) {
            t.start();
        }
        latch.await();
        System.out.println(n);
    }
}
```

Because the 100 threads race on n, the program above falls short of the expected result (1,000,000).

Locking (uncomment the synchronized block) solves the problem: synchronized guarantees visibility and atomicity of the data, but not ordering; statements inside the block with no dependency between them may still be reordered.
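For this particular counter, an alternative to synchronized is the CAS-based `java.util.concurrent.atomic.AtomicLong`. A self-contained sketch (class name hypothetical):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

class T00_AtomicIPlusPlus {
    private static final AtomicLong n = new AtomicLong(0);

    static long count() throws InterruptedException {
        Thread[] threads = new Thread[100];
        CountDownLatch latch = new CountDownLatch(threads.length);

        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    n.incrementAndGet();   // one atomic read-modify-write, no lock
                }
                latch.countDown();
            });
        }

        for (Thread t : threads) t.start();
        latch.await();
        return n.get();                    // 100 * 10000 = 1,000,000, every run
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count());
    }
}
```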

The nature of locking

The essence of a lock is to serialize (part of) a concurrent program.

Note that serialization does not mean other threads never get a chance to run. They may well be scheduled, fail to grab the lock, and fall back to Blocked or Waiting (see synchronized lock upgrading).

The threads must contend for the same lock.

```java
import com.mashibing.util.SleepHelper;

public class T00_01_WhatIsLock {
    private static Object o = new Object();

    public static void main(String[] args) {
        Runnable r = () -> {
            synchronized (o) {   // all three threads contend for the same lock object o
                System.out.println(Thread.currentThread().getName() + " start!");
                SleepHelper.sleepSeconds(2);
                System.out.println(Thread.currentThread().getName() + " end!");
            }
        };

        for (int i = 0; i < 3; i++) {
            new Thread(r).start();
        }
    }
}
```

Which statements (instructions) are atomic?

At the CPU level this is a question about assembly: consult the instruction-set manual!

The 8 atomic operations of the Java Memory Model

  1. lock: acts on main memory; marks a variable as exclusively owned by one thread
  2. unlock: acts on main memory; releases a locked variable
  3. read: acts on main memory; transfers a variable's value toward the thread's working memory
  4. load: acts on working memory; puts the value obtained by read into the thread-local copy of the variable
  5. use: acts on working memory; passes the value to the execution engine
  6. assign: acts on working memory; writes a value from the execution engine into the thread-local copy
  7. store: acts on working memory; transfers the value toward main memory for a later write
  8. write: acts on main memory; puts the stored value into the variable
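As an illustration (pseudocode, not actual JVM output), a single `n++` from the earlier counter example decomposes roughly into these operations; when two threads interleave anywhere between `read` and `write`, one update is lost:

```
read   n      // main memory: fetch the variable
load   n      // put the fetched value into the working copy
use    n      // hand the value to the execution engine
assign n+1    // engine writes the incremented value into the working copy
store  n+1    // working memory: transmit the value toward main memory
write  n+1    // main memory: the variable is finally updated
```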

Some basic concepts

Race condition => the contention that occurs when multiple threads access shared data

It leads to data inconsistency: unexpected results under concurrent access

How do you ensure data consistency? --> Thread synchronization (arranging the order in which threads execute)

Monitor --> what Java calls a lock

Critical section --> the code executed while the lock is held

If the critical section executes for a long time and contains many statements, the lock is said to be coarse-grained; conversely, it is fine-grained
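A hypothetical sketch contrasting the two granularities, keeping only the shared update inside the critical section in the fine-grained version:

```java
class LockGranularity {
    private final Object lock = new Object();
    private int count;

    void coarse() {
        synchronized (lock) {   // coarse: non-shared work also holds the lock
            localWork();
            count++;
        }
    }

    void fine() {
        localWork();            // fine: shared-state-free work stays outside the lock
        synchronized (lock) {
            count++;            // only the shared update is serialized
        }
    }

    int get() {
        synchronized (lock) { return count; }
    }

    private void localWork() {
        // stand-in for statements that do not touch shared state
    }
}
```

Shorter critical sections mean less time other threads spend blocked, at the cost of more lock acquisitions; which wins depends on contention.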