Synchronized

Preface

In layman's terms, thread-safety problems have two preconditions:

  • There is shared data (also called a critical resource)
  • Multiple threads operate on that shared data

In more academic terms, thread-safety problems have two main causes:

  • Memory-visibility problems caused by the split between main memory and each thread's working memory
  • Problems caused by instruction reordering, which is why the happens-before rules must be kept in mind

When multiple threads operate on shared data, we must make sure that only one thread operates on it at any given time, while the other threads wait until that thread has finished. This mechanism is called a mutual-exclusion lock (mutex): while the current thread is accessing the shared data it holds the lock, and the other threads can only proceed after the current thread finishes processing and releases it.

The synchronized keyword guarantees two things: (1) only one thread can execute a given method or code block at a time (which matters for methods or blocks that operate on shared data), and (2) changes made by one thread (mainly changes to shared data) become visible to other threads. In other words, it guarantees both mutual exclusion and visibility, so as far as visibility is concerned it can stand in for volatile.

Note: these are study notes collected at www.javanode.cn; corrections for anything inappropriate are welcome, thank you!

The main ways synchronized is used

The three ways of applying synchronized are as follows:

  • On an instance method: the lock is the current instance; a thread must acquire the lock on that instance before entering the synchronized code.
  • On a static method: the lock is the Class object of the current class; a thread must acquire the lock on the Class object before entering the synchronized code.
  • On a code block: the lock is the object specified in the parentheses; a thread must acquire the lock on that object before entering the block (a combined sketch follows this list).
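Before walking through each mode in detail, here is a minimal, illustrative sketch that puts the three forms side by side (the class and method names here are made up for illustration):

public class SyncForms {
    private static int counter = 0;

    // 1. Instance method: the lock is the current instance (this)
    public synchronized void instanceMethod() {
        counter++;
    }

    // 2. Static method: the lock is the Class object (SyncForms.class)
    public static synchronized void staticMethod() {
        counter++;
    }

    // 3. Code block: the lock is whatever object appears in the parentheses
    public void blockMethod(Object lock) {
        synchronized (lock) {
            counter++;
        }
    }
}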

Using synchronized

Applied to instance methods

Here synchronized modifies an instance method (not a static method) of an instance object:

public class AccountingSync implements Runnable {
    // Shared resource (critical resource)
    static int i = 0;

    /** synchronized modifies an instance method */
    public synchronized void increase() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 1000000; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AccountingSync instance = new AccountingSync();
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
    /** Output: 2000000 */
}

Note that synchronized modifies the instance method increase, so the lock is the current instance object. Keep in mind that in Java any object can serve as a synchronization lock.

It is also important to realize that while one thread is executing an object's synchronized instance method, other threads cannot enter any of that object's other synchronized instance methods. After all, an object has only one lock; once a thread holds it, the others cannot acquire it, so the object's other synchronized instance methods are off limits, although other threads can still call the object's non-synchronized methods (a short sketch follows below). Of course, if thread A calls the synchronized method f1 of obj1 (lock: obj1) while thread B calls the synchronized method f2 of obj2 (lock: obj2), that is allowed, because the two instances use different locks. As long as the two threads do not operate on shared data, thread safety is still guaranteed.
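To make the point about non-synchronized methods concrete, here is a small illustrative sketch (the class and method names are made up): while one thread holds the lock inside syncMethod, another thread can still call plainMethod on the same object, because plainMethod never tries to acquire the lock.

public class MixedMethods {
    public synchronized void syncMethod() throws InterruptedException {
        Thread.sleep(2000); // holds the lock on 'this' for two seconds
    }

    public void plainMethod() {
        System.out.println("not blocked: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        MixedMethods obj = new MixedMethods();
        new Thread(() -> {
            try {
                obj.syncMethod();
            } catch (InterruptedException ignored) {
            }
        }).start();
        new Thread(obj::plainMethod).start(); // prints immediately; it does not wait for syncMethod
    }
}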

Unfortunately, if two threads are operating on shared data, thread-safety may not be guaranteed, as the following code demonstrates

public class AccountingSyncBad implements Runnable {
    static int i = 0;

    public synchronized void increase() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 1000000; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // new instance #1
        Thread t1 = new Thread(new AccountingSyncBad());
        // new instance #2
        Thread t2 = new Thread(new AccountingSyncBad());
        t1.start();
        t2.start();
        // join() makes the main thread wait until t1/t2 terminate
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

The difference here is that we create two AccountingSyncBad instances and start two different threads operating on the shared variable i. Unfortunately, the result is 1452317 rather than 2000000, because the code contains a serious mistake: although increase is modified with synchronized, the two threads work on two different instance objects and therefore on two different instance locks. t1 and t2 each enter their own lock, so mutual exclusion is never established and thread safety cannot be guaranteed. The way out of this dilemma is to make increase a static synchronized method; the lock is then the Class object of the current class, which is unique no matter how many instances are created. Let's look at a static increase method with synchronized applied.

Applied to static methods

When synchronized is applied to a static method, the lock is the Class object of the current class. Since static members belong to the class rather than to any instance, concurrent access to them can be controlled through the class lock. Note that if thread A calls a non-static synchronized method of an instance while thread B calls a static synchronized method of that instance's class, no mutual exclusion occurs: the static synchronized method locks the Class object, the non-static synchronized method locks the instance, and the two locks are different. See the code below.

public class AccountingSyncClass implements Runnable {
    static int i = 0;

    /**
     * Applied to a static method: the lock is the Class object,
     * i.e. the class object corresponding to AccountingSyncClass
     */
    public static synchronized void increase() {
        i++;
    }

    /**
     * Non-static: uses a different lock, so it is not mutually
     * exclusive with the static synchronized method
     */
    public synchronized void increase4Obj() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 1000000; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // two new instances
        Thread t1 = new Thread(new AccountingSyncClass());
        Thread t2 = new Thread(new AccountingSyncClass());
        // start the threads
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

Because synchronized modifies the static increase method, its lock object is the Class object of the current class, unlike the instance-method case. Note that increase4Obj in this code is an instance method, so its lock is the current instance object; if it were called by another thread there would be no mutual exclusion with increase (the lock objects differ, after all), but be aware that this would reintroduce a thread-safety problem, since both methods operate on the shared static variable i.

Applied to code blocks

Besides modifying instance methods and static methods, synchronized can also be used on a code block. In some cases the method body is large and contains time-consuming operations while only a small part of the code actually needs synchronization; synchronizing the whole method would then do more harm than good. In that situation we can wrap only the code that needs synchronization in a synchronized block, so the entire method does not have to be synchronized. An example of a synchronized block:

public class AccountingSync implements Runnable {
    static AccountingSync instance = new AccountingSync();
    static int i = 0;

    @Override
    public void run() {
        // other time-consuming operations omitted...
        // use a synchronized block, with instance as the lock object, to protect i
        synchronized (instance) {
            for (int j = 0; j < 1000000; j++) {
                i++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

As the code shows, synchronized is applied to a given instance object: the current instance is the lock object. Every time a thread enters the block it must hold that instance's lock; if another thread currently holds it, the new thread must wait. This guarantees that only one thread executes the i++ operation at a time. Naturally, any other object could serve as the lock instead of instance.

We can also use this object (representing the current instance) or the current class object as the lock, as follows:

// this: the current instance object as the lock
synchronized (this) {
    for (int j = 0; j < 1000000; j++) {
        i++;
    }
}

// the Class object as the lock
synchronized (AccountingSync.class) {
    for (int j = 0; j < 1000000; j++) {
        i++;
    }
}

Conclusion

Whether synchronized locks an object or a class, it is essentially always locking an object; a class lock is simply a lock on the class's single Class object. Because an instance lock and the class lock are different objects, a synchronized instance method and a synchronized static method of the same class do not exclude each other and can run at the same time (a short sketch follows below).
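A quick illustrative sketch of that last point (Demo is a hypothetical class made up for this example): the instance lock (this) and the class lock (Demo.class) are different objects, so a synchronized instance method and a synchronized static method can run in parallel on the same instance.

public class Demo {
    public synchronized void instanceSync() throws InterruptedException {
        Thread.sleep(1000); // holds the lock on 'this'
    }

    public static synchronized void classSync() throws InterruptedException {
        Thread.sleep(1000); // holds the lock on Demo.class
    }

    public static void main(String[] args) {
        Demo d = new Demo();
        new Thread(() -> {
            try { d.instanceSync(); } catch (InterruptedException ignored) { }
        }).start();
        new Thread(() -> {
            try { Demo.classSync(); } catch (InterruptedException ignored) { }
        }).start();
        // Both threads finish after roughly one second rather than two,
        // because the two locks do not exclude each other.
    }
}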

Underlying semantic principles of synchronized

In the JVM, an object is laid out in memory in three areas: the object header, instance data, and alignment padding. The layout is as follows:

The object header consists of the Mark Word, a type pointer to the object's class metadata (Class Metadata Address), and, for arrays, the array length.

Length (bits)   Header structure          Description
32/64 bit       Mark Word                 Stores the object's hashCode, lock information, generational age, GC flags, etc.
32/64 bit       Class Metadata Address    Type pointer to the object's class metadata; the JVM uses it to determine which class the object is an instance of
32/64 bit       Array length              When the object is an array, stores the length of the array

By default, the Mark Word stores the object's hashCode, generational age, lock flag bits, and so on:

Lock state   25 bit              4 bit                      1 bit (biased lock)   2 bit (lock flag)
Unlocked     Object's hashCode   Object generational age    0                     01

Because the object header stores information that is unrelated to the data defined by the object itself, it is an extra storage cost. The Mark Word is therefore designed as a non-fixed data structure that reuses its storage space according to the state of the object, so that it can hold as much information as possible in as little space as possible.
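If you want to see the object header for yourself, below is a minimal sketch, assuming the OpenJDK JOL library (org.openjdk.jol:jol-core) is on the classpath; the exact output depends on the JVM version and flags.

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // Prints the object layout, including the header (Mark Word + class pointer)
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        synchronized (obj) {
            // Inside the synchronized block the lock bits in the Mark Word change
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}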

Underlying principles of synchronized code blocks

Synchronization in the Java virtual machine is implemented by entering and exiting a monitor object. This is true both for explicit synchronization (synchronized blocks, which use explicit monitorenter and monitorexit instructions) and for implicit synchronization (method-level synchronization). In the Java language, the most common form is probably the synchronized method. A synchronized method is not implemented with monitorenter and monitorexit instructions; instead, the method-invocation instruction implicitly checks the method's ACC_SYNCHRONIZED flag in the runtime constant pool.

Let's drill down to the bytecode level. Take the following code:

public class SynchronizedDemo2 {
    Object object = new Object();
    public void method1() {
        synchronized (object) {

        }
    }
}
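Compiling this class and inspecting it with javap -c gives bytecode roughly like the following (a sketch; the exact constant-pool indexes and offsets may differ on your JDK):

public void method1();
  Code:
     0: aload_0
     1: getfield      #3   // Field object:Ljava/lang/Object;
     4: dup
     5: astore_1
     6: monitorenter         // enter the monitor of 'object'
     7: aload_1
     8: monitorexit          // normal exit releases the monitor
     9: goto          17
    12: astore_2
    13: aload_1
    14: monitorexit          // extra monitorexit on the exception path
    15: aload_2
    16: athrow
    17: return
  Exception table:
     from    to  target type
         7     9     12   any
        12    15     12   any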

From the bytecode you can see that the synchronized block is implemented with the monitorenter and monitorexit instructions. To execute a synchronized block, the thread first executes monitorenter and exits with monitorexit. The key point is that synchronized must acquire the object's monitor: when monitorenter is executed, the current thread tries to acquire the monitor of objectref (the lock object). If the monitor's entry counter is 0, the thread acquires the monitor, sets the counter to 1, and now owns the lock. Synchronized is inherently reentrant: if the current thread already owns objectref's monitor, it may re-enter it (more on reentrancy later), and the counter is incremented by one. If another thread already owns the monitor, the current thread blocks until the owning thread completes and executes monitorexit, which releases the monitor and sets the counter back to 0, giving other threads a chance to acquire it. The compiler guarantees that, no matter how the method completes, normally or abnormally, every monitorenter executed in the method is paired with a matching monitorexit. To ensure this pairing when the method completes abnormally, the compiler automatically generates an exception handler that claims to handle any exception; its sole purpose is to execute monitorexit. That is also why the bytecode contains an extra monitorexit instruction: it is the one executed to release the monitor when an exception ends the block.

Underlying principles of synchronized methods

Method-level synchronization is implicit: it is not controlled by bytecode instructions but is implemented as part of method invocation and return. The JVM distinguishes synchronized methods by the ACC_SYNCHRONIZED access flag in the method_info structure of the class file. When a method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED flag is set; if it is, the executing thread must hold the monitor (the term used in the virtual machine specification) before executing the method, and the monitor is released when the method completes, whether normally or abnormally. While the thread holds the monitor, no other thread can acquire the same monitor. If an exception is thrown during execution of a synchronized method and is not handled inside the method, the monitor held by the method is automatically released as the exception propagates out of the method.

// Method-level synchronization is implicit
public class SyncMethod {

    public int i;

    public synchronized void syncTask() {
        i++;
    }
}
  ## Decompiled bytecode
  public synchronized void syncTask();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=3, locals=1, args_size=1
         0: aload_0
         1: dup
         2: getfield      #2 // Field i:I
         5: iconst_1
         6: iadd
         7: putfield      #2 // Field i:I
        10: return
      LineNumberTable:
        line 9: 0
        line 10: 10
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0      11     0  this   Lcn/javanode/concurrent/key/synchronizedDesc/SyncMethod;
}

From the bytecode you can see that a synchronized method has no monitorenter or monitorexit instructions; instead it carries the ACC_SYNCHRONIZED flag, which marks it as a synchronized method. The JVM uses this access flag to decide whether a method is declared synchronized and, if so, to perform the corresponding synchronized invocation. That is the basic principle behind synchronized for both synchronized code blocks and synchronized methods. It is also worth noting that in early versions of Java, synchronized was a heavyweight lock and therefore inefficient: the monitor relies on the underlying operating system's mutex lock, and blocking or waking a thread requires the operating system to switch between user mode and kernel mode, a transition with relatively high time cost. That is why early synchronized was slow. Java 6 introduced lightweight locks and biased locks to reduce the performance cost of acquiring and releasing locks.

Java virtual machine optimization for Synchronized

There are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock. As lock contention increases, a lock can be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock. The upgrade is one-directional: locks only go from lower to higher states and are never downgraded.

Unlocked

In this state the Mark Word stores:

  • The lock state
  • The object's hashCode
  • The object's generational age (used here by garbage collection)
  • Biased-lock bit: 0 = no, 1 = yes
  • Lock flag bits: 01

Biased locking

Biased locking was introduced in JDK 1.6. In most cases a lock is not contended and is repeatedly acquired by the same thread, so making that thread pay the full acquisition cost every time (which involves CAS operations and takes time) is wasteful. Biased locking was introduced to reduce the cost for that thread. The core idea is that once a thread has acquired the lock, the lock enters biased mode and the Mark Word takes the biased-lock layout; when the same thread requests the lock again, no synchronization work is needed at all, which saves a large number of lock-acquisition operations and improves performance. Biased locking therefore works well when there is no lock contention; after all, it is very likely that the same thread will request the same lock many times in a row. Under heavy contention, however, the bias fails, because the thread requesting the lock is probably different each time; in that situation biased locking does more harm than good. Note that when the bias is revoked, the lock does not immediately become a heavyweight lock; it is first upgraded to a lightweight lock.

Lightweight lock

If biased locking fails, the virtual machine does not immediately upgrade to a heavyweight lock; it first tries an optimization called the lightweight lock (also added in 1.6), and the Mark Word changes to the lightweight-lock layout. Lightweight locks improve performance based on the empirical observation that "for the vast majority of locks, there is no contention during the entire synchronization period." It is important to understand that lightweight locks suit scenarios where threads execute the synchronized block alternately; if the same lock is contended at the same time, the lightweight lock inflates into a heavyweight lock.

Spin locks

When lightweight locking fails, the virtual machine also applies an optimization called spinning, to avoid the thread actually being suspended at the operating-system level. The rationale is that in most cases the lock is not held for very long, so suspending the thread directly may do more harm than good: suspending and resuming a thread requires the operating system to switch between user mode and kernel mode, and that transition is relatively expensive. The spin lock therefore assumes that the current thread will acquire the lock in the near future, so the virtual machine lets the thread execute a few empty busy loops (called spinning), typically not many, perhaps 50 or 100 iterations. If the lock is obtained during those iterations, the thread enters the critical section successfully; if not, the thread is suspended at the operating-system level. Spinning is thus an optimization on top of lightweight locking, and it does improve efficiency in practice. If, in the end, the lock still cannot be acquired, it is upgraded to a heavyweight lock.
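The following is a minimal, purely illustrative sketch of the spinning idea in plain Java. This is not how the JVM implements its spinning; the class, the field, and the spin limit of 100 are made up for illustration.

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockSketch {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        int spins = 0;
        // busy-wait (spin) on a CAS, hoping the current owner releases the lock soon
        while (!locked.compareAndSet(false, true)) {
            if (++spins > 100) {
                // in the JVM the thread would eventually be suspended (heavyweight lock);
                // here we simply yield and keep trying
                Thread.yield();
                spins = 0;
            }
        }
    }

    public void unlock() {
        locked.set(false);
    }
}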

Lock elimination

Lock elimination is another, more thorough, virtual-machine optimization. During JIT compilation (which can be simply understood as compiling a piece of code the first time it is about to be executed, also known as just-in-time compilation), the Java virtual machine scans the surrounding context and removes locks for which no shared-resource contention is possible, thereby eliminating unnecessary locking. For example, StringBuffer's append is a synchronized method, but in the add method below the StringBuffer is a local variable that cannot be used by other threads, so no contention over a shared resource can occur and the JVM automatically eliminates its internal lock.

/**
 * Lock elimination example
 */
public class StringBufferRemoveSync {

    public void add(String str1, String str2) {
        // StringBuffer is thread-safe, but sb is used only inside this method
        // and cannot be referenced by other threads, so it is not a shared resource;
        // the JVM automatically eliminates its internal lock
        StringBuffer sb = new StringBuffer();
        sb.append(str1).append(str2);
    }

    public static void main(String[] args) {
        StringBufferRemoveSync rmsync = new StringBufferRemoveSync();
        for (int i = 0; i < 10000000; i++) {
            rmsync.add("abc", "123");
        }
    }
}

Compare the advantages and disadvantages of locks

  • Biased lock — Advantage: locking and unlocking require no extra CAS operations and add almost no cost; compared with running the same method without synchronization the gap is only on the order of nanoseconds. Disadvantage: if there is lock contention between threads, there is the extra cost of revoking the bias. Usage scenario: only one thread ever accesses the synchronized block.
  • Lightweight lock — Advantage: competing threads do not block, which improves response time. Disadvantage: a thread that never wins the lock contention burns CPU while spinning. Usage scenario: response time matters and the synchronized block executes very quickly.
  • Heavyweight lock — Advantage: competing threads do not spin, so they do not consume CPU. Disadvantage: threads block and response time is slow; in heavily multithreaded code, frequent acquisition and release of the lock carries a huge performance cost. Usage scenario: throughput matters and the synchronized block takes a relatively long time to execute.

Supplementary knowledge

1. Reentrancy of synchronized

In terms of mutual-exclusion lock design, a thread that tries to operate on a critical resource whose object lock is held by another thread will block. But when a thread requests an object lock that it already holds itself, the request succeeds: that is a reentrant lock. In Java, the built-in lock behind synchronized is reentrant, so when a thread calls a synchronized method of an object and, inside that method body, calls another synchronized method of the same object, in other words acquires the object lock and then requests it again, the call is allowed. That is the reentrancy of synchronized. For example:

public class AccountingSync implements Runnable {
    static AccountingSync instance = new AccountingSync();
    static int i = 0;
    static int j = 0;

    @Override
    public void run() {
        for (int k = 0; k < 1000000; k++) {
            // this: the current instance object as the lock
            synchronized (this) {
                i++;
                increase(); // reentrancy of synchronized
            }
        }
    }

    public synchronized void increase() {
        j++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

After acquiring the current instance lock, the thread enters the synchronized block and, inside the block, calls another synchronized method of the same instance. Requesting the current instance lock again is allowed, and the method body executes: this is the most direct manifestation of a reentrant lock. Pay special attention to the fact that a subclass can also call its parent's synchronized methods through the reentrant lock (a small sketch follows below). Also note that since synchronized is implemented on top of the monitor, each reentry still increments the monitor's counter by one.
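A small illustrative sketch of reentrancy across an inheritance hierarchy (Parent and Child are hypothetical classes made up for this example): the child's synchronized method calls the parent's synchronized method on the same this, and the reentrant acquire succeeds instead of deadlocking.

class Parent {
    public synchronized void doSomething() {
        System.out.println("Parent.doSomething(), still holding the lock on " + this);
    }
}

public class Child extends Parent {
    @Override
    public synchronized void doSomething() {
        System.out.println("Child.doSomething(), lock on this acquired");
        super.doSomething(); // same lock object (this); reentrancy lets this call proceed
    }

    public static void main(String[] args) {
        new Child().doSomething();
    }
}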

2. How synchronized guarantees visibility

In simple terms, visibility means flushing data from working memory to main memory and loading it back. Concretely, this is done with memory barriers:

int b = 0;
int c = 0;

synchronized (this) {   // monitorenter
                        // Load barrier
                        // Acquire barrier
    int a = b;
    c = 1;
                        // Release barrier
}                       // monitorexit
                        // Store barrier

  • The Load barrier refreshes the processor cache: it loads data updated by other processors (from their caches or from main memory) into the current processor's cache, so the current thread is guaranteed to see the latest data.

  • The Store barrier flushes the processor cache: it writes the variables updated by the current processor back to main memory (making them visible to other processors).

At the bytecode level of a synchronized block:

  • After the monitorenter instruction, a Load barrier is added, which performs the refresh so that the latest data is read from other processors' caches or loaded from main memory.

  • After the monitorexit instruction, a Store barrier is added, which flushes the latest values to main memory (or the cache).

3. How synchronized guarantees ordering

As shown in the code above

  • After the monitorenter instruction, following the Load barrier, an Acquire barrier is added; this barrier prevents the read from being reordered with the read and write operations that follow it.
  • Before the monitorexit instruction, a Release barrier is added; this barrier prevents the write from being reordered with the read and write operations that precede it.

Therefore, the Acquire and Release barriers are what give synchronized its ordering guarantee: instructions inside the synchronized region may still be reordered among themselves, but they are never reordered across the boundary of the region.

Consolidation exercises

Here are a few examples to consolidate what we have learned; see if you can work out the execution order!

Example 1

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        // Thread-1 must wait until Thread-0 releases the lock before it can proceed
        synchronized (this) {
            System.out.println("I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " ends");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
// Output:
// I am thread Thread-0
// Thread-0 ends
// I am thread Thread-1
// Thread-1 ends

Example 2

public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();
    // Create 2 lock objects
    Object block1 = new Object();
    Object block2 = new Object();

    @Override
    public void run() {
        // This block uses the first lock; once it is released, the block below uses the
        // second lock, so the next thread can enter this block immediately
        synchronized (block1) {
            System.out.println("block1 lock, I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block1 lock, " + Thread.currentThread().getName() + " ends");
        }

        synchronized (block2) {
            System.out.println("block2 lock, I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000); // sleep() does not release the lock
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block2 lock, " + Thread.currentThread().getName() + " ends");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
// Output:
// block1 lock, I am thread Thread-0
// block1 lock, Thread-0 ends
// block2 lock, I am thread Thread-0
// block1 lock, I am thread Thread-1    (block1 and block2 are different locks, so Thread-1 need not wait for block2)
// block2 lock, Thread-0 ends
// block1 lock, Thread-1 ends
// block2 lock, I am thread Thread-1
// block2 lock, Thread-1 ends

Method-lock form: synchronized modifies an ordinary (instance) method, and the lock object defaults to this

// The lock of the current thread is the instance object
// When one thread holds the lock on the object, other threads cannot acquire it and therefore
// cannot enter the object's other synchronized instance methods
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    public synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
// Output (serialized, because both threads share the same instance lock):
// I am thread Thread-0
// Thread-0 ends
// I am thread Thread-1
// Thread-1 ends

Method-lock form again: synchronized modifies an ordinary method and the lock object defaults to this, but this time each thread uses its own instance

// The 'this' of t1 and the 'this' of t2 are two different instances, which hold different locks
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on an ordinary method: the lock defaults to this
    public synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends");
    }

    public static void main(String[] args) {
        // t1 and t2 use two different instances, so their execution is not serialized
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
// Output (order may vary; the two threads run concurrently because they hold different locks):
// I am thread Thread-0
// I am thread Thread-1
// Thread-0 ends
// Thread-1 ends

Class-lock form: synchronized modifies a static method, and the lock is the Class object


public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on a static method: the lock defaults to the current Class object,
    // so no matter which thread calls it, only one lock is involved
    public static synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
// Output (serialized even though the threads use different instances, because they share the class lock):
// I am thread Thread-0
// Thread-0 ends
// I am thread Thread-1
// Thread-1 ends

Class-lock form: a synchronized code block locking the Class object


public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        // All threads need the same lock: the Class object
        synchronized (SynchronizedObjectLock.class) {
            System.out.println("I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " ends");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
// Output (serialized, because both threads lock the same Class object):
// I am thread Thread-0
// Thread-0 ends
// I am thread Thread-1
// Thread-1 ends
