Follow my public account: Android development braggart. Let's learn from each other.

With concurrency we can do a lot of things at the same time, but there is also the problem of two or more threads interfering with each other. If you don’t guard against such conflicts, you can have two threads accessing the same bank account, printing to the same printer, changing the same value, and so on.

Shared resources

A single thread can only do one thing at a time. There is only one entity, so you never have to worry about two cars trying to park in the same spot. With multiple threads, however, two or more tasks may try to use the same resource at the same time.

Incorrect access to the resource

Let's start with an experiment involving multiple tasks: one task produces even numbers, and the other tasks check that those numbers really are even.

public abstract class IntGenerator {
	// Use the volatile modifier to guarantee visibility of this flag across threads
	private volatile boolean canceled = false;
	public abstract int next();
	public void cancel() {
		canceled = true;
	}
	// Check whether this generator has been canceled
	public boolean isCanceled() {
		return canceled;
	}
}

Any IntGenerator can be tested with the following EvenChecker class:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EvenChecker implements Runnable {
	private IntGenerator generator;
	private final int id;

	protected EvenChecker(IntGenerator generator, int id) {
		this.generator = generator;
		this.id = id;
	}

	@Override
	public void run() {
		while (!generator.isCanceled()) {
			int val = generator.next();
			if (val % 2 != 0) {
				System.out.println(val + " is not even");
				generator.cancel();
			}
		}
	}

	public static void test(IntGenerator gp, int count) {
		ExecutorService service = Executors.newCachedThreadPool();
		for (int i = 0; i < count; i++) {
			service.execute(new EvenChecker(gp, i));
		}
		service.shutdown();
	}

	public static void test(IntGenerator gp) {
		test(gp, 10);
	}
}

In the example above, generator.cancel() does not cancel the task itself; it only sets a flag on the IntGenerator object marking it as canceled. All the ways a concurrent system can fail must be considered carefully. For example, one task should not depend directly on another task, because the order in which tasks shut down cannot be guaranteed. Here, by making each task depend on a non-task object (the shared IntGenerator), we eliminate that potential race condition.

EvenChecker repeatedly reads and tests the value returned by its IntGenerator. If isCanceled() returns true, run() returns, which tells the Executor created in test() that the task is complete. Any EvenChecker task can call cancel() on the IntGenerator it is associated with, which causes all other EvenCheckers using that IntGenerator to shut down.

The first IntGenerator has a next() method that produces a series of even values:

public class EvenGenerator extends IntGenerator {
	private int currentEvenValue = 0;

	@Override
	public int next() {
		++currentEvenValue;
		++currentEvenValue;

		return currentEvenValue;
	}

	public static void main(String[] args) {
		EvenChecker.test(new EvenGenerator());
	}
}

Execution Result:

1537 is not even
1541 is not even
1539 is not even

One task might call next() after another task has performed the first increment but not the second, leaving the value in an inappropriate intermediate state. To prove that this can happen, the test() method creates a set of EvenChecker objects that continuously read and test the same EvenGenerator, checking whether each value is even; if it is not, the program aborts.

The program will eventually fail, because the EvenChecker tasks can access the EvenGenerator while it is in that intermediate state. However, depending on the operating system and other implementation details, the failure may not show up until many cycles have passed. It is important to note that the increment operation itself requires multiple steps, and a task can be suspended by the threading mechanism in the middle of an increment; that is, increment is not an atomic operation in Java. Therefore even a single increment is not safe if the task is not protected.

Resolve competition for shared resources

The previous example shows a basic problem with threads: you never know when a thread is running. For concurrent operations, you need some way to prevent both tasks from accessing the same resource, at least during critical phases. One way to prevent this conflict is to lock a resource when it is used by a task. The first task to access a resource must lock it so that other tasks cannot access it until it is unlocked, and when it is unlocked, another task can lock it and use it, and so on.

Almost all concurrent modes solve thread conflicts by serializing access to shared resources. This means that only one task is allowed to access the shared resource at any given time. This is usually done by prefacing the code with a lock statement so that only one task can run the code at a time. Because lock statements produce a mutually exclusive effect, this mechanism is called mutex.

In addition, when a lock is released we cannot be sure which task will acquire it next, because the thread scheduler is not deterministic. You can give the scheduler hints with yield() and setPriority().

Java provides built-in support for preventing resource conflicts in the form of the synchronized keyword. When a task is about to execute a piece of code guarded by synchronized, it checks whether the lock is available, acquires it, executes the code, and then releases the lock. The shared resource is typically just a piece of memory in the form of an object, but it can also be a file, an I/O port, or something like a printer. To control access to a shared resource, wrap it in an object and mark every method that accesses the resource as synchronized.

Here’s how to declare synchronized methods:

synchronized void f() { /* ... */ }
synchronized void g() { /* ... */ }

All objects automatically contain a single lock (also called a monitor). When any synchronized method of an object is called, that object is locked, and no other synchronized method of the same object can be called until the first method finishes and releases the lock. For a particular object, all of its synchronized methods share the same lock, which can be used to prevent more than one task at a time from accessing the object's memory.

Note: It is important to make fields private when using concurrency; otherwise the synchronized keyword cannot prevent other tasks from accessing the fields directly, which can cause conflicts.

There is also a lock for each class, so synchronized static methods can prevent concurrent access to static data within the scope of the class.
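To make the two kinds of locks concrete, here is a minimal sketch (the Counters class and its fields are my own illustration, not part of the original examples): the static synchronized method locks on the Class object, while the instance method locks on this, and the two locks are independent.

public class Counters {
	private static int classCount = 0; // guarded by the lock on Counters.class
	private int instanceCount = 0;     // guarded by the lock on this instance

	// Acquires the class-level lock, serializing access to static data
	public static synchronized void incrementClassCount() {
		classCount++;
	}

	// Acquires this object's lock, independent of the class-level lock
	public synchronized void incrementInstanceCount() {
		instanceCount++;
	}
}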

When should I synchronize?

If you are writing a variable that might next be read by another thread, or reading a variable that might have last been written by another thread, you must use synchronization, and both the reader and the writer must synchronize using the same monitor lock.

Synchronizing EvenGenerator

You can prevent unwanted thread access by adding the synchronized keyword to EvenGenerator:

public class EvenGenerator extends IntGenerator {
	private int currentEvenValue = 0;

	@Override
	public synchronized int next() {
		++currentEvenValue;
		Thread.yield();
		++currentEvenValue;

		return currentEvenValue;
	}

	public static void main(String[] args) {
		EvenChecker.test(new EvenGenerator());
	}
}

The call to Thread.yield() is inserted between the two increments to raise the likelihood of observing an odd value. Because the mutex prevents more than one task from being in the critical section at the same time, the failure no longer occurs. The first task that enters next() acquires the lock, and any other task attempting to acquire it is blocked until the first task releases it.

Using explicit Lock objects

The Java SE5 class library also contains an explicit mutex mechanism, defined in java.util.concurrent.locks. A Lock object must be created, locked, and released explicitly, so the code is less elegant than with the built-in lock form, but it is more flexible for solving certain kinds of problems.

Let's rewrite the code above using an explicit Lock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class EvenGenerator extends IntGenerator {
	private int currentEvenValue = 0;
	// Create the lock
	private Lock lock = new ReentrantLock();

	@Override
	public int next() {
		// Acquire the lock
		lock.lock();
		try {
			++currentEvenValue;
			Thread.yield();
			++currentEvenValue;

			return currentEvenValue;
		} finally {
			// Release the lock once the calculation is done
			lock.unlock();
		}
	}

	public static void main(String[] args) {
		EvenChecker.test(new EvenGenerator());
	}
}

The idiom in this example is important when you use Lock objects: the call to unlock() must be placed in a try-finally statement, and the return statement must appear inside the try clause to ensure that the unlock does not happen too early and expose the data to a second task. Although the try-finally form requires more code than the synchronized keyword, explicit locks have a clear advantage: if something fails inside a synchronized method or block, an exception is thrown, but we get no chance to do any cleanup to keep the system in a good state. With an explicit Lock object, the finally clause can be used to restore the system to a correct state. In general, synchronized is used more often; explicit Lock objects are reserved for special problems.
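As a rough sketch of that cleanup advantage (the class, the invariant, and riskyOperation() are illustrative assumptions, not from the original text): with an explicit lock, the finally clause can restore the guarded data to a consistent state before the lock is released, even when the operation throws.

import java.util.concurrent.locks.ReentrantLock;

public class SafeEvenCounter {
	private final ReentrantLock lock = new ReentrantLock();
	private int value = 0; // invariant: always even when the lock is not held

	public void incrementByTwo() {
		lock.lock();
		boolean done = false;
		try {
			value++;
			riskyOperation(); // placeholder for work that might throw
			value++;
			done = true;
		} finally {
			if (!done) {
				value--; // restore the even invariant before releasing the lock
			}
			lock.unlock();
		}
	}

	private void riskyOperation() { /* might throw in a real system */ }
}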

Another thing the synchronized keyword cannot do is try to acquire a lock and fail, or try to acquire a lock for a certain period of time and then give up. An explicit ReentrantLock lets you do both:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AttemptLocking {
	private ReentrantLock lock = new ReentrantLock();

	public void untimed() {
		// Try to acquire the lock without waiting
		boolean captured = lock.tryLock();
		try {
			System.out.println("tryLock(): " + captured);
		} finally {
			if (captured) lock.unlock();
		}
	}

	public void timed() {
		boolean captured = false;
		try {
			// Give up if the lock cannot be acquired within 2 seconds
			captured = lock.tryLock(2, TimeUnit.SECONDS);
		} catch (InterruptedException e) {
			throw new RuntimeException(e);
		}
		try {
			System.out.println("tryLock(2, TimeUnit.SECONDS): " + captured);
		} finally {
			if (captured) lock.unlock();
		}
	}

	public static void main(String[] args) {
		final AttemptLocking al = new AttemptLocking();
		al.untimed(); // True -- lock is available
		al.timed();   // True -- lock is available
		// Now create a separate task to grab the lock:
		new Thread() {
			{ setDaemon(true); }
			public void run() {
				al.lock.lock();
				System.out.println("acquired");
			}
		}.start();
		Thread.yield(); // Give the 2nd task a chance
		al.untimed(); // False -- lock grabbed by task
		al.timed();   // False -- lock grabbed by task
	}
}

Execution Result:

tryLock(): true
tryLock(2, TimeUnit.SECONDS): true
tryLock(): true
tryLock(2, TimeUnit.SECONDS): true
acquired

ReentrantLock lets us try to acquire the lock and fail to do so: if someone else has already acquired it, you can decide to go off and do something else instead of waiting for the lock to be released. Explicit Lock objects also give you finer-grained control over locking and unlocking than the built-in synchronized lock.
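For instance, the "go and do something else instead of waiting" pattern that tryLock() enables might look like the following sketch (PoliteWorker and tryWork() are hypothetical names, not part of the original example):

import java.util.concurrent.locks.ReentrantLock;

public class PoliteWorker {
	private final ReentrantLock lock = new ReentrantLock();

	public void tryWork() {
		// Acquire the lock only if it is free right now
		if (lock.tryLock()) {
			try {
				// ... access the shared resource ...
			} finally {
				lock.unlock();
			}
		} else {
			// Another task holds the lock; do other useful work instead of blocking
		}
	}
}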

Atomicity and volatility

When working with Java threads, it is tempting to think that atomic operations do not require synchronization. An atomic operation is one that cannot be interrupted by the thread scheduler. But that thinking is wrong, and relying on atomicity is dangerous; it is only used safely in a few very clever library constructs. Atomicity applies to "simple operations" (reads and assignments) on all primitive types except long and double. The JVM is allowed to perform reads and writes of these 64-bit quantities as two separate 32-bit operations, which raises the possibility of a context switch happening between the two halves, so that different tasks see incorrect values. However, you do get atomic reads and assignments for long and double if you declare them volatile (this works correctly as of Java SE5).

Thus, an atomic operation cannot be interrupted by the threading mechanism, but even so, relying on that is risky: operations that look safely atomic often are not.

On multi-core processors, visibility is a far bigger problem than atomicity. Changes made by one task may not be visible to other tasks, because each task may temporarily cache values in processor-local caches. Synchronization forces changes made by one task to become visible across the application. The volatile keyword also ensures visibility: every write to a volatile field is flushed to main memory immediately, and every read of it comes from main memory, so any task that reads the field sees the latest write. Synchronization also causes flushing to main memory, so if a field is only ever accessed from synchronized methods or blocks, the volatile modifier is unnecessary. It is typically only safe to use volatile instead of synchronized when the class has a single mutable field. Our first choice should be the synchronized keyword, which is the safest approach.
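As a rough sketch of the visibility guarantee described above (the StopFlag class is illustrative, not from the original text): a volatile flag written by one thread is reliably seen by another thread spinning on it; without volatile, the spinning thread might never observe the change.

public class StopFlag {
	// volatile makes writes to this flag immediately visible to other threads
	private volatile boolean running = true;

	public void demo() throws InterruptedException {
		Thread worker = new Thread() {
			public void run() {
				while (running) {
					Thread.yield(); // spin until another thread clears the flag
				}
				System.out.println("worker saw running == false");
			}
		};
		worker.start();
		Thread.sleep(100);
		running = false; // without volatile, the worker might never see this write
		worker.join();
	}

	public static void main(String[] args) throws InterruptedException {
		new StopFlag().demo();
	}
}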

What is an atomic operation?

Assignment to and reading the value of a field are usually atomic operations, but increment and decrement are not:

public class Atomicity {
	int i;

	void f() {
		i++;
	}

	void g() {
		i += 3;
	}
}

Let’s look at the compiled file:

void f();
		0  aload_0 [this]
		1  dup
		2  getfield concurrency.Atomicity.i : int [17]
		5  iconst_1
		6  iadd
		7  putfield concurrency.Atomicity.i : int [17]
 // Method descriptor #8 ()V
 // Stack: 3, Locals: 1
 void g();
		0  aload_0 [this]
		1  dup
		2  getfield concurrency.Atomicity.i : int [17]
		5  iconst_3
		6  iadd
		7  putfield concurrency.Atomicity.i : int [17]

Each method compiles down to a getfield and a putfield with other instructions in between, so between reading the value and writing the new one another task could modify the field. These operations are therefore not atomic.

Let’s see if the following example fits the above description:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AtomicityTest implements Runnable {
	private int i = 0;

	public int getValue() {
		return i;
	}

	private synchronized void evenIncrement() {
		i++;
		i++;
	}

	public void run() {
		while (true)
			evenIncrement();
	}

	public static void main(String[] args) {
		ExecutorService exec = Executors.newCachedThreadPool();
		AtomicityTest at = new AtomicityTest();
		exec.execute(at);
		while (true) {
			int val = at.getValue();
			if (val % 2 != 0) {
				System.out.println(val);
				System.exit(0);
			}
		}
	}
}

Test results:

1


The program finds an odd number and terminates. Although return i is an atomic operation, the lack of synchronization allows the value to be read while the object is in an unstable intermediate state. And because i is not volatile, there is also a visibility problem. Both getValue() and evenIncrement() must be synchronized. Only simple reads and assignments of primitive types are safely atomic; an atomic read can still observe an object in an unstable intermediate state. The wisest course is to follow the rules of synchronization.
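A sketch of the fix suggested above: make getValue() synchronized on the same lock as evenIncrement(), so a reader can never observe i between the two increments (the Fixed suffix is my own name; everything else mirrors the listing above).

public class AtomicityTestFixed implements Runnable {
	private int i = 0;

	// Reader and writer now share the same object lock,
	// so the intermediate odd value is never visible
	public synchronized int getValue() {
		return i;
	}

	private synchronized void evenIncrement() {
		i++;
		i++;
	}

	public void run() {
		while (true)
			evenIncrement();
	}
}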

Atomic classes

Java SE5 introduced special atomic variable classes such as AtomicInteger, AtomicLong, and AtomicReference, which provide an atomic conditional update operation of the form:

boolean compareAndSet(expectedValue,updateValue);

These classes are tuned to use machine-level atomic instructions on modern processors, so they are safe to use. They are rarely needed in everyday code, but they can be very useful for performance tuning.
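As an illustration of the conditional-update form shown above, compareAndSet() is typically used in a retry loop: read the current value, compute the new one, and write it back only if no other thread has changed the value in between. The CasSketch class and its multiplyBy() operation are made-up examples, not from the original text:

import java.util.concurrent.atomic.AtomicInteger;

public class CasSketch {
	private final AtomicInteger value = new AtomicInteger(1);

	public int multiplyBy(int factor) {
		while (true) {
			int current = value.get();
			int next = current * factor;
			// Succeeds only if the value is still 'current'; otherwise retry
			if (value.compareAndSet(current, next)) {
				return next;
			}
		}
	}
}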

For example, the AtomicityTest example above can be rewritten using AtomicInteger:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIntegerTest implements Runnable {
	private AtomicInteger i = new AtomicInteger(0);

	public int getValue() {
		return i.get();
	}

	private void evenIncrement() {
		i.addAndGet(2);
	}

	@Override
	public void run() {
		while (true) {
			evenIncrement();
		}
	}

	public static void main(String[] args) {
		ExecutorService exec = Executors.newCachedThreadPool();
		AtomicIntegerTest test = new AtomicIntegerTest();
		exec.execute(test);
		while (true) {
			int val = test.getValue();
			if (val % 2 != 0) {
				System.out.println(val);
				System.exit(0);
			}
		}
	}
}

The atomic classes were designed to build the classes in java.util.concurrent, so use them in your own code only in special cases. The example above achieves correct synchronization without any locking mechanism, but it is usually safer to rely on locks.

Critical sections

Sometimes we only need to prevent multiple threads from accessing part of the code inside a method rather than the entire method. The section of code isolated this way is called a critical section, and it is created with the same synchronized keyword. Here, synchronized specifies an object whose lock is used to synchronize the enclosed code:

synchronized (syncObject){
	// Block of code controlled by synchronization
}

This is called a synchronized block; before entering it, a task must acquire the lock on syncObject. If another task already holds that lock, the task cannot enter the critical section until the lock is released. Using a synchronized block instead of synchronizing the entire method can significantly increase the amount of time the object is available to other tasks.

The following example compares the two synchronization control methods:

public class Pair {
	private int x, y;

	public Pair(int x, int y) {
		this.x = x;
		this.y = y;
	}

	public Pair() { this(0, 0); }

	public int getX() { return x; }
	public int getY() { return y; }

	// The increment operations are not thread-safe
	public void incrementX() {
		x++;
	}

	public void incrementY() {
		y++;
	}

	public String toString() {
		return "x: " + x + ", y: " + y;
	}

	public class PairValuesNotEqualException extends RuntimeException {
		public PairValuesNotEqualException() {
			super("Pair values not equal: " + Pair.this);
		}
	}

	// Arbitrary invariant -- both variables must be equal:
	public void checkState() {
		if (x != y)
			throw new PairValuesNotEqualException();
	}
}

Template class:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public abstract class PairManager {
	// Thread-safe counter
	AtomicInteger checkCounter = new AtomicInteger(0);
	protected Pair p = new Pair();
	// The wrapped collection is also thread-safe
	private List<Pair> storage = Collections.synchronizedList(new ArrayList<Pair>());

	// This method is thread-safe
	public synchronized Pair getPair() {
		// Make a copy to keep the original safe:
		return new Pair(p.getX(), p.getY());
	}

	// Store a pair, pausing 50 milliseconds to simulate a slow operation
	protected void store(Pair p) {
		storage.add(p);
		try {
			TimeUnit.MILLISECONDS.sleep(50);
		} catch (InterruptedException ignore) {
		}
	}

	public abstract void increment();
}

The two implementations:

public class PairManager1 extends PairManager {
	// Synchronize the entire method
	@Override
	public synchronized void increment() {
		p.incrementX();
		p.incrementY();
		store(getPair());
	}
}

public class PairManager2 extends PairManager {
	// Synchronize only the critical section; store() runs outside the lock
	@Override
	public void increment() {
		Pair temp;
		synchronized (this) {
			p.incrementX();
			p.incrementY();
			temp = getPair();
		}
		store(temp);
	}
}

Two tasks: one keeps incrementing the pair, and the other keeps checking that the pair is in a consistent state:

public class PairManipulator implements Runnable {
	private PairManager pm;

	public PairManipulator(PairManager pm) {
		this.pm = pm;
	}

	public void run() {
		while (true)
			pm.increment();
	}

	public String toString() {
		return "Pair: " + pm.getPair() +
			" checkCounter = " + pm.checkCounter.get();
	}
}

public class PairChecker implements Runnable {
	private PairManager pm;

	public PairChecker(PairManager pm) {
		this.pm = pm;
	}

	public void run() {
		while (true) {
			pm.checkCounter.incrementAndGet();
			pm.getPair().checkState();
		}
	}
}

The test class:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CriticalSection {
	static void testApproaches(PairManager pman1, PairManager pman2) {
		ExecutorService exec = Executors.newCachedThreadPool();

		PairManipulator
			pm1 = new PairManipulator(pman1),
			pm2 = new PairManipulator(pman2);
		PairChecker
			pcheck1 = new PairChecker(pman1),
			pcheck2 = new PairChecker(pman2);

		exec.execute(pm1);
		exec.execute(pm2);
		exec.execute(pcheck1);
		exec.execute(pcheck2);
		try {
			TimeUnit.MILLISECONDS.sleep(500);
		} catch (InterruptedException e) {
			System.out.println("Sleep interrupted");
		}
		System.out.println("pm1: " + pm1 + "\npm2: " + pm2);
		System.exit(0);
	}

	public static void main(String[] args) {
		PairManager
			pman1 = new PairManager1(),
			pman2 = new PairManager2();
		testApproaches(pman1, pman2);
	}
}

Final test results:

pm1: Pair: x: 11, y: 11 checkCounter = 2183
pm2: Pair: x: 12, y: 12 checkCounter = 24600386

Although the results may vary from run to run, the PairChecker's check count is generally much lower for PairManager1 than for PairManager2. The latter synchronizes only a block of code, so the object stays unlocked for more of the time and is more available to the other threads.

Synchronize on other objects

A synchronized block must be given an object to synchronize on, and usually the most sensible choice is the object whose method is being called: synchronized(this). With that form, while the lock of the synchronized block is held, the object's other synchronized methods and critical sections guarded by the same lock cannot be entered by other tasks.

Sometimes you must synchronize on another object, but if you do, you must ensure that all related tasks are synchronized on the same object.

The following example demonstrates that two tasks can access the same object at the same time, as long as the methods on the object are synchronized on different locks:

class DualSynch {
	private Object syncObject = new Object();
	public synchronized void f() {
		for (int i = 0; i < 5; i++) {
			System.out.println("f()"); Thread.yield();
		}
	}
	public void g() {
		synchronized (syncObject) {
			for (int i = 0; i < 5; i++) {
				System.out.println("g()"); Thread.yield();
			}
		}
	}
}

public class SyncObject {
	public static void main(String[] args) {
		final DualSynch ds = new DualSynch();
		new Thread() {
			public void run() { ds.f(); }
		}.start();
		ds.g();
	}
}

Execution Result:

g()
f()
g()
f()...

Here f() synchronizes on this, while g() uses a synchronized block on syncObject. The two synchronizations are therefore independent of each other, and as the method calls in main() show, neither method is blocked by the other.

Thread local storage

A second way to prevent tasks from colliding over shared resources is to eliminate the sharing of variables altogether. Thread-local storage is a mechanism that automatically creates a separate piece of storage for the same variable in each thread that uses it, so if five threads use an object with a thread-local variable, five separate pieces of storage are created. Thread-local storage lets you associate state with a particular thread.

Creating and managing ThreadLocal storage can be done with the java.lang.ThreadLocal class:

public class Accessor implements Runnable {
	private final int id;

	protected Accessor(int id) {
		this.id = id;
	}

	@Override
	public void run() {
		while (!Thread.currentThread().isInterrupted()) {
			ThreadLocalVariableHolder.increment();
			System.out.println(this);
			Thread.yield();
		}
	}

	@Override
	public String toString() {
		return "#" + id + ": " + ThreadLocalVariableHolder.get();
	}
}

Thread local storage:

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadLocalVariableHolder {
	private static ThreadLocal<Integer> value = new ThreadLocal<Integer>() {
		private Random rand = new Random(47);
		protected synchronized Integer initialValue() {
			return rand.nextInt(10000);
		}
	};

	public static void increment() {
		value.set(value.get() + 1);
	}

	public static int get() {
		return value.get();
	}

	public static void main(String[] args) throws Exception {
		ExecutorService exec = Executors.newCachedThreadPool();
		for (int i = 0; i < 5; i++) {
			exec.execute(new Accessor(i));
		}
		TimeUnit.SECONDS.sleep(3);
		// shutdownNow() interrupts the tasks, which check isInterrupted() in run()
		exec.shutdownNow();
	}
}

Test results:

#0: 712564
#0: 712565
#0: 712566
#0: 712567
#0: 712568
...

ThreadLocal objects are usually stored as static fields. A ThreadLocal only allows its contents to be accessed through get() and set(): get() returns the copy of the object associated with the current thread, and set() stores its argument in the current thread's copy, replacing whatever was there. When you run the program you can see that each thread keeps its own count, because each one has been allocated its own storage.
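As a side note (not from the original text), on Java 8 and later the holder class above can be written more compactly with ThreadLocal.withInitial(), which takes a supplier for the per-thread initial value:

import java.util.concurrent.ThreadLocalRandom;

public class ThreadLocalHolder {
	// Each thread that first touches 'value' gets its own random starting point
	private static final ThreadLocal<Integer> value =
			ThreadLocal.withInitial(() -> ThreadLocalRandom.current().nextInt(10000));

	public static void increment() {
		value.set(value.get() + 1); // updates only the current thread's copy
	}

	public static int get() {
		return value.get();
	}
}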