Deadlocks can be avoided in some cases. This article describes three techniques for avoiding them:

- Lock ordering
- Lock timeout
- Deadlock detection

1. Lock ordering
Deadlocks can easily occur when multiple threads need the same locks but acquire them in different orders.
If you can ensure that all threads acquire locks in the same order, deadlocks will not occur. Look at this example:
  Thread 1:
    lock A
    lock B

  Thread 2:
    wait for A
    lock C (when A locked)

  Thread 3:
    wait for A
    wait for B
    wait for C
If a thread (such as thread 3) needs several locks, it must acquire them in the agreed order; it may attempt a later lock in the order only after it has acquired the earlier ones.
For example, thread 2 and thread 3 can only attempt to acquire lock C after they have acquired lock A. Because thread 1 already owns lock A, threads 2 and 3 need to wait until lock A is released. They must then successfully lock A before they can attempt to lock B or C.
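Lock ordering can be sketched in Java as follows. This is a minimal illustration (the class, lock, and method names are my own, not from the article): every thread that needs both locks takes lockA before lockB, so one thread can never hold B while waiting for A, and the cycle required for a deadlock cannot form.

```java
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    static int counter = 0;

    // All threads acquire lockA first, then lockB. A thread holding lockB
    // while waiting for lockA is therefore impossible, so no deadlock cycle
    // can ever form between these two locks.
    static void increment() {
        synchronized (lockA) {
            synchronized (lockB) {
                counter++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start();
        t2.start();
        t1.join();           // both joins return: neither thread can deadlock
        t2.join();
        System.out.println(counter); // 2000: both threads ran to completion
    }
}
```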
Sequential locking is an effective deadlock prevention mechanism. However, it requires you to know in advance all the locks you might need (and to order them appropriately), and sometimes the locks a thread will need cannot be predicted.
2. Lock timeout
Another way to avoid deadlocks is to put a timeout on lock attempts: a thread abandons its lock request if the timeout is exceeded while trying to acquire the lock. If a thread does not acquire all the locks it needs within the given time, it backs off, releases every lock it has already taken, waits a random amount of time, and then retries. The random wait gives other threads a chance to acquire the same locks, and lets the application keep running even though the locks were not obtained on this attempt.
Here is an example scenario in which two threads try to acquire the same two locks in different orders, then back off and retry after a timeout:
  Thread 1 locks A
  Thread 2 locks B
  Thread 1 attempts to lock B but is blocked
  Thread 2 attempts to lock A but is blocked
  Thread 1's lock attempt on B times out
  Thread 1 backs off and releases A as well
  Thread 1 waits a random time (e.g. 257 millis) before retrying
  Thread 2's lock attempt on A times out
  Thread 2 backs off and releases B as well
  Thread 2 waits a random time (e.g. 43 millis) before retrying
In the example above, thread 2 retries about 200 milliseconds before thread 1 and can therefore acquire both locks first. Thread 1 then attempts to acquire lock A and remains in a wait state. When thread 2 finishes, thread 1 can acquire both locks as well (unless thread 2 or another thread acquires some of them before thread 1 succeeds).
It is important to note that because of lock timeouts, we cannot conclude that such a scenario is necessarily a deadlock. It may simply be that the thread holding the locks (causing the others to time out) needs a long time to finish its task.
In addition, if too many threads compete for the same set of resources at the same time, they may retry repeatedly without ever acquiring the locks, even with timeouts and back-offs. With only two threads and a retry wait of 0 to 500 milliseconds this is unlikely to happen, but with 10 or 20 threads the situation is different, because the probability that several threads pick the same (or nearly the same) wait time is much higher. (Translator's note: the timeout-and-retry mechanism is meant to avoid simultaneous contention, but when there are many threads it becomes likely that two or more of them choose identical or very close retry delays; after timing out they then retry at the same moment, starting a new round of contention and a new problem.)
One problem with this mechanism is that you cannot set a timeout on synchronized blocks in Java. You need to create a custom lock class or use the utilities in the java.util.concurrent package (available since Java 5). Writing a custom lock class is not complicated, but it is beyond the scope of this article; a later installment in this Java concurrency series will cover custom locks.
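For instance, java.util.concurrent's ReentrantLock supports timed acquisition via tryLock. Below is a sketch of the timeout-and-back-off scheme described above (the class and method names, the 500 ms timeout, and the back-off range are my own illustrative choices):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimeoutLocking {
    static int completed = 0;

    // Try to take both locks within a timeout; on failure, release whatever
    // was acquired, wait a random amount of time, and retry.
    static void withBothLocks(ReentrantLock first, ReentrantLock second, Runnable task)
            throws InterruptedException {
        while (true) {
            if (first.tryLock(500, TimeUnit.MILLISECONDS)) {
                try {
                    if (second.tryLock(500, TimeUnit.MILLISECONDS)) {
                        try {
                            task.run();
                            return;          // success: task ran under both locks
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();          // back off: release what we hold
                }
            }
            // random wait so competing threads do not retry in lock-step
            Thread.sleep(ThreadLocalRandom.current().nextLong(1, 500));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock a = new ReentrantLock();
        ReentrantLock b = new ReentrantLock();
        Runnable task = () -> completed++;   // runs only while both locks are held
        // The two threads request the locks in opposite order, as in the scenario above.
        Thread t1 = new Thread(() -> {
            try { withBothLocks(a, b, task); } catch (InterruptedException ignored) {}
        });
        Thread t2 = new Thread(() -> {
            try { withBothLocks(b, a, task); } catch (InterruptedException ignored) {}
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(completed); // 2: both threads eventually succeeded
    }
}
```

Note how the finally blocks guarantee the back-off: if the second tryLock times out, the first lock is released before the thread sleeps and retries.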
3. Deadlock detection
Deadlock detection is a better mechanism for deadlock prevention, mainly for situations where sequential locking is not possible and lock timeouts are not feasible.
Each time a thread acquires a lock, the acquisition is recorded in a data structure relating threads and locks (a map, a graph, and so on). In addition, each time a thread requests a lock, the request is also recorded in this data structure.
When a lock request fails, the thread can traverse the lock graph to see whether a deadlock has occurred. For example, if thread A requests lock 7, but lock 7 is held by thread B, thread A can check whether thread B has requested a lock currently held by thread A. If thread B has such a request, a deadlock has occurred (thread A owns lock 1 and requests lock 7; thread B owns lock 7 and requests lock 1).
Of course, deadlocks are generally more complicated than two threads holding each other's locks. Thread A waits for thread B, thread B waits for thread C, thread C waits for thread D, and thread D waits for thread A. To detect the deadlock, thread A needs to transitively examine the locks requested by thread B. Starting from the lock requested by thread B, thread A finds thread C, then thread D, and discovers that the lock requested by thread D is held by thread A itself. That is when it knows a deadlock has occurred.
Below is a graph of lock ownership and requests among four threads (A, B, C, and D). A data structure like this can be used to detect deadlocks.
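The wait-for relationships described above can be checked for cycles with a very small amount of code. This sketch simplifies the data structure to a map where each thread waits for at most one other thread (class and method names are illustrative, not from the article):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DeadlockDetector {
    // waitsFor.get(t) is the thread holding the lock that thread t is waiting
    // for. Following the chain from a thread back to an already-visited thread
    // means the wait-for graph contains a cycle, i.e. a deadlock.
    static boolean isDeadlocked(Map<String, String> waitsFor, String start) {
        Set<String> visited = new HashSet<>();
        String current = start;
        while (current != null) {
            if (!visited.add(current)) {
                return true;                 // revisited a thread: cycle found
            }
            current = waitsFor.get(current); // follow the wait-for edge
        }
        return false;                        // chain ended: no cycle from 'start'
    }

    public static void main(String[] args) {
        Map<String, String> waitsFor = new HashMap<>();
        waitsFor.put("A", "B"); // thread A waits for a lock held by thread B
        waitsFor.put("B", "C");
        waitsFor.put("C", "D");
        waitsFor.put("D", "A"); // D waits for A, closing the cycle
        System.out.println(isDeadlocked(waitsFor, "A")); // true
        waitsFor.remove("D");   // break the cycle
        System.out.println(isDeadlocked(waitsFor, "A")); // false
    }
}
```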
So what should these threads do when a deadlock is detected?
One possible solution is to release all locks, back off, wait a random amount of time, and retry. This is similar to the simple lock timeout approach, except that the back-off happens only when a deadlock has actually occurred, not merely because a lock request timed out. Even with back-off and waiting, however, if a large number of threads compete for the same locks they may deadlock repeatedly.
A better solution is to assign priorities to the threads so that one thread (or a few) backs off while the remaining threads keep the locks they hold, as if no deadlock had occurred. If the priorities assigned to these threads are fixed, the same threads will always have the higher priority. To avoid this problem, you can assign random priorities when a deadlock occurs.
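The randomized choice of which thread backs off can be as simple as picking a random member of the detected cycle. A sketch (the class and method names are my own):

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class VictimSelection {
    // Pick one thread from the detected deadlock cycle to back off.
    // Randomizing the choice avoids always sacrificing the same thread
    // when the same group of threads deadlocks repeatedly.
    static String pickVictim(List<String> cycle) {
        return cycle.get(ThreadLocalRandom.current().nextInt(cycle.size()));
    }

    public static void main(String[] args) {
        List<String> cycle = List.of("A", "B", "C", "D");
        String victim = pickVictim(cycle);
        System.out.println(victim + " backs off; the other threads keep their locks");
    }
}
```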
Stay tuned for more in the next update!