
1. Performance comparison of locks in iOS

Multithreaded programming inevitably runs into thread-safety problems, and one of the most important ways to achieve thread safety is locking. The common locks in iOS include:

  • OSSpinLock
  • dispatch_semaphore_t
  • os_unfair_lock
  • pthread_mutex_t
  • NSLock
  • NSCondition
  • pthread_mutex_t (recursive)
  • NSRecursiveLock
  • NSConditionLock
  • @synchronized

For each of these locks, design a demo to test performance: run a for loop 100,000 times, with nothing in the loop body except a lock followed by an unlock, and record the time just before and just after the loop. Taking dispatch_semaphore_t as an example (the remaining locks use the same test logic, just written against their own APIs), the test code is sketched below; the tests were run on both a real iPhone 12 Pro and the simulator.
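The original test code is shown only as a screenshot in the source article; the following is a minimal sketch of the same idea for the dispatch_semaphore_t case (the function name and timing helper are illustrative assumptions, not the author's original code):

#import <Foundation/Foundation.h>

// A minimal sketch of the benchmark described above: lock and unlock
// 100,000 times and measure the elapsed time. Only the semaphore case
// is shown; the other locks follow the same pattern with their own APIs.
static void testSemaphore(void) {
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    int count = 100000;

    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    for (int i = 0; i < count; i++) {
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);  // lock
        dispatch_semaphore_signal(semaphore);                       // unlock
    }
    CFAbsoluteTime end = CFAbsoluteTimeGetCurrent();

    NSLog(@"dispatch_semaphore_t: %f ms", (end - start) * 1000);
}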

Graphing the results from the two devices gives the following chart:

From the data above, there is a noticeable gap between the performance of the same lock on the real device and on the simulator, but the overall trend is consistent (different test devices may produce different results).

2. Use of @synchronized and locating its source code

The previous section compared the performance of various locks on iOS; this section returns to the topic of this article and starts exploring @synchronized. From the comparison, @synchronized is not the fastest lock, yet it has not been abandoned and is still used very frequently, which suggests it has advantages of its own. The following questions will be addressed in the sections below:

  • Question 1: Is it safe to pass nil?
  • Question 2: Is it reentrant (can it be acquired recursively)?
  • Question 3: What is the underlying data structure, and how does locking and unlocking work?

First, let’s look at how @synchronized is used. Start with code that does not use @synchronized:
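The original demo is a screenshot in the source article; here is a minimal reconstruction of the idea (the counter name and queue setup are illustrative assumptions): two concurrent tasks decrement a shared counter with no lock.

#import <Foundation/Foundation.h>

// Illustrative sketch only: two concurrent tasks decrement a shared
// counter without any lock, so the printed values can interleave.
static NSInteger ticketCount = 20;

static void sellWithoutLock(void) {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    for (int i = 0; i < 2; i++) {
        dispatch_group_async(group, queue, ^{
            for (int j = 0; j < 10; j++) {
                ticketCount--;                                      // unsynchronized write
                NSLog(@"remaining tickets: %ld", (long)ticketCount);
            }
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
}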

With the code above, the output is likely to be out of order, as shown below:

Clearly there is a thread-safety problem here (for example, 17 is printed before 19). This can be fixed by locking the critical section with @synchronized, as in the code below; the output is then in order:
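Continuing the sketch above (again an illustrative reconstruction, not the original code), the decrement and the log are wrapped in a @synchronized block on a shared lock object:

// Same sketch as before, but the critical section is protected by
// @synchronized, so the counter updates and the logs stay consistent.
static NSObject *lockObject = nil;

static void sellWithLock(void) {
    lockObject = [NSObject new];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    for (int i = 0; i < 2; i++) {
        dispatch_group_async(group, queue, ^{
            for (int j = 0; j < 10; j++) {
                @synchronized (lockObject) {
                    ticketCount--;
                    NSLog(@"remaining tickets: %ld", (long)ticketCount);
                }
            }
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
}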

The result is now correct and no longer out of order. You can think of the code block wrapped by @synchronized as a critical section; locking this section prevents the thread-safety problem.

Knowing how @synchronized is used, the next step is to explore how it works. First, look at what @synchronized actually compiles to: create an empty Mac project, write an empty @synchronized block, and use Clang to rewrite it into C++ to see the underlying code. The result looks like this:
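The screenshot of the rewritten code is not reproduced here; as an illustration, running clang -rewrite-objc main.m typically turns an empty @synchronized(obj) {} block into something close to the following (tidied up; names and spacing differ slightly in the real output):

{
    id _rethrow = 0;
    id _sync_obj = (id)obj;
    objc_sync_enter(_sync_obj);                          // lock on entering the block
    try {
        struct _SYNC_EXIT {
            _SYNC_EXIT(id arg) : sync_exit(arg) {}
            ~_SYNC_EXIT() { objc_sync_exit(sync_exit); } // unlock when the scope ends
            id sync_exit;
        } _sync_exit(_sync_obj);

        // body of the @synchronized block would go here
    } catch (id e) { _rethrow = e; }
    {
        struct _FIN {
            _FIN(id reth) : rethrow(reth) {}
            ~_FIN() { if (rethrow) objc_exception_throw(rethrow); }
            id rethrow;
        } _fin_force_rethow(_rethrow);
    }
}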

Cleaned up and simplified, you can find two key pieces of code about @synchronized:

objc_sync_enter()
objc_sync_exit()

Add a symbolic breakpoint for each of the two functions and run the following code:

From the figure, @synchronized is successfully located in the objc source code. Next, let’s see how the @synchronized source implements locking and unlocking.

3. @synchronized source code analysis

This section explores the source code implementation of @synchronized, which the previous section located in the objc source. Open objc4-818.2 and search globally for objc_sync_enter and objc_sync_exit to find the following source code:
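The screenshot is not reproduced here; the two functions in objc4-818.2 look roughly as follows (lightly abridged and quoted from memory; refer to the actual source for the exact code):

// Begin synchronizing on 'obj'. Allocates the recursive mutex
// associated with 'obj' if needed.
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData *data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        objc_sync_nil();
    }

    return result;
}

// End synchronizing on 'obj'.
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData *data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}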

The two functions perform the lock and unlock operations respectively, and their internal flow is similar, so they can be analyzed together. When the obj passed in is nil, objc_sync_exit does nothing and objc_sync_enter calls the objc_sync_nil() function, whose implementation can be found as follows:

BREAKPOINT_FUNCTION(
    void objc_sync_nil(void)
);

#define BREAKPOINT_FUNCTION(prototype)                                 \
    OBJC_EXTERN __attribute__((noinline, used, visibility("hidden"))) \
    prototype { asm(""); }

Expanding the BREAKPOINT_FUNCTION macro shows that objc_sync_nil is equivalent to the following definition, i.e. it does nothing:

void objc_sync_nil(void) { asm(""); }

When obj is not nil, both functions call id2data and get back a SyncData pointer. These two clues lead to the underlying data structure of @synchronized.

3.1 Analysis of the underlying data structure of @synchronized

The previous section left two clues: the SyncData type and the id2data function. First, look at the SyncData data structure, which is defined in the source as follows:

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;            // next node in the linked list
    DisguisedPtr<objc_object> object;     // the object being locked
    int32_t threadCount;                  // number of THREADS using this block
    recursive_mutex_t mutex;              // the recursive lock itself
} SyncData;

recursive_mutex_t mutex shows that @synchronized is a recursive (reentrant) lock, which, together with the nil handling above, answers questions 1 and 2. What do the other members mean? nextData points to the next node, which suggests SyncData is the node of a linked list. Let’s keep going.

If you enter the id2data function, you can see the following code:

Because the function is long, part of it is folded. Look at the first two lines: they fetch two pieces of data keyed by object. Entering the macro definitions shows the following code:
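The folded screenshot is not shown here; the first two lines of id2data and the macros they use look roughly like this in objc4 (abridged):

// Look up the per-object spinlock and list head in the global table.
spinlock_t *lockp = &LOCK_FOR_OBJ(object);
SyncData **listp = &LIST_FOR_OBJ(object);

// The macros index into the global sDataLists table using the object itself:
#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists;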

sDataLists is of type StripedMap<SyncList>. SyncList is defined in the source as follows:
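Its definition in objc4 is roughly:

// SyncList: the head of a linked list of SyncData nodes plus the
// spinlock that protects that list.
struct SyncList {
    SyncData *data;
    spinlock_t lock;

    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};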

Then look at the StripedMap, which is defined as follows:
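A condensed version of the StripedMap template from objc4 (iteration, asserts and locking helpers omitted):

template<typename T>
class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };    // 8 slots on embedded devices
#else
    enum { StripeCount = 64 };   // 64 slots elsewhere
#endif

    struct PaddedT {
        T value alignas(CacheLineSize);   // pad each entry to a cache line
    };

    PaddedT array[StripeCount];

    // Hash the object's address to pick a slot.
    static unsigned int indexForPointer(const void *p) {
        uintptr_t addr = reinterpret_cast<uintptr_t>(p);
        return ((addr >> 4) ^ (addr >> 9)) % StripeCount;
    }

 public:
    T& operator[] (const void *p) {
        return array[indexForPointer(p)].value;
    }
    // ... locking helpers omitted ...
};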

StripedMap is a template class. Inside, it defines a PaddedT structure and creates an array of PaddedT, where T is the type passed in as the template parameter; in this case it is an array of SyncList, and each SyncList holds a SyncData pointer.

Note that a StripedMap is indexed not by a plain integer but via indexForPointer, which is clearly a hash function. In other words, a StripedMap is effectively a hash table, which will be verified in the source flow in the next section. The underlying structure of @synchronized therefore looks like this:

  • sDataLists is a hash table that resolves hash conflicts with separate chaining (the zipper method)
  • Each slot is keyed by the locked object and stores a SyncList, which is a linked-list structure
  • Each node of the list stores a SyncData, and one node represents one locked object

3.2 id2data function flow analysis

From the definition of the id2data function in the previous section, the folded code can be divided into four parts from top to bottom:

  • Retrieve data from TLS and operate on it
  • Retrieve data from the cache and operate on it
  • Walk the linked list to see whether the locked object has already been stored in sDataLists, and update lockCount
  • The object has never been locked before, so perform the first lock

3.2.1 Locking for the first time

Lock an object that has never been locked before and step through the source; the first-lock path executes the following code:
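The corresponding excerpt from objc4 looks roughly like this (abridged):

    // Allocate a new SyncData and add it to the list.
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    result->nextData = *listp;   // the new node points at the old head
    *listp = result;             // the new node becomes the head of the list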

A new SyncData is created and assigned to result; result->object points to the object currently being locked, and threadCount defaults to 1, because this is the first lock and it is necessarily being taken on exactly one thread.

result->nextData = *listp; and *listp = result; insert the new node at the head of the list, so each slot of sDataLists stores a singly linked list.

Finally, result is stored in TLS or in the cache.

3.2.2 Retrieving SyncData from TLS and Cache

Thread Local Storage (TLS) is a private per-thread space provided by some operating systems. If the operating system supports TLS, each thread has a private area where it can store its locking information, namely the SyncData. The code for this part is as follows:
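A condensed version of the TLS fast path from objc4 (abridged; error checks omitted):

#if SUPPORT_DIRECT_THREAD_KEYS
    // Check the per-thread single-entry fast cache for a matching object.
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data && data->object == object) {
        // Found a match in the fast cache.
        uintptr_t lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
        result = data;

        switch (why) {
        case ACQUIRE:
            lockCount++;
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void *)lockCount);
            break;
        case RELEASE:
            lockCount--;
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void *)lockCount);
            if (lockCount == 0) {
                // Remove from the fast cache.
                tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                // Atomic because it may collide with a concurrent ACQUIRE.
                OSAtomicDecrement32Barrier(&result->threadCount);
            }
            break;
        case CHECK:
            // do nothing
            break;
        }

        return result;
    }
#endif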

If the object stored in TLS matches the lock object, data is taken as result. For ACQUIRE, lockCount is incremented and written back to TLS. For RELEASE, the unlock path decrements lockCount, and when lockCount drops to zero the entry is removed from the cache and the atomic function OSAtomicDecrement32Barrier decrements the number of threads using the node.

The cache branch handles the same logic as the TLS branch; the only difference is that the data is read from the thread’s cache instead of from TLS.

3.2.3 Checking whether the lock object has been used before

If the object is found in neither the cache nor TLS, the linked list obtained from sDataLists[object] is traversed. If no matching node is found, the first-lock path described above runs; otherwise result is set to the matching node p and its thread count is incremented.
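The corresponding traversal in objc4 looks roughly like this (abridged):

    // Walk the list looking for an existing SyncData for this object.
    for (p = *listp; p != NULL; p = p->nextData) {
        if (p->object == object) {
            result = p;
            // Atomic because it may collide with a concurrent RELEASE.
            OSAtomicIncrement32Barrier(&result->threadCount);
            goto done;
        }
        if ((firstUnused == NULL) && (p->threadCount == 0))
            firstUnused = p;   // remember a node that can be reused
    }
    // If nothing matched, reuse firstUnused when available;
    // otherwise fall through to the first-time-lock path above.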

Conclusion

This article explores the underlying data structures and principles of @synchronized, summarized as follows:

  • Passing nil to @synchronized does not crash; it simply does nothing, so passing nil should be avoided because no locking actually happens
  • The underlying data structure of @synchronized is a hash table that resolves conflicts with separate chaining (the zipper method); each key is a locked object and each slot stores a linked list

That concludes this exploration of @synchronized; corrections and suggestions are welcome.