Preface

In iOS development, locks are used heavily inside the system frameworks, but application code rarely touches them directly, so they tend to feel unfamiliar. Understanding them well can help solve a lot of tricky problems in a project.

Lock performance analysis

A common way to compare lock performance is to time a loop of 100,000 lock/unlock cycles for each lock type.

@synchronized

@synchronized is the most commonly used lock, and you have almost certainly seen and used it. Usage:

    @synchronized (self) { }

How can we explore its implementation?

Rewrite the source with clang via xcrun (command: xcrun -sdk iphoneos clang -arch arm64e -rewrite-objc main.m), then open the output to see its internal structure.

In the rewritten output, the @synchronized block becomes a call to objc_sync_enter on entry, wrapped in a try/catch-like structure; in the normal case a helper object's destructor calls objc_sync_exit when the block ends. So the focus is simply objc_sync_enter and objc_sync_exit: one entry, one exit.

Following the flow, set symbolic breakpoints on the objc_sync_enter and objc_sync_exit methods above to find the library they live in: both turn out to be in libobjc.A.dylib.

You can also read the assembly produced by the clang command to see which symbol is called next, and then set a symbolic breakpoint on it.

At the bottom, it is simply a lock and an unlock.

SyncData

So what does the SyncData passed into the enter/exit functions actually do? What kind of data structure is it?

SyncData is a structure. Its first member, nextData, points to the next SyncData node, forming a singly linked list — which is why it can support chaining and recursion. The second member wraps the locked object into an unsigned long integer that is easy to compute with. threadCount is what makes multithreaded recursion possible, and mutex is a recursive_mutex_t, which is a wrapper around the recursive lock recursive_mutex_tt.

The id2data method

id2data first checks the per-thread fast cache, then the thread's cache list — two different places the data may be stored. The result is returned only after unlocking, to prevent multiple threads from colliding.

Take a look at lockp and listp, generated by macros: both index into sDataLists, a global static variable.

But we have no way of knowing what sDataLists holds from the source alone. What to do? Print it through LLDB while debugging. The array contains 64 entries, and which entry's data changes after an object is passed in depends on the object — the storage is not sequential, which confirms this is a hash table.

On the first entry, when the caches are empty, execution falls through to the code below, which allocates a new SyncData for the object.

Once a matching SyncData is found, what happens next? The two cache paths follow similar logic. First comes the count of how many times the lock has been acquired, guarded by a sanity check, then a switch on the operation: increment on acquire, decrement on release. When the count drops to zero, the object becomes available to other threads — that is what makes it multithread-capable. threadCount records how many threads hold this SyncData; lockCount records how many times the current thread has locked it.

    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;

        if (data->object == object) {
            // Found a match in fast cache.
            uintptr_t lockCount;

            result = data;
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }

            switch(why) {
            case ACQUIRE: {
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            case RELEASE:
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }

Note that it is best not to lock on a local variable, because a local may be released after the function exits while the runtime's global table still holds bookkeeping for it. Using self is common because its lifetime covers the variables and objects the method manages.

Summary of overall structure

First, @synchronized is backed by a hash structure, and the table size differs by architecture: 8 slots on a real device, 64 on the simulator or a Mac. That is understandable — a computer can afford more space — but it also means operations take different amounts of time, so timings on the simulator differ from the real machine's; the simulator takes longer. The CPU is reclaimed as soon as it goes idle, so there is no need to worry about exhausting it. On lock and unlock, the runtime checks whether the current thread already holds the object's SyncData: if so, it executes lockCount ++ or --; if a different thread is involved, it creates or updates the node and does threadCount ++ or --. If the table size is shrunk to 1 and two different objects are locked separately, then when the second object's SyncData is created, its nextData points to the first object's node. In other words, they form a zipper (a collision chain).

Types of locks

Mutex

With a mutex, only one thread works on the protected task at a time: while one is executing, the other does not execute (mutual exclusion), and there is an ordering relationship between them (synchronization). A waiting thread sleeps while idle rather than spinning.

NSLock: a non-recursive lock. If a recursive function locks with it, the second acquisition never succeeds and it deadlocks.

NSRecursiveLock: recursive but not multithread-recursive. If a recursive function locks across multiple threads, it only handles lock/unlock within the current thread and cannot solve multithreaded recursion.

@synchronized: both recursive and multithread-capable.

NSCondition: producer–consumer. The scenario: each time something is produced it can be consumed, but when production is at 0 and a consumer tries to consume, there is nothing to take. At that point NSCondition can wait, and the producer signals after each successful production. Everything else stays normal — production locks and unlocks, consumption locks and unlocks, keeping each step safe — with just one more conditional check than an ordinary lock.

NSConditionLock

The execution result may be 3 2 1 or 2 3 1, but since 2 waits for 0.1, it is more often 3 2 1.

NSConditionLock is initialized with a condition value. Locking with a condition of 2 executes only when the lock's condition equals 2; other values must wait. If no condition is supplied when locking, the operation proceeds normally.

Explore NSConditionLock

Breakpoints + assembly debugging.

Break on [NSConditionLock initWithCondition:] and read the registers: the first is the method receiver, the second is the selector (method name), and the third is the parameter.

Now let's see which methods it jumps to. Watch the bl instructions: break at each bl and read the registers to see which object methods are called and with which parameters.

At the return, check the current registers; the first holds the address of the returned object. Read that memory to get the object, then print its memory layout and analyze its member variables. The result: NSConditionLock encapsulates an NSCondition and stores the condition value passed in.

Spinlocks

While one thread is performing the task, another thread waits in a busy loop — it keeps spinning instead of going to sleep.

Read-write lock

Implementing a multi-read, single-write function: many readers may read at once, but only one writer may write at a time.

Write

Writes go through the barrier function: on a concurrent queue, the barrier serializes writes so multiple threads cannot write at once. Different threads take different amounts of time, but a later write will not start until the earlier one finishes; and because the write is submitted with async, the main thread is not blocked.

Read

Reads are synchronous on the queue, which prevents the call from returning before the data is available. The outer layer only needs to submit read methods to the global queue, without blocking the main thread.