Memory management

  • iOS manages object memory with reference counting.

    • When we create an object its reference count is set to one; when something else needs the object and holds it, the count is incremented by one; when a holder no longer needs the object, the count is decremented by one. When the count reaches zero nobody is using the object any more, so it is destroyed.
    • Create and hold an object (alloc, new, copy, mutableCopy and similar methods)
    • Hold an object (the retain method)
    • Release an object (the release method)
    • Discard an object (the dealloc method)
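    A minimal sketch of these four operations under manual reference counting; it assumes ARC is disabled (compiled with -fno-objc-arc) and the function name is made up:

      #import <Foundation/Foundation.h>

      void referenceCountingExample(void) {
          id obj = [[NSObject alloc] init]; // create and hold: reference count = 1
          [obj retain];                     // hold:            reference count = 2
          [obj release];                    // release:         reference count = 1
          [obj release];                    // release:         reference count = 0,
                                            // dealloc runs and the object is discarded
      }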
  • ARC (Automatic Reference Counting)

    • Automatic Reference Counting was introduced with Xcode 4.2 and the LLVM compiler 3.0 (and later versions).
    • With ARC you no longer write retain or release calls yourself, which greatly reduces development effort and, at the same time, the risk of crashes and memory leaks caused by reference counting mistakes.
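    The same idea under ARC: only the ownership qualifier is written and the compiler inserts the retain/release calls (a sketch, function name made up):

      void arcExample(void) {
          id __strong obj = [[NSObject alloc] init]; // created and held by the strong variable
          // No retain/release is written; when obj goes out of scope the compiler
          // releases the object automatically.
      }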
  • Differences between the GNU and Apple reference counting implementations

    • GNU
      • The alloc method places a header (struct obj_layout, whose retained member holds the reference count) in front of the object’s memory block, so the count is written into the header of the object itself.
    • Apple
      • The Apple implementation is thought to manage reference counts in a hash table: object memory blocks are allocated without a header, each record in the reference count table is keyed by a memory block address, and from each record you can trace back to the corresponding object’s memory block.
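      A simplified sketch of the GNU-style layout described above; the struct and member names follow the text, the rest is illustrative:

        struct obj_layout {
            unsigned long retained;   // the reference count (NSUInteger in the GNUstep source)
        };
        // alloc reserves sizeof(struct obj_layout) extra bytes in front of the object and
        // returns a pointer just past that header, so retain/release only have to increment
        // or decrement 'retained' in the memory immediately before the object.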
  • autorelease

    • The lifetime of an NSAutoreleasePool object corresponds to the scope of a C variable. When the NSAutoreleasePool object is discarded, the release method is called on every object that was registered in it with autorelease.
    • An NSAutoreleasePool object is generated and discarded in every iteration of the NSRunLoop, so the pool’s life cycle matches one pass of the run loop.
    • GNU autorelease source code implementation
      • The autorelease instance method essentially calls the addObject method on the current NSAutoreleasePool object
      • NSAutoreleasePool stores the registered objects internally as an array; registering an object is just like appending an element to an NSMutableArray. People often ask which pool releases an object when NSAutoreleasePool objects are nested or held: the object is added to whichever pool is current at the time, and is released when that pool is discarded.
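      A small example of the behaviour described above, first in the explicit MRC style (-fno-objc-arc) and then with the @autoreleasepool block used under ARC (function names made up; the two functions would live in files compiled without and with ARC respectively):

        #import <Foundation/Foundation.h>

        void poolExampleMRC(void) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            id obj = [[[NSObject alloc] init] autorelease]; // registered in the pool via addObject
            NSLog(@"%@", obj);
            [pool drain];                                   // release is sent to every registered object
        }

        void poolExampleARC(void) {
            @autoreleasepool {
                id obj = [NSMutableArray array];            // autoreleased objects are registered here
                NSLog(@"%@", obj);
            }                                               // the pool is drained at the closing brace
        }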
  • Ownership qualifiers (modifiers)

    • __strong (the strong and copy property attributes both give strong ownership; copy additionally stores a copy of the object). It is the default qualifier under ARC, and you no longer write retain or release yourself: objects you create are held by the variable, and objects you did not create are held simply by assigning them to a __strong variable. When a __strong variable is discarded or assigned a new value, the object it no longer needs to own is released.
    • __weak (the weak property attribute) is used to avoid circular references. A weak reference does not hold the object instance, and the object is released once nothing holds it any more.
      • When the referenced object is discarded, the weak reference is automatically invalidated and set to nil
      • Accessing the referenced object through a __weak variable actually accesses an object registered in the autoreleasepool: while the reference is being used, the object is registered in the autoreleasepool so that it is guaranteed to exist until the pool is drained.
        • If __weak variables are accessed heavily, the number of objects registered in the autoreleasepool grows significantly, so it is better to first assign the weak reference to a temporary __strong variable and use that. The object is then registered in the autoreleasepool only once (for the single access through the weak variable), which greatly reduces the overhead.
      • When a variable is qualified with __weak, the address of the variable is registered in the weak table (a hash table like the reference count table), keyed by the object’s address. When the object is discarded, the variable’s address is removed from the weak table.
        • Steps when the object is discarded
          • 1. Get the record from the weak table whose key is the discarded object’s address
          • 2. Assign nil to every __weak variable address contained in that record
          • 3. Delete the record from the weak table
          • 4. Delete the record from the reference count table whose key is the discarded object’s address
      • Heavy use of the weak qualifier therefore consumes CPU resources, so use __weak only where it is needed to break circular references
    • __unsafe_unretained (corresponds to the assign attribute) behaves like weak in that it does not hold the object, but it gives no guarantee that the object still exists when you use it: when the object is discarded, an __unsafe_unretained variable is not set to nil and is left dangling
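    A small illustration of the qualifiers above under ARC; the behaviour of the last NSLog is what distinguishes __weak from __unsafe_unretained (function name made up):

      #import <Foundation/Foundation.h>

      void weakExample(void) {
          id __weak weakObj = nil;
          {
              id __strong obj = [[NSObject alloc] init];
              weakObj = obj;             // weak reference: the object is not retained
              NSLog(@"%@", weakObj);     // the object is still alive here
          }                              // the strong reference goes away, the object is discarded
          NSLog(@"%@", weakObj);         // prints (null): the __weak variable was set to nil
          // An __unsafe_unretained variable would not be set to nil here and would dangle.
      }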

Block

  • Block overview
    • A Block is an anonymous function together with the automatic (local) variables it captures; an anonymous function is simply a function without a name
    • Block literal syntax: ^ return-type (parameter list) { expressions }
    • A Block variable can be used exactly like an ordinary C variable: as an automatic variable, a function parameter, a static variable, a static global variable or a global variable
    • Block nature
      • In Objective-C, creating an object from a class means creating an instance of the struct that corresponds to that class, and each instance stores a pointer to its class in its isa member variable. In the clang -rewrite-objc output you can see that a class pointer (such as &_NSConcreteStackBlock) is assigned to the isa member of the Block’s struct as well, which is why a Block is also an Objective-C object.
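      Roughly what clang -rewrite-objc produces for a Block, simplified here; the isa member is what makes the Block an Objective-C object:

        struct __main_block_desc_0;              // descriptor struct, details omitted

        struct __block_impl {
            void *isa;        // set to a class such as &_NSConcreteStackBlock,
                              // so the Block behaves as an Objective-C object
            int Flags;
            int Reserved;
            void *FuncPtr;    // pointer to the C function generated from the Block body
        };

        struct __main_block_impl_0 {
            struct __block_impl impl;
            struct __main_block_desc_0 *Desc;
            // captured automatic variables are appended here as extra members
        };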
    • Block variable interception
      • Capturing automatic variable values: the values of the automatic variables used in the Block literal are stored in the Block’s struct instance (that is, in the Block itself). The Block captures the instantaneous value at the moment the Block literal is evaluated, and that stored value cannot be modified afterwards.
      • Overwriting the copy held inside the Block’s struct instance would not change the original automatic variable, so the compiler simply forbids assigning to a captured automatic variable
      • Static variables, static global variables and global variables can be modified inside a Block. When a Block captures a static variable, a pointer to it is passed to the Block struct’s constructor and saved, so the Block writes through that pointer; this is also the simplest way to use a variable beyond its scope (see the sketch below)
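      A short example of these capture rules (function name made up):

        #import <Foundation/Foundation.h>

        void captureExample(void) {
            static int staticVal = 10;         // a pointer to this static is captured
            int val = 10;
            void (^blk)(void) = ^{
                NSLog(@"val = %d", val);       // the value 10 was copied into the Block here
                // val = 20;                   // compile error: captured automatic variables
                //                             //                cannot be assigned to
                staticVal = 20;                // OK: the static variable is reached via a pointer
            };
            val = 50;                          // does not affect the value already captured
            blk();                             // prints "val = 10"
        }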
    • __block specifier
      • __block is a storage-class specifier, used to specify which storage area a variable is placed in.
      • When a variable val is qualified with __block, it is turned into an automatic variable of the struct type __Block_byref_val_0: an instance of that struct is generated on the stack, and the struct holds a member variable equivalent to the original automatic variable.
      • The __forwarding member of the __Block_byref_val_0 instance holds a pointer to the instance itself, and the member variable val (the captured automatic variable) is always accessed through __forwarding
      • Why does __forwarding point to the instance itself?
        • A Block placed in the global data area can safely be used through a pointer even after the scope of its variables ends, but a Block placed on the stack is discarded when the scope of the variable that owns it ends. Since __block variables are also placed on the stack, they are likewise discarded at the end of that scope.
        • Blocks therefore provide a way to copy a Block and its __block variables from the stack to the heap. Once copied, the heap Block continues to exist after the variable scope ends, and the __forwarding member of the stack instance is rewritten to point at the __block variable structure copied to the heap.
        • By always going through the __forwarding member, a __block variable is accessed correctly whether it currently lives on the stack or on the heap
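      A minimal __block example; val lives in the __Block_byref-style structure described above, so the assignment inside the Block is visible outside it (function name made up):

        #import <Foundation/Foundation.h>

        void blockSpecifierExample(void) {
            __block int val = 0;           // placed in a __Block_byref_val_0-style struct
            void (^blk)(void) = ^{
                val = 1;                   // writes through __forwarding, so it is allowed
            };
            blk();
            NSLog(@"%d", val);             // prints 1
        }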
    • Block Storage type
      • Global Block (_NSConcreteGlobalBlock): a Block written where a global variable would be written cannot use automatic variables, so nothing is captured and the Block is placed in the program’s data area
      • Stack Block (_NSConcreteStackBlock)
      • Heap Block (_NSConcreteMallocBlock)
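      A sketch showing the storage classes; the class names printed at runtime are implementation details and may differ (function name made up):

        #import <Foundation/Foundation.h>

        void (^globalBlk)(void) = ^{ NSLog(@"no captured variables"); }; // data area

        void storageExample(void) {
            int val = 10;
            void (^blk)(void) = [^{ NSLog(@"%d", val); } copy]; // explicitly copied to the heap
            NSLog(@"%@", [(id)globalBlk class]);   // typically __NSGlobalBlock__
            NSLog(@"%@", [(id)blk class]);         // typically __NSMallocBlock__ after the copy
        }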
    • Intercepted object
      • When a Block captures an Objective-C object, the __main_block_desc_0 structure gains the member variables copy and dispose, which are pointers to the __main_block_copy_0 and __main_block_dispose_0 functions
        • __main_block_copy_0 calls _Block_object_assign, which is equivalent to the retain instance method: it assigns the object to the object-type member variable of the Block structure and holds it (it is called when the Block is copied from the stack to the heap).
        • __main_block_dispose_0 calls _Block_object_dispose, which is equivalent to the release instance method: it releases the object assigned to the object-type member variable of the Block structure (it is called when the Block on the heap is discarded).
      • When is a block on the stack copied to the heap?
        • When the copy instance method of the Block is called
        • When the Block is returned as the return value of a function
        • When the Block is assigned to a variable of id or Block type qualified with __strong, or to a Block-type member variable
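      A sketch of the “returned from a function” case: ARC copies both the Block and its __block variable from the stack to the heap so they outlive the call (the typedef and function name are made up):

        typedef long (^Counter)(void);

        Counter makeCounter(void) {
            __block long count = 0;
            // Returning the Block forces it (and the __block variable) to be copied
            // from the stack to the heap, so both survive after the function returns.
            return ^{ return ++count; };
        }

        // Counter c = makeCounter();
        // c();   // 1
        // c();   // 2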
  • Block circular reference
    • __weak and __unsafe_unretained can both be used to break the cycle, but note that an __unsafe_unretained variable is not set to nil when the object is discarded
    • __block can also be used (by setting the captured __block variable to nil inside the Block), but then the Block must actually be executed, otherwise the circular reference remains (see the sketch below)
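    A typical way to break the cycle with __weak; the class MyObject and the property blk are made-up names for the example:

      #import <Foundation/Foundation.h>

      @interface MyObject : NSObject
      @property (nonatomic, copy) void (^blk)(void);
      @end

      @implementation MyObject
      - (void)setup {
          __weak typeof(self) weakSelf = self;
          self.blk = ^{
              // Capturing self directly would create the cycle self -> blk -> self.
              // The weak reference breaks it; weakSelf may already be nil when the Block runs.
              NSLog(@"%@", weakSelf);
          };
      }
      @end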

GCD

  • Dispatch Queue
    • A Dispatch Queue is a queue of processing waiting to be executed. Using APIs such as the dispatch_async function, you write the processing you want to perform in Block syntax and append it to a Dispatch Queue; the queue executes the appended processing in FIFO order.
    • A Serial Dispatch Queue waits for the currently executing task to finish before starting the next one
    • A Concurrent Dispatch Queue does not wait for the currently executing tasks to finish before starting the next one (see the sketch after this list)
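    A minimal sketch of the two queue types; the queue labels and the function name are made up:

      #import <Foundation/Foundation.h>

      void queueExample(void) {
          dispatch_queue_t serialQueue =
              dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
          dispatch_queue_t concurrentQueue =
              dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

          dispatch_async(serialQueue, ^{ NSLog(@"task 1"); });     // runs first
          dispatch_async(serialQueue, ^{ NSLog(@"task 2"); });     // waits for task 1 (FIFO)

          dispatch_async(concurrentQueue, ^{ NSLog(@"task A"); }); // may run in parallel
          dispatch_async(concurrentQueue, ^{ NSLog(@"task B"); }); // does not wait for task A
      }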
  • GCD API
    • dispatch_after
      • The dispatch_after function does not execute the processing after the specified time; rather, it appends the processing to the Dispatch Queue at the specified time.
      • The Main Dispatch Queue is drained in the main thread’s RunLoop. If that RunLoop runs, for example, every 1/60 second, a Block scheduled 3 seconds ahead executes at the earliest after 3 seconds and at the latest after roughly 3 seconds + 1/60 second. It is delayed further if many tasks are appended to the Main Dispatch Queue or if the main thread’s own processing is delayed.
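      For example, appending a Block to the Main Dispatch Queue roughly 3 seconds from now (function name made up):

        #import <Foundation/Foundation.h>

        void afterExample(void) {
            dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(3 * NSEC_PER_SEC));
            dispatch_after(when, dispatch_get_main_queue(), ^{
                // Appended to the main queue after about 3 seconds; when it actually runs
                // still depends on the main thread's RunLoop, as described above.
                NSLog(@"about 3 seconds later");
            });
        }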
    • Dispatch Group
      • When working with multiple queues we often want to perform some final processing after several tasks have all finished. This is what a Dispatch Group is for: it monitors the completion of the appended tasks and, once they have all finished, appends the finishing processing to a Dispatch Queue.
      • The dispatch_group_notify function takes the Dispatch Group to be monitored as its first argument and, once everything in that group has finished, appends the Block passed as the third argument to the Dispatch Queue passed as the second argument.
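      A minimal Dispatch Group sketch following the description above (queue label and function name made up):

        #import <Foundation/Foundation.h>

        void groupExample(void) {
            dispatch_queue_t queue =
                dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
            dispatch_group_t group = dispatch_group_create();

            dispatch_group_async(group, queue, ^{ NSLog(@"task 0"); });
            dispatch_group_async(group, queue, ^{ NSLog(@"task 1"); });
            dispatch_group_async(group, queue, ^{ NSLog(@"task 2"); });

            // Runs on the main queue only after all three tasks have finished.
            dispatch_group_notify(group, dispatch_get_main_queue(), ^{
                NSLog(@"done");
            });
        }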
    • dispatch_barrier_async
      • The dispatch_barrier_async function waits until all the processing already appended to the concurrent queue has finished, then executes the specified processing on its own; once that barrier processing finishes, the queue resumes concurrent execution of the processing appended after it. (Use it with a concurrent queue created by dispatch_queue_create.)
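      A reader/writer sketch: the reads may run concurrently, the write runs alone between them (queue label and function name made up):

        #import <Foundation/Foundation.h>

        void barrierExample(void) {
            dispatch_queue_t queue =
                dispatch_queue_create("com.example.readwrite", DISPATCH_QUEUE_CONCURRENT);

            dispatch_async(queue, ^{ NSLog(@"read 1"); });
            dispatch_async(queue, ^{ NSLog(@"read 2"); });

            // Waits for read 1 and read 2 to finish, runs by itself, and only then
            // are read 3 and read 4 allowed to start.
            dispatch_barrier_async(queue, ^{ NSLog(@"write"); });

            dispatch_async(queue, ^{ NSLog(@"read 3"); });
            dispatch_async(queue, ^{ NSLog(@"read 4"); });
        }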
    • dispatch_sync && dispatch_async
      • dispatch_async means “asynchronous”: the specified Block is appended to the specified Dispatch Queue and the call does not wait for it to finish.
      • dispatch_sync means “synchronous”: it also appends the Block to the specified Dispatch Queue, but the call waits until that Block has finished executing.
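      The difference in one sketch; note the deadlock warning in the comments (function name made up):

        #import <Foundation/Foundation.h>

        void syncAsyncExample(void) {
            dispatch_queue_t queue =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

            dispatch_async(queue, ^{ NSLog(@"async block"); });
            NSLog(@"may print before the async block");    // the caller did not wait

            dispatch_sync(queue, ^{ NSLog(@"sync block"); });
            NSLog(@"always prints after the sync block");  // the caller waited

            // Caution: dispatch_sync onto the queue you are already running on,
            // e.g. dispatch_sync(dispatch_get_main_queue(), ...) from the main thread,
            // deadlocks because the caller waits for a Block that can never start.
        }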
    • dispatch_apply
      • The dispatch_apply function combines the behaviour of dispatch_sync and a Dispatch Group: it appends the specified Block to the specified Dispatch Queue the specified number of times and waits until all of those executions have finished.
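      A small dispatch_apply sketch (function name made up):

        #import <Foundation/Foundation.h>

        void applyExample(void) {
            dispatch_queue_t queue =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

            // Appends the Block to the queue 10 times and waits for every iteration.
            dispatch_apply(10, queue, ^(size_t index) {
                NSLog(@"%zu", index);   // order is not guaranteed on a concurrent queue
            });
            NSLog(@"done");             // runs only after all 10 iterations have finished
        }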
    • Dispatch Semaphore
      • A Dispatch Semaphore is a counting semaphore for multithreaded programming: when the count is 0 the caller waits; when the count is 1 or greater, the count is decremented by 1 and the caller continues without waiting (see the sketch after this list)
      • dispatch_semaphore_create(n): n is the initial value of the count, i.e. the number of threads that may access the shared resource at the same time
      • dispatch_semaphore_wait waits until the Dispatch Semaphore’s count is greater than or equal to 1, then decrements it by 1 and returns
      • dispatch_semaphore_signal increments the Dispatch Semaphore’s count by 1; call it at the end of the exclusive-control section so that a waiting thread can proceed
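      A sketch of the classic use, letting only one thread at a time update a shared array (function name made up):

        #import <Foundation/Foundation.h>

        void semaphoreExample(void) {
            // Initial count 1: at most one thread may be inside the critical section.
            dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
            NSMutableArray *array = [NSMutableArray array];
            dispatch_queue_t queue =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

            for (int i = 0; i < 100; i++) {
                dispatch_async(queue, ^{
                    // Wait until the count is >= 1, then decrement it and continue.
                    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
                    [array addObject:@(i)];
                    // Increment the count again so that a waiting thread can proceed.
                    dispatch_semaphore_signal(semaphore);
                });
            }
        }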