When it comes to multithreading, it is necessary to understand the related technical concepts first. This article covers the CPU, processes, threads, asynchrony, queues, and so on. The coverage may be incomplete or insufficient and will be supplemented later. I have recently been developing in Swift, so Swift 4 is used for all code examples in this article.

CPU

What is the CPU

The Central Processing Unit (CPU) is one of the main devices in a computer. Its main function is to interpret computer instructions and process data in computer software.

Computer programmability mainly refers to the programming of the central processing unit.

The central processing unit (CPU), internal memory and input/output devices are the three core components of a modern computer.

Before the 1970s, the CPU was made up of many independent units. Later, CPUs built from integrated circuits appeared. These highly integrated components are called microprocessors: even the most complex CPU circuitry can be condensed into a single small but powerful chip.

The CPU is mainly composed of three parts: the arithmetic unit, the controller, and the registers. As the names suggest, the arithmetic unit performs calculations, the controller issues the information required by each CPU instruction, and the registers hold temporary data for operations or instructions so that everything runs at a higher speed. The CPU has four major functions: processing instructions, executing operations, controlling timing, and processing data. Metaphorically speaking, the CPU is like our brain, helping us complete all kinds of activities. Without the CPU, the computer would be a pile of junk and would not work at all.

Multi-core CPU vs. multiple CPUs

As an answer on Zhihu puts it: architectures can be ever-changing; being demand-oriented and weighing every factor is king.

Here is a quick example. Suppose we are designing the processor part of a computer and have two choices: multiple single-core CPUs, or a single multi-core CPU. If we choose multiple single-core CPUs, each CPU needs its own relatively independent circuitry and its own cache, and the CPUs communicate with each other over the bus on the board. If we run a multithreaded program (the typical scenario) and ignore hyper-threading, each thread runs on a separate CPU, all collaboration between threads goes over the bus, and shared data probably lives in several caches at once. In that case the bus overhead is high. What do we do? With so many caches, even if we do not mind wasting storage capacity, how do we guarantee consistency? And even if we solve that, the extra chips take up board space and make layout and routing harder. How do we fix that? If instead we choose a single multi-core CPU, we only need one chipset and one set of memory, the cores communicate over the chip's internal bus, and they share memory.

On this architecture, a multithreaded program communicates between threads faster than in the previous case. In the final implementation, the board footprint is smaller and layout and routing pressure is lower. It looks like the single multi-core CPU wins. But what if you need to run several large programs at the same time? Suppose two large programs, each with multiple threads and a nearly full cache, take turns on the CPU; just swapping instructions and data on every switch costs a lot of work. Even so, most of the computers we use are single-CPU, multi-core. For example, the Dell T3600 we use has an Intel Xeon E5-1650 with 6 cores, which appear as 12 logical cores. A few power users who need heavier multitasking and concurrency build machines with multiple multi-core CPUs, like the Mac Pro, which has two.

A core can only execute one thread at a time, so a single-core CPU can only achieve concurrency, not parallelism. With two threads on a dual-core CPU, the two threads run in parallel; with three threads, they are again merely concurrent. The difference between concurrency and parallelism is discussed below.

Process

What is a process

According to Wikipedia, a process is an instance of a running program on a computer. The process was once the basic operating unit of time-sharing systems. In process-oriented systems (such as early UNIX and Linux 2.4 and earlier), a process is the basic execution entity of a program; in thread-oriented systems (such as most modern operating systems, and Linux 2.6 and later), the process itself is not the basic unit of execution but a container for threads.

The program itself is only a description of the instructions and data and their organization; the process is the actual running instance of the program (those instructions and data).

Several processes may be associated with the same program, and each process may run independently, either synchronously (sequentially) or asynchronously (in parallel). Modern computer systems can load multiple programs into memory as processes at the same time, and use time sharing (or TDM) to give the impression of simultaneous (parallel) running on a single processor.

Similarly, in an operating system or computer architecture that uses multithreading (where each thread represents an independent execution context within a process), parallel threads of the same program can actually run simultaneously (on different cpus) on a multi-CPU host or network.

In iOS, the running entity of an APP represents a process. A process has independent memory space, system resources, ports, and so on. Multiple threads can be generated in a process, and these threads can share resources in the process.

By analogy, the CPU is like a factory, a process is a workshop, and threads are the workers in that workshop. The workshop's space is shared by its workers, and every worker can enter every room. In the same way, a process's memory space is shared, and every thread can use that shared memory.

Interprocess communication

I have collected some information. There are about 8 communication modes between processes in iOS, which may not be complete. I will add them later.

The iOS system is relatively closed. Each App runs in its own sandbox and can only read the contents of the folder that iOS created for it on the device; it cannot freely access the contents of other Apps' sandboxes. Therefore, the ways Apps can communicate with each other in iOS are fairly fixed. The common App-to-App communication methods and their typical scenarios are summarized below.

  • 1. Port (Local socket)

NSMachPort: Foundation layer
CFMachPort: Core Foundation layer
Mach Ports: Mach kernel layer (both threads and processes can use it to communicate)

The general idea is that App1 listens on a local port (for example, port 1234) and App2 connects to that port to send and receive data. The limitation is that both App processes must stay alive and not be killed in the background. The awkward part is that iOS only gives an App about 600 seconds of background time for network communication before the App goes to sleep.

  • 2, URL Scheme

This is the most commonly used communication method between iOS Apps. App1 jumps to App2 via openURL and carries the desired parameters in the URL, similar to passing parameters in an HTTP GET request. Usage is also very simple: configure LSApplicationQueriesSchemes in the source App1's Info.plist to declare the target App2's scheme, then configure URL Types in the target App2's Info.plist to declare which URL Scheme the App responds to.
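As a minimal sketch (the scheme name "app2" and the query parameters are made up for illustration; replace them with the real target App's scheme; the delegate signature uses the Swift 4-era UIApplicationOpenURLOptionsKey):

// App1 side: open App2 and pass parameters GET-style in the URL
if let url = URL(string: "app2://product?id=123&from=app1"),
    UIApplication.shared.canOpenURL(url) {
    UIApplication.shared.open(url, options: [:], completionHandler: nil)
}

// App2 side (AppDelegate): receive the call and read the parameters
func application(_ app: UIApplication, open url: URL,
                 options: [UIApplicationOpenURLOptionsKey: Any]) -> Bool {
    print("Opened via scheme:", url.scheme ?? "", "query:", url.query ?? "")
    return true
}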

Typical usage scenarios are the share features of the various open-platform SDKs, such as sharing to WeChat Moments or Weibo, and payment scenarios, for example jumping from Didi to WeChat to pay.

  • 3, Keychain

The iOS Keychain is a secure storage container. It is essentially an SQLite database stored at /private/var/Keychains/keychain-2.db, with all of its data encrypted. It can be used to save sensitive information for different Apps, such as user names and passwords; iOS itself also uses the Keychain to store VPN credentials and Wi-Fi passwords. It is independent of each App's sandbox, so information in the Keychain remains even after the App is deleted.

Because of this security and independence from the sandbox, the Keychain is mainly used to store login and identity credentials and other sensitive information. That way, once a user has logged in to an App, they do not need to log in again even after deleting and reinstalling it.

A typical scenario of the Keychain used for inter-App communication is also related to App login: the unified account login platform. For multiple Apps that use the same account platform, as long as the user logs in to one of them, the others can log in automatically without the user entering the account and password again and again. Open platforms generally provide a login SDK that writes login-related information into the Keychain, so any App that integrates the SDK gets unified account login.

To do this, you can use the KeychainItemWrapper class provided by Apple's sample code together with Keychain Access Groups to share Keychain data between applications.

import Security

// MARK: - Save and read UUID
class func saveUUIDToKeyChain() {
    let keychainItem = KeychainItemWrapper(account: "Identfier", service: "AppName", accessGroup: nil)
    let saved = keychainItem[kSecAttrGeneric as String] as? String
    if saved == nil || saved == "" {
        keychainItem[kSecAttrGeneric as String] = self.getUUIDString()
    }
}

class func readUUIDFromKeyChain() -> String {
    let keychainItem = KeychainItemWrapper(account: "Identfier", service: "AppName", accessGroup: nil)
    let UUID = keychainItem[kSecAttrGeneric as String] as? String
    return UUID ?? ""
}

class func getUUIDString() -> String {
    // CF objects are memory-managed automatically in Swift, so no CFRelease is needed
    let uuidRef = CFUUIDCreate(kCFAllocatorDefault)
    let strRef = CFUUIDCreateString(kCFAllocatorDefault, uuidRef)
    let uuidString = (strRef! as String).replacingOccurrences(of: "-", with: "")
    return uuidString
}

  • 4, UIPasteboard

As the name implies, UIPasteboard provides clipboard functionality. When you long-press in native iOS controls such as UITextView, UITextField, and UIWebView, options like copy, cut, select, select all, and paste appear; these are implemented with the system clipboard. Every App can access the system clipboard, so data can be passed between Apps through it.

// Get the system pasteboard
let pasteboard = UIPasteboard.general
// Write the tao password to the pasteboard
pasteboard.string = "¥rkUy0Mz97CV¥"
// When Taobao comes from the background to the foreground, read the tao password and display it
let alert = UIAlertView.init(title: "Tao Password", message: "Found a treasure, password rkUy0Mz97CV.", delegate: self, cancelButtonTitle: "Cancel", otherButtonTitles: "View")
alert.show()


The typical usage scenario of UIPasteboard is link sharing between Taobao and WeChat/QQ. Because of the corporate strategies of Tencent and Alibaba, Tencent blocks Taobao links in both WeChat and QQ. So what if a Taobao user wants to share a product with friends through QQ or WeChat? Alibaba's engineers made clever use of the clipboard. First, the Taobao App turns the product link into a custom "tao password", guides the user to copy it, and has them paste it into a QQ conversation. When the friend receives the message and later opens their Taobao App, Taobao checks the system clipboard every time it comes from the background to the foreground; if it finds a tao password, it parses it and jumps to the corresponding product page.

Likewise, a WeChat friend who copies the tao password and then opens Taobao will see the product link shared by the friend.

  • 5, UIDocumentInteractionController

UIDocumentInteractionController is mainly used to share documents between Apps on the device, and to preview, print, email, and copy documents. It is very simple to use.

First, call its only class method, interactionControllerWithURL: (the init(url:) initializer in Swift), passing in the NSURL of the file you want to share, to initialize an instance. Then set its UIDocumentInteractionControllerDelegate, and finally present the menu or the preview window.

let url = Bundle.main.url(forResource: "test", withExtension: "pdf")
if url != nil {
    let documentInteractionController = UIDocumentInteractionController.init(url: url!)
    documentInteractionController.delegate = self
    documentInteractionController.presentOpenInMenu(from: self.view.bounds, in: self.view, animated: true)
}

The effect is shown below.

  • 6, AirDrop

AirDrop is an ad hoc networking feature in Apple's macOS and iOS operating systems. Introduced in Mac OS X Lion (10.7) and iOS 7, AirDrop allows file transfer between supported Macintosh computers and iOS devices, with no need for email or mass storage devices.

Before OS X Yosemite (10.10), the AirDrop protocol in OS X was different from the one in iOS, so the two could not transfer to each other [2]. OS X Yosemite and later support the iOS AirDrop protocol (using Wi-Fi and Bluetooth), which works between a Mac and an iOS device and between two Mac computers from 2012 or later. [3][4] Two Macs from 2012 or earlier can still transfer to each other using the old Wi-Fi-only mode of the legacy AirDrop protocol. [4]

There is no limit on the size of files AirDrop can transfer; Apple users have reported transferring just under 10 GB of video over AirDrop.

iOS does not provide a direct programming interface for AirDrop; instead, you invoke AirDrop through UIActivityViewController to exchange data.

  • 7, UIActivityViewController

The UIActivityViewController class is a standard ViewController that provides several standard services, such as copying items to the clipboard, sharing content to social networking sites, and sending data via Messages. In the iOS 7 SDK, the UIActivityViewController class provides a built-in AirDrop feature.

If you have a batch of objects that need to be shared via AirDrop, all you need to do is initialize the UIActivityViewController from an array of objects and display it on screen:

UIActivityViewController *controller = [[UIActivityViewController alloc] initWithActivityItems:objectsToShare applicationActivities:nil]; 
[self presentViewController:controller animated:YES completion:nil]; 

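Since the rest of this article uses Swift 4, here is the same call in Swift; objectsToShare is assumed to be whatever array of items you want to share:

let objectsToShare: [Any] = ["Hello AirDrop"]
let controller = UIActivityViewController(activityItems: objectsToShare, applicationActivities: nil)
self.present(controller, animated: true, completion: nil)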

The renderings are as follows

  • 8, App Groups

App Groups give Apps developed by the same development team (including Apps and their Extensions) a shared read-write container for data sharing. Multiple applications from the same team can share data directly, which greatly improves the user experience.

For implementation details, see "Data sharing between Apps — App Group configuration".
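As a minimal sketch (the group identifier group.com.example.shared is a placeholder; use the App Group ID configured in your developer account), two Apps in the same App Group can share data through UserDefaults or the shared container directory:

// App A: write a value into the shared suite
let shared = UserDefaults(suiteName: "group.com.example.shared")
shared?.set("some-login-token", forKey: "token")

// App B (same App Group): read it back
let token = UserDefaults(suiteName: "group.com.example.shared")?.string(forKey: "token")
print("Shared token:", token ?? "none")

// Files can be shared through the group container as well
let containerURL = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.shared")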

Thread

What is a thread

A thread is the smallest unit of execution that an operating system can schedule. It is contained within a process and is the actual unit of execution within the process.

A thread is a single sequential flow of control in a process; multiple threads can run concurrently in one process, each performing a different task. In Unix System V and SunOS they are also called lightweight processes, although that term more commonly refers to kernel threads, while user-space threads are simply called threads.

When talking about threads, we cannot avoid the notion of a task. Colloquially, a task is just a piece of work, a chunk of code; a thread is what actually executes it.

A thread is an independent path of code execution, that is, the smallest branch of an execution path. In iOS, threads are implemented on top of the POSIX Threads API, also known as pthreads.

Hyperthreading technology

Hyper-threading uses special hardware to present one physical core as two logical cores, so that a single processor can exploit thread-level parallelism. It is compatible with multithreaded operating systems and software, reduces CPU idle time, and improves throughput. With hyper-threading, applications can use different parts of the chip at the same time.

Enabling hyper-threading is not always better than leaving it off. The number of ALU and FPU units in each CPU core is limited, and one purpose of hyper-threading is to keep a second thread running when the first thread leaves some units idle. But if one thread already mixes plenty of integer and floating-point operations, the core's units are not idle much, and cramming in another thread just makes resources tight: the two threads compete and slow each other down. Hyper-threading also helps when one thread stalls on a cache miss and the other keeps running, but if both threads miss, both end up waiting. Even then it is not as effective as running multiple warps in one SM on a GPU. So if your program is single-threaded, turn off hyper-threading so nothing steals your resources; if it is multithreaded and each thread does only a modest amount of computation, hyper-threading is more useful.

Interthread communication

Communication between threads typically means: one thread passes data to another thread; or, after a particular task has finished on one thread, the work continues on another thread.

The following mainly introduces performing time-consuming work on another thread and refreshing the UI on the main thread, which is a very common pattern.

  • 1. NSThread Communication between threads

NSThread is Thread in Swift.

NSThread is Apple's fully object-oriented wrapper around threads, so you can manipulate thread objects directly, which is intuitive and convenient. However, its life cycle still has to be managed manually, so in practice it is used less often; GCD and NSOperation are used more frequently.
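Because it is fully object-oriented, you can also create and start a thread object directly. A minimal sketch (the block-based initializer and detachNewThread require iOS 10 or later; the thread name is arbitrary):

// Create a thread object explicitly and start it yourself
let worker = Thread {
    print("Working on", Thread.current)
}
worker.name = "com.jk.worker"
worker.start()

// Or detach a thread without keeping a reference to it
Thread.detachNewThread {
    print("Detached thread", Thread.current)
}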

Of course, NSThread can also be used for communication between threads, for example downloading an image and then displaying it: the time-consuming download runs on a background thread, and after it finishes we switch back to the main thread to show the image in the UI.

func onThread() {
    let urlStr = "http://tupian.aladd.net/2015/7/2941.jpg"
    self.performSelector(inBackground: #selector(downloadImg(_:)), with: urlStr)
}

@objc func downloadImg(_ urlStr: String) {
    // Print the current thread
    print("Download picture thread", Thread.current)
    // Build the image URL
    guard let url = URL.init(string: urlStr) else { return }
    // Download the image binary data
    guard let data = try? Data.init(contentsOf: url) else { return }
    // Create the image
    guard let img = UIImage.init(data: data) else { return }
    // Go back to the main thread to refresh the UI
    self.performSelector(onMainThread: #selector(downloadFinished(_:)), with: img, waitUntilDone: false)
}

@objc func downloadFinished(_ img: UIImage) {
    // Print the current thread
    print("Refresh the UI thread", Thread.current)
}

Download picture thread <NSThread: 0x1c4464a00>{number = 5, name = (null)}
Refresh the UI thread <NSThread: 0x1c007a400>{number = 1, name = main}

Some of you might wonder why a method on NSObject ends up running on an NSThread. In fact, performSelector(inBackground:with:) is NSObject's convenience wrapper around NSThread; set a breakpoint and you can verify it.

  • 2. GCD thread communication

Grand Central Dispatch (GCD) is a C-based concurrency framework proposed by Apple for multi-core parallel computing. GCD automatically makes use of more CPU cores and automatically manages the thread life cycle (thread creation, task scheduling, thread destruction, and so on). The programmer only needs to tell GCD what tasks to execute and how, without writing any thread-management code.

func onThread() {
    let urlStr = "http://tupian.aladd.net/2015/7/2941.jpg"

    let dsp = DispatchQueue.init(label: "com.jk.thread")
    dsp.async {
        self.downloadImg(urlStr)
    }
}

@objc func downloadImg(_ urlStr: String) {
    // Print the current thread
    print("Download picture thread", Thread.current)
    // Build the image URL
    guard let url = URL.init(string: urlStr) else { return }
    // Download the image binary data
    guard let data = try? Data.init(contentsOf: url) else { return }
    // Create the image
    guard let img = UIImage.init(data: data) else { return }
    // Go back to the main thread to refresh the UI
    DispatchQueue.main.async {
        self.downloadFinished(img)
    }
}

@objc func downloadFinished(_ img: UIImage) {
    // Print the current thread
    print("Refresh the UI thread", Thread.current)
}


Download picture thread <NSThread: 0x1c426b9c0>{number = 4, name = (null)}
Refresh the UI thread <NSThread: 0x1c0263140>{number = 1, name = main}

  • 3. Communication between NSOperation threads

NSOperation is Operation in Swift

NSOperation is a concurrency technology recommended by Apple. It provides some features that are not easy to implement with GCD and is simpler to use. NSOperation is an abstract class, meaning it should not be used directly; instead you use one of its subclasses. In Swift you can use BlockOperation or a custom subclass of Operation.

NSOperation is usually used together with NSOperationQueue. Any instance of an NSOperation subclass can be added to an NSOperationQueue; once added, the operation is executed automatically and asynchronously (note: asynchronously). If you call the start method instead of adding the operation to a queue, it executes on the current thread, as the sketch below shows.
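A small sketch of that difference (an operation can be executed only once, so two separate operations are created here):

// 1. start(): a non-concurrent operation runs synchronously on the current thread
let direct = BlockOperation {
    print("start() runs on", Thread.current)
}
direct.start()

// 2. Adding to a queue: the queue schedules it asynchronously on another thread
let queue = OperationQueue()
queue.addOperation(BlockOperation {
    print("Queued operation runs on", Thread.current)
})

The image download example from before, rewritten with OperationQueue, looks like this: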

func onThread() {
    let urlStr = "http://tupian.aladd.net/2015/7/2941.jpg"

    let que = OperationQueue.init()
    que.addOperation {
        self.downloadImg(urlStr)
    }
}

@objc func downloadImg(_ urlStr: String) {
    // Print the current thread
    print("Download picture thread", Thread.current)
    // Build the image URL
    guard let url = URL.init(string: urlStr) else { return }
    // Download the image binary data
    guard let data = try? Data.init(contentsOf: url) else { return }
    // Create the image
    guard let img = UIImage.init(data: data) else { return }
    // Go back to the main thread to refresh the UI
    OperationQueue.main.addOperation {
        self.downloadFinished(img)
    }
}

@objc func downloadFinished(_ img: UIImage) {
    // Print the current thread
    print("Refresh the UI thread", Thread.current)
}

<NSThread: 0x1c4271d80>{number = 3, name = (null)}

When the OperationQueue object executes addOperation, it wraps the block in a BlockOperation object and performs the task asynchronously. Set a breakpoint and you can see the BlockOperation being executed.

The thread pool

A thread pool is a pattern of thread usage. Too many threads bring scheduling overhead, which hurts cache locality and overall performance. A thread pool maintains multiple threads and waits for a supervisor to assign tasks that can be executed concurrently. This avoids the cost of creating and destroying threads for short-lived tasks.

The thread pool works as follows: first, a number of threads are started and put to sleep; then, when a new request arrives, the pool wakes a sleeping thread to handle it; finally, when the request is finished, the thread goes back to sleep.

So how many threads can run at the same time is determined by how many threads the pool keeps.

  • 1, GCD

GCD has an underlying thread pool that holds the individual threads. As the word pool suggests, the threads in it are reusable, and a thread that has not been used for a while is destroyed. The pool is maintained automatically by the system and needs no manual management.

How many threads does the GCD thread pool keep?

@IBAction func onThread() {
    let dsp = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    for i in 0..<10000 {
        dsp.async {
            // Print the current thread
            print("\(i) Current thread", Thread.current)
            Thread.sleep(forTimeInterval: 5)
        }
    }
}


This code creates a concurrent queue and submits an asynchronous task 10,000 times in a loop, conceptually asking for 10,000 threads. Because async returns immediately without waiting for the task to finish, the tasks start printing as soon as threads become available. From the output, 64 lines are printed in each batch, which suggests the GCD thread pool allows about 64 threads. If you want more concurrent execution across multiple queues, you can use the open-source YYDispatchQueuePool.

  • 2, NSOperation

NSOperationQueue provides a thread-pool-like mechanism for concurrent multithreaded operations. It builds a thread pool; you add operation objects to the queue, and the pool allocates threads and calls each operation's main method to execute the task.

Let's write some code, set maxConcurrentOperationCount to 3, and see the effect.

@IBAction func onThread() {
    let opq = OperationQueue.init()
    opq.maxConcurrentOperationCount = 3
    for i in 0..<10 {
        opq.addOperation({
            // Print the current thread
            print("\(i) Current thread", Thread.current)
            Thread.sleep(forTimeInterval: 5)
        })
    }
}

1 Current thread <NSThread: 0x1c0274a80>{number = 4, name = (null)}
0 Current thread <NSThread: 0x1c4673900>{number = 6, name = (null)}
2 Current thread <NSThread: 0x1c4674040>{number = 7, name = (null)}

With the concurrency count set to 3, three threads print at a time; when those three threads are returned to the pool, the next three tasks print. Of course, if one of the threads finishes first, it is recycled first.

So how many threads can NSOperationQueue execute concurrently at most? Let's run this code.

@IBAction func onThread() {
    let opq = OperationQueue.init()
    opq.maxConcurrentOperationCount = 300
    for i in 0..<100 {
        opq.addOperation({
            // Print the current thread
            print("\(i) Current thread", Thread.current)
            Thread.sleep(forTimeInterval: 5)
        })
    }
}


You can see that, again, 64 threads run at once. NSOperationQueue exposes an interface for setting the number of concurrent operations, but the maximum number of concurrent threads is still 64.

Multithreaded synchronization

1. What is multithreaded synchronization

Synchronization means cooperating at an agreed pace, running in a predetermined order. Think of it as thread A and thread B working together: when A reaches a certain point it needs a result from B, so it stops and signals B to run; B does its work and hands the result to A; A then continues.

Thread synchronization is like a serial queue: tasks are executed one after another in order, which is inherently synchronized.

2. The purpose of multithreaded synchronization

Passing results: when A reaches a certain point it needs a result from B, so it stops and signals B to run; B does its work, hands the result to A, and A continues. For example, Xiao Ming and Xiao Li have three watermelons. They can cut them at the same time (concurrent); only after all the watermelons are cut are they put in the refrigerator to chill (a synchronization point); then Xiao Ming and Xiao Li eat the chilled watermelon (concurrent).

Resource contention: a way to resolve contention when multiple threads access the same resource at the same time, so that they can all operate on that resource safely. For example, thread A appends an item to array M, and thread B then works with M after the item has been added; this kind of thread synchronization is really communication between threads. Think of selling train tickets: multiple windows sell tickets (concurrent), the inventory is decremented after each sale (synchronized), and then the windows issue the tickets (concurrent).

3. Implementing multithreaded synchronization

Multithreaded synchronization can be implemented in many ways: DispatchSemaphore, NSLock, @synchronized, dispatch_barrier_async, addDependency, pthread_mutex_t, and so on. Below, each of these is used to implement the train-ticket destocking example.

  • DispatchSemaphore

A GCD semaphore can also solve the resource-contention problem; it supports signaling and waiting. Each signal() increments the semaphore by 1; each wait() decrements it by 1; if the semaphore is 0, wait() blocks until the semaphore becomes greater than 0.

To put it simply, there is only one pit in the bathroom. One person comes in and closes the door, while others line up. After this person opens the door and exits, another person can come in. Examples of code are shown below.

var tickets: [Int] = [Int]()

// Only one slot exists
let semp = DispatchSemaphore.init(value: 1)

@IBAction func onThread() {
    let que = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    // Beijing ticket window
    que.async {
        self.saleTicket()
    }
    // Shanghai ticket window
    que.async {
        self.saleTicket()
    }
}

func saleTicket() {
    while true {
        // Wait: close the door and do the work
        semp.wait()
        if tickets.count > 0 {
            print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
            tickets.removeLast()
            Thread.sleep(forTimeInterval: 0.2)
        } else {
            print("The tickets are sold out.")
            // Signal: open the door so other threads can enter
            semp.signal()
            break
        }
        // Signal: open the door so other threads can enter
        semp.signal()
    }
}

Remaining tickets … Ticket window <NSThread: 0x1c0472e40>{number = 6, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c027b540>{number = 4, name = (null)}
……
Remaining tickets 1 Ticket window <NSThread: 0x1c027b540>{number = 4, name = (null)}
The tickets are sold out.
The tickets are sold out.

Running without the semaphore crashes after a while, caused by multiple threads calling removeLast on the tickets array at the same time. That obviously does not meet our needs, so thread safety must be considered.

  • NSLock

A lock is the most common synchronization tool. A locked code section can only be accessed by one thread at a time: after thread A enters the locked code, thread B cannot enter until A finishes and unlocks it. Don't put too much extra code inside the lock, otherwise one thread does all the work while the others wait, and you lose the benefit of multithreading.

In Cocoa, NSLock implements a simple mutex and conforms to the NSLocking protocol. The implementation code is as follows:

var tickets: [Int] = [Int]()

// Create a lock
let lock = NSLock.init()

@IBAction func onThread() {
    let que = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    // Beijing ticket window
    que.async {
        self.saleTicket()
    }
    // Shanghai ticket window
    que.async {
        self.saleTicket()
    }
}

func saleTicket() {
    while true {
        // Close the door and do the work
        lock.lock()
        if tickets.count > 0 {
            print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
            tickets.removeLast()
            Thread.sleep(forTimeInterval: 0.2)
        } else {
            print("The tickets are sold out.")
            // Open the door so other tasks can run
            lock.unlock()
            break
        }
        // Open the door so other tasks can run
        lock.unlock()
    }
}

Remaining tickets … Ticket window <NSThread: 0x1c467d300>{number = 6, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c4862380>{number = 7, name = (null)}
……
Remaining tickets 1 Ticket window <NSThread: 0x1c4862380>{number = 7, name = (null)}
The tickets are sold out.
The tickets are sold out.

  • @synchronized

In Objective-C, you can use the @synchronized keyword on an object to lock and unlock automatically. Swift, however, has no equivalent; @synchronized does not exist (at least for now) in Swift. Behind the scenes, @synchronized simply calls the runtime functions objc_sync_enter and objc_sync_exit, which we can call directly.
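Since those two runtime calls are all @synchronized does, you can wrap them in a small helper to get similar syntax in Swift. A minimal sketch (the helper name synchronized is our own, not a standard API):

import Foundation

// A rough Swift stand-in for Objective-C's @synchronized
func synchronized<T>(_ lock: AnyObject, _ body: () throws -> T) rethrows -> T {
    objc_sync_enter(lock)
    defer { objc_sync_exit(lock) }
    return try body()
}

// Usage: protect the shared tickets array
// synchronized(self) {
//     if self.tickets.count > 0 { self.tickets.removeLast() }
// }

The ticket example below calls objc_sync_enter and objc_sync_exit directly: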

var tickets: [Int] = [Int]()

@IBAction func onThread() {
    let que = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    // Beijing ticket window
    que.async {
        self.saleTicket()
    }
    // Shanghai ticket window
    que.async {
        self.saleTicket()
    }
}

func saleTicket() {
    while true {
        // Lock: close the door and do the work
        objc_sync_enter(self)
        if tickets.count > 0 {
            print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
            tickets.removeLast()
            Thread.sleep(forTimeInterval: 0.2)
        } else {
            print("The tickets are sold out.")
            // Open the door so other tasks can run
            objc_sync_exit(self)
            break
        }
        // Open the door so other tasks can run
        objc_sync_exit(self)
    }
}

Remaining tickets … Ticket window <NSThread: 0x1c04697c0>{number = 6, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c44706c0>{number = 4, name = (null)}
……
Remaining tickets 1 Ticket window <NSThread: 0x1c44706c0>{number = 4, name = (null)}
The tickets are sold out.
The tickets are sold out.

  • GCD fence method: dispatch_barrier_async

Sometimes we need two groups of operations to run asynchronously, and the second group may only start after the first group has finished. We need a fence to separate the two groups (each of which can contain one or more tasks); dispatch_barrier_async (flags: .barrier in Swift) provides that fence. The barrier task waits until all tasks previously appended to the concurrent queue have finished, then runs by itself; once the barrier task finishes, the queue returns to normal and tasks appended afterwards execute concurrently as usual.

var tickets: [Int] = [Int]()

@IBAction func onThread() {
    let que = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    for _ in 0..<51 {
        // Beijing ticket window
        que.async {
            self.saleTicket()
        }
        // Barrier: destock synchronously
        que.async(flags: .barrier) {
            if self.tickets.count > 0 {
                self.tickets.removeLast()
            }
        }
        // Shanghai ticket window
        que.async {
            self.saleTicket()
        }
        // Barrier: destock synchronously
        que.async(flags: .barrier) {
            if self.tickets.count > 0 {
                self.tickets.removeLast()
            }
        }
    }
}

func saleTicket() {
    if tickets.count > 0 {
        print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
        Thread.sleep(forTimeInterval: 0.2)
    } else {
        print("The tickets are sold out.")
    }
}

Remaining tickets … Ticket window <NSThread: 0x1c0463c40>{number = 3, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c0463c40>{number = 3, name = (null)}
……
Remaining tickets … Ticket window <NSThread: 0x1c0670100>{number = 6, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c0670100>{number = 6, name = (null)}
……
Remaining tickets 1 Ticket window <NSThread: 0x1c0463c40>{number = 3, name = (null)}
The tickets are sold out.
The tickets are sold out.

  • AddDependency (Operation dependency)

One of the most useful things about NSOperation and NSOperationQueue is the ability to add dependencies between operations. With dependencies we can easily control the order in which operations execute. Multithreaded synchronization implemented with operation dependencies looks like this:

var tickets: [Int] = [Int]()

@IBAction func onThread() {
    let que = OperationQueue.init()
    // Limit the queue to one operation at a time
    que.maxConcurrentOperationCount = 1
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    for _ in 0..<51 {
        // addDependency method: destock synchronously
        let sync1 = BlockOperation.init(block: {
            if self.tickets.count > 0 {
                self.tickets.removeLast()
            }
        })
        // Beijing ticket window
        let bj = BlockOperation.init(block: {
            self.saleTicket()
        })
        bj.addDependency(sync1)
        let sync2 = BlockOperation.init(block: {
            if self.tickets.count > 0 {
                self.tickets.removeLast()
            }
        })
        // Shanghai ticket window
        let sh = BlockOperation.init(block: {
            self.saleTicket()
        })
        sh.addDependency(sync2)
        que.addOperation(sync1)
        que.addOperation(bj)
        que.addOperation(sync2)
        que.addOperation(sh)
    }
}

func saleTicket() {
    if tickets.count > 0 {
        print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
        Thread.sleep(forTimeInterval: 0.2)
    } else {
        print("The tickets are sold out.")
    }
}

Remaining tickets … Ticket window <NSThread: 0x1c42672c0>{number = 4, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c06731c0>{number = 5, name = (null)}
……
Remaining tickets … Ticket window <NSThread: 0x1c42672c0>{number = 4, name = (null)}
……
The tickets are sold out.
The tickets are sold out.

  • Use POSIX mutex

POSIX mutexes are easy to use in many programs. To create a mutex, declare and initialize a pthread_mutex_t structure; to lock and unlock it, use the pthread_mutex_lock and pthread_mutex_unlock functions. (Listing 4-2 of Apple's Threading Programming Guide shows the basic code for initializing and using a POSIX mutex.) When you are done with the lock, call pthread_mutex_destroy to release its data structure.

var tickets: [Int] = [Int]()

// Create the lock
var lock = pthread_mutex_t.init()

@IBAction func onThread() {
    let que = DispatchQueue.init(label: "com.jk.thread", attributes: .concurrent)
    mutex()
    // Generate 100 tickets
    for i in 0..<100 {
        tickets.append(i)
    }
    // Beijing ticket window
    que.async {
        self.saleTicket()
    }
    // Shanghai ticket window
    que.async {
        self.saleTicket()
    }
}

func mutex() {
    // Set the attributes
    var attr: pthread_mutexattr_t = pthread_mutexattr_t()
    pthread_mutexattr_init(&attr)
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)
    let err = pthread_mutex_init(&self.lock, &attr)
    pthread_mutexattr_destroy(&attr)
    
    switch err {
    case 0:
        // Success
        break
        
    case EAGAIN:
        fatalError("Could not create mutex: EAGAIN (The system temporarily lacks the resources to create another mutex.)")
        
    case EINVAL:
        fatalError("Could not create mutex: invalid attributes")
        
    case ENOMEM:
        fatalError("Could not create mutex: no memory")
        
    default:
        fatalError("Could not create mutex, unspecified error \(err)")
    }
}


func saleTicket() {
    while true {
        // Lock: close the door and do the work
        pthread_mutex_lock(&lock)
        if tickets.count > 0 {
            print("Remaining tickets", tickets.count, "Ticket window", Thread.current)
            tickets.removeLast()
            Thread.sleep(forTimeInterval: 0.2)
        } else {
            print("The tickets are sold out.")
            // Unlock so other tasks can run
            pthread_mutex_unlock(&lock)
            break
        }
        // Unlock so other tasks can run
        pthread_mutex_unlock(&lock)
    }
}

deinit {
    pthread_mutex_destroy(&lock)
}

Remaining tickets … Ticket window <NSThread: 0x1c446f7c0>{number = 5, name = (null)}
Remaining tickets … Ticket window <NSThread: 0x1c0472900>{number = 6, name = (null)}
……
Remaining tickets … Ticket window <NSThread: 0x1c446f7c0>{number = 5, name = (null)}
……
The tickets are sold out.
The tickets are sold out.

As you can see, pthread_mutex_t also achieves multithreaded synchronization. There are of course many other kinds of locks, such as NSRecursiveLock, NSConditionLock, NSDistributedLock, and OSSpinLock, which will not be implemented here one by one.

Asynchrony and queue

What is asynchrony

The main difference between a synchronous and an asynchronous operation is whether it waits for the operation to complete, that is, whether it blocks the current thread. A synchronous operation waits for the operation to finish before executing the code after it; an asynchronous operation does the opposite and returns immediately after being invoked, without waiting for the result.

                                  Synchronous   Asynchronous
Blocks the current thread         Yes           No
Waits for the task to complete    Yes           No
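A minimal sketch of the table above (the queue label is arbitrary): sync blocks the caller until the block returns, async returns immediately:

let queue = DispatchQueue(label: "com.jk.demo")

print("before sync")
queue.sync {
    print("sync task")      // the caller waits for this block to finish
}
print("after sync")         // always printed after "sync task"

print("before async")
queue.async {
    print("async task")     // runs later on the queue's thread
}
print("after async")        // usually printed before "async task"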

What is a queue

  • Queue meaning

According to Wikipedia, a queue is a linear, first-in-first-out (FIFO) list, commonly implemented with a linked list or an array. A queue only allows insertion at the back end (the rear) and deletion at the front end (the front). Queue operations are similar to a stack's, except that a queue only allows new data to be added at the back.

  • Serial and concurrent queues

Serial queues can only execute one task at a time, whereas concurrent queues allow multiple tasks to execute simultaneously. iOS uses these queues to schedule tasks: it creates and destroys threads dynamically according to the scheduled tasks and the current system load, with no manual management required.

Note that "concurrent execution" of multiple tasks does not mean they run at exactly the same instant; the CPU switches between them so quickly that it merely feels simultaneous.

A recurring point of confusion: a serial queue executes one task at a time, in order, first in first out, which is easy to accept. A concurrent queue is also first in first out, even though several tasks execute at once. Because the tasks run concurrently, the task that entered first does not necessarily finish first; but even if a later task finishes first, the queue still dequeues tasks in the order they entered. That is the nature of a queue.

Serial queue, synchronous execution: runs on the current thread; tasks execute one after another, in order.
Concurrent queue, synchronous execution: runs on the current thread; tasks execute one after another, in order.
Serial queue, asynchronous execution: opens one other thread; tasks execute one after another, in order.
Concurrent queue, asynchronous execution: opens multiple threads; multiple tasks execute concurrently.

Note that this describes how tasks in each kind of queue are executed; it is not an example of first-in, first-out ordering.
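A minimal sketch of the four combinations above, assumed to be run from the main thread (the queue labels are arbitrary):

let serial = DispatchQueue(label: "com.jk.serial")
let concurrent = DispatchQueue(label: "com.jk.concurrent", attributes: .concurrent)

// Serial + async: one extra thread, tasks finish strictly in order
for i in 0..<3 {
    serial.async { print("serial async", i, Thread.current) }
}

// Concurrent + async: several threads, tasks may finish out of order
for i in 0..<3 {
    concurrent.async { print("concurrent async", i, Thread.current) }
}

// Sync on either queue: the block runs on the current thread, one after another
for i in 0..<3 {
    concurrent.sync { print("concurrent sync", i, Thread.current) }
}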

  • Concurrency and parallelism

In the old single-CPU era, only one program could execute at any point in time. Then came multitasking: the computer could seemingly run multiple tasks or processes at the same time. It was not truly "the same point in time"; the tasks shared one CPU, and the operating system switched the CPU between them so that each task got a time slice to run. Multithreading is more challenging than multitasking, because threads execute within the same program and therefore read and write the same memory concurrently. This is never a problem in a single-threaded program, and some of these errors are also unlikely to show up on a single-CPU machine, because two threads never truly execute in parallel there. Modern computers with multi-core CPUs, however, mean that different threads really can be executed in parallel by different CPU cores.

The key to concurrency is having the ability to handle multiple tasks, not necessarily at the same instant. The key to parallelism is the ability to handle multiple tasks at the same instant. Concurrency is a capability: being able to deal with several tasks. Parallelism is a state: several tasks actually executing at once. So concurrency and parallelism are not the same concept and are not directly comparable. Concurrency includes parallelism, just as fruit includes watermelon: what is concurrent is not necessarily parallel, but what is parallel must be concurrent.

A CPU core can only handle one thread at a time. A single-core CPU with one thread simply runs serially (strictly, you need at least two threads to speak of concurrency or parallelism; this is just for ease of understanding). A single-core CPU with two threads is concurrent. A dual-core CPU with two threads is parallel. A dual-core CPU with four threads is concurrent.

Follow me

Welcome to follow my WeChat official account, Jackyshan: technical articles are published on WeChat first and pushed as soon as they are released.