Original article: ali-akhtar.medium.com/concurrency…

This is the first in a four-part series on concurrent programming in Swift. It covers the following topics:

  • Why concurrent programming
  • Concurrency versus parallelism
  • Queues and functions in GCD
  • Deadlock prevention
  • DispatchWorkItem

Why concurrency is needed

Suppose you are on the main thread and need data from the server. You request the data and wait for the response. In the meantime, your main thread does no UI-related work, which makes your application unresponsive: if the server takes 10 seconds to respond and the user taps a button during that time, the system does not react, which is a bad experience for the user.

If you could run both tasks at the same time (or nearly the same time), with one thread dedicated to user-interface work and other threads handling the time-consuming tasks, this situation would not arise.
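A minimal sketch of this idea, assuming hypothetical fetchDataFromServer and updateUI functions:

import Foundation

// Hypothetical stand-ins for the slow network call and the UI update
func fetchDataFromServer() -> String {
    Thread.sleep(forTimeInterval: 10)   // simulate the 10-second response
    return "response"
}

func updateUI(with data: String) {
    print("UI updated with: \(data)")
}

// Move the slow call off the main thread, then hop back to the
// main queue for UI work, so the interface stays responsive
DispatchQueue.global(qos: .userInitiated).async {
    let data = fetchDataFromServer()
    DispatchQueue.main.async {
        updateUI(with: data)
    }
}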

Concurrency

Concurrency means that an application can make progress on multiple tasks within the same period of time by splitting that time between them. A computer with only one CPU cannot run multiple tasks at the exact same instant, but it can work through multiple tasks over a short period (say, one second) by context switching.

Context switching means storing the state of a thread so that it can be restored and resume execution later. This allows multiple processes to share a single CPU and is a basic feature of any multitasking operating system.

Parallelism

Parallelism refers to multiple tasks happening at the same time without context switching.

In the case we discussed, if the work were executed in parallel, the main thread and the background thread would run their instructions on two different cores. This is very fast compared with the previous scenario, but it comes with an additional physical requirement: the CPU must have multiple cores.

As shown in the figure below, in the parallel case the two threads really execute at the same time; with concurrency, one thread executes while the other sleeps.

A little bit about threads

  1. On a single-core CPU, if you create 10 threads, they can only execute concurrently, via time slicing and context switching

  2. On a 10-core CPU, if you create 10 threads, they might execute as follows:

    1. All on one core, concurrently, via context switching
    2. One thread per core, in parallel
    3. Some of them concurrently and the rest in parallel
  3. On a single-core CPU, if you create 1000 threads, the CPU will be so busy context switching that it gets no real work done. As you can see, choosing the right number of threads is itself a challenge.

How does GCD perform concurrency/parallelism

GCD manages a shared thread pool behind the scenes and adds the optimal number of threads to it. With GCD, you add blocks of code or work items to queues, and GCD decides which thread to execute them on. GCD performs the work concurrently or in parallel, depending on the system’s hardware and current load.

Note: if you hand two tasks to GCD, you cannot be sure whether they will run concurrently or in parallel.

From now on, we will use the term concurrency to stand for concurrency/parallelism.
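For example, here is a minimal sketch: two blocks handed to the same global queue, where GCD alone decides which threads run them and how:

import Foundation

let queue = DispatchQueue.global()

// Whether these run concurrently or in parallel is up to GCD
queue.async { print("task 1 on \(Thread.current)") }
queue.async { print("task 2 on \(Thread.current)") }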

What are the responsibilities of developers using GCD

All you have to do is define the tasks you want to execute concurrently and add them to the appropriate dispatch queues. GCD takes care of creating the required threads and scheduling your tasks to run on them, which is pretty cool 😃

Dispatch queues

A dispatch queue is a C-based mechanism for executing custom tasks. Dispatch queues always dequeue and start tasks in the order they were added to the queue. Dispatch queues are themselves thread-safe, which means you can access them from multiple threads at the same time. Note that queues are not threads!

If you want to perform tasks concurrently with GCD, add them to an appropriate dispatch queue. GCD will pick up and execute tasks based on the queue’s configuration.

Serial queues

  1. A serial dispatch queue performs tasks one at a time, in the order they were added to the queue. If you add five tasks to a serial queue, GCD starts with the first task, and the second does not start until the first has finished (see the sketch after this list).
  2. Serial queues are typically used to synchronize access to a specific resource. Suppose you have two network calls that each take 10 seconds, so you decide to move them onto a background thread; if they both access the same resource and you need some synchronization, you can put those tasks on a serial queue.
  3. Serial execution means that only one thread is in use at any moment, but there is no guarantee that every task runs on the same thread.
  4. You can create as many serial queues as you need, and each queue operates concurrently with respect to all the others. In other words, if you create four serial queues, each queue performs only one task at a time, but up to four tasks can still execute simultaneously, one per queue.
  5. If two tasks access the same shared resource but run on different threads, either thread could modify the resource first, and you would need a lock to ensure the two tasks do not modify it at the same time. Adding both tasks to a serial dispatch queue instead guarantees that only one task modifies the shared resource at any given time. This queue-based synchronization is more efficient than locking, because a lock always requires an expensive kernel trap in both the contended and uncontended cases, whereas a dispatch queue works primarily in the application’s process space and only calls into the kernel when absolutely necessary.
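A minimal sketch of that ordering, with an arbitrary queue label and sleeps standing in for real work:

import Foundation

let serialQueue = DispatchQueue(label: "com.example.serial")

for i in 1...5 {
    serialQueue.async {
        print("task \(i) started")
        Thread.sleep(forTimeInterval: 0.1)   // simulate work
        print("task \(i) finished")
    }
}
// Prints started/finished pairs strictly in order: 1, 2, 3, 4, 5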

Concurrent queues

  1. Concurrent queues execute one or more tasks concurrently
  2. If you add four separate tasks to a concurrent queue, the tasks start in the order they were added. GCD picks up the first task and runs it for some period of time, then starts the second without waiting for the first to complete, and so on. This is ideal when you want work done in the background and don’t care whether those tasks also run at the same time as one another (see the sketch after this list).
  3. The tasks currently executing run on different threads that are managed by the dispatch queue.
  4. The number of tasks executing at any point in time is variable and depends on system conditions. If you create a concurrent queue and add four tasks, how many threads are created? There is no fixed answer: depending on system conditions, GCD may use anywhere from one to four threads.
  5. In GCD, there are two ways to run tasks concurrently: create a custom concurrent queue, or use a global concurrent queue.
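A minimal sketch of a custom concurrent queue (the label is arbitrary), where tasks start in order but may finish in any order:

import Foundation

let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)

for i in 1...4 {
    concurrentQueue.async {
        // Earlier tasks sleep longer, so later tasks can overtake them
        Thread.sleep(forTimeInterval: Double(5 - i) * 0.1)
        print("task \(i) finished")
    }
}
// Start order is 1, 2, 3, 4; finish order is typically 4, 3, 2, 1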

The difference between custom and global concurrent queues

As shown in the sketch below, we request the global concurrent queue twice and create two custom concurrent queues. Because the global queue is a concurrent queue shared by the entire system, the same queue is returned both times; custom concurrent queues are private, and a new queue is returned each time one is created.
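A minimal sketch of this comparison (the labels are arbitrary; === compares object identity):

import Foundation

let global1 = DispatchQueue.global(qos: .background)
let global2 = DispatchQueue.global(qos: .background)
print(global1 === global2)   // true: the same system-wide queue

let custom1 = DispatchQueue(label: "com.example.queue", attributes: .concurrent)
let custom2 = DispatchQueue(label: "com.example.queue", attributes: .concurrent)
print(custom1 === custom2)   // false: a new private queue each time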

There are four global concurrent queues with different priorities, but you do not specify the priority directly when obtaining a global queue. Instead, you specify a quality of service (QoS) class: user-interactive, user-initiated, utility, or background, where user-interactive has the highest priority and background the lowest. Below are suggestions for using each QoS class, followed by a sketch of how each queue is obtained.

  1. .userInteractive: tasks that must be performed immediately to keep the user experience smooth. Typically used for UI updates, event handling, and other low-latency work. Tasks of this class should be kept small.
  2. .userInitiated: tasks started by the user from the UI that can be executed asynchronously. Typically used when the user takes an action and waits for the result before continuing.
  3. .default: the default value, used for generic asynchronous tasks.
  4. .utility: long-running tasks, usually paired with a progress indicator, such as I/O or network requests.
  5. .background: tasks whose execution the user is not directly aware of, such as preloading data.
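For reference, the global queue for each QoS class is obtained like this:

import Foundation

let userInteractiveQueue = DispatchQueue.global(qos: .userInteractive)
let userInitiatedQueue   = DispatchQueue.global(qos: .userInitiated)
let defaultQueue         = DispatchQueue.global(qos: .default)
let utilityQueue         = DispatchQueue.global(qos: .utility)
let backgroundQueue      = DispatchQueue.global(qos: .background)

// Preloading data, for example, fits the .background class
backgroundQueue.async {
    print("preloading data")
}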

In contrast to global queues, custom queues let you do the following (a sketch follows the list):

  1. You can give a custom queue a label that is meaningful to you for debugging
  2. You can suspend and resume it: queue.suspend(), queue.resume()
  3. You can submit barrier tasks: queue.async(flags: .barrier) { ... }
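A minimal sketch of these three capabilities, with an arbitrary label:

import Foundation

// 1. A label that shows up while debugging
let queue = DispatchQueue(label: "com.example.custom", attributes: .concurrent)

// 2. Suspend: the queue stops starting new tasks until resume()
queue.suspend()
queue.async { print("runs only after resume()") }
queue.resume()

// 3. Barrier: waits for earlier tasks to finish, runs alone, then
// lets later tasks proceed; useful for writes to shared state
queue.async { print("reader 1") }
queue.async { print("reader 2") }
queue.async(flags: .barrier) { print("exclusive writer") }
queue.async { print("reader 3") }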

The main queue

  1. The main queue is a globally available serial queue that executes tasks on the application’s main thread
  2. This queue works with the application’s run loop to interleave queued tasks with the other event sources attached to the run loop. Because it runs on the application’s main thread, the main queue is often used as a key synchronization point for the application.

Synchronous and asynchronous

We’ve learned how tasks are executed serially or concurrently on queues. With GCD, you can also dispatch tasks synchronously or asynchronously.

In general, the synchronization function (sync) returns control to the caller after the task is complete. An async function returns control to the caller as soon as the function is called, rather than waiting for the task to complete.
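A minimal sketch of the synchronous case, with a short loop standing in for the time-consuming task:

import Foundation

DispatchQueue.global().sync {
    for i in 1...3 {
        print("time-consuming task, step \(i)")
    }
}
print("main thread free")   // printed only after the task completes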

As shown above, the time-consuming task runs on a concurrent global queue, yet the main thread is still busy, because dispatching the task synchronously from the main thread makes it wait until the task completes.
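And the asynchronous version of the same dispatch:

import Foundation

DispatchQueue.global().async {
    for i in 1...3 {
        print("time-consuming task, step \(i)")
    }
}
print("main thread continues")   // typically printed before the task’s output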

As shown above, when we dispatch the task asynchronously, control returns to the main thread immediately, so the main thread prints first while the queued task executes concurrently.

Deadlock prevention

In concurrent computing, a deadlock is a state in which each member of a group is waiting for another member (including, possibly, itself) to take action.
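A minimal sketch of the first case, using an arbitrary serial queue:

import Foundation

let queue = DispatchQueue(label: "com.example.deadlock")

queue.async {                 // task A
    print("A started")
    queue.sync {              // task B: waits for A to leave the serial queue
        print("B")            // never printed
    }
    print("A finished")       // never reached: A is waiting on B
}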

In the sketch above, queue is a serial queue; it dispatches task A via async, and A in turn dispatches task B via sync. A waits for B to finish before continuing, while B cannot start until A has finished. This creates a deadlock.
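A sketch of the main-queue version of the same mistake:

import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Task A is viewDidLoad itself, already running on the main queue.
        // Dispatching task B synchronously to the same queue blocks A until
        // B finishes, but B cannot start until A finishes: deadlock.
        DispatchQueue.main.sync {
            print("never printed")
        }
    }
}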

Similarly, in the sketch above, task A on the main queue (the viewDidLoad method) uses sync to dispatch task B; the two wait on each other, again causing a deadlock.

DispatchWorkItem

DispatchWorkItem is a wrapper around a task, allowing the task to be reused or cancelled.

import Foundation

let queue = DispatchQueue(label: "com.swiftpal.dispatch.workItem")

// Create a work item
let workItem = DispatchWorkItem {
    print("Stored Task")
}

// Task 1
queue.async(execute: workItem)

// Task 2: scheduled to run one second from now
queue.asyncAfter(deadline: DispatchTime.now() + 1, execute: workItem)

// Cancel the work item
workItem.cancel()

// Task 3
queue.async(execute: workItem)

if workItem.isCancelled {
    print("Task was cancelled")
}

Here we create a serial queue and a DispatchWorkItem whose task is just a single print statement.

Next, we dispatch the work item twice, cancel it, and then dispatch it once more. In the output, we see "Stored Task" only once: Task 1 runs immediately, Task 2 is cancelled before its one-second delay elapses, and Task 3 is dispatched after cancellation, so a cancelled work item never executes. The isCancelled check then prints "Task was cancelled".