A common scenario in iOS network programming is that we need to run two requests in parallel and proceed to the next step only after both succeed. Below are some common ways to handle this, each of which is also easy to get wrong:

  • DispatchGroup: place multiple requests into one GCD group and coordinate their completion with DispatchGroup.wait() and DispatchGroup.notify().
  • OperationQueue: instantiate an Operation for each request, add them to an OperationQueue, and control execution order through dependencies between them.
  • Serial DispatchQueue: use a serial queue or an NSLock to avoid data races and make access to shared state safe across threads.
  • Third-party libraries: Futures/Promises and reactive programming libraries provide higher-level concurrency abstractions.
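As an illustration of the first approach in the list above, a DispatchGroup coordinating two asynchronous tasks might look like the following minimal sketch. The work items here are stand-ins for network requests, and the serial `resultQueue` is an assumption of mine for guarding shared state:

```swift
import Foundation

let group = DispatchGroup()
let resultQueue = DispatchQueue(label: "results")  // serial queue guards shared state
var results = [String]()

for name in ["first", "second"] {
    group.enter()
    DispatchQueue.global().async {
        // Stand-in for an asynchronous network request.
        let response = "response for \(name)"
        resultQueue.async {
            results.append(response)
            group.leave()   // signal completion only after the result is stored
        }
    }
}

// Block until both "requests" have signaled completion.
group.wait()
print(results.count)   // 2
```

Note how easy it is to get this wrong: forgetting a single `leave()` deadlocks the `wait()`, and writing `results` without the serial queue is a data race.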

After years of practice, I have come to realize that all of the approaches above have certain drawbacks, and that the third-party libraries are difficult to use correctly.

Challenges in concurrent programming

It’s hard to think concurrently: most of the time we read code the way we read a story, from the first line to the last. When the logic of the code is not linear, it becomes hard to follow. Even in a single-threaded environment, debugging and tracing execution across multiple classes and frameworks is already a headache.

The data race problem: in a multi-threaded environment, concurrent reads are thread-safe, but writes are not. If multiple threads write to the same memory at the same time, a data race occurs, leading to potentially corrupted data.
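As a contrived sketch of what can go wrong, two threads performing unsynchronized increments on the same variable will race. Thread Sanitizer flags exactly this pattern, and the final count is unpredictable:

```swift
import Foundation

var counter = 0
let group = DispatchGroup()

for _ in 0..<2 {
    group.enter()
    DispatchQueue.global().async {
        for _ in 0..<100_000 {
            counter += 1   // unsynchronized read-modify-write: a data race
        }
        group.leave()
    }
}
group.wait()

// Increments are frequently lost, so this often prints less than 200000.
print(counter)
```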

Understanding dynamic behavior in a multithreaded environment is hard enough on its own, and identifying the threads involved in a data race is harder still. We can prevent data races with a mutex, but keeping the locking correct as the code is modified over time can be very difficult.

Difficult to test: many concurrency problems never show up during development. Xcode and LLVM provide the Thread Sanitizer to catch these issues, but debugging and tracking them down remains a challenge, because in a concurrent environment an application's behavior depends on the system as well as on the code itself.

A simple way to handle concurrent situations

Given the complexity of concurrent programming, how should we resolve multiple requests in parallel?

The simplest way to do this is to avoid writing parallel code and instead write multiple requests in linear succession:



let session = URLSession.shared

session.dataTask(with: request1) { data, response, error in
    // check for errors
    // parse the response data

    session.dataTask(with: request2) { data, response, error in
        // check for errors
        // parse the response data

        // if everything succeeded...
        callbackQueue.async {
            completionHandler(result1, result2)
        }
    }.resume()
}.resume()

To keep the code simple, details such as error handling and request cancellation are omitted. But chaining unrelated requests in sequence like this hides some problems. For example, if the server supports HTTP/2, we fail to take advantage of its ability to multiplex multiple requests over the same connection, and the serial execution means we are not making full use of the processor.

Incorrect perception of URLSession

To avoid possible data races and thread-safety issues, I wrote the code above with nested requests. Had I made the requests concurrent instead of nested, the two callbacks could write to the same block of memory at once, and such data races are notoriously difficult to reproduce and debug.

One possible solution is a locking mechanism that allows only one thread at a time to write to the shared memory. The pattern itself is simple: acquire the lock, execute the code, release the lock. Of course, there are a few tricks to using locks properly.
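The acquire-execute-release pattern just described can be sketched with NSLock. The `shared` dictionary and `store` helper here are my own illustrative names, standing in for whatever memory the two request callbacks would write:

```swift
import Foundation

let lock = NSLock()
var shared = [String: String]()    // memory written by multiple callbacks

func store(_ value: String, forKey key: String) {
    lock.lock()                    // acquire the lock
    defer { lock.unlock() }        // release it even on an early return
    shared[key] = value            // execute the guarded code
}

let group = DispatchGroup()
for key in ["left", "right"] {
    group.enter()
    DispatchQueue.global().async {
        store("result for \(key)", forKey: key)
        group.leave()
    }
}
group.wait()
print(shared.count)   // 2
```

Releasing the lock in `defer` is one of those tricks: it guards against forgetting `unlock()` on an early exit path.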

But according to the URLSession documentation, there is a simpler solution to concurrent requests.



init(configuration: URLSessionConfiguration,
     delegate: URLSessionDelegate?,
     delegateQueue queue: OperationQueue?)

[…] queue: An operation queue for scheduling the delegate calls and completion handlers. The queue should be a serial queue, in order to ensure the correct ordering of callbacks. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.

This means that callbacks from a URLSession instance, including the URLSession.shared singleton, are never executed concurrently unless you explicitly pass a concurrent queue for the queue parameter.
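We can check this behavior directly. The sketch below relies only on the documented API quoted above; if no delegate queue is supplied, the session creates its own serial one:

```swift
import Foundation

// A session created without an explicit delegate queue gets its own
// serial operation queue, per the URLSession documentation.
let session = URLSession(configuration: .default)

// A serial OperationQueue has a maxConcurrentOperationCount of 1.
print(session.delegateQueue.maxConcurrentOperationCount)  // expected: 1
```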

Extending URLSession with concurrency support

Based on this new understanding of URLSession, let’s extend it to support thread-safe concurrent requests (the complete code is linked in the original post).



enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    @discardableResult
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask
}

// Example

let zen = URL(string: "https://api.github.com/zen")!
session.get(zen) { result in
    // process the result
}

First, we define a simple URLResult enum to model the different results a URLSessionDataTask callback can produce. This enum helps simplify handling the results of multiple concurrent requests. For brevity, the complete implementation of URLSession.get(_:completionHandler:) is not shown here: the method issues a GET request to the URL, calls resume() automatically, and wraps the outcome in a URLResult value.
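Since the implementation is omitted from the article, here is one possible sketch of that helper, matching the URLResult cases above (the actual implementation in the linked code may differ in its details):

```swift
import Foundation

enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    /// Issues a GET request for `url`, resumes the task automatically,
    /// and wraps the outcome in a `URLResult`.
    @discardableResult
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask {
        var request = URLRequest(url: url)
        request.httpMethod = "GET"

        let task = dataTask(with: request) { data, response, error in
            if let error = error {
                completionHandler(.error(error, data, response))
            } else if let data = data, let response = response {
                completionHandler(.response(data, response))
            }
        }
        task.resume()   // callers never need to remember to resume
        return task
    }
}
```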



@discardableResult
func get(_ left: URL, _ right: URL, completionHandler: @escaping (URLResult, URLResult) -> Void) -> (URLSessionDataTask, URLSessionDataTask)

This API takes two URL parameters and returns two instances of URLSessionDataTask. The following code is the first section of the function implementation:



precondition(delegateQueue.maxConcurrentOperationCount == 1,
             "URLSession's delegateQueue must be configured with a maxConcurrentOperationCount of 1.")

Since a concurrent OperationQueue can still be passed in when instantiating a URLSession, we use the precondition above to rule that case out.



var results: (left: URLResult?, right: URLResult?) = (nil, nil)

func continuation() {
    guard case let (left?, right?) = results else { return }
    completionHandler(left, right)
}

Continuing the implementation, we define a tuple variable, results, to hold the outcome of each request, plus a nested utility function that checks whether both requests have produced a result before invoking the completion handler.



let left = get(left) { result in
    results.left = result
    continuation()
}

let right = get(right) { result in
    results.right = result
    continuation()
}

return (left, right)

Finally, we append this code to the implementation: we request each URL separately and store its result as it completes. It’s worth noting that continuation() is executed twice, and this is how we determine whether both requests have finished:

  1. On the first execution of continuation(), one of the requests has not yet completed, so its result is nil and the completion handler does not run.
  2. On the second execution, both requests have completed, and the completion handler is called with both results.

We can then test this code with a simple request:



extension URLResult {
    var string: String? {
        guard case let .response(data, _) = self,
            let string = String(data: data, encoding: .utf8)
        else { return nil }
        return string
    }
}

URLSession.shared.get(zen, zen) { left, right in
    guard case let (quote1?, quote2?) = (left.string, right.string)
    else { return }

    print(quote1, quote2, separator: "\n")
    // Approachable is better than simple.
    // Practicality beats purity.
}

The parallel paradox

I’ve found that the simplest and most elegant way to deal with parallelism is to write as little concurrent code as possible; our processors are very good at executing linear code. And yet, breaking large blocks of code into smaller tasks that can execute in parallel often makes the code more readable and maintainable.

By Adam Sharp, 2017/9/21. Translation: BigNerdCoding (translation address and original link in the source).