The original article is reprinted from Liu Yue's technology blog: v3u.cn/a_id_159

As the outbreak in Beijing continues to escalate, the battle against the epidemic is clearly going to be a protracted one that cannot be won lightly. The virus can resurface at any time, and lessons paid for with lives deserve careful reflection from every one of us. The community where the author lives has also begun organizing residents for nucleic acid testing in batches. A crowded, chaotic scene was expected, but the process turned out to be orderly, well staged, and highly efficient: the quarantine authorities adopted a clever strategy, screening hundreds of people in just a few minutes by assigning one test kit to every five people.

Here is a brief explanation of how nucleic acid testing for the virus works. A nasal or pharyngeal swab is taken (a swab is pushed deep into the nose or throat to scrape off secretions), and the swab is placed into the test kit. The kit targets a gene sequence unique to the virus and amplifies it by PCR, so that the selected target DNA sequence increases exponentially. Each amplified copy can bind to a fluorescently labeled probe added in advance, producing a fluorescence signal; the more target gene copies are amplified, the stronger the accumulated signal. Put bluntly: the stronger the fluorescent color change in the kit, the greater the viral load and activity.
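
The "exponential increase" is simply repeated doubling: under ideal conditions, each PCR cycle roughly doubles the number of target copies. A tiny sketch of that growth (the cycle count and efficiency value below are illustrative assumptions, not figures from the article):

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    # Idealized PCR: each cycle multiplies the copy count by (1 + efficiency);
    # efficiency=1.0 means perfect doubling.
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))  # about 1.07e9 copies from a single target molecule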

Groups of five people share one test kit. If a group's result comes back positive, the five people in that group are then tested individually. Since the vast majority of people are healthy, one kit covers five people in most cases, so roughly five times as many people can be screened with the same number of kits. Clearly, the quarantine authorities are using a "divide and conquer" approach, reminiscent of merge sort, to break the problem down and improve efficiency.

The idea has ancient roots: a classical Chinese text on medicine records that whenever there were people with illnesses in the state, the physicians were made to divide the cases among themselves and treat them separately. The core idea is this: a big problem that is hard to solve directly is split into a number of smaller problems of the same form, which are then knocked out one by one; divide and rule. Put more precisely: if the original problem can be divided into n sub-problems (1 < n, each smaller than the original), every sub-problem can be solved, and a solution to the original problem can be assembled from the solutions of the sub-problems, then divide and conquer is applicable. The sub-problems produced by the divide step are smaller instances of the original problem, which makes recursion a natural fit. Applying the divide step repeatedly keeps each sub-problem identical in form to the original while steadily shrinking its size, until the sub-problems become small enough to solve directly. This naturally leads to recursion, which is why divide and conquer and recursion so often appear together in algorithmic solutions.
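
The structure described above can be captured in a generic template. The sketch below is purely illustrative; is_base_case, split, solve_directly and combine are placeholder names, not functions from the article:

def divide_and_conquer(problem):
    # Small enough to solve directly: the exit of the recursion.
    if is_base_case(problem):
        return solve_directly(problem)
    # 1. Divide: break the problem into smaller instances of the same form.
    subproblems = split(problem)
    # 2. Conquer: solve each sub-problem recursively.
    sub_solutions = [divide_and_conquer(p) for p in subproblems]
    # 3. Combine: assemble the sub-solutions into a solution of the original problem.
    return combine(sub_solutions)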

Nucleic acid testing is a textbook case for divide and conquer: once the problem is scaled down far enough (a single group, or ultimately a single person), it can be solved directly, and the overall problem can be broken into many smaller problems of exactly the same form (is this sample positive or not?).
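
As a thought experiment, the pooling strategy itself can be written as a small divide-and-conquer routine. This is only a simplified model, not the health authority's actual protocol; test_pool is a hypothetical function that returns True when any sample in a pool is positive:

def find_positives(samples, test_pool, group_size=5):
    # Two-level pooled testing: one kit per group of group_size,
    # and only members of a positive group get individual tests.
    positives = []
    for i in range(0, len(samples), group_size):
        group = samples[i:i + group_size]
        if test_pool(group):  # one kit covers the whole group
            positives.extend(s for s in group if test_pool([s]))
    return positives

# Toy usage: a sample is just a boolean, and a pool is positive
# if any of its members is.
samples = [False] * 23 + [True] + [False] * 6
print(find_positives(samples, test_pool=any))  # [True]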

In technical interviews, we can use divide and conquer to solve classic problems such as the following:

Merge sort

def merge_sort(lst):
    # Base case of the recursion: a sequence of length 0 or 1 is already sorted
    if len(lst) <= 1:
        return lst

    middle = len(lst) // 2
    # 1. Divide: keep splitting the original sequence in half by recursion
    left = merge_sort(lst[:middle])
    right = merge_sort(lst[middle:])
    # Sort and merge the two halves
    return merge(left, right)

def merge(left, right):
    i, j = 0, 0
    result = []
    # 2. Conquer: compare the two sorted subsequences element by element
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # 3. Combine: append whatever remains of either subsequence
    result.extend(left[i:])
    result.extend(right[j:])
    return result
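
A quick check of the functions above (the sample list is just an illustration):

print(merge_sort([9, 3, 6, 8, 9, 19, 1, 5]))  # [1, 3, 5, 6, 8, 9, 9, 19]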

Quick sort

def quickSort(listx):
    if len(listx) <= 1:
        return listx
    pivot = listx[len(listx)//2]              # Take the middle element of the list as the pivot
    listl = [x for x in listx if x < pivot]   # Elements smaller than the pivot
    listm = [x for x in listx if x == pivot]  # Elements equal to the pivot
    listr = [x for x in listx if x > pivot]   # Elements larger than the pivot
    left = quickSort(listl)                   # Recursively sort the left part
    right = quickSort(listr)                  # Recursively sort the right part
    return left + listm + right               # Combine the three parts
print(quickSort([9, 3, 6, 8, 9, 19, 1, 5]))   # [1, 3, 5, 6, 8, 9, 9, 19]
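
Note that this version builds three new lists on every call, trading extra memory for simplicity, and the separate "equal to pivot" list means duplicate values (the two 9s in the sample input) are gathered in a single pass rather than being recursed on again.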

Binary search

def binary_search(lis, key):
    low = 0
    high = len(lis) - 1
    time = 0
    while low <= high:
        time += 1
        mid = (low + high) // 2
        if key < lis[mid]:
            high = mid - 1
        elif key > lis[mid]:
            low = mid + 1
        else:
            # Print how many times the search range was halved
            print("times: %s" % time)
            return mid
    # Key not found: report the number of halvings and return False
    print("times: %s" % time)
    return False
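
Binary search requires the input list to be sorted. A quick check of the function above (the list and key are chosen just for illustration):

print(binary_search([1, 3, 5, 6, 8, 9, 9, 19], 8))  # prints "times: 3", then 4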

Maximum depth of a binary tree

class Solution(object):
    def maxDepth(self, root):
        """
        :type root: TreeNode
        :rtype: int
        """
        if not root:
            return 0
        left = self.maxDepth(root.left) + 1
        right = self.maxDepth(root.right) + 1
        return left if left > right else right
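
LeetCode supplies the TreeNode class for this problem; to run the solution on its own, a minimal stand-in and a small sample tree might look like this (the TreeNode definition below is an assumption for illustration, not part of the original solution):

class TreeNode(object):
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(Solution().maxDepth(root))  # 3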

Compute x to the nth power

class Solution(object):
    def myPow(self, x, n):
        """
        :type x: float
        :type n: int
        :rtype: float
        """
        if not n:
            return 1
        if n < 0:
            return 1 / self.myPow(x, -n)
        if n % 2:
            return x * self.myPow(x, n - 1)
        return self.myPow(x * x, n // 2)
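
Fast exponentiation squares the base and halves the exponent whenever the exponent is even, so only O(log n) multiplications are needed. A quick check (values chosen just for illustration):

s = Solution()
print(s.myPow(2.0, 10))  # 1024.0
print(s.myPow(2.0, -2))  # 0.25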

Of course, divide and conquer is not foolproof. Back in the nucleic acid scenario, pooling really is about five times more efficient in the best case, but it can mean extra work in the worst case. If nobody in the batch is infected, that is the most optimistic situation: one kit per group of five is enough. If every group turns out to contain at least one infected person, that worst case adds a whole round of individual retests on top of the group tests. So the fundamental question becomes: given an assumed infection rate, how many samples should be pooled into one group? The trade-offs include testing efficiency, cost, and how quickly positives can be pinpointed. In practice, different regions can adopt different testing strategies, and the strategy can be adjusted as results come in.
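
One standard way to reason about the group size (this is the textbook group-testing estimate, not a formula from the original article): with infection rate p and group size k, each group costs 1 pooled test, plus k individual tests with probability 1 - (1 - p)^k, so the expected number of tests per person is 1/k + 1 - (1 - p)^k. A small sketch that compares candidate group sizes:

def expected_tests_per_person(p, k):
    # One pooled test per group of k, plus k individual tests
    # whenever the pool turns out to be positive.
    return 1 / k + 1 - (1 - p) ** k

# At an assumed 1% infection rate, larger pools help up to a point,
# then the chance of a positive pool starts to erode the savings.
for k in (3, 5, 10, 20):
    print(k, round(expected_tests_per_person(0.01, k), 3))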

Conclusion: algorithms are everywhere in daily life. Many students dread algorithm questions when they go out to interview, but an algorithm is really just a way of solving a problem, and its whole purpose is to improve efficiency.

The original article is reprinted from Liu Yue's technology blog: v3u.cn/a_id_159