Introduction to the Top 10 Classic Sorting Algorithms

The top 10 classic sorts are direct insertion sort, Shell sort, bubble sort, quicksort, direct selection sort, heap sort, merge sort, counting sort, bucket sort and radix sort

Sorting algorithms divide into comparison sorts and non-comparison sorts; the classification is shown in the following figure

Their time and space complexities are shown in the figure below:

Stability and instability

Stability means that equal elements (objects) keep their relative order after sorting (that is, if two equal elements A and B appear in the order A, B before sorting, they still appear as A, B afterwards)

Instability, by contrast, means the relative order of equal elements (objects) is not guaranteed after sorting (that is, two elements A and B with the same value that appear as A, B before sorting may come out as B, A)

Note: all of the sorts below order elements from smallest to largest

To make the algorithms easier to understand, each of the ten classic sorts is accompanied by an animated sorting GIF: the figure comes first, then the explanation

Comparison sort

As the name suggests, comparison sorts determine the order through pairwise comparisons. Seven are in common use: direct insertion sort, Shell sort, bubble sort, quicksort, selection sort, heap sort and merge sort

Among them, Shell sort, quicksort, heap sort and merge sort are optimized variants. The simple sorts hit their worst O(n^2) behaviour on reverse-ordered input; by comparing and exchanging elements across larger gaps, the optimized variants greatly reduce the number of comparisons and moves, bringing the time complexity down from O(n^2)

Direct insertion sort

The figure above illustrates the insertion sort process:

Sorting steps

1. Treat the first element as an already sorted sequence by default, and start inserting from the second element

2. Save the value of the current (i-th) element, then compare it backwards against the sorted part

3. If the preceding element is larger than the saved value (sorting from smallest to largest), shift that element one position back to fill the current slot, step to the element before it, and repeat Step 3; otherwise the comparison ends

4. Write the saved value into the slot vacated by the last shift

5. The sort ends once the last element has been inserted

Introduction to Complexity

Time complexity:

The worst case is O(n^2): on reverse-ordered input, every pass has to compare all the way back to the front

The best case is O(n): on already-sorted input the inner backward comparison stops after a single check per pass, so the inner loop does not grow with n

The average is O(n^2): in general both the outer and inner loops are proportional to n

Space complexity:

It is O(1): only a few constant-size variables are created, independent of n (even though a value is saved in each pass of the loop, the temporary variable lives on the stack and is released at the end of the pass)

Code implementation

```c
void sortByStraightInsertion(int list[], int length) {
    for (int i = 1; i < length; i++) {
        int pre, current = list[i];  // save the current element
        // shift larger predecessors one slot to the right
        for (pre = i - 1; pre >= 0 && list[pre] > current; pre--) {
            list[pre + 1] = list[pre];
        }
        list[pre + 1] = current;     // drop the saved value into the gap
    }
}
```

Shell sort

The figure above demonstrates the process of Shell sort (also known as diminishing increment sort). Note that the comparisons are actually skip comparisons across an interval:

Shell sort builds on insertion sort: it sets a step size gap and sorts the elements that lie gap apart, shrinking the increment as it goes (here the step is halved each round). Adjusting the gap sequence tunes the sorting speed, so the complexity varies with the chosen gaps

Sorting steps

1. Set the step size gap (the interval for backward comparisons). By default it starts at half the list length and is halved each round

2. The gap implicitly splits the list into several interleaved groups; the gap for each round is length/2, length/4, length/8, ...

3. Run a gapped insertion sort with the current step: start from the element at index gap and save it as the current element

4. Compare the current element with the element gap positions before it; if that element is larger (sorting from smallest to largest), shift it gap positions back to fill the current slot and repeat Step 4; otherwise the comparison ends and the saved value is written into the vacated slot

5. Halve the gap and repeat from Step 3, running the gapped insertion sort until the round with step size 1 completes

Case diagram demonstration

Because the animated GIF may not be clear enough, a static step-by-step diagram is included here

Introduction to Complexity

Time complexity:

Setting a step size greatly reduces the cost of reverse-ordered input. The time complexity varies considerably with the gap sequence: the average lies between O(nlog2n) and O(n^2), and with a well-tuned gap sequence it has been estimated at roughly O(n^1.3)

Space complexity:

It is O(1): only a few constant-size variables are created, independent of n

Code implementation

```c
#pragma mark - Shell sort
void sortByShell(int list[], int length) {
    // halve the gap each round until it reaches 1
    for (int gap = length / 2; gap > 0; gap /= 2) {
        // gapped insertion sort for the current step
        for (int i = gap; i < length; i++) {
            int pre, current = list[i];
            for (pre = i - gap; pre >= 0 && list[pre] > current; pre -= gap) {
                list[pre + gap] = list[pre];
            }
            list[pre + gap] = current;
        }
    }
}
```

Bubble sort

The bubble sort process is shown in the figure above

Sorting steps

1. Start with the first element and compare it with the next one

2. If the next element is smaller (sorting from smallest to largest), swap them; otherwise leave them in place

3. Repeat Step 2 up to the last unsorted element; the largest element bubbles to the end

4. Repeat Steps 1 to 3 on the remaining n-1 elements until nothing is left to swap (only one unsorted element remains)

Introduction to Complexity

Time complexity:

The worst case is O(n^2): on reverse-ordered input every pass compares and swaps through the whole unsorted portion

The best case is O(n): on already-sorted input, a did-swap flag in the inner loop lets the sort finish as soon as a pass performs no swaps

The average is O(n^2): two nested loops, each proportional to n

Space complexity:

It is O(1): only a few constant-size variables are created, independent of n

Code implementation

```c
#pragma mark - bubble sort
void sortByBubble(int list[], int length) {
    for (int i = 0; i < length; i++) {
        int noSort = 1;  // did this pass perform any swap?
        for (int j = 0, last = length - i - 1; j < last; j++) {
            if (list[j] > list[j + 1]) {
                int tem = list[j + 1];
                list[j + 1] = list[j];
                list[j] = tem;
                noSort = 0;
            }
        }
        // optimization: a pass with no swaps means the list is sorted,
        // which brings the best case (already sorted input) down to O(n)
        if (noSort) break;
    }
}
```

Quick sort

The quicksort process is shown in the figure above

Quicksort builds on the exchange idea of bubble sort. It picks a pivot value (usually the first element), moves smaller values to its left and larger values to its right, then picks pivots for the two sides and repeats, until each part holds a single element

Note: in practice quicksort is usually noticeably faster than the other O(nlog2n) algorithms, because its inner loop can be implemented efficiently on most architectures and it requires little extra space, so it comes up often in interviews

Sorting steps

Here the slot vacated by the pivot is cleverly reused, filling it alternately from both ends to avoid wasting extra space

1. Check whether the interval [low, hight] can still be split; if low >= hight the interval is done, otherwise continue to the next step

2. Take the first element of the interval as the pivot, and copy the interval bounds into l and h. The pivot's original slot is now free

3. Traverse the interval, partitioning the sequence around the pivot

4. Scan from h towards the front, decrementing h each time. When an element smaller than the pivot is found, write it into the free slot at l (the slot at h becomes free) and go to Step 5; if h == l first, the partition is complete, go to Step 6

5. Scan from l towards the back, incrementing l each time. When an element larger than the pivot is found, write it into the free slot at h (the slot at l becomes free) and go back to Step 4; if l == h first, the partition is complete, go to Step 6

6. After Steps 4 and 5, l == h is the pivot's final (free) position; write the pivot there

7. Recurse on the two sub-intervals, [low, l-1] on the left and [l+1, hight] on the right, and repeat from Step 1 until done

Case diagram demonstration

Because the animated GIF may not be clear enough, a static step-by-step diagram is included here

Introduction to Complexity

Time complexity:

The worst case is O(n^2): with the pivot fixed as the first element, sorted or reverse-sorted input makes every partition maximally lopsided, and the algorithm degenerates towards bubble sort

The best case is O(nlog2n): each round picks a pivot and splits the sequence into a smaller-than-pivot part and a larger-than-pivot part; in the best case the pivot happens to be the median every time, halving the sequence until single elements remain

The average is O(nlog2n): the pivot is rarely the extreme value, so in most cases the partitions stay reasonably balanced

Space complexity:

It is O(log2n) on average: quicksort recurses, and each stack frame holds only a few variables, so the space cost is the recursion depth. With roughly balanced partitions the depth is about log2n; in the worst (fully lopsided) case the depth, and hence the space, degrades to O(n)

Code implementation

```c
#pragma mark - quick sort
// low: first index of the interval, hight: last index (inclusive)
void quickSortList(int list[], int low, int hight) {
    if (hight <= low) return;
    int l = low, h = hight;
    int pri = list[l];  // pivot; its slot is now free
    while (l < h) {
        // scan from the back for a value smaller than the pivot
        while (l < h && list[h] >= pri) h--;
        if (l == h) break;
        list[l] = list[h];  // fill the free slot at l; the slot at h is free
        // scan from the front for a value larger than the pivot
        while (l < h && list[l] <= pri) l++;
        if (l == h) break;
        list[h] = list[l];  // fill the free slot at h; the slot at l is free
    }
    list[l] = pri;  // l == h is the pivot's final position
    quickSortList(list, low, l - 1);
    quickSortList(list, l + 1, hight);
}
```

Direct selection sort

The figure above illustrates the process of selecting sorting directly

Sorting steps

1. Treat the first position as holding the minimum, scan the remaining elements for anything smaller, and record the index of the smallest value found; then swap it into the first position

2. Repeat for the second position, then the third, and so on up to the last element, at which point the sort is finished

Introduction to Complexity

Time complexity:

The best, worst and average cases are all O(n^2): every round scans the whole unsorted remainder to select its smallest element and move it to the front, and there are n rounds, so the cost is fixed at O(n^2)

Space complexity:

It is O(1): only a few constant-size variables are created, independent of n

Code implementation

```c
void sortByStraightSelect(int list[], int length) {
    for (int i = 0; i < length; i++) {
        int min = i;  // index of the smallest value found so far
        for (int j = i + 1; j < length; j++) {
            if (list[min] > list[j]) min = j;
        }
        int tem = list[i];  // swap the minimum into position i
        list[i] = list[min];
        list[min] = tem;
    }
}
```

Heap sort

The figure above illustrates the heapsort process

Heap sort exploits the heap property (in a max-heap the parent is always larger than or equal to its children; in a min-heap, smaller; heaps were introduced in an earlier article). Each round it takes the top of the heap, places it at the end, re-adjusts the heap and selects again, and so on

Sorting steps

1. Treat the list as a complete binary tree and build it into a max-heap (for ascending order), adjusting the heap from the bottom up and updating the sequence

2. Swap the heap top with the last element (i.e. select the largest element to the end), and exclude the selected element from the next heap adjustment

3. Re-adjust the remaining elements into a max-heap and repeat Step 2 until every element has been selected

Introduction to Complexity

Time complexity:

The best, worst and average cases are all O(nlog2n): each heap adjustment walks a single root-to-leaf path, comparable to a binary search, and n rounds are performed, so the complexity is O(nlog2n)

Space complexity:

It is O(1): only a few constant-size variables are created, independent of n

Code implementation

```c
#pragma mark - heap sort
// Sift list[start] down so that list[start..end] becomes a max-heap again,
// assuming both subtrees below start already satisfy the heap property
void heapShift(int list[], int start, int end) {
    int ori = list[start];    // value being sifted down
    int sub = start * 2 + 1;  // left child
    while (sub <= end) {
        // move to the larger child; the right child exists only if sub < end
        if (sub < end && list[sub] < list[sub + 1]) sub++;
        if (ori >= list[sub]) break;  // heap property holds: stop
        list[start] = list[sub];      // pull the child up
        start = sub;
        sub = sub * 2 + 1;
    }
    list[start] = ori;
}

void sortByHeap(int list[], int length) {
    // Build the max-heap bottom-up. Parents live in the first half of the
    // array (index doubling), and every leaf is already a valid heap, so
    // adjusting from the middle down to the root is enough
    for (int i = length / 2; i >= 0; i--) {
        heapShift(list, i, length - 1);
    }
    // Swap the heap top (current maximum) to the end, shrink the heap by one
    // and re-heapify; repeat until every element has been selected
    for (int i = length - 1; i > 0; i--) {
        int tem = list[i];
        list[i] = list[0];
        list[0] = tem;
        heapShift(list, 0, i - 1);
    }
}
```

Merge sort

The figure above shows the merge sort process (in practice groups are merged pairwise: the second round merges the groups produced by the first round, and so on)

Merge sort starts by treating every element as its own group; each round merges adjacent pairs of groups into ordered (ascending) groups, and so on until only one group remains, at which point the sequence is sorted

It follows that during the merge process every group is itself a small ordered sequence (a property used below)

Sorting steps

1. By default every element starts as its own group of size one. Create a result array of length n

2. Starting from the front, merge each pair of adjacent groups in ascending order

3. Since every group is itself sorted, compare the two groups front to back: append the smaller head to the result array and advance within that group; when one group is exhausted, append the remainder of the other group in order

4. Repeat Step 3 until all pairs in the current round have been merged

5. Double the group size and repeat from Step 2 until everything has been merged into a single group

Introduction to Complexity

Time complexity:

The best, worst and average cases are all O(nlog2n): the groups are merged by repeated doubling, so there are about log2n rounds, and each round touches all n elements

Space complexity:

It is O(n): a result array of length n is created so that each round's merge can be generated there and written back

Code implementation

```c
#pragma mark - merge sort
// Merge adjacent groups of `group` elements; temList is the scratch array.
// Needs <math.h> for ceil.
void mergeList(int list[], int temList[], int length, int group) {
    if (group >= length) return;
    int k = 0;  // write index into temList
    // split the list into pairs of groups; ceil makes sure a trailing
    // partial group is still visited (the intervals are half-open)
    for (int i = 0, last = (int)ceil(length / (2.0 * group)); i <= last; i++) {
        // [start, startEnd) is the first group, [end, endEnd) the second
        int start = 2 * i * group;
        int startEnd = start + group > length ? length : start + group;
        int end = startEnd;
        int endEnd = end + group > length ? length : end + group;
        if (start >= length) break;  // past the last group
        int a = start, b = end;
        // two-way merge: both groups are already sorted
        for (; a < startEnd && b < endEnd; k++) {
            if (list[a] <= list[b]) temList[k] = list[a++];
            else                    temList[k] = list[b++];
        }
        if (a >= startEnd) {
            while (b < endEnd) temList[k++] = list[b++];
        } else {
            while (a < startEnd) temList[k++] = list[a++];
        }
    }
    // write this round's result back, then merge the doubled groups
    for (int i = 0; i < k; i++) list[i] = temList[i];
    mergeList(list, temList, length, group * 2);
}

void sortByMerge(int list[], int length) {
    // scratch array of length n (C99 VLA)
    int temList[length];
    mergeList(list, temList, length, 1);
}
```

Noncomparative sort

As the name implies, these sorts do not determine the order by pairwise comparison. Three non-comparison sorts are in common use: counting sort, bucket sort and radix sort

They exploit properties of the numbers themselves to sidestep comparisons entirely. Chosen to fit the usage scenario, they can sometimes greatly reduce the time complexity

Count sorting

The figure above illustrates the counting sort process

It is suitable for sorting integers drawn from a relatively dense range, for example 100 million integers between 1 and 1000

Sorting steps

1. Counting sort requires knowing the maximum and minimum values in advance (or knowing the value range of the sequence to be sorted)

2. Create a temporary counting array; its size is determined by the difference between the maximum and minimum values (plus one)

3. Traverse the input array once, mapping each number to a slot by subtracting the minimum, and add 1 to that slot's count

4. Traverse the counting array once from smallest to largest, emitting each slot's value (its index plus the minimum) as many times as it was counted (subtracting 1 from the count until it reaches 0) into the result array

Introduction to Complexity

Time complexity:

The best, worst and average cases are all O(n+k), where k is the size of the value range: mapping the values into the counting array takes n steps, and reading them back out takes k steps

Therefore, if the value range is too large the counting array wastes a great deal of space; counting sort suits densely packed values, e.g. sorting 100 million numbers between 1 and 1000

Space complexity:

It is O(n+k): the counting array spans the value range k, and when sorting objects (rather than bare integers) an additional O(n) store of the objects is needed so that equal keys still map back to the right elements

Code implementation

```c
#pragma mark - counting sort
void sortByCounting(int list[], int length) {
    // locate the indexes of the minimum and maximum
    int min, max;
    min = max = 0;
    for (int i = 1; i < length; i++) {
        if (list[min] > list[i]) min = i;
        if (list[max] < list[i]) max = i;
    }
    min = list[min];
    max = list[max];
    if (min == max) return;  // all elements equal: already sorted
    int dk = max - min + 1;  // size of the value range
    int temList[dk];         // counting array (C99 VLA)
    for (int i = 0; i < dk; i++) temList[i] = 0;
    // count each value, mapped to index (value - min)
    for (int i = 0; i < length; i++) temList[list[i] - min]++;
    // read the counts back out in ascending order
    int k = 0;
    for (int i = 0; i < dk; i++) {
        while (temList[i]-- > 0) list[k++] = i + min;
    }
}
```

Bucket sort

The figure above illustrates the buckets used in bucket sort

As the figure shows, the value range is divided into several equal-width intervals (buckets); each number is placed into the bucket it falls in, the elements within each bucket are sorted, and the buckets are then emptied in order

As you can see, bucket sort is better suited to evenly distributed sequences (in the extreme where every bucket holds a single value, it turns into counting sort)

Sorting steps

1. Find the maximum and minimum values of the sequence

2. Split the range between them into several equal intervals (buckets), and create containers to hold the elements that fall into each one

3. Subtract the minimum from each element and place it into the bucket selected by integer division by the bucket width

4. Sort the elements within each bucket (bubble, selection, insertion, ...)

5. Take the elements out of the buckets in order

Introduction to Complexity

Time complexity:

The worst case is O(n^2): if all the elements land in a single bucket, the cost follows the worst case of the in-bucket sort

The best case is O(n): if each bucket receives an already ordered run and the in-bucket sort handles ordered input in linear time (e.g. insertion sort, or bubble sort with a did-swap flag)

The average is O(n+k): assuming the elements are spread fairly evenly over m buckets, the in-bucket sorting costs about k = m * (n/m) * log2(n/m) = nlog2(n/m) = n(log2n - log2m), plus n steps to distribute and collect

Space complexity:

It is O(n+k): the buckets plus the data itself. Since the buckets collectively hold all n elements, the cost is element storage plus bucket overhead, i.e. O(n+k)

Code implementation

```c
#pragma mark - bucket sort
// Similar in spirit to counting sort, but values are mapped into a few range
// buckets, saving memory when the range is wide. Each bucket is a linked list
// (LSListNode, pushNode, getListCount and geListNode are external list
// helpers from the accompanying code); fixed per-bucket arrays would cost
// O(k * n) space. Bucket sort pays off only when the values are fairly
// evenly distributed.
void sortByBucket(int list[], int length) {
    // find the minimum and maximum to size the buckets
    int min, max;
    min = max = 0;
    for (int i = 1; i < length; i++) {
        if (list[min] > list[i]) min = i;
        if (list[max] < list[i]) max = i;
    }
    min = list[min];
    max = list[max];
    if (min == max) return;  // all elements equal: already sorted
    int dk = max - min;
    int bucket = 5;                    // number of buckets
    int bucketSize = dk / bucket + 1;  // width of each bucket
    LSListNode *p[bucket];             // one list head per bucket
    for (int i = 0; i < bucket; i++) p[i] = NULL;
    // distribute: bucket index = (value - min) / bucketSize
    for (int i = 0; i < length; i++) {
        int group = (list[i] - min) / bucketSize;
        p[group] = pushNode(p[group], list[i]);
    }
    // sort each bucket (bubble sort here) and write the results back in order
    int k = 0;
    for (int i = 0; i < bucket; i++) {
        int count = getListCount(p[i]);
        if (count < 1) continue;
        int temList[count];  // copy the bucket into a scratch array
        for (int j = 0; j < count; j++) temList[j] = geListNode(p[i], j)->data;
        for (int x = 0; x < count; x++) {
            for (int y = 0, last = count - x - 1; y < last; y++) {
                if (temList[y] > temList[y + 1]) {
                    int tem = temList[y + 1];
                    temList[y + 1] = temList[y];
                    temList[y] = tem;
                }
            }
        }
        for (int j = 0; j < count; j++) list[k++] = temList[j];
    }
}
```

Radix sort

The figure above illustrates the radix sort process

Radix sort orders numbers of up to m digits by sorting on one digit position at a time: each round maps the numbers into buckets 0-9 by the current digit and collects them back in order, and after the rounds for all digit positions the sequence is fully sorted

Sorting steps

1. If the maximum number of digits is given, use it directly; otherwise find the maximum value and count its digits (in decimal, dividing by 10 each time)

2. Taking decimal as the example, prepare 10 mapping buckets for the digits 0-9

3. Start from the units digit

4. Extract the current digit of each number with division and modulo, and append the number to the matching 0-9 bucket

5. Collect the numbers back out of the buckets from 0 through 9, in order, as the input for the next round

6. Move on to the tens digit and repeat Step 4 until the highest digit has been processed

Introduction to Complexity

Time complexity:

The best, worst and average cases are all O(nk), where k is the size of the per-digit range (10 for decimal) multiplied by the number of digits m: each of the m rounds distributes and collects n numbers across the 10 buckets, i.e. roughly 10mn, hence O(nk)

Space complexity:

It is O(n+k): like bucket sort and counting sort, it only needs the 10 digit buckets (k) plus storage that collectively holds the n elements, i.e. O(n+k)

Code implementation

```c
#pragma mark - radix sort
// maxBit: number of decimal digits of the largest value; pass a value < 1 to
// have it computed. LSListNode, pushNode, getListCount and shiftNode are
// external singly-linked-list helpers from the accompanying code.
void sortByRadixSort(int list[], int length, int maxBit) {
    if (maxBit < 1) {
        // find the index of the maximum, then count its digits
        maxBit = 0;
        for (int i = 1; i < length; i++) {
            if (list[maxBit] < list[i]) maxBit = i;
        }
        int max = list[maxBit];
        maxBit = 1;
        while (max / 10) {
            max /= 10;
            maxBit++;
        }
    }
    int d = 10;  // decimal: one bucket per digit 0-9
    LSListNode *queue[d];
    int bitNumber = 1;  // weight of the current digit: 1, 10, 100, ...
    for (int i = 0; i < maxBit; i++, bitNumber *= 10) {
        for (int x = 0; x < d; x++) queue[x] = NULL;
        // distribute every number by its current digit
        for (int k = 0; k < length; k++) {
            int num = list[k] / bitNumber % 10;
            queue[num] = pushNode(queue[num], list[k]);
        }
        // collect buckets 0-9 back into the list, in order
        for (int k = 0, j = 0; k < d; k++) {
            LSListNode *q = queue[k];
            if (getListCount(q) < 1) continue;
            while (q) {
                list[j++] = q->data;
                q = shiftNode(q);
            }
        }
    }
}
```