This series is my summary of data structures and algorithms from when I was preparing for job interviews years ago. It covers the basics as well as classic interview questions from various companies. It was first published on CSDN and is reorganized here as a series for anyone who may find it useful; corrections are welcome. The full code address for this series is here.
0 Overview
Sorting algorithms come up frequently in interviews, and the most commonly asked are quicksort and heap sort. These algorithms are fairly basic, but if you don't write code often, you will produce plenty of bugs in an interview: everyone knows the idea, yet many cannot write it out. This article summarizes the common sorting algorithms, including insertion sort, bubble sort, selection sort, counting sort, merge sort, radix sort, bucket sort, quicksort, and so on. Quicksort is important enough that I will cover it in a separate article, and for heap sort see the binary heap article in this series.
One thing worth noting: insertion sort, bubble sort, merge sort, and counting sort are stable sorts, while the rest are unstable. The full code for this article is here.
1 Insertion Sort
Insertion sort is a very basic sort. It performs especially well on nearly sorted data, reaching O(N) in the best case; its worst-case and average-case time complexity is O(N^2). The code is as follows:
```c
/**
 * Insertion sort
 */
void insertSort(int a[], int n) {
    int i, j;
    for (i = 1; i < n; i++) {
        /*
         * Loop invariant: before each iteration, a[0...i-1] is sorted;
         * when the loop ends, i == n, so a[0...n-1] is sorted.
         */
        int key = a[i];
        for (j = i; j > 0 && a[j-1] > key; j--) {
            a[j] = a[j-1];
        }
        a[j] = key;
    }
}
```
2 Shell Sort
Shell sort calls insertion sort internally: it sorts interleaved subsequences at gaps of N/2, N/4, ..., 1, which leaves the array fully ordered after the final pass. The code is as follows:
```c
void shellSort(int a[], int n) {
    int gap;
    for (gap = n/2; gap > 0; gap /= 2) {
        int i;
        for (i = gap; i < n; i++) {
            int key = a[i], j;
            for (j = i; j >= gap && key < a[j-gap]; j -= gap) {
                a[j] = a[j-gap];
            }
            a[j] = key;
        }
    }
}
```
3 Selection Sort
The idea of selection sort is to find the i-th smallest element and place it at position i: the first pass selects the smallest element and puts it at position 0, the second pass selects the second smallest and puts it at position 1, and so on. Both the best-case and worst-case time complexity of selection sort are O(N^2). The code is as follows:
```c
/**
 * Selection sort
 */
void selectSort(int a[], int n) {
    int i, j, min, tmp;
    for (i = 0; i < n-1; i++) {
        min = i;
        for (j = i+1; j < n; j++) {
            if (a[j] < a[min])
                min = j;
        }
        if (min != i)
            tmp = a[i], a[i] = a[min], a[min] = tmp; /* swap a[i] and a[min] */
    }
}
```
Loop invariant: before each iteration of the outer loop, `a[0...i-1]` contains the `i` smallest numbers of `a`, in sorted order.

- Initially, i = 0 and a[0...-1] is empty, so the invariant trivially holds.
- After each iteration, a[0...i] contains the i+1 smallest numbers of a in sorted order; for example, after the first iteration, a[0...0] holds the smallest number of a.
- When the loop terminates, i = n-1, so a[0...n-2] contains the n-1 smallest numbers of a in sorted order, which means the whole array is sorted.
4 Bubble Sort
Bubble sort has the same time complexity as selection sort. The idea is to make n-1 passes over the array; each pass floats the smallest remaining number up like a rising bubble. The worst case is O(N^2). The code is as follows:
```c
void bubbleSort(int a[], int n) {
    int i, j, tmp;
    for (i = 0; i < n; i++) {
        for (j = n-1; j >= i+1; j--) {
            if (a[j] < a[j-1])
                tmp = a[j], a[j] = a[j-1], a[j-1] = tmp;
        }
    }
}
```
Loop invariant: before each iteration of the outer loop, the subarray `a[0...i-1]` contains the `i` smallest values of `a[0..n-1]`, in sorted order.
An improvement to bubble sort is to check whether any swap occurred during a pass. If no swap occurred, the array is already sorted and we can exit early without running the remaining passes. The improved code is as follows:
```c
void betterBubbleSort(int a[], int n) {
    int tmp, i, j;
    for (i = 0; i < n; i++) {
        int sorted = 1;
        for (j = n-1; j >= i+1; j--) {
            if (a[j] < a[j-1]) {
                tmp = a[j], a[j] = a[j-1], a[j-1] = tmp;
                sorted = 0;
            }
        }
        if (sorted)
            return;
    }
}
```
5 Counting Sort
Suppose the array is a[0...n-1], possibly containing duplicates, and that the largest value in it is k. Build two auxiliary arrays b[] and c[]: b[] stores the sorted result, and c[] stores temporary counts. The time complexity is O(N+k), which is O(N) when k is small, so counting sort suits arrays whose values fall in a small range.
The principle of counting sort is illustrated in the figure above; the code is as follows:
```c
/**
 * Counting sort
 */
void countingSort(int a[], int n) {
    int i, j;
    int *b = (int *)malloc(sizeof(int) * n);
    int k = maxOfIntArray(a, n);             /* largest value in the array */
    int *c = (int *)malloc(sizeof(int) * (k+1)); /* auxiliary array */

    for (i = 0; i <= k; i++)
        c[i] = 0;
    for (j = 0; j < n; j++)
        c[a[j]] = c[a[j]] + 1;  /* c[i] = number of elements equal to i */
    for (i = 1; i <= k; i++)
        c[i] = c[i] + c[i-1];   /* c[i] = number of elements <= i */
    for (j = n-1; j >= 0; j--) {
        b[c[a[j]]-1] = a[j];
        c[a[j]] = c[a[j]] - 1;
    }

    /* Copy back; only needed for testing */
    for (i = 0; i < n; i++) {
        a[i] = b[i];
    }
    free(b);
    free(c);
}
```
Extension: if the final assignment loop for (j = n-1; j >= 0; j--) is changed to for (j = 0; j <= n-1; j++), the code still sorts correctly, but the sort is no longer stable.
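The stability of the backward pass can be seen concretely by sorting records instead of bare ints. The sketch below is illustrative only (the `Item` struct and `countingSortItems` are my own names, not code from the article's repository); it applies the same counting-sort steps to records with a key and a tag:

```c
#include <stdlib.h>

/* A record with a small integer key plus a tag, so the relative order of
 * equal keys can be observed after sorting. */
typedef struct {
    int key;
    char tag;
} Item;

/* Counting sort on Item.key, with keys in 0..k. Filling the output from
 * the back, as in countingSort above, preserves the input order of equal
 * keys, i.e. the sort is stable. */
void countingSortItems(Item a[], int n, int k) {
    Item *b = (Item *)malloc(sizeof(Item) * n);
    int *c = (int *)calloc(k + 1, sizeof(int));
    int i, j;

    for (j = 0; j < n; j++)
        c[a[j].key]++;              /* count occurrences of each key */
    for (i = 1; i <= k; i++)
        c[i] += c[i-1];             /* prefix sums give final positions */
    for (j = n-1; j >= 0; j--)      /* backward pass: stable */
        b[--c[a[j].key]] = a[j];
    for (i = 0; i < n; i++)
        a[i] = b[i];
    free(b);
    free(c);
}
```

Sorting {2,'a'}, {1,'b'}, {2,'c'} yields {1,'b'}, {2,'a'}, {2,'c'}: the two key-2 records keep their original order. With a forward final loop they would come out reversed, which is exactly the instability the extension describes.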
6 Merge Sort
Merge sort uses divide and conquer: sort the two halves recursively, then merge them. The time complexity is O(NlgN). The code is as follows:
```c
/**
 * Merge sort
 */
void mergeSort(int a[], int l, int u) {
    if (l < u) {
        int m = l + (u-l) / 2;
        mergeSort(a, l, m);
        mergeSort(a, m + 1, u);
        merge(a, l, m, u);
    }
}

/**
 * Merge the sorted subarrays a[l..m] and a[m+1..u]
 */
void merge(int a[], int l, int m, int u) {
    int n1 = m - l + 1;
    int n2 = u - m;
    int left[n1], right[n2];
    int i, j;
    for (i = 0; i < n1; i++) /* left holds a[l..m] */
        left[i] = a[l + i];
    for (j = 0; j < n2; j++) /* right holds a[m+1..u] */
        right[j] = a[m + 1 + j];

    i = j = 0;
    int k = l;
    while (i < n1 && j < n2) {
        if (left[i] <= right[j]) /* <= takes the left element on ties, keeping the sort stable */
            a[k++] = left[i++];
        else
            a[k++] = right[j++];
    }
    while (i < n1) /* left[] is not exhausted */
        a[k++] = left[i++];
    while (j < n2) /* right[] is not exhausted */
        a[k++] = right[j++];
}
```
Extension: what about a non-recursive implementation of merge sort?
The non-recursive implementation is actually the more natural way to think about merge sort: work bottom-up, merging runs of one element into runs of two, then runs of two into runs of four, and so on. The code is as follows:
```c
void mergeSortIter(int a[], int n) {
    int i, s = 2;
    while (s <= n) {
        i = 0;
        while (i + s <= n) {
            merge(a, i, i+s/2-1, i+s-1);
            i += s;
        }
        if (i + s/2 < n)          /* a tail longer than s/2 remains: merge it too */
            merge(a, i, i+s/2-1, n-1);
        s *= 2;
    }
    merge(a, 0, s/2-1, n-1);      /* final merge of the two remaining runs */
}
```
7 Radix Sort and Bucket Sort
The idea of radix sort is to sort the numbers digit by digit (note that each pass must use a stable sort, such as counting sort, otherwise the result will be wrong), which yields a fully sorted array. Suppose we sort N numbers with d digits each, and each digit can take at most K values; then one stable pass over a digit takes O(N+K) time, and the total time is O(d(N+K)). When d is constant and K = O(N), the total time complexity is O(N).
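The digit-by-digit idea can be sketched as follows. This is a minimal LSD (least-significant-digit-first) radix sort for non-negative integers of my own, not code from the article's repository, and `countingSortByDigit` is an illustrative name; each pass is a stable counting sort on one decimal digit:

```c
#include <stdlib.h>

/* One stable counting-sort pass on the decimal digit selected by exp
 * (exp = 1, 10, 100, ...). */
static void countingSortByDigit(int a[], int n, int exp) {
    int *out = (int *)malloc(sizeof(int) * n);
    int count[10] = {0};
    int i;

    for (i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;
    for (i = 1; i < 10; i++)
        count[i] += count[i-1];      /* prefix sums give final positions */
    for (i = n-1; i >= 0; i--)       /* scan backward to keep the pass stable */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (i = 0; i < n; i++)
        a[i] = out[i];
    free(out);
}

/* LSD radix sort for non-negative integers. */
void radixSort(int a[], int n) {
    int max, exp, i;
    if (n <= 1)
        return;
    for (max = a[0], i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    for (exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(a, n, exp);
}
```

Each pass costs O(N+10) and there are d passes (d = number of digits of the maximum value), matching the O(d(N+K)) bound above.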
The idea of bucket sort is to divide the interval [0,1) into N equal-sized subintervals (buckets), distribute the N inputs among the buckets, sort each bucket's list with insertion sort, and finally concatenate the elements of all buckets in order.
These two sorts are useful in a limited set of scenarios, so their code is omitted from the article's repository. See Chapter 8 of Introduction to Algorithms for more details.
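For reference, here is a minimal bucket sort sketch of my own under the [0,1) assumption described above; it is not the article's code, and it keeps the buckets in one flat scratch array rather than linked lists, purely for brevity:

```c
#include <stdlib.h>

/* Bucket sort for doubles in [0, 1): spread the n inputs over n buckets,
 * insertion-sort each bucket as elements arrive, then concatenate the
 * buckets in order. Uses an n*n scratch array instead of linked lists. */
void bucketSort(double a[], int n) {
    double *bucket = (double *)malloc(sizeof(double) * n * n);
    int *count = (int *)calloc(n, sizeof(int)); /* count[b] = size of bucket b */
    int i, j, k = 0;

    for (i = 0; i < n; i++) {
        int b = (int)(a[i] * n);         /* bucket index: a[i] is in [0,1) */
        double *cur = bucket + b * n;
        for (j = count[b]; j > 0 && cur[j-1] > a[i]; j--)
            cur[j] = cur[j-1];           /* insertion sort within the bucket */
        cur[j] = a[i];
        count[b]++;
    }
    for (i = 0; i < n; i++)              /* concatenate buckets 0..n-1 */
        for (j = 0; j < count[i]; j++)
            a[k++] = bucket[i * n + j];

    free(bucket);
    free(count);
}
```

With uniformly distributed inputs each bucket holds O(1) elements on average, so the expected running time is O(N); the flat scratch array trades O(n^2) memory for simpler code than the usual linked-list buckets.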
References
- Introduction to Algorithms
- www.cnblogs.com/liushang041… (the non-recursive merge sort implementation is adapted from here)