1. Introduction
Algorithm is king.
To learn front-end development, first build solid fundamentals; only with deep fundamentals will you go further on the front-end road.
This series, The Beauty of JavaScript Data Structures and Algorithms, is written in JavaScript. It aims to introduce data structures and algorithms and to serve as a convenient reference for later review.
Merge sort, quicksort, Shell sort, and heap sort are grouped together because their average time complexity is O(nlogn).
Keep a question in mind as you read: quicksort and merge sort both use the idea of divide and conquer, and their recursive formulas and recursive code are very similar, so what is the difference between them? Read on to find out.
2. Merge Sort
Idea
To sort an array, we split the array down the middle, sort the front and back, and then merge the two parts together, so that the array is in order.
Merge sort uses divide and conquer.
Divide and conquer, as the name suggests, means breaking a big problem into smaller subproblems to solve. When the small subproblems are solved, the big problem is solved.
Note: x >> 1 === Math.floor(x / 2) for integers within the 32-bit range.
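Since this shift trick recurs in the implementations below, here is a quick standalone check of the identity and its limit (the `halve` helper is just for illustration; shift operators truncate their operands to 32-bit integers):

```javascript
// x >> 1 halves x and floors the result, matching Math.floor(x / 2)
// for integers inside the 32-bit range (shift operators truncate to 32 bits).
const halve = x => x >> 1;

console.log(halve(7) === Math.floor(7 / 2));   // true (both are 3)
console.log(halve(-3) === Math.floor(-3 / 2)); // true (both are -2)
// Outside the 32-bit range the shift no longer matches:
console.log(halve(2 ** 32) === Math.floor(2 ** 32 / 2)); // false
```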
Implementation
const mergeSort = arr => {
  // Take a top-down recursive approach
  const len = arr.length;
  if (len < 2) {
    return arr;
  }
  // len >> 1 is equivalent to Math.floor(len / 2)
  const middle = Math.floor(len / 2),
    left = arr.slice(0, middle),
    right = arr.slice(middle); // Split into two subarrays
  return merge(mergeSort(left), mergeSort(right));
};
const merge = (left, right) => {
  const result = [];
  while (left.length && right.length) {
    // Note: the condition must be less than or equal; with only less than, the sort becomes unstable.
    if (left[0] <= right[0]) {
      result.push(left.shift());
    } else {
      result.push(right.shift());
    }
  }
  while (left.length) result.push(left.shift());
  while (right.length) result.push(right.shift());
  return result;
};
Test
// Test
const arr = [3, 44, 38, 5, 47, 15, 36, 26, 27, 2, 46, 4, 19, 50, 48];
console.time('Merge sort time');
console.log('arr :', mergeSort(arr));
console.timeEnd('Merge sort time');
// arr : [2, 3, 4, 5, 15, 19, 26, 27, 36, 38, 44, 46, 47, 48, 50]
// Merge sort time: 0.739990234375ms
Analysis
- First, is merge sort an in-place sort algorithm? The merge function needs extra storage to combine two ordered arrays into one. Although extra memory is requested on every merge and freed as soon as that merge finishes, at any given moment only one merge is executing, so only one temporary buffer is in use, and it never holds more than n elements; the space complexity is therefore O(n). So merge sort is not an in-place sort algorithm.
- Second, is merge sort a stable sort algorithm? The check left[0] <= right[0] in merge ensures that elements with equal values keep their original relative order after merging, so merge sort is a stable sort algorithm.
- Third, what is the time complexity of merge sort? Merge sort is among the most efficient sorting algorithms. For an array of length n, splitting produces logn levels of recursion, and each level performs ordinary subarray merges costing O(n) in total, so the overall time complexity is O(nlogn). Best case: T(n) = O(nlogn). Worst case: T(n) = O(nlogn). Average case: T(n) = O(nlogn).
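The O(nlogn) figure can also be read off the standard divide-and-conquer recurrence (a sketch, writing T(n) for the time to sort n elements and taking the merge cost at each level as n):

```latex
T(1) = C \\
T(n) = 2T(n/2) + n = 4T(n/4) + 2n = 8T(n/8) + 3n = \cdots = 2^k T(n/2^k) + kn
```

Setting k = log2(n) makes the first term Cn, so T(n) = Cn + n log2(n) = O(nlogn).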
Animation
3. Quick Sort
Quicksort's defining characteristics are speed and efficiency. It is one of the fastest sorting algorithms for large data sets.
Idea
- First pick a pivot (usually the middle element of the array), then split the array into two parts around it: compare each remaining element with the pivot in turn, putting anything smaller on the left and everything else on the right.
- Two empty arrays store the elements after each comparison.
- Finally, recurse until the array length is <= 1.

Features: fast and commonly used.
Disadvantage: this version needs to declare two extra arrays per call, wasting memory.
Implementation
Method 1:
const quickSort1 = arr => {
  if (arr.length <= 1) {
    return arr;
  }
  // Take the pivot (middle element)
  const midIndex = Math.floor(arr.length / 2);
  // splice(index, 1) returns an array containing the removed element
  const valArr = arr.splice(midIndex, 1);
  const midIndexVal = valArr[0];
  const left = []; // Elements smaller than the pivot
  const right = []; // Elements larger than or equal to the pivot
  // Iterate over the array and distribute the elements
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] < midIndexVal) {
      left.push(arr[i]); // Smaller than the pivot goes left
    } else {
      right.push(arr[i]); // Everything else goes right
    }
  }
  // Recurse until the array length is <= 1
  return quickSort1(left).concat(midIndexVal, quickSort1(right));
};
const array2 = [5, 4, 3, 2, 1];
console.log('quickSort1:', quickSort1(array2));
// quickSort1: [1, 2, 3, 4, 5]
Method 2:
// Quicksort
const quickSort = (arr, left, right) => {
  let len = arr.length,
    partitionIndex;
  left = typeof left !== 'number' ? 0 : left;
  right = typeof right !== 'number' ? len - 1 : right;
  if (left < right) {
    partitionIndex = partition(arr, left, right);
    quickSort(arr, left, partitionIndex - 1);
    quickSort(arr, partitionIndex + 1, right);
  }
  return arr;
};
const partition = (arr, left, right) => {
// Partition operation
let pivot = left, // Set the pivot value.
index = pivot + 1;
for (let i = index; i <= right; i++) {
if (arr[i] < arr[pivot]) {
swap(arr, i, index);
index++;
}
}
swap(arr, pivot, index - 1);
return index - 1;
};
const swap = (arr, i, j) => {
let temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
};
Test
// Test
const array = [5, 4, 3, 2, 1];
console.log('original array:', array);
const newArr = quickSort(array);
console.log('newArr:', newArr);
// original array: [5, 4, 3, 2, 1]
// newArr: [1, 2, 3, 4, 5]
Analysis
- First, is quicksort an in-place sort algorithm? Yes: the partition() function rearranges elements within the array itself and needs only a constant amount of extra memory to do its partitioning.
- Second, is quicksort a stable sort algorithm? Similar to selection sort, quicksort can break the relative order of elements with equal values, because each exchange may swap elements that are not adjacent. Therefore, quicksort is not stable.
- Third, what is the time complexity of quicksort? Consider an extreme example: the array is already sorted, such as 1, 3, 5, 6, 8. If we choose the last element as the pivot every time, the two intervals produced by each partition are completely unbalanced, so about n partitions are needed to finish, each scanning about n/2 elements on average. In this case the time complexity of quicksort degrades from O(nlogn) to O(n²). Best case: T(n) = O(nlogn). Worst case: T(n) = O(n²). Average case: T(n) = O(nlogn).
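This degradation can be observed directly. Below is an instrumented copy of the method-2 quicksort above (first element as pivot), with a hypothetical `comparisons` counter added so the quadratic worst case on already-sorted input becomes visible:

```javascript
// Instrumented quicksort (first element as pivot) that counts comparisons.
let comparisons = 0;

const swap = (arr, i, j) => { [arr[i], arr[j]] = [arr[j], arr[i]]; };

const partition = (arr, left, right) => {
  const pivot = left;
  let index = pivot + 1;
  for (let i = index; i <= right; i++) {
    comparisons++;
    if (arr[i] < arr[pivot]) {
      swap(arr, i, index);
      index++;
    }
  }
  swap(arr, pivot, index - 1);
  return index - 1;
};

const quickSort = (arr, left = 0, right = arr.length - 1) => {
  if (left < right) {
    const p = partition(arr, left, right);
    quickSort(arr, left, p - 1);
    quickSort(arr, p + 1, right);
  }
  return arr;
};

const n = 100;
quickSort(Array.from({ length: n }, (_, i) => i)); // already sorted: worst case
const sortedComparisons = comparisons; // n(n-1)/2 = 4950, quadratic

comparisons = 0;
quickSort(Array.from({ length: n }, () => Math.random()));
const randomComparisons = comparisons; // typically far fewer, on the order of nlogn

console.log({ sortedComparisons, randomComparisons });
```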
Animation
Answering the opening question
Quicksort and merge sort both use divide and conquer, and their recursive formulas and recursive code are very similar, so what is the difference between them?
It can be seen that:

- Merge sort works from the bottom up: it solves the subproblems first, then merges them.
- Quicksort works the opposite way, from the top down: it partitions first, then solves the subproblems.
- Merge sort is stable with O(nlogn) time complexity, but it is not an in-place sort algorithm.
- Merge sort is not in-place mainly because its merge function cannot be executed in place.
- Quicksort, through a cleverly designed in-place partition function, sorts in place and avoids merge sort's extra memory use.
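The contrast above can be made concrete with a minimal trace: merge sort does its real work (the merge) after recursing, while quicksort does its real work (the partition) before recursing. The sorting itself is stripped away here; only the order of the work steps is logged (an illustrative sketch, not from the original article):

```javascript
// Logs show merge work happening bottom-up, partition work top-down.
const mergeTrace = [];
const quickTrace = [];

const mergeLike = (n, depth = 0) => {
  if (n <= 1) return;
  mergeLike(n >> 1, depth + 1);
  mergeLike(n - (n >> 1), depth + 1);
  mergeTrace.push(`merge at depth ${depth}`); // work after the recursion
};

const quickLike = (n, depth = 0) => {
  if (n <= 1) return;
  quickTrace.push(`partition at depth ${depth}`); // work before the recursion
  quickLike(n >> 1, depth + 1);
  quickLike(n - 1 - (n >> 1), depth + 1);
};

mergeLike(4);
quickLike(4);
console.log(mergeTrace); // deepest merges are logged first
console.log(quickTrace); // shallowest partition is logged first
```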
4. Shell Sort
Idea
- First, divide the whole sequence of records to be sorted into several subsequences.
- Perform direct insertion sort on each subsequence.
- When the records in the whole sequence are basically in order, perform a final direct insertion sort on all of them.
Process
- Take an easy example: [35, 33, 42, 10, 14, 19, 27, 44], with an interval of 4. Form virtual sublists from all values spaced four positions apart: {35, 14}, {33, 19}, {42, 27}, and {10, 44}.
- Compare the values within each sublist and swap them (if necessary) in the original array. After this step the array is [14, 19, 27, 10, 35, 33, 42, 44].
- Next, take an interval of 2, which generates two sublists: {14, 27, 35, 42} and {19, 10, 33, 44}.
- Compare and swap the values in the original array (if necessary). Afterwards the array is [14, 10, 27, 19, 35, 33, 42, 44], with 10 and 19 swapped.
- Finally, sort the rest of the array with an interval of 1; at this point Shell sort is simply an insertion sort on a nearly ordered array.
Implementation
const shellSort = arr => {
  let len = arr.length,
    temp,
    gap = 1;
  console.time('Shell sort time');
  while (gap < len / 3) {
    // Define the gap sequence dynamically (a 3x+1 rule)
    gap = gap * 3 + 1;
  }
  for (; gap > 0; gap = Math.floor(gap / 3)) {
    for (let i = gap; i < len; i++) {
      temp = arr[i];
      let j = i - gap;
      for (; j >= 0 && arr[j] > temp; j -= gap) {
        arr[j + gap] = arr[j];
      }
      arr[j + gap] = temp;
      console.log('arr :', arr);
    }
  }
  console.timeEnd('Shell sort time');
  return arr;
};
Test
// Test
const array = [35, 33, 42, 10, 14, 19, 27, 44];
console.log('original array:', array);
const newArr = shellSort(array);
console.log('newArr:', newArr);
// original array: [35, 33, 42, 10, 14, 19, 27, 44]
// arr : [14, 33, 42, 10, 35, 19, 27, 44]
// arr : [14, 19, 42, 10, 35, 33, 27, 44]
// arr : [14, 19, 27, 10, 35, 33, 42, 44]
// arr : [14, 19, 27, 10, 35, 33, 42, 44]
// arr : [14, 19, 27, 10, 35, 33, 42, 44]
// arr : [14, 19, 27, 10, 35, 33, 42, 44]
// arr : [10, 14, 19, 27, 35, 33, 42, 44]
// arr : [10, 14, 19, 27, 35, 33, 42, 44]
// arr : [10, 14, 19, 27, 33, 35, 42, 44]
// arr : [10, 14, 19, 27, 33, 35, 42, 44]
// arr : [10, 14, 19, 27, 33, 35, 42, 44]
// Shell sort time: 3.592041015625ms
// newArr: [10, 14, 19, 27, 33, 35, 42, 44]
Analysis
- First, is Shell sort an in-place sort algorithm? Shell sort only moves elements within the array itself (within each gap group) and needs only constant-level temporary space, so the space complexity is O(1). Shell sort is therefore an in-place sort algorithm.
- Second, is Shell sort a stable sort algorithm? A single direct insertion sort is stable; it does not change the relative order of equal elements. But in Shell sort, equal elements may move within different gap groups across the multiple insertion passes, which can change their relative order. Therefore, Shell sort is unstable.
- Third, what is the time complexity of Shell sort? Best case: T(n) = O(nlogn). Worst case: T(n) = O(n(log n)²). Average case: depends on the gap sequence.
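For reference, the gaps that the shellSort implementation above actually uses come from its 3x+1 rule (Knuth's sequence). This standalone helper reproduces them; note that for length 8 it yields [4, 1], not the [4, 2, 1] intervals used in the walkthrough:

```javascript
// Reproduce the gap sequence generated by the 3x+1 rule in shellSort,
// from largest to smallest, for a given array length.
const knuthGaps = len => {
  let gap = 1;
  while (gap < len / 3) gap = gap * 3 + 1;
  const gaps = [];
  for (; gap > 0; gap = Math.floor(gap / 3)) gaps.push(gap);
  return gaps;
};

console.log(knuthGaps(8));   // [4, 1]
console.log(knuthGaps(100)); // [40, 13, 4, 1]
```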
Animation
5. Heap Sort
Definition of a heap
A heap is actually a special kind of tree. As long as the following two conditions hold, it is a heap.

- The heap is a complete binary tree. (Complete binary tree: every level is full except possibly the last, whose nodes are packed to the left.)
- The value of every node in the heap is greater than or equal to the value of every node in its subtree. Equivalently, each node's value is greater than or equal to the values of its left and right children; the two statements are the same.

A heap in which every node's value is greater than or equal to every node in its subtree is called a big-top heap; one in which every node's value is less than or equal to every node in its subtree is called a small-top heap.
Also note that many different valid heaps can be built from the same set of data.
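Since a complete binary tree can be stored in a plain array (node i has children at 2i+1 and 2i+2), the big-top heap property can be checked directly. This checker is an illustrative sketch, not part of the original article's code:

```javascript
// Verify the big-top (max) heap property on an array-backed heap:
// every parent must be >= both of its children.
const isMaxHeap = arr => {
  for (let i = 0; i < arr.length; i++) {
    const left = 2 * i + 1;
    const right = 2 * i + 2;
    if (left < arr.length && arr[i] < arr[left]) return false;
    if (right < arr.length && arr[i] < arr[right]) return false;
  }
  return true;
};

console.log(isMaxHeap([9, 6, 8, 5, 4, 1, 2])); // true
console.log(isMaxHeap([4, 6, 8, 5, 9, 1, 2])); // false: node 6 < its child 9
```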
Idea
- Build the initial sequence of keys to be sorted (R1, R2, ..., Rn) into a big-top heap; this is the initial unordered region.
- Swap the top element R[1] with the last element R[n] to get a new unordered region (R1, R2, ..., Rn-1) and a new ordered region (Rn), where R[1..n-1] <= R[n].
- Since the new heap top R[1] may violate the heap property after the swap, the current unordered region (R1, R2, ..., Rn-1) must be adjusted back into a heap; then R[1] is again swapped with the last element of the unordered region, giving a new unordered region (R1, R2, ..., Rn-2) and a new ordered region (Rn-1, Rn). This process repeats until the ordered region holds n-1 elements, and the sort is complete.
Implementation
// Heap sort
const heapSort = array => {
console.time('Heap sort time');
// Initialize the large top heap, starting with the first non-leaf node
for (let i = Math.floor(array.length / 2 - 1); i >= 0; i--) {
heapify(array, i, array.length);
}
// Sort, each for loop to find a current maximum, the array length minus one
for (let i = Math.floor(array.length - 1); i > 0; i--) {
// The root node switches with the last node
swap(array, 0, i);
// Heapify again from the root; the last node already holds the current maximum and need not participate, so the third argument is i (only the first i elements are heapified)
heapify(array, 0, i);
}
console.timeEnd('Heap sort time');
return array;
};
// Switch the two nodes
const swap = (array, i, j) => {
let temp = array[i];
array[i] = array[j];
array[j] = temp;
};
// Make the subtree rooted at node i into a big-top heap. Note that this relies on
// the assumption that the subtrees below node i are already big-top heaps;
// heapify then sinks node i to its correct position within that heap.
// The preceding for loop calls heapify on every non-leaf node, starting from the
// last one, which is exactly what guarantees that assumption each time.
const heapify = (array, i, length) => {
  let temp = array[i]; // The current parent node
  // j starts at the left child of i and follows a path of children downwards
  for (let j = 2 * i + 1; j < length; j = 2 * j + 1) {
    temp = array[i];
    if (j + 1 < length && array[j] < array[j + 1]) {
      j++; // Pick the larger of the two children to compare with the parent
    }
    if (temp < array[j]) {
      swap(array, i, j); // Parent smaller than child: swap them
      i = j; // After the swap, temp's index becomes j
    } else {
      break; // Otherwise the heap property already holds here; stop
    }
  }
};
Test
const array = [4, 6, 8, 5, 9, 1, 2, 5, 3, 2];
console.log('original array:', array);
const newArr = heapSort(array);
console.log('newArr:', newArr);
// original array: [4, 6, 8, 5, 9, 1, 2, 5, 3, 2]
// Heap sort time: 0.15087890625ms
// newArr: [1, 2, 2, 3, 4, 5, 5, 6, 8, 9]
Analysis
- First, is heap sort an in-place sort algorithm? The whole heap sort process needs only a constant amount of temporary storage, so heap sort is an in-place algorithm.
- Second, is heap sort a stable sort algorithm? During sorting, the last node of the heap is swapped with the top node, which can change the original relative order of equal values. So heap sort is an unstable sort algorithm.
- Third, what is the time complexity of heap sort? Heap sort consists of two operations: building the heap and sorting. Building the heap takes O(n) and sorting takes O(nlogn), so heap sort as a whole takes O(nlogn). Best case: T(n) = O(nlogn). Worst case: T(n) = O(nlogn). Average case: T(n) = O(nlogn).
Animation
6. Complexity comparison of sorting algorithms
Complexity comparison
Name | Best | Average | Worst | Memory | Stability | Note |
---|---|---|---|---|---|---|
Merge sort | nlog(n) | nlog(n) | nlog(n) | n | Yes | . |
Quicksort | nlog(n) | nlog(n) | n² | log(n) | No | In the in-place version, memory complexity is usually O(log(n)). |
Shell sort | nlog(n) | Depends on the gap sequence | n(log(n))² | 1 | No | . |
Heap sort | nlog(n) | nlog(n) | nlog(n) | 1 | No | . |
Algorithm visualization tools
- algorithm-visualizer is an interactive online platform that lets you visualize algorithms from code and watch how the code executes. It aims to reveal the mechanisms behind algorithms through interactive visualizations.
- visualgo.net/en offers algorithm visualizations.
- www.ee.ryerson.ca
- illustrated-algorithms: visual representations of variables and operations augment the control flow and the actual source code. You can quickly step forward and backward through execution to get a close look at how an algorithm works.
7. Article output plan
The Beauty of JavaScript Data Structures and Algorithms is a series of articles, updated roughly every 3 to 7 days. The tentative plan is listed below.
Title | Link |
---|---|
Time and space complexity | Github.com/biaochenxuy… |
Linear lists (array, linked list, stack, queue) | Github.com/biaochenxuy… |
Implementing front-end routing: how to implement browser forward and back? | Github.com/biaochenxuy… |
Stack memory vs. heap memory, shallow copy vs. deep copy | Github.com/biaochenxuy… |
Recursion | Github.com/biaochenxuy… |
Nonlinear structures (tree, heap) | Github.com/biaochenxuy… |
Bubble sort, selection sort, insertion sort | Github.com/biaochenxuy… |
Merge sort, quicksort, Shell sort, heap sort | Github.com/biaochenxuy… |
Counting sort, bucket sort, radix sort | To be continued |
Summary of the top 10 classic sorts | To be continued |
A highly recommended GitHub data structures and algorithms project for front-end learners | Github.com/biaochenxuy… |
If there are any mistakes or inaccuracies, please point them out. Thank you very much.
8. Conclusion
All of the code and test cases in this article are on my GitHub.
Found it useful? Please bookmark it and give it a like.
Reference articles:
- Implementing heap sort in JS
- The Beauty of Data Structures and Algorithms
- Summary of the Top 10 Sorting Algorithms (JavaScript)
- All the sorting algorithms you might need, in JS