I have summarized the sorting algorithms in my repository very carefully, and for each algorithm I have also made an animation to help you understand it — you are welcome to read it. In addition, I have summarized the LeetCode problems that sorting algorithms can solve in seconds, which will be published in the next article. Welcome to read that too.
If you would like animated solutions to other problems, take a look at this repository, which contains about 100 animations:
github.com/chefyuan/al…
Practical applications and details of sorting algorithms
Before we start
Inside Yuan Ji Restaurant
Chef Yuan: Waiter! The New Year is almost here, so let's hand out some year-end bonuses. Check our red-and-black-bean notebook to see how big a bonus everyone gets this year, then hand out the money person by person, ordered by amount. Everyone should go home and have a good New Year — and you're not young anymore, go home and find yourself a wife.
Waiter: Sure thing, boss. I'll get right on it.
Handing out money ordered by amount is exactly what we are going to talk about today — sorting.
Sorting is a problem we constantly face in daily life. In PE class, the teacher lines us up from shortest to tallest. In the postgraduate entrance exam, admissions are sorted by total score from high to low (to readers preparing for it: may the school you want send you an offer). When we shop online, we can sort by sales from high to low or by price from low to high, so that the items that best match our expectations are listed first.
Concept: sorting is the process of arranging a sequence of data elements by keyword (k) using some method (a sorting algorithm). For example, the sales volume and price above are both keywords.
Stability of sorting algorithm
What is the stability of sorting algorithms?
Because the sequence of records to be sorted may contain two or more records with equal keywords, the sorting result is not necessarily unique. If the original relative order of equal elements remains unchanged after sorting, the sorting method is said to be stable; otherwise it is unstable. See below.
In the example above, the array contains two identical elements, 4. We sort it with two different algorithms: after the first, the relative positions of the two 4s are unchanged, so we call that algorithm stable; after the second, their relative positions have changed, so that algorithm is unstable.
So what’s the use of the stability of the sorting algorithm?
In most problems we just sort an array and only need to consider time and space complexity; whether the algorithm is stable is generally ignored. In real software development, however, the stability of a sorting algorithm is a particularly important metric. Back to our example: we want to rank employees by year-end bonus from least to most, and, within the same bonus amount, by number of red beans from least to most.
The stability of the sorting algorithm is crucial here. Why is that? See below
In the first pass, we sort all the employees by number of red beans, from fewest to most.
In the second pass, we sort by year-end bonus with a stable algorithm, so employees with the same bonus keep the red-bean order from the first pass (their relative positions are preserved), and their red beans remain ranked from small to large. With a stable algorithm, two sorts are all we need.
Stable sorting lets the result of the first-key sort carry over to the elements whose second key is equal.
If we used an unstable sorting algorithm here, achieving the same effect would be much more complicated.
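To make this concrete, here is a minimal sketch of the two-pass idea in Java. The Employee type and its field names are invented for illustration; the key fact used is that Java's Arrays.sort on objects is a stable merge sort (TimSort), so equal-bonus employees keep the red-bean order established by the first pass.

```java
import java.util.Arrays;
import java.util.Comparator;

public class StableTwoKeySort {
    // Hypothetical record type, just for this example (requires Java 16+)
    record Employee(String name, int bonus, int redBeans) {}

    public static void main(String[] args) {
        Employee[] staff = {
            new Employee("A", 500, 30),
            new Employee("B", 300, 20),
            new Employee("C", 500, 10),
        };
        // First sort: by red beans (the secondary key)
        Arrays.sort(staff, Comparator.comparingInt(Employee::redBeans));
        // Second sort: by bonus (the primary key); because this sort is stable,
        // C (500 bonus, 10 beans) stays ahead of A (500 bonus, 30 beans)
        Arrays.sort(staff, Comparator.comparingInt(Employee::bonus));
        System.out.println(Arrays.toString(staff)); // B, C, A
    }
}
```

The same result can be had in one pass with Comparator.comparingInt(Employee::bonus).thenComparingInt(Employee::redBeans); the two-pass version is shown because it mirrors the scenario above.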
Comparison sorts and non-comparison sorts
Sorting algorithms are divided into comparison sorts and non-comparison sorts, according to whether the relative order of elements is determined by comparing them with other elements.
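As a point of contrast, here is a minimal sketch of counting sort, a typical non-comparison sort: it never compares two elements against each other, but tallies how many times each value occurs. It assumes the values are non-negative integers in a small known range, which is the classic precondition for this technique.

```java
// Counting sort sketch: assumes all values are in [0, maxValue]
public static void countingSort(int[] nums, int maxValue) {
    int[] count = new int[maxValue + 1];
    // Tally occurrences; note there is no element-vs-element comparison
    for (int v : nums) {
        count[v]++;
    }
    // Write the values back in ascending order
    int index = 0;
    for (int v = 0; v <= maxValue; ++v) {
        for (int c = 0; c < count[v]; ++c) {
            nums[index++] = v;
        }
    }
}
```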
Internal sorting and external sorting
Internal sorting keeps all the records to be sorted in memory for the entire sorting process. External sorting is used when there are too many records to fit in memory at once, so the sorting process has to exchange data with external storage many times. Common internal sorting algorithms include insertion sort, Shell sort, selection sort, bubble sort, merge sort, quick sort, heap sort, radix sort, and so on.
The performance of internal sorting is mainly affected by three aspects: time performance, auxiliary space, and the complexity of the algorithm itself.
Time performance
While a sorting algorithm runs, we mainly perform two operations: comparison and movement. Comparison is the most basic operation of a comparison sort, and movement means relocating a record from one position to another. An efficient sorting algorithm should therefore perform as few comparisons and moves as possible.
Auxiliary space
The amount of auxiliary space required to execute the algorithm is also an important indicator to measure the performance of the sorting algorithm
Complexity of the algorithm
The algorithm complexity here is not the time complexity of the algorithm, but the complexity of the algorithm itself: an overly complicated algorithm can also hurt sorting performance.
So let’s go over two simple sorting algorithms, bubble sort and simple selection sort, and see if there’s anything we missed.
Bubble Sort
When algorithm books introduce sorting, the first algorithm is usually bubble sort, mainly because its idea is the simplest and the easiest to understand (maybe its name helps too, haha). If you have already learned it, treat this as a review — let's dig into bubble sort.
The basic idea of bubble sort is to compare the keywords of adjacent records in pairs and swap them whenever they are out of order, until no reverse-ordered pairs remain. Each bubbling pass moves at least one element to its final position, so for an array of n elements, n passes are enough to finish sorting. By definition, bubble sort is clearly a comparison sort.
The simplest sort implementation
Let’s take a look at this code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        for (int i = 0; i < len; ++i) {
            for (int j = i + 1; j < len; ++j) {
                if (nums[i] > nums[j]) {
                    swap(nums, i, j);
                }
            }
        }
        return nums;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Whenever nums[i] > nums[j] we swap, so after the first pass of the outer loop nums[0] must be the minimum. So is this code bubble sort?
Obviously not: the idea of bubble sort is to compare adjacent records — note the word adjacent — so this code is not bubble sort. Let's simulate the execution of bubble sort with the GIF below; after watching it you will definitely be able to write an authentic bubble sort.
Bubble sort code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        for (int i = 0; i < len; ++i) {
            for (int j = 0; j < len - i - 1; ++j) {
                if (nums[j] > nums[j+1]) {
                    swap(nums, j, j+1);
                }
            }
        }
        return nums;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Improvement
The code above is real bubble sort, but you may have noticed a problem:
once the array becomes fully ordered we could simply return, yet in the GIF we keep looping. Is there a way to return as soon as the array is completely ordered, instead of continuing?
Think about it: we compare nums[j] with nums[j+1] and swap when the former is greater. For a perfectly ordered array, a bubbling pass triggers no swap on any comparison.
So if a whole pass finishes without a single swap, the array is already completely ordered. Can we use a flag to record whether any swap happened? Of course we can.
Bubble sort improvements
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        // Flag: records whether a swap happened in the last pass
        boolean flag = true;
        // Note the extra condition in the for loop
        for (int i = 0; i < len && flag; ++i) {
            // If no swap occurs, flag stays false and the loop ends next time
            flag = false;
            for (int j = 0; j < len - i - 1; ++j) {
                if (nums[j] > nums[j+1]) {
                    swap(nums, j, j+1);
                    // A swap occurred, so set it back to true
                    flag = true;
                }
            }
        }
        return nums;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
In this way, we avoid meaningless passes over an array that is already ordered.
Bubble sort time complexity analysis
In the best case, when the table to be sorted is already fully ordered, the improved code needs only one pass
with n−1 comparisons, giving O(n) time. In the worst case, when the table is in reverse order, we need (n−1) + (n−2) + … + 2 + 1 = n(n−1)/2 comparisons, plus swaps of the same order of magnitude, so the time complexity is O(n^2).
On average, n(n−1)/4 swaps are required; the number of comparisons is at least the number of swaps, with an upper bound of O(n^2), so the average time complexity is O(n^2).
Bubble sort space complexity analysis
Because bubble sort is just a swap between adjacent elements, using only constant extra space, the space complexity is O(1).
Stability analysis of bubble sort
So is bubble sort stable? Yes: we swap only when nums[j] > nums[j+1], so equal elements are never swapped and their relative order is preserved. Bubble sort is a stable sorting algorithm.
| Algorithm | Best time complexity | Worst time complexity | Average time complexity | Space complexity | Stable? |
| --- | --- | --- | --- | --- | --- |
| Bubble sort | O(n) | O(n^2) | O(n^2) | O(1) | Stable |
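As a quick sanity check of the O(n) best case, here is a small hedged driver (the pass counter is added purely for illustration): on already-sorted input, the flagged version performs exactly one pass.

```java
public class BubbleBestCaseDemo {
    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5};
        int passes = 0;
        boolean flag = true;
        for (int i = 0; i < sorted.length && flag; ++i) {
            flag = false; // no swap yet in this pass
            passes++;
            for (int j = 0; j < sorted.length - i - 1; ++j) {
                if (sorted[j] > sorted[j + 1]) {
                    int t = sorted[j];
                    sorted[j] = sorted[j + 1];
                    sorted[j + 1] = t;
                    flag = true;
                }
            }
        }
        // Prints "passes = 1": one pass of n-1 comparisons, then the flag stops the loop
        System.out.println("passes = " + passes);
    }
}
```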
Simple selection sort
Bubble sort completes the sort through constant swapping. The idea of simple selection sort is also easy to understand: on the i-th pass, we select the record with the smallest keyword among the remaining n−i+1 records and make it the i-th record of the ordered sequence.
For example, in the figure above, green represents sorted elements and red represents unsorted ones. We are currently pointing at 4, so we walk through the red elements, find the minimum, and swap it with 4. Notice that each pass of selection sort moves at least one element into its final position.
Now let's look at the execution process; after watching it, you will be able to write the code.
Note: to make the animation easier to follow, min is shown there as a value; in the code below, min stores an index.
Simple selection sort code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        int min = 0;
        for (int i = 0; i < len; ++i) {
            min = i;
            // Scan the unsorted part for the index of the minimum
            for (int j = i + 1; j < len; ++j) {
                if (nums[min] > nums[j]) min = j;
            }
            if (min != i) swap(nums, i, min);
        }
        return nums;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Simple selection sort time complexity analysis
Simple selection sort's biggest feature is that it exchanges and moves data relatively few times, which saves sorting time. Comparing simple selection sort with bubble sort, we find that in both the best and the worst case, the number of element comparisons is the same: the i-th pass needs n−i comparisons, where n is the array length, so in total we need (n−1) + (n−2) + … + 2 + 1 = n(n−1)/2 comparisons. For swaps, the best case needs 0 and the worst case needs n−1. So the time complexity of simple selection sort is also O(n^2), but its number of swaps is much smaller than bubble sort's, so it performs better.
Simple selection sort space complexity analysis
As can be seen from the GIF, our simple selection sort only uses extra space of constant level, so the space complexity is O(1).
Simple selection sort stability analysis
Is simple selection sort stable? Clearly not, because we search for the smallest value beyond the pointer and swap it with the value the pointer points to, as shown in the figure below.
Here we need to find the smallest of the remaining elements — element 2 — and swap it with the element the pointer points to. After the swap, the relative positions of the two equal elements 3 have changed, so simple selection sort is an unstable sorting algorithm.
| Algorithm | Best time complexity | Worst time complexity | Average time complexity | Space complexity | Stable? |
| --- | --- | --- | --- | --- | --- |
| Simple selection sort | O(n^2) | O(n^2) | O(n^2) | O(1) | Unstable |
Inside Yuan Ji Restaurant
Yuan: Ok, let’s close. Let’s play poker.
Small two: good, shopkeeper, let’s play dou landlord.
I believe we should have played poker, we usually touch cards, is not while touching cards, while managing cards, touch the new card, will insert it into the appropriate position. That’s really our idea of insertion sort.
Direct insertion sort: A new ordered table is created by inserting a record into an already ordered table. In popular understanding, we first divide the sequence into two intervals, the ordered interval and the unordered interval. We take a value in the unordered interval every time, find the appropriate insertion position in the sorted interval and insert it, and ensure that the sorted interval is always in order. Let’s take a look at the GIF.
Note: In order to express the idea of the algorithm more clearly, the algorithm is presented in the form of cutting out elements to be sorted, which will also be expressed in the future.
And you can go to this warehouse and see how this algorithm sort linked listsList insertion sort
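For reference, here is a hedged sketch of the same idea on a singly linked list (the classic LeetCode 147 setup; the ListNode class is the usual definition, reproduced here so the snippet is self-contained): each node is detached and spliced into the correct spot of an already-sorted prefix.

```java
// Minimal singly linked list node, as in the classic problem setup
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class LinkedListInsertionSort {
    public ListNode insertionSortList(ListNode head) {
        ListNode dummy = new ListNode(0);   // head of the sorted prefix
        while (head != null) {
            ListNode next = head.next;      // save the rest of the input
            ListNode prev = dummy;
            // Walk the sorted prefix to find the insertion point
            while (prev.next != null && prev.next.val < head.val) {
                prev = prev.next;
            }
            head.next = prev.next;          // splice the node in
            prev.next = head;
            head = next;
        }
        return dummy.next;
    }
}
```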
Direct insertion sort code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        // Note that i starts at 1: we begin with the second element
        for (int i = 1; i < nums.length; ++i) {
            // The value to be inserted
            int temp = nums[i];
            // Note: j is deliberately declared outside the inner loop
            int j;
            for (j = i - 1; j >= 0; --j) {
                // Shift larger elements right to make room
                if (temp < nums[j]) {
                    nums[j+1] = nums[j];
                    continue;
                }
                // Found the right place; stop shifting
                break;
            }
            // Insert into place; this is why j is defined outside the for loop
            nums[j+1] = temp;
        }
        return nums;
    }
}
```
Direct insertion sort time complexity analysis
In the best case, when the table is already ordered, we don't move any elements and need only one comparison per element to confirm its position, so the best-case time complexity is O(n).
In the worst case, when the table to be sorted is in reverse order, we need 2 + 3 + … + n = (n+2)(n−1)/2 comparisons, and the number of moves also reaches its maximum, 3 + 4 + 5 + … + (n+1) = (n+4)(n−1)/2, so the time complexity is O(n^2).
On average, inserting one element costs O(n), and we perform n−1 inserts, so the average time complexity is O(n^2).
Direct insertion sort space complexity analysis
According to the animation, insert sort does not require additional storage space, so its space complexity is O(1).
Direct insertion sort stability analysis
From the code we know that we only shift elements strictly larger than temp, so equal elements keep their relative positions after sorting. Direct insertion sort is therefore a stable sorting algorithm.
Shell’s Sort
We mentioned earlier that direct insertion sort is very efficient when the records are basically ordered, or when there are few elements. When the records are basically ordered, only a few inserts are needed to finish sorting the whole table; when there are few elements, it is also very efficient — for example, in Java's Arrays.sort(), when the number of elements is less than 47, the algorithm used is insertion sort. So what is the relationship between Shell sort and direct insertion sort?
Shell sort, also known as "diminishing increment sort", is an improved variant of direct insertion sort. Its idea, simply put, is insertion sort with a span (gap) that gradually shrinks until it becomes 1. By the time the span reaches 1, the records are basically ordered, and a final direct insertion sort finishes the job.
Basically ordered: small keywords are mostly near the front, large keywords mostly near the back, and medium keywords mostly in the middle. See below.
Now that we have seen the basic idea of Shell sort, let's use a diagram to illustrate its execution steps.
Grouping the array and roughly tuning it step by step, then finishing with direct insertion sort — that is Shell sort. The grouping spans we just used (4, 2, 1) are called the increments of Shell sort. The increments above are halved each round; this naive scheme, proposed when the algorithm was invented, is known as Shell's increments.
Let's use a GIF to simulate the execution of Shell sort with Shell's increments.
You have watched the simulation; the code is not hard to write, and it will look familiar once you see it.
Shell sort code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int increment = nums.length;
        // Note the loop end condition
        while (increment > 1) {
            // The increment rule can be chosen freely; here it is halved
            increment = increment / 2;
            // Group the array by increment
            for (int i = 0; i < increment; ++i) {
                // This should look familiar: insertion sort with a gap
                for (int j = i + increment; j < nums.length; j += increment) {
                    int temp = nums[j];
                    int k;
                    for (k = j - increment; k >= 0; k -= increment) {
                        if (temp < nums[k]) {
                            nums[k + increment] = nums[k];
                            continue;
                        }
                        break;
                    }
                    nums[k + increment] = temp;
                }
            }
        }
        return nums;
    }
}
```
We said that the increments can be chosen by ourselves; the example above used Shell's increments. Now look at the following example and see what happens when we use Shell's increments.
We find that no elements are exchanged within any group, whether the increment is 4 or 2; the array is not adjusted at all until the increment becomes 1, at which point the algorithm degrades to direct insertion sort. So in this case, is Shell sort actually less efficient than insertion sort?
Shell's increments shrink in equal proportion each round, so blind spots like this arise — which is why the selection of increments is critical.
Here are two typical increment sequences: Sedgewick's increments and Hibbard's increments.
Sedgewick's increment sequence is:
1, 5, 19, 41, 109, …
The general term is 9×4^k − 9×2^k + 1 (interleaved with terms of the form 4^k − 3×2^k + 1).
With this increment sequence, the worst-case time complexity of Shell sort is O(n^(4/3)).
Hibbard's increment sequence is:
1, 3, 7, 15, …
The general term is 2^k − 1.
With this increment sequence, the worst-case time complexity of Shell sort is O(n^(3/2)).
These are two typical increment schemes, but choosing the best increment sequence is still an open mathematical problem. One thing to note: the last increment in the sequence must be 1. A sketch using Hibbard's increments follows below.
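As a hedged sketch of plugging in a different increment sequence, here is Shell sort reworked to use Hibbard's increments 1, 3, 7, 15, … — generated as 2^k − 1 and consumed from largest to smallest; the gapped insertion in the inner loops is unchanged from the code above.

```java
import java.util.ArrayList;
import java.util.List;

class ShellSortHibbard {
    public int[] sortArray(int[] nums) {
        // Generate Hibbard increments 1, 3, 7, 15, ... below nums.length
        List<Integer> gaps = new ArrayList<>();
        for (int gap = 1; gap < nums.length; gap = gap * 2 + 1) {
            gaps.add(gap);
        }
        // Consume the increments from largest to smallest, ending at 1
        for (int g = gaps.size() - 1; g >= 0; --g) {
            int increment = gaps.get(g);
            // Gapped insertion sort, same as the Shell sort above
            for (int i = increment; i < nums.length; ++i) {
                int temp = nums[i];
                int k;
                for (k = i - increment; k >= 0; k -= increment) {
                    if (temp < nums[k]) {
                        nums[k + increment] = nums[k];
                    } else {
                        break;
                    }
                }
                nums[k + increment] = temp;
            }
        }
        return nums;
    }
}
```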
Shell sort time complexity analysis
The time complexity of Shell sort depends on the chosen increment sequence and falls roughly between O(n^1.3) and O(n^2). The sorting algorithms before it were essentially O(n^2); Shell sort was one of the first algorithms to break through that barrier.
Shell sort space complexity analysis
As the animation shows, Shell sort uses only constant extra space, so its space complexity is O(1).
Shell sort stability analysis
Look at the figure below and let's analyze the stability of Shell sort.
As the figure shows, if we choose 4 as the span, the relative positions of the two identical elements 2 change after the exchange, so Shell sort is an unstable sorting algorithm.
Quick sort
Today we are going to talk about quick sort. This algorithm is also a high-frequency interview topic; the principle is very simple, so let's take it apart together.
So let’s talk about the basic idea of quicksort.
1. Pick a base number (pivot) from the array.
2. Partition the array into two parts by moving elements larger than the base to one side and smaller elements to the other.
3. Repeat step 2 on the left and right intervals until each interval contains only one number.
See below
The diagram above illustrates one pass of quick sort. We then recursively apply the above process to the left region [3,1,2] and the right region [7,6,5,8], until each region shrinks to a single element — that is step 3 — at which point all the data is in order.
To put it simply: one partitioning pass uses the base number to split the records into two independent parts, one whose keywords are smaller than the base and one whose keywords are larger; we then sort the two parts recursively in the same way until the whole sequence is ordered.
I think we now know the idea of quick sort. Remember merge sort? Both use divide and conquer — so what's the difference between them? See below.
Note: For quicksort we use the first element of the sequence as the base number
Although merge sort and quick sort both use the divide-and-conquer idea, merge sort works bottom-up: it deals with the subproblems first and then merges, combining small sets into large ones until the whole is sorted. Quick sort works top-down: it partitions first, then deals with the subproblems. Merge sort is a stable sorting algorithm with O(nlogn) time complexity, but it is not an in-place algorithm — the main reason, as covered in the merge sort section, is that the merge function cannot be executed in place. Quick sort, through a cleverly designed in-place partition function, sorts in place, which solves merge sort's problem of using too much memory.
Given this idea, the core of the algorithm is how to partition the records around the base number. Here we introduce two easy-to-understand methods: one digs holes and fills in numbers, and the other swaps elements using two pointers.
Digging holes and filling numbers
Let's first introduce the hole-digging partition method.
We take the first element as the base number, which leaves a hole at its position. If nums[hight] is greater than the base number, move the hight pointer left until an element smaller than the base is found, and fill it into the hole; a new hole then appears at the hight position. Now move the low pointer right to find an element greater than the base and fill it into the new hole. Repeat until the partition is complete.
Let’s just look at our video simulation, and see at a glance.
Note: In order to facilitate understanding, it adopts the form of excavation to display
If it’s easy to understand, let’s go straight to the code.
```java
class Solution {
    public int[] sortArray(int[] nums) {
        quickSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void quickSort(int[] nums, int low, int hight) {
        if (low < hight) {
            int index = partition(nums, low, hight);
            quickSort(nums, low, index - 1);
            quickSort(nums, index + 1, hight);
        }
    }

    public int partition(int[] nums, int low, int hight) {
        int pivot = nums[low];
        while (low < hight) {
            // Move the hight pointer left to find an element smaller than the pivot
            while (low < hight && nums[hight] >= pivot) {
                hight--;
            }
            // Fill the hole at low
            if (low < hight) nums[low] = nums[hight];
            // Move the low pointer right to find an element greater than the pivot
            while (low < hight && nums[low] <= pivot) {
                low++;
            }
            // Fill the hole at hight
            if (low < hight) nums[hight] = nums[low];
        }
        // Put the base number into its final position
        nums[low] = pivot;
        return low;
    }
}
```
Swapping elements
Now let's look at the swapping idea. The principle is the same, and the implementation is also simple.
See below.
The low pointer finds an element greater than the pivot, the hight pointer finds an element less than the pivot, and the two elements swap positions; finally the base number is swapped back into its place. Both methods are easy to understand and implement — even if you have never studied quick sort, you can write them yourself once you understand the idea. Let's continue with the video simulation of this method.
Both methods are easy to implement and very beginner-friendly, so go get your AC.
```java
class Solution {
    public int[] sortArray(int[] nums) {
        quickSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void quickSort(int[] nums, int low, int hight) {
        if (low < hight) {
            int index = partition(nums, low, hight);
            quickSort(nums, low, index - 1);
            quickSort(nums, index + 1, hight);
        }
    }

    public int partition(int[] nums, int low, int hight) {
        int pivot = nums[low];
        int start = low;
        while (low < hight) {
            while (low < hight && nums[hight] >= pivot) hight--;
            while (low < hight && nums[low] <= pivot) low++;
            if (low >= hight) break;
            swap(nums, low, hight);
        }
        // Swap the base number back into its final position
        swap(nums, start, low);
        return low;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Time complexity analysis of quicksort
Quick sort is implemented recursively, so its time performance depends on the depth of the quick sort recursion tree. If every partition operation splits the array into two parts of nearly equal size, the recursion tree is balanced and performance is at its best: the depth of the recursion tree is logn, the same as merge sort's, and each level of partitioning requires one scan over the array, costing O(n) — so in the best case, the time complexity of quick sort is O(nlogn).
In most cases, however, we cannot split the array evenly every time. For example, when the array is already in order or in reverse order, i.e. [1,2,3,4] or [4,3,2,1], we hit the worst case: n−1 levels of recursion are needed, and the time complexity degrades to O(n^2).
Spatial complexity analysis of quicksort
Quick sort's space usage comes mainly from the stack space used by recursion, which in the best case is O(logn), corresponding to the depth of the recursion tree. In the worst case, n−1 recursive calls are needed, and the space complexity is O(n).
Stability analysis of quicksort
Quicksort is an unstable sorting algorithm because the comparison and exchange of keywords is performed by jumping, as shown in the figure below.
Either way, after the first sort, the yellow one will be ahead of the red one, so quicksort is an unstable sort algorithm.
Okay, that's pretty much it for basic quick sort — now let's look at how to optimize it.
Iterative version of quick sort
This version is fairly simple to implement with the help of an explicit stack, mainly using its last-in-first-out property. What needs attention is the order in which interval bounds are pushed onto the stack — a small detail worth noticing.
```java
import java.util.Stack;

class Solution {
    public int[] sortArray(int[] nums) {
        Stack<Integer> stack = new Stack<>();
        stack.push(nums.length - 1);
        stack.push(0);
        while (!stack.isEmpty()) {
            int low = stack.pop();
            int hight = stack.pop();
            if (low < hight) {
                int index = partition(nums, low, hight);
                // Push each interval's hight first and its low second,
                // so that low is popped first
                stack.push(index - 1);
                stack.push(low);
                stack.push(hight);
                stack.push(index + 1);
            }
        }
        return nums;
    }

    public int partition(int[] nums, int low, int hight) {
        int pivot = nums[low];
        int start = low;
        while (low < hight) {
            while (low < hight && nums[hight] >= pivot) hight--;
            while (low < hight && nums[low] <= pivot) low++;
            if (low >= hight) break;
            swap(nums, low, hight);
        }
        swap(nums, start, low);
        return low;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Quicksort optimization
Median-of-three
In the examples above we used nums[low] as the base value, but what happens in special cases? See below.
Following the steps above with the first element as the base, after one full pass we find that only 2 and 7 have swapped, because 7 is the maximum of the sequence. The selection of the pivot is therefore particularly important: we should avoid choosing the maximum or minimum of the sequence as far as possible. The median-of-three method helps us choose the base value.
Take the median of three elements (first, middle, last) and place it at nums[low] as the base value. This avoids using the maximum or minimum as the base.
So we can add these few lines of median-of-three code:

```java
int mid = low + ((hight - low) >> 1);
if (nums[low] > nums[hight]) swap(nums, low, hight);
if (nums[mid] > nums[hight]) swap(nums, mid, hight);
if (nums[mid] > nums[low]) swap(nums, mid, low);
```
After these swaps, the median is at nums[low], the maximum at nums[hight], and the minimum at nums[mid]. In the example, 3 is then chosen as the base value, avoiding the situation where the maximum or minimum becomes the base.
Median-of-three code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        quickSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void quickSort(int[] nums, int low, int hight) {
        if (low < hight) {
            int index = partition(nums, low, hight);
            quickSort(nums, low, index - 1);
            quickSort(nums, index + 1, hight);
        }
    }

    public int partition(int[] nums, int low, int hight) {
        // Median-of-three pivot selection; other schemes work as well
        int mid = low + ((hight - low) >> 1);
        if (nums[low] > nums[hight]) swap(nums, low, hight);
        if (nums[mid] > nums[hight]) swap(nums, mid, hight);
        if (nums[mid] > nums[low]) swap(nums, mid, low);
        // The rest is the same as before, just a few more lines of code
        int pivot = nums[low];
        int start = low;
        while (low < hight) {
            while (low < hight && nums[hight] >= pivot) hight--;
            while (low < hight && nums[low] <= pivot) low++;
            if (low >= hight) break;
            swap(nums, low, hight);
        }
        swap(nums, start, low);
        return low;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Combining with insertion sort
As we said before, insertion sort is at its most efficient when there are few elements; with few elements, quick sort does not outperform insertion sort. So we can set a threshold: when the number of elements is greater than the threshold we use quick sort, and when it is less than or equal to the threshold we switch to insertion sort. Here we set the threshold to 7.
Median-of-three + insertion sort
```java
class Solution {
    private static final int INSERTION_SORT_MAX_LENGTH = 7;

    public int[] sortArray(int[] nums) {
        quickSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void quickSort(int[] nums, int low, int hight) {
        // Small subrange: switch to insertion sort
        if (hight - low <= INSERTION_SORT_MAX_LENGTH) {
            insertSort(nums, low, hight);
            return;
        }
        int index = partition(nums, low, hight);
        quickSort(nums, low, index - 1);
        quickSort(nums, index + 1, hight);
    }

    public int partition(int[] nums, int low, int hight) {
        // Median-of-three pivot selection; other schemes work as well
        int mid = low + ((hight - low) >> 1);
        if (nums[low] > nums[hight]) swap(nums, low, hight);
        if (nums[mid] > nums[hight]) swap(nums, mid, hight);
        if (nums[mid] > nums[low]) swap(nums, mid, low);
        int pivot = nums[low];
        int start = low;
        while (low < hight) {
            while (low < hight && nums[hight] >= pivot) hight--;
            while (low < hight && nums[low] <= pivot) low++;
            if (low >= hight) break;
            swap(nums, low, hight);
        }
        swap(nums, start, low);
        return low;
    }

    public void insertSort(int[] nums, int low, int hight) {
        for (int i = low + 1; i <= hight; ++i) {
            int temp = nums[i];
            int j;
            // Note: j stops at low, not 0, because we sort only the subrange
            for (j = i - 1; j >= low; --j) {
                if (temp < nums[j]) {
                    nums[j + 1] = nums[j];
                    continue;
                }
                break;
            }
            nums[j + 1] = temp;
        }
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
Median-of-three + three-way partition + insertion sort
Let's move on to the next case.
After one partitioning pass the array looks like this; we then continue the same process on the left and right of the blue base value, i.e. on [2,3,6,3,1,6] and [8,6]. Notice that the array after one pass contains many repeated elements.
So why not gather the elements equal to the base together? That would greatly reduce the size of the intervals handled by the recursive calls, as shown in the figure below.
This shrinks our intervals by dividing the array into three parts: a left interval smaller than the base, a right interval larger than the base, and a middle interval equal to the base.
Let's see how to achieve that — first, a video simulation.
Let's break it down; it's quite simple. We use a pathfinder pointer i. When nums[i] is greater than pivot, we swap it with the element at the right pointer; the element that comes back from the right side is still unknown territory, so i does not move, and we do right--. When nums[i] is less than pivot, we swap it with the element at the left pointer; the element at left always equals pivot, so after the swap we do left++ and i++. When nums[i] equals pivot, we simply do i++ and move on to the next element. The same idea also solves the classic Dutch national flag problem — see the sketch after the code below.
All right, let's jump right into the code.
```java
class Solution {
    private static final int INSERTION_SORT_MAX_LENGTH = 7;

    public int[] sortArray(int[] nums) {
        quickSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void quickSort(int[] nums, int low, int hight) {
        // Small subrange: switch to insertion sort
        if (hight - low <= INSERTION_SORT_MAX_LENGTH) {
            insertSort(nums, low, hight);
            return;
        }
        // Median-of-three pivot selection
        int mid = low + ((hight - low) >> 1);
        if (nums[low] > nums[hight]) swap(nums, low, hight);
        if (nums[mid] > nums[hight]) swap(nums, mid, hight);
        if (nums[mid] > nums[low]) swap(nums, mid, low);
        // Three-way partition
        int left = low, i = low + 1, right = hight;
        int pivot = nums[low];
        while (i <= right) {
            if (pivot < nums[i]) {
                swap(nums, i, right);
                right--;        // the element brought back is unknown, so i stays
            } else if (pivot == nums[i]) {
                i++;
            } else {
                swap(nums, left, i);
                left++;         // nums[left] always equals pivot, so both advance
                i++;
            }
        }
        quickSort(nums, low, left - 1);
        quickSort(nums, right + 1, hight);
    }

    public void insertSort(int[] nums, int low, int hight) {
        for (int i = low + 1; i <= hight; ++i) {
            int temp = nums[i];
            int j;
            // Note: j stops at low, not 0, because we sort only the subrange
            for (j = i - 1; j >= low; --j) {
                if (temp < nums[j]) {
                    nums[j + 1] = nums[j];
                    continue;
                }
                break;
            }
            nums[j + 1] = temp;
        }
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
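As a side note, the same three-way idea solves the Dutch national flag problem mentioned above. A minimal sketch for the classic "sort an array of 0s, 1s, and 2s" formulation (LeetCode 75), with the value 1 playing the role of the pivot:

```java
class DutchFlag {
    // Partition an array of 0s, 1s and 2s in one pass,
    // exactly like the three-way partition with the pivot fixed to 1
    public void sortColors(int[] nums) {
        int left = 0, i = 0, right = nums.length - 1;
        while (i <= right) {
            if (nums[i] == 0) {          // smaller than the "pivot": send left
                swap(nums, left++, i++);
            } else if (nums[i] == 2) {   // larger: send right, don't advance i
                swap(nums, i, right--);
            } else {                     // equal to the "pivot": just advance
                i++;
            }
        }
    }

    private void swap(int[] nums, int i, int j) {
        int t = nums[i];
        nums[i] = nums[j];
        nums[j] = t;
    }
}
```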
Well, that covers the commonly used optimizations. There are other optimizations — such as median-of-nine (ninther) pivot selection and optimizing the recursive calls — that are not described here; take a look if you're interested. That's all for this part; see you next time. Bye.
Heap sort
Before we talk about heap sort, let's first ask: what is a heap? The heap data structure has a great many application scenarios, so we need to master it!
And before we get to heaps, let's quickly review: what is a complete binary tree?
Baidu Baike's definition of a complete binary tree: leaf nodes may appear only on the lowest two levels, and the leaf nodes on the lowest level are concentrated on the left side of the tree.
In other words: every level except the last is full, and the leaf nodes of the last level are packed to the left.
Let’s take a look at some examples
In the examples above, (1) and (4) are complete binary trees, while (2) and (3) are not. Through these examples we now know what a complete binary tree is.
So what exactly is a heap?
Now let’s look at the binary heap requirements
(1) It must be a complete binary tree
(2) The value of every node in the binary heap must be greater than or equal to (or less than or equal to) the value of every node in its subtrees.
If every node is greater than or equal to every node in its subtrees, we call it a big top heap (max-heap); if every node is less than or equal to every node in its subtrees, we call it a small top heap (min-heap). See below.
Now let’s look at a concrete example of a binary heap.
The pictures above show a big top heap and a small top heap. Let's review the requirements of the heap again and check that they are met.
(1) It must be a complete binary tree
(2) The value of every node in the heap must be greater than or equal to (or less than or equal to) the value of every node in its subtrees.
OK, now that we fully understand binary heaps, how do we store one? Since a heap is a complete binary tree, we can store it in an array. The idea is shown in the figure below: we simply store the nodes in the array level by level, demonstrated here with a small top heap.
Note: we store the binary heap starting at subscript 1, which saves a little computation. Below, we refer to the binary heap simply as the heap.
Let’s see why we can use arrays to store heaps.
Let’s look at the root node, which is 1, and it’s subscript 1 in the array, and its left child node, which is 4, has an index of 2, and its right child node, which is 2, has an index of 3.
Do we see a connection?
In the array, the subscript of a node (non-leaf node) is I, the subscript of the left child is 2* I, the right child is 2i+1**, and the parent of ** is I /2. Since we can find the left and right children of a node by index, we have no problem with array storage.
So now that we know what a heap is and how to store a heap in arrays, how do we accomplish heap sorting?
Heap sort actually has two main steps
- Building the heap
- The sorting
Let’s take a look at building a heap
We just said that an array can store a big top (or small top) heap, in which every element is already greater than or equal to (or less than or equal to) the nodes in its subtrees. But a random array does not necessarily satisfy the requirement above, so we need to adjust it until it meets the big top or small top heap requirement. This process is called heapification, or building the heap.
There are two ways to build a heap: floating up (swim), where we keep inserting new elements into the heap, and sinking down (sink), where we traverse the parent nodes and keep sinking them into place. Let's look at both.
Let’s start with the first way to build a heap
Build the heap using the float operation
Before that, let's see how to insert a new element into an already built heap — just look at the example and you'll see.
Suppose we insert a new element 1 (the green node). We find that 1 is less than its parent 7, violating the small top heap rule, so we need to move element 1: swap 1 with 7. (If the newly inserted element were greater than the value of its parent, the small top heap rule would still hold after insertion, and no swap would be needed.)
We said earlier that the heap is stored in an array and a node's parent can be found via i/2, so now we know what to do:
swap the inserted node with its parent.
After the swap, we continue comparing the newly inserted element, 1, with its new parent. If it is greater than its parent, no swap is needed and the loop ends; if it is smaller, we keep swapping until it reaches the right place. Have you figured out what to do? Let's go straight to the GIF.
Compare the new element with its parent and check whether it is smaller (for a small top heap); if so, swap them, and repeat until the element reaches a position consistent with the heap rule. A big top heap does the opposite.
Our newly inserted element floats up one level at a time until it finds its place — we call this the float (swim) operation.
Now that we know how to float, can't we build a heap with it? Yes: we can iterate through the array, treating each element as a new insertion into the heap, until all elements are processed — then the heap is built. That certainly works.
Let’s take a look at the float operation code.
```java
// Float (swim) operation for a small top heap stored from index 1
public void swim(int index) {
    while (index > 1 && nums[index / 2] > nums[index]) {
        swap(index / 2, index); // swap with the parent
        index = index / 2;      // keep floating up
    }
}
```
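As a hedged sketch of building a heap this way (same 1-based layout, the heap growing to occupy nums[1..n] as each element is treated as a fresh insertion):

```java
// A sketch of heap construction by repeated insertion (swim),
// for a small top heap stored in nums[1..n] (index 0 unused)
public static void buildHeapBySwim(int[] nums, int n) {
    for (int k = 2; k <= n; ++k) {   // nums[1..1] is trivially a heap
        int index = k;               // float the newly "inserted" element up
        while (index > 1 && nums[index / 2] > nums[index]) {
            int temp = nums[index / 2];
            nums[index / 2] = nums[index];
            nums[index] = temp;
            index = index / 2;
        }
    }
}
```

Building by repeated swim costs O(nlogn) in the worst case, which is one reason the sink-based construction described next is preferred.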
Now that we know how to build a heap with the float operation, let's look at building one with the sink operation, which is also easy to understand.
Building the heap by sinking
Suppose we are given an unordered array (one that does not satisfy the heap requirements), as shown below.
We find that 7 sits at the top of the heap, but this is not yet a small top heap, so we need to move 7 to where it belongs. How do we do that?
Without further ado, let's watch the video simulation — after watching it, you are guaranteed to understand.
After watching the video you get the general idea, but did you notice this detail: why does 7 swap with its left child 2 the first time, but with its right child 3 the second time? See below.
It's easy to understand: we must swap with the smaller of the two children, because we want the parent to be smaller than both children. If in the first step we had swapped 7 with 5 instead, the result would still not satisfy the small top heap.
So how do we know that a node has found its place? There are two main cases:
- The element being sunk is smaller than (or, in a big top heap, greater than) both of its children; it already satisfies the heap rule and need not sink further, like 6 in the figure above.
- The element sinks until it becomes a leaf node and has no children, like 7 in the figure.
We call the above operation the sink operation.
Then we have a question again: I understand the sink operation, but what on earth does it have to do with building a heap?
Don’t worry, let’s go ahead and watch the video, and this time we’re going to build a big top heap by sinking.
Initial array [8,5,7,9,2,10,1,4,6,3]
Let’s unpack the video. We just need to start at the last non-leaf node and do the sink operations one by one. Once that’s done, we’re ready to heap. Did you get it all at once?
Ok, let’s look at the code for the sink operation.
```java
// Sink operation for a big top heap stored in nums[1..len]
public void sink(int[] nums, int index, int len) {
    while (2 * index <= len) {
        // Get the left child
        int j = 2 * index;
        // If the right child exists and is larger, pick it:
        // the parent must be swapped with the larger of the two children
        if (j + 1 <= len && nums[j] < nums[j + 1]) {
            j++;
        }
        if (nums[index] < nums[j]) {
            swap(nums, index, j);
        } else {
            break;
        }
        // Continue sinking
        index = j;
    }
}
```
Ok, so now that we know both ways to build a heap, how do we sort it?
Before sorting, let's look at how to delete the top of the heap while making sure the remaining elements still satisfy the heap requirements. Think about how to do that — see below.
Suppose we want to remove 11 from the top of the heap. We swap it with the last node of the heap, which is 2, then perform a sink operation on the new top; the result still satisfies the heap, as shown in the figure below.
Well, with that you've basically already learned how to sort. Don't believe it? Then watch the video.
Ok, so if you’ve got it all figured out, let’s summarize the implementation of heap sort
1. Build the heap. Building by sinking is more efficient: find the last non-leaf node, then traverse backwards from it, performing the sink operation on each node.
2. Sort. Swap the heap top element (the maximum) with the last element, then sink the new top element within the shrunken heap; repeat this operation until one element remains, and the sort is complete.
Ok, let’s look at the code
```java
class Solution {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        // Copy into a 1-based array: elements live in a[1..len]
        int[] a = new int[len + 1];
        for (int i = 0; i < nums.length; ++i) {
            a[i+1] = nums[i];
        }
        // Build a big top heap by sinking, starting from the last non-leaf node
        for (int i = len / 2; i >= 1; --i) {
            sink(a, i, len);
        }
        int k = len;
        // Sort: repeatedly move the top (maximum) to the end and re-sink
        while (k > 1) {
            swap(a, 1, k--);
            sink(a, 1, k);
        }
        // Copy back into the original array
        for (int i = 1; i < len + 1; ++i) {
            nums[i-1] = a[i];
        }
        return nums;
    }

    public void sink(int[] nums, int k, int end) {
        // Sink nums[k] within the heap nums[1..end]
        while (2 * k <= end) {
            int j = 2 * k;
            // Find the larger child
            if (j + 1 <= end && nums[j + 1] > nums[j]) {
                j++;
            }
            if (nums[j] > nums[k]) {
                swap(nums, j, k);
            } else {
                break;
            }
            k = j;
        }
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```
OK, that wraps up heap sort. On the whole, heap sort is a bit harder to understand than the other sorting algorithms; the key point is building the heap. It is also widely applied, so remember to practice it.
Ok, let’s analyze heap sort’s time complexity, space complexity, and stability.
Heapsort time complexity analysis
Because building the heap takes O(n) time and the sorting phase takes O(nlogn), the total time complexity of heap sort is O(nlogn).
Heapsort space complexity analysis
Note that in the description above, for easier visualization, we left index 0 of the array empty so that the left and right children could be found via i*2 and i*2+1. We could equally well store the heap from index 0 and find the children via i*2+1 and i*2+2; then we would not need the temporary array or the shifting of all elements back by one position. Heap sort itself needs only constant extra space, so its space complexity is O(1) — it is an in-place sorting algorithm. A sketch of that 0-based variant follows below.
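Here is a hedged sketch of that 0-based, truly in-place variant: the children of node i are at 2*i+1 and 2*i+2, the last non-leaf node is at len/2 − 1, and no temporary array is needed.

```java
class HeapSortZeroBased {
    public int[] sortArray(int[] nums) {
        int len = nums.length;
        // Build a big top heap in place: the last non-leaf node is at len/2 - 1
        for (int i = len / 2 - 1; i >= 0; --i) {
            sink(nums, i, len);
        }
        // Repeatedly move the maximum to the end of the shrinking heap
        for (int end = len - 1; end > 0; --end) {
            int t = nums[0]; nums[0] = nums[end]; nums[end] = t;
            sink(nums, 0, end);
        }
        return nums;
    }

    // Sink nums[k] within nums[0..size-1]; 0-based children are 2k+1 and 2k+2
    private void sink(int[] nums, int k, int size) {
        while (2 * k + 1 < size) {
            int j = 2 * k + 1;
            if (j + 1 < size && nums[j + 1] > nums[j]) j++; // pick the larger child
            if (nums[j] > nums[k]) {
                int t = nums[k]; nums[k] = nums[j]; nums[j] = t;
                k = j;
            } else {
                break;
            }
        }
    }
}
```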
Heapsort stability analysis
Heap sort is not a stable sorting algorithm: during sorting we swap the last node of the heap with the top node, which can change the original relative order of equal elements.
Finally, let’s compare quicksort with heap sort
1. Quick sort accesses data sequentially, while heap sort accesses data in jumps, which is not friendly to the CPU cache.
2. For the same data, heap sort performs more data exchanges than quick sort.
These two points explain why, in real development, heap sort's performance is usually not as good as quick sort's.
Well, that’s all for today and I’ll see you next time.
Merge sort
Merge sort is a sorting algorithm you must master, and it is also a frequent interview topic, so let's take a good swing at merge sort. The principle is very simple — you will surely understand it.
Inside Yuan Ji Restaurant
The 23rd God of Cooking Tournament is open!
Chef Yuan wants to select the most outstanding chef among his four branches to take part in the God of Cooking competition. The selection rules are as follows.
First round: each branch selects two chefs for an in-store PK, and the in-store winner is chosen.
Second round: each store's winner then represents the store and challenges the winner of another store (the semi-final).
Third round: the two remaining winners compete, and the final winner is chosen.
The schematic diagram is as follows.
The example above should feel familiar — the process of merge sort is actually quite similar to this knockout audition. Let's take a look.
The word "merge" means to combine, and in data structures, merging is defined as combining two or more ordered tables into one new ordered table. The merge sort we discuss here is a sorting method built on this merging idea.
Merge sort uses the divide-and-conquer idea. As the name implies, divide and conquer solves a big problem by breaking it into several smaller subproblems; once the small subproblems are solved, the big problem is solved too. A dedicated article on divide and conquer will come later, so it is only briefly mentioned here.
Let’s use a picture to describe the data transformation of merge sort, as shown in the following figure.
So we’ve looked briefly at the idea of merge sort, and from the description above, we can see that merging is a very difficult algorithm to implement, which is the main point of this algorithm, and we’ll get the general idea after watching this video.
Do you understand the merging steps in the video? Don’t worry if you don’t understand them. Let’s break them down.
Step 1: Create an extra large set to store the merge result, and the length is the sum of the two smaller sets, as you can see in the video
Step 2: We compare the values pointed to by the two Pointers from left to right, store the smaller one in the larger set, then move the pointer, and continue to compare until all the elements of a small set are stored in the larger set. See below
Step 3: When all the elements of a small set are put into the large set, all the remaining elements of another small set should be put into the large set, as shown in the following figure
If you follow the videos and diagrams, you will have the general idea — and once you know how the algorithm works, writing the code is quite easy.
The recursive implementation
Let’s look at the code.
```java
class Solution {
    public int[] sortArray(int[] nums) {
        mergeSort(nums, 0, nums.length - 1);
        return nums;
    }

    public void mergeSort(int[] arr, int left, int right) {
        if (left < right) {
            int mid = left + ((right - left) >> 1);
            mergeSort(arr, left, mid);
            mergeSort(arr, mid + 1, right);
            merge(arr, left, mid, right);
        }
    }

    // Merge arr[left..mid] and arr[mid+1..right]
    public void merge(int[] arr, int left, int mid, int right) {
        // Step 1: define a new temporary array
        int[] temparr = new int[right - left + 1];
        int temp1 = left, temp2 = mid + 1;
        int index = 0;
        // Step 2: compare the values the two pointers point to
        // and store the smaller one into the large collection
        while (temp1 <= mid && temp2 <= right) {
            if (arr[temp1] <= arr[temp2]) {
                temparr[index++] = arr[temp1++];
            } else {
                temparr[index++] = arr[temp2++];
            }
        }
        // Step 3: append the remaining elements of the non-empty half
        if (temp1 <= mid) System.arraycopy(arr, temp1, temparr, index, mid - temp1 + 1);
        if (temp2 <= right) System.arraycopy(arr, temp2, temparr, index, right - temp2 + 1);
        // Copy the merged result back into the original array
        System.arraycopy(temparr, 0, arr, left, right - left + 1);
    }
}
```
Merge sort time complexity analysis
During merging we need to move all the elements of the two small collections into the large collection, scanning every record in the sequence to be sorted, so each level of merging costs O(n). Merge sort splits the collection in half layer by layer; by the depth of the corresponding complete binary tree, the whole sorting process needs logn (rounded up) levels, so the total time complexity is O(nlogn). Moreover, the running time of merge sort is unrelated to how ordered the original array already is, so its time complexity is O(nlogn) in the best, worst, and average cases. Although merge sort's time complexity is very stable, its range of application is not as wide as quick sort's, because merge sort is not an in-place sorting algorithm — its space complexity is not O(1). So what is it?
Spatial complexity analysis of merge sort
All the temporary arrays created by merge sort are released when the method returns; the maximum extra space used by a single merge is n, so the space complexity of merge sort is O(n).
Stability analysis of merge sort
For the stability of merge sort, look at the merge function. We wrote the comparison as arr[temp1] <= arr[temp2]: when the two elements are equal, the value at arr[temp1] (from the left half) enters the collection first, so the relative positions of two equal elements do not change. Merge sort is therefore a stable sorting algorithm.
| Algorithm | Best time complexity | Worst time complexity | Average time complexity | Space complexity | Stable? |
| --- | --- | --- | --- | --- | --- |
| Merge sort | O(nlogn) | O(nlogn) | O(nlogn) | O(n) | Stable |
We’re not done yet. Don’t go.
Iterative implementation
The recursive implementation of merge sort is more common and easier to understand; now let's look at the iterative version of merge sort and see how it works.
Let's illustrate the idea of the iterative approach with a video.
Got the general idea from the video? Let's analyze it.
The iterative implementation merges small collections into large ones with sizes 1, 2, 4, 8, …, iterating in sequence, as shown below.
Say the small collections have size 1: the two small collections are [3] and [1]. We merge [3] and [1] into a temporary array according to the merge rules (see the first video), smaller values first, which achieves the sorting; then we copy the elements of the temporary array back to the original array. One merge is done.
The merging continues with [4] and [6]; the steps are the same. After all the size-1 collections are merged, the small collection size becomes 2, and we continue with the previous steps, as shown below.
Now the subset size is 2, so [2,5] and [1,3] are merged into the temporary array according to the same rules to complete the sorting. That is how the iterative method works.
Let’s go straight to the code.
Note: Recursive and iterative merge functions have the same code.
```java
class Solution {
    public int[] sortArray(int[] nums) {
        // k is the subset size: 1, 2, 4, 8, 16, ...
        int k = 1;
        int len = nums.length;
        while (k < len) {
            mergePass(nums, k, len);
            k *= 2;
        }
        return nums;
    }

    public void mergePass(int[] array, int k, int len) {
        int i;
        // Merge adjacent pairs of subsets of size k
        for (i = 0; i < len - 2 * k; i += 2 * k) {
            merge(array, i, i + k - 1, i + 2 * k - 1);
        }
        // Merge the last two sequences (the tail may be shorter than k)
        if (i + k < len) {
            merge(array, i, i + k - 1, len - 1);
        }
    }

    public void merge(int[] arr, int left, int mid, int right) {
        // Step 1: define a new temporary array
        int[] temparr = new int[right - left + 1];
        int temp1 = left, temp2 = mid + 1;
        int index = 0;
        // Step 2: store the smaller of the two pointed-to values each time
        while (temp1 <= mid && temp2 <= right) {
            if (arr[temp1] <= arr[temp2]) {
                temparr[index++] = arr[temp1++];
            } else {
                temparr[index++] = arr[temp2++];
            }
        }
        // Step 3: append the remaining elements of the non-empty half
        if (temp1 <= mid) System.arraycopy(arr, temp1, temparr, index, mid - temp1 + 1);
        if (temp2 <= right) System.arraycopy(arr, temp2, temparr, index, right - temp2 + 1);
        // Copy the merged result back into the original array
        System.arraycopy(temparr, 0, arr, left, right - left + 1);
    }
}
```
In addition, I will later publish the article on problems that sorting algorithms can solve in seconds. If you need it now, you can visit my repository. Repository address: github.com/chefyuan/al…
It contains many other animated problem solutions — you are welcome to read them.