Positive Energy for Programming Apes
"Measuring programming progress by lines of code is like measuring aircraft building progress by weight." – Bill Gates
Preface
As a developer, especially one with more than three years of experience, if you spend all your time on CRUD, do you need to think about life after 35? Retire to deliver food, or go back home to farm? If CRUD is all you can do, then it's time to change yourself, for example by learning data structures and algorithms. As the saying goes, algorithms are the soul of the programmer; only by touching the depths of the soul can you see the poetry and the faraway places. Come on, young man!
1. What are we talking about in this article?
Since algorithms are the soul of the programming ape, of course we're going to shake that soul. Yes, in this article we'll cover two algorithms: merge sort and quicksort.
Merge sort
1. What is merge sort
Merge Sort is an efficient, stable sorting algorithm based on the merge operation, and a typical application of Divide and Conquer: first make each subsequence ordered, then merge the ordered subsequences to obtain the completely ordered sequence. Merging two ordered lists into one ordered list is called a two-way merge.

Can you see it from the definition alone? Maybe it's a little confusing, but merge sort is really just the divide-and-conquer idea: break a big problem into many smaller problems, solve the smaller problems, and the bigger problem is solved along the way. You might ask: isn't that recursion? Why give it another name? Here's the one thing you need to understand: divide and conquer is an idea, and recursion is an implementation technique.

If that's still hard to picture, think of it this way: first split the array to be sorted until each piece has only one or two elements, then merge the pieces again and again until you get a completely ordered array.
Now let's implement merge sort using the divide-and-conquer idea and a recursive scheme.
2. Implementing merge sort with recursion
package com.liuxing.sort;

import com.liuxing.util.Print;

/**
 * @author liuxing007
 * @ClassName MergeSort
 * @Description The core idea of merge sort is pretty simple. To sort an array,
 * we divide it into two parts in the middle, sort the two parts separately,
 * and then merge the sorted parts so that the entire array is ordered.
 *
 * Time complexity: O(nlogn)
 * Space complexity: O(n)
 * Not an in-place sorting algorithm
 * @date 2020/9/17 15:04
 */
public class MergeSort {

    public static void main(String[] args) {
        int[] arr = new int[] {6, 4, 1, 7, 2, 5, 8, 3};
        int length = arr.length;
        System.out.println("Pre-sort array ===========");
        Print.print(arr, length);
        sort(arr, length);
        System.out.println("Sorted array ===========");
        Print.print(arr, length);
    }

    /**
     * Sorting entry point
     * @param arr array to sort
     * @param l   array length
     */
    private static void sort(int[] arr, int l) {
        sortMerge(arr, 0, l - 1);
    }

    /**
     * Recursive split
     * @param arr array
     * @param p   start index
     * @param r   end index
     */
    private static void sortMerge(int[] arr, int p, int r) {
        if (p >= r) {
            return;
        }
        // The divide point: the middle index between p and r
        int index = p + (r - p) / 2;
        // Left-side recursion
        sortMerge(arr, p, index);
        // Right-side recursion
        sortMerge(arr, index + 1, r);
        merge(arr, p, index, r);
        System.out.println("The merged data");
        Print.print(arr, arr.length);
    }

    /**
     * Merge two sorted halves
     * @param arr   original array
     * @param l     start index of the left half
     * @param index end index of the left half
     * @param r     end index of the right half
     */
    private static void merge(int[] arr, int l, int index, int r) {
        // The temporary array can be optimized: creating an array on every call
        // hurts efficiency, so the buffer could be allocated once and passed in
        // as a parameter (see the sketch after this listing)
        int[] temp = new int[r - l + 1];
        // Start index of the left half
        int i = l;
        // Start index of the right half
        int j = index + 1;
        // Temporary array index
        int k = 0;
        // Compare the left half with the right half, putting the smaller
        // element into the temporary array
        while (i <= index && j <= r) {
            if (arr[i] < arr[j]) {
                temp[k++] = arr[i++];
            } else {
                temp[k++] = arr[j++];
            }
        }
        // After the comparison, one half may still have elements left over
        // Add the remaining left-half elements to the temporary array
        while (i <= index) {
            temp[k++] = arr[i++];
        }
        // Add the remaining right-half elements to the temporary array
        while (j <= r) {
            temp[k++] = arr[j++];
        }
        // Copy the elements of the temporary array back into the original array
        for (int x = 0; x < temp.length; x++) {
            arr[x + l] = temp[x];
        }
    }
}
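As the comment in merge() hints, allocating a temporary array on every call drags down efficiency. Here is a minimal sketch of that optimization (my own variant, not the original listing), assuming merge() is modified to accept the buffer as an extra parameter:

    // Sketch: allocate one shared buffer up front and reuse it in every merge
    private static void sort(int[] arr, int l) {
        int[] temp = new int[l]; // large enough for any merge in this sort
        sortMerge(arr, 0, l - 1, temp);
    }

    private static void sortMerge(int[] arr, int p, int r, int[] temp) {
        if (p >= r) {
            return;
        }
        int index = p + (r - p) / 2;
        sortMerge(arr, p, index, temp);
        sortMerge(arr, index + 1, r, temp);
        // merge() is assumed to write into temp[0..r-p] instead of a fresh array
        merge(arr, p, index, r, temp);
    }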
package com.liuxing.util;

/**
 * @author liuxing007
 * @ClassName Print
 * @Description Print utility
 * @date 2020/9/17 11:13
 */
public class Print {

    /**
     * Print data
     * @param arr    array
     * @param length array length
     */
    public static void print(int[] arr, int length) {
        for (int i = 0; i < length; ++i) {
            System.out.print(arr[i] + " ");
        }
        System.out.println("");
    }
}
Pre-sort array ===========
6 4 1 7 2 5 8 3
The merged data
4 6 1 7 2 5 8 3
The merged data
4 6 1 7 2 5 8 3
The merged data
1 4 6 7 2 5 8 3
The merged data
1 4 6 7 2 5 8 3
The merged data
1 4 6 7 2 5 3 8
The merged data
1 4 6 7 2 3 5 8
The merged data
1 2 3 4 5 6 7 8
Sorted array ===========
1 2 3 4 5 6 7 8
Process finished with exit code 0
3. Time-space complexity analysis
Let's say sorting n elements with merge sort takes time T(n); then sorting each of the two subarrays takes T(n/2). We also know that merge() merges two ordered subarrays in O(n) time. So the time complexity of merge sort can be calculated as:
T(1) = C;              when n = 1, only constant-level time is needed, denoted C
T(n) = 2*T(n/2) + n;   when n > 1
So how do we solve for T(n)? Not intuitive enough? Let's expand it step by step.
T(n) = 2*T(n/2) + n
= 2* (2*T(n/4) + n/2) + n = 4*T(n/4) + 2*n
= 4* (2*T(n/8) + n/4) + 2*n = 8*T(n/8) + 3*n
= 8* (2*T(n/16) + n/8) + 3*n = 16*T(n/16) + 4*n
......
= 2^k * T(n/2^k) + k * n
......
Continuing step by step, we get T(n) = 2^k * T(n/2^k) + k*n. When T(n/2^k) = T(1), that is, when n/2^k = 1, we get k = log2(n). Substituting k back into the formula gives T(n) = C*n + n*log2(n). In big O notation, that is T(n) = O(nlogn), so merge sort's time complexity is O(nlogn). As the principle analysis and the code show, merge sort's execution efficiency has nothing to do with how ordered the original array already is, so its time complexity is very stable: best case, worst case, or average, it is always O(nlogn). (From - Geek Time - Data Structures and Algorithms - Wang Zheng)
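If you want to sanity-check the arithmetic, here is a tiny sketch (my own addition, assuming C = 1 and n a power of two) that evaluates the recurrence directly and compares it against the closed form C*n + n*log2(n):

    public class MergeRecurrence {

        // T(1) = 1; T(n) = 2*T(n/2) + n
        static long t(long n) {
            return n == 1 ? 1 : 2 * t(n / 2) + n;
        }

        public static void main(String[] args) {
            for (long n = 1; n <= 1024; n *= 2) {
                // log2(n) is exact here because n is a power of two
                long log2n = 63 - Long.numberOfLeadingZeros(n);
                long closedForm = n + n * log2n; // C*n + n*log2(n) with C = 1
                System.out.println("n=" + n + "  T(n)=" + t(n) + "  closed form=" + closedForm);
            }
        }
    }

Both columns agree at every power of two, which matches the expansion above.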
Merge sort's time complexity is excellent, so why do we rarely see it in daily development? Let's first analyze merge sort's space complexity.
Pay attention to the merge method: it uses a temporary array to store data, but the array is released after each merge, and because the temporary array's length never exceeds the original array's length n, the space complexity of merge sort is O(n).
So why is merge sort rarely used in development? The reason is simple: it is not an in-place sorting algorithm. At this point you might wonder: what is an in-place sorting algorithm? In a nutshell, an algorithm that sorts without borrowing extra space is called an in-place sorting algorithm. Merge sort obviously borrows a temporary array, so it is not in place, and even though its time complexity is very stable, fewer people use it.
Quicksort
1. What is quicksort
If we want to sort the data in an array between subscripts p and r, we select any element between p and r as the pivot (partition point). We go over the data between p and r, putting anything less than the pivot on the left, anything greater than the pivot on the right, and the pivot in the middle. After this step, the data in the array from p to r is divided into three parts: p to q-1 in front (less than the pivot), the pivot at q in the middle, and q+1 to r behind (greater than the pivot). Following the idea of divide and conquer and recursion, we can recursively sort the data with subscripts from p to q-1 and from q+1 to r, until each interval shrinks to 1, at which point all the data is ordered. (From - Geek Time - Data Structures and Algorithms - Wang Zheng)
Quicksort's idea is similar to merge sort's: both sort via divide and conquer and are implemented with recursion; only the details differ. As just described, quicksort needs a partition point (pivot), and you can literally take any element in the array as that point. The core of merge sort is the merge; the core of quicksort is the partition. So let's look at how quicksort partitions.
As said before, after selecting a pivot, quicksort puts the elements less than the pivot on the left, the pivot in the middle, and the elements greater than the pivot on the right. One solution is easy to see: like merge sort, apply for two temporary arrays, one holding the elements less than the pivot and one holding the elements greater than it. That solves the problem and is very simple, but it runs into the same issue as merge sort: ==it is not an in-place sorting algorithm==. So what if we want quicksort to be an in-place sorting algorithm? How do we do that? It's actually not difficult; we can borrow the trick from selection sort: 【Data Structures and Algorithms】Three common sorts (bubble sort, insertion sort, selection sort)
We define a cursor i (an array subscript) that divides a[p...r-1] into two parts. The part a[p...i-1], whose elements are all less than the pivot, we call the ==sorted interval==; the part a[i...r-1] we call the ==unsorted interval==. Each round we take an element from the unsorted interval and compare it with the pivot: if it is less than the pivot, we append it to the end of the sorted interval by swapping it into a[i] and advancing i; otherwise nothing changes.
This is what one round of partition swapping produces; when every partition has been processed, the entire array is ordered. Now that we've covered the idea of quicksort, let's trace a tiny example and then look at the code.
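To make the cursor mechanics concrete, here is a small hand-worked trace of my own (not from the original post): partitioning a = {3, 1, 4, 2} with the last element, 2, as the pivot, and the cursor i starting at 0:

j=0: a[0]=3 >= 2, no change                 -> [3, 1, 4, 2], i=0
j=1: a[1]=1 <  2, swap a[1] with a[i]=a[0]  -> [1, 3, 4, 2], i=1
j=2: a[2]=4 >= 2, no change                 -> [1, 3, 4, 2], i=1
end: swap pivot a[3] with a[i]=a[1]         -> [1, 2, 4, 3], pivot lands at index 1

Everything left of the pivot is now smaller, everything right of it is larger, and the two sides are then sorted recursively.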
2. Quicksort code implementation
package com.liuxing.sort;

import com.liuxing.util.DataUtil;
import com.liuxing.util.Print;

/**
 * @author liuxing007
 * @ClassName Quicksort
 * @Description Quicksort
 *
 * To sort the data in an array between subscripts p and r, we select any
 * element between p and r as the pivot (partition point). We go over the
 * data between p and r, putting anything less than the pivot on the left,
 * anything greater than the pivot on the right, and the pivot in the middle.
 * After this step, the data from p to r is divided into three parts:
 * p to q-1 (less than the pivot), the pivot at q, and q+1 to r (greater than
 * the pivot). We then recursively sort the data from p to q-1 and from q+1
 * to r, until each interval shrinks to 1, at which point all the data is
 * ordered. (From - Geek Time - Data Structures and Algorithms - Wang Zheng)
 *
 * Time complexity: O(nlogn)
 * Space complexity: O(1)
 * An in-place, but not stable, sorting algorithm
 * @date 2020/9/18 10:22
 */
public class Quicksort {

    public static void main(String[] args) {
        int[] arr = new int[] {6, 5, 4, 3, 2, 1};
        // int[] arr = DataUtil.createIntArrData();
        int length = arr.length;
        System.out.println("Pre-sort array ===========");
        Print.print(arr, length);
        sort(arr, length);
        System.out.println("Sorted array ===========");
        Print.print(arr, length);
    }

    /**
     * Sorting entry point
     * @param arr array to sort
     * @param l   array length
     */
    private static void sort(int[] arr, int l) {
        sortRec(arr, 0, l - 1);
    }

    /**
     * Recursive sort
     * @param arr array
     * @param p   start index
     * @param r   end index
     */
    private static void sortRec(int[] arr, int p, int r) {
        // Recursion termination condition
        if (p >= r) {
            return;
        }
        // Get the partition point
        int q = partition(arr, p, r);
        // Left-side recursion
        sortRec(arr, p, q - 1);
        // Right-side recursion
        sortRec(arr, q + 1, r);
    }

    /**
     * Partition
     * @param arr original array
     * @param p   start index
     * @param r   end index
     * @return    the pivot's final index
     */
    private static int partition(int[] arr, int p, int r) {
        // Take the last element as the pivot
        int pivot = arr[r];
        int i = p;
        for (int j = p; j < r; ++j) {
            if (arr[j] < pivot) {
                if (i == j) {
                    ++i;
                } else {
                    // Swap arr[j] into the sorted interval
                    int tmp = arr[i];
                    arr[i++] = arr[j];
                    arr[j] = tmp;
                }
            }
        }
        // Move the pivot into its final position
        int tmp = arr[i];
        arr[i] = arr[r];
        arr[r] = tmp;
        return i;
    }
}
Pre-sort array ===========
6 5 4 3 2 1
Sorted array ===========
1 2 3 4 5 6
Process finished with exit code 0
3. Time-space complexity analysis
Quicksort, like merge sort, is implemented through recursion, so their time complexity is the same: O(nlogn). But quicksort is an unstable sorting algorithm, and under extreme conditions its time complexity decays to O(n^2). Why? If the original array is already ordered and the partition point is always the last element, then every partition leaves all the data on one side and none on the other, and performance degrades rapidly. So the choice of partition point is important when using quicksort.
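One common optimization (sketched here as my own variant, not part of the original code) is to pick the pivot at random instead of always taking the last element, so an already sorted input no longer triggers the worst case. Assuming the partition() method from the listing above:

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: randomized pivot selection to dodge the sorted-input worst case
    private static int randomizedPartition(int[] arr, int p, int r) {
        // Pick a random index in [p, r] and swap that element to the end, so the
        // existing partition(), which uses arr[r] as the pivot, can run unchanged
        int k = ThreadLocalRandom.current().nextInt(p, r + 1);
        int tmp = arr[k];
        arr[k] = arr[r];
        arr[r] = tmp;
        return partition(arr, p, r);
    }

sortRec() would then call randomizedPartition() instead of partition(); the expected time complexity stays O(nlogn) regardless of the input order.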
Next, let's look at quicksort's space complexity. Quicksort uses no extra arrays or other space when swapping; each swap involves only two elements and one temporary variable, so it is an in-place sorting algorithm, and its space complexity is not hard to work out: O(1).
Therefore, quicksort's time complexity is O(nlogn) and its space complexity is O(1); under extreme conditions the time complexity degenerates to O(n^2), except for an optimized quicksort.
Conclusion
Merge sort is a stable, non-in-place sorting algorithm with time complexity O(nlogn) and space complexity O(n). Quicksort is an unstable, in-place sorting algorithm with time complexity O(nlogn) and space complexity O(1); under extreme conditions its time complexity degenerates to O(n^2), except for an optimized quicksort.
Although merge sort is more stable than quicksort, quicksort is still used more in daily development. Why? The reason is simple: merge sort does not sort in place. Source code address: github.com/361426201/j…
Well then, young man, you've finished reading; won't you like and bookmark this?