This article is based on JDK 1.8.0_211 and uses the java.util.Arrays.sort() method to discuss the sorting algorithms Java currently uses. It contains only personal views and notes, so please point out any problems. TimSort, which originated in Python, was introduced in JDK 1.7 to replace the legacy merge sort.
Introduction
The sorting algorithms used in the Arrays.sort method mainly involve the following three: DualPivotQuicksort, merge sort and TimSort, plus some sorting algorithms that are not based on comparison, such as counting sort. The algorithm actually used is chosen dynamically based on element type and input length.
- When the input array is of a primitive type, dual-pivot quicksort is used, supplemented by counting sort.
public static void sort(int[] a) {
    DualPivotQuicksort.sort(a, 0, a.length - 1, null, 0, 0);
}
- When the input array is of an object (wrapper) type, either legacy merge sort or TimSort is used (TimSort is the default unless legacy merge sort is explicitly requested)
public static void sort(Object[] a) {
    if (LegacyMergeSort.userRequested)
        legacyMergeSort(a);
    else
        ComparableTimSort.sort(a, 0, a.length, null, 0, 0);
}

public static <T> void sort(T[] a, Comparator<? super T> c) {
    if (c == null) {
        sort(a);
    } else {
        if (LegacyMergeSort.userRequested)
            legacyMergeSort(a, c);
        else
            TimSort.sort(a, 0, a.length, c, null, 0, 0);
    }
}

/** To be removed in a future release. */
private static <T> void legacyMergeSort(T[] a, Comparator<? super T> c) {
    T[] aux = a.clone();
    if (c == null)
        mergeSort(aux, a, 0, a.length, 0);
    else
        mergeSort(aux, a, 0, a.length, 0, c);
}
Sorting stability
Suppose the sequence of records to be sorted contains multiple records with equal keys. A sorting algorithm is stable if the relative order of these records is preserved: if r[i] = r[j] and r[i] precedes r[j] in the original sequence, then r[i] still precedes r[j] after sorting. Otherwise the algorithm is unstable.
Why stability matters: if a second sort must preserve the result of an earlier sort, a stable algorithm is required.
Example: we want to rank a set of goods that have two attributes, price and sales volume. Suppose they are first sorted by price from high to low, and then sorted again by sales volume. If goods with the same sales volume should still appear in price order from high to low, the second sort must use a stable algorithm.
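To make this concrete, here is a minimal, self-contained sketch; the Item class and the sample data are made up for illustration, and it relies on the fact that Arrays.sort on object arrays is stable (it uses TimSort):

import java.util.Arrays;
import java.util.Comparator;

public class StableSortDemo {
    // Hypothetical "goods" record with a price and a sales volume
    static final class Item {
        final String name;
        final int price;
        final int sales;
        Item(String name, int price, int sales) { this.name = name; this.price = price; this.sales = sales; }
        @Override public String toString() { return name + "(price=" + price + ",sales=" + sales + ")"; }
    }

    public static void main(String[] args) {
        Item[] goods = {
            new Item("A", 300, 10), new Item("B", 100, 20),
            new Item("C", 200, 20), new Item("D", 150, 10)
        };
        // First sort: price, high to low
        Arrays.sort(goods, Comparator.comparingInt((Item x) -> x.price).reversed());
        // Second sort: sales volume, high to low. Because the sort is stable,
        // goods with equal sales keep the price order established by the first sort.
        Arrays.sort(goods, Comparator.comparingInt((Item x) -> x.sales).reversed());
        System.out.println(Arrays.toString(goods));
        // [C(price=200,sales=20), B(price=100,sales=20), A(price=300,sales=10), D(price=150,sales=10)]
    }
}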
Quicksort and dual-pivot quicksort
Quicksort Introduction
Single-pivot quicksort is the most common form of quicksort. We select a reference value (the pivot) and partition the array to be sorted into two parts, elements greater than the pivot and elements less than the pivot, then recursively quicksort each part. This approach is less efficient when the input array contains many repeated elements.
Three-way single-pivot quicksort is an optimized version that mainly improves efficiency when there are many equal elements. It also selects one reference value, but partitions the array into three parts: less than the pivot, equal to the pivot, and greater than the pivot.
Dual-pivot quicksort selects two reference values, pivot1 and pivot2, and ensures that pivot1 <= pivot2. Its implementation is similar to three-way partitioning, but the array is divided into these three parts: less than pivot1, between pivot1 and pivot2, and greater than pivot2.
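The sketch below shows the dual-pivot partitioning idea in its simplest form. It is only an illustration, not the JDK implementation, which additionally picks the pivots from five sampled elements, falls back to insertion sort on small ranges, and includes further refinements:

public class DualPivotSketch {
    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6, 3, 8, 2, 7};
        dualPivotQuickSort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 2, 3, 5, 5, 6, 7, 8, 9]
    }

    static void dualPivotQuickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        if (a[lo] > a[hi]) swap(a, lo, hi);        // ensure pivot1 <= pivot2
        int pivot1 = a[lo], pivot2 = a[hi];
        int lt = lo + 1;   // a[lo+1 .. lt-1]  <  pivot1
        int gt = hi - 1;   // a[gt+1 .. hi-1]  >  pivot2
        int i  = lo + 1;   // a[lt .. i-1] lies between pivot1 and pivot2
        while (i <= gt) {
            if (a[i] < pivot1) {
                swap(a, i++, lt++);
            } else if (a[i] > pivot2) {
                swap(a, i, gt--);                  // the element swapped in is unexamined, so i stays put
            } else {
                i++;
            }
        }
        swap(a, lo, --lt);                         // move pivot1 into its final place
        swap(a, hi, ++gt);                         // move pivot2 into its final place
        dualPivotQuickSort(a, lo, lt - 1);         // part 1: less than pivot1
        dualPivotQuickSort(a, lt + 1, gt - 1);     // part 2: between the two pivots
        dualPivotQuickSort(a, gt + 1, hi);         // part 3: greater than pivot2
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}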
DualPivotQuicksort optimizations
Java's DualPivotQuicksort implementation does not blindly apply dual-pivot quicksort; it selects a sorting algorithm based on the input type and length.
(1) For int, long, float and double, the algorithm is chosen by length as follows:
- When the number of elements to be sorted is less than 47, insertion sort is used
- When the number of elements is less than 286, dual-pivot quicksort is used
- When the number of elements is greater than 286, merge sort is used
Below we refer to this combination as the standard dual-pivot quicksort:
static void sort(int[] a, int left, int right,
                 int[] work, int workBase, int workLen) {
    // Use quicksort on small arrays
    if (right - left < QUICKSORT_THRESHOLD) {
        sort(a, left, right, true);
        return;
    }
    // Use merge sort (doMerge() stands in for the actual merging logic)
    doMerge();
}

private static void sort(int[] a, int left, int right, boolean leftmost) {
    int length = right - left + 1;
    // Use insertion sort on tiny arrays
    if (length < INSERTION_SORT_THRESHOLD) {
        doInsertionSort();
        return;
    }
    doDualPivotQuickSort();
}
(2) For short and char, the algorithm is chosen by length as follows:
- When the number of elements is less than 3200, the standard dual-pivot quicksort is used.
- When the number of elements is greater than 3200, counting sort is used.
(3) For byte, the algorithm is chosen by length as follows:
- When the number of elements is less than 29, insertion sort is used.
- When the number of elements is greater than 29, counting sort is used.
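For reference, these thresholds correspond to the following constants; the names are those used in JDK 8's java.util.DualPivotQuicksort, and the values simply restate the numbers above:

// Excerpted threshold constants (JDK 8 DualPivotQuicksort)
private static final int INSERTION_SORT_THRESHOLD = 47;
private static final int QUICKSORT_THRESHOLD = 286;
private static final int COUNTING_SORT_THRESHOLD_FOR_BYTE = 29;
private static final int COUNTING_SORT_THRESHOLD_FOR_SHORT_OR_CHAR = 3200;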
Non-comparison-based sorting: counting sort
Unlike traditional comparison-based algorithms, counting sort does not sort by comparing elements. There are three common non-comparison sorts: bucket sort, counting sort (a special case of bucket sort) and radix sort; readers interested in the others can look them up one by one. Here we focus on counting sort.
Preconditions
To use counting sort, the values to be sorted must come from a bounded range and must be representable as integers.
Introduction
Counting sort, as its name implies, counts occurrences: the value of each input element is used as an index into a second array (Count), and the value stored at that index records how many times the element occurs. Finally, the Count array is expanded back out to produce the sorted output, as in the following example:
Suppose we have a set of numbers in the range 0 to 10: 9, 3, 5, 4, 9, 1, 2, 7, 8, 1, 3, 6, 5, 3, 4, 0, 10, 9, 7, 9
Step 1: create an array Count to record the number of occurrences of each element; since the values range from 0 to 10, Count needs 11 slots.
Step 2: iterate over the input array and, for each element read, increment the Count slot at the corresponding index by 1; for example, reading a 9 increments Count[9].
Continuing in this way, iterating over the whole input array produces the fully populated Count array.
Count now records how many times each element occurs in the input. Writing each index out as many times as it was counted yields the final sorted output:
0, 1, 1, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 9, 9, 9, 10
Stability and final implementation of counting sort
From the definition of stability above, it is not hard to see that the procedure just described cannot guarantee stability (which matters, for example, when counting sort is used as the subroutine of radix sort). In fact counting sort is a stable sorting algorithm; the following example leads to its final, stable implementation.
Example: the input array is [0, 9, 5, 4, 5], with values in the range 0 to 9.
Step 1: convert the Count array from raw counts to prefix sums, so that each index stores the number of occurrences of the current element plus the occurrences of all smaller elements. The value stored at an index is then that element's (1-based) position in the sorted output. For example, the value at index 9 is 5, so 9's sorted position is 5.
Step 2: traverse the original sequence from back to front. The last element is 5, and Count[5] is 4, so place this 5 at position 4 (index 3) of the final sorted sequence and decrement Count[5] by 1.
And so on: when the earlier 5 is reached, Count[5] is now 3 (index 2), and this is exactly what guarantees the stability of counting sort.
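The walkthrough above translates into a small stable counting sort. The sketch below is for illustration only; it assumes non-negative int values up to a known max, and it is not the JDK byte implementation shown below:

import java.util.Arrays;

public class StableCountingSort {

    static int[] countingSort(int[] input, int max) {
        int[] count = new int[max + 1];
        for (int v : input) {
            count[v]++;                          // raw occurrence counts
        }
        for (int i = 1; i <= max; i++) {
            count[i] += count[i - 1];            // step 1: prefix sums = 1-based final positions
        }
        int[] output = new int[input.length];
        for (int i = input.length - 1; i >= 0; i--) {   // step 2: walk the input from back to front
            output[--count[input[i]]] = input[i];        // decrement the slot, then place the element there
        }
        return output;
    }

    public static void main(String[] args) {
        int[] a = {0, 9, 5, 4, 5};
        System.out.println(Arrays.toString(countingSort(a, 9))); // [0, 4, 5, 5, 9]
    }
}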
Source code
/** The number of distinct byte values. */
private static final int NUM_BYTE_VALUES = 1 << 8;

static void sort(byte[] a, int left, int right) {
    int[] count = new int[NUM_BYTE_VALUES];

    // Build the count array
    for (int i = left - 1; ++i <= right;
        count[a[i] - Byte.MIN_VALUE]++
    );

    // Write the values back, starting from the tail
    for (int i = NUM_BYTE_VALUES, k = right + 1; k > left; ) {
        while (count[--i] == 0);
        byte value = (byte) (i + Byte.MIN_VALUE);
        int s = count[i];

        do {
            a[--k] = value;
        } while (--s > 0);
    }
}
TimSort
TimSort is an industrial-grade algorithm that combines insertion sort, merge sort, binary search and other techniques. Its highlight is that it takes full advantage of the fact that the data to be sorted may already be partially ordered, and it dynamically changes strategy based on the content of the data (selective merging and galloping). TimSort is also a stable algorithm; its best case beats ordinary merge sort and its worst case is comparable to merge sort.
Optimization for small inputs
For short input arrays, a lightweight sort is sufficient, so TimSort applies the following optimization based on the input length:
When the length of the input sequence is below a threshold, insertion sort is used on it directly. The threshold is 64 in Python and 32 in Java.
Beyond plain insertion sort, the Java implementation optimizes this step further: strictly speaking, it uses binary insertion sort, which combines insertion sort with binary search. Because everything to the left of the insertion index is already an ordered sequence:
- Traditional insertion sort compares elements one by one until it finds where the newly inserted element belongs.
- Binary insertion sort uses binary search to find the position for the inserted element.
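A minimal sketch of the idea follows; it is not the JDK's TimSort.binarySort, which additionally takes a start index marking the already-sorted prefix and works on object arrays:

// Binary insertion sort: a[0 .. i-1] is always sorted, and binary search
// finds where a[i] belongs instead of comparing element by element.
static void binaryInsertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int pivot = a[i];
        int lo = 0, hi = i;
        while (lo < hi) {                        // binary search for the insertion point
            int mid = (lo + hi) >>> 1;
            if (pivot < a[mid])
                hi = mid;
            else
                lo = mid + 1;                    // equal elements stay in order, keeping the sort stable
        }
        System.arraycopy(a, lo, a, lo + 1, i - lo);  // shift the tail one slot to the right
        a[lo] = pivot;                               // drop the element into place
    }
}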
TimSort: brief process
TimSort is a hybrid sorting algorithm that uses insertion sort and merge sort.
The algorithm reorders the input array from left to right by finding consecutive (disjoint) sorted segments (called "runs" from here on). If a run is too short, it is extended using insertion sort. The lengths of the generated runs are added to an array named runLen. Whenever a new run is added to runLen, a method named mergeCollapse merges runs until the last 3 elements in runLen satisfy the following two conditions (the "invariant"):
- runLen[n-2] > runLen[n-1] + runLen[n]
- runLen[n-1] > runLen[n]
Here n is the index of the last run in runLen. The intention is that checking this invariant on the top 3 runs in runLen in fact guarantees that all runs satisfy it. At the very end, all runs are merged, yielding a sorted version of the input array.
Basic concept – Run
In the TimSort algorithm an ordered subsequence, either ascending or strictly descending, is called a Run. Take the sequence 1, 2, 3, 4, 5, 4, 3, 6, 8, 10 as an example.
The runs in this sequence are: [1, 2, 3, 4, 5], [4, 3], [6, 8, 10].
TimSort distinguishes ascending from descending runs: a descending run is reversed in place to make it ascending, after which the sequence above becomes 1, 2, 3, 4, 5, 3, 4, 6, 8, 10.
Note: a Run must be either ascending (equal elements allowed) or strictly descending (equal elements not allowed). Descending runs must be strict because they are reversed in place, and reversing a run containing equal elements would break TimSort's stability.
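A sketch of run detection on an int array follows; it mirrors the logic of TimSort's countRunAndMakeAscending (the JDK versions work on object arrays with a comparator):

// Returns the length of the run starting at lo (hi is exclusive); a descending
// run is reversed in place so that every run ends up ascending.
static int countRunAndMakeAscending(int[] a, int lo, int hi) {
    int runHi = lo + 1;
    if (runHi == hi)
        return 1;
    if (a[runHi++] < a[lo]) {                    // strictly descending run
        while (runHi < hi && a[runHi] < a[runHi - 1])
            runHi++;
        reverseRange(a, lo, runHi);
    } else {                                     // ascending run (equal elements allowed)
        while (runHi < hi && a[runHi] >= a[runHi - 1])
            runHi++;
    }
    return runHi - lo;
}

static void reverseRange(int[] a, int lo, int hi) {  // reverses a[lo, hi)
    hi--;
    while (lo < hi) {
        int t = a[lo]; a[lo++] = a[hi]; a[hi--] = t;
    }
}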
Basic concept – MinRun
When runs are merged, efficiency is highest when the number of runs is equal to, or slightly less than, a power of 2 (see listsort.txt in the References for the underlying math).
We therefore want a minimum run length, MinRun, chosen so that dividing the input length by it gives a run count that meets the criterion above. MinRun is taken from the range 32-64 in Python (16-32 in Java), so that the input array is split into roughly N runs where N is equal to, or slightly less than, a power of 2 (the computation is sketched after the list below).
- If the current Run is shorter than MinRun, following elements are pulled into it by insertion sort so that it reaches the MinRun length, as far as the remaining elements allow;
- If the current Run is already at least MinRun long, nothing more is done.
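The MinRun computation itself is tiny. The following is essentially the JDK's TimSort.minRunLength: it returns n itself when n < 32, 16 when n is a power of two, and otherwise a value between 16 and 32 such that n divided by it is close to, but less than, a power of two:

private static final int MIN_MERGE = 32;   // 64 in the Python implementation

private static int minRunLength(int n) {
    assert n >= 0;
    int r = 0;                  // becomes 1 if any bit shifted off was 1
    while (n >= MIN_MERGE) {
        r |= (n & 1);
        n >>= 1;
    }
    return n + r;
}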
The runs obtained this way are pushed onto the stack runLen, and we then try to merge them:
/**
 * A stack of pending runs yet to be merged. Run i starts at
 * address base[i] and extends for len[i] elements. It's always
 * true (so long as the indices are in bounds) that:
 *
 *     runBase[i] + runLen[i] == runBase[i + 1]
 *
 * so we could cut the storage for this, but it's a minor amount,
 * and keeping all the info explicit simplifies the code.
 */
private int stackSize = 0; // Number of pending runs on stack
private final int[] runBase;
private final int[] runLen;

/*
 * Snippet from the constructor:
 * allocate the stack of runs to be merged (it is never enlarged).
 */
int stackLen = (len < 120 ? 5 : len < 1542 ? 10 : len < 119151 ? 24 : 49);
runBase = new int[stackLen];
runLen = new int[stackLen];

/**
 * Pushes the specified run onto the pending-run stack.
 *
 * @param runBase index of the first element in the run
 * @param runLen  the number of elements in the run
 */
private void pushRun(int runBase, int runLen) {
    this.runBase[stackSize] = runBase;
    this.runLen[stackSize] = runLen;
    stackSize++;
}
Merge principle – inequality
For the following reasons:
(1) to keep the memory footprint of the runLen stack minimal, it has a fixed size and is never enlarged (see the source comment above);
(2) to keep the lengths of the two runs being merged as close as possible, which is the favourable case for merging and improves merge efficiency;
TimSort imposes two inequalities on runLen so that it stays as close as possible to these conditions:
- Run[n-1] > Run[n] + Run[n+1]
- Run[n] > Run[n+1]
If the current runLen satisfies both inequalities, no merge is performed; otherwise a merge is performed. Because TimSort is a stable sorting algorithm, only adjacent runs may be merged:
- If inequality 1 is not satisfied, Run[n] is merged with the shorter of Run[n-1] and Run[n+1]
- If inequality 2 is not satisfied, Run[n] is merged with Run[n+1]
What the invariants mean:
Invariant 1: reading the lengths from the top of the stack downwards (right to left in runLen), the pending runs grow at least as fast as the Fibonacci sequence, which keeps the two runs in any merge close in size.
Invariant 2: the pending run lengths in runLen are in descending order.
From these two points it follows that the number of runs held in runLen stays bounded, so a very small run stack is enough to sort a very long input array; this is exactly why the implementation never needs to grow the stack. See the References for a detailed mathematical treatment.
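As a rough sketch of the bound, write L_i for runLen[i] and F_k for the k-th Fibonacci number, and assume MinRun >= 16 and that the invariants hold for every stack entry (which, as discussed later, the original implementation does not fully guarantee):

L_n \ge 16, \quad L_{n-1} \ge L_n + 1, \quad L_{n-k} \ge L_{n-k+1} + L_{n-k+2} \ (k \ge 2)
\;\Longrightarrow\; L_{n-k} \ge 16\,F_{k+1}
\;\Longrightarrow\; \sum_{k=0}^{d-1} L_{n-k} \;\ge\; 16\,(F_{d+2} - 1)

So the total number of elements covered by a stack of depth d grows exponentially in d, which is why a fixed stack of at most 49 entries suffices in the Java implementation.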
Java, Python and Android maintain the invariants by checking whether the three runs at the top of the stack satisfy them; that is, the n in the inequalities above is the second element from the top of the stack. If the check fails, a merge is performed.
/**
 * Examines the stack of runs waiting to be merged and merges adjacent runs
 * until the stack invariants are reestablished:
 *
 *     1. runLen[i - 3] > runLen[i - 2] + runLen[i - 1]
 *     2. runLen[i - 2] > runLen[i - 1]
 *
 * This method is called each time a new run is pushed onto the stack,
 * so the invariants are guaranteed to hold for i < stackSize upon
 * entry to the method.
 */
private void mergeCollapse() {
    while (stackSize > 1) {
        int n = stackSize - 2;
        if (n > 0 && runLen[n-1] <= runLen[n] + runLen[n+1]) {
            if (runLen[n - 1] < runLen[n + 1])
                n--;
            mergeAt(n);
        } else if (runLen[n] <= runLen[n + 1]) {
            mergeAt(n);
        } else {
            break; // Invariant is established
        }
    }
}
Merging runs
Merge optimization 1: memory usage
Because TimSort must remain stable, the merge of two runs cannot be done fully in place. TimSort therefore allocates a temporary array and copies the smaller of the two runs being merged into it, which limits the extra memory required. It also distinguishes between merging from the low end (mergeLo) and from the high end (mergeHi).
Merge optimization 2: shortening the merge length
Large parts of the two runs being merged often do not actually need to participate in the merge: elements at the front of the first run that are smaller than every element of the second run, and elements at the tail of the second run that are larger than every element of the first run, are already in their final positions. Since the two runs are adjacent in the original input array, we can trim those parts off both ends to shorten the merge (a small illustration follows the two points below):
(1) Find the position i in RunA where RunB[0] would be inserted; the elements of RunA before i take no part in the merge and do not need to move;
(2) Find the position j in RunB where RunA[len-1] would be inserted; the elements of RunB after j take no part in the merge and do not need to move.
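A small self-contained illustration of the trimming idea follows; the data is made up, and plain linear scans stand in for the JDK's gallopLeft/gallopRight, which combine exponential (galloping) search with binary search:

import java.util.Arrays;

public class TrimMergeDemo {
    public static void main(String[] args) {
        int[] runA = {1, 3, 5, 7, 9};
        int[] runB = {4, 6, 20, 30};

        // (1) Elements at the front of runA that are <= runB[0] are already in place:
        int i = insertionPoint(runA, runB[0]);               // 2, so runA[0..1] = {1, 3} is skipped
        // (2) Elements at the tail of runB that are greater than runA's last element are already in place:
        int j = insertionPoint(runB, runA[runA.length - 1]); // 2, so runB[2..3] = {20, 30} is skipped

        System.out.println("merge only runA " + Arrays.toString(Arrays.copyOfRange(runA, i, runA.length))
                + " with runB " + Arrays.toString(Arrays.copyOfRange(runB, 0, j)));
        // prints: merge only runA [5, 7, 9] with runB [4, 6]
    }

    // first index whose element is greater than key (linear scan; the JDK gallops instead)
    static int insertionPoint(int[] run, int key) {
        int k = 0;
        while (k < run.length && run[k] <= key) k++;
        return k;
    }
}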
Merge optimization 3: galloping mode
The following extreme case can occur during a merge:
All elements of RunA are less than all elements of RunB; plain element-by-element merging is clearly not ideal here.
TimSort therefore introduces galloping mode. During a merge, if N consecutive elements are taken from the same run, TimSort assumes that even more elements from that run are likely to follow, so it leaves normal merging and enters galloping mode; once the galloping condition is no longer met, it falls back to normal merging (staying in galloping mode when the condition is not met is inefficient). In other words, if many elements of RunA in a row are less than the current element of RunB, it is likely that RunA will keep winning (and vice versa), so TimSort jumps out of the normal merge and enters galloping mode. The threshold for entering galloping mode is MIN_GALLOP, whose default value is 7.
Let's walk through galloping, using mergeHi (merging from the high end) as an example:
Suppose RunA has won the comparison 7 times in a row while no element of RunB has been selected; the threshold is reached and galloping mode is entered.
On entering galloping mode, 7 elements of RunA in a row have already been chosen ahead of the current element of RunB, so TimSort guesses that still more elements of RunA will qualify and performs the following operation:
This completes one gallop: all elements of RunA greater than RunB[len-1] are moved in one go (with RunB[len-1] placed immediately after the moved RunA elements), with no further element-by-element merging.
The next question is: do we keep galloping, or go back to normal merging?
We check whether the number of elements moved in this gallop still meets the threshold; if so, galloping continues (next searching for RunB[len-2] within RunA), otherwise we return to normal merging.
The Java implementation also adjusts the galloping threshold every time it enters or leaves galloping mode, as a kind of penalty/reward mechanism; the details are not covered here.
Implementation defects of TimSort
The implementation of TimSort in Java 8 is flawed and can throw an exception in extremely contrived cases. The root cause is that checking the inequalities on only the top three runs of the stack cannot guarantee that they hold for the entire runLen stack (see the Reference for a worked example).
Once the inequalities are broken, later merges can break them further, and in some extreme cases the number of runs on the stack can exceed the bound that the inequalities were supposed to guarantee, overflowing the fixed-size stack with an ArrayIndexOutOfBoundsException. This bug has been fixed in later versions.
The fix is to check the four runs at the top of the stack instead of three:
private void newMergeCollapse() {
    while (stackSize > 1) {
        int n = stackSize - 2;
        if (   (n >= 1 && runLen[n-1] <= runLen[n] + runLen[n+1])
            || (n >= 2 && runLen[n-2] <= runLen[n] + runLen[n-1])) {
            if (runLen[n - 1] < runLen[n + 1])
                n--;
        } else if (runLen[n] > runLen[n + 1]) {
            break; // Invariant is established
        }
        mergeAt(n);
    }
}
For those interested in manually debugging this issue, see the following Git operations: github.com/Rekent/java…
Annotated source excerpts
This section is included only so that readers can match the concepts above against the corresponding source. First, the entry method:
static <T> void sort(T[] a, int lo, int hi, Comparator<? super T> c, T[] work, int workBase, int workLen) {
assert c != null && a != null && lo >= 0 && lo <= hi && hi <= a.length;
int nRemaining = hi - lo;
if (nRemaining < 2) {
return; // Arrays of size 0 and 1 are always sorted
}
// If there are fewer than 32 elements, skip TimSort and sort the whole array directly with binary insertion sort
if (nRemaining < MIN_MERGE) {
// Get the first sequence (Run)
int initRunLen = countRunAndMakeAscending(a, lo, hi, c);
// Binary insertion sort (start after the first Run, which is already ascending and needs no further sorting)
binarySort(a, lo, hi, lo + initRunLen, c);
return;
}
MyTimSort<T> ts = new MyTimSort<>(a, c, work, workBase, workLen);
// Generate the minimum Run length based on the current sorted array length
int minRun = minRunLength(nRemaining);
do {
// Identify the next Run sequence and return the length of the Run
int runLen = countRunAndMakeAscending(a, lo, hi, c);
// If the current Run sequence length is less than the MinRun length, then try to extend to the MinRun length (select elements from behind for binary sorting).
if (runLen < minRun) {
// If fewer than MinRun elements remain, extend only by what remains
int force = nRemaining <= minRun ? nRemaining : minRun;
binarySort(a, lo, lo + force, lo + runLen, c);
runLen = force;
}
// Record the start Index of the current Run and the Run length
ts.pushRun(lo, runLen);
// Try merging
ts.mergeCollapse();
// Start the next Run search
lo += runLen;
nRemaining -= runLen;
} while (nRemaining != 0);
// All runs have been found and must be merged
assert lo == hi;
ts.mergeForceCollapse();
assert ts.stackSize == 1;
}
Merge method 1 (mergeAt): here the merge length is shortened and the run to copy into the temporary array is chosen, before the final merge starts:
/**
 * Merges run i with run i+1. Run i must be the penultimate or antepenultimate
 * run on the stack.
 *
 * @param i stack index of the first of the two runs to merge
 */
private void mergeAt(int i) {
// In other words, i must be stackSize - 2 or stackSize - 3
assert stackSize >= 2;
assert i >= 0;
assert i == stackSize - 2 || i == stackSize - 3;
int base1 = runBase[i];
int len1 = runLen[i];
int base2 = runBase[i + 1];
int len2 = runLen[i + 1];
assert len1 > 0 && len2 > 0;
assert base1 + len1 == base2;
// Record the length of the combined run: len1 + len2
// [RUN1,RUN2,RUN3] -> [RUN1+RUN2,RUN3] ; [RUN1,RUN2] -> [RUN1+RUN2]
runLen[i] = len1 + len2;
if (i == stackSize - 3) {
runBase[i + 1] = runBase[i + 2];
runLen[i + 1] = runLen[i + 2];
}
stackSize--;
// Shorten the merge length: Find the starting node of Run2 in Run1 (the first element of Run2 should be placed in Run1), ignore the previous elements of Run1 (because they are already sorted)
int k = gallopRight(a[base2], a, base1, len1, 0, c);
assert k >= 0;
// !!! The first k elements of Run1 are already in place and can be skipped, since no element of Run2 is smaller than them
base1 += k;
len1 -= k;
// If the remaining sort length is 0, it is sorted and no longer needs to be sorted.
if (len1 == 0) {
return;
}
// Shorten the merge length: find where the last element of Run1 should be placed in Run2. This position of Run2 can be omitted and not sorted because all subsequent elements are greater than Run1
len2 = gallopLeft(a[base1 + len1 - 1], a, base2, len2, len2 - 1, c);
assert len2 >= 0;
// If the remaining sort length is 0, it is sorted and no longer needs to be sorted.
if (len2 == 0) {
return;
}
// Select the shorter side as the temporary array length
// mergeLo: put LEN1 into the temporary array, mergeHi: put Len2 into the temporary array
if (len1 <= len2) {
mergeLo(base1, len1, base2, len2);
    } else {
        mergeHi(base1, len1, base2, len2);
    }
}
The underlying merge (mergeHi, merging from the high end) and galloping:
/**
 * Like mergeLo, except that this method should be called only if len1 >= len2;
 * mergeLo should be called if len1 <= len2. (Either method may be called if
 * len1 == len2.)
 *
 * @param base1 index of the first element of Run1
 * @param len1  length of Run1
 * @param base2 index of the first element of Run2 (must be base1 + len1)
 * @param len2  length of Run2
 */
private void mergeHi(int base1, int len1, int base2, int len2) {
assert len1 > 0 && len2 > 0 && base1 + len1 == base2;
T[] a = this.a; // For performance
T[] tmp = ensureCapacity(len2);
int tmpBase = this.tmpBase;
System.arraycopy(a, base2, tmp, tmpBase, len2);
int cursor1 = base1 + len1 - 1;
int cursor2 = tmpBase + len2 - 1;
int dest = base2 + len2 - 1;
// Move last element of first run and deal with degenerate cases
a[dest--] = a[cursor1--];
if (--len1 == 0) {
System.arraycopy(tmp, tmpBase, a, dest - (len2 - 1), len2);
return;
}
// Degenerate case: only one element of Run2 remains. Because of the trimming in mergeAt, that element is smaller than every remaining element of Run1, so shift Run1 right by one and put it in front.
if (len2 == 1) {
    dest -= len1;       // dest now points to where the single Run2 element will go
    cursor1 -= len1;    // cursor1 now points just before Run1's remaining elements
    System.arraycopy(a, cursor1 + 1, a, dest + 1, len1); // shift the rest of Run1 right by one slot
    a[dest] = tmp[cursor2]; // place the single remaining Run2 element in front of Run1
return;
}
Comparator<? super T> c = this.c; // Use local variable for performance
int minGallop = this.minGallop; // "" "" "
outer: while (true) {
// Start normal merge and record the number of times Run1 and Run2 win the selection. If it is greater than minGallop, jump into GallopingMode
int count1 = 0; // Number of times in a row that first run won
int count2 = 0; // Number of times in a row that second run won
/* * Do the straightforward thing until (if ever) one run * appears to win consistently. */
do {
assert len1 > 0 && len2 > 1;
if (c.compare(tmp[cursor2], a[cursor1]) < 0) {
a[dest--] = a[cursor1--];
count1++;
count2 = 0;
if (--len1 == 0) {
                break outer;
            }
        } else {
a[dest--] = tmp[cursor2--];
count2++;
count1 = 0;
if (--len2 == 1) {
                break outer;
            }
        }
    } while ((count1 | count2) < minGallop);
/* If one run has won at least minGallop times in a row, enter galloping mode */
do {
assert len1 > 0 && len2 > 1;
count1 = len1 - gallopRight(tmp[cursor2], a, base1, len1, len1 - 1, c);
if(count1 ! =0) {
dest -= count1;
cursor1 -= count1;
len1 -= count1;
System.arraycopy(a, cursor1 + 1, a, dest + 1, count1);
if (len1 == 0) {
break outer;
}
}
a[dest--] = tmp[cursor2--];
if (--len2 == 1) {
break outer;
}
count2 = len2 - gallopLeft(a[cursor1], tmp, tmpBase, len2, len2 - 1, c);
if(count2 ! =0) {
dest -= count2;
cursor2 -= count2;
len2 -= count2;
System.arraycopy(tmp, cursor2 + 1, a, dest + 1, count2);
if (len2 <= 1) {
break outer;
}
}
a[dest--] = a[cursor1--];
if (--len1 == 0) {
break outer;
}
// Modify the threshold after completing Galloping and determine whether to continue Galloping
minGallop--;
} while (count1 >= MIN_GALLOP | count2 >= MIN_GALLOP);
if (minGallop < 0) {
minGallop = 0;
}
minGallop += 2; // Penalize for leaving gallop mode
} // End of "outer" loop
this.minGallop = minGallop < 1 ? 1 : minGallop; // Write back to field
if (len2 == 1) {
assert len1 > 0;
dest -= len1;
cursor1 -= len1;
System.arraycopy(a, cursor1 + 1, a, dest + 1, len1);
a[dest] = tmp[cursor2]; // Move first elt of run2 to front of merge
} else if (len2 == 0) {
throw new IllegalArgumentException("Comparison method violates its general contract!");
} else {
assert len1 == 0;
assert len2 > 0;
        System.arraycopy(tmp, tmpBase, a, dest - (len2 - 1), len2);
    }
}
Reference
www.envisage-project.eu/proving-and…
svn.python.org/projects/py…