This article describes the idea, implementation, and complexity of eight common sorting algorithms, including bubble sort, quick sort, insertion sort, Shell sort, and more, in considerable detail.

Bubble sort

1. Key points

Bubble sort is an exchange sort.

What is an exchange sort?

Exchange sort: compare the keys of the records to be sorted in pairs, and swap any pair that violates the required order, until the entire table satisfies the order requirement.

2. Algorithm idea

It repeatedly walks through the sequence to be sorted, comparing two adjacent elements at a time and swapping them if they are in the wrong order. The passes over the array are repeated until no more swaps are needed, at which point the array is sorted.

The algorithm gets its name from the way smaller elements slowly “float” (bubble) to the top of the list through swapping.

Say we have an unordered sequence of size N. In each pass, bubble sort finds the ith smallest (or largest) element through pairwise comparison and moves it into place.

The following walkthrough shows the actual flow of bubble sort.

Suppose there is an unordered sequence such as {4, 3, 2, 1}:

  • First pass: by pairwise comparison, find the smallest value, 1, and place it at the head of the sequence.
  • Second pass: by pairwise comparison, find the second smallest value, 2, and place it in the second position.
  • Third pass: by pairwise comparison, find the third smallest value, 3, and place it in the third position.

At this point, all elements are in order, and the sorting is done.

To translate this process into code, we need to think like a machine, otherwise the compiler won’t understand it.

  • Suppose you want to sort an unordered sequence of size N in ascending order (that is, from small to large).
  • In each pass, we need to find the ith smallest element by comparison.
  • So we need an outer loop that starts at the beginning of the array (subscript 0) and scans up to the penultimate element (subscript N-2); the last element is then necessarily the largest.
  • Say we are on the ith pass. We know the first i-1 elements are already in order, so to find the ith element we just start at the end of the array, scan toward position i, and compare adjacent pairs.
  • So we need an inner loop that starts at the end of the array (subscript N-1) and scans down to subscript i+1.

The core code

public void bubbleSort(int[] list) {
    int temp = 0; // temporary variable used for swapping

    // Number of passes to make
    for (int i = 0; i < list.length - 1; i++) {
        // Compare adjacent numbers from back to front; after one pass,
        // the ith smallest number ends up at position i
        for (int j = list.length - 1; j > i; j--) {
            // Swap if the preceding number is greater than the following one
            if (list[j - 1] > list[j]) {
                temp = list[j - 1];
                list[j - 1] = list[j];
                list[j] = temp;
            }
        }

        System.out.format("Pass %d:\t", i);
        printAll(list);
    }
}

3. Algorithm analysis

Performance of bubble sort algorithm

4. Time complexity

If the file is initially in ascending order (positive order), a single pass completes the sort. The number of keyword comparisons C and the number of record moves M both reach their minimum values: Cmin = N-1, Mmin = 0. So the best-case time complexity of bubble sort is O(N).

If the file is initially in reverse order, N-1 passes are required. Each pass performs N-i keyword comparisons (1 ≤ i ≤ N-1), and every comparison triggers a swap, which moves records three times. In this case both the comparison count and the move count reach their maximum values:

Cmax = N(N-1)/2 = O(N²)

Mmax = 3N(N-1)/2 = O(N²)

So the worst-case time complexity of bubble sort is O(N²), and its average time complexity is O(N²) as well.

To sum it up in one sentence: the closer the data already is to ascending order, the better bubble sort performs.

5. Algorithm stability

Bubble sort moves small elements forward (or large elements backward). Each comparison involves two adjacent elements, and a swap happens only between those two; equal elements are never swapped.

So the relative order of equal elements does not change, and bubble sort is a stable sorting algorithm.

6. Optimization

A common improvement to bubble sort is to add a flag variable, exchange, that records whether any swap happened during a given pass.

If no swap happens during a pass, all the data is already in order and the sort can stop immediately, avoiding unnecessary comparisons.

The core code

public void bubbleSort_2(int[] list) {
    int temp = 0; // temporary variable used for swapping
    boolean bChange = false; // swap flag

    // Number of passes to make
    for (int i = 0; i < list.length - 1; i++) {
        bChange = false;
        // Compare adjacent numbers from back to front; after one pass,
        // the ith smallest number ends up at position i
        for (int j = list.length - 1; j > i; j--) {
            // Swap if the preceding number is greater than the following one
            if (list[j - 1] > list[j]) {
                temp = list[j - 1];
                list[j - 1] = list[j];
                list[j] = temp;
                bChange = true;
            }
        }

        // If the flag is still false, no swap happened in this pass: the
        // sequence is already ordered and the sort can stop early
        if (false == bChange)
            break;

        System.out.format("Pass %d:\t", i);
        printAll(list);
    }
}

Quick sort

1. Key points

Quicksort is an exchange sort.

Quicksort was proposed by C. A. R. Hoare in 1962.

2. Algorithm idea

Its basic idea is:

In one partitioning pass, the data to be sorted is split into two independent parts: everything to the left of the split point is smaller than the pivot, and everything to the right is larger.

Each of the two parts is then quicksorted in the same way; the whole process runs recursively, until the entire data set becomes one ordered sequence.

A detailed walkthrough is often more illustrative than a large volume of text, so here is the quicksort process step by step:

  • The initial state is an unordered array: 2, 4, 5, 1, 3.
  • Take the first element, 2, as the pivot (base). After the first partitioning pass, we get the new array: 1, 2, 5, 4, 3.
  • In this new array, 2 is the partition point: everything to its left is less than 2, and everything to its right is greater than 2.
  • Because 2 has already found its proper place in the array, it never needs to move again.
  • The array to the left of 2 contains only the single element 1, so it needs no further sorting and its position is settled. Note that in this case the left and right pointers obviously coincide; so in the code we can require that left be strictly less than right, and skip sorting when it is not.
  • For the array 5, 4, 3 to the right of 2, set left to point at 5 and right to point at 3, and repeat the same partitioning steps on this sub-array.

The core code

public int division(int[] list, int left, int right) {
    // Take the first element as the pivot (base)
    int base = list[left];
    while (left < right) {
        // Scan from the right end toward the left until a number
        // smaller than base is found
        while (left < right && list[right] >= base)
            right--;
        // Move that smaller number into the slot at position left
        list[left] = list[right];
        // Scan from the left end toward the right until a number
        // greater than base is found
        while (left < right && list[left] <= base)
            left++;
        // Move that greater number into the slot at position right
        list[right] = list[left];
    }
    // Finally put base at position left. Everything to the left of this
    // position is now smaller than base; everything to the right is larger.
    list[left] = base;
    return left;
}

private void quickSort(int[] list, int left, int right) {
    // The left subscript must be less than the right subscript; otherwise
    // the segment has at most one element and is already sorted
    if (left < right) {
        // Partition and obtain the final position of the pivot
        int base = division(list, left, right);

        System.out.format("base = %d:\t", list[base]);
        printPart(list, left, right);

        // Recursively sort the part to the left of the pivot
        quickSort(list, left, base - 1);
        // Recursively sort the part to the right of the pivot
        quickSort(list, base + 1, right);
    }
}

3. Algorithm analysis

Performance of quicksort algorithms

4. Time complexity

When the data is already ordered, each partition around the first keyword produces two subsequences one of which is empty, which gives the worst execution efficiency, O(N²).

When the data is randomly distributed, each partition around the first keyword produces two subsequences of nearly equal size, which gives the best execution efficiency, O(N·log2N).

Therefore: the more randomly distributed the data, the better quicksort performs; the closer the data is to being ordered, the worse it performs.

5. Spatial complexity

Each partition needs one extra slot to store the pivot value, and the recursion goes about log2N levels deep on average, so quicksort's average space complexity is O(log2N); in the worst case (already-ordered data) the recursion depth, and hence the space used, is O(N).

6. Algorithm stability

In quicksort, equal elements may swap order due to partitioning, so it is an unstable algorithm.

Insertion sort

1. Key points

Direct insertion sort is the simplest kind of insertion sort.

Insertion sort: each record to be sorted is inserted, according to the size of its keyword, into the proper position in an already-ordered sequence, until all records have been inserted.

2. Algorithm idea

Before we talk about direct insertion sort, picture how we arrange cards in our hand.

First you hold a 5. You draw a 4; it is smaller than 5, so you put it in front of the 5. You draw a 6: bigger than 5, so it goes behind the 5. You draw an 8: bigger than 6, so it goes behind the 6. At the end you look at your hand and, what do you know, a straight.

The above procedure is a typical direct insertion sort: each new value is inserted into its proper position in the ordered sequence in hand.

Easy enough. Now let's translate this algorithm into a programming language.

Suppose there is an unordered sequence R0, R1, …, RN-1.

  • First, treat the element at subscript 0 as an ordered sequence of one element.
  • Then insert R1, R2, …, RN-1 into this ordered sequence one at a time. So we need an outer loop that scans from subscript 1 to N-1.
  • Next, the insertion step itself. Suppose Ri is being inserted into the ordered part in front of it. By construction, the elements R0 through Ri-1 are already in order.

So we compare Ri with R0 through Ri-1 to determine the right place to insert it. This requires an inner loop, and we generally compare from back to front, scanning from subscript i-1 down to 0.

The core code

public void insertSort(int[] list) {
    // Print the initial state
    System.out.format("i = %d:\t", 0);
    printPart(list, 0, 0);

    // The first number is trivially ordered; starting from the second,
    // insert each number into the ordered sequence in front of it
    for (int i = 1; i < list.length; i++) {
        int j = 0;
        int temp = list[i]; // the value to be inserted

        // Scan backwards from i-1, shifting every element larger than
        // temp one position to the right
        for (j = i - 1; j >= 0 && temp < list[j]; j--) {
            list[j + 1] = list[j];
        }
        // Drop temp into the slot that was opened up
        list[j + 1] = temp;

        System.out.format("i = %d:\t", i);
        printPart(list, 0, i);
    }
}

3. Algorithm analysis

Algorithm performance for direct insertion sort

4. Time complexity

When the data is already in ascending order, efficiency is best: no insertion needs to move any earlier element, and the time complexity is O(N).

When the data is in reverse order, efficiency is worst: every insertion must shift all the earlier elements back one position, and the time complexity is O(N²).

Therefore, the closer the data is to ascending order, the better direct insertion sort performs.

5. Spatial complexity

According to the algorithm above, direct insertion sort needs only one temporary variable to hold the value being inserted, so its space complexity is O(1).

6. Algorithm stability

In direct insertion sort, there is no need to change the position of equal numeric elements, so it is a stable algorithm.

Shell sort

1. Key points

Shell sort, also known as diminishing increment sort, is an insertion sort. It is a more powerful version of the direct insertion sort algorithm.

The method gets its name from its inventor, D. L. Shell, who proposed it in 1959.

2. Algorithm idea

The basic idea of Shell sort is:

The records are grouped by a step size (gap), and each group is sorted with direct insertion sort. As the step size gradually shrinks, each group contains more and more records; when the step size reaches 1, the whole data set forms a single group, and one final insertion pass completes the sorting.

Let's take a closer look at this process with an example.

We start with an unordered sequence of size 10:

  • First pass: take gap1 = N / 2 = 5, so elements 5 apart form a group, giving 5 groups of 2 elements each.
  • Sort each group with direct insertion sort.
  • Second pass: halve the gap, gap2 = gap1 / 2 = 2 (integer division). Elements 2 apart now form a group, giving 2 groups of 5 elements each.
  • Sort each group with direct insertion sort.
  • Third pass: halve the gap again, gap3 = gap2 / 2 = 1. Elements 1 apart form a group, so there is only one group.
  • Sort that group with direct insertion sort. The sorting is now complete.

Notice what happens if the sequence contains two elements with the same value, say two 5s: the gap-based grouping can place them in different groups and swap their relative order during sorting.

So Shell sort is an unstable algorithm.

The core code

public void shellSort(int[] list) {
    int gap = list.length / 2;

    while (1 <= gap) {
        // Treat elements gap apart as one group and scan all groups
        for (int i = gap; i < list.length; i++) {
            int j = 0;
            int temp = list[i];
            // Insertion sort within the group of elements that are gap apart
            for (j = i - gap; j >= 0 && temp < list[j]; j = j - gap) {
                list[j + gap] = list[j];
            }
            list[j + gap] = temp;
        }

        System.out.format("gap = %d:\t", gap);
        printAll(list);
        gap = gap / 2; // reduce the increment
    }
}

3. Algorithm analysis

Algorithm performance of Shell sort

4. Time complexity

Step size (gap) selection is an important part of Shell sort. Any step sequence works, as long as the final step is 1.

The algorithm first sorts with some initial step size, then repeats with smaller and smaller steps, and finally sorts with a step size of 1. In that last round the algorithm becomes plain insertion sort, which guarantees the data ends up fully sorted.

Donald Shell originally suggested using N/2 as the step size and halving it until it reaches 1. Although this already beats O(N²)-class algorithms such as insertion sort, there is still room to reduce the average and worst-case times. Probably the most important property of Shell sort is this: after sorting with a smaller step size, the sequence remains sorted with respect to every larger step size used before.

For example, if a sequence is sorted by step 5 and then by step 3, it is ordered not only with respect to step 3 but also, still, with respect to step 5. If this were not so, the algorithm would scramble its earlier work during later iterations and could not finish so quickly.

The best-known step sequence was proposed by Sedgewick (1, 5, 19, 41, 109, …); its terms come from interleaving the two formulas 9·4^j - 9·2^j + 1 and 4^j - 3·2^j + 1.

Research also shows that "comparison is the dominant operation in Shell sort, not swapping." Shell sort with this step sequence is faster than insertion sort and heap sort, and even faster than quicksort on small arrays, but on large amounts of data it is still slower than quicksort.
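As an aside, here is a small sketch of how such an increment sequence can be generated, assuming the commonly cited pair of Sedgewick formulas above (the method name sedgewickGaps is our own, for illustration only):

public int[] sedgewickGaps(int count) {
    int[] gaps = new int[count];
    for (int k = 0; k < count; k++) {
        int j = k / 2;
        if (k % 2 == 0) {
            // even positions: 9*4^j - 9*2^j + 1  ->  1, 19, 109, ...
            gaps[k] = 9 * (1 << (2 * j)) - 9 * (1 << j) + 1;
        } else {
            // odd positions: 4^(j+2) - 3*2^(j+2) + 1  ->  5, 41, 209, ...
            gaps[k] = (1 << (2 * (j + 2))) - 3 * (1 << (j + 2)) + 1;
        }
    }
    return gaps;
}

// sedgewickGaps(5) returns {1, 5, 19, 41, 109}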

5. Algorithm stability

As the walkthrough above shows, equal elements can change relative positions during Shell sort, so Shell sort is an unstable algorithm.

Comparison of direct insertion sort and Shell sort

  • Direct insertion sort is stable; Shell sort is unstable.
  • Direct insertion sort is better suited to records that are already mostly in order.
  • Shell sort performs fewer comparisons and moves than direct insertion sort, and the advantage grows as N gets larger.
  • In Shell sort, the gap sequence can be chosen freely, with one constraint: the last step must be 1.
  • Direct insertion sort also works on linked storage; Shell sort is not suitable for linked structures.

Simple selection sort

1. Key points

Simple selection sort is a selection sort.

Selection sort: from the records still to be sorted, select the one with the smallest keyword and append it to the end of the already-sorted part; repeat until all records are sorted.

2. Algorithm idea

  • From the sequence to be sorted, find the element with the smallest keyword.
  • If that smallest element is not the first element of the unsorted part, swap it with the first element.
  • Among the remaining N-1 elements, again find the element with the smallest keyword, and repeat the two steps above until sorting is finished.

In other words, each pass places the current ith smallest element at position i.

The core code
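The original article's code block is missing here, so below is a minimal sketch written in the style of the other examples in this article (the method name selectionSort is our own):

public void selectionSort(int[] list) {
    for (int i = 0; i < list.length - 1; i++) {
        // Assume the element at position i is the smallest of the unsorted part
        int min = i;
        // Scan the rest of the unsorted part for a smaller element
        for (int j = i + 1; j < list.length; j++) {
            if (list[j] < list[min]) {
                min = j;
            }
        }
        // Swap only if the smallest element is not already at position i
        if (min != i) {
            int temp = list[i];
            list[i] = list[min];
            list[min] = temp;
        }
    }
}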

3. Algorithm analysis

Simple selection sorting algorithm performance

4. Time complexity

The number of comparisons in simple selection sort is independent of the initial order of the sequence. If the sequence to be sorted has N elements, the number of comparisons is always N(N-1)/2.

The number of moves, however, does depend on the initial order. When the sequence is already in ascending order, the number of moves reaches its minimum, 0.

When the sequence is in reverse order, the number of moves reaches its maximum, 3N(N-1)/2.

Summing up, the time complexity of simple selection sort is O(N²).

5. Spatial complexity

Simple selection sort uses only one temporary slot, for swapping values, so its space complexity is O(1).

Heap sort

1. Key points

Before we get into heap sort, let's first explain what a heap is.

A heap is a complete binary tree stored sequentially in an array.

A heap in which the key of each node is no larger than the key of its children is called a small root heap. A large root heap is a heap in which the key of each node is no less than that of its children.

For example, a sequence of n elements {R0, R1, …, Rn-1} is a heap if and only if it satisfies one of the following relations:

  • Ri <= R2i+1 and Ri <= R2i+2 (small root heap)
  • Ri >= R2i+1 and Ri >= R2i+2 (large root heap)

where i = 0, 1, …, n/2 - 1 (n/2 rounded down).

For example, the sequence {3, 8, 15, 31, 25} is a typical small root heap.

There are two parents in the heap, element 3 and element 8.

Element 3 is represented in the array as R[0], and its left child is R[1] and its right child is R[2].

Element 8 is represented in the array as R[1], its left child is R[3], its right child is R[4], and its parent is R[0]. As you can see, they satisfy the following rules:

Let the current element be represented by R[i] in the array. Then:

  • Its left child is R[2*i+1];
  • its right child is R[2*i+2];
  • its parent is R[(i-1)/2];
  • and, in a small root heap, R[i] <= R[2*i+1] and R[i] <= R[2*i+2].
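To make these index relations concrete, here is a tiny sketch (an illustrative helper of our own, not code from the original article) that checks the small root heap property for the example sequence {3, 8, 15, 31, 25}:

public boolean isSmallRootHeap(int[] r) {
    // Check every parent node: subscripts 0 .. r.length/2 - 1
    for (int i = 0; i <= r.length / 2 - 1; i++) {
        int left = 2 * i + 1;   // left child subscript
        int right = 2 * i + 2;  // right child subscript
        if (left < r.length && r[i] > r[left]) return false;
        if (right < r.length && r[i] > r[right]) return false;
    }
    return true;
}

// isSmallRootHeap(new int[]{3, 8, 15, 31, 25}) returns true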

2. Algorithm idea

First, the array R[0..n-1] is adjusted into a heap as defined above (this step is called building the initial heap), and R[0] is swapped with R[n-1]; then R[0..n-2] is adjusted back into a heap and R[0] is swapped with R[n-2]; and so on, until R[0] is swapped with R[1].

The above ideas can be summarized into two operations:

Build the initial heap from the input array (build a complete binary tree and make sure every parent is no smaller than its children, i.e., a large root heap).

On each pass, swap the first and last elements of the heap, fix the last element (the current maximum) in place, and re-adjust the remaining elements into a large root heap.

When the last pass finishes, the array is sorted from smallest to largest.

Let's walk through building the initial heap and the complete heap sort with an example.

Suppose there is an unordered sequence {1, 3, 4, 5, 2, 6, 9, 7, 8, 0}.

Starting from the last parent node and working back to the root, each subtree is adjusted so that the parent is no smaller than its children; this yields the initial large root heap. Then, pass by pass, the heap top is swapped with the last element of the unsorted part and the remaining elements are re-adjusted into a large root heap, until the whole sequence is ordered.

The core code

public void HeapAdjust(int[] array, int parent, int length) {
    int temp = array[parent]; // save the value of the parent node
    int child = 2 * parent + 1; // start with the left child

    while (child < length) {
        // If there is a right child and it is greater than the left child,
        // select the right child instead
        if (child + 1 < length && array[child] < array[child + 1]) {
            child++;
        }
        // If the parent's value is already no less than the child's,
        // the adjustment is finished
        if (temp >= array[child])
            break;

        // Move the child's value up to the parent position
        array[parent] = array[child];
        // Continue downward: the child becomes the new parent,
        // and we select its left child next
        parent = child;
        child = 2 * child + 1;
    }
    array[parent] = temp;
}

public void heapSort(int[] list) {
    // Build the initial heap
    for (int i = list.length / 2; i >= 0; i--) {
        HeapAdjust(list, i, list.length);
    }

    // Loop n-1 times to complete the sort
    for (int i = list.length - 1; i > 0; i--) {
        // Swap the heap top with the last element of the unsorted part
        int temp = list[i];
        list[i] = list[0];
        list[0] = temp;
        // Re-adjust the remaining elements into a large root heap
        HeapAdjust(list, 0, i);

        System.out.format("Pass %d:\t", list.length - i);
        printPart(list, 0, list.length - 1);
    }
}

3. Algorithm analysis

Performance of the heap sort algorithm

4. Time complexity

The heap is stored sequentially: the binary tree corresponding to a heap is a complete binary tree, and a complete binary tree is usually stored in an array. Overall, heap sort runs in O(N·log2N) time regardless of the initial order of the input.

Heap sort is the best choice when you want only the partially sorted sequence up to the Kth smallest (or largest) element.

Because the time complexity of such a partial heap sort is O(N + K·log2N), when K ≤ N/log2N the total time is O(N).
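As an illustration of this use case, here is a sketch that reuses the HeapAdjust method above but runs only K swap-and-adjust rounds instead of N-1. Since the code in this article builds a large root heap, the K rounds leave the K largest elements, in order, at the end of the array (partialHeapSort is our own illustrative name):

public void partialHeapSort(int[] list, int k) {
    // Build the initial large root heap, exactly as in heapSort
    for (int i = list.length / 2; i >= 0; i--) {
        HeapAdjust(list, i, list.length);
    }
    // Run only k rounds: each round fixes one more of the largest elements
    for (int i = list.length - 1; i > 0 && i > list.length - 1 - k; i--) {
        int temp = list[i];
        list[i] = list[0];
        list[0] = temp;
        HeapAdjust(list, 0, i);
    }
}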

5. Algorithm stability

Heap sort is an unstable sorting method.

During heap adjustment, keyword comparisons and swaps follow a path from a node down to a leaf.

So for equal keywords, the one that appeared later may be swapped in front of the one that appeared earlier, changing their relative order.

Merge sort

1. Key points

Merge sort is an effective sorting algorithm based on merge operation, which is a very typical application of Divide and Conquer.

Already-ordered subsequences are merged to obtain a fully ordered sequence: first make each subsequence ordered, then make the sequence of subsequence segments ordered. Merging two ordered lists into one ordered list is called a two-way merge.

2. Algorithm idea

Treat the sequence R[0…n-1] as n ordered subsequences of length 1 and merge adjacent pairs, obtaining n/2 ordered subsequences of length 2; merge these again to obtain n/4 ordered subsequences of length 4; and so on, until a single ordered sequence of length n is obtained.

To sum up:

Merge sort actually does two things:

  • “Decompose” — Split the sequence in half at a time.
  • “Merge” — sort the divided sequence segments by merging them in pairs.

So let’s think about step two, how do we merge?

In each merge, two ordered sequence segments are combined into a single ordered segment.

These two ordered sequence segments are R[low, mid] and R[mid+1, high] respectively.

Merge them into a local staging array, R2, and copy R2 back into R when the merge is complete.

For ease of description, we call R[low, mid] the first segment and R[mid+1, high] the second segment.

One record at a time is taken from each segment and their keywords are compared; the smaller record goes into R2. Once one segment is exhausted, the rest of the other segment is copied directly into R2.

After this process, R2 is already an ordered sequence, which is copied back into R in a merge sort.

The core code

public void Merge(int[] array, int low, int mid, int high) {
    int i = low;      // subscript into the first segment
    int j = mid + 1;  // subscript into the second segment
    int k = 0;        // subscript into the temporary merge array
    // Temporary array holding the merged sequence
    int[] array2 = new int[high - low + 1];

    // Scan the first and second segments until one of them is exhausted
    while (i <= mid && j <= high) {
        // Take the smaller of the two current elements, store it in the
        // merge array, and keep scanning
        if (array[i] <= array[j]) {
            array2[k] = array[i];
            i++;
            k++;
        } else {
            array2[k] = array[j];
            j++;
            k++;
        }
    }

    // If the first segment is not exhausted, copy the rest of it over
    while (i <= mid) {
        array2[k] = array[i];
        i++;
        k++;
    }
    // If the second segment is not exhausted, copy the rest of it over
    while (j <= high) {
        array2[k] = array[j];
        j++;
        k++;
    }

    // Copy the merged sequence back into the original array
    for (k = 0, i = low; i <= high; i++, k++) {
        array[i] = array2[k];
    }
}

Now that we know how to merge, let’s look at how to break it down.

In one merge pass, if each sublist has length gap, then R[0…n-1] contains about n/gap ordered sublists: R[0…gap-1], R[gap…2*gap-1], …, with possibly one final sublist shorter than gap.

When Merge is called to Merge adjacent child tables, special handling must be done for special cases of the table.

If the number of sublists is odd, the last sublist need not be merged in this pass. If the number is even, note that the upper bound of the last sublist in the final pair is n-1, and that sublist may be shorter than gap.

The core code

public void MergePass(int[] array, int gap, int length) {
    int i = 0;

    // Merge adjacent pairs of sublists, each of length gap
    for (i = 0; i + 2 * gap - 1 < length; i = i + 2 * gap) {
        Merge(array, i, i + gap - 1, i + 2 * gap - 1);
    }

    // Two sublists remain, the second of which is shorter than gap
    if (i + gap - 1 < length) {
        Merge(array, i, i + gap - 1, length - 1);
    }
}

public int[] sort(int[] list) {
    for (int gap = 1; gap < list.length; gap = 2 * gap) {
        MergePass(list, gap, list.length);
        System.out.print("gap = " + gap + ":\t");
        this.printAll(list);
    }
    return list;
}

3. Algorithm analysis

Performance of merge sort algorithm

4. Time complexity

Merge sort proceeds like a complete binary tree: the number of merge passes equals the depth of the tree, and each pass touches all N elements, so the time complexity is O(N·log2N) in the best, worst, and average cases.

5. Spatial complexity

As the algorithm description shows, a temporary array of size N is needed to hold the merged sequence, so the space complexity is O(N).

6. Algorithm stability

In merge sort, the order of equal elements does not change, so it is a stable algorithm.

Comparison of merge sort with heap sort and quicksort

In terms of space complexity, heap sort is the most economical (O(1)), quicksort comes next (O(log2N) on average), and merge sort needs the most (O(N)).

In terms of stability, merge sort should be selected, because heap sort and quicksort are unstable.

Considering the average sorting speed, you should choose quicksort.

Radix sort

1. Key points

Radix sort differs from the previous seven sorting methods in this series in that it does not compare keyword sizes.

It sorts N elements by performing several rounds of “distribution” and “collection” based on the value of each digit of the keyword.

Let’s take a concrete example to show how radix sort works.

Suppose the initial sequence is R = {50, 123, 543, 187, 49, 30, 0, 2, 11, 100}.

We know that each digit of a decimal (Arabic) number is one of 0 through 9.

So let’s think of 0 to 9 as 10 buckets.

We first classify the numbers by their units digit and distribute them into the corresponding buckets. For example, R[0] = 50 has units digit 0, so it goes into bucket 0.

After classification, we take out all the numbers from each bucket in order from 0 to 9.

In this case, the resulting sequence is an increasing sequence in the units digit.

Sorted by the units digit: {50, 30, 0, 100, 11, 2, 123, 543, 187, 49}.

Then, you can sort tens and hundreds in the same way, and finally get the sorted sequence.
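The article does not include code for this section, so here is a minimal LSD (least-significant-digit-first) sketch for non-negative integers, written in the style of the other examples (the method name radixSort and the use of ArrayList buckets are our own choices):

public void radixSort(int[] list) {
    if (list.length == 0) return;

    // Find the maximum value to know how many digit positions to process
    int max = list[0];
    for (int v : list) {
        if (v > max) max = v;
    }

    // One bucket per digit value 0..9
    java.util.List<java.util.List<Integer>> buckets = new java.util.ArrayList<>();
    for (int i = 0; i < 10; i++) {
        buckets.add(new java.util.ArrayList<>());
    }

    // Process the units digit, then tens, then hundreds, ...
    for (int exp = 1; max / exp > 0; exp *= 10) {
        // Distribution: drop each number into the bucket for its current digit
        for (int v : list) {
            buckets.get((v / exp) % 10).add(v);
        }
        // Collection: read the numbers back out of buckets 0 through 9 in order
        int k = 0;
        for (java.util.List<Integer> bucket : buckets) {
            for (int v : bucket) {
                list[k++] = v;
            }
            bucket.clear();
        }
    }
}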

2. Algorithm analysis

Performance of radix sort

3. Time complexity

As can be seen from the above, if r is the radix (10 here) and d is the number of digits, then the time complexity of radix sort is O(d(n + r)).

We can see that the efficiency of radix sort is not related to whether the initial sequence is ordered.

4. Spatial complexity

In radix sort, n + r temporary slots are needed for the “bucketing” on each digit.

5. Algorithm stability

During each round of radix sort, elements whose current digit is equal are placed into their bucket in their existing order, so their relative positions never change. Radix sort is therefore a stable algorithm.
