Summary of Ten Classical Sorting Algorithms

In the traditional field of computer algorithms and data structures, most professional textbooks default to C/C++, but today we are sticking with JavaScript!

How it came about: the word "algorithm" derives from the name of al-Khwarizmi, a 9th-century Persian mathematician. Joking aside, the contribution of the Arab world to the history of mathematics is admirable.

Main text

Sorting algorithm description

(1) Sorting definition: sort a sequence of objects according to a certain keyword;

(2) Input: a sequence of n numbers a1, a2, a3, …, an. Output: a permutation a1′, a2′, a3′, …, an′ of the input such that a1′ ≤ a2′ ≤ a3′ ≤ … ≤ an′.

An intuitive picture: arranging rows of seats for a group photo, adjusting positions so the tall stand at the back and the short at the front.

(3) The description of terms for evaluating the advantages and disadvantages of algorithms

Stable: if a is originally in front of b and a = b, then a is still in front of b after sorting. Unstable: if a is originally in front of b and a = b, then a may end up behind b after sorting.

Internal sort: all sorting operations are done in memory. External sort: the data is too large to fit in memory, so sorting can only proceed by transferring data between disk and memory.

Time complexity: The amount of time an algorithm takes to execute. Space complexity: The amount of memory required to run a program.

For more on time and space complexity, Cheng Jie's "Big Talk Data Structures" is still a great, easy-to-understand read.

(4) Sorting algorithm summary chart (image from the web):

Sort contrast:

n: data size. k: number of buckets. In-place: uses only constant extra memory. Out-place: uses additional memory.

Sort by category:

1. Bubble Sort

Okay, so let’s summarize the first sort algorithm, bubble sort. I think it’s familiar to anyone who has studied C, and this is probably the first sorting algorithm that a lot of people have ever encountered.

(1) Algorithm description

Bubble sort is a simple sorting algorithm. It repeatedly walks through the sequence to be sorted, comparing two elements at a time and swapping them if they are in the wrong order. The task of visiting an array is repeated until no more exchanges are needed, that is, the array has been sorted. The algorithm gets its name because smaller elements slowly “float” to the top of the list through swapping.

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Compare adjacent elements. If the first one is larger than the second, swap them both;

<2>. Do the same for each pair of adjacent elements, starting with the first pair and ending with the last pair, so that the last element should be the largest;

<3>. Repeat the above steps for all elements except the last;

<4>. Repeat steps 1 to 3 until the sorting is complete.

JavaScript code implementation:
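A minimal sketch of the four steps above (the function name `bubbleSort` is an illustrative choice, not taken from the original post):

```javascript
function bubbleSort(arr) {
  const a = arr.slice(); // work on a copy
  for (let i = 0; i < a.length - 1; i++) {
    // after each outer pass, the largest remaining element is in place at the end
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap the out-of-order adjacent pair
      }
    }
  }
  return a;
}
```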

Improved bubble sort: keep a variable pos that records the position of the last swap in each pass. Since all records after pos are already in place, the next pass only needs to scan up to pos.

The improved algorithm is as follows:
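A sketch of the pos-based improvement described above (function name illustrative):

```javascript
function bubbleSortWithPos(arr) {
  const a = arr.slice();
  let i = a.length - 1; // each pass scans up to index i
  while (i > 0) {
    let pos = 0; // records the position of the last swap in this pass
    for (let j = 0; j < i; j++) {
      if (a[j] > a[j + 1]) {
        pos = j;
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
      }
    }
    i = pos; // everything after pos is already sorted
  }
  return a;
}
```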

In traditional bubble sort, each pass fixes only one extreme value (the maximum or the minimum). A further improvement bubbles in both directions in each pass, forward and backward, so that both the largest and the smallest value are fixed at once, cutting the number of passes roughly in half.

The improved algorithm is implemented as follows:
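A sketch of the two-way (cocktail-style) variant described above (function name illustrative):

```javascript
function cocktailSort(arr) {
  const a = arr.slice();
  let low = 0, high = a.length - 1;
  while (low < high) {
    // forward pass: bubble the maximum up to a[high]
    for (let j = low; j < high; j++) {
      if (a[j] > a[j + 1]) [a[j], a[j + 1]] = [a[j + 1], a[j]];
    }
    high--;
    // backward pass: bubble the minimum down to a[low]
    for (let j = high; j > low; j--) {
      if (a[j] < a[j - 1]) [a[j], a[j - 1]] = [a[j - 1], a[j]];
    }
    low++;
  }
  return a;
}
```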

Time comparison of the three methods:

As the figure shows, the improved versions of bubble sort clearly run faster. Readers can try it themselves: the blogger has created a repository on GitHub that readers can clone and run locally, and the post reads best alongside the source code.

Bubble sort GIF demo:

(3) Algorithm analysis

Best case: T(n) = O(n)

When the input data is already in order (already in order, so why sort it at all?)

Worst case: T(n) = O(n²)

When the input data is in reverse order (in that case, just reverse it and be done…)

Average case: T(n) = O(n²)

2. Selection Sort

One of the most consistent sorting algorithms in terms of running time, because whatever data goes in, it takes O(n²). So the smaller the data set, the better when you use it. The only benefit may be that it takes no extra memory. It is probably also the sorting method most people would come up with on their own.

(1) Introduction to the algorithm

Selection-sort is a simple and intuitive sorting algorithm. It works by first finding the smallest (largest) element in the unsorted sequence and storing it at the beginning of the sorted sequence, then continuing to find the smallest (largest) element from the remaining unsorted elements and placing it at the end of the sorted sequence. And so on until all the elements are sorted.

(2) Algorithm description and implementation

Direct selection sort of n records yields an ordered result after n−1 passes. The specific algorithm is described as follows:

<1>. Initial state: the unordered region is R[1..n], and the ordered region is empty;

<2>. At the start of the i-th pass (i = 1, 2, 3, …, n−1), the ordered region is R[1..i−1] and the unordered region is R[i..n]. The pass selects the record R[k] with the smallest keyword from the current unordered region and swaps it with R[i], the first record of the unordered region, so that R[1..i] becomes the new ordered region (one record longer) and R[i+1..n] the new unordered region (one record shorter);

<3>. After n−1 passes, the array is sorted.

Javascript code implementation:
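A sketch of the three steps above (function name illustrative):

```javascript
function selectionSort(arr) {
  const a = arr.slice();
  for (let i = 0; i < a.length - 1; i++) {
    let min = i; // index of the smallest keyword in the unordered region a[i..]
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;
    }
    // swap the smallest record with the first record of the unordered region
    if (min !== i) [a[i], a[min]] = [a[min], a[i]];
  }
  return a;
}
```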

Selection sort GIF demo:

(3) Algorithm analysis

Best case: T(n) = O(n²)

Worst case: T(n) = O(n²)

Average case: T(n) = O(n²)

3. Insertion Sort

Insertion sort's code is not quite as simple and crude as bubble sort's or selection sort's, but its principle should be the easiest to understand: anyone who has sorted a hand of playing cards will get it in seconds. Of course, if you never arrange your cards by rank when playing poker, insertion sort will probably never hold much interest for you…

(1) Introduction to the algorithm

Insertion Sort algorithm is a simple and intuitive sorting algorithm. It works by building an ordered sequence, and for unordered data, scanning backwards in the sorted sequence to find the appropriate location and insert. In the implementation of insertion sort, in-place sort is usually adopted (that is, the sort only needs to use O(1) extra space). Therefore, in the process of scanning from back to front, the sorted elements need to be moved backward step by step repeatedly to provide insertion space for the latest elements.

(2) Algorithm description and implementation

In general, insertion sort is implemented in-place on arrays. The specific algorithm is described as follows:

<1>. Starting with the first element, the element can be considered sorted;

<2>. Fetch the next element and scan back to front in the sorted element sequence;

<3>. If that (already sorted) element is larger than the new element, move it one position to the right;

<4>. Repeat step 3 until you find a position where the sorted element is less than or equal to the new element;

<5>. Insert the new element at that position;

<6>. Repeat steps 2 to 5.

Javascript code implementation:
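A sketch of the six steps above (function name illustrative):

```javascript
function insertionSort(arr) {
  const a = arr.slice();
  for (let i = 1; i < a.length; i++) {
    const current = a[i]; // the next element to insert
    let j = i - 1;
    // scan the sorted prefix from back to front, shifting larger elements right
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = current; // insert at the freed position
  }
  return a;
}
```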

Improved insertion sort: use binary search to find the insertion position.
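A sketch of that improvement, using binary search over the sorted prefix to locate the insertion point (function name illustrative):

```javascript
function binaryInsertionSort(arr) {
  const a = arr.slice();
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    // binary-search the insertion point within the sorted prefix a[0..i-1]
    let left = 0, right = i;
    while (left < right) {
      const mid = (left + right) >> 1;
      if (a[mid] <= current) left = mid + 1; // <= keeps equal keys stable
      else right = mid;
    }
    // shift a[left..i-1] one slot right, then insert
    for (let j = i; j > left; j--) a[j] = a[j - 1];
    a[left] = current;
  }
  return a;
}
```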

Comparison before and after improvement:

Insert sort GIF demo:

(3) Algorithm analysis

Best case: The input array is sorted in ascending order. T(n) = O(n)

Worst case: The input array is sorted in descending order. T(n) = O(n²)

Average case: T(n) = O(n²)

4. Shell Sort

Invented by Donald Shell in 1959, this was the first sorting algorithm to break through the O(n²) barrier. It is an improved version of simple insertion sort, and differs from insertion sort in that it compares distant elements first. Shell sort is also called diminishing increment sort.

(1) Introduction to the algorithm

The core of Shell sort is the choice of the gap sequence. You can either fix the gap sequence in advance or define it dynamically. The algorithm for dynamically defining the gap sequence was proposed by Robert Sedgewick, co-author of Algorithms (4th Edition).

(2) Algorithm description and implementation

First, the whole sequence of records to be sorted is split into several subsequences, each sorted by direct insertion sort. The specific algorithm is described as follows:

<1>. Choose an increment sequence t1, t2, …, tk, where ti > tj for i < j and tk = 1;

<2>. Sort the sequence in k passes, one for each increment in the sequence;

<3>. In each pass, according to the corresponding increment ti, split the sequence into several subsequences of length m and apply direct insertion sort to each sublist. Only when the increment factor is 1 is the whole sequence treated as a single table, whose length is the length of the whole sequence.

Javascript code implementation:
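A sketch of the steps above, using Shell's original n/2, n/4, …, 1 gap sequence as an assumed increment choice (function name illustrative):

```javascript
function shellSort(arr) {
  const a = arr.slice();
  // gap sequence n/2, n/4, ..., 1 (Shell's original increments, assumed here)
  for (let gap = Math.floor(a.length / 2); gap > 0; gap = Math.floor(gap / 2)) {
    // gapped insertion sort over each interleaved subsequence
    for (let i = gap; i < a.length; i++) {
      const current = a[i];
      let j = i - gap;
      while (j >= 0 && a[j] > current) {
        a[j + gap] = a[j];
        j -= gap;
      }
      a[j + gap] = current;
    }
  }
  return a;
}
```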

Shell sort diagram (image from the web):

(3) Algorithm analysis

Best case: T(n) = O(n log² n)

Worst case: T(n) = O(n log² n)

Average case: T(n) = O(n log n)

5. Merge Sort

Like selection sort, merge sort performs independently of the input data, but performs much better than selection sort because it is always O(n log n) time. The trade-off is extra memory.

(1) Introduction to the algorithm

Merge sort is an efficient sorting algorithm based on the merge operation, and a very typical application of Divide and Conquer. It is a stable sorting method: first make each subsequence ordered, then merge the ordered subsequences into a fully ordered sequence. Merging two ordered lists into one ordered list is called a 2-way merge.

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Split the input sequence of length n into two subsequences of length n/2;

<2>. Merge sort is used for these two subsequences respectively;

<3>. Merge the two sorted subsequences into a final sorted sequence.

JavaScript code implementation:
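A sketch of the three steps above, as a top-down recursive 2-way merge (function name illustrative):

```javascript
function mergeSort(arr) {
  if (arr.length < 2) return arr.slice(); // base case: already ordered
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));   // sort the left half
  const right = mergeSort(arr.slice(mid));     // sort the right half
  // 2-way merge of the two ordered halves
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]); // <= keeps it stable
  }
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);
  return result;
}
```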

Merge sort GIF demo:

(3) Algorithm analysis

Best case: T(n) = O(n log n)

Worst case: T(n) = O(n log n)

Average case: T(n) = O(n log n)

6. Quick Sort

Quicksort's name is blunt: you know from the name alone that it is fast and efficient! It is one of the fastest sorting algorithms for large data sets.

(1) Introduction to the algorithm

The basic idea of quicksort is to partition the records to be sorted into two independent parts in one pass, such that every keyword in one part is smaller than every keyword in the other; the two parts are then sorted separately, recursively, until the whole sequence is in order.

(2) Algorithm description and implementation

Quicksort uses divide-and-conquer to divide a list into two sub-lists. The specific algorithm is described as follows:

<1>. Pick an element from the sequence, called “pivot”;

<2>. Reorder the array so that all elements smaller than the pivot come before it and all elements larger come after it (equal elements can go to either side). After this partition, the pivot is in its final position. This is called the partition operation;

<3>. Recursively sort the subsequence of elements smaller than the pivot and the subsequence of elements greater than the pivot.

Javascript code implementation:
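A sketch of the three steps above. Picking the last element as pivot (Lomuto-style partition) is one common choice of several; names are illustrative:

```javascript
function quickSort(arr) {
  const a = arr.slice();
  // partition a[lo..hi] around the pivot a[hi]; returns the pivot's final index
  function partition(lo, hi) {
    const pivot = a[hi];
    let i = lo;
    for (let j = lo; j < hi; j++) {
      if (a[j] < pivot) {
        [a[i], a[j]] = [a[j], a[i]];
        i++;
      }
    }
    [a[i], a[hi]] = [a[hi], a[i]]; // pivot lands in its final position
    return i;
  }
  function sort(lo, hi) {
    if (lo >= hi) return;
    const p = partition(lo, hi);
    sort(lo, p - 1);  // elements smaller than the pivot
    sort(p + 1, hi);  // elements greater than or equal to the pivot
  }
  sort(0, a.length - 1);
  return a;
}
```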

Quick sort GIF demo:

(3) Algorithm analysis

Best case: T(n) = O(n log n)

Worst case: T(n) = O(n²)

Average case: T(n) = O(n log n)

7. Heap Sort

Heap sort can be said to be a selection sort that uses the concept of heap to sort.

(1) Introduction to the algorithm

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a nearly complete binary tree that satisfies the heap property: the key of each child node is always smaller than (or, in a max-heap, larger than) that of its parent.

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Build the initial keyword sequence to be sorted (R1, R2, …, Rn) into a max-heap; this heap is the initial unordered region;

<2>. Swap the top element R[1] with the last element R[n], obtaining a new unordered region (R1, R2, …, Rn−1) and a new ordered region (Rn), with R[1..n−1] ≤ R[n];

<3>. Since the new heap top R[1] may violate the heap property after the swap, re-adjust the current unordered region (R1, R2, …, Rn−1) into a heap, then swap R[1] with the last element of the unordered region again, obtaining the new unordered region (R1, R2, …, Rn−2) and the new ordered region (Rn−1, Rn). Repeat this process until the ordered region contains n−1 elements, at which point the whole sort is complete.

Javascript code implementation:
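A sketch of the three steps above: build a max-heap, then repeatedly swap the top to the end and re-heapify (names illustrative):

```javascript
function heapSort(arr) {
  const a = arr.slice();
  const n = a.length;
  // sift the subtree rooted at i down until it satisfies the max-heap property
  function siftDown(i, size) {
    while (true) {
      let largest = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < size && a[l] > a[largest]) largest = l;
      if (r < size && a[r] > a[largest]) largest = r;
      if (largest === i) return;
      [a[i], a[largest]] = [a[largest], a[i]];
      i = largest;
    }
  }
  // step <1>: build the initial max-heap bottom-up
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(i, n);
  // steps <2>/<3>: move the max to the end, shrink the heap, re-heapify
  for (let end = n - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]];
    siftDown(0, end);
  }
  return a;
}
```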

Heap sort GIF:

(3) Algorithm analysis

Best case: T(n) = O(n log n)

Worst case: T(n) = O(n log n)

Average case: T(n) = O(n log n)

8. Counting Sort

The core of counting sort is to convert input data values into keys and store them in extra array space. As a sort of linear time complexity, counting sort requires that the input data must be integers with a definite range.

(1) Introduction to the algorithm

Counting sort is a stable sorting algorithm. It uses an extra array C in which the i-th element counts the number of elements of array A equal to i; the elements of A are then placed into their correct positions according to C. It can only sort integers.

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Find the largest and smallest elements in the array to be sorted;

<2>. Count the number of occurrences of each element with value i in the array and store it in the i-th entry of array C;

<3>. Add up all counts (starting with the first element in C, adding each term to the previous one);

<4>. Fill the target array in reverse: place each element i at position C(i) of the new array, decrementing C(i) by 1 each time.

Javascript code implementation:
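A sketch of the four steps above, offsetting by the minimum value so negative integers also work (names illustrative):

```javascript
function countingSort(arr) {
  if (arr.length === 0) return [];
  // step <1>: find the smallest and largest elements
  const min = Math.min(...arr), max = Math.max(...arr);
  const counts = new Array(max - min + 1).fill(0);
  // step <2>: count occurrences of each value
  for (const v of arr) counts[v - min]++;
  // step <3>: prefix sums turn counts into final positions
  for (let i = 1; i < counts.length; i++) counts[i] += counts[i - 1];
  // step <4>: fill the output from back to front (keeps the sort stable)
  const out = new Array(arr.length);
  for (let i = arr.length - 1; i >= 0; i--) {
    out[--counts[arr[i] - min]] = arr[i];
  }
  return out;
}
```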

Counting sort GIF demo:

(3) Algorithm analysis

When the input consists of n integers between 0 and k, counting sort runs in O(n + k). It is not a comparison sort, so it can be faster than any comparison-based sorting algorithm. Since the length of the counting array C depends on the range of the data to be sorted (the difference between the maximum and minimum values plus 1), counting sort costs a lot of time and memory for arrays with a large range of values.

Best case: T(n) = O(n+k)

Worst case: T(n) = O(n+k)

Average case: T(n) = O(n+k)

9. Bucket Sort

Bucket sort is an upgraded version of counting sort. It makes use of a mapping function, and the key to its efficiency lies in choosing that mapping function well.

(1) Introduction to the algorithm

Bucket sort works by distributing the data into a finite number of buckets, assuming that the input data is evenly distributed. Each bucket is then sorted separately (possibly with another sorting algorithm, or by applying bucket sort recursively).

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Set up a fixed number of empty buckets;

<2>. Iterate over the input data and put the data one by one into the corresponding bucket;

<3>. Sort each bucket that is not empty;

<4>. Concatenate the sorted data from each non-empty bucket.

Javascript code implementation:
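A sketch of the four steps above. The bucket width (`bucketSize`) and the offset-from-minimum mapping function are illustrative assumptions; each bucket is sorted here with the built-in `Array.prototype.sort`:

```javascript
function bucketSort(arr, bucketSize = 5) {
  if (arr.length === 0) return [];
  const min = Math.min(...arr), max = Math.max(...arr);
  // step <1>: set up empty buckets covering the value range
  const bucketCount = Math.floor((max - min) / bucketSize) + 1;
  const buckets = Array.from({ length: bucketCount }, () => []);
  // step <2>: map each value to a bucket by its offset from the minimum
  for (const v of arr) buckets[Math.floor((v - min) / bucketSize)].push(v);
  // steps <3>/<4>: sort each non-empty bucket, then concatenate
  const out = [];
  for (const b of buckets) {
    b.sort((x, y) => x - y);
    out.push(...b);
  }
  return out;
}
```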

Bucket sorting diagram:

(3) Algorithm analysis

The optimal linear time for bucket sorting is O(n). The time complexity of bucket sorting depends on the time complexity of sorting data between buckets, because the time complexity of all other parts is O(n). Obviously, the smaller the bucket, the less data there is between the buckets, and the less time it takes to sort. But the corresponding space consumption will increase.

Best case: T(n) = O(n + k)

Worst case: T(n) = O(n²)

Average case: T(n) = O(n + k)

10. Radix Sort

Radix sort is another non-comparative sorting algorithm. It sorts digit by digit, starting from the least significant digit; the complexity is O(k·n), where n is the length of the array and k is the number of digits of the largest number in the array.

(1) Introduction to the algorithm

Radix sort first sorts by the lowest digit and collects the results; then sorts by the next digit and collects; and so on, up to the highest digit. Sometimes attributes have priorities: sort by the low-priority attribute first, then by the high-priority one, so the final order is governed by the higher-priority attribute, with ties broken by the lower-priority one. Radix sort is stable because each pass sorts and collects separately and preserves the order of equal keys.

(2) Algorithm description and implementation

The specific algorithm is described as follows:

<1>. Get the largest number in the array and its number of digits;

<2>. From arr, the original array, take each digit starting from the lowest to form the radix array;

<3>. Perform counting sort on each radix (taking advantage of the fact that counting sort suits small ranges of numbers);

Javascript code implementation:
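A sketch of the steps above as an LSD radix sort over decimal digits, assuming non-negative integers (names illustrative):

```javascript
function radixSort(arr) {
  const a = arr.slice(); // assumes non-negative integers
  const max = Math.max(...a, 0); // step <1>: the largest number drives the digit count
  // one stable distribution pass per decimal digit, least significant first
  for (let exp = 1; Math.floor(max / exp) > 0; exp *= 10) {
    const buckets = Array.from({ length: 10 }, () => []);
    for (const v of a) buckets[Math.floor(v / exp) % 10].push(v); // distribute by digit
    let i = 0;
    for (const b of buckets) for (const v of b) a[i++] = v; // collect in digit order
  }
  return a;
}
```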

Radix sort (LSD) GIF demo:

(3) Algorithm analysis

Best case: T(n) = O(n * k)

Worst case: T(n) = O(n * k)

Average case: T(n) = O(n * k)

There are two ways to sort radix:

MSD sorts starting from the most significant (high-order) digit;

LSD sorts starting from the least significant (low-order) digit.

Radix sort vs counting sort vs bucket sort

These three sorting algorithms all use the concept of buckets, but there are obvious differences in the way buckets are used:

Radix sort: Buckets are allocated based on each digit of the key value

Counting sort: Each bucket stores a single key value

Bucket sort: Each bucket stores a range of values
