1. Bubble sort algorithm
1.1 Algorithm idea
The basic idea of Bubble Sort is:
In the i-th pass (i = 1, 2, …), start from the first element and compare each pair of adjacent elements among the first n - i + 1 elements. If the former is greater than the latter, swap them; otherwise leave them in place.
1.2 Algorithm Steps
- First compare the 1st element with the 2nd element; if the former is greater than the latter, swap them, otherwise leave them in place.
- Then compare the 2nd element with the 3rd element; if the former is greater than the latter, swap them, otherwise leave them in place.
- Continue in the same way until the (n - 1)-th element is compared (and possibly swapped) with the n-th element. After this pass, the element with the largest value sits at the n-th position of the sequence.
- After that, run the same process over the first n - 1 elements, so that the element with the largest value among them ends up at the (n - 1)-th position.
- Then repeat the process over the first n - 2 elements, and so on, until a pass in which no element changes position; the sequence is then sorted.
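The steps above end the sort as soon as a pass makes no swaps. A minimal sketch of that early-exit check (the `swapped` flag is my addition, not part of the implementation in 1.5):

```python
def bubble_sort_early_exit(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        # One pass over the first n - i elements
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # No swaps in this pass: the sequence is already sorted
        if not swapped:
            break
    return arr
```

On an already-sorted input this does one pass of n - 1 comparisons and stops, which is the O(n) best case discussed in 1.4.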
1.3 Animation Demonstration
1.4 Algorithm Analysis
- In the best case, the initial sequence is already in ascending order; then only one pass of n - 1 comparisons is needed, with no element moves, and the algorithm can finish. In this case the time complexity of the algorithm is O(n).
- The worst case is when the initial sequence is in reverse order, or when the element with the smallest value sits at the end of the sequence; then all n - 1 passes are required. The average time complexity of the bubble sort algorithm is therefore O(n²).
- Bubble sort moves elements many times during sorting, so it is best suited to small sequences, especially when the initial state of the sequence is already nearly ordered. For the general case it is one of the least time-efficient sorting methods.
- Since exchanges happen only between adjacent elements and never change the relative positions of elements with the same value, bubble sort is a stable sorting method.
1.5 Code Implementation
from typing import List

class Solution:
    def bubbleSort(self, arr: List[int]) -> List[int]:
        for i in range(len(arr)):
            for j in range(len(arr) - i - 1):
                # If the former is greater than the latter, the two switch places
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.bubbleSort(nums)
2. Selection sort algorithm
2.1 Algorithm idea
Basic idea of Selection Sort:
In the i-th pass (i = 1, 2, …, n − 1), select the element with the smallest value from the last n − i + 1 elements and swap it with the first of those n − i + 1 elements, i.e. with the element at position i of the whole sequence. And so on, until i == n − 1, when the sort ends.
It can be summarized as: in each pass, select the smallest element from the remaining unsorted elements and swap it with the first of the unsorted elements.
2.2 Algorithm Steps
- The algorithm uses an integer variable i both as the counter of sorting passes and, during the i-th pass, as the position of the 1st of the last n − i + 1 elements.
- An integer variable min_i records the position of the smallest element among those n − i + 1 elements.
- At the start of each pass, set min_i = i (that is, tentatively assume that the i-th element of the sequence is the smallest; the true position of the minimum is determined after the comparisons).
- When the comparisons of the i-th pass finish, the smallest of the n − i + 1 elements is the one at index min_i. At this point, if min_i == i, the smallest element is already the 1st of the n − i + 1 elements, meaning no swap is needed in this pass.
2.3 Animation Demonstration
2.4 Algorithm Analysis
Selection sorting a sequence of n elements requires n - 1 passes.
- When the original sequence is already ascending, the number of element moves is the minimum: 0. The number of moves is the maximum, 3(n - 1), when the sequence starts in descending order (3 is the number of moves per execution of the swap of arr[i] and arr[min_i]).
- No matter what the initial state of the elements in the sequence is, the i-th pass finds the smallest element with n - i comparisons. The number of comparisons required over the whole sort is therefore always the same: n(n - 1)/2.
- This shows that the number of comparisons performed by selection sort has nothing to do with the original state of the sequence, and the time complexity of the algorithm is O(n²).
- Because the swap of the smallest value with the 1st unsorted element happens between non-adjacent elements, it may change the relative order of elements with equal values. Selection sort is therefore an unstable sorting method.
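That instability can be seen on a tiny example. Here the records are hypothetical (key, label) pairs that are compared only by key; the labels are illustrative names, not from the implementation below:

```python
def selection_sort_by_key(records):
    # Selection sort comparing only the first field of each record
    for i in range(len(records) - 1):
        min_i = i
        for j in range(i + 1, len(records)):
            if records[j][0] < records[min_i][0]:
                min_i = j
        if i != min_i:
            records[i], records[min_i] = records[min_i], records[i]
    return records

# Two records share key 2; 'a' starts before 'b'
result = selection_sort_by_key([(2, 'a'), (2, 'b'), (1, 'c')])
# → [(1, 'c'), (2, 'b'), (2, 'a')]: the equal keys swapped order
```

The long-distance swap of (2, 'a') with (1, 'c') in the first pass moved (2, 'a') behind (2, 'b'), which is exactly the instability described above.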
2.5 Code Implementation
class Solution:
    def selectionSort(self, arr):
        for i in range(len(arr) - 1):
            # Record the index of the smallest number in the unsorted part
            min_i = i
            for j in range(i + 1, len(arr)):
                if arr[j] < arr[min_i]:
                    min_i = j
            # If a smaller number was found, swap the element at i with the element at min_i
            if i != min_i:
                arr[i], arr[min_i] = arr[min_i], arr[i]
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.selectionSort(nums)
3. Insertion sort algorithm
3.1 Algorithm Idea
The basic idea of Insertion Sort:
The whole sequence is divided into two parts: the first i - 1 elements form an ordered sequence, and the last n - i + 1 elements form an unordered sequence. In each pass, the first element of the unordered sequence is inserted into its place in the ordered sequence.
It can be summarized as: in each pass, insert the first element of the remaining unordered sequence into the appropriate position of the ordered sequence.
3.2 Algorithm Steps
- Treat the 1st element as the ordered sequence and elements 2 through n as the unordered sequence.
- Take the first element from the unordered sequence and scan the ordered sequence from back to front.
- If the scanned element is larger than the taken element, move the scanned element one position to the right, and continue scanning forward until a position is found whose element is less than or equal to the taken element.
- Insert the taken element into that position.
- Repeat steps 2 to 4 until all elements are in the ordered sequence.
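Steps 2 to 4 can be sketched for a single element on its own; `ordered` here is assumed to be an already-sorted prefix, and the function name is mine:

```python
def insert_one(ordered, value):
    # Scan the ordered list from back to front, shifting larger elements right
    ordered.append(value)          # make room at the end
    j = len(ordered) - 1
    while j > 0 and ordered[j - 1] > value:
        ordered[j] = ordered[j - 1]
        j -= 1
    ordered[j] = value             # insert at the found position
    return ordered

insert_one([1, 3, 5], 4)           # → [1, 3, 4, 5]
```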
3.3 Animation Demonstration
3.4 Algorithm Analysis
- Sorting a sequence of n elements with insertion sort requires n - 1 passes in total.
- For the insertion sort algorithm, the whole sorting process needs only one auxiliary slot, temp.
- When the original sequence is already ascending, each value of i requires only one comparison between elements, so the total number of comparisons is the minimum, n - 1, and no elements (records) need to be moved: this is the best case.
- In the worst case, the sequence starts in descending order; each value of i then requires i comparisons, and the total number of comparisons between elements reaches its maximum, about n(n - 1)/2.
- If the initial state of the sequence is random, i.e. every permutation of the elements being sorted is equally likely, then taking the average of the minimum and maximum above gives roughly n²/4 comparisons. Thus the time complexity of the insertion sort algorithm is O(n²).
- Insertion sort is a stable sorting method.
3.5 Code Implementation
class Solution:
    def insertionSort(self, arr):
        for i in range(1, len(arr)):
            # temp is the first element taken from the unordered sequence
            temp = arr[i]
            j = i
            # Move the scanned elements to the right and find the insertion position
            while j > 0 and arr[j - 1] > temp:
                arr[j] = arr[j - 1]
                j -= 1
            # Insert the taken element into position
            arr[j] = temp
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.insertionSort(nums)
4. Shell sort algorithm
4.1 Algorithm Idea
Basic idea of Shell Sort:
The whole sequence is divided into several subsequences according to a gap value, and insertion sort is applied to each subsequence separately. The gap is then gradually reduced for the next round of subsequence division and insertion sort, until the final round with gap 1 performs insertion sort on the whole sequence.
4.2 Algorithm Steps
- First choose a gap value gap, then divide the sequence being sorted into subsequences: all elements whose positions are gap apart belong to the same subsequence, and insertion sort is carried out within each subsequence.
- Then reduce the gap, re-divide the whole sequence into subsequences according to the new gap, and sort each subsequence again, and so on, until gap = 1.
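How a gap value splits the sequence into subsequences can be made concrete; the sample values here are illustrative:

```python
def gap_subsequences(arr, gap):
    # Indices that are gap apart belong to the same subsequence
    return [arr[start::gap] for start in range(gap)]

groups = gap_subsequences([8, 1, 7, 3, 6, 5, 4, 2], 4)
# groups → [[8, 6], [1, 5], [7, 4], [3, 2]]: four subsequences of 2 elements
```

Each of these short subsequences is insertion-sorted in place; with gap = 1 there is a single subsequence, the whole array.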
4.3 Graphic Demonstration
4.4 Algorithm Analysis
- The speed of Shell sort depends on the chosen gap sequence; it is not easy to work out the number of comparisons for a given gap sequence, and no complete mathematical analysis has been given.
- In the algorithm above, the gap is halved each round: for a sequence of n elements, the initial gap is ⌊n / 2⌋ and each subsequent gap is half the previous one, so the gap reaches 1 after about log₂n rounds. The total number of sorting rounds of Shell sort is therefore on the order of log₂n.
- It can also be seen from the algorithm that the outermost while loop is on the order of log₂n, and the middle loop is on the order of n. When the sequence is divided into more subsequences, each subsequence has fewer elements and the innermost loop runs fewer times; conversely, when the number of subsequences drops, each subsequence has more elements, but by then the whole sequence is closer to sorted and the loop count does not grow much. The time complexity of Shell sort therefore lies between O(n log₂n) and O(n²).
- Shell sort is an unstable sorting algorithm.
4.5 Code Implementation
class Solution:
    def shellSort(self, arr):
        size = len(arr)
        gap = size // 2
        while gap > 0:
            # Insertion sort within each gap-separated subsequence
            for i in range(gap, size):
                temp = arr[i]
                j = i
                while j >= gap and arr[j - gap] > temp:
                    arr[j] = arr[j - gap]
                    j -= gap
                arr[j] = temp
            # Reduce the gap
            gap = gap // 2
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.shellSort(nums)
5. Merge sort algorithm
5.1 Algorithm Idea
The basic idea of Merge Sort:
Using the classic divide-and-conquer strategy, the current sequence is recursively split into two halves; the sorted halves are then merged pairwise into a single ordered sequence.
5.2 Algorithm Steps
- Initially, treat the sequence of n records to be sorted as n ordered subsequences (a subsequence of one element is always ordered), each of length 1.
- Merge the ordered subsequences in the current group pairwise; this halves the number of sorted subsequences and doubles the length of each one.
- Repeat the pairwise merging on the doubled-length ordered subsequences until a single ordered sequence of length n is obtained.
5.3 Animation Demonstration
5.4 Algorithm Analysis
- The time complexity of merge sort equals the number of merge passes times the time of each pass. The sub-procedure merge(left_arr, right_arr) runs in O(n), and there are O(log₂n) passes, so the total time complexity of the merge sort algorithm is O(n log₂n).
- Merge sort needs auxiliary space of the same size as the sequence being sorted, so the space complexity of the algorithm is O(n).
- When merging two ordered subsequences, if the same value appears in both, merge(left_arr, right_arr) copies the element of the left (earlier) sequence first, which guarantees that the relative order of the two equal elements does not change. Merge sort is therefore a stable sorting algorithm.
5.5 Code Implementation
class Solution:
    # Merge two sorted arrays
    def merge(self, left_arr, right_arr):
        arr = []
        while left_arr and right_arr:
            if left_arr[0] <= right_arr[0]:
                arr.append(left_arr.pop(0))
            else:
                arr.append(right_arr.pop(0))
        while left_arr:
            arr.append(left_arr.pop(0))
        while right_arr:
            arr.append(right_arr.pop(0))
        return arr

    def mergeSort(self, arr):
        # Split
        size = len(arr)
        if size < 2:
            return arr
        mid = size // 2
        left_arr, right_arr = arr[0: mid], arr[mid:]
        # Recursively split and merge
        return self.merge(self.mergeSort(left_arr), self.mergeSort(right_arr))

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.mergeSort(nums)
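One practical note on the merge above: left_arr.pop(0) on a Python list is O(n), which quietly adds an extra factor to the stated cost. A common alternative sketch walks the two arrays with indices instead (the function name is mine):

```python
def merge_indexed(left_arr, right_arr):
    arr = []
    i = j = 0
    # Walk both arrays with indices instead of popping from the front
    while i < len(left_arr) and j < len(right_arr):
        if left_arr[i] <= right_arr[j]:   # <= keeps the sort stable
            arr.append(left_arr[i])
            i += 1
        else:
            arr.append(right_arr[j])
            j += 1
    # Append whatever remains of either side
    arr.extend(left_arr[i:])
    arr.extend(right_arr[j:])
    return arr
```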
6. Quicksort algorithm
6.1 Algorithm Idea
The basic idea of Quick Sort:
One partitioning pass splits the unordered sequence into two independent subsequences, where every value in the first subsequence is smaller than every value in the second. The two subsequences are then sorted recursively, making the entire sequence ordered.
6.2 Algorithm Steps
- Pick a pivot (a "base number") from the array.
- Partition the array into a left part and a right part by moving the elements larger than the pivot to its right and the elements smaller than the pivot to its left.
- Repeat the second step on the left and right parts separately until each part contains at most one number; the sort is then finished.
6.3 Animation Demonstration
6.4 Algorithm Analysis
- Quicksort takes the longest when the elements are already ordered at the start. In that case, the first pass fixes the first element in its original position after n - 1 comparisons and leaves a subsequence of length n - 1; the second pass fixes the second element after n - 2 comparisons and leaves a subsequence of length n - 2; and so on. The total number of comparisons is (n − 1) + (n − 2) + … + 1 = n(n − 1)/2, so the worst-case time complexity is O(n²).
- In the other extreme, if every partitioning pass places the pivot in the middle of the sequence, splitting the current sequence into two subsequences of equal size, then the time to quicksort a sequence of length n satisfies T(n) = 2T(n/2) + O(n). The time complexity of quicksort is then O(n log₂n), clearly better than the sorting algorithms discussed above.
- Whether or not quicksort is written recursively, a stack or similar auxiliary structure is needed to store the bounds of the subsequences still to be sorted. In the worst case the space is O(n).
- If the algorithm is rewritten to compare the lengths of the two subsequences after each partition and quicksort the shorter one first, the space complexity can be reduced to O(log₂n).
- Quicksort is an unstable sorting algorithm, and it is also not well suited to linked-list structures.
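The space optimization just mentioned can be sketched by looping on the longer side and recursing only into the shorter one; the partition here is the plain last-element (Lomuto) scheme, without the random pivot used in 6.5:

```python
def quick_sort_short_first(arr, low, high):
    # Recurse into the shorter side, loop on the longer one:
    # the recursion depth is then bounded by O(log2 n)
    while low < high:
        pi = partition(arr, low, high)
        if pi - low < high - pi:
            quick_sort_short_first(arr, low, pi - 1)
            low = pi + 1
        else:
            quick_sort_short_first(arr, pi + 1, high)
            high = pi - 1
    return arr

def partition(arr, low, high):
    # Lomuto partition with the last element as the pivot
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1
```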
6.5 Code Implementation
import random

class Solution:
    # Pick a random pivot and move elements to the correct side of it
    def randomPartition(self, arr: List[int], low: int, high: int) -> int:
        i = random.randint(low, high)
        arr[i], arr[high] = arr[high], arr[i]
        return self.partition(arr, low, high)

    # Take the last element as the pivot and move elements to the correct side of it
    def partition(self, arr: List[int], low: int, high: int) -> int:
        i = low - 1
        pivot = arr[high]
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1

    # Sort recursively
    def quickSort(self, arr: List[int], low: int, high: int) -> List[int]:
        if low < high:
            pi = self.randomPartition(arr, low, high)
            self.quickSort(arr, low, pi - 1)
            self.quickSort(arr, pi + 1, high)
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.quickSort(nums, 0, len(nums) - 1)
7. Heap sort algorithm
7.1 Algorithm Idea
The basic idea of Heap Sort:
A sorting algorithm designed on top of the "heap" structure. The array is first built into a max heap; then the node with the largest value is repeatedly removed from the heap, while the remaining elements are kept a max heap.
7.2 Definition of heap
A heap is a complete binary tree that satisfies either of the following two conditions:
- Max heap: every node's value ≥ its children's values.
- Min heap: every node's value ≤ its children's values.
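Either condition can be checked directly on an array stored as a complete binary tree. A small sketch for the max-heap case, using 0-based indexing (children of node i at 2i + 1 and 2i + 2, unlike the 1-based numbering used in 7.3.1); the function name is mine:

```python
def is_max_heap(arr):
    # Every parent must be >= each of its children
    n = len(arr)
    for i in range(n):
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and arr[i] < arr[left]:
            return False
        if right < n and arr[i] < arr[right]:
            return False
    return True

is_max_heap([9, 5, 8, 1, 3])   # → True
is_max_heap([1, 5])            # → False: the child 5 exceeds the root 1
```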
7.3 Algorithm Procedure
- First, build the unordered sequence into an initial max heap, so that the largest of the n elements sits at the 1st position of the sequence.
- Then swap the 1st element of the sequence (the maximum) with the last element.
- After that, re-adjust the subsequence formed by the first n - 1 elements into a new max heap; this brings the maximum of the subsequence to the 1st position, and that element is then swapped with the (n - 1)-th element.
- Then the first n - 2 elements are adjusted into a new max heap… and so on, until the whole sequence becomes an ordered sequence.
As can be seen, the heap sort algorithm mainly involves an "initial heap building method" and a "heap adjustment method".
7.3.1 Heap adjustment method
The heap adjustment method is to reconstruct the sequence of remaining elements after removing the maximum element into a new heap. The specific steps are as follows:
- Starting at the root node, adjust node positions from top to bottom to make the tree a heap. That is, swap the node numbered i with whichever of its left child (numbered 2 * i) and right child (numbered 2 * i + 1) has the largest value, if that value exceeds its own.
- Because positions changed, the heap property of the subtree rooted at the swapped child may be broken. The same top-down adjustment therefore continues from that child node.
- And so on, until the whole binary tree is a max heap.
7.3.2 Initial heap establishment method
- If the depth of the complete binary tree corresponding to the original sequence (not necessarily a heap yet) is d, start from the right-most branch node of layer d - 1 (its number is i = ⌊n / 2⌋) and call the heap adjustment algorithm on it.
- Each time the heap adjustment algorithm returns, execute i = i - 1 and call it again, until i == 1; this adjusts the original sequence into an initial heap.
7.4 Animation Demonstration
7.4.1 Demonstration of heap adjustment method
7.4.2 Demonstration of initial heap establishment method
7.4.3 Demonstration of heap sort method
7.5 Algorithm Analysis
- The time of heap sort is mainly spent in two places:
  - Building the original sequence into an initial heap.
  - During sorting, repeatedly removing the maximum element and re-adjusting the remaining elements into a new heap.
- Suppose the depth of the complete binary tree corresponding to the original sequence is d. The algorithm consists of two independent loops:
  - The 1st loop builds the initial heap: starting from layer i = d - 1 and working up to layer i = 1, the adjust algorithm is called once for each branch node. In each call, a node on layer i can sink at most to layer d, i.e. move a maximum distance of d - i (the case where the root of the sub-heap moves to the last layer).
  - Layer i holds at most 2^(i - 1) nodes, so the adjust calls for that layer move nodes a total distance of at most 2^(i - 1) × (d - i). The time of the 1st loop is therefore bounded by the sum over all layers of (nodes on the layer) × (maximum moving distance), i.e. the sum of 2^(i - 1) × (d - i) for i = d - 1 down to 1, which works out to O(n).
  - In the 2nd loop, each call of the adjust algorithm moves a node a distance of at most the depth of the complete binary tree, ⌊log₂n⌋ + 1, and adjust is called n - 1 times, so the time cost of the 2nd loop is O(n log₂n).
- Therefore, the time complexity of heap sort is O(n log₂n).
- Since heap sort needs only one record-sized auxiliary slot, the space complexity of heap sort is O(1).
- Heap sort is an unstable sorting algorithm, and it is also not well suited to implementation on linked lists.
7.6 Code Implementation
class Solution:
    # Sift down: adjust the subtree rooted at index into a max heap
    def heapify(self, arr: List[int], index: int, end: int):
        left = index * 2 + 1
        right = left + 1
        while left <= end:
            # The current node is a branch (non-leaf) node
            max_index = index
            if arr[left] > arr[max_index]:
                max_index = left
            if right <= end and arr[right] > arr[max_index]:
                max_index = right
            if index == max_index:
                # No swap needed: the adjustment is complete
                break
            arr[index], arr[max_index] = arr[max_index], arr[index]
            # Continue adjusting the affected subtree
            index = max_index
            left = index * 2 + 1
            right = left + 1

    # Build the initial max heap
    def buildMaxHeap(self, arr: List[int]) -> List[int]:
        size = len(arr)
        # (size - 2) // 2 is the last non-leaf node
        for i in range((size - 2) // 2, -1, -1):
            self.heapify(arr, i, size - 1)
        return arr

    # Ascending heap sort:
    # 1. Build the max heap first.
    # 2. Swap the top element with the last one, then re-adjust the rest; the top is again the maximum.
    # 3. Swap the top element with the second-to-last one, then re-adjust, obtaining the second-largest value.
    # 4. And so on, until the last swap is done.
    def maxHeapSort(self, arr: List[int]) -> List[int]:
        self.buildMaxHeap(arr)
        size = len(arr)
        for i in range(size):
            arr[0], arr[size - i - 1] = arr[size - i - 1], arr[0]
            self.heapify(arr, 0, size - i - 2)
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.maxHeapSort(nums)
8. Counting sort algorithm
8.1 Algorithm Idea
The basic idea of Counting Sort:
Use an additional array counts, in which counts[i] is the number of elements of the array arr to be sorted whose value equals i. Then place the elements of arr into their correct positions according to counts.
8.2 Algorithm Steps
- Find the largest and smallest elements in the array to be sorted.
- Count the number of occurrences of each value i in the array, and store it in the i-th slot of counts.
- Accumulate all the counts: starting from the first element of counts, add each entry to the previous one.
- Fill the target array in reverse: place each element i at position counts[i] of the new array, and subtract 1 from counts[i] each time an element is placed.
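The counting and accumulation steps can be traced on a small example (the values are illustrative):

```python
arr = [2, 5, 3, 0, 2, 3, 0, 3]           # values in the range 0 ~ 5
counts = [0] * 6
for num in arr:                           # step 2: count occurrences
    counts[num] += 1
# counts → [2, 0, 2, 3, 0, 1]
for j in range(1, 6):                     # step 3: accumulate
    counts[j] += counts[j - 1]
# counts → [2, 2, 4, 7, 7, 8]: counts[i] is now the number of
# elements <= i, i.e. one past the last output slot for value i
```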
8.3 Animation Demonstration
8.4 Algorithm Analysis
- When the input consists of n integers in the range 0 ~ k, the time complexity of counting sort is O(n + k).
- Because the length of the counting array counts depends on the range of the data to be sorted (the maximum of the array minus the minimum, plus 1), counting sort costs a lot of time and memory when sorting arrays with a large value range.
- Counting sort is generally used to sort integers, not, say, names alphabetically.
- Counting sort is a stable sort algorithm.
8.5 Code Implementation
class Solution:
    def countingSort(self, arr):
        # The largest and smallest elements in the array to be sorted
        arr_min, arr_max = min(arr), max(arr)
        size = arr_max - arr_min + 1
        counts = [0 for _ in range(size)]
        # Count the occurrences of each element num, stored at slot num - arr_min
        for num in arr:
            counts[num - arr_min] += 1
        # Accumulate all the counts
        for j in range(1, size):
            counts[j] += counts[j - 1]
        # Fill the target array in reverse
        res = [0 for _ in range(len(arr))]
        for i in range(len(arr) - 1, -1, -1):
            res[counts[arr[i] - arr_min] - 1] = arr[i]
            counts[arr[i] - arr_min] -= 1
        return res

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.countingSort(nums)
9. Bucket sorting algorithm
9.1 Algorithm Idea
The basic idea of Bucket Sort:
Divide the unsorted array into several "buckets", and sort the elements of each bucket separately.
9.2 Algorithm Procedure
- Divide the value range into n subintervals of equal size, each called a bucket.
- Iterate over the array and put each element into its corresponding bucket.
- Sort the elements within each bucket individually (using insertion sort, merge sort, quicksort, and so on).
- Finally, concatenate the elements of the buckets in order.
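Which bucket an element falls into is a single integer division on its offset from the minimum. A sketch, assuming a bucket width of 10 (the function name and width are mine):

```python
def bucket_index(num, arr_min, bucket_size=10):
    # Elements arr_min .. arr_min + 9 go to bucket 0, the next 10 values to bucket 1, ...
    return (num - arr_min) // bucket_size

bucket_index(5, 0)     # → 0
bucket_index(27, 0)    # → 2
bucket_index(15, 10)   # → 0: offsets are measured from the minimum
```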
9.3 Graphic Demonstration
9.3.1 Delimit the molecular interval
9.3.2 Putting array elements into a bucket and sorting the elements in the bucket separately
9.3.3 Merge elements in a Bucket and sort them
9.4 Algorithm Analysis
- Bucket sort can run in linear time: when the number of input elements is n and the number of buckets is m, the time complexity of bucket sort is O(n + m).
- Since bucket sort uses auxiliary space, the space complexity of bucket sort is O(n + m).
- If a stable sorting algorithm such as insertion sort is used inside the buckets, bucket sort is also a stable sorting algorithm.
9.5 Code Implementation
class Solution:
    # Insertion sort, used inside each bucket
    def insertionSort(self, arr):
        for i in range(1, len(arr)):
            temp = arr[i]
            j = i
            while j > 0 and arr[j - 1] > temp:
                arr[j] = arr[j - 1]
                j -= 1
            arr[j] = temp
        return arr

    def bucketSort(self, arr, bucket_size=5):
        # Determine the value range and create the buckets
        arr_min, arr_max = min(arr), max(arr)
        bucket_count = (arr_max - arr_min) // bucket_size + 1
        buckets = [[] for _ in range(bucket_count)]
        # Put each element into its corresponding bucket
        for num in arr:
            buckets[(num - arr_min) // bucket_size].append(num)
        res = []
        # Insertion sort each bucket and append the sorted results to the target array
        for bucket in buckets:
            self.insertionSort(bucket)
            res.extend(bucket)
        return res

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.bucketSort(nums)
10. Radix sort algorithm
10.1 Algorithm Idea
Basic idea of Radix Sort:
Integers are split into separate digits by position, and the numbers are sorted by comparing one digit position at a time.
10.2 Algorithm Steps
Radix sort can proceed "Least Significant Digit first" (LSD) or "Most Significant Digit first" (MSD). The most common is the least-significant-digit-first method.
The algorithm steps below use the least-significant-digit-first method as the example.
- Iterate over the array, find the largest element, and get its number of digits.
- Sort the elements of the array using the ones digit as the key.
- Merge the array back together.
- Then sort and merge by the tens digit, the hundreds digit, …, up to the highest digit of the largest element; the array is then fully sorted.
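Extracting the sort key for pass i is one expression: divide by 10^i, then take the remainder mod 10 (the function name is mine):

```python
def digit(num, i):
    # The i-th decimal digit of num, counting from the ones place (i = 0)
    return num // (10 ** i) % 10

digit(482, 0)   # → 2 (ones)
digit(482, 1)   # → 8 (tens)
digit(482, 2)   # → 4 (hundreds)
digit(482, 3)   # → 0 (past the highest digit)
```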
10.3 Animation Demonstration
10.4 Algorithm Analysis
- The time complexity of radix sort is O(n × k), where n is the number of elements to be sorted and k is the number of digits. The size of k depends on the choice of digit representation (decimal digits, binary digits) and on the value range of the data type of the elements to be sorted.
- Radix sort as written above is only suitable for sorting non-negative integers.
- The space complexity of radix sort is O(n + k).
- Radix sort is a stable sort algorithm.
10.5 Code Implementation
class Solution:
    def radixSort(self, arr):
        # Number of digits of the largest element
        size = len(str(max(arr)))
        # Process digits from the ones place up to the highest place
        for i in range(size):
            buckets = [[] for _ in range(10)]
            for num in arr:
                buckets[num // (10 ** i) % 10].append(num)
            arr.clear()
            # Rebuild the array from the buckets
            for bucket in buckets:
                for num in bucket:
                    arr.append(num)
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.radixSort(nums)