1.1 Data
Data: the symbols that describe objective things; that is, the objects a computer can recognize, accept as input, and operate on. Data includes not only numeric types such as integers and reals, but also characters and non-numeric types such as sound, images, and video.
1.2 Data structure concepts
Data structures are the way a computer stores and organizes data. A data structure is a collection of data elements that have one or more specific relationships with each other. More often than not, a carefully chosen data structure leads to better running time or storage efficiency. Data structures are often associated with efficient retrieval algorithms and indexing techniques.
1.3 Concept of algorithm
An algorithm is a description of the steps for solving a specific problem, represented in a computer as a finite sequence of instructions.
What matters in an algorithm is not the language it is written in but the idea behind it.
1.3.1 Differences between algorithms and data structures
A data structure is a static description of the relationships between data elements; an efficient program designs and selects algorithms on top of a data structure.
- An algorithm is designed to solve a practical problem.
- A data structure is the carrier of the problem that the algorithm deals with.
- Data structures and algorithms complement each other.
1.3.2 Comparison of algorithms
Suppose we need to write a program that computes 1 + 2 + 3 + … + 100. How would you write it?
Most people would write something like the following (the examples here use Go):
sum := 0
n := 100
for i := 0; i <= n; i++ {
    sum += i
}
fmt.Printf("%d", sum)
Of course, if the problem were left to Gauss, he might write the following code:
n := 100
sum := (1 + n) * n / 2
fmt.Printf("%d", sum)
Obviously, the second algorithm is far more efficient from both a human and a computer point of view; a good algorithm makes a program much more efficient.
1.3.3 Characteristics of the algorithm
An algorithm has five basic characteristics: input, output, finiteness, determinacy, and feasibility.
Input/output: an algorithm has zero or more inputs and at least one output.
Finiteness: an algorithm executes a finite number of steps and terminates without entering an infinite loop, and each step completes in an acceptable amount of time.
Determinacy: each step of the algorithm has a definite meaning, without ambiguity.
Feasibility: each step of the algorithm must be feasible; that is, each step can be carried out a finite number of times.
1.4 Data structure classification
Depending on the point of view, data structures are divided into logical structures and physical structures.
1.4.1 Logical Structure
1.4.1.1 Set Structure: data elements in a set structure have no relationship other than belonging to the same set; every element is equal. The set relationship in data structures is similar to a set in mathematics, as shown in the figure below.
1.4.1.2 Linear Structure: There is a one-to-one relationship between data elements in a linear structure. As shown in figure:
1.4.1.3 Tree Structure: In the tree structure, there is a one-to-many hierarchical relationship between data elements, as shown in the figure:
1.4.1.4 Graph Structure: the data elements of a graph structure have a many-to-many relationship, as shown in the figure:
1.4.2 Physical Structure
Having covered logical structure, let's talk about physical structure, which some books call storage structure.
Physical structure: the way the logical structure of data is stored in a computer. There are two forms: sequential storage and chained storage.
1.4.2.1 Sequential Storage: Data elements are stored in storage units with consecutive addresses, and the logical and physical relationships of data are consistent, as shown in the figure:
This would be easy if all data structures were simple and orderly, but in reality elements are constantly added to and removed from the collection, just as people join or leave a queue. Sequential storage clearly does not cope well with such constantly changing structures.
1.4.2.2 Chained Storage Structure: data elements are stored in arbitrary storage units, which may or may not be contiguous. Because the physical placement of elements does not reflect their logical relationship, a pointer is needed to store the address of the next data element, so that related data can be found through that address. As shown in the figure:
2.1 Basic concepts of linear tables
Linear structure is one of the simplest and most commonly used data structures. Its basic characteristic is that the relationship between nodes is linear. The dynamic arrays, linked lists, stacks, and queues discussed in this chapter are all linear structures. They all have exactly one start node and one end node, and all their nodes can be arranged in a linear sequence. However, they are different implementations of abstract data types, and the differences between them are mainly operational.
A linear list is a finite sequence of zero or more data elements. There is an order between the data elements, the number of data elements is finite, and the data elements must all be of the same type.
Example: let's start with an interesting question: is the list of zodiac constellations in a year a linear list? As shown in the figure:
Properties of linear tables:
1) a0 is the first element in a linear list and has only one successor.
2) an is the last element in a linear list and has only one predecessor.
3) Every other element ai has exactly one predecessor and one successor.
4) A linear list can be accessed item by item, in sequence.
Abstract data type definitions for linear tables:
2.2 Sequential storage of linear tables
Usually linear tables can be stored sequentially and chained. In this lesson, we mainly discuss the sequential storage structure and the implementation of the corresponding algorithm.
The simplest way to represent a linear table is to store the elements of a linear table one by one in a contiguous storage area. The linear table represented in this order is also called a sequential table.
2.2.1 Design and implementation of sequential storage of linear tables (dynamic array)
Operation points:
- Insert element algorithm
  - Check whether the linear list is valid
  - Check whether the insertion position is valid
  - Check whether there is enough space
  - Move every element from the insertion position onward back one position
  - Insert the new element
  - Increase the list length by 1
- Get element operation
  - Check whether the linear list is valid
  - Check whether the position is valid
  - Fetch the element directly by array subscript
- Delete element algorithm
  - Check whether the linear list is valid
  - Check whether the position to delete is valid
  - Save the element being deleted
  - Move the elements after the deleted position forward one position
  - Decrease the list length by 1
- Element insertion
- Element deletion
Note: The capacity of a list and the length of a list are two different concepts
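The operation points above can be sketched as a minimal Go dynamic-array list. This is an illustrative sketch, not the original's implementation: the `SeqList` type and its method names are assumptions introduced here, and the capacity/length distinction noted above shows up as `len(l.data)` versus `l.length`.

```go
package main

import (
	"errors"
	"fmt"
)

// SeqList is a sequential list backed by a fixed-capacity array.
// len(l.data) is the capacity; l.length is the number of stored elements.
type SeqList struct {
	data   []int
	length int
}

func NewSeqList(capacity int) *SeqList {
	return &SeqList{data: make([]int, capacity)}
}

// Insert puts v at position pos (0-based), moving later elements back one slot.
func (l *SeqList) Insert(pos, v int) error {
	if pos < 0 || pos > l.length {
		return errors.New("invalid position")
	}
	if l.length >= len(l.data) {
		return errors.New("list is full")
	}
	for i := l.length; i > pos; i-- { // shift elements back one position
		l.data[i] = l.data[i-1]
	}
	l.data[pos] = v
	l.length++
	return nil
}

// Get fetches the element at pos directly by array subscript.
func (l *SeqList) Get(pos int) (int, error) {
	if pos < 0 || pos >= l.length {
		return 0, errors.New("invalid position")
	}
	return l.data[pos], nil
}

// Delete removes the element at pos, moving later elements forward one slot.
func (l *SeqList) Delete(pos int) (int, error) {
	if pos < 0 || pos >= l.length {
		return 0, errors.New("invalid position")
	}
	v := l.data[pos] // save the element being deleted
	for i := pos; i < l.length-1; i++ {
		l.data[i] = l.data[i+1]
	}
	l.length--
	return v, nil
}

func main() {
	l := NewSeqList(10)
	for i, v := range []int{1, 2, 4} {
		l.Insert(i, v)
	}
	l.Insert(2, 3)      // list becomes 1 2 3 4
	v, _ := l.Delete(0) // removes 1
	fmt.Println(v, l.data[:l.length])
}
```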
2.2.2 Advantages and disadvantages
- Advantages:
- Disadvantages:
2.3 Chained Storage of Linear Lists (Unidirectional Linked Lists)
In the sequential-storage (dynamic array) implementation above, the biggest drawback is that insertion and deletion require moving a large number of elements, which clearly takes time. Can we solve this? Yes: with a linked list.
In order to represent the logical relationship between each data element and its immediate successors, each element needs to store information indicating its immediate successors in addition to its own information.
- Singly linked list
  - In the chained storage structure of a linear list, each node contains only one pointer field. Such a list is called a singly linked list.
  - The pointer field of each node links the data elements of the linear list together in their logical order (see figure).
- Concepts:
  - Head node: the first node in the list; it contains a pointer to the first data node and some information about the list itself.
  - Data node: a node representing a data element; it contains the element's information and a pointer to the next data node.
  - Tail node: the last data node in the list; its next pointer is null, indicating that it has no successor.
2.3.1 Design and implementation of chained storage of linear lists (singly linked list)
- Insertion operation
- Deletion operation
2.3.2 Advantages and disadvantages
- Advantages:
- Disadvantages:
3.1 The Stack
3.1.1 Basic concepts of stacks
- Concept:
First of all, a stack is a linear list: its elements have a linear, one-after-another relationship. It is a special kind of linear list in which insertion and deletion are performed only at one end, called the top of the stack.
- Features:
A stack restricts where insertion and deletion may happen: they always occur at the top of the stack. The bottom of the stack is therefore fixed, and the element pushed first ends up at the bottom.
- Operations:
  - The insertion operation on a stack is called pushing. It is similar to loading a bullet into a magazine (as shown below).
  - The deletion operation on a stack is called popping. It is like a bullet leaving the magazine (as shown below).
3.1.2 Sequential storage of stacks
- Basic concepts
The sequential storage structure of a stack is called a sequential stack; it is a sequential list with restricted operations. Its storage structure: a group of storage units with consecutive addresses holds the data elements from the bottom of the stack to the top, and a pointer top records the position of the top element within the sequential list.
- Design and implementation
Since a stack is a special linear list, the sequential storage of a stack can be implemented with a sequential linear list.
3.1.3 Stack chain storage
- Basic concepts
The chained storage structure of a stack is called a linked stack.
Consider the following question:
A stack inserts and deletes only at the top. Should the top be at the head or the tail of the linked list?
A singly linked list already has a head pointer, and a stack needs a top-of-stack pointer, so why not combine the two? It is better to place the top of the stack at the head of the list. Moreover, since the top is already at the head, the header node commonly used in singly linked lists loses its purpose, so linked stacks usually omit it.
- Design and implementation
A linked stack is a special kind of linear list and can be implemented with a chained linear list.
3.1.4 Stack Application (Case)
3.1.4.1 Function call model
The stack is the basis for the implementation of the nested call mechanism
3.1.4.2 Nearest Match
Almost all compilers can detect whether parentheses match. How is symbol-pair detection implemented in a compiler? Consider the following character string:
(6) + 5 + 5 * 9/3 * 1) – (1 + 3 (
- Algorithm idea
Scan from the first character:
  - When a normal character is encountered, ignore it.
  - When an open bracket is encountered, push it onto the stack.
  - When a close bracket is encountered, pop the top symbol from the stack and try to match it:
    - Match succeeded: continue reading the next character.
    - Match failed: stop immediately and report an error.
At the end:
  - Success: all characters have been scanned and the stack is empty.
  - Failure: a match failed, or all characters were scanned but the stack is not empty.
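The steps above can be sketched in Go as follows. The function name `matched` is an illustrative assumption; the sketch handles the three common bracket pairs and ignores every other character, exactly as the algorithm describes.

```go
package main

import "fmt"

// matched reports whether every close bracket in s matches the nearest
// unmatched open bracket, using a stack of runes.
func matched(s string) bool {
	pairs := map[rune]rune{')': '(', ']': '[', '}': '{'}
	var stack []rune
	for _, c := range s {
		switch c {
		case '(', '[', '{':
			stack = append(stack, c) // open bracket: push
		case ')', ']', '}':
			if len(stack) == 0 || stack[len(stack)-1] != pairs[c] {
				return false // match failed: stop immediately
			}
			stack = stack[:len(stack)-1] // pop on a successful match
		}
		// normal characters are ignored
	}
	return len(stack) == 0 // success only if the stack ends up empty
}

func main() {
	fmt.Println(matched("(6+5+5*9/3*1)-(1+3)"))  // balanced
	fmt.Println(matched("(6)+5+5*9/3*1)-(1+3(")) // the string from above
}
```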
- Summary
  - The "last in, first out" property of the stack is useful for detecting paired things that are not contiguous.
  - The stack is ideal for situations that need "nearest matching".
3.1.4.3 Infix expressions and postfix expressions
- Postfix expression (also called reverse Polish notation, developed in the 1950s)
  - Puts operators after the numbers, which suits how computers evaluate.
  - The mathematical notation we are used to is called infix expression, which suits human thinking habits.
- Examples
  - 5 + 4 => 5 4 +
  - 1 + 2 * 3 => 1 2 3 * +
  - 8 + (3 - 1) * 5 => 8 3 1 - 5 * +
- Infix-to-postfix algorithm:
Iterate over the numbers and symbols in the infix expression:
  - Number: output it directly.
  - Symbol:
    - Open bracket: push it.
    - Operator: compare its priority with the symbol on top of the stack.
      - If the top of the stack has lower priority, push this operator.
        (An open bracket on the stack is treated as the lowest priority.)
      - If the top of the stack does not have lower priority, pop it and output it, then repeat the comparison before pushing.
    - Close bracket: pop and output symbols until the matching open bracket is reached, then discard both brackets.
At the end of the traversal: pop all remaining symbols on the stack and output them.
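The algorithm above can be sketched in Go for single-digit operands. The `toPostfix` and `prec` names are illustrative assumptions; precedence gives `*` and `/` rank 2, `+` and `-` rank 1, and an open bracket on the stack rank 0 (the lowest, as the note above says).

```go
package main

import (
	"fmt"
	"strings"
)

// prec ranks operators; '(' on the stack gets the lowest priority.
func prec(op byte) int {
	switch op {
	case '+', '-':
		return 1
	case '*', '/':
		return 2
	}
	return 0
}

// toPostfix converts a single-digit infix expression to postfix,
// following the stack algorithm described above.
func toPostfix(infix string) string {
	var out []string
	var stack []byte
	for i := 0; i < len(infix); i++ {
		c := infix[i]
		switch {
		case c >= '0' && c <= '9':
			out = append(out, string(c)) // number: output directly
		case c == '(':
			stack = append(stack, c) // open bracket: push
		case c == ')':
			// pop and output until the matching '(' is found, then drop it
			for len(stack) > 0 && stack[len(stack)-1] != '(' {
				out = append(out, string(stack[len(stack)-1]))
				stack = stack[:len(stack)-1]
			}
			stack = stack[:len(stack)-1]
		case c == ' ':
			// skip spaces
		default:
			// operator: pop while the top has priority not lower than c
			for len(stack) > 0 && prec(stack[len(stack)-1]) >= prec(c) {
				out = append(out, string(stack[len(stack)-1]))
				stack = stack[:len(stack)-1]
			}
			stack = append(stack, c)
		}
	}
	for len(stack) > 0 { // end of traversal: pop everything left
		out = append(out, string(stack[len(stack)-1]))
		stack = stack[:len(stack)-1]
	}
	return strings.Join(out, " ")
}

func main() {
	fmt.Println(toPostfix("8+(3-1)*5"))
}
```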
- Hands-on practice
Convert the infix expression that humans like to read into the postfix expression that computers like:
Infix expression: 8 + (3 - 1) * 5
Postfix expression: 8 3 1 - 5 * +
3.1.4.4 Calculation based on postfix expressions
- Think
How does a computer calculate from a postfix expression?
For example: 8 3 1 - 5 * +
- Calculation rules
Iterate over the numbers and symbols in the postfix expression:
  - Number: push it.
  - Symbol:
    - Pop the right operand from the stack.
    - Pop the left operand from the stack.
    - Apply the operator.
    - Push the result back onto the stack.
At the end of the traversal, the only number left on the stack is the result.
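The rules above can be sketched in Go for single-digit operands; the `evalPostfix` name is an illustrative assumption. Note the order of the pops: the right operand comes off the stack first.

```go
package main

import "fmt"

// evalPostfix evaluates a single-digit postfix expression using a stack.
func evalPostfix(expr string) int {
	var stack []int
	for i := 0; i < len(expr); i++ {
		c := expr[i]
		switch {
		case c >= '0' && c <= '9':
			stack = append(stack, int(c-'0')) // number: push
		case c == '+' || c == '-' || c == '*' || c == '/':
			right := stack[len(stack)-1] // right operand pops first
			left := stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			var r int
			switch c {
			case '+':
				r = left + right
			case '-':
				r = left - right
			case '*':
				r = left * right
			case '/':
				r = left / right
			}
			stack = append(stack, r) // push the result back
		}
	}
	return stack[0] // the only number left is the answer
}

func main() {
	fmt.Println(evalPostfix("831-5*+")) // 8 + (3 - 1) * 5
}
```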
3.2 The Queue
3.2.1 Basic Concepts of queues
A queue is a special kind of restricted linear table.
A queue is a linear list that allows only inserts at one end and deletes at the other.
A queue is a first-in first-out (FIFO) linear list. The end that allows insertion is called the rear, and the end that allows deletion is called the front; operations in the middle of the queue are not allowed. Suppose the queue is q = (a1, a2, …, an); then a1 is the front element and an is the rear element. Deletion always starts from a1, and insertion always happens at the rear. This matches our everyday habit: whoever arrives first leaves first, and whoever comes last stands at the end of the line. The diagram below:
3.2.3 Sequential storage of queues
- Basic concepts
A queue with a sequential storage structure is called a sequential queue.
Sequential queues suffer from false overflow: the queue appears full not because storage is exhausted but because the storage units of a sequential queue are not reused. The solution is to design the sequential queue as a circular structure.
- Sequential circular queues
A sequential circular queue joins all storage units end to end into one logical ring.
front = (front + 1) % length
rear = (rear + 1) % length
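The front/rear formulas above can be sketched in Go as a circular queue. The `CircularQueue` type is an illustrative assumption; this sketch uses the common convention of keeping one slot empty so that `front == rear` means empty and `(rear+1) % length == front` means full.

```go
package main

import "fmt"

// CircularQueue reuses storage by wrapping front and rear with modulo
// arithmetic. One slot is kept empty to distinguish full from empty.
type CircularQueue struct {
	data        []int
	front, rear int
}

func NewCircularQueue(capacity int) *CircularQueue {
	return &CircularQueue{data: make([]int, capacity+1)}
}

// Enqueue inserts at the rear; it fails when the ring is full.
func (q *CircularQueue) Enqueue(v int) bool {
	if (q.rear+1)%len(q.data) == q.front {
		return false // full
	}
	q.data[q.rear] = v
	q.rear = (q.rear + 1) % len(q.data)
	return true
}

// Dequeue removes from the front; it fails when the ring is empty.
func (q *CircularQueue) Dequeue() (int, bool) {
	if q.front == q.rear {
		return 0, false // empty
	}
	v := q.data[q.front]
	q.front = (q.front + 1) % len(q.data)
	return v, true
}

func main() {
	q := NewCircularQueue(2)
	q.Enqueue(1)
	q.Enqueue(2)
	v, _ := q.Dequeue()
	q.Enqueue(3) // reuses the slot freed by Dequeue: no false overflow
	fmt.Println(v)
}
```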
3.2.4 Chain storage of queues
- Basic concepts
A queue is also a special kind of linear list; its chained storage can be implemented with the chained storage of a linear list.
4.1 Basic Concepts of trees
- Definition of a tree:
A tree is a finite set T of n (n ≥ 0) nodes. In a non-empty tree there is exactly one node called the root; when n > 1, the remaining nodes are partitioned into m (m ≥ 0) disjoint finite sets T1, T2, …, Tm, each of which is itself a tree, called a subtree of the root.
- Structural characteristics of a tree
  - Nonlinear structure: each node has one direct predecessor but possibly multiple direct successors (1:n).
  - Trees are defined recursively: there are trees within trees.
  - A tree can be empty, i.e. have 0 nodes.
- Terminology
  - Root: the node with no predecessor.
  - Leaf: a terminal node, with no successor.
  - Forest: a set of m disjoint trees (for example, the subtrees left after deleting the root).
  - Ordered tree: the subtrees of each node are ordered from left to right and cannot be interchanged (the leftmost is first).
  - Unordered tree: the subtrees of each node can be interchanged.
  - Parent: a node's direct predecessor (the node directly above it).
  - Child: a node's direct successor (the root of one of its subtrees).
  - Siblings: nodes with the same parent.
  - Cousins: nodes whose parents are on the same level but are different nodes.
  - Ancestors: all nodes on the path from the root down to this node.
  - Descendants: all nodes in the subtrees rooted at this node.
  - Node: a data element of the tree.
  - Degree of a node: the number of subtrees attached to it.
  - Level of a node: the number of layers from the root to it (the root is on level 1).
  - Terminal node: a node of degree 0, i.e. a leaf.
  - Branch node: a node of degree greater than 0, also called an internal node.
  - Degree of the tree: the maximum degree over all nodes (Max{degree of each node}).
  - Depth (or height) of the tree: the maximum level over all nodes (Max{level of each node}).
In the figure below (c), the number of nodes = 10, the degree of the tree = 3, and the depth of the tree = 3
4.2 Tree representation
4.2.1 Graphical representation
The logical relationships between things can be expressed intuitively in the form of a tree diagram, as shown in the following figure:
4.2.2 Generalized table representation
Express the figure above in generalized table notation:
China (Hebei (Baoding, Shijiazhuang), Guangdong (Guangzhou, Dongguan), Shandong (Qingdao, Jinan))
The root is written to the left of the parenthesized list of its subtrees.
4.2.3 Left-child, right-sibling representation
The left-child, right-sibling notation converts a multi-way tree into a binary tree:
Node structure:
Each node has two pointer fields, one pointing to its first child and the other to its next sibling.
4.3 Binary tree concept
4.3.1 Basic Concepts of binary trees
- Definition:
A binary tree is a finite set of n (n ≥ 0) nodes; it is either empty or consists of a root node and two disjoint binary trees called the left subtree and the right subtree.
- Logical structure:
One-to-two (1:2)
- Basic features:
  - Each node has at most two subtrees (there is no node with degree greater than 2).
  - The left and right subtrees are ordered and cannot be swapped (it is an ordered tree).
- Basic forms:
- Binary tree properties
  - Property 1: if the root is on level 1, then level i of a binary tree contains at most 2^(i-1) nodes (i > 0).
  - Property 2: a binary tree of height k contains at most 2^k - 1 nodes (k ≥ 0).
  - Property 3: for any binary tree, if there are n2 nodes of degree 2, then the number of leaves n0 must be n2 + 1 (that is, n0 = n2 + 1).
- Concepts:
  - Full binary tree
A binary tree of depth k with 2^k - 1 nodes.
Feature: every level is "full" of nodes.
  - Complete binary tree
Every level except the last has the maximum number of nodes, and on the last level only some nodes on the right are missing.
The first k-1 levels are identical to a full binary tree, and the nodes on level k are as far to the left as possible.
  - Property 4: the depth of a complete binary tree with n nodes is ⌊log2 n⌋ + 1.
    (log2 n can be computed on a calculator as log n / log 2, e.g. for log2 15.)
  - Property 5: if the nodes of a complete binary tree are numbered from top to bottom and left to right starting at 1, then the left child of node i is numbered 2i, the right child 2i + 1, and the parent ⌊i/2⌋ (except for i = 1, the root).
Using this property you can use a complete binary tree to achieve sequential storage of trees.
What if the tree is not a complete binary tree?
Treat it as one by adding virtual nodes so that it fills out the shape of a full binary tree.
Disadvantages: (1) wasted space; (2) inconvenient insertion and deletion.
4.3.2 Representation of binary trees
Binary trees mainly use chained storage structures; sequential storage only suits complete and full binary trees, and static linked lists are also used.
- Sequential storage structure of binary trees
- Chained storage structure of binary trees
The chained storage structure of a binary tree mainly comes in two kinds: the binary linked list (two pointers per node) and the triple linked list (three pointers per node).
  - Binary linked list
The node structure of the binary linked list is as follows:
  - Node data type definition:
type BinaryNode struct {
Ch byte
LChild *BinaryNode
RChild *BinaryNode
}
  - Triple linked list
The node structure of the triple linked list is as follows:
Each node has three pointer fields: two point to the children (the left child and the right child) and one points to the node's parent.
  - Node data type definition:
type BinaryNode struct {
Ch byte
LChild *BinaryNode
RChild *BinaryNode
Parent *BinaryNode
}
4.3.3 Binary tree traversal
- Definition of traversal
Visiting every node along some search route exactly once (also called a tour).
- Purpose of traversal
Traversal is the prerequisite for insertion, deletion, modification, search, and sorting on tree structures; it is the basis and core of all binary tree operations.
- Traversal methods
Remember the convention that every node is visited "left before right".
There are three traversal schemes for trees:
DLR, LDR, LRD
preorder traversal, inorder traversal, postorder traversal
- DLR, preorder traversal: root first, then left, then right.
- LDR, inorder traversal: left first, then root, then right.
- LRD, postorder traversal: left first, then right, then root.
Note: "pre", "in", and "post" refer to whether the root D is visited before, between, or after its subtrees.
From a recursive point of view, these three algorithms are exactly the same, or the access path of the three traversal algorithms is the same, but the timing of access to nodes is different.
Each node is passed three times on the path from the dotted line's start to its end.
- Visiting on the first pass = preorder traversal
- Visiting on the second pass = inorder traversal
- Visiting on the third pass = postorder traversal
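The three recursive traversals can be sketched in Go using the `BinaryNode` type defined earlier (re-declared here so the sketch is self-contained); the function names and the small three-node example tree are illustrative assumptions.

```go
package main

import "fmt"

type BinaryNode struct {
	Ch     byte
	LChild *BinaryNode
	RChild *BinaryNode
}

// preorder: DLR, visit the root before its subtrees.
func preorder(n *BinaryNode, out *[]byte) {
	if n == nil {
		return
	}
	*out = append(*out, n.Ch)
	preorder(n.LChild, out)
	preorder(n.RChild, out)
}

// inorder: LDR, visit the root between its subtrees.
func inorder(n *BinaryNode, out *[]byte) {
	if n == nil {
		return
	}
	inorder(n.LChild, out)
	*out = append(*out, n.Ch)
	inorder(n.RChild, out)
}

// postorder: LRD, visit the root after its subtrees.
func postorder(n *BinaryNode, out *[]byte) {
	if n == nil {
		return
	}
	postorder(n.LChild, out)
	postorder(n.RChild, out)
	*out = append(*out, n.Ch)
}

func main() {
	//     A
	//    / \
	//   B   C
	root := &BinaryNode{Ch: 'A',
		LChild: &BinaryNode{Ch: 'B'},
		RChild: &BinaryNode{Ch: 'C'}}
	var pre, in, post []byte
	preorder(root, &pre)
	inorder(root, &in)
	postorder(root, &post)
	fmt.Println(string(pre), string(in), string(post))
}
```

Note that the three functions are identical except for where the visit happens, which is exactly the point made above: the access path is the same, only the timing of visiting each node differs.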
4.3.4 Binary tree programming practice
4.3.4.1 Calculating the number of binary tree leaf nodes
4.3.4.2 Calculating the Height (Depth) of a binary Tree
Idea:
- Find the height of the root's left subtree and of its right subtree, take the larger of the two, and add 1.
- If the left subtree is itself a tree, repeat step 1 on it; likewise for the right subtree.
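The idea above translates directly into a short recursive Go function; the `height` name and the example tree are illustrative assumptions.

```go
package main

import "fmt"

type BinaryNode struct {
	Ch     byte
	LChild *BinaryNode
	RChild *BinaryNode
}

// height returns the depth of the tree: the larger subtree height plus 1.
func height(n *BinaryNode) int {
	if n == nil {
		return 0
	}
	l, r := height(n.LChild), height(n.RChild)
	if l > r {
		return l + 1
	}
	return r + 1
}

func main() {
	//     A
	//    / \
	//   B   C
	//  /
	// D
	root := &BinaryNode{Ch: 'A',
		LChild: &BinaryNode{Ch: 'B', LChild: &BinaryNode{Ch: 'D'}},
		RChild: &BinaryNode{Ch: 'C'}}
	fmt.Println(height(root))
}
```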
4.3.4.3 lookup
4.3.4.4 Obtaining the parent node
4.3.4.5 Copying a Binary Tree
Idea:
- Allocate a new node.
- Copy the left subtree and the right subtree, and attach them to the new node.
- If the left subtree is itself a tree, repeat steps 1 and 2 on it; likewise for the right subtree.
4.4.4 Non-recursive traversal of binary trees
The three recursive traversal algorithms can also be implemented non-recursively by setting up a stack. Taking inorder traversal as an example, the non-recursive algorithm and the stack changes are shown in the figure below.
First, give each node a flag, false by default, and process nodes according to their status as follows.
Performing the procedure above yields the inorder traversal; to obtain the other traversals, modify step 2.4.
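A common way to write the non-recursive inorder traversal in Go uses an explicit stack instead of per-node flags; this sketch is an alternative formulation of the same idea, not the flag-based procedure from the figure. The function name `inorderIter` is an illustrative assumption.

```go
package main

import "fmt"

type BinaryNode struct {
	Ch     byte
	LChild *BinaryNode
	RChild *BinaryNode
}

// inorderIter replaces recursion with an explicit stack: walk left as far
// as possible, pop and visit, then turn to the right subtree.
func inorderIter(root *BinaryNode) []byte {
	var out []byte
	var stack []*BinaryNode
	cur := root
	for cur != nil || len(stack) > 0 {
		for cur != nil { // push the whole left path
			stack = append(stack, cur)
			cur = cur.LChild
		}
		cur = stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		out = append(out, cur.Ch) // visit on the way back up
		cur = cur.RChild          // then traverse the right subtree
	}
	return out
}

func main() {
	root := &BinaryNode{Ch: 'A',
		LChild: &BinaryNode{Ch: 'B'},
		RChild: &BinaryNode{Ch: 'C'}}
	fmt.Println(string(inorderIter(root)))
}
```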
4.4.5 Hierarchical traversal of binary trees
The hierarchical traversal process and queue changes of binary trees are shown in the figure below.
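The queue-driven process shown in the figure can be sketched in Go as follows; the `levelOrder` name is an illustrative assumption, and a slice is used as the FIFO queue to keep the sketch short.

```go
package main

import "fmt"

type BinaryNode struct {
	Ch     byte
	LChild *BinaryNode
	RChild *BinaryNode
}

// levelOrder visits nodes layer by layer using a FIFO queue.
func levelOrder(root *BinaryNode) []byte {
	var out []byte
	if root == nil {
		return out
	}
	queue := []*BinaryNode{root}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:] // dequeue the front node and visit it
		out = append(out, n.Ch)
		if n.LChild != nil { // enqueue children from left to right
			queue = append(queue, n.LChild)
		}
		if n.RChild != nil {
			queue = append(queue, n.RChild)
		}
	}
	return out
}

func main() {
	//     A
	//    / \
	//   B   C
	//  /
	// D
	root := &BinaryNode{Ch: 'A',
		LChild: &BinaryNode{Ch: 'B', LChild: &BinaryNode{Ch: 'D'}},
		RChild: &BinaryNode{Ch: 'C'}}
	fmt.Println(string(levelOrder(root)))
}
```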
4.4.6 Application of binary tree: Huffman tree – Huffman coding
1. Path length of a binary tree:
The sum of the path lengths from the root to every node is called the path length of the binary tree, i.e. PL = Σ li, summed over all nodes, where li is the path length from the root to node i.
Among binary trees with the same number of nodes, the complete binary tree has the shortest path length.
2. External path length of a binary tree
The sum of the path lengths from the root to all leaves is called the external path length of the binary tree. The total code length of a coding scheme equals the external path length of the corresponding coding binary tree, and the complete binary tree has the shortest external path length.
3. Weighted external path length of a binary tree
When characters are used with different probabilities, the probability of a character becomes the weight of its leaf node. The weighted path length from the root to a leaf x is the weight of x times the length of the path from the root to x. The sum of the weighted path lengths over all leaf nodes is called the weighted external path length of the binary tree, i.e.
WPL = Σ wi * li,
where wi is the weight of the i-th leaf node and li is the path length from the root to the i-th leaf node.
4. Constructing a Huffman tree and obtaining Huffman codes
A Huffman tree is defined as the binary tree with the shortest weighted external path length. Given n leaf nodes and their weight set, the corresponding Huffman tree is not necessarily unique.
The idea of the construction algorithm is to place leaves with larger weights closer to the root, so that the resulting binary tree has the minimum weighted external path length.
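The greedy construction can be sketched in Go by repeatedly merging the two smallest-weight trees. This is an illustrative pointer-based sketch, not the static-linked-list implementation described below: the `HNode`, `buildHuffman`, and `wpl` names are assumptions, and re-sorting a slice stands in for the priority queue a real implementation would use.

```go
package main

import (
	"fmt"
	"sort"
)

// HNode is a Huffman tree node; leaves carry the original weights.
type HNode struct {
	Weight      int
	Left, Right *HNode
}

// buildHuffman repeatedly merges the two smallest-weight trees until
// one tree remains, which is the Huffman tree.
func buildHuffman(weights []int) *HNode {
	var nodes []*HNode
	for _, w := range weights {
		nodes = append(nodes, &HNode{Weight: w})
	}
	for len(nodes) > 1 {
		sort.Slice(nodes, func(i, j int) bool {
			return nodes[i].Weight < nodes[j].Weight
		})
		a, b := nodes[0], nodes[1]
		merged := &HNode{Weight: a.Weight + b.Weight, Left: a, Right: b}
		nodes = append(nodes[2:], merged)
	}
	return nodes[0]
}

// wpl computes the weighted external path length: the sum of
// weight * depth over all leaves.
func wpl(n *HNode, depth int) int {
	if n.Left == nil && n.Right == nil {
		return n.Weight * depth
	}
	return wpl(n.Left, depth+1) + wpl(n.Right, depth+1)
}

func main() {
	// the weight set used in the static-linked-list example below
	root := buildHuffman([]int{5, 29, 7, 8, 14, 23, 3, 11})
	fmt.Println(wpl(root, 0))
}
```

Although ties can produce different tree shapes, every tree the greedy procedure builds is optimal, so the WPL it reports is the same.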
5. Decoding Huffman codes
Decoding with a Huffman tree: given a binary bit string S, start at the root of the Huffman tree and read S bit by bit, going to the left child on a 0 and to the right child on a 1. Each run of bits determines a path from the root to some leaf; whenever a leaf is reached, one character has been decoded, and decoding resumes from the root.
Construct the Huffman tree and obtain the Huffman codes.
A) Use static linked lists to store Huffman trees
The node structure of a static linked list is as follows:
A one-dimensional array is used to store the nodes above. For a Huffman tree with n leaf nodes, the array length is 2n-1: the first n elements store the leaf nodes, and the last n-1 elements store the degree-2 nodes. Suppose a Huffman tree is constructed from the weight set {5,29,7,8,14,23,3,11}; the initial and final states of the node array are shown in the figure below.
If the weight set {5,29,7,8,14,23,3,11} corresponds to characters A ~ H, Huffman tree and Huffman encoding are shown in the figure below.
5.1 Basic Concepts of Sorting
Sorting is very important in real life, for example, Taobao displaying search results ordered by the chosen criterion.
- Concept
Sorting is a common computer operation that rearranges a set of "unordered" data elements into "ordered" ones.
- Mathematical definition of sorting:
Suppose a sequence of n data elements {R1, R2, …, Rn} has the corresponding keyword sequence {K1, K2, …, Kn}, and that the keywords can be compared with each other, i.e. there is an ordering
Kp1 ≤ Kp2 ≤ … ≤ Kpn.
Rearranging the records into the sequence {Rp1, Rp2, …, Rpn} according to this inherent ordering is called sorting.
- Stability of sorting
If the sequence contains two data elements r[i] and r[j] with equal keywords k[i] == k[j], and r[i] comes before r[j] before sorting, then the sorting method is said to be stable if r[i] still precedes r[j] after sorting; otherwise it is unstable.
- Multi-keyword sorting
When more than one keyword is compared, the result is ordered by keyword 1 first; where keyword 1 is equal, by keyword 2; and so on, until where keyword n-1 is equal, by keyword n.
- Key operations in sorting
  - Comparison: determine the relative order of any two data elements by comparing them.
  - Swap: exchange data elements as needed to reach the expected result.
- Internal and external sorting
Sorting is classified into internal sorting and external sorting.
  - Internal sort: all records to be sorted are held in memory during the whole sort.
  - External sort: there are too many records to hold in memory at once, so the sort must exchange data between internal and external storage several times.
- Criteria for evaluating a sort
  - Time performance: the key differences lie in the numbers of comparisons and swaps.
  - Auxiliary storage space: the extra storage needed to complete the sort; when necessary, space can be traded for time.
  - Implementation complexity: an overly complex sort hurts the readability and maintainability of the code and may also hurt performance.
- Summary
  - Sorting is the process of turning unordered data elements into ordered ones.
  - Stability is one of the factors in choosing a sorting algorithm.
  - Comparison and swapping are the basic sorting operations.
  - There is no essential difference between multi-keyword and single-keyword sorting.
  - Time performance is the main factor distinguishing good sorting algorithms.
5.2 Exchange Sort
5.2.1 Bubble Sort
The basic idea of bubble sort is to compare the keyword values of two adjacent elements and swap them if they are in reverse order.
Bubble sort summary:
- Bubble sort is an inefficient sorting method; it is usable when the data size is very small, but other methods are preferable for larger data sets.
- The example above includes the usual optimization: a swapped flag records whether a pass made any exchange, cutting unnecessary passes once the sequence is already ordered.
- The average time complexity of bubble sort is O(n^2) and its space complexity is O(1). Bubble sort is stable.
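The idea, including the early-exit flag mentioned in the summary, can be sketched in Go as follows; the `bubbleSort` name is an illustrative assumption.

```go
package main

import "fmt"

// bubbleSort repeatedly swaps adjacent out-of-order pairs; the swapped
// flag ends the sort early once a full pass makes no exchange.
func bubbleSort(a []int) {
	for i := 0; i < len(a)-1; i++ {
		swapped := false
		for j := 0; j < len(a)-1-i; j++ {
			if a[j] > a[j+1] { // adjacent pair in reverse order
				a[j], a[j+1] = a[j+1], a[j]
				swapped = true
			}
		}
		if !swapped {
			break // the sequence is already ordered
		}
	}
}

func main() {
	a := []int{5, 1, 4, 2, 8}
	bubbleSort(a)
	fmt.Println(a)
}
```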
5.2.2 Quicksort
Quick sort is a partition-exchange sorting algorithm. Its basic idea: choose a value in the data sequence as the pivot (reference value); each pass scans alternately from the two ends of the sequence, exchanging elements smaller than the pivot toward the front and elements larger than the pivot toward the back, and the position where the scans meet becomes the pivot's final position.
1. Description of quicksort algorithm:
The quicksort (ascending) process of the above data sequence is shown in the figure below.
2. Analysis of quicksort algorithm
The average time complexity of quicksort is O(n log2 n) and its average space complexity is O(log2 n). Quicksort is unstable.
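The partition idea described above, scanning alternately from both ends and filling the hole left by the pivot, can be sketched in Go as follows; the `quickSort` name is an illustrative assumption.

```go
package main

import "fmt"

// quickSort sorts a[low..high] in place. The pivot's slot is a "hole"
// that moves as the scan alternates between the two ends of the region.
func quickSort(a []int, low, high int) {
	if low >= high {
		return
	}
	pivot := a[low] // reference value; its slot becomes the first hole
	i, j := low, high
	for i < j {
		for i < j && a[j] >= pivot { // scan from the right end
			j--
		}
		a[i] = a[j] // element smaller than the pivot moves to the front
		for i < j && a[i] <= pivot { // scan from the left end
			i++
		}
		a[j] = a[i] // element larger than the pivot moves to the back
	}
	a[i] = pivot // the meeting point is the pivot's final position
	quickSort(a, low, i-1)
	quickSort(a, i+1, high)
}

func main() {
	a := []int{49, 38, 65, 97, 76, 13, 27}
	quickSort(a, 0, len(a)-1)
	fmt.Println(a)
}
```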
5.3 Selection Sort
5.3.1 Direct selection sort
The basic idea of straight selection sort: in the first pass, select the element with the smallest (or largest) keyword from the n elements and put it in the first (or last) position; in the next pass, select the smallest (largest) of the remaining n-1 elements and put it in the next position; and so on, completing the sort after n-1 passes.
1. Description of the straight selection sort algorithm
2. Analysis of the straight selection sort algorithm
The number of comparisons in straight selection sort is n(n-1)/2.
The time complexity of straight selection sort is O(n^2) and its space complexity is O(1). Straight selection sort is unstable.
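The pass-by-pass selection described above can be sketched in Go as follows; the `selectSort` name is an illustrative assumption.

```go
package main

import "fmt"

// selectSort: each pass finds the smallest remaining element and swaps
// it into the first unsorted position.
func selectSort(a []int) {
	for i := 0; i < len(a)-1; i++ {
		min := i
		for j := i + 1; j < len(a); j++ { // scan the unsorted suffix
			if a[j] < a[min] {
				min = j
			}
		}
		a[i], a[min] = a[min], a[i] // one swap per pass
	}
}

func main() {
	a := []int{3, 1, 5, 2, 4}
	selectSort(a)
	fmt.Println(a)
}
```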
5.3.2 Heap sort
Heap sort is an application of the complete binary tree. The basic idea is to arrange the data sequence into a "heap" (a complete binary tree stored in an array) and, on each adjustment, traverse only a single root-to-leaf path of the tree.
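A standard array-based max-heap sort can be sketched in Go as follows. The `siftDown` and `heapSort` names are illustrative assumptions; note how `siftDown` walks only one root-to-leaf path per adjustment, and how the child indices `2i+1` and `2i+2` come from the complete-binary-tree numbering property (0-based here).

```go
package main

import "fmt"

// siftDown restores the max-heap property for the subtree rooted at i,
// within the first n elements of a (an array-stored complete binary tree).
func siftDown(a []int, i, n int) {
	for {
		largest := i
		l, r := 2*i+1, 2*i+2 // children of i in 0-based array storage
		if l < n && a[l] > a[largest] {
			largest = l
		}
		if r < n && a[r] > a[largest] {
			largest = r
		}
		if largest == i {
			return
		}
		a[i], a[largest] = a[largest], a[i]
		i = largest // continue down the single affected path
	}
}

// heapSort builds a max heap, then repeatedly swaps the heap top to the
// end of the unsorted region and re-heapifies.
func heapSort(a []int) {
	n := len(a)
	for i := n/2 - 1; i >= 0; i-- { // heapify from the last branch node
		siftDown(a, i, n)
	}
	for end := n - 1; end > 0; end-- {
		a[0], a[end] = a[end], a[0] // move the current maximum to the end
		siftDown(a, 0, end)
	}
}

func main() {
	a := []int{5, 29, 7, 8, 14, 23, 3, 11}
	heapSort(a)
	fmt.Println(a)
}
```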
5.4 Insertion Sort
5.4.1 Direct Insert Sort
Insertion sort is a simple sorting algorithm, also called direct insertion sort. It is a stable sorting algorithm and is efficient on data that is already partially ordered.
Insertion sort is efficient for small numbers of elements. Sorting playing cards is the most familiar example: starting with an empty hand, each card drawn is inserted into its proper place among the cards already held, keeping the hand ordered from small to large until all cards have been drawn. Insertion sort works the same way: each step takes one element from the unordered part and places it in the correct position in the sorted part, until the unordered part is empty.
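The card-sorting picture above can be sketched in Go as follows; the `insertSort` name is an illustrative assumption.

```go
package main

import "fmt"

// insertSort keeps a sorted prefix; each new element is shifted into
// its place, like inserting a drawn card into an ordered hand.
func insertSort(a []int) {
	for i := 1; i < len(a); i++ {
		v, j := a[i], i-1
		for j >= 0 && a[j] > v {
			a[j+1] = a[j] // shift larger elements one slot to the right
			j--
		}
		a[j+1] = v // drop v into its place in the sorted prefix
	}
}

func main() {
	a := []int{5, 2, 4, 6, 1, 3}
	insertSort(a)
	fmt.Println(a)
}
```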
5.4.2 Shell sort
Shell sort (sometimes translated as Hill sort) is also known as diminishing-increment sort. Its basic idea is direct insertion sort applied to gap-separated groups.
1. Description of the Shell sort algorithm:
2. Analysis of the Shell sort algorithm
The actual time required by Shell sort depends on the specific increment sequence, so its time complexity is hard to analyze; it is generally taken as O(n (log2 n)^2). The space complexity is O(1). Shell sort is unstable.
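The grouped insertion sort can be sketched in Go with the common gap sequence that halves each round; the `shellSort` name and the gap sequence are illustrative assumptions (other increment sequences give different running times, as noted above).

```go
package main

import "fmt"

// shellSort performs direct insertion sort on gap-separated groups,
// halving the gap until it reaches 1 (a plain insertion sort pass).
func shellSort(a []int) {
	for gap := len(a) / 2; gap > 0; gap /= 2 {
		for i := gap; i < len(a); i++ {
			v, j := a[i], i-gap
			for j >= 0 && a[j] > v { // insertion sort within the group
				a[j+gap] = a[j]
				j -= gap
			}
			a[j+gap] = v
		}
	}
}

func main() {
	a := []int{49, 38, 65, 97, 76, 13, 27, 49}
	shellSort(a)
	fmt.Println(a)
}
```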