Ways to deal with data complexity
- Solve by brute force
- Refactor redundant, overly complex code
- Trade time complexity for space complexity
The basic operations of data processing
- Analyze what the code does to the data, step by step
- Based on that analysis of the operations, choose a reasonable data structure
The concept of data structures
How the data is organized
1. The linear list
- Singly linked list, doubly linked list, circular linked list
- The end of a singly linked list is a null pointer
- A linked list node stores a value and a pointer to the next node. A doubly linked list node stores pointers to both the previous node and the next node. A circular linked list forms a loop: the tail node points back to the head node
- Insertion and deletion in a linked list are O(1): just redirect the pointers around the node being deleted or inserted
- A linked list can only be queried by traversing it, so lookup is O(n)
- A key linked-list technique is the fast/slow pointer: the slow pointer advances one node at a time while the fast pointer advances two, so when the fast pointer reaches the end, the slow pointer is only halfway through
- To reverse a list, traverse it and repoint each node to its predecessor, producing a list whose head is the former tail (see the sketch below)
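Below is a minimal TypeScript sketch of the two techniques just described, pointer reversal and the fast/slow midpoint; the `ListNode` class and the function names are illustrative assumptions, not part of the original notes.

```typescript
// A minimal singly linked list node (illustrative; names are assumptions)
class ListNode {
  constructor(public value: number, public next: ListNode | null = null) {}
}

// Reverse a list by repointing each node to its predecessor
function reverse(head: ListNode | null): ListNode | null {
  let prev: ListNode | null = null;
  let curr = head;
  while (curr !== null) {
    const next = curr.next; // remember the rest of the list
    curr.next = prev;       // repoint to the predecessor
    prev = curr;
    curr = next;
  }
  return prev; // the former tail is the new head
}

// Fast/slow pointers: when fast reaches the end, slow is at the midpoint
function middle(head: ListNode | null): ListNode | null {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow!.next;      // one step at a time
    fast = fast.next.next;  // two steps at a time
  }
  return slow;
}
```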
2. The stack
A special linear list: an ordinary linear list with restrictions imposed on it
- Insertion and deletion can only happen at one end (LIFO: last in, first out)
- The head of the list is called the bottom of the stack, and the tail is called the top of the stack
- The insert operation is called push and the remove operation is called pop
- The sequential version of a stack is generally implemented with an array
- The first element of the array sits at the bottom of the stack and the last element of the array sits at the top of the stack
- Define a pointer top that marks the top of the stack. If the array has only one element, top = 0, so an empty stack is indicated by top being -1
- When a maximum capacity (stacksize) is defined, the top of the stack must stay smaller than stacksize
The delete operation is top - 1, the add operation is top + 1, and the lookup operation works the same way as in a linked list
A stack can also be represented as a linked list (a linked stack)
The difference between a linked stack and a sequential stack
- Similarities: the operations are very similar, the time complexities are the same, and both operate on the data at the current position of the top pointer
- In both, the only element that can be added or deleted is the node at the top of the stack
Stack summary:
A stack is a good choice when delete and insert operations are frequent and the data has a last-in, first-out relationship
Examples: browser forward and back navigation, parenthesis matching problems
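As a sketch of the parenthesis-matching example, here is a minimal TypeScript version that uses an array as the stack; the function name and the bracket table are assumptions for illustration.

```typescript
// A minimal parenthesis-matching sketch using an array as a stack
function isBalanced(s: string): boolean {
  const pairs: Record<string, string> = { ")": "(", "]": "[", "}": "{" };
  const stack: string[] = [];
  for (const ch of s) {
    if (ch === "(" || ch === "[" || ch === "{") {
      stack.push(ch); // an opening bracket goes on top of the stack
    } else if (ch in pairs) {
      // a closing bracket must match the most recent opening one (LIFO)
      if (stack.pop() !== pairs[ch]) return false;
    }
  }
  return stack.length === 0; // balanced only if nothing is left open
}

console.log(isBalanced("({[]})")); // true
console.log(isBalanced("([)]"));   // false
```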
3. The queue
A special linear structure
Insertion can only happen at the tail and deletion only at the head (first in, first out): the element that entered first is the first one pushed out
- A sequential queue is based on an array and is stored contiguously in memory
- A linked queue is a singly linked list whose nodes are connected by pointers into a queue, with elements entering at the tail and leaving at the head
Array-based queues
When the head pointer and the tail pointer coincide, the queue is empty. Adding a node moves the tail pointer back, and deleting moves the head pointer back, so eventually the tail pointer runs past the end of the array even though slots have been freed at the front. There are several ways to handle this:
- Shift the data forward and reset the head pointer, at the cost of O(n) time complexity
- Allocate enough memory that the index never goes out of bounds (wasteful)
- Treat the array as a ring: when the tail pointer crosses the boundary, it wraps around to the slots in front of the head pointer, forming a closed loop (the optimal solution; see the sketch below)
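A minimal TypeScript sketch of the circular-queue idea from the last bullet, assuming a fixed capacity; the class and field names (`head`, `tail`, `count`) are illustrative, not from the original notes.

```typescript
// A minimal circular queue sketch backed by a fixed-size array
class CircularQueue {
  private data: number[];
  private head = 0;  // index of the front element
  private tail = 0;  // index of the next free slot
  private count = 0; // number of stored elements

  constructor(private capacity: number) {
    this.data = new Array(capacity);
  }

  enqueue(value: number): boolean {
    if (this.count === this.capacity) return false; // full
    this.data[this.tail] = value;
    this.tail = (this.tail + 1) % this.capacity; // wrap around the ring
    this.count++;
    return true;
  }

  dequeue(): number | undefined {
    if (this.count === 0) return undefined; // empty
    const value = this.data[this.head];
    this.head = (this.head + 1) % this.capacity; // wrap around the ring
    this.count--;
    return value;
  }
}
```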
Linked queues
- A linked queue is a singly linked list with front and rear pointers. A linked queue usually adds a head node that stores no data and is used purely as a marker; the front pointer points to this head node
- To enqueue, attach a new node after the node that rear points to, then move rear to the new node
- To dequeue, remove the node right after the head node and point the head node at that node's successor. If the deletion leaves no node other than the head node, rear must be pointed back at the head node, otherwise rear becomes a dangling (wild) pointer
- The head node exists so that the front and rear pointers do not become dangling pointers when the queue is empty (see the sketch below)
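A minimal TypeScript sketch of a linked queue with a non-data head node, matching the description above; the `QNode` and `LinkedQueue` names are illustrative assumptions.

```typescript
// A minimal linked queue sketch with a non-data head node
class QNode {
  constructor(public value: number | null, public next: QNode | null = null) {}
}

class LinkedQueue {
  private head = new QNode(null); // marker node, stores no data
  private front = this.head;      // front always points to the head node
  private rear = this.head;       // rear points to the last node

  enqueue(value: number): void {
    this.rear.next = new QNode(value);
    this.rear = this.rear.next; // move rear to the new node
  }

  dequeue(): number | null {
    const first = this.front.next;   // the node right after the head node
    if (first === null) return null; // empty queue
    this.front.next = first.next;    // head node skips over the removed node
    if (this.rear === first) this.rear = this.front; // avoid a dangling rear
    return first.value;
  }
}
```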
Queue summary
Time complexity
- Insert and delete operations on circular queues and linked queues are both O(1)
- For lookup, a queue can only be traversed in full, requiring O(n) time complexity
Space performance
- A circular queue must have a fixed length, so there is some wasted storage space
- Linked queues are a little more flexible
Conclusion
- Circular queues are recommended when the maximum queue length can be determined
- If the queue length cannot be determined, consider using a linked queue
4. The array
A container that holds several data elements of the same type
- An array arranges elements of the same type in sequence; such an ordered collection of same-typed data elements is called an array
- Arrays are contiguous in memory, and the data in the array can be retrieved directly through the index
- An array index identifies one slot of the array's space
- Adds, deletes, and queries can all be performed through the index: record the location of the first element, then add the appropriate offset to reach any slot
Arrays store data sequentially in contiguous memory, which makes insertion and deletion difficult but lookup easy
Stacks and queues, by contrast, are constrained linear lists that can only add and delete data at specific positions, whereas arrays can add and delete data at arbitrary positions
Insert operations on arrays
- Adding a new element at the end of the array has no effect on the existing data; you can assign to the new slot directly, so the time complexity is O(1)
- Inserting somewhere in the middle of the data is different: the insertion affects every element after the insertion position, each of which must be moved back one slot
Deleting the last element has little effect on the array, but deleting a middle element forces all the elements after it to move forward
Array lookup operations
- Thanks to the index, position-based lookup is easy: the element at a given index can be found in O(1) time
- Value-based lookup is where arrays lose their advantage: to find, say, whether an element with value 9 exists and at what index, the whole array must be traversed, which takes O(n) time
Summary of array time complexity (see the sketch below)
- Insert: O(1) if the data is inserted at the end; O(n) if it is inserted somewhere in the middle
- Delete: deleting at a given position requires shifting the remaining elements, so it is O(n)
- Lookup: O(1) for a single lookup by index, but O(n) for a lookup by a condition on the values
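A minimal TypeScript sketch of middle-of-array insertion and deletion, written with explicit loops (rather than `splice`) to make the O(n) element moves visible; the helper names are assumptions.

```typescript
// Insert by shifting every element after the position one slot to the right
function insertAt(arr: number[], index: number, value: number): void {
  for (let i = arr.length; i > index; i--) {
    arr[i] = arr[i - 1];
  }
  arr[index] = value;
}

// Delete by shifting every element after the position one slot to the left
function deleteAt(arr: number[], index: number): void {
  for (let i = index; i < arr.length - 1; i++) {
    arr[i] = arr[i + 1];
  }
  arr.length -= 1; // drop the now-duplicated last slot
}

const nums = [1, 2, 4, 5];
insertAt(nums, 2, 3); // [1, 2, 3, 4, 5]
deleteAt(nums, 0);    // [2, 3, 4, 5]
console.log(nums);
```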
The difference between linked lists and arrays
- The length of a linked list is variable while the length of an array is fixed: without a dynamic array such as ArrayList, the size of an array always has to be estimated before use
- A linked list uses pointers when inserting elements, so it can make full use of scattered memory. An array is stored sequentially; to make full use of memory with sequential storage, you essentially have to avoid inserting and deleting data
5. The string
A string is a finite ordered sequence of n characters (n >= 0)
s = "BEIJING"
Double quotes are used to distinguish strings from other structures. The logical structure of a string differs from that of a linear list in that a string's elements are all characters
- An empty string is a string containing zero characters, such as s = ""
- A whitespace string is a string that contains only whitespace; it can contain one or more whitespace characters, for example, s = " "
- A substring is any run of consecutive characters within a string
- The string that contains a substring is called the main (parent) string; for example, with a = "BEI" and b = "BEIJING", a is a substring of b and b is the main string
Two strings are equal only if they are exactly the same; even strings containing exactly the same characters are not necessarily equal, e.g. A = "BEIJING", B = "JINGBEI"
The storage of strings can be divided into chained strings and sequential strings
- Chained storage keeps the string data in a linked list, with the string terminated by \0
- Sequential storage generally uses an array to store the contents of the string contiguously
Insertion and deletion on strings
- Insertion is O(n) in the middle and O(1) at the end (appending, i.e. concatenation)
- Deletion, like insertion, is O(1) at the end and O(n) in the middle
String lookup
Finding a character requires traversing the string, which takes O(n) time
Substring lookup within a parent string
- We need to determine whether the whole substring appears in the parent string: not just whether a single character appears, but whether a run of the same length as the substring matches it character for character
- Iterate over the parent string, comparing it with the substring character by character. If the characters match, keep comparing; if not, break out of the substring loop and restart the comparison from the next character of the parent string. Repeat until the parent string is exhausted. The time complexity is O(nm)
- To find the longest common substring of two strings, add one more step to the previous procedure: keep a record of the globally longest common run and its length. The final time complexity is O(n*m^2) (m squared)
function findStrMostShow(): void {
  const a = "goodgoogle";
  const b = "google";
  let max_len = 0,
    maxSubstr = "";
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < b.length; j++) {
      if (a[i] === b[j]) { // Found a position where the two strings share a character
        for (let m = i, n = j; m < a.length && n < b.length; m++, n++) { // While both sides keep matching, the run is a common substring
          if (a[m] !== b[n]) {
            break;
          }
          if (max_len < m - i) { // The new run is longer than the longest one recorded so far
            max_len = m - i;
            maxSubstr = a.substring(i, m + 1);
          }
        }
      }
    }
  }
  console.log(maxSubstr);
}
findStrMostShow();
String summary
- The logical structure of a string is very similar to that of a linear table, except that the string’s data object is constrained to a character set
- The basic operations on strings are quite different from those on linear tables:
- In linear tables, most use ‘single element’ as the operation object
- In strings, ‘whole of string’ is usually the operation object
- Insertion and deletion on strings resemble those on arrays and have the same complexity, but string lookup (substring matching) is much more complex
6. The tree
A data structure consisting of nodes and edges, without cycles
- A is the parent of B and C, so B and C are the children of A
- B and C are siblings of each other
- If A has no parent, then A is the root node
- If B or C has no children, it is called a leaf node
- Counting levels: the root node is level 1, the children of the root are level 2, and so on; the maximum level of the tree is its depth
Binary tree
In a binary tree, each node has at most two branches, that is, each node has at most two children, called the left child and the right child respectively
- Full binary tree: All nodes except leaf nodes have two children
- Complete binary tree: every level except possibly the last has the maximum number of nodes, and the leaves in the last level are packed to the left
Binary tree storage method
- Pointer-based chained storage: as in a linked list, each node has three fields, storing the data, a pointer to the left child, and a pointer to the right child
- Array-based storage: by convention the root node is stored at subscript 1, the left child of the root at 2, and the right child of the root at 3. In general, if a node X has subscript i, the left child of X is always stored at 2i and the right child at 2i + 1
- Linear data structures are ‘one-to-one’ relationships, where each piece of data connects to the one after it, e.g. linked lists, stacks, queues
- The tree structure is a ‘one-to-many’ relationship, where the parent above connects to the children below
There are three ways of traversing a tree: pre-order, in-order, and post-order traversal
The order here refers to when the parent node is visited
- Pre-order traversal visits the parent node first
- In-order traversal visits the parent node in the middle
- Post-order traversal visits the parent node last
- All three traversals are implemented through recursive calls (see the sketch below)
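A minimal TypeScript sketch of the three recursive traversals; the `TreeNode` class is an illustrative assumption.

```typescript
// A minimal binary tree node (illustrative)
class TreeNode {
  constructor(
    public value: number,
    public left: TreeNode | null = null,
    public right: TreeNode | null = null
  ) {}
}

function preOrder(node: TreeNode | null): void {
  if (node === null) return;
  console.log(node.value); // parent first
  preOrder(node.left);
  preOrder(node.right);
}

function inOrder(node: TreeNode | null): void {
  if (node === null) return;
  inOrder(node.left);
  console.log(node.value); // parent in the middle
  inOrder(node.right);
}

function postOrder(node: TreeNode | null): void {
  if (node === null) return;
  postOrder(node.left);
  postOrder(node.right);
  console.log(node.value); // parent last
}
```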
Basic operations on trees
- In binary tree traversal, each node is accessed once, and its time complexity is O(n).
- Once the location is found, all you need to do to add and delete data is establish a connection through a pointer
- For a binary tree without any special ordering constraints, the time complexity of the actual insert and delete operations is O(1). Searching the tree's data, as with a linked list, is O(n)
Binary search trees (also called binary sort trees) have the following characteristics:
- The value of every node in the left subtree of any node is less than the value of that node
- The value of every node in the right subtree of any node is greater than the value of that node
- In a binary search tree, two nodes with equal values are avoided as much as possible
- An in-order traversal of a binary search tree outputs an ordered sequence from smallest to largest
When performing a lookup in a binary search tree, the judgment works as follows:
- First check whether the current node equals the data being searched for; if so, return it
- If the current node is larger than the data being looked for, the search recurses down the left subtree toward the leaf nodes
- If the current node is smaller than the data being looked for, the search recurses down the right subtree toward the leaf nodes
- The time complexity of the lookup can thus be reduced to O(logn)
Insert operations in binary search trees
- Starting from the root node: if the inserted data is larger than the current node and the right child is not empty, the insertion continues in the right subtree, and so on, until an empty child position is found
- The time complexity of inserting into a binary search tree is O(logn); most of that time is spent traversing the data to find the insertion position, while the actual insert action is O(1)
Delete operations in binary search trees
- Case 1: If the node to be deleted is a leaf node, it can be deleted directly, and the parent's pointer to it is set to null
- Case 2: If the node to be deleted has only one child, simply replace the parent's pointer to it with a pointer to its child
- Case 3: If the node to be deleted has two children, there are two possible operations
- Find the largest node in its left subtree and use it to replace the node being deleted
- Find the smallest node in its right subtree and use it to replace the node being deleted
The lookup time of an ordinary binary tree is O(n); lookup in a binary search tree is O(logn) (see the sketch below)
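A minimal TypeScript sketch of binary search tree insert and lookup as described above (equal values simply go right here); the names are illustrative assumptions.

```typescript
// A minimal binary search tree sketch (insert and search only)
class BSTNode {
  left: BSTNode | null = null;
  right: BSTNode | null = null;
  constructor(public value: number) {}
}

function bstInsert(root: BSTNode | null, value: number): BSTNode {
  if (root === null) return new BSTNode(value); // found an empty position
  if (value < root.value) {
    root.left = bstInsert(root.left, value);    // smaller values go left
  } else {
    root.right = bstInsert(root.right, value);  // larger values go right
  }
  return root;
}

function bstSearch(root: BSTNode | null, value: number): boolean {
  if (root === null) return false;       // reached a leaf without a match
  if (root.value === value) return true; // equal: found it
  return value < root.value
    ? bstSearch(root.left, value)        // smaller: recurse down the left subtree
    : bstSearch(root.right, value);      // larger: recurse down the right subtree
}

let root: BSTNode | null = null;
for (const v of [5, 3, 8, 1, 4]) root = bstInsert(root, v);
console.log(bstSearch(root, 4)); // true
```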
7. A hash table
A hash table is a special kind of data structure that differs markedly from arrays, linked lists, and trees
The core idea of a hash table
In those structures, there is no relationship between where a record is stored and the record's key, so a search must compare keys one by one. The design of a hash table instead uses the idea of a function mapping: it ties the storage location of a record to the record's key, so a lookup does not require a sequence of comparisons against the keys already in the table
The mapping relation of “address = f(key)” is the core idea of hash table
In essence, hash conflicts can only be minimized, not completely avoided
Hash tables need to design reasonable hash functions and have a mechanism to deal with conflicts.
How do I design a hash function
- Direct addressing method: the hash function is a linear function of the key, for example H(key) = a * key + b, where a and b are preset constants
- Digit analysis method: assuming each key in the key set consists of s digits (k1, k2, ..., ks), pick several evenly distributed digits from the key to form the hash address
- Mid-square method: if certain digits of the keys repeat with high frequency, first compute the square of the key to amplify the differences, then take several middle digits of the square as the final storage address
- Folding method: if the key has many digits, split it into several parts of equal length and take their sum (discarding any carry) as the hash address
- Division-remainder method: preset a number p and take the key modulo p, i.e. address = key mod p
Ways to resolve hash conflicts:
- Open addressing: when a key conflicts with another key, use a probing technique to form a probe sequence in the hash table, follow the sequence slot by slot, and insert the record when an empty slot is found. The most common method is linear probing
- Chaining (separate chaining): store all records with identical hash values in one linear list (see the sketch below)
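A minimal TypeScript sketch of the chaining approach combined with the division-remainder hash function; the class name, the bucket count p = 13, and the method names are illustrative assumptions.

```typescript
// A minimal separate-chaining hash table sketch
class ChainedHashTable {
  private buckets: Array<Array<[number, string]>>;

  constructor(private p: number = 13) {
    this.buckets = Array.from({ length: p }, () => [] as Array<[number, string]>);
  }

  private hash(key: number): number {
    return key % this.p; // division-remainder method: address = key mod p
  }

  set(key: number, value: string): void {
    const bucket = this.buckets[this.hash(key)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value;    // key already present: overwrite (keys don't repeat)
    else bucket.push([key, value]); // conflict or not, append to the chain
  }

  get(key: number): string | undefined {
    const entry = this.buckets[this.hash(key)].find(([k]) => k === key);
    return entry?.[1];
  }
}

const table = new ChainedHashTable();
table.set(1, "a");
table.set(14, "b"); // 14 mod 13 === 1: collides with key 1, chained in the same bucket
console.log(table.get(14)); // "b"
```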
Advantages of hash tables:
It provides very fast insert, delete, and search operations: no matter how much data there is, inserting and deleting a value takes close to constant time. For search, a hash table is faster than a tree and can essentially find the desired element instantly
Disadvantages of hash tables:
Hash table data has no concept of order, so the elements cannot be iterated in a fixed order (such as from smallest to largest). Hash table keys are not allowed to repeat
8. Recursion
Recursion refers to a method that calls itself during its execution
- A recursive problem must be decomposable into several smaller subproblems that have the same form as the original problem, and those subproblems must be solvable by the same approach
- Solving a problem recursively disassembles the original problem from large to small, with a clear end point (the critical point); from that critical point, the small answers are passed back up the same way, and the original problem is solved
- The basic idea of recursion is to solve a large problem by transforming it into small, identical subproblems
- When implementing the function, the solution to the big problem is the same as the solution to the small problem
- The function that solves the problem must have an explicit termination condition, otherwise there will be infinite recursion
- The implementation of recursion consists of two parts:
- Recursive subject
- Termination conditions
The mathematical model of recursion is mathematical induction
Recursive algorithm idea
- When faced with a large scale problem, how do you break it down into several small scale problems
- When the problem is decomposed through several rounds, how to define the final result, namely the termination condition
When a problem has the following two conditions, it can be solved recursively
- It can be disassembled into subproblems that are solved with exactly the same idea and differ only in data scale
- Existence of termination condition
The key to writing recursive code is to write recursive formulas and find termination conditions
- Find the rule that big problems break down into small problems, and write the recursive formula based on this
- Finding the termination condition means identifying the simplest case, for which the answer can be written down directly
- Translate recursive formulas and termination conditions into actual code
// The Tower of Hanoi
function main() {
  let x = "x",
    y = "y",
    z = "z";
  hanio(3, x, y, z);
}
function hanio(n: number, x: string, y: string, z: string) {
  if (n === 1) {
    console.log(`move ${x} => ${z}`); // base case: a single disk moves directly
  } else {
    hanio(n - 1, x, z, y); // use z to move n-1 disks from x to y
    console.log(`move ${x} => ${z}`);
    hanio(n - 1, y, x, z); // use x to move n-1 disks from y to z
  }
}
main();
9. Divide and conquer
The core idea is “divide and conquer”: divide a large-scale, difficult problem into several small-scale, lower-difficulty problems
The same solution can be used to solve these sub-problems recursively, and then the solution of each sub-problem is combined to get the solution of the original problem
When the divide-and-conquer method is needed, the original problem generally needs to have the following characteristics:
- The difficulty is decreasing, that is, the difficulty of solving the original problem decreases as the size of the data shrinks
- Problems can be divided, the original problem can be decomposed into a number of smaller problems of the same type
- The solution can be combined, and the solution of the original problem can be combined by using the solutions of all the sub-problems
- Independent of each other, each subproblem is independent of each other, the solution of one subproblem does not affect the other subproblem
Binary search has a precondition: the data source being searched must be an ordered list
The idea of binary search is as follows:
- Select a flag to divide the set L into two subsets; the median (middle element) is usually used
- Check whether the flag L(i) equals the value des being searched for; if they are equal, return the result directly
- If not, compare L(i) with des
- Based on that comparison, continue searching to the left or to the right. If the search space in that direction shrinks to 0, the value is not found (see the sketch below)
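A minimal TypeScript sketch of iterative binary search following the steps above; the variable names (`des` from the description, `low`, `high` as assumptions) are illustrative.

```typescript
// A minimal iterative binary search sketch over a sorted array
function binarySearch(arr: number[], des: number): number {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // the "flag" dividing the set
    if (arr[mid] === des) return mid;  // equal: found, return the index
    if (arr[mid] < des) low = mid + 1; // target is larger: search the right half
    else high = mid - 1;               // target is smaller: search the left half
  }
  return -1; // search space shrank to nothing: not found
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
console.log(binarySearch([1, 3, 5, 7, 9], 4)); // -1
```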
Binary search experience and rules of thumb
- The time complexity of binary search is O(logn), which is also a common feature of divide-and-conquer
- The number of binary search iterations is not fixed; the loop generally exits when a certain condition is reached, so most implementations are structured as a while loop with a break
- The input to binary search must be ordered
Divide-and-conquer summary
Divide-and-conquer is often used in massive data processing. When facing such problems, check:
- Whether the data of the original problem is in order
- Whether the expected time complexity has logn terms
- Whether the answers to the subproblems can be combined into the answer to the original problem
10. Sorting algorithms
Sorting is the process of turning an unordered set of data into an ordered one. By default, the order is from smallest to largest
To measure the merits of a sorting algorithm, consider:
- Time complexity – includes best time complexity, worst time complexity, and average time complexity
- Space complexity – if the space complexity is O(1), the algorithm is called an in-place sort
- Stability – Whether the order of equal data objects can be guaranteed to remain the same after sorting
Bubble sort
Starting from the first element, compare adjacent elements; if the former is greater than the latter, swap them, moving the larger element back. Repeat passes until no swap occurs; each pass bubbles the largest remaining element to the end (see the sketch below)
Bubble sort performance
- The best time complexity of bubble sort is O(n), that is, when the input array is already in order, only one pass of comparisons is needed
- The worst time complexity of bubble sort is O(n*n), that is, when the array happens to be in complete reverse order, each pass needs n comparisons and must be repeated n times
- When the input data is in random order, the average time complexity is O(n*n)
- The space complexity is O(1)
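A minimal TypeScript sketch of bubble sort with the early-exit check that gives the O(n) best case; the names are illustrative assumptions.

```typescript
// A minimal bubble sort sketch (stops as soon as a full pass makes no swap)
function bubbleSort(arr: number[]): void {
  for (let i = 0; i < arr.length - 1; i++) {
    let swapped = false;
    // each pass bubbles the largest remaining element to the end
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap the adjacent pair
        swapped = true;
      }
    }
    if (!swapped) break; // already ordered: best case O(n)
  }
}

const data = [5, 1, 4, 2, 8];
bubbleSort(data);
console.log(data); // [1, 2, 4, 5, 8]
```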
Insertion sort
Select the unsorted element and insert it into the appropriate position of the sorted interval until the unsorted interval is empty,
- The best time complexity for insertion sort is O(n), that is, when the array happens to be perfectly ordered, the correct position interval can be found only once at a time
- The worst time complexity of insertion sort is O(n*n), that is, when the array happens to be in complete reverse order, each element takes n comparisons to find its correct position
- The average time complexity of insertion sort is O(n*n): since the average time complexity of inserting one element into an array is O(n), insertion sort can be understood as repeating that insertion n times
- The space complexity of insertion sort is O(1)
function insertSort() {
  let arr = [2, 3, 5, 6, 1, 23, 6, 78, 34];
  for (let i = 1; i < arr.length; i++) {
    let temp = arr[i]; // the element to insert into the sorted interval
    let j = i - 1;
    for (; j >= 0; j--) {
      if (arr[j] > temp) {
        arr[j + 1] = arr[j]; // shift larger sorted elements one slot right
      } else {
        break;
      }
    }
    arr[j + 1] = temp; // drop the element into its correct position
  }
  console.log(arr);
}
insertSort();
Similarities and differences between insert sort and bubble sort
Similarities: the average time complexity of both insertion sort and bubble sort is O(n*n), both are stable sorting algorithms, and both are in-place sorts
Differences:
- A bubble sort swap moves data dynamically in each round, so each swap takes three assignments
- In insertion sort, the data to be inserted is fixed in each round, so each move takes only one assignment
Merge sort
Merge sort splits the data in two again and again until the pieces cannot be split further, then merges the sorted pieces back together pairwise.
function main() {
  let arr = [59, 38, 64, 32, 12, 33, 55, 31, 11, 59];
  console.log("Raw data", arr);
  coustomMergeSort(arr, 0, arr.length - 1);
  console.log("Merge sort", arr);
}
function coustomMergeSort(arr: any, start: number, end: number) {
  if (start < end) {
    let mid = Math.floor((start + end) / 2);
    coustomMergeSort(arr, start, mid);       // sort the left half
    coustomMergeSort(arr, mid + 1, end);     // sort the right half
    coustomDoubleSort(arr, start, mid, end); // merge the two sorted halves
  }
}
function coustomDoubleSort(a: any, left: any, mid: any, right: any) {
  let temp: Array<any> = [];
  let p1 = left,
    p2 = mid + 1,
    k = left;
  // take the smaller head of the two halves until one half is exhausted
  while (p1 <= mid && p2 <= right) {
    if (a[p1] <= a[p2]) {
      temp[k++] = a[p1++];
    } else {
      temp[k++] = a[p2++];
    }
  }
  while (p1 <= mid) {
    temp[k++] = a[p1++]; // copy the rest of the left half
  }
  while (p2 <= right) {
    temp[k++] = a[p2++]; // copy the rest of the right half
  }
  for (let i = left; i <= right; i++) {
    a[i] = temp[i]; // write the merged run back into the array
  }
}
main();
Merge sort performance
- Merge sort splits the data in half repeatedly, giving logn levels of iteration
- Each level requires merging two ordered arrays, which can be done in O(n) time
- Merge sort is the product of these two: O(nlogn)
- The best, worst, and average time complexity of merge sort are all O(nlogn)
- The space complexity of merge sort is O(n)
- It is a stable sort
Quick sort
The principle of quicksort is also divide-and-conquer. Each iteration will select an element in the array as the partition point, put the data smaller than it on the left, and put the data larger than it on the right, and then use the idea of divide-and-conquer to operate on the left and right twice, until each interval is reduced to 1
Quicksort performance
- In the best case for quicksort, if every partition picks the median and splits the array in two, the time complexity is O(nlogn)
- In the worst case, that is, if the maximum or minimum value is selected at every partition so the two halves are never balanced, n partitioning operations are required and each partition scans n/2 elements on average; the time complexity then degrades to O(n*n)
- Quicksort is O(nlogn) on average
- The space complexity of quicksort is O(1) (see the sketch below)
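A minimal TypeScript quicksort sketch; it uses the Lomuto partition scheme with the last element as the partition point, which is one common choice and an assumption here, not necessarily the scheme the notes had in mind.

```typescript
// A minimal in-place quicksort sketch (Lomuto partition)
function quickSort(arr: number[], low = 0, high = arr.length - 1): void {
  if (low >= high) return;   // interval of size 1 or less: done
  const pivot = arr[high];   // partition point (here simply the last element)
  let i = low;
  for (let j = low; j < high; j++) {
    if (arr[j] < pivot) {
      [arr[i], arr[j]] = [arr[j], arr[i]]; // smaller values swap to the left
      i++;
    }
  }
  [arr[i], arr[high]] = [arr[high], arr[i]]; // pivot lands between the halves
  quickSort(arr, low, i - 1);  // divide and conquer: left part
  quickSort(arr, i + 1, high); // divide and conquer: right part
}

const sample = [3, 7, 1, 9, 4, 2];
quickSort(sample);
console.log(sample); // [1, 2, 3, 4, 7, 9]
```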
Performance analysis of sorting algorithm
- The most brute-force way to sort takes O(n*n)
- Merge sort gets the time down to O(nlogn)
- Merge sort requires additional temporary space:
- To guarantee stability
- Because during the merge, inserting elements into the array would cause the elements to shift
- To avoid the time loss that shifting causes, quicksort is used: the data-shifting problem caused by inserting elements is solved through swap operations. However, because the data is swapped dynamically during partitioning, the resulting sort is not stable
Conclusion
- When sorting small data, you can select the O(n*n) sorting algorithm
- When sorting large data, you can select the sorting algorithm whose time complexity is O(n*logn)
- The space complexity of merge sort is O(n), which also consumes a lot of space resources
- The average time complexity of quicksort is O(nlogn), and the worst time complexity may approach O(n*n).
Dynamic programming algorithm
- The core idea of dynamic programming is to split a big problem into small problems to solve, approaching the optimal solution step by step
- Dynamic programming is similar to divide-and-conquer: the basic idea is to decompose the problem to be solved into several subproblems, solve the subproblems first, and then obtain the solution of the original problem from the solutions of the subproblems
- Unlike divide-and-conquer, the problems suited to dynamic programming usually have subproblems that are not independent after decomposition (i.e. the solution of the next stage builds on the solution of the previous stage)
- Dynamic programming can advance step by step by filling in a table, arriving at the optimal solution
| Item (value, weight) | j=0 | j=1  | j=2  | j=3  | j=4         |
| -------------------- | --- | ---- | ---- | ---- | ----------- |
| Notebook (1500, w=1) | 0   | 1500 | 1500 | 1500 | 1500        |
| Guitar (3000, w=4)   | 0   | 1500 | 1500 | 1500 | 3000        |
| Computer (2000, w=3) | 0   | 1500 | 1500 | 2000 | 1500 + 2000 |
The main idea of solving the knapsack problem with dynamic programming: each time we traverse to the i-th item, we decide whether the item should be put into the knapsack according to w[i] and v[i]. That is, for the given n items, let v[i] and w[i] be the value and weight of the i-th item respectively, and C be the capacity of the knapsack. Let v[i][j] represent the maximum value that can be loaded into a knapsack of capacity j using the first i items.
(1) v[i][0] = v[0][j] = 0 // fill the first row and the first column with 0
(2) When w[i] > j: v[i][j] = v[i-1][j] // when the weight of the item to be added exceeds the current knapsack capacity, reuse the loading strategy of the cell above
(3) When j >= w[i]: v[i][j] = max{ v[i-1][j], v[i] + v[i-1][j-w[i]] }
// when the item to be added fits into the current capacity, fill the cell as follows:
// v[i-1][j]: the maximum value of the cell above (without the i-th item)
// v[i]: the value of the current item
// v[i-1][j-w[i]]: the maximum value of loading the first i-1 items into the remaining capacity j-w[i]
// Dynamic programming: the knapsack problem
function getMax() {
  let w: Array<number> = [1, 4, 3];            // weights of the items
  let val: Array<number> = [1500, 3000, 2000]; // values of the items
  let m = 4;          // capacity of the knapsack
  let n = val.length; // number of items
  let v: Array<any> = new Array(n + 1);    // v[i][j]: best value with the first i items at capacity j
  let path: Array<any> = new Array(n + 1); // records which items were put in
  for (let i = 0; i < v.length; i++) {
    v[i] = new Array(m + 1);
    path[i] = new Array(m + 1);
  }
  // fill the first row and the first column with 0
  for (let i = 0; i < v.length; i++) {
    for (let j = 0; j < v[0].length; j++) {
      v[i][j] = 0;
    }
  }
  for (let i = 1; i < v.length; i++) {
    for (let j = 1; j < v[0].length; j++) {
      if (w[i - 1] > j) {
        // the current item is too heavy for capacity j: reuse the row above
        v[i][j] = v[i - 1][j];
      } else {
        // otherwise compare leaving the item out with putting it in
        // (its value plus the best value of the remaining capacity)
        // v[i][j] = Math.max(v[i - 1][j], val[i - 1] + v[i - 1][j - w[i - 1]]);
        if (v[i - 1][j] < val[i - 1] + v[i - 1][j - w[i - 1]]) {
          v[i][j] = val[i - 1] + v[i - 1][j - w[i - 1]];
          path[i][j] = 1; // mark that item i went into the knapsack here
        } else {
          v[i][j] = v[i - 1][j];
        }
      }
    }
  }
  console.log(v);
  console.log("==============");
  // walk the path table backwards to list the chosen items
  let i = path.length - 1;
  let j = path[0].length - 1;
  while (j > 0 && i > 0) {
    if (path[i][j] == 1) {
      console.log(`Item ${i} is put into the backpack`);
      j -= w[i - 1];
    }
    i--;
  }
}
getMax();
KMP string matching algorithm
KMP is a classic algorithm for determining whether a pattern string ever appears in a text string and, if so, where it first appears
The KMP algorithm reuses the information from previous comparisons: a next array stores, for each position in the pattern, the length of the longest prefix that is also a suffix. On every mismatch, the next array tells the algorithm which earlier position is already matched, saving a great deal of computation
function kmpMain() {
  let s1 = "BBC ABCDAB ABCDABCDABDE";
  let s2 = "ABCDABD";
  let next: Array<number> = kmpNext(s2);
  console.log(next);
  let index = kmpSearch(s1, s2, next);
  console.log(index);
}
function kmpSearch(s1: string, s2: string, next: Array<number>): number {
  let flag = -1;
  for (let i = 0, j = 0; i < s1.length; i++) {
    // when s1[i] !== s2[j], fall back through next to adjust j
    while (j > 0 && s1[i] !== s2[j]) {
      j = next[j - 1];
    }
    if (s1[i] === s2[j]) {
      j++;
    }
    if (j === s2.length) {
      flag = i - j + 1; // the whole pattern matched: record where it starts
      break;            // stop at the first occurrence
    }
  }
  return flag;
}
// Build the partial match table
function kmpNext(dest: string): Array<number> {
  let next: Array<any> = new Array(dest.length);
  next[0] = 0; // the partial match value is 0 when the string length is 1
  for (let i = 1, j = 0; i < dest.length; i++) {
    // The core of the KMP algorithm:
    // when dest[i] !== dest[j], get the new j from next[j-1],
    // stepping back through the partial match table until a match survives
    while (j > 0 && dest[i] !== dest[j]) {
      j = next[j - 1];
    }
    // when dest[i] === dest[j], the matched prefix grows by one
    if (dest[i] === dest[j]) {
      j++;
    }
    next[i] = j;
  }
  return next;
}
kmpMain();
Greedy algorithm
- A greedy algorithm is one that makes the best or optimal (i.e. most favorable) choice at every step of solving a problem, in the hope that this leads to the best or optimal overall result
- The results produced by greedy algorithms are not necessarily optimal (though sometimes they are), but they are approximately (close to) optimal (see the sketch below)
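A minimal TypeScript sketch of the greedy idea using the classic coin-change example; the scenario and names are illustrative assumptions. Note how it matches the caveat above: with a non-canonical coin system the greedy result is only near-optimal.

```typescript
// A minimal greedy sketch: making change with as few coins as possible
function makeChange(amount: number, coins: number[]): number[] {
  const result: number[] = [];
  // greedy choice: always take the largest coin that still fits
  for (const coin of [...coins].sort((a, b) => b - a)) {
    while (amount >= coin) {
      result.push(coin);
      amount -= coin;
    }
  }
  return result; // optimal for canonical coin systems, only near-optimal otherwise
}

console.log(makeChange(63, [25, 10, 5, 1])); // [25, 25, 10, 1, 1, 1]
```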