This is the 22nd day of my participation in the Gwen Challenge.

Data structures and algorithms (DSA) are often seen as a daunting topic, but that is a misconception. They are the basis for some of the most innovative concepts in technology and are critical to the career advancement of both job/internship applicants and experienced programmers. Mastering DSA means being able to apply computational and algorithmic thinking to solve previously unseen problems and contribute to the value of any tech company (including your own!). By understanding them, you can also make your code more maintainable, extensible, and efficient.

With that said, I decided to put together what I know about data structures and algorithms. This article aims to make DSA seem less intimidating than people think. It covers 15 of the most useful data structures and 15 of the most important algorithms, to help you do well in your studies and interviews and improve your programming skills. I will keep studying these data structures and algorithms in more detail later.

I. Data Structures

1. Arrays

Arrays are the simplest and most common data structures. They feature easy access to elements by index (location).

What are they for?

Imagine a row of theater seats. Each seat is assigned a position (from left to right), so each viewer is assigned the number of the seat they will sit in. This is an array. Extend the problem to the entire theater (rows and columns of seats) and you will have a two-dimensional array (a matrix)!

Features

  • The values of the elements are placed in order and accessed by an index from 0 to the length of the array minus 1;
  • Arrays are contiguous chunks of memory;
  • They usually consist of elements of the same type (depending on the programming language);
  • Accessing an element by index and appending at the end are fast (O(1)); searching and deleting are not done in O(1).

2. Linked Lists

Linked lists are linear data structures, just like arrays. The main difference between a linked list and an array is that the list's elements are not stored in contiguous memory locations. A linked list consists of nodes: entities that store the value of the current element and a reference (address) to the next element. In this way, elements are linked by pointers.

What are they for?

A relatable application of linked lists is the back and forward page navigation in browsers. A doubly linked list is the perfect data structure for storing the pages a user visits, so that they can be revisited in either direction.

Features

  • They come in three types: singly linked, doubly linked, and circular;
  • Elements are not stored in contiguous chunks of memory;
  • Excellent memory management (using pointers means dynamic memory usage);
  • Inserts and deletes are fast; accessing and searching elements is done in linear time.

3. Stack

A stack is an abstract data type that formalizes the concept of a restricted-access collection. The restriction follows the LIFO (last in, first out) rule: the last element added to the stack is the first element you remove from it.

The stack can be implemented using arrays or linked lists.

What are they for?

The most common real-life example is stacking plates on top of each other in the cafeteria. The top plate is removed first. The bottom plate is the one that stays on the stack the longest.

The stack is most useful when you need to obtain the reverse order of given elements: just push them all onto the stack, then pop them off one by one.

Another interesting application is the valid parentheses problem: given a string of brackets, you can use a stack to check whether they are balanced.
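As an illustration, here is a minimal Python sketch of that check (the function name and test strings are my own):

```python
def is_balanced(s: str) -> bool:
    """Check whether every bracket in s is properly matched, using a stack."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)          # opening bracket: push it
        elif ch in pairs:
            # closing bracket: the top of the stack must be its partner
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                  # leftovers mean unmatched openings

print(is_balanced("({[]})"))  # True
print(is_balanced("(]"))      # False
```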

Features

  • You can access only the last element (the top) at a time;
  • One disadvantage is that once you pop elements off the top to reach other elements, their values are lost from the stack's memory;
  • Accessing any other element is done in linear time; everything else is O(1).

4. Queues

A queue is another restricted-access collection, like the stack discussed earlier. The main difference is that a queue is organized FIFO (first in, first out): the first element inserted into the queue is the first element removed. Queues can be implemented using a fixed-length array, a circular array, or a linked list.

What are they for?

The best use of this abstract data type (ADT) is, of course, to simulate queues in real life. For example, in a call center application, queues are used to hold customers waiting to get help from a consultant — customers should get help in the order they call.

A special and very important type of queue is the priority queue. Elements are enqueued according to an associated "priority": the highest-priority element is dequeued first. This ADT is essential in many graph algorithms (Dijkstra's, BFS, Prim's, Huffman coding; more on them later). It is implemented using a heap.

Another special type of queue is the deque (double-ended queue, punningly pronounced "deck"). Elements can be inserted into or removed from both ends of the queue.

Features

  • We can directly access only the "oldest" element introduced;
  • Searching for an element removes all accessed elements from the queue's memory;
  • Popping/pushing an element or fetching the front of the queue is done in constant time; searching is linear.

5. Maps and Hash Tables

A map is an abstract data type that contains a collection of keys and a collection of values, where each key has a value associated with it.

A hash table is a special type of mapping. It uses a hash function to generate a hash code that is placed into an array of buckets or slots: the keys are hashed, and the resulting hash indicates where the value is stored.

The most common hash function (among many) is taking the key modulo a constant. For example, if the constant is 6, the hash of key x is x % 6.

Ideally, a hash function assigns each key to a unique bucket, but most designs employ imperfect functions that can cause collisions between keys that hash to the same value. Collisions are always accommodated in one way or another (for example, by chaining).
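As a toy illustration (class and method names are my own), here is a minimal Python sketch of a hash table that uses the modulo hash from above and resolves collisions by chaining:

```python
class ChainedHashTable:
    """Toy hash table using h(key) = key % size and chaining per bucket."""

    def __init__(self, size: int = 6):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def put(self, key: int, value) -> None:
        bucket = self.buckets[key % self.size]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # empty slot or collision: chain

    def get(self, key: int):
        for k, v in self.buckets[key % self.size]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.put(7, "seven")
table.put(13, "thirteen")   # 13 % 6 == 7 % 6 == 1: a collision, chained
print(table.get(13))        # thirteen
```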

What are they for?

The best-known map is a language dictionary: each word in the language has a definition assigned to it. It can be implemented as an ordered map (with keys arranged alphabetically).

The address book is also a Map. Each name has a phone number assigned to it.

Another useful application is the normalization of values. Suppose we want to assign an index from 0 to 1439 to every minute of a day (24 hours = 1440 minutes). The hash function would be h(x) = x.hours * 60 + x.minutes.

Features

  • Keys are unique (no duplicates);
  • Collision resistance: it should be hard to find two different inputs with the same hash;
  • Preimage resistance: given a hash value h, it should be hard to find a key x such that H(x) = h;
  • Second preimage resistance: given a key and its hash, it should be hard to find another key with the same hash.

Terms:

  • "Map": Java, C++;
  • "Dictionary": Python, JavaScript, .NET;
  • "Associative array": PHP.

Because maps are implemented with self-balancing red-black trees (explained later in this article), all their operations are done in O(log n); hash table operations, by contrast, run in constant time on average.

6. Graphs

A graph is a nonlinear data structure representing a pair of two sets, G = {V, E}, where V is the set of vertices (nodes) and E is the set of edges (arrows). Nodes are values connected by edges: lines that describe the relationship (sometimes associated with a cost/distance) between two nodes.

There are two main types of graphs: directed and undirected. In an undirected graph, an edge (x, y) is available in both directions: (x, y) and (y, x). In a directed graph, edges (x, y) are called arrows, and the direction is given by the order of the vertices in their name: the arrow (x, y) differs from the arrow (y, x).

What are they for?

Graphs are the basis of all kinds of networks: social networks (such as Weixin, CSDN, Weibo) and even city street networks. Each user of a social media platform is a structure that contains all of his or her personal data — it represents a node in the network. A friend relationship on Weixin is an edge in an undirected graph (because it’s reciprocal), whereas on CSDN or Weibo, the relationship between an account and its followers/following account is an arrow in a directed graph (non-reciprocal).

Features

Graph theory is a vast field, but we will highlight some of the best-known concepts:

  • The degree of a node in an undirected graph is the number of edges incident to it;
  • The in/out degree of a node in a directed graph is the number of arrows pointing to/from that node;
  • A chain from node x to node y is a sequence of adjacent edges, with x as its left end and y as its right end;
  • A cycle is a chain where x = y; graphs can be cyclic/acyclic; a graph is connected if there is a chain between any two nodes of V;
  • Graphs can be traversed and processed using breadth-first search (BFS) or depth-first search (DFS), both in O(|V| + |E|), where |S| is the cardinality of the set S; check out the links below for additional basics of graph theory.

7. Trees

A tree is an undirected graph that is minimally connected (removing any edge disconnects it) and maximally acyclic (adding any edge creates a cycle). So any acyclic connected undirected graph is a tree, but for simplicity we will work with rooted trees.

The root is a fixed node that determines the direction of the edges in the tree; it is where everything "starts". Leaves are the terminal nodes of the tree: that is where everything "ends". A child of a vertex is an incident vertex below it; a vertex can have multiple children. The parent of a vertex is the incident vertex above it, and it is unique.

What are they for?

We use trees whenever we need to depict a hierarchy. Our own family tree is a perfect example: your oldest ancestor is the root of the tree, and the youngest generation represents the set of leaves.

Trees can also represent the relationship between superiors and subordinates in the company you work for. That way you can find out who is your superior and who you should be managing.

Features

  • The root has no parent;
  • Leaves have no children;
  • The length of the chain between the root and a node x gives the level of x;
  • The height of a tree is its highest level;
  • The most commonly used tree traversal method is DFS, which runs in O(|V| + |E|), but we can also use BFS; the order in which DFS visits the nodes of any graph forms a DFS tree that indicates when we visited each node.

8. Binary Trees and Binary Search Trees

A binary tree is a special type of tree in which each vertex can have at most two children. In a strict binary tree, every node except the leaves has exactly two children. A perfect binary tree with n levels has all 2^n - 1 possible nodes.

A binary search tree is a binary tree in which the node values belong to a totally ordered set: the value of any node is greater than all values in its left subtree and less than all values in its right subtree.

What are they for?

An important application of binary trees (BT) is the representation and evaluation of logical and arithmetic expressions. Each expression can be decomposed into variables/constants and operators; written in postfix form, this is known as reverse Polish notation (RPN). Such expressions can form a binary tree in which the internal nodes are operators and the leaves are variables/constants: this is called an abstract syntax tree (AST).

BSTs are frequently used because they make searching by key fast. AVL trees, red-black trees, and ordered sets and maps are implemented using BSTs.

Features

BSTs support three types of DFS traversal:

  • Pre-order (root, left, right);
  • In-order (left, root, right);
  • Post-order (left, right, root); all completed in O(n) time;
  • In-order traversal gives us all the nodes in the tree in ascending order;
  • The left-most node is the minimum of the BST, and the right-most node is the maximum;
  • Note that RPN is the post-order traversal of the AST;
  • BSTs keep the search advantage of a sorted array while allowing logarithmic insertion: all operations run in O(log n) time.
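To make this concrete, here is a minimal Python sketch of my own (an unbalanced BST) showing insertion and the in-order traversal that yields keys in ascending order:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Insert a value into the BST; returns the (possibly new) root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)   # smaller values go left
    else:
        root.right = insert(root.right, value) # larger values go right
    return root

def inorder(root):
    """In-order traversal (left, root, right) yields values in ascending order."""
    if root:
        yield from inorder(root.left)
        yield root.value
        yield from inorder(root.right)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
print(list(inorder(root)))  # [1, 3, 6, 8, 10]
```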

9. AVL Trees (Adelson-Velsky and Landis Trees)

AVL, red-black, and splay trees are all self-balancing binary search trees; the difference lies in how they keep their height logarithmic under insertions and deletions.

An AVL tree rebalances itself after each insertion/deletion so that the absolute difference between the heights of the left and right subtrees of any node is at most 1. AVL is named after its inventors: Adelson-Velsky and Landis.

In red-black trees, each node stores an additional bit representing the color, which is used to ensure balance after each insert/delete operation.

In a splay tree, recently accessed nodes are moved close to the root so that they can be accessed again quickly; the amortized time complexity of any operation is still O(log n).

What are they for?

AVL trees seem to be the data structure best suited to databases, where lookups dominate.

RBTs (red-black trees) are used to organize comparable pieces of data, such as text fragments or numbers. Since Java 8, HashMap buckets switch to an RBT once they grow large. Data structures in computational geometry and functional programming are also built with RBTs.

In Windows NT (in virtual memory, networking, and file system code), splay trees are used for caching, memory allocators, garbage collectors, data compression, and ropes (a replacement for strings when handling long texts).

Features

  • The amortized time complexity of any operation in any self-balancing BST is O(log n);
  • In the worst case, the maximum height of an AVL tree is 1.44 * log2(n) (why? hint: consider the case where all levels of the AVL tree are full except the last, which holds only one element);
  • AVL trees are the fastest at searching for elements in practice, but rotating subtrees to rebalance is costly;
  • Meanwhile, RBTs offer faster insertion and removal because they need fewer rotations;
  • Splay trees do not need to store any bookkeeping data.

10. Heaps

A min-heap is a binary tree in which each node's value is greater than or equal to its parent's value: val[par[x]] <= val[x] for every node x of the heap, where val[x] is the value of x and par[x] is its parent.

There is also a maximum heap that implements the opposite relationship.

A binary heap is a complete binary tree (all of its levels are completely filled, except possibly the last).

What are they for?

As we discussed a few days ago, priority queues can be implemented efficiently with a binary heap, because it supports insert(), delete(), extractMax(), and decreaseKey() in O(log n) time. Thus, heaps are also essential in graph algorithms (because of priority queues).

Any time you need quick access to the largest/smallest items, the heap is the best choice.

The heap is also the basis of the heapsort algorithm.

Features

  • It is always balanced: whenever we remove/insert an element, we just "sift down"/"sift up" it until it reaches its right place;
  • The parent of node k > 1 is [k/2] (where [x] is the integer part of x), and its children are 2k and 2k+1;
  • It is an efficient alternative for priority queues to an ordered map (std::map in C++) or any other ordered structure that gives easy access to the smallest/largest element;
  • The root has the highest priority, so accessing it takes O(1); insertion/deletion is done in O(log n); building a heap is done in O(n); heapsort runs in O(n log n).
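As a quick illustration, Python's standard heapq module (a binary min-heap over a plain list) can serve as the priority queue described above; the task data here is my own toy example:

```python
import heapq

# A list of (priority, task) tuples; lower number = higher priority.
tasks = [(2, "write report"), (1, "fix outage"), (3, "refactor")]
heapq.heapify(tasks)                         # O(n) heap construction

heapq.heappush(tasks, (0, "page on-call"))   # O(log n) insert
while tasks:
    priority, task = heapq.heappop(tasks)    # O(log n) removal of the minimum
    print(priority, task)
# 0 page on-call / 1 fix outage / 2 write report / 3 refactor
```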

11. Tries (Dictionary Trees)

A trie is an efficient data structure for information retrieval. Also known as a prefix tree, it is a search tree that allows insertion and search in O(L) time, where L is the length of the key.

If we stored the keys in a well-balanced BST, a lookup would take time proportional to L * log n, where n is the number of keys in the tree. A trie is thus a faster data structure (O(L)) than a BST, at the cost of higher storage requirements.

What are they for?

Tries are primarily used to store strings and their values. One of the coolest applications is autocomplete and autosuggest in the Google search bar. A trie is the best choice because it is the fastest option here: a faster search is worth more than the storage saved by not using a trie.

You can also use a trie for orthographic autocorrection of typed words, by looking the word up in a dictionary or among other occurrences of it in the same text.

Features

  • It stores key-value associations; a key is usually a word or its prefix, but it can be any ordered list;
  • The root holds the empty string as its key;
  • The length of a node's value and its children's values differ by 1, so the root's children store values of length 1; consequently, a node on the k-th level holds a value of length k;
  • As we said, the time complexity of insert/search is O(L), where L is the length of the key, which is much faster than a BST's and comparable to a hash table's;

Space complexity is actually a disadvantage: O(ALPHABET_SIZE * L * n).
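Here is a minimal Python sketch of a trie (class and method names are my own) with O(L) insert and search:

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # character -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:     # O(L), L = len(word)
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word: str) -> bool:     # O(L)
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word

trie = Trie()
trie.insert("car")
trie.insert("card")
print(trie.search("car"))   # True
print(trie.search("ca"))    # False (only a prefix, not an inserted word)
```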

12. Segment Trees

A segment tree is a complete binary tree that can answer range queries efficiently while still allowing its elements to be modified easily.

Each element at index i of a given array corresponds to a leaf labeled with the interval [i, i]. A node whose children are labeled [x, y] and [y+1, z] is labeled [x, z]. Thus, given n elements (0-indexed), the root of the segment tree is labeled [0, n-1].

What are they for?

Segment trees are useful for tasks that can be solved with divide and conquer (the first algorithmic concept we will discuss) and that may require updates to the elements. When an element is updated, every interval containing it is modified as well, so the complexity is logarithmic. Range sum/max/min queries over n given elements are the most common applications of segment trees. Binary search can also use a segment tree when element updates take place.

Features

  • As a binary tree, node x has 2x and 2x+1 as children and [x/2] as parent, where [x] is the integer part of x;
  • An efficient way to update a whole range of elements in a segment tree is called "lazy propagation", also done in O(log n) (see the link below for an implementation);
  • Segment trees can be k-dimensional: for example, to answer q queries for sums of submatrices of a matrix, we can use a two-dimensional segment tree;
  • Updating an element/range takes O(log n) time; answering a range query also takes O(log n);
  • The space complexity is linear, which is a great advantage: O(4*n).

13. Fenwick Trees

A Fenwick tree, also known as a binary indexed tree (BIT), is a data structure that likewise provides efficient updates and queries. Compared with segment trees, BITs require less space and are easier to implement.

What are they for?

BITs are used to calculate prefix sums: the prefix sum at position i is the sum of the elements from the first position up to the i-th. They are represented as arrays in which each index is interpreted in binary. For example, index 10 in binary is equivalent to index 2 in decimal.

Features

  • Building the tree is the most interesting part. First, the array should be 1-indexed. To find the parent of node x, convert its index x to binary and flip its rightmost set bit; e.g., the parent of node 6 is 4:

6 = 1*2² + 1*2¹ + 0*2⁰ => 110; flipping the rightmost set bit gives 100 = 1*2² + 0*2¹ + 0*2⁰ = 4
  • Finally, each node stores the sum of an interval of elements, and these partial sums are combined to answer prefix-sum queries;
  • The time complexity of an update is O(log n), and a prefix-sum query is O(log n) as well; the space complexity, however, is a big advantage over the segment tree's O(4*n): just O(n).
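A minimal Python sketch of a 1-indexed BIT (my own naming), showing how both operations walk the tree by manipulating the lowest set bit:

```python
class FenwickTree:
    """1-indexed binary indexed tree: point updates and prefix sums
    both run in O(log n)."""

    def __init__(self, n: int):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i: int, delta: int) -> None:
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)        # jump to the next responsible node

    def prefix_sum(self, i: int) -> int:
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)        # clear the lowest set bit (6 -> 4 -> 0)
        return total

bit = FenwickTree(8)
for idx, value in enumerate([5, 3, 7, 6, 2, 1, 4, 8], start=1):
    bit.update(idx, value)
print(bit.prefix_sum(4))  # 5 + 3 + 7 + 6 = 21
```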

14. Disjoint Set Union

We have n elements, each initially representing its own set. A disjoint set union (DSU, also known as union-find) allows us to do two things efficiently:

1. UNION – combines two sets (unites the sets of two different elements if they are not already in the same set);
2. FIND – finds the set an element belongs to.

What are they for?

DSU is very important in graph theory. You can check whether two vertices belong to the same connected component, or even merge two connected components.

Take cities and towns: as neighboring cities with growing populations and economies expand, they can easily merge into a megacity. The two cities thus fuse, and their inhabitants live in the same metropolis. We can also check which city a person lives in by calling FIND.

Features

  • They are represented as trees; when two sets are combined, one of the two roots becomes the main root, and the other root becomes a child of it;
  • Practical optimizations are path compression (flattening the tree) and union by size/rank, attaching the smaller tree under the root of the larger one (see the sketch below);
  • With both optimizations, all operations run in nearly constant amortized time.
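The implementation referenced above is not included in this copy of the article, so here is my own minimal Python sketch of a DSU with path compression and union by size:

```python
class DSU:
    """Disjoint set union with path compression and union by size."""

    def __init__(self, n: int):
        self.parent = list(range(n))   # each element starts as its own root
        self.size = [1] * n

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x: int, y: int) -> bool:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                 # already in the same set
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx             # attach smaller tree under larger
        self.size[rx] += self.size[ry]
        return True

dsu = DSU(5)
dsu.union(0, 1)
dsu.union(1, 2)
print(dsu.find(0) == dsu.find(2))  # True: same "city"
print(dsu.find(0) == dsu.find(4))  # False
```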

15. Minimum Spanning Trees

Given a connected, undirected graph, a spanning tree of the graph is a subgraph that is a tree and connects all the nodes. A single graph can have many different spanning trees. A minimum spanning tree (MST) of a weighted, connected, undirected graph is a spanning tree whose weight (cost) is less than or equal to the weight of every other spanning tree. The weight of a spanning tree is the sum of the weights assigned to each of its edges.

What are they for?

The minimum spanning tree problem is an optimization (minimum-cost) problem. For a road network, we can assume that one of the factors in building national roads between n cities is keeping the distance between adjacent cities minimal.

The national road network can then be represented by the MST of the road network graph.

Features

As a tree, the MST of a graph with n vertices has n-1 edges. The problem can be solved with the following methods:

  • Prim's algorithm – the best choice for dense graphs (graphs with n nodes and a number of edges close to n(n-1)/2);
  • Kruskal's algorithm – the most commonly used; a greedy algorithm based on disjoint set union (which we will also discuss later);
  • The time complexity of building the MST is O(m log n) for Kruskal's (where m is the number of edges) and O(n²) for Prim's.

II. Algorithms

1. Divide and Conquer

Divide and conquer (DAC) is not a specific algorithm in itself, but rather an important class of algorithms to know before delving into other topics. It is used to solve problems that can be divided into subproblems similar to the original problem but of a smaller scale. DAC then solves them recursively and finally merges the results to find a solution to the problem.

It is divided into three stages:

  • Divide – break the problem into subproblems;
  • Conquer – solve the subproblems recursively;
  • Combine – merge the results of the subproblems into the final solution.

What is it for?

A practical application of divide and conquer (DAC) is parallel programming with multiple processors, where subproblems are executed on different machines.

DAC is the basis for many algorithms, such as quicksort, merge sort, binary search, or fast multiplication algorithms.

Features

  • Every DAC problem can be written as a recurrence relation, so you have to find the base case that stops the recursion;
  • Its complexity is T(n) = D(n) + C(n) + M(n): each stage (divide, conquer, merge) has its own complexity, depending on the problem.

2. Sorting Algorithms

A sorting algorithm rearranges given elements (of an array or list) according to a comparison operator on the elements. When we refer to a sorted array, we usually mean ascending order (the comparison operator is "<"). There are many types of sorting algorithms, with varying time and space complexities; some are comparison-based, some are not. Here are the most popular/efficient ones:

Bubble Sort

Bubble sort is one of the simplest sorting algorithms. It repeatedly swaps adjacent elements when they are in the wrong order. It is stable, runs in O(n²) time, and needs O(1) auxiliary space.

Counting Sort

Counting sort is not comparison-based. It basically records the frequency of each element (like a hash), determines the minimum and maximum, and then iterates between them to place each element according to its frequency. It runs in O(n+k) time, with space proportional to the range of the data k; it is effective when the input range is not significantly larger than the number of elements.

Quick Sort

Quicksort is an application of divide and conquer. It is based on choosing an element as the pivot (the first, last, or middle value) and then swapping elements so that the pivot ends up between all elements smaller than it and all elements larger than it. It needs no extra space, and its average time complexity is O(n*log n), the optimum for a comparison-based approach (its worst case is O(n²)).
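For illustration, here is a minimal in-place Python sketch of my own, using the Lomuto partition scheme with the last element as the pivot (one of the pivot choices mentioned above):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort (Lomuto partition, last element as pivot)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:             # move smaller elements to the left
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # pivot lands between smaller and larger
    quicksort(a, lo, i - 1)          # recurse on both sides of the pivot
    quicksort(a, i + 1, hi)

data = [9, 2, 7, 4, 1, 8]
quicksort(data)
print(data)  # [1, 2, 4, 7, 8, 9]
```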

Merge Sort

Merge sort is also a divide-and-conquer application. It splits the array into two halves, sorts each half, and then merges them. Its time complexity is also O(n*log n), so it is super fast like quicksort, but unfortunately it needs O(n) extra space to hold the two subarrays while merging them.

Radix Sort

Radix sort uses counting sort as a subroutine, so it is not a comparison-based algorithm. Why is counting sort (CS) alone not enough? Suppose we have to sort elements in the range [1, n²]; using CS would take O(n²). We want a linear algorithm, O(n+k), for elements in the range [1, k]. Radix sort sorts the elements digit by digit, from the least significant (units) to the most significant (tens, hundreds, and so on). Extra space (from CS): O(n).

3. Searching Algorithms

Search algorithms are designed to check for the presence of an element in a data structure and even return it. There are several ways to search, but here are two of the most popular:

Linear Search

The approach is simple: you scan the data structure starting from the first index, comparing your value with each element one by one until they are equal. If the value is not in the DS, -1 is returned.

Time complexity: O(n)

Binary Search

BS is an efficient search algorithm based on divide and conquer. Unfortunately, it only works on sorted data structures. As a DAC method, you repeatedly split the DS in half and compare the value you are searching for with the middle element. If they are equal, the search ends; otherwise, if your value is greater/less than the middle element, the search continues in the right/left half.

Time complexity: O(log N)
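A minimal Python sketch of the halving loop just described (function name and test data are my own):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1    # continue in the right half
        else:
            hi = mid - 1    # continue in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```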

4. Sieve of Eratosthenes

Given an integer n, print all prime numbers less than or equal to n. The sieve of Eratosthenes is one of the most efficient algorithms for this problem, and it works well for n up to about 10,000,000.

The method uses a frequency list/map to mark the primality of each number in the range [0, n]: ok[x] = 0 if x is prime, ok[x] = 1 otherwise.

We iterate through the numbers, select each unmarked (0) number as a prime, and mark all its multiples in the list with 1. Finally, we can answer any primality query in O(1).

The classical algorithm is sufficient in many applications, but some optimizations are possible. First, 2 is the only even prime, so we can handle its multiples separately and iterate only over odd numbers when looking for primes. Second, for a number x, the multiples 2x, 3x, ..., (x-1)x were already marked while we processed 2, 3, and so on, so the multiple-marking loop can start at x² each time. Finally, since half of those multiples are even (and we iterate over odd primes), the marking loop can step by 2x.

Space complexity: O(n). Time complexity: O(n*log(log n)) for the classical algorithm, O(n) for the optimized version.
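A minimal Python sketch of the classical sieve, including the start-at-x² optimization mentioned above (names are my own):

```python
def sieve(n: int):
    """Classical sieve of Eratosthenes: return all primes <= n."""
    is_composite = [False] * (n + 1)   # plays the role of the ok[] list
    primes = []
    for x in range(2, n + 1):
        if not is_composite[x]:        # unmarked: x is prime
            primes.append(x)
            for multiple in range(x * x, n + 1, x):  # start at x*x
                is_composite[multiple] = True
    return primes

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```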

5. String Matching (Knuth-Morris-Pratt)

Given a text of length n and a pattern of length m, find all occurrences of the pattern in the text.

The Knuth-Morris-Pratt (KMP) algorithm is an efficient way to solve the pattern-matching problem. The naive solution is based on a "sliding window": each time we set a new starting index, we compare character by character, with starting indices running from 0 to n-m. Thus, the time complexity is O(m*(n-m+1)) ~ O(n*m).

KMP is an optimization of the naive solution: it runs in O(n) and works best when the pattern has many repeated subpatterns. It also uses a sliding window, but instead of re-comparing all characters of the substring, it keeps track of the longest suffix of the current subpattern that is also its prefix. In other words, whenever we detect a mismatch after some matches, we already know some of the characters in the next window, so it is useless to match them again: we restart matching from the character following that prefix. How do we know how many characters we can skip? We build a preprocessed array over the pattern that tells us.
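Here is a minimal Python sketch of KMP (my own variable names), with the preprocessed array built over the pattern and then reused to skip redundant comparisons:

```python
def kmp_search(text: str, pattern: str):
    """Return the start indices of all occurrences of pattern in text."""
    m = len(pattern)
    # fail[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it (the "skip" array from the text above).
    fail = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]             # reuse what we already matched
        if ch == pattern[k]:
            k += 1
        if k == m:                      # full match ending at index i
            matches.append(i - m + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("ababcabababc", "abab"))  # [0, 5, 7]
```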

6. Greedy Algorithms

The greedy method is mostly used for optimization problems in which locally optimal choices lead to a globally optimal solution.

That is, when greedy works, the optimal choice at each step leads to an overall optimal solution. In many cases, however, a decision made at one step affects the set of decisions available at the next, so the algorithm's correctness must be proved mathematically. Greedy produces good solutions for some problems, but not for all of them (an optimal solution is not always guaranteed)!

Greedy algorithms usually have five components:

  • A candidate set – from which a solution is created;
  • A selection function – chooses the best candidate;
  • A feasibility function – decides whether a candidate can contribute to the solution;
  • An objective function – assigns a value to a candidate or to a (partial) solution;
  • A solution function – indicates when we have found a complete solution.

Fractional knapsack problem

Given the weights and values of n items, we need to put these items into a knapsack of capacity W so as to maximize the total value in the knapsack (splitting items is allowed: the value taken is proportional to the weight taken).

The basic idea of the greedy approach is to sort all items by their value/weight ratio and then add as many whole items as possible. When we find an item whose weight w2 exceeds the remaining capacity w1 of the knapsack, we split it: we take only w1 of its weight to maximize profit. This greedy choice can be shown to be optimal.
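A minimal Python sketch of this greedy (the instance data is a classic textbook example, not from the original article):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Returns the maximum total value,
    allowing the last item taken to be split."""
    # Greedy: best value-per-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:          # take the whole item
            total += value
            capacity -= weight
        else:                           # split: take only what still fits
            total += value * capacity / weight
            break
    return total

# Answer is 240.0: whole items worth 60 and 100, plus 2/3 of the 120 item.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))
```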

7. Dynamic Programming

Dynamic programming (DP) is related to divide and conquer: it also breaks a problem down into similar subproblems, but those subproblems overlap and depend on each other, so they are not solved independently.

The result of each subproblem is stored (memoized) so it can be reused at any time later. DP is primarily used for (time and space) optimization and is based on finding a recurrence.

DP applications include the Fibonacci series, the Tower of Hanoi, Roy-Floyd-Warshall, Dijkstra's, and more. Below we discuss the DP solution of the 0-1 knapsack problem.

0-1 knapsack problem

Given the weights and values of n items, we need to put these items into a knapsack of capacity W so as to maximize the total value in the knapsack (unlike the greedy version, splitting items is not allowed). The 0-1 property comes from the fact that we either take an item entirely or not at all. We build a DP structure, the matrix dp[i][cw], storing the maximum profit obtainable by selecting among the first i objects with total weight capacity cw. As the base case, dp[1][cw] = v[1] for every cw >= w[1], where w[i] is the weight of the i-th object and v[i] is its value. The recurrence is as follows:

dp[i][cw] = max(dp[i-1][cw], dp[i-1][cw-w[i]]+v[i])

Let’s analyze it a little bit.

dp[i-1][cw] describes the case where we do not add the current item to the knapsack. dp[i-1][cw - w[i]] + v[i] is the case where we do add it: dp[i-1][cw - w[i]] is the maximum profit obtainable with the first i-1 items and the remaining capacity (the current capacity minus the item's weight); we then add the item's value to it.

The answer is stored in dp[n][W]. We can optimize with a simple observation: in the recurrence, the current row depends only on the previous row, so we do not need to store the whole DP matrix; a single array over capacities suffices, reducing the space complexity to O(W). Time complexity: O(n*W).
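A minimal Python sketch of the space-optimized version just described, keeping a single array over capacities (the test data is my own):

```python
def knapsack_01(values, weights, W):
    """Space-optimized 0-1 knapsack: dp[cw] is the best value
    achievable with total weight at most cw."""
    dp = [0] * (W + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for cw in range(W, w - 1, -1):
            dp[cw] = max(dp[cw], dp[cw - w] + v)
    return dp[W]

# Best is 220: take the items worth 100 and 120 (weights 20 + 30 = 50).
print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], W=50))
```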

8. Longest Common Subsequence

Given two sequences, find the length of the longest subsequence common to both of them. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous. For example, "bcd", "abdg", and "c" are all subsequences of "abcdefg".

This is another application of dynamic programming. The LCS algorithm uses DP to solve the above problems.

The actual subproblem is to find the length of the longest common subsequence of the prefix of length i of sequence A and the prefix of length j of sequence B.

Next, we construct the DP structure lcs[][] (a matrix), where lcs[i][j] is the maximum length of a common subsequence of the first i elements of A and the first j elements of B, building it up from smaller indices. Obviously, the solution is stored in lcs[n][m], where n is the length of A and m is the length of B.

The recurrence is simple and intuitive. For simplicity, consider both sequences 1-indexed. First, we initialize lcs[i][0] = 0 for 1 <= i <= n and lcs[0][j] = 0 for 1 <= j <= m as the base cases (an empty prefix yields no common subsequence). Then we consider two main cases: if A[i] equals B[j], then lcs[i][j] = lcs[i-1][j-1] + 1 (one more identical character than the previous LCS); otherwise, it is the maximum of lcs[i-1][j] (A[i] not considered) and lcs[i][j-1] (B[j] not considered).

Time complexity: O(n*m) Additional space: O(n*m)

9. Longest Increasing Subsequence

Given a sequence A of n elements, find the length of the longest subsequence whose elements appear in ascending order. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous. For example, "bcd", "abdg", and "c" are subsequences of "abcdefg".

LIS is another classic problem that can be solved using dynamic programming.

We use an array l[] as the DP structure, where l[i] is the maximum length of an increasing subsequence that starts with A[i] and uses elements from the suffix [A[i], ..., A[n]]. l[i] is 1 if every element after A[i] is smaller than A[i]; otherwise, it is 1 plus the maximum l[j] over all later elements A[j] greater than A[i]. Obviously, l[n] = 1, where n is the length of A. The implementation is bottom-up (starting at the end).

The search for the best candidate among the elements after the current one can be optimized: the best we can do is locate it with binary search, which brings the total complexity down to O(n log n).

To recover the maximum-length subsequence itself (not just its length), we simply use an additional array ind[] that stores the index of each maximum.

Time complexity: O(n*log n) Additional space: O(n)
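A minimal Python sketch of the O(n log n) approach using the standard bisect module; this "tails" formulation is a common variant of the binary-search idea described above, not the article's exact l[] array:

```python
import bisect

def lis_length(a):
    """O(n log n) LIS: tails[k] holds the smallest possible tail
    of an increasing subsequence of length k+1."""
    tails = []
    for x in a:
        pos = bisect.bisect_left(tails, x)   # the binary search step
        if pos == len(tails):
            tails.append(x)                  # x extends the longest LIS
        else:
            tails[pos] = x                   # x improves an existing tail
    return len(tails)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4  (e.g., 2, 3, 7, 18)
```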

10. Convex Hull Algorithm

Given a set of n points in the plane, find the minimum-area convex polygon that contains all the given points (inside it or on its sides). This polygon is called the convex hull. The convex hull problem is a classic of computational geometry with many real-life applications. For example, collision avoidance: if a car's convex hull avoids collisions, the car itself does too, and paths can be computed from the car's convex representation. Shape analysis is also done with the help of convex hulls; for instance, image matching can be accomplished by comparing the convex defect trees of the models.

There are several algorithms for finding the convex hull, such as Jarvis's march and the Graham scan. Here we will talk about the Graham scan and some useful optimizations.

The Graham scan sorts points by polar angle: the slope of the line determined by each point and a chosen reference point. A stack is then used to hold the convex hull built so far. When a point x is pushed onto the stack, other points are popped off until x forms an angle smaller than 180° with the line determined by the last two points on the stack. Finally, the last point on the stack closes the polygon. Because of the sorting step, the time complexity of this method is O(n log n). However, computing slopes can introduce precision errors.

An improved solution, with the same time complexity but smaller error, sorts the points by coordinates (by x, then by y). We then consider the line formed by the leftmost and rightmost points, split the problem into two subproblems, and find the convex hull on each side of the line. The convex hull of all the given points is the union of the two hulls.

11. Graph Traversals

The graph traversal problem is to visit all the nodes of a graph in a particular order, usually computing other useful information along the way.

Breadth First Search

The breadth-first search (BFS) algorithm is one of the most common ways to determine whether a graph is connected, or, in other words, to find the connected component containing the BFS source node.

BFS is also used to compute the shortest distance (in number of edges) between the source node and all other nodes. Another version of BFS is the Lee algorithm, which computes the shortest path between two cells in a grid.

The algorithm first visits the source node, then its neighbors, which are pushed into a queue. The first element of the queue is popped; we visit all of its neighbors and push the previously unvisited ones into the queue. The process repeats until the queue is empty: at that point, all reachable vertices have been visited and the algorithm ends.
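A minimal Python sketch of the queue-based routine just described (the example graph is my own):

```python
from collections import deque

def bfs(adj, source):
    """Return the BFS visit order from source; adj is an adjacency list."""
    visited = {source}
    order = []
    queue = deque([source])
    while queue:
        node = queue.popleft()           # pop the first element
        order.append(node)
        for neighbor in adj[node]:
            if neighbor not in visited:  # push only unvisited neighbors
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs(graph, 0))  # [0, 1, 2, 3]
```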

Depth-first Search

Depth-first search (DFS) algorithm is another common traversal method. It is actually the best choice when checking the connectivity of a graph.

First, we visit the root node and push it onto the stack. While the stack is not empty, we examine the node at the top. If that node has unvisited neighbors, one of them is chosen and pushed onto the stack. Otherwise, if all of its neighbors have been visited, we pop the node. When the stack becomes empty, the algorithm ends.

After this traversal, a DFS tree is formed. DFS trees have many applications; The most common one is to store the “start” and “end” times of each node — the time it enters the stack, and the time it pops off the stack, respectively.

12. Floyd-Warshall Algorithm

The Floyd-Warshall (Roy-Floyd) algorithm solves the all-pairs shortest path problem: it finds the shortest distance between every pair of vertices in a weighted directed graph.

FW is a dynamic programming application. The DP structure (a matrix) dist[][] is initialized with the input graph's adjacency matrix. We then consider each vertex k as an intermediate between every other two nodes, updating the shortest path between every pair: dist[i][j] becomes the minimum of dist[i][j] and dist[i][k] + dist[k][j], the latter applying when k lies on the shortest path between i and j.

Time complexity: O(n³) Space complexity: O(n²)
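A minimal Python sketch of the triple loop over intermediate vertices (the matrix values are my own toy data):

```python
INF = float('inf')

def floyd_warshall(dist):
    """dist: n x n matrix of direct edge weights (INF if no edge, 0 on
    the diagonal). Mutates dist into all-pairs shortest distances."""
    n = len(dist)
    for k in range(n):            # k as the intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(d))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```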

13. Dijkstra's Algorithm and Bellman-Ford Algorithm

Dijkstra algorithm

Given a graph and a source vertex in the graph, find the shortest path from the source to all vertices in the given graph.

Dijkstra’s algorithm is used to find such paths in a weighted graph where all the weights are positive.

Dijkstra's is a greedy algorithm that builds a shortest path tree (SPT) rooted at the source node. The SPT could be maintained with a self-balancing binary tree, but the algorithm can also be implemented with a heap (or priority queue); we discuss that solution, because its time complexity is O(|E| * log |V|). The idea is to use an adjacency-list representation of the graph, so that neighbors can be traversed BFS-style in O(|V| + |E|) time.

All vertices are traversed BFS-style, and those whose shortest distance is not yet finalized are kept in a min-heap (priority queue).

Create a min-heap and push every node into it along with its distance value. The source becomes the root of the heap with distance 0; every other node is initially assigned an infinite distance. While the heap is not empty, we extract the node x with the minimum distance value. For each vertex y adjacent to x that is still in the heap, if its distance value is greater than the distance value of x plus the weight of edge (x, y), we update the distance value of y.
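A minimal Python sketch using heapq as the min-heap; instead of a decrease-key operation it pushes duplicate entries and skips stale ones, a common variant of the scheme described above (the example graph is my own):

```python
import heapq

def dijkstra(adj, source):
    """adj: {node: [(neighbor, weight), ...]} with non-negative weights.
    Returns the shortest distances from source."""
    dist = {node: float('inf') for node in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, x = heapq.heappop(heap)      # node with the smallest distance
        if d > dist[x]:
            continue                    # stale entry: skip it
        for y, w in adj[x]:
            if d + w < dist[y]:         # relax edge (x, y)
                dist[y] = d + w
                heapq.heappush(heap, (dist[y], y))
    return dist

graph = {'a': [('b', 4), ('c', 1)], 'b': [], 'c': [('b', 2)]}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 3, 'c': 1}
```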

Bellman-ford algorithm

As we said before, Dijkstra's applies only to graphs with positive weights. Bellman-Ford solves this problem: given a weighted graph, it can check whether the graph contains a negative cycle, and if not, it finds the minimum distances (possibly through negatively weighted edges) from our source to all other nodes.

Bellman-Ford is well suited to distributed systems, although its time complexity is O(|V| * |E|).

We initialize dist[] as we did in Dijkstra's. Then, repeating |V| - 1 times: for each edge (x, y), if dist[y] > dist[x] + weight(x, y), we update dist[y].

We repeat the last step once more to detect a possible negative cycle. The idea is that if there is no negative cycle, the previous steps already guarantee minimum distances; if any node's distance still decreases in this extra step, a negative cycle has been detected.

14. Kruskal’s Algorithm

We’ve already talked about what a minimum spanning tree is.

There are two algorithms to find the MST of a graph: Prim's (for dense graphs) and Kruskal's (for most graphs). Now we will discuss Kruskal's algorithm.

Kruskal developed a greedy algorithm for finding the MST. It is very effective on sparse graphs, because its time complexity is O(|E| * log |V|).

The algorithm works as follows: we sort all edges in increasing order of weight, then take the smallest remaining edge. If it does not form a cycle with the current MST, we include it; otherwise, we discard it. We repeat this step until the MST has |V| - 1 edges.

Checking whether an edge closes a cycle (and including it into the MST) is done using disjoint set union, which was discussed earlier.
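A minimal Python sketch combining the edge sort with a small inline DSU (the edge list is my own toy data):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with nodes 0..n-1.
    Returns (total weight, list of MST edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # no cycle: accept the edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 2, 3)]
print(kruskal(4, edges))  # (8, [(0, 1, 1), (1, 2, 2), (2, 3, 5)])
```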

15. Topological Sorting

A directed acyclic graph (DAG) is simply a directed graph that contains no cycles.

A topological ordering of a DAG is a linear ordering of its vertices such that for every arc (x, y), node x appears before node y.

Obviously, the first vertex in a topological ordering is a vertex with in-degree 0 (no arc points to it).

Another special property is that a DAG does not necessarily have a unique topological ordering.

The BFS (breadth-first search) implementation follows this routine: find a node with in-degree 0, append it to the ordering, and remove it from the graph. The resulting graph is also a DAG, so we can repeat the process until the graph is empty.
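A minimal Python sketch of this BFS routine, commonly known as Kahn's algorithm (the example DAG is my own):

```python
from collections import deque

def topological_sort(adj):
    """adj: {node: [successors]}. Returns a valid topological ordering,
    or None if the graph contains a cycle."""
    indegree = {node: 0 for node in adj}
    for node in adj:
        for succ in adj[node]:
            indegree[succ] += 1

    queue = deque(n for n in adj if indegree[n] == 0)  # in-degree 0 first
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in adj[node]:          # "remove" node from the graph
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    return order if len(order) == len(adj) else None

dag = {'shop': ['cook'], 'cook': ['eat'], 'eat': []}
print(topological_sort(dag))  # ['shop', 'cook', 'eat']
```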

At any time during DFS, a node can fall into one of three categories:

  • Nodes we have finished visiting (popped off the stack);
  • Nodes currently on the stack;
  • Nodes not yet discovered.

If, during DFS in a DAG, node x has an outgoing edge to node y, then y belongs to either category 1 or category 3: if y were on the stack, the edge (x, y) would close a cycle, which contradicts the definition of a DAG.

This property tells us that a vertex is popped off the stack only after all of its outgoing neighbors have been popped. Therefore, to topologically sort a graph, we keep the list of popped vertices and reverse it.

More relevant articles and my contact information are listed here:

Gitee.com/haiyongcsdn…

Wow, you've reached the end of the article. Thank you for reading! The article is long, so if you spot any mistakes, please point them out in the comments or message me privately.

Putting this together was not easy, so finally, don't forget to leave a ❤ or 📑 to show your support!