Chapter 6 Breadth-first search
Graph:
A graph models a set of connections and consists of nodes and edges. A node may be directly connected to many other nodes, which are called its neighbors.
In the example graph, Rama is Alex's neighbor. Adit is not Alex's neighbor because they are not directly connected, but Adit is both Rama's neighbor and Tom's neighbor.
Graphs can be implemented with hash tables, which map keys to values: each node is mapped to all of its neighbors.
Directed graphs have arrows, and the relationship is one-way; undirected graphs have no arrows, and directly connected nodes are each other's neighbors.
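As a quick illustration, the hash-table representation of an undirected graph like the one above might look like this (the exact edge list is assumed for demonstration):

graph = {}
graph["alex"] = ["rama"]            # Rama is Alex's neighbor
graph["rama"] = ["alex", "adit"]    # Adit is Rama's neighbor
graph["tom"] = ["adit"]
graph["adit"] = ["rama", "tom"]     # in an undirected graph, edges go both ways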
Steps for finding the shortest path:
- Model the problem as a graph
- Use breadth-first search to solve it
Breadth-first search
Breadth-first search is a graph algorithm that finds the shortest distance between two things; it is used to solve the shortest-path problem.
It is used to answer two questions (starting from the start node, the search fans outward, checking first-degree connections, then second-degree connections, and so on; a queue ensures nodes are checked in the order they were added):
- Is there a path from node A to node B?
- What is the shortest path from node A to node B?
Algorithm principle:
from collections import deque

def search(name):
    search_queue = deque()              # queue of people to check
    search_queue += graph[name]         # add name's neighbors to the queue
    searched = []                       # people already checked
    while search_queue:
        person = search_queue.popleft()
        if person not in searched:      # only check a person once
            if person_is_seller(person):
                print(person + " is a mango seller!")
                return True
            else:
                search_queue += graph[person]
                searched.append(person)
    return False
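The function assumes that graph and person_is_seller are already defined. A minimal setup, loosely following the book's mango-seller example (the names and the "ends in m" rule are the book's toy convention, reproduced here as an assumption):

graph = {}
graph["you"] = ["alice", "bob", "claire"]
graph["bob"] = ["anuj", "peggy"]
graph["alice"] = ["peggy"]
graph["claire"] = ["thom", "jonny"]
graph["anuj"] = []
graph["peggy"] = []
graph["thom"] = []
graph["jonny"] = []

def person_is_seller(person):
    # toy rule: a name ending in "m" marks a mango seller
    return person[-1] == "m"

search("you")   # prints "thom is a mango seller!" and returns True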
Running time:
During the search, the algorithm follows every edge, so the running time is at least O(number of edges). A queue is also used to hold the people to be checked, and adding everyone to the queue takes O(number of people) in total. So breadth-first search takes O(number of people + number of edges), which is usually written as O(V + E), where V is the number of vertices and E is the number of edges.
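For the mango-seller graph sketched above (an illustrative assumption), that means V = 8 people and E = 8 edges, so the search handles on the order of 16 items.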
The queue
Queues are like stacks in that they cannot be accessed randomly; only two operations are supported: enqueue and dequeue.
A queue is a First In, First Out (FIFO) data structure, while a stack is a Last In, First Out (LIFO) data structure.
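A short illustration using Python's built-in types (collections.deque as the queue, a plain list as the stack):

from collections import deque

queue = deque()
queue.append("first")       # enqueue
queue.append("second")
queue.popleft()             # dequeue -> "first"  (FIFO)

stack = []
stack.append("first")       # push
stack.append("second")
stack.pop()                 # pop -> "second"  (LIFO)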
Chapter 7 Dijkstra's algorithm
In the previous chapter, breadth-first search was used to find the shortest path between two points, where "shortest path" meant the path with the fewest segments. In Dijkstra's algorithm, each segment (edge) is assigned a weight, and the algorithm finds the path with the lowest total weight.
A graph whose edges carry weights is called a weighted graph; a graph without weights is an unweighted graph. To compute the shortest path in an unweighted graph, use breadth-first search; to compute the shortest path in a weighted graph, use Dijkstra's algorithm.
A graph may also contain rings (cycles), meaning you can start at a node, travel around, and come back to the same node. Going around a ring only adds weight, so a path that includes a ring can never be the shortest path. An undirected edge is effectively two nodes pointing at each other, which is itself a ring; for this reason, Dijkstra's algorithm is applied here only to directed acyclic graphs.
Dijkstra's algorithm steps:
- Find the cheapest node, i.e. the one reachable at the lowest cost;
- For that node's neighbors, check whether there is a cheaper path to them through this node, and if so, update their costs;
- Repeat this process until it has been done for every node in the graph;
- Compute the final path (a sketch of this step follows the implementation below).
Dijkstra's algorithm implementation (using the book's example graph, defined in code below):
- Hash table setup: every node's neighbors, and the weights of the edges to them, are stored in a hash table;
# the graph: each node maps to its neighbors and the edge weights
graph = {}
graph["start"] = {}
graph["start"]["a"] = 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["fin"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["fin"] = 5
graph["fin"] = {}

# the costs table (cost to reach each node from the start)
infinity = float("inf")
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["fin"] = infinity     # the finish is not reachable yet

# the parents table
parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["fin"] = None

# record of the processed nodes
processed = []
- Algorithm code (the helper find_lowest_cost_node used here is defined further below):
node = find_lowest_cost_node(costs)         # find the cheapest node among the unprocessed nodes
while node is not None:
    cost = costs[node]                      # cost of reaching the current node
    neighbors = graph[node]                 # the current node's neighbors
    for n in neighbors.keys():              # walk through all neighbors of the current node
        new_cost = cost + neighbors[n]
        if costs[n] > new_cost:             # if it is cheaper to reach the neighbor via the current node,
            costs[n] = new_cost             # update the neighbor's cost
            parents[n] = node               # and set the neighbor's parent to the current node
    processed.append(node)                  # mark the current node as processed
    node = find_lowest_cost_node(costs)     # figure out which node to process next, and loop
- Helper that finds the node with the lowest cost:
def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:                      # go through every node
        cost = costs[node]
        if cost < lowest_cost and node not in processed:    # the cheapest unprocessed node so far
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node
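The last step listed above, computing the final path, is not part of the loop itself; with the parents table it is a short walk backwards from the finish to the start. A minimal sketch, assuming the tables above have already been filled in by the loop:

# reconstruct the cheapest path by following parents from "fin" back to "start"
path = []
node = "fin"
while node is not None:
    path.append(node)
    node = parents.get(node)    # "start" has no parent, so the walk stops there
path.reverse()

print("cheapest cost:", costs["fin"])   # 6 for the example graph above
print("path:", " -> ".join(path))       # start -> b -> a -> fin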