I have collected my previous articles into a GitHub repository; everyone is welcome to star it: github.com/crisxuan/be…
Earlier we learned how the transport layer delivers data between client and server processes, providing end-to-end communication. Now let's look at how the network layer implements host-to-host communication. A piece of the network layer runs in every host and router in the network, which makes it necessarily complex, so I'm going to spend a lot of time on it.
Network Layer Overview
The network layer is the third layer of the OSI reference model, located between the transport layer and the link layer. Its main purpose is to achieve transparent data transmission between two end systems.
On the surface, the role of the network layer is simple: to move packets from one host to another. To do this, the network layer provides two important functions:
Forwarding: There are a huge number of routers on the Internet, and routers are the foundation of the Internet. One of a router's most important functions is packet forwarding: when a packet arrives at one of a router's input links, the router must move the packet to the appropriate output link. Forwarding is the only function implemented in the data plane.
There are two planes in a network:
- Data plane: Responsible for forwarding network traffic, for example via the forwarding tables in routers and switches (we'll talk about these later).
- Control plane: Controls network behavior, such as network path selection.
Routing: The network layer must determine the path that packets take as they flow from a sender to a receiver. The algorithms that compute these paths are called routing algorithms.
That is, forwarding is a router's local action of moving a packet from an input link to the appropriate output link interface, while routing refers to the network-wide selection of the path a packet takes from source to destination. We'll have a lot to say about both forwarding and routing.
So, how does the router know which routes are available?
A key concept for every router is the forwarding table. The router forwards a packet by examining the value of a field in the arriving packet's header and using that value to index into the forwarding table; the matching entry indicates the output link onto which the packet should be forwarded. As shown in the figure below
In the figure above, when a packet with header value 1001 arrives at a router, the router looks that value up in its forwarding table. The forwarding table itself is computed by the routing algorithm, and the matching entry determines the output link on which the packet leaves. Every router thus performs two functions: forwarding and routing. Let's talk about how routers work.
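To make this concrete, here is a minimal sketch of a forwarding table as a plain mapping from a header value to an output link. The specific values are made up for illustration and are not taken from the figure.

```python
# Minimal forwarding-by-header-value sketch (illustrative values only).
forwarding_table = {
    "0100": 2,   # header value -> output link number (hypothetical entries)
    "1001": 1,
    "1110": 3,
}

def forward(header_value: str) -> int:
    """Return the output link for a packet, or raise if no entry matches."""
    try:
        return forwarding_table[header_value]
    except KeyError:
        raise ValueError(f"no forwarding entry for header value {header_value}")

print(forward("1001"))  # -> 1: the packet is sent out on link 1
```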
How routers work
The following is a router architecture diagram. A router consists mainly of four components:
- Input port: The input port performs several functions. Line termination and data link processing implement the physical layer and data link layer associated with an individual input link of the router. The lookup/forwarding function is central to the router's switching operation: by consulting the forwarding table, the input port determines the output port to which an arriving packet should be sent via the switching fabric.
- Switching fabric: The switching fabric connects the router's input ports to its output ports; it acts as a network inside the router.
- Output port: The output port stores packets forwarded through the switching fabric and transmits them on the outgoing link, performing the data link layer and physical layer functions in the reverse direction of the input port.
- Routing processor: The routing processor runs routing protocols, maintains routing tables, and performs network management functions.
The above is only a brief introduction; in reality these components are more involved than described. Let's look at each of them in more depth.
The input port
The input port has several functions, including line termination, data link processing, and lookup/forwarding. Each of these functions has a corresponding module inside the input port. The internal structure of the input port is shown in the figure below
Each input port keeps a shadow copy of the forwarding table, which is computed by the routing processor and pushed down to the ports as it is updated. With this local copy, the forwarding decision can be made at each input port without involving the routing processor for every packet. This is decentralized forwarding, and it avoids the bottleneck of funneling every forwarding decision through a single routing processor.
In routers with limited input port processing capacity, the input port does not perform the lookup itself; packets are instead passed to the routing processor, which consults the forwarding table and forwards each packet to the appropriate output port.
Typically, such a router is not a dedicated router at all, but a workstation or server acting as a router. In this case, the routing processor is simply the CPU, and the input ports are just network interface cards.
The input port uses the forwarding table to determine the output port and then forwards the packet. This raises a question: does every possible destination need its own entry in the forwarding table? If there are hundreds of millions of possible destination addresses, does the table need hundreds of millions of entries?
Intuitively, the answer is obviously no. Here's an example.
Here is an example forwarding table in which three destination address prefixes are mapped to three output links, with everything else going to a fourth, default link.
As you can see, the forwarding table does not need that many entries: four entries are enough, corresponding to output link interfaces 0, 1, 2, and 3. In other words, four forwarding table entries can cover hundreds of millions of destination addresses.
How do you do that?
With this style of forwarding table, the router matches a prefix of the packet's destination address against the entries in the table.
If there is a match, the packet is forwarded onto the corresponding link. This may sound abstract, so here is an example.
For example, suppose a packet with destination address 11000011 10010101 00010000 0001100 arrives. Because this address matches the prefix 11000011 10010101 00010000, the router forwards the packet to the link of interface 0. If the address does not match any of the first three prefixes, the router forwards the packet to link interface 3.
Forwarding uses the longest prefix matching rule: when a destination address matches more than one entry, the router uses the entry with the longest matching prefix, as the sketch below illustrates.
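Here is a short sketch of how longest prefix matching can be implemented. Only the interface-0 prefix and the default interface 3 come from the example above; the prefixes for interfaces 1 and 2 are made-up placeholders.

```python
# Longest prefix matching sketch. The interface-0 prefix and default
# interface 3 follow the text; the other two prefixes are hypothetical.
FORWARDING_TABLE = [
    ("110000111001010100010000", 0),  # 11000011 10010101 00010000 (from the text)
    ("1100001110010101000110",   1),  # hypothetical prefix
    ("11000011100101",           2),  # hypothetical prefix
]
DEFAULT_INTERFACE = 3

def lookup(dest_address_bits: str) -> int:
    """Return the output interface of the longest matching prefix."""
    best_len, best_iface = -1, DEFAULT_INTERFACE
    for prefix, iface in FORWARDING_TABLE:
        if dest_address_bits.startswith(prefix) and len(prefix) > best_len:
            best_len, best_iface = len(prefix), iface
    return best_iface

# The example packet's destination address (spaces removed). It matches both
# the 14-bit prefix of interface 2 and the 24-bit prefix of interface 0;
# the longer match wins, so the packet goes to interface 0.
addr = "1100001110010101000100000001100"
print(lookup(addr))  # -> 0
```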
Once the lookup has determined a packet's output port, the packet can enter the switching fabric. If the fabric is currently busy, the newly arrived packet is temporarily blocked at the input port until the fabric can schedule it.
Switching fabric
The switching fabric is the core of a router: its job is to actually move packets from input ports to output ports. Switching can be accomplished in several ways, mainly switching via memory, switching via a bus, and switching via an interconnection network. Let's discuss each in turn.
- Switching via memory: The earliest routers were traditional computers, and switching between input and output ports was done under the direct control of the CPU (the routing processor). Input and output ports functioned like I/O devices in a traditional operating system. When a packet arrived at an input port, the port signaled the routing processor with an interrupt, and the packet was copied from the input port into memory. The routing processor then extracted the destination address from the packet header, looked up the appropriate output port in the forwarding table, and copied the packet into that output port's buffer. Note that each forwarded packet crosses the memory bus twice, so if the memory bandwidth allows B packets per second to be written into or read from memory, the overall switch throughput (the total rate at which packets move from input ports to output ports) must be less than B/2 (a small numerical sketch of this bound follows this list).
- Switching via a bus: Here the input port transfers a packet directly to the output port over a shared bus, without intervention by the routing processor. It works as follows: the input port attaches an internal label to the packet indicating the desired output port and sends the packet onto the bus. Every output port receives the packet, but only the port whose identity matches the label keeps it and removes the label; the label is used only to cross the bus inside the switch. If multiple packets arrive at the router at the same time, only one at a time can cross the bus, and the others must wait before entering the switching fabric.
- Switching via an interconnection network: One way to overcome the bandwidth limitation of a single shared bus is to use a more sophisticated interconnection network, such as a crossbar switch. As shown in the figure below
Each vertical bus crosses each horizontal bus at a crosspoint, which can be opened or closed at any time by the fabric controller. When a packet arrives at input port A and needs to be forwarded to output port X, the controller closes the crosspoint where the A and X buses intersect, and port A sends the packet onto its bus. A crossbar fabric of this kind is non-blocking: closing the A -> X crosspoint does not affect the B -> Y connection. However, if two packets from different input ports are destined for the same output port, only one of them can be switched at a time and the other must wait.
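As a rough back-of-the-envelope sketch (my own illustration, not from the article) of the memory-switching bound mentioned above: each forwarded packet is copied into memory once and out of memory once, so forwarding throughput is capped at half the memory bandwidth.

```python
def max_memory_switch_throughput(memory_copies_per_sec: float) -> float:
    """Upper bound on forwarded packets/second for a memory-switched router.

    Each packet is copied twice: input port -> memory, then memory -> output
    port, so throughput cannot exceed half the memory bandwidth (B/2).
    """
    copies_per_packet = 2
    return memory_copies_per_sec / copies_per_packet

# e.g. if memory can perform 2 million packet copies per second,
# the router can forward at most 1 million packets per second.
print(max_memory_switch_throughput(2_000_000))  # 1000000.0
```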
Output port processing
As shown in the figure below, output port processing takes packets that have been stored in the output port's memory and transmits them on the output link. This includes selecting and dequeuing packets for transmission and performing the required link layer and physical layer functions.
Queues can form at the input ports, where packets wait to enter the switching fabric, and at the output ports, where packets wait to be transmitted. Where queuing occurs and how severe it becomes depend on the traffic load, the relative speed of the switching fabric, and the line rate.
As a queue grows, the router's buffer space is eventually exhausted; with no memory left to store arriving packets, packets are lost. This is what we mean by packet loss in the network: packets are dropped at the router.
Where queuing occurs
Next, we look at where queuing can occur, considering first the input port queues and then the output port queues.
The input queue
If the switching fabric cannot move packets through fast enough to keep up with arrivals, queuing occurs at the input ports: packets arriving at the fabric must wait in their input port queues until the fabric can carry them to their output ports.
To describe the input queue, we assume the following:
- The switching fabric is an interconnection network (crossbar);
- All links have the same speed;
- A packet can be moved from any input port to any given output port in the same amount of time it takes to receive a packet on an input link;
- Packets move through the fabric in FCFS order. Packets destined for different output ports can be transferred in parallel, but if the packets at the head of two different input queues are destined for the same output port, one of them is blocked and must wait in its input queue, because the fabric can deliver only one packet at a time to a given output port.
As shown in the figure below
In the figure, the packets at the head of input queues A and C are both destined for the same output port X. Suppose the switching fabric chooses to transfer the packet at the head of queue A; the packet at the head of queue C must then wait. Moreover, the packet queued behind it in C, which is destined for output port Y, must also wait, even though there is no contention for Y. This phenomenon is called head-of-line (HOL) blocking.
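To make head-of-line blocking concrete, here is a small simulation sketch of my own (the queue contents are made up): each input queue is FCFS, and in each time slot the fabric delivers at most one packet to any given output port.

```python
from collections import deque

# Input queues A, B, C; each packet is labeled with its output port.
input_queues = {
    "A": deque(["X"]),        # A's head goes to X
    "B": deque([]),           # B is idle
    "C": deque(["X", "Y"]),   # C's head also goes to X; the packet behind it goes to Y
}

time_slot = 0
while any(input_queues.values()):
    time_slot += 1
    outputs_used = set()
    for name, q in input_queues.items():   # FCFS: only the head of each queue may move
        if q and q[0] not in outputs_used:
            outputs_used.add(q.popleft())
        # If the head's output port is already taken this slot, the whole queue
        # waits -- including packets behind it bound for free ports (HOL blocking).
    print(f"slot {time_slot}: remaining", {k: list(v) for k, v in input_queues.items()})

# C's Y-bound packet is delayed behind the blocked X-bound packet,
# so draining takes 3 slots instead of the 2 a non-blocking schedule would need.
```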
Output queue
Now consider queuing at an output port. Suppose the switching fabric is much faster than the input/output line rates, and that packets arriving at N different input ports are all destined for the same output port. In the time it takes to transmit one packet on the output link, N new packets can arrive at that output port. Since the output port can transmit only one packet per unit of time, the other packets must wait; if this keeps happening, the queue at the output port grows until it is large enough to exhaust the output port's available memory.
When there is not enough memory to buffer an arriving packet, a choice must be made: either drop the arriving packet (the drop-tail policy), or remove one or more packets that are already queued to make room for the new arrival.
A router's packet-discarding policy has a large effect on TCP congestion control. In the simplest case, the router's queue serves incoming packets in FCFS order. Because the queue length is finite, once the queue is full, every subsequent arrival (the packet that would otherwise join the tail of the queue) is discarded. This is the drop-tail policy.
In many cases it is actually better to drop (or mark the header of) a packet before the buffer completely fills up, so that senders receive an early congestion signal.
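Below is a minimal sketch, written by me for illustration, of an output buffer with the drop-tail policy described above: packets are served FCFS, and once the buffer is full, new arrivals are simply discarded (an early-drop scheme, as suggested above, would change the enqueue check).

```python
from collections import deque

class DropTailQueue:
    """FCFS output buffer that discards new arrivals when full (drop tail)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            self.dropped += 1          # buffer exhausted: the packet is lost
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        """Transmit the packet at the head of the queue, if any."""
        return self.buffer.popleft() if self.buffer else None

q = DropTailQueue(capacity=3)
for i in range(5):
    q.enqueue(f"pkt{i}")
print(len(q.buffer), q.dropped)  # 3 packets buffered, 2 dropped
```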
As shown in the figure above, a packet arrives at each of the input ports A, B, and C, and all of them are destined for output port X. Only one packet can be transmitted at a time, so the others must queue at X. Then two more packets, from A and B respectively, arrive and are also sent to X, so that four packets end up waiting at output port X.
After the current packet has been transmitted, the output port chooses one of the remaining queued packets for transmission according to its packet scheduler. Packet scheduling is what we discuss next.
Packet scheduling
Let us now discuss packet scheduling: the order in which queued packets are transmitted on the output link. Queuing is everywhere in everyday life, and the most common discipline is first come, first served (FCFS), also known as first in, first out (FIFO).
First in first out
FIFO corresponds to the familiar queue data structure; here it serves as the queuing model for the link scheduling discipline.
The FIFO scheduling discipline selects packets for transmission in the same order in which they arrived at the output link queue: packets that arrive first are transmitted first. In this abstract model, if the queue is full, the packet discarded under drop tail is the one that has just arrived at the end of the queue.
Priority queuing
Priority queuing is a refinement of first-in, first-out queuing: packets arriving at the output link are classified into priority classes in the output queue, as shown in the figure below
Typically, each priority class has its own queue. When choosing a packet to transmit, the scheduler serves the highest-priority class whose queue is non-empty; within a priority class, packets are usually selected in FIFO order.
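Here is a short sketch of my own showing priority queuing as just described: one FIFO queue per priority class, with the scheduler always serving the highest-priority non-empty queue (the class numbers and packet names are hypothetical).

```python
from collections import deque

class PriorityQueuing:
    """One FIFO queue per priority class; lower class number = higher priority."""

    def __init__(self, num_classes: int):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority_class: int):
        self.queues[priority_class].append(packet)

    def dequeue(self):
        # Serve the highest-priority non-empty queue; FIFO within a class.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

pq = PriorityQueuing(num_classes=2)
pq.enqueue("low-1", 1)
pq.enqueue("high-1", 0)
pq.enqueue("low-2", 1)
print(pq.dequeue(), pq.dequeue(), pq.dequeue())  # high-1 low-1 low-2
```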
Round-robin weighted fair queuing
Under the round-robin weighted fair queuing discipline, packets are sorted into classes just as with priority queuing. However, there is no strict priority among the classes; instead, a round-robin scheduler alternates service among them, as shown in the figure below
In round-robin weighted fair queuing, a packet from class 1 is transmitted, then a packet from class 2, then a packet from class 3, and then the cycle repeats, polling 1 -> 2 -> 3 again. Each class queue is itself a first-in, first-out queue.
This is a work-conserving queuing discipline: if the scheduler finds a class's queue empty while polling, it moves straight on to the next class rather than letting the output link sit idle waiting for that class to have a packet.
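The sketch below (my own illustration) implements this work-conserving round-robin idea: the scheduler cycles over per-class FIFO queues and skips empty ones instead of idling. A weighted variant would simply let class i transmit up to w_i packets per turn.

```python
from collections import deque

class RoundRobinScheduler:
    """Work-conserving round robin over per-class FIFO queues."""

    def __init__(self, num_classes: int):
        self.queues = [deque() for _ in range(num_classes)]
        self.next_class = 0

    def enqueue(self, packet, cls: int):
        self.queues[cls].append(packet)

    def dequeue(self):
        # Try each class once, starting from where we left off; skip empty
        # queues so the output link never idles while packets are waiting.
        for i in range(len(self.queues)):
            cls = (self.next_class + i) % len(self.queues)
            if self.queues[cls]:
                self.next_class = (cls + 1) % len(self.queues)
                return self.queues[cls].popleft()
        return None

rr = RoundRobinScheduler(num_classes=3)
for pkt, cls in [("a1", 0), ("a2", 0), ("b1", 1), ("c1", 2)]:
    rr.enqueue(pkt, cls)
print([rr.dequeue() for _ in range(4)])  # ['a1', 'b1', 'c1', 'a2']
```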