Preface

To be a good Android developer, you need a complete knowledge system. Here, let's build one and grow into what we want to be.

Mind-map outline

1. Why carry out network optimization?

Waiting on the network is the biggest performance bottleneck in our apps. No matter how well you optimize drawing, memory, or jank, none of it compares to optimizing the network. The faster the network communication:

  • 1) The higher the user engagement;
  • 2) The more loyal the users;
  • 3) The higher the conversion rate.

The core methods of network optimization are to eliminate or reduce unnecessary network latency and to minimize the number of bytes transmitted.

2. Network performance evaluation

1. Wireless network communication process

```
Mobile phone => wireless network => base station => carrier core network => Internet => server
```

2. Important indicators

1) Latency

The time required for data to travel from the source to the destination. The total client-to-server latency breaks down as follows:

  • 1) Propagation delay: the time for a message to travel from sender to receiver, a function of the distance and the signal's propagation speed. Since the speed of light is the upper limit at which all energy, matter, and information can travel, it also puts an upper limit on how fast network packets can travel.
  • 2) Transmission delay: the time needed to push all of the message's bits onto the link, a function of the message length and the link's data rate.
  • 3) Processing delay: the time needed to process the packet header, check for bit errors, and determine the packet's destination.
  • 4) Queuing delay: the time an incoming packet waits in a queue before it is processed.

The more routers along the path, the more processing and transmission delay each packet accumulates. And the more congested the traffic, the more likely a packet is to sit waiting in an inbound buffer.
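
To make these components concrete, here is a back-of-the-envelope sketch in Kotlin. The distance, link rate, and message size are illustrative assumptions, not measurements:

```kotlin
// Back-of-the-envelope estimate of the two dominant delay terms.
// All numbers are illustrative assumptions, not measurements.
fun main() {
    val distanceKm = 3_000.0              // client-to-server distance
    val speedInFiberKmPerSec = 200_000.0  // light in fiber: roughly 2/3 of c
    val messageBytes = 1_000_000.0        // 1 MB message
    val linkBitsPerSec = 10_000_000.0     // 10 Mbit/s link

    val propagationMs = distanceKm / speedInFiberKmPerSec * 1_000  // ≈ 15 ms
    val transmissionMs = messageBytes * 8 / linkBitsPerSec * 1_000 // ≈ 800 ms

    println("propagation ≈ %.0f ms, transmission ≈ %.0f ms"
        .format(propagationMs, transmissionMs))
}
```

Processing and queuing delays come on top of these two terms, and they grow with every router on the path.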

Finally, a significant portion of the latency is often incurred in the last few kilometers; with the traceroute command, we can inspect the topology and per-hop speed of our Internet service provider.

2) Bandwidth

Maximum throughput of a logical or physical communication path.

Bandwidth at the core of the network

Optical fiber

It is a "light pipe", slightly thicker than a human hair, designed to carry light signals from one end to the other.

Metal wire

It is used to carry electrical signals, but it suffers higher signal loss and electromagnetic interference, and costs more to maintain.

Both media are used for data transmission, but long-distance packet transmission is generally done over optical fiber.

Through Wavelength-Division Multiplexing (WDM), an optical fiber can carry many different wavelengths (channels) at once, giving it a significant bandwidth advantage.

The total bandwidth of a fiber link equals the per-channel data rate times the number of channels available for multiplexing. Each cable contains several fibers (commonly four), which translates into a bandwidth capacity of several hundred terabits per second.

Bandwidth at the edge of the network

The bandwidth available to a user depends on the lowest-capacity link between the client and the target server; Akamai publishes average bandwidth figures for regions around the world. However, high bandwidth does not guarantee end-to-end transmission speed.

What are the factors related to latency and bandwidth?

Signal strength, distance to the base station, the network generation, congestion, and many other factors.

3. Bandwidth and latency reference values for different network generations

| Network generation | Bandwidth (downlink / uplink) | Latency |
| --- | --- | --- |
| 2.75G | 384 Kbps / 48 Kbps | 600–700 ms |
| 3G | 7 Mbps / 2 Mbps | 150–400 ms |
| 4G | 128 Mbps / 56 Mbps | 40–50 ms |
| 5G | >100 Mbps / >50 Mbps | <10 ms |

What is a weak network?

A network characterized by high packet loss and high bit-error rates.

Network optimization must be considered in light of each app's actual profile: is the app latency-bound or bandwidth-bound?

Unfortunately, we cannot escape the laws of physics: if you need to optimize for latency, you must design and optimize protocols and applications with the speed-of-light limit in mind.

3. TCP optimization

The Internet has two core protocols: IP and TCP.

  • IP (Internet Protocol): responsible for routing and addressing between networked hosts;
  • TCP (Transmission Control Protocol): responsible for providing a reliable abstraction over unreliable transmission channels.

TCP/IP is often referred to as the Internet protocol suite. In practice, almost all HTTP traffic is carried over TCP, because TCP offers many features HTTP needs.

We all know IPv4 and IPv6, but what about IPv1–IPv3 and IPv5?

The 4 in IPv4 denotes the fourth version of the TCP/IP protocol, published in September 1981. The v4 merely marks its descent from the three earlier versions of TCP; there was never a standalone IPv1, IPv2, or IPv3 protocol. v5 was assigned to another, experimental protocol, the Internet Stream Protocol (ST). ST never gained much traction, which is why we rarely hear about it. As a result, the next version of TCP/IP became IPv6.

1. The three-way handshake

  • 1) For security, both ends choose their initial sequence numbers at random.
  • 2) The client can send data immediately after its ACK packet; the server must wait until it receives the ACK before sending data.
  • 3) The latency imposed by the three-way handshake makes every new TCP connection expensive, which is why the key to improving TCP application performance is finding ways to reuse connections.

TFO (TCP Fast Open)

TFO aims to reduce the performance penalty of new TCP connections, but it only works in certain circumstances: the data payload carried in the SYN packet has a maximum size, only certain types of HTTP requests can be sent, and it applies only to repeat connections because it relies on a cryptographic cookie.

2. Congestion control and avoidance

ARPANET (Advanced Research Projects Agency Network), a precursor of the modern Internet, was the world's first operational packet-switched network. The project went live in 1969, and in 1983 TCP/IP replaced the original NCP (Network Control Program) as its main communication protocol.

1) Flow control

To achieve flow control, each side of a TCP connection advertises its own receive window (rwnd), which communicates the size of the buffer space available to hold data. This process continues throughout the life of the connection: each ACK packet carries the latest rwnd value, so both ends can dynamically adjust the data rate to match the capacity and processing ability of the sender and the receiver.

Window Scaling (RFC 1323)

The original TCP specification allotted a 16-bit field to advertise the window size; RFC 1323 added the "TCP window scaling" option, raising the maximum receive window from 65,535 bytes to 1 GB. The window scaling option is exchanged during the three-way handshake and carries a value representing the number of bits by which the 16-bit window field in future ACKs should be left-shifted.

2) Slow start

The congestion window (cwnd) limits the amount of data the sender can have outstanding before receiving an ACK from the receiver.

The maximum amount of data in flight on a new TCP connection is the minimum of rwnd and cwnd; with the original defaults, the server can send roughly four TCP segments and must then stop and wait for acknowledgements.

To shorten the time it takes the congestion window to grow, reduce the round-trip time between client and server, for example by deploying servers geographically close to clients. Alternatively, increase the initial congestion window to 10 segments, as specified in RFC 6928.

Slow start limits the throughput available early in a connection's life, which is especially detrimental to small, short-lived transfers.
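
A rough sketch of that limit: assuming cwnd doubles once per round trip (idealized slow start, no loss, no ssthresh), we can estimate how many round trips a transfer of a given size needs. The segment counts and window sizes below are illustrative:

```kotlin
import kotlin.math.ceil
import kotlin.math.log2

// Idealized slow start: cwnd doubles every round trip until the whole
// transfer fits into the window. No loss, no ssthresh; numbers illustrative.
fun slowStartRoundTrips(transferSegments: Int, initialCwnd: Int): Int =
    ceil(log2(transferSegments.toDouble() / initialCwnd)).toInt().coerceAtLeast(0)

fun main() {
    val segments = 45 // a 64 KB response over 1,460-byte segments
    println(slowStartRoundTrips(segments, 4))  // old initial cwnd of 4  -> 4 RTTs
    println(slowStartRoundTrips(segments, 10)) // RFC 6928 cwnd of 10    -> 3 RTTs
}
```

On a 100 ms RTT path, that one saved round trip is 100 ms shaved off every cold connection.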

SSR(Slow Start Restart)

SSR resets the congestion window after a connection has been idle for a certain period. The reasoning is simple: network conditions may have changed while the connection sat idle, so to avoid congestion the window is reset back to a "safe" default.

However, SSR noticeably hurts long-lived TCP connections that are intermittently idle, such as HTTP keep-alive connections. Therefore, it is recommended to disable SSR on the server.

Notice that the 5 Mbit/s of bandwidth between server and client has no effect on the start-up phase of a TCP connection; latency and congestion window size are the limiting factors.

3) Congestion avoidance

Slow start initializes the connection with a conservative window and doubles the amount of data in flight every round trip, until it exceeds the receiver's flow-control window or the system-configured slow-start threshold (ssthresh), or a packet is lost, at which point the congestion avoidance algorithm takes over.

Congestion avoidance uses packet loss as the signal of network congestion: some link or router on the path has become congested enough that it is forced to drop packets. The window size must therefore be adjusted to avoid causing further loss and to keep traffic flowing smoothly.

After the congestion window is reset, congestion avoidance grows the window again according to its own algorithm, trying to avoid further loss. At some point another packet may be lost, and the process starts all over again. If you have ever looked at the throughput trace of a TCP connection and wondered about its sawtooth shape, now you know why: it is the congestion control and avoidance algorithms adjusting the congestion window to work around packet loss in the network.

TCP PRR(Proportional Rate Reduction)

Originally, TCP used the Additive Increase/Multiplicative Decrease (AIMD) algorithm: halve the congestion window when a packet is lost, then slowly grow the window by a fixed amount per round trip.

Then came PRR (Proportional Rate Reduction), a new algorithm specified in RFC 6937 whose goal is to improve the speed of recovery after packet loss; using it reduces the average latency of connections that suffer loss by 3–10%. PRR is the default congestion avoidance algorithm in Linux kernels 3.2 and later.

3. BDP (Bandwidth-Delay Product)

The receive window (rwnd) is advertised with every ACK, while the congestion window (cwnd) is adjusted dynamically by the sender based on the congestion control and avoidance algorithms.

Whichever end's unacknowledged in-flight data reaches the limit first must stop and wait for the other side to ACK some packets before continuing.

The Bandwidth-Delay Product (BDP) is the maximum amount of unacknowledged data that can be in flight at any point in time.

```
BDP = data link capacity × end-to-end latency
```

On a high-speed connection between client and server, if the actual transfer rate is only a fraction of the available bandwidth, window size is likely the culprit: either one end is advertising a small receive window, congestion and packet loss have reset the congestion window, or the windows simply cannot grow large enough to cover the link's BDP.
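
To see how window size caps throughput, here is a minimal sketch; the 10 Mbit/s bandwidth and 100 ms RTT are assumed example values:

```kotlin
// How much data must be in flight to saturate a link, and what happens
// when the window cannot cover it. Bandwidth and RTT are assumed values.
fun main() {
    val bandwidthBitsPerSec = 10_000_000.0 // 10 Mbit/s
    val rttSec = 0.100                     // 100 ms round trip

    val bdpBytes = bandwidthBitsPerSec * rttSec / 8
    println("BDP ≈ %.0f KB".format(bdpBytes / 1024)) // ≈ 122 KB

    // A classic 16-bit receive window (64 KB, no window scaling) caps
    // throughput at window / RTT, no matter how fast the link is:
    val maxThroughputMbps = 65_535 * 8 / rttSec / 1_000_000
    println("capped at ≈ %.1f Mbit/s".format(maxThroughputMbps)) // ≈ 5.2
}
```

This is exactly why window scaling (RFC 1323) matters: without it, a 64 KB window caps even a gigabit link at a few megabits per second once the RTT reaches 100 ms.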

4. TCP Head-of-Line (HOL) blocking

TCP provides reliable network transmission over unreliable channels: basic packet error detection and correction, in-order delivery, retransmission of lost packets, plus the flow control, congestion control, and congestion avoidance mechanisms that maximize network efficiency. These features make TCP the most common transport protocol for network applications.

However, in-order, reliable delivery is not always necessary and can introduce extra delays that hurt performance. Each TCP packet carries a unique sequence number, and all packets must be delivered to the receiver in order. If one packet fails to arrive, all subsequent packets are held in the receiver's TCP buffer until the lost packet is retransmitted and arrives. All of this happens inside the TCP layer: the application knows nothing about the retransmissions or the queued packets and cannot access the data until everything before it has arrived; it only perceives a delay when it tries to read from the socket. This effect is known as TCP Head-of-Line (HOL) blocking.

Head-of-line blocking produces unpredictable variation in packet arrival times, commonly referred to as jitter.

Therefore, for applications that do not need in-order delivery or can tolerate packet loss, and for applications with strict latency or jitter requirements, a protocol such as UDP may be a better choice.

The flip side of packet loss

Packet loss is actually key to getting the best performance out of TCP: dropped packets act as a feedback mechanism that lets the receiver and sender adjust their sending rates, avoiding network congestion while keeping latency to a minimum.

For real-time audio and video applications, a lost packet need not stall anything: the audio codec can simply insert a small gap in the output and keep processing subsequent packets. As long as the gaps are small enough, users will not notice them, whereas waiting for lost packets risks noticeable pauses in the audio output, which is the worse user experience.

5. TCP optimization tips

1) Key details in TCP

  • 1) The TCP three-way handshake adds a full round trip of latency;
  • 2) TCP slow start applies to every new connection;
  • 3) TCP flow control and congestion control regulate the throughput of every connection;
  • 4) TCP throughput is governed by the current congestion window size.

In most cases, the bottleneck of TCP is latency, not bandwidth.

2) Optimize server configuration

1) Increase the initial TCP congestion window

Raising the initial congestion window lets TCP transmit more data in the first round trip, and subsequent growth compounds from there; this is an especially important optimization for bursty, short-lived connections.

2) Slow-start restart (SSR)

Disabling slow-start restart after idle improves the performance of long-lived TCP connections that send data in bursts.

3) Window scaling (RFC 1323)

Enabling window scaling can increase the maximum receive window size, which can enable high latency connections to achieve better throughput.

4) TCP Fast Open

Under the right conditions, TFO (TCP Fast Open) allows application data to be sent in the first SYN packet. TFO is a newer optimization option; note that it requires support from both client and server.

3) Client optimization

  • 1) Send less, or send nothing (merge requests): eliminating unnecessary transfers is itself the greatest optimization. For example, cut unneeded download resources, or use compression to minimize the number of bytes sent.
  • 2) Shorten the communication distance with a CDN: deploying servers in different regions and placing data close to clients reduces round-trip latency and significantly improves TCP performance.
  • 3) Reuse TCP connections: minimize the impact of slow start and other congestion-control mechanisms, as shown in the sketch below.
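
As an example of the third point, here is a minimal sketch using OkHttp, a common HTTP client on Android (the pool sizes are illustrative): sharing one client instance means requests to the same host reuse pooled connections instead of paying for a new handshake and slow start each time.

```kotlin
import java.util.concurrent.TimeUnit
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient
import okhttp3.Request

// One shared client = one shared connection pool: requests to the same
// host reuse warm TCP (and TLS) connections instead of paying for a new
// handshake and slow start every time. Pool sizes are illustrative.
val sharedClient = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(5, 5, TimeUnit.MINUTES))
    .build()

fun fetch(url: String): String? =
    sharedClient.newCall(Request.Builder().url(url).build())
        .execute()
        .use { response -> response.body?.string() }
```

Creating a new OkHttpClient per request would defeat the pool and reintroduce the handshake and slow-start costs on every call.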

4. UDP optimization

A datagram is a complete, self-contained data entity that carries enough information to be routed from the source node to the destination node, without relying on any prior exchanges between the nodes or on the transmission network.

1. Datagrams vs. packets

Datagram and packet are often used interchangeably, but there is a nuance: packet can refer to any formatted block of data, whereas datagram usually describes packets delivered over an unreliable service, with neither guaranteed delivery nor failure notification.

The IETF and the W3C Working Group have jointly developed a new set of APIs: WebRTC (Web Real-Time Communication). WebRTC brings native voice and video real-time communication over UDP to the browser, along with other forms of peer-to-peer (P2P) communication.

2. Null protocol services

As we all know, the main task of the IP layer is to deliver datagrams from the source host to the destination host by address. The word datagram implies that the IP layer does not guarantee delivery and sends no failure notifications; it effectively exposes the unreliability of the underlying network to the layers above. If a routing node drops an IP packet because of network congestion, high load, or any other reason, it is up to a protocol above IP to detect the loss, recover, and retransmit the data if necessary.

The source port and checksum fields of a UDP datagram are optional; the IP packet header carries its own checksum, and applications may ignore the UDP checksum. UDP therefore provides little more than an "application multiplexing" mechanism on top of IP, by tagging datagrams with the application's source and destination ports.

What UDP does not provide

  • 1) No guarantee of message delivery: no acknowledgements, no retransmissions, no timeouts.
  • 2) No guarantee of delivery order: no sequence numbers, no reordering, and no head-of-line blocking.
  • 3) No connection-state tracking: no connection establishment or teardown state machines.
  • 4) No congestion control: no built-in client or network feedback mechanisms.

3. UDP and Network Address Translators (NAT)

1) Connection-state timeouts

For long-running UDP traffic, the de facto best practice is to introduce bidirectional keep-alive packets that periodically reset the timers on the translation records in all the NAT devices along the path.
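
A minimal Kotlin sketch of such a keep-alive (the peer address and the 15-second interval are illustrative; a real application would also expect replies from the peer):

```kotlin
import java.net.DatagramPacket
import java.net.DatagramSocket
import java.net.InetSocketAddress
import kotlin.concurrent.fixedRateTimer

// Periodically send a tiny datagram so the NAT devices on the path keep
// refreshing the translation entry for this flow. A real application
// would expect the peer to answer, refreshing the binding both ways.
fun startKeepAlive(socket: DatagramSocket, peer: InetSocketAddress) {
    val payload = ByteArray(1) // content is irrelevant; it only resets timers
    fixedRateTimer(name = "udp-keepalive", daemon = true, period = 15_000L) {
        runCatching { socket.send(DatagramPacket(payload, payload.size, peer)) }
    }
}
```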

2) NAT traversal

NAT causes several problems, as shown below:

  • 1) An internal client does not know its public IP address; it knows only its internal address.
  • 2) Any packet arriving at the NAT device's public IP must also carry a destination port that matches an entry in the NAT table, so it can be translated to the internal host's IP address and port. If no entry exists (typically for traffic initiated from the Internet), the incoming packet is dropped.

To work around this mismatch between UDP and NAT, a number of traversal techniques (STUN, TURN, ICE) have been invented to establish end-to-end connectivity between UDP hosts.

3) STUN: Session Traversal Utilities for NAT (RFC 5389)

Advantages

  • 1) The application can learn its public IP address and port, and use that information when communicating with its peer;
  • 2) The outbound binding request sent to the STUN server establishes routing entries in the NAT devices along the path, so that inbound packets arriving at that public IP and port can reach the application on the internal network;
  • 3) The STUN protocol defines a simple keep-alive mechanism that keeps the NAT routing entries from timing out.

Disadvantages

STUN cannot cope with every type of NAT and network configuration. Worse, in some cases UDP is blocked outright by firewalls or other network devices.

To handle such cases, the TURN (Traversal Using Relays around NAT) protocol (RFC 5766) can be used as a fallback when STUN fails; in the worst case, TURN abandons UDP entirely and falls back to TCP.

4) TURN(Traversal Using Relays around NAT) protocol (RFC 5766)

How it works

  • 1) Both ends send an allocation request to the same TURN server to establish a connection, then negotiate permissions.
  • 2) Once negotiation completes, both ends send their data to the TURN server, which relays it to the other side.

Disadvantages

The relay must be provisioned with enough capacity to carry all of the relayed data.

Google's libjingle is an open-source C++ library for building peer-to-peer applications. Its documentation offers a valuable data point on real-world STUN and TURN performance:

  • Direct connections (STUN) succeed about 92% of the time;
  • Relays (TURN) are needed about 8% of the time.

5) ICE: Interactive Connectivity Establishment (RFC 5245)

ICE defines a methodology for establishing the most efficient channel between communicating peers: a direct connection when possible, STUN negotiation when necessary, and TURN as a last resort. As shown in the figure below:

If we want to build UDP-based P2P applications, we should use existing platform APIs or third-party libraries that already implement ICE, STUN, and TURN.

4. Tips on UDP optimization

UDP's defining feature is everything it omits: connection state, handshakes, retransmission, reassembly, reordering, congestion control, congestion avoidance, flow control, and even optional error checking.

RFC 5405 offers many design recommendations for unicast UDP applications:

  • 1) Must tolerate a wide range of Internet path conditions;
  • 2) Should control the rate of transmission;
  • 3) Should perform congestion control over all traffic;
  • 4) Should use bandwidth similar to TCP's;
  • 5) Should back off retransmission counters following loss;
  • 6) Should not send datagrams larger than the path MTU;
  • 7) Should handle datagram loss, duplication, and reordering;
  • 8) Should be robust to delivery delays of up to 2 minutes;
  • 9) Should enable the IPv4 UDP checksum; must enable the IPv6 checksum;
  • 10) May use keep-alives when needed (minimum interval of 15 seconds).

The WebRTC protocol is a good example of this.

5. Transport Layer Security (TLS)

The SSL protocol is implemented at the application layer, directly on top of TCP.

When the IETF standardized the SSL protocol, it was renamed Transport Layer Security (TLS). Many people use TLS and SSL interchangeably, but strictly speaking they are not the same, because they refer to different versions of the protocol.

Since the SSL protocol was proprietary to Netscape, the IETF set up a team to standardize the protocol, which led to RFC 2246, TLS 1.0, an upgrade of SSL 3.0.

  • TLS 1.0 was released in January 1999.
  • TLS 1.1 was released in April 2006.
  • TLS 1.2 was released in August 2008.

TLS can also be implemented over UDP: DTLS (Datagram Transport Layer Security, RFC 6347) is designed to stay as close to TLS as possible while accounting for the datagram delivery model, and to provide equivalent security.

1. Encryption, authentication and integrity

The TLS protocol aims to provide three basic services for applications that run on top of it:

  • 1) Encryption: the mechanism for obfuscating data.
  • 2) Authentication: the mechanism for verifying the validity of an identity.
  • 3) Data integrity: a mechanism to detect whether messages are tampered or forged.

1) Encryption

The most ingenious part of the handshake is its use of public-key cryptography (also known as "asymmetric encryption"), which lets the two parties negotiate a shared secret key without any prior knowledge of each other, with the negotiation carried out over an unencrypted channel.

2) Authentication

The TLS protocol also allows both ends of the connection to verify each other's identity during the handshake. This verification relies on an established chain of trust anchored in certificate authorities.

3) Data integrity

The message framing mechanism signs each message with a Message Authentication Code (MAC). The MAC algorithm is a one-way cryptographic hash function (essentially a checksum) whose key is negotiated by the two ends of the connection. Whenever a TLS record is sent, a MAC value is generated and appended to the message; the receiver recomputes and verifies the MAC value to confirm the message's integrity and authenticity.
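
TLS derives and applies its MAC keys internally, but the primitive is easy to demonstrate in isolation. A standalone Kotlin sketch of a keyed one-way hash using the JDK's javax.crypto (the key and message are placeholders):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// The record-integrity idea in miniature: a keyed one-way hash (HMAC)
// over a message. TLS derives and applies its MAC keys internally; this
// standalone sketch only demonstrates the primitive itself.
fun hmacSha256(key: ByteArray, message: ByteArray): ByteArray =
    Mac.getInstance("HmacSHA256").run {
        init(SecretKeySpec(key, "HmacSHA256"))
        doFinal(message)
    }

fun main() {
    val tag = hmacSha256("negotiated-key".toByteArray(), "hello record".toByteArray())
    println(tag.joinToString("") { "%02x".format(it) })
    // The receiver recomputes the tag with the same key; any mismatch
    // means the record was tampered with or corrupted in transit.
}
```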

2. TLS handshake

  • 0 ms: TLS runs over a reliable transport (TCP), which means the TCP "three-way handshake", one full round trip, must complete first.
  • 56 ms: With the TCP connection established, the client sends a number of specifications in plain text: the TLS protocol version it runs, the list of cipher suites it supports, and other TLS options it supports or wants to use.
  • 84 ms: The server picks the TLS protocol version for the rest of the exchange, selects a cipher suite from the client's list, attaches its certificate, and sends the response back to the client. Optionally, the server can also request the client's certificate and send other TLS extension parameters.
  • 112 ms: Assuming both sides agree on a common version and cipher suite, and the client is satisfied with the certificate the server provided, the client generates a new symmetric key, encrypts it with the server's public key, and sends it to the server, telling it to switch to encrypted communication. Up to this point, everything has been sent in plain text except the new symmetric key, which is protected by the server's public key.
  • 140 ms: The server decrypts the symmetric key sent by the client, checks message integrity by verifying the MAC, and returns an encrypted Finished message to the client.
  • 168 ms: The client decrypts that message with the symmetric key it generated earlier, verifies the MAC, and, if all is well, the channel is established and application data can flow.
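
On the JVM and Android, this entire sequence is driven by the platform's TLS implementation. A minimal sketch that triggers the handshake explicitly and prints what was negotiated (host and port are illustrative):

```kotlin
import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory

// Open a TLS connection, force the handshake, and inspect the outcome.
// Host and port are illustrative.
fun main() {
    val socket = SSLSocketFactory.getDefault()
        .createSocket("example.com", 443) as SSLSocket
    socket.use {
        it.startHandshake() // blocks until the full handshake completes
        println("protocol:     ${it.session.protocol}")
        println("cipher suite: ${it.session.cipherSuite}")
        println("peer:         ${it.session.peerPrincipal}")
    }
}
```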

1) ALPN, Application Layer Protocol Negotiation

NPN (Next Protocol Negotiation) is a TLS extension Google developed as part of SPDY, to negotiate the application protocol during the TLS handshake and thereby improve efficiency.

ALPN is the IETF's revised, ratified version of NPN. In NPN, the server advertised the protocols it supported and the client chose and confirmed one; in ALPN, the exchange is reversed: the client first declares the protocols it supports, and the server selects and confirms one. The change was made to bring ALPN in line with other protocol-negotiation standards.

As a TLS extension, ALPN allows us to negotiate the application protocol at the same time as the TLS handshake, thus eliminating the additional round-trip time required by the Upgrade mechanism of HTTP.

Once the TLS handshake is complete, the encrypted channel is established, and the application protocol is agreed upon, the client and server can communicate instantly.
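
With OkHttp, for example, ALPN is used under the hood: the client advertises its candidate protocols during the TLS handshake, and HTTP/2 is selected without an Upgrade round trip. A small sketch making the candidate list explicit (URL illustrative):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Protocol
import okhttp3.Request

// OkHttp advertises its candidate protocols via ALPN during the TLS
// handshake, so HTTP/2 can be selected without an Upgrade round trip.
// Here the candidate list is made explicit and the result inspected.
val client = OkHttpClient.Builder()
    .protocols(listOf(Protocol.HTTP_2, Protocol.HTTP_1_1))
    .build()

fun main() {
    client.newCall(Request.Builder().url("https://example.com/").build())
        .execute()
        .use { response -> println("negotiated: ${response.protocol}") } // e.g. h2
}
```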

2) SNI (Server Name Indication)

What if the server wants to serve multiple sites at a single IP address, and each site has its own TLS certificate?

To solve this problem, the SNI extension was added to the TLS protocol; it allows the client to specify, at the start of the handshake, the hostname it is trying to reach.

3. TLS session resumption

That is, sharing the negotiated secret key across multiple connections.

1) Session Identifier (RFC 5246)

The earliest "session identifier" mechanism appeared in SSL 2.0: the server can create a 32-byte session identifier and send it as part of its ServerHello message during a full TLS negotiation.

Internally, the server stores the session ID and the negotiated session parameters for each client. The client can store the session ID too and include it in the ClientHello of a subsequent session, telling the server that it still remembers the cipher suite and keys from the previous handshake and can reuse them.

Provided both client and server can find the shared session ID in their caches, an abbreviated handshake takes place. Otherwise, a full new session negotiation starts, producing a fresh session ID. An abbreviated TLS handshake is shown below:

Advantages

  • 1) One round trip is saved.
  • 2) The public-key cryptography used to negotiate the shared secret key is skipped.

Disadvantages

On servers handling tens of thousands or even millions of individual connections a day, every open TLS connection's session state consumes memory, so a session ID caching and eviction policy is required.

To remove this burden of deploying a TLS session cache on the server side, the "session ticket" mechanism was introduced.

2) Session Ticket (RFC 5077)

This mechanism does not require the server to hold session state for each client. As long as the client signals that it supports session tickets, the server can include a New Session Ticket record in the final exchange of a full TLS handshake, containing all of the session data encrypted with a secret key known only to the server.

The client then stores the session ticket and includes it in the SessionTicket extension of the ClientHello on subsequent sessions. Thus all session data lives only on the client, yet remains safe because it is encrypted with a key only the server knows.

The session identifier and session ticket mechanisms are commonly referred to as session caching and stateless resumption, respectively. The main benefit of stateless resumption is eliminating the server-side cache, which simplifies deployment: the client must simply supply the session ticket whenever it establishes a new connection with the server (unless the ticket has expired).
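
On the JVM, the client-side cache that makes abbreviated handshakes possible is exposed through SSLSessionContext. A sketch of tuning it; the cache size and timeout values are illustrative, and the defaults are usually fine:

```kotlin
import javax.net.ssl.SSLContext

// The client-side session cache behind abbreviated handshakes, exposed
// on the JVM via SSLSessionContext. Cache size and timeout illustrative.
fun tunedSslContext(): SSLContext =
    SSLContext.getInstance("TLS").apply {
        init(null, null, null) // default key managers, trust managers, RNG
        clientSessionContext.sessionCacheSize = 256    // up to 256 cached sessions
        clientSessionContext.sessionTimeout = 60 * 60  // resumable for one hour
    }
```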

4. Trust chains and Certificate Authorities

In authentication, each side signs with its own private key, and the other side verifies the signature on received messages with the corresponding public key. Trust, however, is the crux of communication.

For the browser, who does it trust?

There are at least three answers:

  • 1) Manually specified certificates: all browsers and operating systems provide a mechanism for manually importing trusted certificates.
  • 2) Certificate Authorities (CA): a third party trusted by both the certificate's owner and the party relying on the certificate.
  • 3) Browsers and operating systems: each ships a built-in list of well-known certificate authorities, so you also trust whatever list your operating system and browser vendors provide and maintain.

The most common scenario is for the browser to rely on a trusted certificate authority (the root CA). The following figure shows the process by which a certificate authority signs a digital certificate:

All browsers allow users to view their trusted chains of secure connections, and a common way to access them is by clicking the lock icon at the top of the address bar. As shown in the figure below:

The "anchor of trust" for the entire chain is the root certificate authority, and every browser ships a built-in list of trusted root authorities.

5. Certificate revocation

Either end of the connection can check the revocation status of each certificate in the chain, following the instructions and signatures embedded in the certificates.

1) Certificate Revocation List (CRL, RFC 5280)

  • Each CA maintains and periodically publishes a list of the serial numbers of revoked certificates.
  • Anyone verifying a certificate can download the revocation list and check whether the certificate appears on it; if it does, the certificate has been revoked.

Disadvantages

  • 1) The CRL grows as more certificates are revoked, and every client must fetch the complete list of serial numbers.
  • 2) There is no immediate notification when a certificate is revoked: if a client cached the CRL before the revocation, it will treat the certificate as valid until the cache expires.

2) OCSP (Online Certificate Status Protocol, RFC 2560)

  • A mechanism for checking certificate status in real time.
  • While validating the chain, a verifier can query the certificate authority directly for the status of a single certificate's serial number.

Disadvantages

  • 1) The certificate authority must handle queries in real time.
  • 2) The certificate authority must be reachable at all times, from anywhere.
  • 3) The client blocks on the OCSP request before the negotiation can proceed.
  • 4) Real-time OCSP requests may compromise client privacy, because the certificate authority learns which sites the client visits.

In practice, the CRL and OCSP mechanisms are complementary, and most certificates carry instructions and query endpoints for both.

6. The TLS record protocol

The TLS record protocol is responsible for identifying different message types (handshake, alert, or data, via the Content Type field), as well as for the security and integrity verification of each message. The structure of a TLS record is shown in the figure below:

The typical process for delivering application data is as follows:

  • 1) The record protocol receives application data.
  • 2) The data is split into blocks of at most 2^14 bytes (16 KB) per record.
  • 3) (Optional) The application data is compressed.
  • 4) A Message Authentication Code (MAC or HMAC) is added.
  • 5) The data is encrypted with the negotiated cipher suite.

The encrypted data is then handed over to the TCP layer for transmission. The process on the receiving end is the same, in reverse order: decrypt the data using the agreed encryption suite, validate the MAC, extract and forward the data to the upper-layer application.

Disadvantages

  • 1) A TLS record holds at most 16 KB;
  • 2) Each record adds a 5-byte header, a MAC (up to 20 bytes for SSL 3.0, TLS 1.0, and TLS 1.1; up to 32 bytes for TLS 1.2), and padding if a block cipher is used;
  • 3) Decryption and verification cannot begin until the entire record has arrived.

7. TLS optimization tips

1) Terminate the handshake early

Use a CDN: cache or replicate data and services on servers around the world, so that users do not all have to reach a central origin server over transoceanic or transcontinental links.

Advantages

  • 1) Local proxy servers distribute the load and reduce latency.
  • 2) The local proxy can also maintain long-lived secure connections to the origin server and proxy requests and responses over them.
  • 3) In a CDN, the client's connection terminates at a nearby CDN node, which forwards the request to the CDN node nearest the origin server, from where it is routed to the origin. Data thus travels over the CDN's optimized backbone, further reducing latency between client and server.

2) Use session caching and stateless resumption

  • Session caching is disabled by default on most servers and must be enabled manually.
  • Use session tickets with clients that support them, and session identifiers with clients that do not.

3) TLS record size

Small records waste bytes; large records add latency. Reference values for sizing TLS records:

  • The IPv4 header requires 20 bytes; the IPv6 header requires 40 bytes;
  • The TCP header requires 20 bytes;
  • TCP options may require up to 40 bytes (timestamps, SACK, and so on).

Common libraries such as OpenSSL allocate around 50 KB per connection by default, while Google's servers trim the OpenSSL buffers to roughly 5 KB. Use as little memory as possible without compromising functionality.

4) The length of the certificate chain

How does the browser know where to find a certificate?

Because a child certificate usually embeds the URL of its parent certificate.

We should keep the certificate chain as short as possible. If the chain is larger than TCP's initial congestion window, we inadvertently add another round trip to the handshake: the server sends the chain, fills the congestion window, and must stop and wait for an ACK from the client before continuing.

There are two solutions to this:

  • 1) Increase the initial congestion window.
  • 2) Reduce the size of the certificate chain:
    • Minimize the number of intermediate certificate authorities: ideally, the chain sent should contain only two certificates, the site certificate and the intermediate CA's certificate; the third, the root CA certificate, is already in the browser's built-in trust list and need not be sent.
    • An ideal certificate chain is around 2–3 KB.

5) OCSP stapling

The server can include ("staple") the certificate authority's OCSP response in the certificate chain, allowing the browser to skip the online check. Moving the OCSP query to the server also lets the server cache the signed OCSP response, saving many clients the request.

6) HTTP Strict Transport Security (HSTS)

A security-policy mechanism that lets the server declare access rules to compliant browsers via a simple HTTP header (for example, Strict-Transport-Security: max-age=31536000).

max-age specifies the lifetime of the HSTS rule set in seconds (for example, max-age=31536000 caches it for 365 days).

Advantages

HSTS eliminates the cost of HTTP-to-HTTPS redirects by shifting the responsibility to the client, which automatically rewrites all links to HTTPS.

Finally, we should be familiar with the openssl command-line tool, which lets us inspect the entire handshake against the local server configuration. Its use looks like this:

```
quchao@quchaodeMacBook-Pro paxgo % openssl s_client -state -CAfile startssl.ca.crt -connect igvita.com:443

4482293356:error:02FFF002:system library:func(4095):No such file or directory:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-47.11.1/libressl-2.8/crypto/bio/bss_file.c:122:fopen('startssl.ca.crt','r')
4482293356:error:20FFF080:BIO routines:CRYPTO_internal:no such file:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-47.11.1/libressl-2.8/crypto/bio/bss_file.c:125:
4482293356:error:0BFFF002:x509 certificate routines:CRYPTO_internal:system lib:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-47.11.1/libressl-2.8/crypto/x509/by_file.c:248:
CONNECTED(00000005)
SSL_connect:before/connect initialization
SSL_connect:SSLv3 write client hello A
SSL_connect:SSLv3 read server hello A
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = igvita.com
verify return:1
SSL_connect:SSLv3 read server certificate A
SSL_connect:SSLv3 read server key exchange A
SSL_connect:SSLv3 read server done A
SSL_connect:SSLv3 write client key exchange A
SSL_connect:SSLv3 write change cipher spec A
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
SSL_connect:SSLv3 read finished A
---
Certificate chain
 0 s:/CN=igvita.com
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFXTCCBEWgAwIBAgISBJN+3MX9OKjS5cX4b6ww/vtAMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0yMDA0MjAxMzI1NDNaFw0y
MDA3MTkxMzI1NDNaMBUxEzARBgNVBAMTCmlndml0YS5jb20wggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQCx5ZoBTHLEUmRbkMVyBESzjCR1Oz9aop5aQRAp
bviLSasQbKaXp1DkzaB10am9Nr3ROKtP6tQgB8suaYC94I4SatnJsB3EBGew5GUr
MKybvoQYp4HzJvC49uUZDWFOlWdw6P5ldVXjsX22ATobK5XY0Tr1Ci5j7goanXRF
49sZ6yT5xVsKjprdg8/aoqtIDYXvJsZfJiDyGVung3Qb8RbmjlPvvGS7AXESSA8b
3g7lMdRBhsRPL7BXuVVnoU5CsPcTc7GPuJ5z0Qbfa34NILq4zPqvgH1pWRNJX7Fn
S7Hf5RVhlsuiCEr7BheVGWOjujuxFPOnPkoQ4EcfP6iGBITRAgMBAAGjggJwMIIC
bDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
MAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFJbxqiGGEZ5EEEWj1p1RWhYRU/ESMB8G
A1UdIwQYMBaAFKhKamMEfd265tE5t6ZFZe/zqOyhMG8GCCsGAQUFBwEBBGMwYTAu
BggrBgEFBQcwAYYiaHR0cDovL29jc3AuaW50LXgzLmxldHNlbmNyeXB0Lm9yZzAv
BggrBgEFBQcwAoYjaHR0cDovL2NlcnQuaW50LXgzLmxldHNlbmNyeXB0Lm9yZy8w
JQYDVR0RBB4wHIIKaWd2aXRhLmNvbYIOd3d3Lmlndml0YS5jb20wTAYDVR0gBEUw
QzAIBgZngQwBAgEwNwYLKwYBBAGC3xMBAQEwKDAmBggrBgEFBQcCARYaaHR0cDov
L2Nwcy5sZXRzZW5jcnlwdC5vcmcwggEFBgorBgEEAdZ5AgQCBIH2BIHzAPEAdwBv
U3asMfAxGdiZAKRRFf93FRwR2QLBACkGjbIImjfZEwAAAXGX+wdyAAAEAwBIMEYC
IQC55PavTz4OWvcbMpDNQIcR/SYEDvdSkqrYjxDRGx4vawIhAOCcGF3LKximqSmf
ch6R1EuZo/WTDzPioxM7X3w3kvFAAHYAB7dcG+V9aP/xsMYdIxXHuuZXfFeUt2ru
vGE6GmnTohwAAAFxl/sHcgAABAMARzBFAiBUlTes9VFQ56gbUgRq/7fFUVi6r4Eo
sWHADNNsQ7BSIgIhAPyfR9jDpnHQi3cqjRV2lBp0rrLAcEKf+b4cpDUvw41NMA0G
CSqGSIb3DQEBCwUAA4IBAQBGvck8LK6h8zMxA6bNpxW5Md6K/cUA/HlS0GUiOlnh
9IWZfg3t96Co9d8i90tqjm2gGRVDk7ywiGUClFky6EPICTka0VQRwgLI6aIvh9OF
8syf0QijfXUIkFRZNxGRkAsFqPsbAbDc6+hUMOWQY/uw2yITLB0eS+HyRAZWszoJ
IS4b/Y/gHvnkF/d+y792Y61pf9qtuuTgV/Wdb/KtxJtHKOPVn2eMF7omwyQfqF5o
CijVj/znJBaq9f/8BerL76qRTgeJeM8z0H18ZRpplMyS0T/k1QRTIq6c8lpOt887
PP2IVI8v3WlgNtlZ8XypmZdBjQtncaB1S2MmKgqas5Dx
-----END CERTIFICATE-----
subject=/CN=igvita.com
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---
No client certificate CA names sent
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3093 bytes and written 354 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: CF508DEBB4768BBB308095B730EB0FBC7F21C53095AE8DF2E0905D085F98F158
    Session-ID-ctx:
    Master-Key: BEF07A818F91C840EF60A4DB5AEE89A1107EB594BC4718D7B4E2FC6904289AE7E7DB2CF6497812A82CCFD23F33B915B6
    Start Time: 1590415033
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---
SSL3 alert read:warning:close notify
closed
SSL3 alert write:warning:close notify
```

There are four key pieces of information here:

  • SSL_connect:SSLv3 read server done A: the client finished verifying the received certificate chain.
  • Certificate chain: the received certificate chain (two certificates).
  • SSL handshake has read 3093 bytes and written 354 bytes: the size of the received certificate chain.
  • Session-ID: the session identifier, which can be used for stateful TLS session resumption.

6. Wireless network performance

1. Types of wireless networks

  • Personal Area Network (PAN)
  • Local Area Network (LAN)
  • Metropolitan Area Network (MAN)
  • Wide Area Network (WAN)

2. Performance fundamentals of wireless networks

1) Channel capacity (maximum information rate)

```
C = BW × log₂(1 + S/N)
```
  • C is the channel capacity, in bit/s;
  • BW is the available bandwidth, in Hz;
  • S is the signal power and N the noise power, in watts (W).

It covers all the basic factors that affect the performance of most wireless networks.
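
Plugging illustrative numbers into the formula shows how strongly bandwidth and SNR drive capacity (a sketch; real radios fall short of the Shannon limit):

```kotlin
import kotlin.math.log2

// Shannon capacity: C = BW * log2(1 + S/N), with S/N as a linear ratio.
// The 20 MHz channel and the two SNR values are illustrative.
fun capacityMbps(bandwidthHz: Double, snrLinear: Double): Double =
    bandwidthHz * log2(1 + snrLinear) / 1_000_000

fun main() {
    println("clean link: %.0f Mbit/s".format(capacityMbps(20_000_000.0, 100.0))) // ≈ 133
    println("noisy link: %.0f Mbit/s".format(capacityMbps(20_000_000.0, 3.0)))   // ≈ 40
}
```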

2) Bandwidth

To communicate, sender and receiver must agree in advance on the frequency range to be used; within it, they can exchange information smoothly.

The most important performance factor is the size of that frequency range (the bandwidth): by the channel-capacity formula, the channel's total bit rate is directly proportional to the allocated bandwidth.

3) Signal strength

The Signal-to-Noise Ratio (SNR) measures the desired signal's strength relative to background noise and interference. The louder the background noise, the stronger the information-carrying signal has to be.

To reach a target data rate in the presence of interference, you must either increase transmit power, which means increasing signal strength, or shorten the distance between transmitter and receiver, or both.

Path loss (path attenuation)

Signal strength decreases with distance.

Near-far effect

The receiver locks onto the stronger signal and cannot detect the weaker one; the strong signal effectively "crowds out" the weak one. For example, one or more loud talkers right next to you drown out quieter speakers farther away, producing the near-far effect.

Cell-breathing effect

A cell's coverage area, i.e., how far its signal reaches, expands and contracts with the level of noise and interference. For example, the more people talking around you, the greater the interference and the smaller the range in which you can pick out a useful signal. This is known as the breathing effect.

4) Modulation

Digital signals (ones and zeros) must be converted into analog signals (radio waves). Modulation refers to this digital-to-analog conversion, and different modulation schemes differ in conversion efficiency.

However, the cost of higher-order modulation is reduced robustness to noise and interference, so there is a direct trade-off between modulation efficiency and reliability.

3. Factors affecting wireless network performance

  • Distance between transmitter and receiver;
  • Background noise level at the current location;
  • Interference from other users in the same network (cell);
  • Interference from users in adjacent networks (cells);
  • Transmit power at both ends;
  • Processing power and the modulation scheme.

7. Wi-Fi

Wi-Fi loosely refers to any product based on the IEEE 802.11 standards. It operates in the license-free 2.4 GHz ISM band.

1. From Ethernet to WLAN

In 1971, the University of Hawaii launched ALOHAnet, the first wireless network protocol.

The Ethernet protocol borrowed heavily from ALOHAnet. Ethernet is commonly called the Local Area Network (LAN) standard, and the 802.11 wireless standard was designed primarily as an extension of the existing Ethernet (802.3) standard; accordingly, it is also called the Wireless Local Area Network (WLAN) standard.

1) Ethernet: collision detection (CSMA/CD)

If a collision is detected, both parties immediately stop transmitting and back off for a random interval (which grows exponentially on repeated collisions), so that the colliding senders do not stay synchronized and restart transmitting at the same moment.

2) Wi-Fi: collision avoidance (CSMA/CA)

Because radio hardware cannot detect collisions while it is transmitting, each sender instead tries to avoid collisions up front, transmitting only when it believes the channel is idle.

2. Wi-Fi optimization tips

1) Leverage unmetered bandwidth

2) Adapt to variable bandwidth

For example, use adaptive bitrate streaming to actively adapt to bandwidth changes. Adaptive bitrate is not suitable for every resource, but it is ideal for long-running streams such as video and audio.

While the client downloads a video stream, either the client or the server can monitor the download rate of each video chunk and, when necessary, adjust the bitrate of the next chunk to match changes in the available bandwidth. In practice, video services typically begin with a low-bitrate chunk so playback starts sooner, then adjust the bitrate of subsequent chunks to the dynamically available bandwidth.
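
A toy Kotlin sketch of that selection logic; the bitrate ladder, headroom factor, and chunk numbers are illustrative, and real players (such as ExoPlayer on Android) add smoothing, buffer awareness, and hysteresis:

```kotlin
// Measure the last chunk's throughput and pick the highest rung on the
// bitrate ladder that still leaves headroom. Ladder, headroom factor,
// and chunk numbers are illustrative.
val bitrateLadderKbps = listOf(400, 800, 1_600, 3_200, 6_400)

fun nextBitrateKbps(lastChunkBytes: Long, lastChunkMillis: Long): Int {
    val measuredKbps = lastChunkBytes * 8 / lastChunkMillis // bytes/ms == kbit/s
    val usableKbps = measuredKbps * 8 / 10                  // keep ~20% headroom
    return bitrateLadderKbps.lastOrNull { it <= usableKbps }
        ?: bitrateLadderKbps.first()                        // start low, ramp up later
}

fun main() {
    // 2 MB chunk in 4 s -> 4,000 kbit/s measured, 3,200 usable -> picks 3200
    println(nextBitrateKbps(lastChunkBytes = 2_000_000, lastChunkMillis = 4_000))
}
```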

Reference links:


  • 1. Authoritative Guide to Web Performance (High Performance Browser Networking), Chapters 1–13 (the core of this article)
  • 2. Geek Time, "Android Development Master Course": network optimization fundamentals
  • 3. Geek Time, "Perspective on the HTTP Protocol": optimization fundamentals
  • 4. An introduction to HTTP/2 header compression
  • 5. The HTTP/2 header table
  • 6. From IPv4 to IPv6, what pits did Alibaba step into?
  • 7. How did Tencent Cloud evolve quickly from IPv4 to IPv6?
  • 8. Gigabit LTE and 5G
  • 9. Probably the most down-to-earth article about 5G
  • 10. IEEE 802.11ac
  • 11. MIMO
  • 12. Link Turbo: the network accelerator on the eve of 5G
  • 13. MultiPath TCP – Linux Kernel implementation
  • 14. Multipath TCP: an overview
  • 15. I/O models for Linux
  • 16. The Unix network I/O models and the Linux I/O multiplexing model
  • 17. Android epoll.c

Thank you for reading this article. I hope you will share it with your friends or tech groups; it means a lot to me.

I hope we can become friends on GitHub and Juejin, and keep sharing knowledge with each other.