I. What is a weak network?
1.1 The weak network concept
A "weak network" literally means a weak network connection, what users commonly describe as poor signal and slow network speed. With the rapid development of the mobile Internet in recent years, large numbers of users open mobile apps in special situations such as subways, tunnels, elevators, and underground garages. In these scenarios, network latency, interruptions, jitter, and timeouts are all likely to occur.
1.2 Network forms
Network forms include wired connections as well as 2G/3G/4G/5G/EDGE/Wi-Fi and other wireless connections; from a testing perspective, disconnection and network faults also count. There is no single, precise data definition of a weak network, and different applications draw the line differently. Generally speaking, anything slower than 2G is a weak network, and 3G can also be classified as one. In addition, very low bandwidth (below about 50 kbps) and Wi-Fi with a weak signal are also weak networks.
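To make this testing-oriented definition concrete, the minimal Python sketch below classifies a sampled connection against thresholds. The NetworkSample fields and the RTT and loss thresholds are assumptions for illustration; only the 50 kbps bandwidth figure comes from the definition above.

```python
from dataclasses import dataclass

@dataclass
class NetworkSample:
    bandwidth_kbps: float   # measured downlink bandwidth
    rtt_ms: float           # round-trip time
    loss_rate: float        # packet loss ratio, 0.0 - 1.0

def is_weak_network(s: NetworkSample,
                    min_bandwidth_kbps: float = 50.0,   # from the text
                    max_rtt_ms: float = 500.0,          # assumed threshold
                    max_loss_rate: float = 0.1) -> bool:  # assumed threshold
    """Treat the link as weak if any metric crosses its threshold."""
    return (s.bandwidth_kbps < min_bandwidth_kbps
            or s.rtt_ms > max_rtt_ms
            or s.loss_rate > max_loss_rate)

# Example: an elevator-like connection with high latency and loss.
print(is_weak_network(NetworkSample(bandwidth_kbps=40, rtt_ms=800, loss_rate=0.2)))  # True
```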
1.3 Research Background
There are also special scenarios, such as forest fire and disaster relief or border surveillance, that often involve national security and personal safety and demand strictly real-time communication. However, the base stations these scenarios rely on are easily disrupted by natural factors such as earthquakes and other disasters.
II. What technical attempts have been made?
2.1 AI control
While watching the live stream, I heard Teacher Ma put forward a new idea. When the human eye perceives an image, the raw input is processed at roughly 100 B/s; it is then separated by the cells on the retina and compressed by roughly a factor of 100, and after a series of further cell-level processing only about 40 B/s remains. In addition, the region the eye focuses on is perceived at relatively high resolution, while regions outside the focus are perceived at lower resolution. The eye is also particularly sensitive to certain areas and certain colors; this is called the attention mechanism.
Traditional flow-control techniques in audio and video encoding and transmission often cannot choose a suitable algorithm and bitrate-control strategy for the specific network environment. The AI control module (the "brain") collects the features observed by the video session (the "eye"), including the video encoder state, encoding parameters, network conditions, and playback state at the receiving end, and based on these features makes encoding-parameter decisions that counter network fluctuation.
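To give the idea some shape, here is a minimal sketch of such a control loop in Python. It is only an illustration under assumed names: the SessionFeatures fields, the EncoderParams fields, and the simple threshold policy are placeholders for whatever learned model the "brain" actually uses.

```python
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    # Features the "eye" reports each control interval (assumed fields).
    estimated_bandwidth_kbps: float
    packet_loss_rate: float
    rtt_ms: float
    receiver_buffer_ms: float

@dataclass
class EncoderParams:
    target_bitrate_kbps: int
    resolution: tuple          # (width, height)
    keyframe_interval_s: int

def decide_encoder_params(f: SessionFeatures) -> EncoderParams:
    """Placeholder policy: in the real system this decision would come
    from a learned model rather than fixed thresholds."""
    # Leave headroom below the bandwidth estimate to absorb jitter.
    bitrate = int(f.estimated_bandwidth_kbps * 0.8)
    if f.packet_loss_rate > 0.1 or f.rtt_ms > 400:
        bitrate = int(bitrate * 0.6)      # back off harder under loss/latency
        resolution = (640, 360)
    else:
        resolution = (1280, 720)
    return EncoderParams(max(bitrate, 100), resolution, keyframe_interval_s=2)

# Observe -> decide -> apply, once per control interval.
sample = SessionFeatures(estimated_bandwidth_kbps=600, packet_loss_rate=0.15,
                         rtt_ms=450, receiver_buffer_ms=800)
print(decide_encoder_params(sample))  # lower bitrate and resolution under loss
```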
2.2 Active decision-making to strengthen the network (compression and fusion)
For different users, that is, at the playback side, the delivered frames are personalized, yet the overall perception does not differ much. This technique exploits the spatio-temporal consistency across multiple video frames, together with the fact that different retinal cells are sensitive to different image characteristics: some cells are sensitive to color, some to motion, some to direction, and some to texture. The human brain therefore decodes the audio and video information it perceives in parts, rather than bit by bit like a conventional decoder. Accordingly, any input video is split mainly into two streams: one stream preserves spatial texture detail, while the other carries motion, to which the eye is less sensitive, and so can be transmitted at lower fidelity. Of course, fusion and reconstruction still require learned compensation and transformation, so that the final perceived audio and video quality does not differ much from the original.
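A crude way to picture the two-stream idea is sketched below: one branch keeps occasional full-resolution keyframes for texture, the other keeps heavily downscaled frames for motion, and a (here trivial) fusion step reconstructs the output. The frame representation, keyframe spacing, downscale factor, and fusion rule are all assumptions made for illustration; the real system would use learned models at each stage.

```python
import numpy as np

def split_streams(frames, texture_every=10, motion_scale=4):
    """Split a video into a sparse full-resolution texture stream and a
    dense low-resolution motion stream (illustrative only)."""
    texture = {i: f for i, f in enumerate(frames) if i % texture_every == 0}
    motion = [f[::motion_scale, ::motion_scale] for f in frames]  # crude downscale
    return texture, motion

def fuse(texture, motion, motion_scale=4):
    """Reconstruct each frame from the nearest texture keyframe plus the
    low-resolution motion frame; a learned model would do this compensation."""
    out = []
    for i, m in enumerate(motion):
        key = texture[max(k for k in texture if k <= i)]
        up = np.kron(m, np.ones((motion_scale, motion_scale)))  # naive upsample
        out.append((key.astype(float) + up[:key.shape[0], :key.shape[1]]) / 2)
    return out

# Toy usage with random grayscale frames.
frames = [np.random.rand(64, 64) for _ in range(30)]
tex, mot = split_streams(frames)
reconstructed = fuse(tex, mot)
```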
2.3 Video bitrate adaptation based on reinforcement learning
Online model training is carried out per video category and per network category. For example, most boys like game videos while most girls like Taobao shopping videos, and different categories of video come back with different bitrates and accuracy. Based on this, the proposal is to ask whether separate models can be trained for different types of video, with the client selecting a different algorithm depending on the type of video being played. Compared with an offline model, the online learning platform improves efficiency to some extent.
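For a sense of what bitrate adaptation with reinforcement learning can look like in code, here is a minimal Q-learning-style sketch in Python, with one agent kept per video category. The state features, bitrate ladder, reward weights, and category names are assumptions made for illustration; published systems such as Pensieve use neural policies trained on much richer state.

```python
import random
from collections import defaultdict

BITRATES_KBPS = [300, 750, 1200, 2400]          # assumed bitrate ladder

def discretize(throughput_kbps, buffer_s):
    """Coarse state: (throughput bucket, playback-buffer bucket)."""
    return (min(int(throughput_kbps // 500), 5), min(int(buffer_s // 2), 5))

def reward(bitrate, rebuffer_s, last_bitrate):
    """Higher bitrate is good; rebuffering and bitrate switches are penalized."""
    return bitrate / 1000 - 4.0 * rebuffer_s - abs(bitrate - last_bitrate) / 1000

class QTableABR:
    """One table per video category (e.g. 'game', 'shopping')."""
    def __init__(self, epsilon=0.1, alpha=0.3, gamma=0.9):
        self.q = defaultdict(float)
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def choose(self, state):
        if random.random() < self.epsilon:          # explore
            return random.randrange(len(BITRATES_KBPS))
        return max(range(len(BITRATES_KBPS)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state):
        best_next = max(self.q[(next_state, a)] for a in range(len(BITRATES_KBPS)))
        self.q[(state, action)] += self.alpha * (r + self.gamma * best_next
                                                 - self.q[(state, action)])

agents = {"game": QTableABR(), "shopping": QTableABR()}   # per-category models
agent = agents["game"]
s = discretize(throughput_kbps=800, buffer_s=4.0)
a = agent.choose(s)                                  # index into BITRATES_KBPS
agent.update(s, a, reward(BITRATES_KBPS[a], rebuffer_s=0.0, last_bitrate=750), s)
```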
III. Personal perception
3.1 What are the specific application scenarios of a weak network environment? (1 Medicine Network / Chongqing 120 emergency services)
During the epidemic, Pharmacokine.com urgently opened a free online consultation channel for Wuhan and later expanded it to the whole of Hubei Province; its video consultation, electronic prescription, and remote drug purchase functions used Agora's real-time audio and video technology. In a video consultation, the doctor and the patient are in different network environments, and any of the weak network conditions described above may occur. Agora's weak network transmission and anti-packet-loss algorithms can still keep video smooth at up to 60% packet loss and keep voice smooth at up to 70% packet loss.
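Forward error correction (FEC) is one common family of anti-packet-loss techniques: redundant packets are sent so the receiver can rebuild a lost packet without waiting for a retransmission. The XOR parity scheme below is a generic textbook illustration of that idea only, not Agora's actual algorithm, and a real system would use much heavier redundancy to survive loss rates this high.

```python
def xor_bytes(chunks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def encode_group(packets):
    """Send the original packets plus one XOR parity packet per group."""
    return packets + [xor_bytes(packets)]

def recover(received):
    """If exactly one packet in the group is lost, XOR of the rest restores it."""
    return xor_bytes(received)

group = [b"pkt1", b"pkt2", b"pkt3"]
sent = encode_group(group)                 # 3 data packets + 1 parity packet
survivors = [sent[0], sent[2], sent[3]]    # the packet at index 1 was lost
print(recover(survivors))                  # b'pkt2'
```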
The 120 emergency service uses remote video guidance plus first-aid teaching videos to win opportunity and time for the patient. However, the patient may also be in a weak network environment, so guaranteeing the quality of audio and video transmission remains especially important. In addition, first aid is above all a race against time, so the connection success rate must be guaranteed; a failed connection may mean delayed treatment. According to data on the official website, there are more than 200 data centers around the world, and on top of this software-defined real-time network it can still deliver stable, reliable, high-quality transmission and 99.9% connectivity even in poor network environments.
3.2 Experience
Business forms keep changing, and technology must keep up with them. With the continuous development and upgrading of 5G, GPUs, chips, and other hardware, software developers have grown used to ignoring network jitter and hardware constraints. It rarely occurred to me that software I develop might one day have to run in a relatively harsh environment, or that the devices users run it on might be too old to be compatible, and so on. As a result I usually paid little attention to the robustness of my code; as long as it worked, that was enough. These habits had been influencing me without my noticing. I wonder whether other readers are like me; as the saying goes, correct the fault if you have it, and guard against it if you don't.
Before this, my understanding of audio and video had stayed at traditional codecs, live streaming, video on demand, and other common applications, without considering the differences in each user's network environment. Studying extreme video communication under weak networks is therefore not nitpicking; it has very real practical significance, from national defense and security down to every aspect of daily life.
Riding the wave of artificial intelligence, the audio and video field can also combine AI with human visual neuroscience to seek technical breakthroughs and innovation. In addition, I personally believe that the rise and application of concepts such as edge computing and fog computing have shortened the distance between users and services. In the past, services were mostly deployed on central nodes; now it is more efficient to deploy them as microservices, for example pushing WebRTC services to edge nodes. Edge nodes also cost less to run and save bandwidth.