In the design of the TSINGSEE Video platform, streaming capability was considered in full: real-time performance, server performance, network bandwidth pressure, and concurrency. Here we explain on-demand and non-on-demand (always-on) pull streaming once more.

Before that, a brief introduction to the TSINGSEE Video streaming platform. It has a clean separation of front end and back end, so each team can focus on its own business logic. Why make this point first? Because of this design, the front end of the TSINGSEE Video platform can be treated as a demo: you can build your own set of front-end pages and swap them in for the platform's.

On-demand pull streaming

On-demand pull streaming is exactly what it sounds like: the stream is pulled only when needed. When a client makes a playback request, the media server goes to the front-end device to fetch the stream: pull stream -> demux -> remux -> distribute. The goal is to save bandwidth. The front-end device may be on a wireless connection, or its network may already be under heavy load, so pulling only when a client actually asks makes much better use of the available bandwidth.
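The on-demand behaviour can be sketched as reference-counted pull sessions: the upstream pull starts with the first viewer and stops when the last one leaves. This is an illustrative sketch only; class and method names (`PullSession`, `OnDemandManager`) are assumptions, not the TSINGSEE API.

```python
class PullSession:
    """One upstream pull from a front-end device (camera/encoder)."""

    def __init__(self, device_url):
        self.device_url = device_url
        self.viewers = 0
        self.active = False

    def start(self):
        # In a real server this would open the upstream connection and
        # run the pull -> demux -> remux -> distribute pipeline.
        self.active = True

    def stop(self):
        self.active = False


class OnDemandManager:
    """Reference-counted sessions: pull on first viewer, stop on last."""

    def __init__(self):
        self.sessions = {}

    def client_join(self, device_url):
        s = self.sessions.setdefault(device_url, PullSession(device_url))
        s.viewers += 1
        if not s.active:
            s.start()  # first viewer triggers the upstream pull
        return s

    def client_leave(self, device_url):
        s = self.sessions[device_url]
        s.viewers -= 1
        if s.viewers == 0:
            s.stop()  # no viewers left: release upstream bandwidth
```

The key property is that upstream bandwidth toward the camera is consumed only while someone is actually watching.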

However, this approach has a drawback: playback starts slowly. Audio and video are not continuously flowing from the device's encoder through to the player's decoder; the whole pipeline only spins up when a request arrives, so the first frame takes noticeably longer.

Non-on-demand (always-on) pull streaming

In contrast to on-demand, this mode pulls the stream all the time. Put simply, the media server pulls audio and video from the front-end device without interruption, regardless of client demand, continuously doing the pull stream -> demux -> remux -> distribute work. This increases network pressure, because the server pulls from the front-end device whether or not any client is playing. The payoff is near-instant playback: whenever a client connects, the server already has data, with no need to wait for the front-end device to encode, transmit, and hand over the stream.
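The always-on mode can be sketched the same way: every configured device is pulled at server start, so a joining client attaches to a stream that already has data. Again, names and structure here are illustrative assumptions, not the platform's real interface.

```python
class AlwaysOnManager:
    """Pull every device up front, regardless of client demand."""

    def __init__(self, device_urls):
        # All pulls start immediately at server boot.
        self.sessions = {url: {"active": True, "viewers": 0}
                         for url in device_urls}

    def client_join(self, device_url):
        s = self.sessions[device_url]
        s["viewers"] += 1
        # Nothing to start: data is already flowing, so playback begins
        # without waiting for device encoding, transmission, or a keyframe.
        return s
```

The trade-off is visible in the constructor: upstream bandwidth and processing are spent on every device from the start, even ones nobody ever watches.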

However, this mode puts heavy pressure on the server, because the demux -> remux work is done entirely in memory (HTTP-HLS excepted). Imagine hundreds of audio/video streams being pulled and processed in memory at the same time; the load on the server is easy to picture.
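A rough back-of-the-envelope estimate makes "hundreds of streams in memory" concrete. The numbers below (bitrate, buffer depth) are illustrative assumptions, not measured TSINGSEE figures, and they count only stream buffers, not the rest of the server's footprint.

```python
def buffer_memory_mb(streams, bitrate_mbps, buffer_seconds):
    """Approximate RAM held by per-stream jitter/GOP buffers, in MB."""
    megabits = streams * bitrate_mbps * buffer_seconds
    return megabits / 8  # 8 bits per byte

# 500 always-on cameras at 4 Mbps each, buffering ~3 seconds per stream:
print(buffer_memory_mb(500, 4, 3))  # 750.0 MB just for stream buffers
```

Even with modest per-stream buffering, always-on pulling at scale reserves a large, permanent slice of RAM before a single client connects.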

Streaming media concurrency

Let me say a bit more about the concurrency of streaming media.

The TSINGSEE Video streaming platform kernel is an improved Nginx, so it handles highly concurrent access efficiently, but the achievable concurrency differs by distribution protocol. Take HTTP-HLS: its biggest bottleneck is not programming ability, but disk read/write performance. Strictly speaking, HTTP-HLS is not a true live-streaming protocol; the server writes TS segments to disk, and the player repeatedly requests and downloads those files to play.

HTTP-FLV, WebSocket-FLV, and WebRTC hold up very well now. With Flash obsolete, playing RTMP streams in the browser takes extra effort: either additional plugins or transcoding. These three protocols are processed entirely in memory, so as long as server performance and bandwidth can keep up, concurrency is almost a non-issue.