A long time ago, I was a cleaner, until one day God said that a cleaner who doesn't understand on-device intelligence is not a good cleaner. So I learned this technology from the little brother next door and wrote it up in this article. If there are any mistakes, please take them up with the little brother next door ~
This article talks about on-device intelligence and how it has developed at Xigua Video. You've probably heard of on-device intelligence, but it can feel like viewing flowers through fog, so let's take a proper look at it today.
From cloud intelligence to on-device intelligence
Many people may wonder why a discussion of on-device intelligence starts with edge computing. Is it just padding the word count? In fact, the extension from cloud intelligence to on-device intelligence mirrors the extension from cloud computing to edge computing. Cloud computing has many advantages, such as enormous computing power and massive data storage capacity. Today, many apps that provide services to users essentially depend on some form of cloud computing, such as live streaming platforms and e-commerce. But users' demand for more timely, more stable service pushes us to deploy services on physical devices as much as possible, and that demand is what ultimately drove edge computing.
From cloud intelligence to on-device intelligence, the essence is the same. In a driverless car, for example, you cannot let road-condition analysis stall because of network fluctuations, or hesitate even a ten-thousandth of a second over whether to brake or accelerate.
So what exactly is edge computing? And what does edge computing have to do with so-called on-device intelligence? Keep reading.
Edge computing
Edge computing is a networking concept that strives to keep computation as close to the data source as possible in order to reduce latency and bandwidth usage. In general, edge computing means running fewer processes in the cloud and moving more of them to local devices, such as a user's phone, an IoT device, or an edge server. The advantage is that moving computation to the edge of the network minimizes traffic between client and server and keeps the service stable.
It is not hard to see that edge computing is essentially a service, similar to the cloud computing services we have now, but one that sits very close to the user and can respond much faster. Simply put, whatever needs to be fast is done at the edge.
It is important to note that edge computing is not meant to replace the cloud. If cloud computing emphasizes global control, edge computing focuses on the local. Edge computing is essentially a supplement to and an optimization of cloud computing: it helps us solve problems with data in near real time. Any business scenario that needs lower latency or real-time behavior has a use for it, for example compute-intensive workloads and artificial intelligence. The AI scenario here is exactly the on-device intelligence we discuss next.
The essence of on-device intelligence
As for artificial intelligence, we are already familiar with it, especially through applications like Toutiao, Douyin, and Kuaishou, products that push machine learning to the extreme. Hardware such as robot vacuums and driverless cars are also prime examples of applied AI. So where does on-device intelligence fit in?
Similar to the evolution from cloud computing to edge computing, artificial intelligence has gone through its own journey from cloud to device: what we call on-device intelligence is simply machine learning running on the device side. Besides smartphones, device-side hardware includes IoT and embedded devices such as handheld translators and surveillance cameras, and of course autonomous vehicles belong to this field as well.
Since 2006, artificial intelligence has been in its third stage of development, with AlphaGo's successive victories over Lee Sedol and Ke Jie announcing the arrival of a new era. The forces behind this progress are:
- The growth of big data and improvements in hardware computing power: CPUs, GPUs, and dedicated compute units;
- Deep learning algorithms and frameworks that keep evolving, from Torch and Caffe to TensorFlow and PyTorch.
At the same time, device-side hardware has also developed rapidly in computing power, algorithms, and frameworks:
- Computing power: CPU and GPU performance keeps improving, phones keep getting faster, and dedicated neural processing chips for AI have gradually become standard, supporting the execution of algorithm models to varying degrees;
- Frameworks: machine learning frameworks designed for mobile make it much easier to apply machine learning on the device. On mobile, Apple's Core ML and Google's NNAPI provide system-level support, and there are plenty of good device-side frameworks besides: TensorFlow Mobile/Lite, Caffe2, NCNN, TNN, MACE, Paddle & Paddle Lite, MNN, Tengine, and so on;
- Algorithms: model compression keeps advancing. Quantization in particular is now very mature; it can shrink a model to 1/4~1/3 of its original size without reducing accuracy. In addition, after continuous optimization of algorithm models for the device side, architecture designs have matured and become much friendlier to the compatibility constraints of device-side hardware.
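To build intuition for why quantization shrinks a model to roughly 1/4, here is a minimal sketch of symmetric int8 quantization in pure Python (the toy weights and helper names are illustrative, not any particular framework's API). Each float32 weight costs 4 bytes; its int8 counterpart costs 1 byte, and the rounding error stays within one quantization step:

```python
import random
import struct

def quantize_int8(weights):
    """Symmetric linear quantization: map float weights onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

random.seed(42)
weights = [random.gauss(0.0, 1.0) for _ in range(1024)]  # toy float32 weight tensor

q, scale = quantize_int8(weights)
fp32_bytes = len(struct.pack(f"{len(weights)}f", *weights))  # 4 bytes per weight
int8_bytes = len(struct.pack(f"{len(q)}b", *q))              # 1 byte per weight

max_err = max(abs(a - b) for a, b in zip(weights, dequantize(q, scale)))
print(int8_bytes / fp32_bytes)   # 0.25 -> the stored weights shrink to 1/4
print(max_err <= scale / 2)      # True -> rounding error stays within half a step
```

In real deployments the framework (e.g. an inference engine's converter) handles per-layer scales and calibration, but the size arithmetic is exactly this.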
Of course, the ultimate driving force behind both cloud intelligence and on-device intelligence is real product demand. On the device side, demand-driven AI applications have become major selling points in both hardware and software. Take the improvement of phone cameras: beyond better camera hardware, advances in image processing algorithms play an important role. Likewise, the effects and filters in apps like Douyin and Kuaishou greatly improve the output while simplifying creation.
Why do on-device intelligence?
To understand why we do on-device intelligence, we essentially need to understand what problems cloud intelligence currently faces. As stated earlier, cloud intelligence developed out of modern needs, and its main advantages are:
- Massive data, which means we can approach the optimal solution to a problem through continuous data accumulation;
- Abundant hardware resources and strong computing power;
- Algorithms can be large in scale, with effectively unlimited storage space.
But this cloud-centric architecture has its drawbacks:
- Response speed: it depends on network transmission, so stability and response time cannot be guaranteed. For cases requiring hard real-time behavior, this problem is essentially unsolvable;
- Non-real-time: detached from the user's real-time context, it senses the user's state poorly.
The advantages of on-device intelligence complement these disadvantages:
- Fast response: data can be obtained and processed directly on the device without relying on network transmission, so stability and responsiveness are solid;
- Real-time: device-side hardware reaches the user directly, with no middleman taking a cut, and can sense the user's state in real time.
In addition, since data is both produced and consumed on the device, the privacy of sensitive data can be protected and legal risks avoided. We can also apply much finer-grained strategies, making true per-user personalization ("a thousand faces for a thousand people") possible.
Limitations of on-device intelligence
As mentioned above, on-device intelligence complements several disadvantages of cloud intelligence, but it has problems of its own:
- Compared with cloud-side hardware, device-side resources and computing power are limited, so large-scale sustained computation is impossible and models cannot be too complex;
- The device usually sees only a single user's data, so the data scale is small and a globally optimal solution is hard to find. In addition, because the life cycle of a device-side application is not under our control, the data cycle is often short.
Under these two constraints, on-device intelligence usually means inference only: a model trained in the cloud is delivered to the device, which then runs inference locally.
Of course, as the technology evolves, we have also begun to explore federated learning. It is essentially a form of distributed machine learning whose main goal is to solve the data-silo problem while keeping data secure, private, and legally compliant: participants jointly build a model without sharing raw data, improving the model's effectiveness. In this setting, the device can do not only inference but also learning.
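The core loop of federated learning can be sketched in a few lines. The following is a minimal, pure-Python illustration of federated averaging (FedAvg-style) for a toy one-parameter regression; the function names, data, and hyperparameters are all illustrative, not any production protocol. Each simulated client trains on its own private data and sends back only a model, never the data:

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    """One client: a few epochs of gradient descent on its private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(updates, sizes):
    """Server step: size-weighted average of client models; raw data never leaves a device."""
    total = sum(sizes)
    return sum(w * n / total for w, n in zip(updates, sizes))

random.seed(0)
TRUE_W = 2.0      # the ground truth every client's private data follows
w_global = 0.0

for _round in range(20):                    # communication rounds
    updates, sizes = [], []
    for _client in range(3):                # 3 clients, each with private samples
        data = [(x := random.uniform(-1, 1), TRUE_W * x) for _ in range(32)]
        updates.append(local_update(w_global, data))
        sizes.append(len(data))
    w_global = fed_avg(updates, sizes)

print(round(w_global, 3))  # converges toward 2.0 without sharing any raw sample
```

Real systems add secure aggregation, client sampling, and compression on top, but the shape of the exchange is the same: models travel, data stays put.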
5G and on-device intelligence
Some people have asked: since 5G is fast and good, is there still any need for on-device intelligence? Here is a brief account of my own understanding. (If anything is wrong, please ask the guy next door.)
5G's high-speed connections and ultra-low latency are intended to help scale distributed intelligence, or cloud-device collaborative intelligence. To explain: 5G provides more stable and seamless support for the connection between device and cloud, so the network will no longer be the bottleneck of that interconnection. However, with the arrival of the Internet of Everything, data volumes will grow further into an era of super-large scale, at which point server computing power becomes the bottleneck. It then becomes even more important to preprocess and understand data on the device side, and that requires on-device intelligence.
Take security as an example: many home cameras now support cloud storage. What if we want to save video to the cloud only when someone is moving in the frame? The best approach is to run image recognition on the device first and upload only the clips that contain people, rather than uploading everything and then processing it in the cloud (cutting out the useless clips). The former saves both network bandwidth and cloud computing power. This on-device processing of video is exactly the preprocessing and understanding of data we just talked about.
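The camera example boils down to a gating pattern: run a cheap detector locally, and pay for the network and the cloud only when it fires. A minimal sketch (the `Clip` type, detector, and uploader here are stand-ins, not a real camera SDK):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Clip:
    clip_id: str
    frames: list = field(default_factory=list)  # raw frame data (placeholder)

def filter_and_upload(clips: List[Clip],
                      has_person: Callable[[Clip], bool],
                      upload: Callable[[Clip], None]) -> int:
    """Run the on-device detector first; only clips with people reach the cloud."""
    uploaded = 0
    for clip in clips:
        if has_person(clip):   # cheap local inference
            upload(clip)       # expensive network transfer + cloud storage
            uploaded += 1
    return uploaded

# Toy stand-ins: pretend every even-numbered clip contains a person.
clips = [Clip(f"c{i}") for i in range(6)]
sent = []
n = filter_and_upload(clips,
                      has_person=lambda c: int(c.clip_id[1:]) % 2 == 0,
                      upload=sent.append)
print(n, [c.clip_id for c in sent])  # 3 ['c0', 'c2', 'c4']
```

The saving is proportional to how selective the local model is: everything it rejects never touches the uplink or the cloud CPU.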
As for AR, on-device intelligence gives AR its interactive capability, while 5G provides the network transmission capability AR requires. With both in place, interactive, high-quality AR will push the integration of the real world and virtual environments.
Why does Xigua do on-device intelligence?
The content above is a quick tour of on-device intelligence's past and present, light reading worth a minute of your time. But all talk and no practice is fake kung fu, so next is a brief introduction to Xigua's exploration of on-device intelligence and how we landed it in production. Many students know we are doing it but not what we are doing; today we lift the veil. First, let's see why Xigua does on-device intelligence at all.
Before going any further, we have to answer a question: why does Xigua need on-device intelligence? In other words, how does Xigua position it?
- Xigua Video is a long-video product serving a large consumer audience. Meeting the needs of different user segments inevitably requires finer-grained operating strategies. But our strategies used to be confined to the server or to a strategy pool for test scenarios, with the client acting mostly as a passive recipient in the process; its responsiveness and real-time behavior were not good enough;
- As Xigua Video keeps improving playback quality, we have achieved significant gains in many scenarios. The accompanying problem is that simple, easy strategies can no longer satisfy our appetite; we need more detailed, flexible strategies to capture further gains;
- Xigua Video has always been committed to providing users with more valuable content, and playback, as Xigua's core function, is committed to building a smarter player and a better audio-visual experience, which undoubtedly raises the bar for our client-side playback strategies;
- For a video product, bandwidth cost is far from negligible. We want to reduce wasted bandwidth as much as possible without hurting the playback experience.
Because of these factors, while solving problems with traditional client-side techniques, we also became interested in device-side AI capabilities and began investigating them in August 2016. In the end we set two major goals: strategy refinement and cost savings.
Guided by these two goals and by Xigua's concrete needs, we focus on two areas:
- Business: fully explore the decision points that can be optimized on the consumption side (the player) and the creation side, reduce business costs, and build a smarter decision system; use device-side AI capabilities to build better product features for specific scenarios;
- Technical: try combining device-side AI with stability and compilation optimization to automatically analyze and trace abnormal call chains; try building a device-side signature database to realize hot-function/plugin scheduling decisions.
On-device intelligence at Xigua
Why Xigua does on-device intelligence should now be clear. But how do we actually do it? On the surface it is nothing more than moving a system from the server to the client, with changes only at the front and back end. In reality, migrating from cloud to device involves not only changes to the cloud-device technical link but, even more, cooperation between engineers in different fields: from the client's point of view we focus on engineering architecture and interaction, while from the algorithm's point of view the focus is data and models. The gap between these two concerns is what makes landing on-device intelligence genuinely hard.
The basic flow
First, a brief look at the stages of moving intelligence from cloud to device. The path mainly involves algorithm design, model training, model optimization, model deployment, and device-side inference. By participant, it mainly involves algorithm engineers and client engineers; by environment, it can be divided into the cloud project and the client project, as shown below:
To summarize the process in one sentence: the algorithm engineer designs and trains a model for a specific scenario, then optimizes it to reduce its size and improve execution efficiency; in the deployment stage the optimized model is converted into a format supported by the device-side inference engine and deployed to mobile devices; the client engineer ports the algorithm to the scenario and adapts the business logic; and at the appropriate time the inference engine loads the model and runs inference.
Three mountains
Above, we summarized the landing process of on-device intelligence in one sentence, but the actual process is far more complicated than the picture suggests:
As the figure shows, the overall link is relatively long, and a problem at any node implies a long troubleshooting route. It demands close collaboration between algorithm engineers and client engineers, yet because the two sides' technology stacks and knowledge domains differ so much, the collaboration cost is substantial:
- Client engineers know relatively little about machine learning algorithms and principles. In many cases the client treats machine learning as a black box: unable to give informed input on device-side constraints and business logic, unable to effectively spot anomalies in model inference, and unable to predict the overall results. When it comes to porting the algorithm there are even more problems; talking past each other is not uncommon;
- Algorithm engineers lack understanding of device-side hardware. The device-side environment is nowhere near as uniform as the server side; it is far more complex. How to design an efficient model for the device, and how to keep inference from hurting device-side performance metrics, are the key questions, and whether an algorithm can be ported to the device as expected often remains to be seen.
In addition, how the device-side inference engine stays compatible with a diverse device population, guarantees high availability, and achieves integrated monitoring and model deployment are also key issues to solve when landing on-device intelligence.
Integrated cloud-device deployment
Earlier we identified three problems in landing on-device intelligence: a long link, high collaboration cost, and complex inference deployment. Here is the analysis of each problem and the corresponding solution:
- For algorithm design through model training, the company's internal MLX platform provides model training and server-side deployment, but does not support the link from training in the cloud to deploying on the device. We need to connect the MLX platform to the device side to realize model deployment and model format conversion;
- Traditional on-device intelligence depends on two parts on the device: the algorithm and the model. The algorithm has to be ported to the device after design and testing in the cloud, which carries porting cost and lacks rapid deployment and update. Meanwhile, algorithm and model are maintained separately; after several iterations of each, version management gets complicated and compatibility is hard to guarantee. If the algorithm and model could be bundled and delivered dynamically, collaboration cost would drop and the link would be simplified;
- In a complex device environment (especially Android, with its varied configurations and hardware), the inference engine is critical. Before landing, we investigated TensorFlow Lite, TNN, MNN, Paddle-Lite, and the company's internal ByteNN, and finally chose ByteNN.
Why ByteNN?
- As the unified AI base engine library of Douyin and TikTok, ByteNN is already integrated into many products, so it adds no extra package size;
- It does targeted performance tuning for ARM processors, Adreno/Mali GPUs, and Apple GPUs, supports multi-core parallel acceleration, and performs well;
- Broad device compatibility, covering Android devices and iOS, with high availability;
- It supports CPU, GPU, NPU, DSP, and other processors, and with vendor support, generality is not a problem.
With the three mountains ahead clearly visible, what needed to be done was clear too. On on-device intelligence we happened to share the same ideas as the platform's Client AI team, and together we began to push the exploration and implementation of on-device intelligence in Byte's business strategies.
With the Pitaya solution, we can focus more on algorithm design and mining business scenarios. What is Pitaya? Simply put, it is a solution that connects the cloud-side MLX environment with the device-side environment, deploys algorithms and models to the device as unified "algorithm packages", and supports device-side feature engineering. It also integrates an on-device runtime container and drives the inference engines (ByteDT, ByteNN). Its basic architecture is as follows:
The overall link is now much simpler, and we can concentrate on algorithm design for business scenarios. At the same time, the efficient runtime container enables rapid deployment of algorithm packages, improves overall iteration efficiency, and gives us stronger cloud-device linkage:
Intelligent preloading in Xigua Video
In terms of interaction, Xigua Video's landscape feed can be understood as a landscape version of Douyin. It is Xigua's core consumption scene, as shown below:
As Xigua Video's core scene, its playback experience is crucial. To achieve better video startup, the scene uses a preload strategy: after the current video starts playing, the next three videos are preloaded, 800K each.
800K is an experimental value, chosen to cut cost as much as possible without making stalls worse. As everyone knows, preloading significantly improves the playback experience, but it also increases bandwidth cost. If 800K doesn't feel intuitive: for a 720P video with an average bitrate of 1.725 Mbps, 800K is about 4 seconds of video.
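The "about 4 seconds" figure is easy to verify: convert the preload size from kilobytes to megabits and divide by the bitrate.

```python
# How many seconds of 720P video fit in an 800 KB preload at ~1.725 Mbps?
preload_kb = 800
bitrate_mbps = 1.725

preload_mbits = preload_kb * 8 / 1024     # kilobytes -> megabits
seconds = preload_mbits / bitrate_mbps
print(round(seconds, 2))                  # 3.62 -> roughly 4 seconds of video
```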
For convenience, here is a diagram of the 3×800K flow:
It is not hard to see that only for the first video are three videos preloaded at once; after that it becomes an incremental preload of one video at a time, so three videos are always preloaded on the device.
Why an on-device intelligent solution?
The scheme above is simple and crude, but it has limitations: bandwidth and playback experience are not well balanced:
- In some cases users never finish the 800K of cached video; they merely skim the title or the first few seconds and move on, wasting bandwidth;
- Conversely, when a user wants to watch a video carefully, the cache may not be big enough, so startup can fail or be slow, hurting the user experience.
Our data analysts also pointed out that shrinking the preload size would cut cost but seriously worsen stalls. So in the short term we kept the fixed 3×800K scheme, while in the long term we pushed to ship a scheme that adjusts the preload size dynamically.
Beyond traditional improvements to the preload task management mechanism, we began to rethink how to measure the rate of effective preloading.
Ideally, the preload size equals the playback size, giving a preload efficiency of 100%. If we preload 1000K but the user watches only 500K, the efficiency is 50%, and the wasted 500K is real gold 💰. The worst case is that we preload 1000K and the user doesn't watch at all: sorry, the efficiency is zero and we have wasted a lot of money. Now let's define the formula we use to evaluate preload efficiency:
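The formula image did not survive translation, but the examples above pin it down: efficiency is the fraction of preloaded bytes actually watched, capped at 100%. A plausible reconstruction (my own naming, not Xigua's internal definition):

```python
def preload_efficiency(preloaded_kb: float, watched_kb: float) -> float:
    """Fraction of the preloaded bytes the user actually watched, capped at 100%."""
    if preloaded_kb <= 0:
        return 0.0
    return min(watched_kb, preloaded_kb) / preloaded_kb

print(preload_efficiency(1000, 1000))  # 1.0 -> ideal: preload size == playback size
print(preload_efficiency(1000, 500))   # 0.5 -> half the preloaded bytes were wasted
print(preload_efficiency(1000, 0))     # 0.0 -> pure waste, the user never watched
```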
It is not hard to see that the ideal preload strategy matches the preload size to the playback size as closely as possible, preloading just as much as the user will watch during playback startup, thereby improving the playback experience while reducing wasted bandwidth.
So how do we do that? We hope to analyze user actions in real time using device-side AI and adjust the preload strategy accordingly, improving the playback experience while avoiding bandwidth waste.
Optimization scheme
As the formula shows, the key to improving preload efficiency is predicting how long the user will watch, so that the preload size tracks the playback size as closely as possible. But estimating how long each user will watch each video is hard; many factors are involved, such as the user's interest in the video and their state of mind.
Setting those complex factors aside, let's rethink the correlation between preloading behavior and browsing behavior. We assume users exhibit two typical behaviors:
- Fast swiping, short viewing: the user is skimming, only glancing at each video and switching videos frequently;
- Slow swiping, careful viewing: the user tends to watch each video through and switches videos rarely.
If we could use a model to infer, from the user's recent browsing behavior, which mode their subsequent browsing is likely to follow, would that help us optimize the preload strategy? The preloading process then looks like this:
Beyond improving preload efficiency and achieving more precise preload scheduling through the on-device intelligent solution, there are supporting business adjustments and optimizations on the device, such as aggregating inference parameters, adjusting when the model is triggered, and improving preload task management; these complement each other. The part of the process involving on-device intelligence is as follows:
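To make the two-mode idea concrete, here is a heavily simplified sketch of the decision step. The threshold heuristic stands in for the trained model that Pitaya would actually run, and the preload counts and sizes are illustrative numbers, not Xigua's production values:

```python
from typing import List, Tuple

def classify_browsing(watch_times_s: List[float], threshold_s: float = 5.0) -> str:
    """Stand-in for the on-device model: label recent behavior as skimming or watching.
    A real system feeds engineered features to a trained DT/NN model instead."""
    avg = sum(watch_times_s) / len(watch_times_s)
    return "skimming" if avg < threshold_s else "watching"

def preload_plan(mode: str) -> Tuple[int, int]:
    """Map the predicted mode to (videos to preload, size per video in KB).
    Skimmers get fewer/smaller preloads; careful viewers get more/larger ones."""
    return (2, 400) if mode == "skimming" else (3, 1200)

recent = [1.2, 2.5, 0.8]          # the user skipped the last three videos quickly
mode = classify_browsing(recent)
print(mode, preload_plan(mode))   # skimming (2, 400)
```

The real pipeline replaces the heuristic with model inference and layers on the fault tolerance and scheduling details described below.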
Feature engineering
Based on online users' event-tracking data, we extracted the tracking events related to the landscape feed for labeling and correlation analysis, and after several iterations finally screened out the features correlated with user browsing behavior. In addition, Pitaya's feature capabilities make it convenient to pull real-time features from different data sources and convert them into model inputs.
Algorithm selection
Many machine learning algorithms, from traditional decision trees and random forests to deep neural networks, can deliver device-side AI capabilities. However, model size and performance must be weighed against the device-side hardware while still meeting the scenario's requirements: the smaller and faster the model, the better. In this scenario, for example, both a DT and an NN can classify the user's browsing mode, but model size, performance, and effectiveness all have to be evaluated to decide which to use.
Inference and prediction
First, the decision timing: each time the user swipes to a video and that video (other than the first) starts playing, the algorithm package is triggered; we wait for the decision result and then schedule the subsequent preload tasks. The brief process is as follows:
Triggering inference relies on two kinds of data: data about videos already watched, and detailed data about videos not yet watched. Pitaya obtains the former automatically from tracking logs, while the latter must be aggregated and passed through explicitly by the business side. At the same time, the device needs a fault-tolerance mechanism to roll back to the default strategy promptly if the algorithm package misbehaves or inference fails.
Preload task scheduling
Once a decision result arrives, we schedule according to the latest two inference results, mainly adjusting the number of preload tasks (retaining still-valid tasks) and the size of each preloaded video (adjusted incrementally). There are many business details here, so I won't go further.
On-device performance monitoring
Besides the usual on-device monitoring (performance, business metrics), this involves monitoring model metrics such as execution success rate, inference time, PV/UV, and of course accuracy.
Problems & Challenges
Xigua began landing this scenario in December of last year. Getting from there to clear gains and a full rollout took about 1.5 bimonthly cycles, or 2 cycles counting the Spring Festival holiday. Why so long? From the first experiments, which showed large negative results, to positive gains, what problems did we encounter?
Inference latency
In the earlier experiments (V1.0 ~ V3.0), we found that compared with the online strategy, the intelligent preloading group degraded first-frame time and other metrics; for example, first-frame time increased by 2ms. After applying various technical means and data analysis, we finally found that factors such as on-device aggregation of inference parameters and inference latency can indirectly affect the overall metrics. We therefore introduced several optimizations, such as stepped inference, asynchronous scheduling, warming up the first feed refresh, and model optimization, finally bringing the relevant metrics level and confirming significant gains.
Bandwidth costs
Bandwidth cost accounting is affected by many factors and is easy to get wrong; in a business as complex as Xigua Video, accurately measuring the bandwidth gains of the scheme is especially complicated. With the support of several students from the player and video architecture teams, we finally determined that the bandwidth cost of the experimental group (95th percentile) was on average 1.11% lower than that of the online group.
Peak model
In many cases user habits come in phases, so we also proposed a peak model for making finer decisions during the high and low peaks of user activity. (It is too tightly coupled to the business to explain in detail; students who want to know more are welcome to join Byte and become colleagues!)
Experiment results
Through a series of staged experiments and strategy verification, the gains from intelligent preloading were confirmed:
- Total bandwidth -1.11%, preload bandwidth -10%;
- Video playback failure rate -3.372%, failure-before-start rate -3.892%, stall rate -2.031%, stall count per hundred seconds -1.536%, stall penetration rate -0.541%.
In addition, other metrics in the experimental group also moved in a positive direction, though not significantly, so we won't detail them here.
Present Situation & Summary
The current state of on-device intelligence at Xigua
Currently, intelligent preloading is fully live on Android, and iOS access is in progress. Intelligent preloading is just the starting point for Xigua Video; we are exploring more scenes to provide users with a smarter playback experience.
Whether cloud intelligence or on-device intelligence, what ultimately matters is that it brings us new ideas for solving problems. Frankly, I don't know what on-device intelligence will mean for the future, but I am certain we can combine it ever better with business scenarios to pursue better business results.
On-device intelligence is not a silver bullet: we can't just use it, call it a day, and expect good results. To truly realize its business value and serve users better, we need to think harder and apply on-device intelligence where it belongs.
New ideas, new attempts
For many client engineers, the question may be how to approach on-device intelligence, or how it differs from the ways we solved problems before. On-device intelligence is not a brand-new creature built from zero; it is a natural extension of cloud intelligence, and the system behind it is still machine learning. For the client it offers a new way of thinking: from rule-driven to data-driven, and then to model-driven. We can use it to find the right fit in the right business scenario and achieve better results.
So how do we client engineers get into this area? What can we do if we want to understand on-device intelligence?
Understand the basics
As mentioned many times, on-device intelligence is a natural extension of cloud intelligence, and behind it is still basic machine learning. For most client engineers, the goal is not to become machine learning experts or to research every algorithm and model in depth, but systematically learning the theory is still well worth it:
- Understand the development process of artificial intelligence, know why the current deep learning become the mainstream direction;
- If you look at some of the classical algorithms that were once popular and the problem areas that they were able to solve, you will see that some of the ways of thinking about previous problems are still relevant to deep learning.
- Understand the common neural network architecture, the role of different network layers (such as the convolution layer/pooling layer in CNN), and how the whole is organized.
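To make the last point concrete, here is a toy sketch (not from the original article, and deliberately framework-free) of the two CNN building blocks mentioned above: a convolution layer slides a small kernel over the input to detect local patterns, and a pooling layer downsamples by keeping the strongest responses.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most DL libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Dot product of the kernel with the patch under it.
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

def max_pool2d(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest activation per window."""
    out = []
    for i in range(0, len(feature_map) - size + 1, size):
        row = []
        for j in range(0, len(feature_map[0]) - size + 1, size):
            row.append(max(
                feature_map[i + di][j + dj]
                for di in range(size) for dj in range(size)
            ))
        out.append(row)
    return out

# A 4x4 "image" with a vertical edge down the middle...
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# ...and a 2x2 vertical-edge-detector kernel.
kernel = [
    [-1, 1],
    [-1, 1],
]

features = conv2d(image, kernel)   # 3x3 map; responds strongly along the edge
pooled = max_pool2d(features)      # downsampled to the single strongest response
```

Real networks stack many such layers (with learned kernels) and add nonlinearities between them; this sketch only shows why convolution finds local patterns and pooling shrinks the feature map.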
Model migration and practice
Systematic knowledge can largely be acquired by reading the classic books in the field. Beyond that, hands-on practice helps: start with some samples and take it step by step, quantize a trained model and port it to run on the client, or modify algorithms and models on top of open-source projects. For example, try handwritten-digit recognition or face detection, or even quantitative trading, using your own model to predict stock prices (if you lose money, don't blame me 😁).
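The quantization step mentioned above is the key trick for fitting models onto phones. As a hedged illustration (a simplified sketch of the idea, not any particular toolchain's implementation), 8-bit affine quantization maps float32 weights to integers via a scale and a zero-point, shrinking storage roughly 4x at a small accuracy cost; real toolchains such as TensorFlow Lite do this per-tensor or per-channel.

```python
def quantize(weights, num_bits=8):
    """Affine (asymmetric) quantization of a list of floats to uint8-range ints."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # range must contain 0.0
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against all-equal weights
    zero_point = round(qmin - lo / scale)     # the int that represents 0.0
    q = [min(qmax, max(qmin, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, 0.0, 0.5, 2.3]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored value is within one quantization step of the original,
# but each weight now fits in one byte instead of four.
```

The per-weight error is bounded by the scale (the width of one quantization step), which is why accuracy usually degrades only slightly while the on-device model becomes much smaller and faster to load.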
Performance optimization
On-device hardware, and mobile phones in particular, emphasizes interaction and is constrained by power, storage space, compute capacity, and so on. This means our models must execute efficiently and cannot be too large, which is why performance optimization is also very important in on-device intelligence.
For most client engineers, in addition to traditional business-scenario optimization, this may also involve inference optimization and deployment. Inference optimization touches not only models and frameworks but also hardware: Neon/SSE/AVX instruction-set optimization, and hardware acceleration technologies such as Qualcomm DSP/GPU and Huawei NPU.
Embrace AIOT
Whether for edge computing or on-device intelligence, the carrier is not just the mobile phone; it also includes all kinds of embedded devices, such as fingerprint locks, surveillance cameras, and drones. There are plenty of devices to practice on, whether a Raspberry Pi, a Coral dev board with the Edge TPU, or an Intel Movidius Neural Compute Stick, and plenty of fun things to build with them.
Join us
Now it's time for some advertising, with thanks to our two "advertisers": the Live Broadcasting Center and the Watermelon Video team. Here are our current openings:
We are the team responsible for providing livestreaming services for all of ByteDance's apps, including but not limited to Douyin, Huoshan, Toutiao, Watermelon Video, Pipixia, Dongchedi, Tomato Novel, and Tomato Audiobooks. The live broadcasting team is responsible not only for developing the livestreaming platform technology and providing stable basic services for livestreaming, but also for developing the livestreaming business itself, and is committed to giving users the best possible livestreaming experience.
If you are passionate about technology and want to build the best livestreaming experience with us, you are welcome to join the Live Broadcasting Center. Whether you are an experienced hire or a 2022 campus-recruitment student, contact [email protected] with the subject line: Name – Years of experience – Live broadcast – Android/iOS.
You are also welcome to join ByteDance's Watermelon Video client team. We focus on the development and technical infrastructure of the Watermelon Video app. If you want to tackle hard technical problems and take on bigger technical challenges with us, please join! The Watermelon Video client team is hiring Android and iOS architects and R&D engineers, with positions open in Beijing, Shanghai, Hangzhou, and Xiamen for experienced hires, campus recruitment, and internships. Send your resume to [email protected] with the subject line: Name – Watermelon Video – Location – Android/iOS.