On-device intelligence refers to running artificial intelligence (AI) applications directly on mobile devices. This article describes a practical scheme for deploying a large deep learning model on the device for the search re-ranking task in the Dianping search scenario, covering on-device feature engineering, the thinking behind model iteration, and the concrete deployment optimizations, in the hope that it will be helpful or inspiring to developers working in related fields.
1 Introduction
With the rapid development of big data, artificial intelligence, and other information technologies, cloud computing alone can no longer meet the data-privacy and low-latency requirements of certain scenarios. Drawing on the idea of edge computing, the ability to deploy AI on the terminal has gradually come into public view, and the concept of "on-device intelligence" has emerged. Compared with traditional cloud computing, AI modules deployed and run on smartphones and other terminals have the following advantages. First, keeping data local relieves cloud storage pressure and helps protect user privacy. Second, local computation relieves cloud computing load. Finally, on-device intelligence reduces the cost of communicating with the cloud, makes better use of on-device user interactions, and provides a more real-time, personalized service experience.
Major technology companies at home and abroad have taken the lead in applying on-device intelligence. Google has promoted on-device recommendation for Android apps, recommending content according to users' interests. Well-known products such as Apple's Face ID and the Siri assistant are also typical representatives of on-device intelligence. Alibaba, Kuaishou, ByteDance, and other companies have implemented on-device intelligence in their own application scenarios and released corresponding on-device inference frameworks; examples include Kuaishou's real-time short-video effects and intelligent object recognition. There are also practices in search and recommendation scenarios. Among them, the intelligent recommendation system deployed on the device in Mobile Taobao's "Guess You Like" achieved significant gains (EdgeRec [1]; IPV and GMV increased by 10% and 5% respectively on Singles' Day), and an on-device re-ranking scheme applied to Kuaishou's swipe-based recommendation feed increased time spent in the app by more than 1%.
Search is an important channel through which the Dianping App connects users with merchants, and more and more users search to find the services they want in different scenarios. Understanding users' search intent and ranking the results they most want is the most important task for a search engine. To further improve personalized ranking and the user experience, the search technology center has explored deploying a deep personalization model on the device. This article introduces the practice of on-device intelligent re-ranking in the Dianping App in the following parts: the first analyzes the problems to be solved and the overall flow of on-device re-ranking; the second introduces the exploration and practice of the on-device re-ranking algorithm; the third introduces the architecture design and deployment optimization of the on-device re-ranking system; and the last gives a summary and outlook.
2 Ranking system evolution: why re-rank on the device
2.1 Pain points of cloud-based ranking
Let us first look at the front-end and back-end flow of one complete search. As shown in Figure 1, after the user issues a query from the search entry on the mobile client, the cloud server executes query understanding, multi-channel recall, model ranking, display-information merging, and so on, and the results are finally returned to the client for rendering and presentation to the user.
Because of system-wide QPS limits and the size of front-end/back-end request payloads, a paging request mechanism is usually adopted. This client-server pattern, in which the client requests pages and the cloud service retrieves, ranks, and returns the list finally shown to the user, has the following two problems for Dianping's LBS and category-browse search products:
① Delayed updates of the ranked list
Paging restricts how promptly ranking results can be updated: nothing the user does before the next paging request has any effect on the ordering of the results already delivered. Taking the Dianping search result page as an example, one request returns 25 results to the client and each screen shows about 3 or 4 of them, so the user must swipe through roughly 6 to 8 screens before a new paging request fetches the next batch from the cloud (on the Food channel list page, more than 20% of searches browse beyond one result page). The cloud ranking system cannot detect changes in the user's interests in time and adjust the order of the results already delivered to the client.
② Delayed perception of real-time feedback signals
Generally speaking, real-time feedback signals are computed from mini-batched log streams via stream-processing platforms such as Storm or Flink and then stored in a KV feature store for the search ranking model. Because the feedback data must first be parsed, this approach tends to incur minute-level feature delays, which become more pronounced as the feedback data grow more complex. Yet real-time feedback reflects the user's current preferences and is highly valuable for optimizing search ranking.
2.2 On-device intelligent re-ranking: process and advantages
To eliminate the decision delay imposed by paging and to model real-time changes in the user's interests more promptly, we built an on-device re-ranking architecture that gives the client the ability to run deep-model inference. This scheme has the following advantages:
- In-page re-ranking with real-time decisions on user feedback: no longer constrained by the cloud's paging update mechanism, the client can re-rank locally, refresh intelligently, and make other real-time decisions.
- Zero-delay perception of the user's real-time preferences: feedback signals need not pass through a cloud computing platform, so there is no perception delay.
- Better protection of user privacy: in the big-data era, data privacy receives ever more attention, and the Dianping App is actively responding to regulators in implementing the Personal Information Protection Law and upgrading its privacy protections. Ranking over the relevant data can be completed on the client, better protecting user privacy.
After launch on Dianping search and the Food channel list page, on-device intelligent re-ranking achieved significant results: the click-through rate of search traffic increased by 25 bp (basis points), the click-through rate of the Food channel page increased by 43 bp, and the average per-query click-through rate increased by 0.29%.
3 Exploration and practice of the on-device re-ranking algorithm
Re-ranking has been studied and practiced extensively in search and recommendation; the core problem is to generate the top-K results from N candidates. In the on-device re-ranking scenario, our main job is to generate an arrangement of the candidate-merchant context according to the user's feedback on the previous ranking, so as to optimize the overall click-through rate of the search list page. Below we detail our explorations in feature engineering, real-time feedback-sequence modeling, and model structure for the on-device re-ranking scenario.
3.1 Feature Engineering
The approach to feature construction follows the cloud search ranking system: basic features along the User/Item/Query/Contextual dimensions, together with cross features, can be quickly reused on the device. Of course, transmission and storage need to be optimized, and feature consistency between device and cloud must be maintained so that development and deployment feel "seamless" across both; this is covered in more detail in the architecture and deployment optimization section below. In addition, there are real-time feedback signals unique to the device, including finer-grained interaction behaviors; these are exactly the key signals behind the real-time decision-making advantages analyzed above.
Specifically, the feature system built for the on-device re-ranking model is shown in Table 1 and mainly includes the following:
- Basic features: typical User/Merchant/Query/Context-side features and two-sided cross features.
- Bias features: mainly the ranking position returned by the back end, plus visual biases such as the display size on the terminal device.
- Users' real-time feedback features, an important part of the whole on-device re-ranking feature system, including:
  - Sequences of direct user interactions (exposures, clicks, etc.).
  - Behavior-associated features, such as dwell time and other interactions after clicking into a merchant's detail page.
3.2 User feedback behavior sequence modeling
There are many industry algorithms for modeling user feedback sequences, such as the well-known DIN (Deep Interest Network [10]), DIEN (Deep Interest Evolution Network [11]), and the Transformer-based BST (Behavior Sequence Transformer [12]). In the on-device re-ranking scenario, how user feedback behavior sequences are applied greatly affects the algorithm's effectiveness, so we explored this area as well.
Introducing a deep feedback network
When optimizing the fine-ranking model in the cloud, we generally consider only explicit "positive feedback" between users and merchants (clicks, orders, etc.); implicit "negative feedback" signals such as exposures without clicks are rarely introduced, because over both long and short histories such exposure-without-click behavior is extremely abundant and, compared with click signals, very noisy. On the device, however, this real-time exposure "negative feedback" signal matters. For example, for merchants of the same brand and category, a merchant's click-through rate shows a clear downward trend after repeated exposures.
Because implicit negative-feedback signals (exposures without clicks) account for a large share of the real-time feedback sequence, modeling the sequence as a whole lets them dominate the sparse positive-feedback signals. For the information-flow recommendation scenario on the Taobao home page, Alibaba proposed an adversarial method for mining the connection between exposure and click sequences, identifying which behaviors in the current exposure sequence are genuine negative feedback and which are more closely tied to clicks. The WeChat team proposed the Deep Feedback Network (DFN) [4], which denoises and debiases to some degree by modeling the interaction between positive and negative feedback signals.
Following this line of thinking, we first split the feedback sequence into positive and negative feedback sequences and use a Transformer to perform cross attention between the positive and negative feedback signals. Concretely, taking the exposure and click sequences as an example, the exposure sequence serves as Query and the click sequence as Key and Value, yielding the attention of exposures over clicks; swapping the roles yields the attention of clicks over exposures. Because positive feedback is sparse, when only a negative feedback sequence is available the attention computes averaged, irrelevant noise weights. We therefore follow the practice of [7] and introduce an all-zero vector into the negative feedback sequence to absorb this potential noise. The model structure is shown in Figure 4 below:
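As an illustrative sketch (not the production model), the cross attention with an appended all-zero slot can be written as follows in plain NumPy; the shapes and the single-head formulation are simplifying assumptions:

```python
import numpy as np

def cross_attention(query_seq, kv_seq, add_zero_slot=True):
    """Scaled dot-product cross attention between two feedback sequences.

    query_seq: (m, d), e.g. exposure (negative-feedback) embeddings.
    kv_seq:    (n, d), e.g. click (positive-feedback) embeddings.
    When feedback is sparse, an all-zero key/value slot is appended so
    attention can "attend to nothing" instead of spreading uniform
    noise weights over irrelevant items (cf. zero attention [7]).
    """
    d = query_seq.shape[1]
    if add_zero_slot:
        kv_seq = np.vstack([kv_seq, np.zeros((1, d))])
    scores = query_seq @ kv_seq.T / np.sqrt(d)           # (m, n[+1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ kv_seq                              # (m, d)

# Exposure sequence attends over click sequence, and vice versa.
exposures = np.random.randn(5, 8)   # 5 exposed-but-unclicked merchants
clicks = np.random.randn(2, 8)      # 2 clicked merchants
exp_vs_click = cross_attention(exposures, clicks)
click_vs_exp = cross_attention(clicks, exposures)
```

With an empty positive sequence, the zero slot receives all the attention weight and the output collapses to zero rather than to averaged noise.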
Improving the signal-to-noise ratio of the negative feedback signal
After the initial model launched on the Food channel list page, it achieved a stable improvement of 0.1% over the baseline, but the online gain still fell short of the offline gain and did not quite meet our expectations. Ablation experiments showed that the main cause was heavy noise in the negative feedback signal: a click on an exposed merchant may occur after the moment features are collected. To improve the signal-to-noise ratio of negative feedback signals, we therefore impose a minimum exposure duration on them: merchants exposed for a long time yet not clicked are more likely to be genuine negative feedback. As shown in Figure 5 below, longer exposure thresholds correlate with more stable feedback signals and better online effects.
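The filtering step can be sketched as follows; the threshold value and record fields are illustrative assumptions, not the production configuration:

```python
# Keep an exposure as a "real" negative-feedback signal only if the
# merchant stayed on screen long enough without being clicked.
MIN_EXPOSURE_MS = 2000  # assumed threshold; tuned online in practice

def build_negative_sequence(events):
    """events: list of dicts with shop_id, exposure_ms, clicked."""
    return [
        e["shop_id"]
        for e in events
        if not e["clicked"] and e["exposure_ms"] >= MIN_EXPOSURE_MS
    ]

events = [
    {"shop_id": 1, "exposure_ms": 3500, "clicked": False},  # real negative
    {"shop_id": 2, "exposure_ms": 400,  "clicked": False},  # too brief: noise
    {"shop_id": 3, "exposure_ms": 5000, "clicked": True},   # positive feedback
]
```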
Multi-view cross modeling of positive and negative feedback sequences
Continuing to iterate on the positive/negative feedback sequence model, we noticed that increasing the number of heads in the Transformer's multi-head attention brought no incremental benefit over a single head. Our analysis suggests that the multiple representations produced by random initialization amount largely to a mere expansion of the parameter count.
Moreover, in the Dianping search scenario, the merchants listed under a query are highly relevant overall, and results within a page are even more homogeneous; the differences lie mainly in fine-grained attributes such as price, distance, environment, and taste. We therefore designed a Multi-View FeedBack Attention Network (MVFAN) to cross-model the positive and negative feedback sequences and strengthen the interaction between exposure and click behaviors along these more explicitly perceived dimensions. The network structure is shown in Figure 6 below:
The user behavior sequence is split by feedback type into positive (clicked) feedback and negative (exposed but not clicked) feedback. Besides the shop ID itself, each sequence element carries more generalized attribute information (category, price, etc.) and context-related features (latitude/longitude, distance). These are summed to form the final representation of the positive and negative feedback sequences. A multi-layer Transformer then encodes them, and signals of several dimensions are fed in as queries to decode and activate the user's preferences along different merchant dimensions:
- The ID of the candidate merchant serves as Q to activate the real-time feedback behavior and express the user's latent, diverse interests.
- The merchant's finer-grained attribute information serves as Q to activate attention weights and strengthen the expression of user interest along these more explicitly perceived merchant representations.
- The current search context signal serves as Q to activate attention weights and strengthen the adaptive expression of real-time feedback behavior under different contexts.
$$Q = [x_s, x_c, \ldots, x_d] \in \Re^{K \times d_{model}}, \qquad K = V = x_s \oplus x_c \oplus \ldots \oplus x_d$$

Here each $x$ denotes the representation of one feedback-sequence view (shop_id, category, distance, position, etc.). With these as the Transformer inputs, the multi-view attention structure can be expressed as:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)W^O$$

$$\mathrm{head}_i = \mathrm{Attention}(Q_i W_i^Q, K W_i^K, V W_i^V)$$

$$\mathrm{Attention}(Q_i, K, V) = \mathrm{softmax}\left(\frac{Q_i K^T}{\sqrt{d_k}}\right)V$$
Ablation experiments show that activating the Transformer with these explicit merchant and context feature views is more effective than multi-head attention with randomly initialized heads.
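A minimal NumPy sketch of this idea: each "head" is an explicit feature view of the candidate merchant (ID, category, price, context, etc.) rather than a randomly initialized projection. View names and dimensions are illustrative assumptions:

```python
import numpy as np

def view_attention(q_vec, seq):
    """One view: the candidate's view vector activates the matching
    view of the feedback sequence via scaled dot-product attention."""
    d = q_vec.shape[0]
    scores = seq @ q_vec / np.sqrt(d)        # (seq_len,)
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # softmax over the sequence
    return w @ seq                           # (d,)

def multi_view_encode(candidate_views, sequence_views):
    """candidate_views / sequence_views: dicts keyed by view name
    (e.g. 'shop_id', 'category', 'price', 'context'). Per-view results
    are concatenated, playing the role of multi-head Concat."""
    return np.concatenate([
        view_attention(candidate_views[v], sequence_views[v])
        for v in sorted(candidate_views)
    ])

views = {"shop_id": np.random.randn(4), "price": np.random.randn(4)}
seqs = {"shop_id": np.random.randn(6, 4), "price": np.random.randn(6, 4)}
encoded = multi_view_encode(views, seqs)     # shape (8,)
```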
Match & Aggregate sequence features
For real-time user feedback features on the device, besides the various common attention-based sequence modeling methods, there is also an explicit cross-based interest extraction method. As shown in Figure 7, in contrast to the usual attention that computes "soft" weights via embedding inner products, this can be understood as a "hard" attention mode. Extraction takes forms such as Hit (whether there is a match), Frequency (how many matches), and Step (the interval since a match). Besides crossing on a single variable, multiple variables can be combined to increase the granularity and discriminative power of the behavior description.
Feedback-sequence cross features introduced with prior knowledge can, to some extent, avoid the noise introduced by "soft" attention, and they offer better interpretability. For example, when searching for "hot pot," a user may skip the nearby merchants and click a historically preferred merchant near their usual residence: a clear signal that the user is planning a visit in advance. Adding explicit strong cross features (for example, the distance between each candidate merchant and the merchants clicked in real time) captures this intent well, so distant merchants that better match the user's intent can be ranked higher. In the Dianping search scenario we introduced a large number of such prior cross features and achieved significant gains.
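The Hit / Frequency / Step extraction, plus a multi-variable cross, can be sketched as below; the field names (`category`, `price_band`) and the particular combination are illustrative assumptions:

```python
# "Hard"-attention cross features between a candidate merchant and the
# real-time clicked-merchant sequence.
def match_aggregate(candidate, clicked_seq):
    ids = [s["shop_id"] for s in clicked_seq]
    hit = int(candidate["shop_id"] in ids)          # Hit: matched or not
    freq = ids.count(candidate["shop_id"])          # Frequency: match count
    step = -1                                       # Step: actions since last match
    for i, sid in enumerate(reversed(ids)):
        if sid == candidate["shop_id"]:
            step = i
            break
    # Multi-variable cross: same category AND same price band
    combo = sum(
        1 for s in clicked_seq
        if s["category"] == candidate["category"]
        and s["price_band"] == candidate["price_band"]
    )
    return {"hit": hit, "freq": freq, "step": step, "cat_price_freq": combo}

clicked = [
    {"shop_id": 7, "category": "hotpot", "price_band": 2},
    {"shop_id": 9, "category": "hotpot", "price_band": 2},
    {"shop_id": 7, "category": "bbq",    "price_band": 1},
]
feats = match_aggregate(
    {"shop_id": 7, "category": "hotpot", "price_band": 2}, clicked
)
```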
3.3 Re-ranking model design
There is also a great deal of industry work on re-ranking, including multi-objective MMR (Maximal Marginal Relevance) optimization based on greedy strategies [8], context-aware list-wise models [2,3], and reinforcement-learning-based schemes [9]. In the on-device intelligent re-ranking scenario, we adopt a context-aware list-wise model, generating the top-K results by modeling the interactions among the top-N item contexts produced by the fine-ranking model. The overall model structure, shown in Figure 8 below, mainly comprises a device-cloud joint training scheme that introduces richer cross representations between device and cloud, plus Transformer-based context modeling, described below.
Device-cloud joint training
Generally speaking, a cloud re-ranking model largely reuses the features of the fine-ranking layer and adds the fine-ranking output's position or score on top. After long-term iteration, the Dianping search fine-ranking model has accumulated a large number of basic and scene-specific features and models multiple joint objectives, including clicks and purchases. Directly reusing these high-dimensional features with multi-objective optimization on the device would incur huge computation, storage, and transmission costs; but using only the cloud model's position or estimated score inevitably loses much of the cross-expression ability between device and cloud features. At the same time, iterating and updating models on both the device and the cloud would carry high maintenance cost.
We therefore adopted device-cloud joint training to bring the cloud's rich feature-cross signals and multi-objective high-order representations into the on-device application. As shown in Figure 9, after the model converges in the cloud, the on-device re-ranking task continues to fine-tune it. Two points are worth noting:
- Because the search ranking layer uses a list-wise LambdaLoss, the scores output by the model carry only relative meaning; they do not represent a calibrated range of merchant click-through rates and cannot be used as absolute values. Therefore only the output of the network's last layer is taken as input.
- Connecting only the last layer's dense output loses much of the crossing ability between cloud features and on-device features, so the most important ("head") cloud features must be picked out via feature selection and additionally fed to the on-device model.
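The scheme above can be sketched as follows; the layer sizes, the tanh activation, and the NumPy stand-ins for the real networks are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Cloud side: a converged fine-ranking tower, frozen after training.
W_cloud = rng.normal(size=(64, 16))

def cloud_last_layer(x_cloud):
    """Only the last dense layer's output is exported to the device:
    LambdaLoss scores are merely relative, so raw scores are not reused."""
    return np.tanh(x_cloud @ W_cloud)        # (batch, 16) high-order repr.

# --- Device side: a small re-ranking head that keeps fine-tuning.
W_device = rng.normal(size=(16 + 8, 1)) * 0.01   # trainable on-device

def device_score(x_cloud, x_device_selected):
    """Concatenate the frozen cloud representation with a small set of
    selected head features recomputed on the device, then score."""
    h = np.concatenate(
        [cloud_last_layer(x_cloud), x_device_selected], axis=1
    )
    return h @ W_device                      # (batch, 1) re-rank score
```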
Merchant context modeling for re-ranking
The merchant-context re-ranking structure follows PRM [3], with some adjustments for the on-device application scenario. The structure is shown in Figure 10 below:
It mainly comprises the following parts:
- Merchant feature vector X: the fully connected mapping of all the features described above (User/Shop single-sided and two-sided statistical cross features, feedback-sequence encoding features, and the fused cloud output). Because this output already contains position information, subsequent Transformer inputs need no positional encoding.
- Input layer: processed by Query Dynamic Partition, the input is split into one contextual merchant sequence per query unit and then fed into the Transformer layer for encoding.
- Transformer encoding layer: encodes merchant context relationships via multi-head self-attention.
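The Query Dynamic Partition step can be sketched as below, under the assumption (ours, for illustration) that it groups a flat batch of rows into one merchant-context sequence per query unit while preserving list order:

```python
from itertools import groupby

def query_dynamic_partition(records):
    """records: iterable of (query_id, shop_feature) rows. Returns one
    merchant-context sequence per query unit, order preserved within a
    query, ready to be fed to the Transformer encoder separately."""
    # sorted() is stable, so rows keep their original list order
    # within each query unit.
    ordered = sorted(records, key=lambda r: r[0])
    return {
        qid: [feat for _, feat in group]
        for qid, group in groupby(ordered, key=lambda r: r[0])
    }

parts = query_dynamic_partition(
    [("q1", "shop_a"), ("q2", "shop_c"), ("q1", "shop_b")]
)
```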
Optimization objective
In the search scenario we still focus on the user's search success rate (whether a click occurs). Unlike recommendation and advertising scenarios, which usually estimate item click-through rates with a global loss, the search business cares more about the quality of results at the top of the page, which should be prioritized. We therefore use a list-wise LambdaLoss for the re-ranking objective to improve the search click-through rate, introducing the ΔNDCG term into the gradient update to strengthen the influence of head positions. For the detailed derivation and implementation, see the earlier article on Dianping search deep learning ranking practice based on knowledge graphs [6].
$$C = \frac{1}{2}(1 - S_{ij})\,\sigma(s_i - s_j) + \log\left(1 + e^{-\sigma(s_i - s_j)}\right)$$

$$\lambda_{ij} = \frac{\partial C(s_i - s_j)}{\partial s_i} = \frac{-\sigma}{1 + e^{\sigma(s_i - s_j)}}\left|\Delta_{NDCG}\right|$$
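A small NumPy sketch of the lambda computation (the gain function, σ value, and the omission of the IDCG normalizer, which only rescales the weight, are simplifying assumptions):

```python
import numpy as np

def dcg_gain(rel, pos):
    """Standard DCG gain for relevance `rel` at 0-based position `pos`."""
    return (2.0 ** rel - 1.0) / np.log2(pos + 2)

def lambda_ij(s_i, s_j, rel_i, rel_j, pos_i, pos_j, sigma=1.0):
    """Lambda gradient for a pair where item i is more relevant than j,
    weighted by |DeltaNDCG| so that swaps near the head of the list get
    larger gradients than swaps near the tail."""
    delta_ndcg = abs(
        dcg_gain(rel_i, pos_i) + dcg_gain(rel_j, pos_j)
        - dcg_gain(rel_i, pos_j) - dcg_gain(rel_j, pos_i)
    )
    return -sigma / (1.0 + np.exp(sigma * (s_i - s_j))) * delta_ndcg
```

Swapping a relevant result between positions 1 and 2 changes NDCG far more than the same swap at positions 9 and 10, which is exactly the head-position emphasis the objective needs.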
3.4 Application Effects in multiple scenarios
Based on the above feature and model optimizations, the relevant offline experimental metrics are shown in Table 2:
The A/B experiment of on-device intelligent re-ranking launched on the main Dianping search and Food channel list pages significantly improved the core business metric QV_CTR from an already high baseline. As shown in the upper part of Figure 11, QV_CTR rose 0.25% on the main search list page and 0.43% on the Food channel list page, stable and positive across client platforms. Moreover, the per-position click-through-rate comparison in the lower part shows that on-device re-ranking alleviates, to some extent, the click decay caused by fixed paging requests, with especially clear improvement on later screens.
4 System architecture and deployment optimization
Unlike large deep models served in the cloud, where models of hundreds of GB or even TB can be deployed via distributed, sharded loading across many machines, the device is far more constrained. Although the computing and storage capabilities of terminal devices have improved markedly and can support inference with deep models of a certain scale, on-device storage is still very limited; after all, the whole App is only a few hundred MB at most.
Therefore, beyond the effect/performance trade-offs in feature selection and trigger-decision control discussed above, we further optimized model deployment and compression and evaluated power consumption and other metrics in detail. In addition, to iterate on-device models more efficiently, including further mining users' real-time interest and preference features, we developed a "seamless" model training and serving framework consistent with the cloud pipeline, introduced step by step below.
4.1 System Architecture
The overall on-device intelligent re-ranking architecture, including its joint deployment with the cloud search ranking system, is shown in Figure 12. Specifically, three modules support the on-device re-ranking system:
- Intelligent trigger module: schedules the on-device intelligence modules in response to various business-defined trigger events; for example, a user clicking a merchant triggers a local re-rank.
- On-device re-ranking service module: builds the feature data and calls the on-device inference engine to run the re-ranking model and output scores. Within it:
  - The feature-processing part is a general feature-operator service designed by the search technology center for search/recommendation/advertising algorithm scenarios. It supports various types of client-side and cloud data and builds features with lightweight, simple expressions.
  - The on-device inference engine is a unified model-management framework produced by the terminal R&D center; it supports deploying various lightweight inference engines on the device and dynamically distributing and controlling models.
- Native re-ranking logic: mainly handles interleaving the re-ranked output into the list and refresh control.
4.2 Deployment optimization of the large deep model on the device
Split deployment of the sparse Embedding and the Dense network
Because on-device computing resources are limited, the complete, very-large-parameter model cannot be stored on the device. The most straightforward idea, therefore, is to split the offline-trained model parameters into the Dense network and the large ID-feature Embedding Table and deploy them separately:
- The main Dense network and some small input layers (Query/Contextual features, basic Shop attributes, etc.) are converted to MNN format, stored on the Meituan resource-management platform, pulled once at client startup, and kept locally on the client.
- The massive ID-feature Embedding Table (about 80% of the network's parameters) is stored in TF-Serving in the cloud. The embeddings corresponding to the merchants on the current result page are retrieved from the serving service and returned to the client alongside the merchant list; after being concatenated with the remaining features built on the client, they are fed to the inference engine for scoring and re-ranking.
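The split can be sketched as follows; the table size, embedding dimension, and the NumPy stand-ins for TF-Serving and the MNN dense model are illustrative assumptions:

```python
import numpy as np

# --- Cloud side: the large ID-feature Embedding Table stays in a
# serving service; only rows for the current page's merchants are
# looked up and returned with the result list.
EMBEDDING_TABLE = {shop_id: np.random.randn(16) for shop_id in range(1000)}

def serve_page(shop_ids):
    """Return only the embeddings needed for this result page."""
    return {sid: EMBEDDING_TABLE[sid] for sid in shop_ids}

# --- Client side: a small dense network pulled once at startup (here a
# stand-in weight matrix for the MNN-format model) plus local features.
W_dense = np.random.randn(16 + 4, 1)

def rerank_scores(shop_ids, embeddings, local_features):
    """Concat served embeddings with locally built features, then score."""
    x = np.concatenate(
        [np.stack([embeddings[s] for s in shop_ids]), local_features],
        axis=1,
    )
    return (x @ W_dense).ravel()

ids = [1, 2, 3]
scores = rerank_scores(ids, serve_page(ids), np.zeros((3, 4)))
```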
Model compression
After the split above, the model size can be kept within 10 MB. To further reduce the model's footprint on the mobile device and its impact on power consumption and performance, we adopted the compression scheme provided by Meituan's Visual Intelligence department. Existing neural-network compression schemes often fail to account for the target deployment device, so the compressed model may not fit a specific device or may suffer output-alignment discrepancies; this toolchain was designed specifically for mobile deployment and makes better use of the inference framework's performance.
After compression and optimization, the test comparison below shows the model size is further reduced to under 1 MB, with precision loss on the order of 10^-5. Power consumption was analyzed with Sysdiagnose: with inference enabled, a repeated action sequence (search "hot pot/Wujiaochang" from the home page, enter the search list page to trigger the first re-rank inference, swipe the list to trigger further inference, then exit the page; 10 minutes of testing, one pass every 20 seconds) showed no significant change in the relevant energy metrics.
4.3 On-device model training and serving platform
Unlike the cloud ranking experiment workflow, which is supported by mature, well-developed training and serving platforms that make feature and model launches convenient and efficient, client-side experimentation initially suffered severe iteration-efficiency problems. The model launch process was cumbersome: splitting, converting, validating, and releasing the model structure relied on a great deal of manual work and on handoffs across multiple internal platforms. Feature iteration was also inefficient, requiring client developers to implement the corresponding feature-processing logic, with high risk of logical inconsistency and of divergent implementations across client platforms.
To address this, Meituan's front-end and back-end engineering teams jointly developed the Augur feature-processing framework for the client, and connected on-device model release and feature processing to the one-stop experiment platform (Poker) and the unified serving framework (Augur), laying a good foundation for further algorithm iteration. The search technology center team will later introduce the one-stop model training and serving platform for on-device intelligence applications; stay tuned.
5 Summary and outlook
On-device intelligent re-ranking is Dianping search's exploration in the direction of edge computing, and it has achieved significant gains on core metrics. By using on-device computing, users' real-time interests and preferences can be captured more efficiently, remedying problems such as cloud decision delay and delayed acquisition of user feedback. Adjusting the order of unexposed candidates in time to surface merchants that better match the user's intent brings a better search experience. At the same time, we upgraded the front-end/back-end training and deployment-serving framework, laying a good foundation for further rapid iteration.
The Dianping search technology center team will continue to apply on-device intelligence in various business scenarios. Future optimization directions include:
- Iterating a device-cloud joint intelligent search ranking model based on federated learning, on the premise of data privacy, security, and legal compliance.
- More accurate and diversified trigger control strategies: the current control strategy of the on-device real-time intent-perception decision module is relatively simple. Later we will consider combining query context, user feedback signals, and other features to output more flexible predictive signals, while also requesting more candidates from the cloud that match the user's current intent.
- Continued optimization of the re-ranking model, including the real-time feedback sequence modeling algorithm, and exploring more robust encodings of implicit negative feedback signals.
- Richer and more flexible application scenarios, such as personalized model customization, toward the ultimate personalized experience of "a thousand models for a thousand users."
About the authors
Zhu Sheng, Liu Zhe, Tang Biao, Jia Wei, Kai Yuan, Yang Le, Hong Chen, Man Man, Hua Lin, Xiao Feng, Zhang Gong, from Meituan/Dianping Business Division/Search technology Center.
Yiran, Zhu Min, from Meituan Platform/Search & NLP Department/Engineering R&D Center.
References
[1] Yu Gong, Ziwen Jiang, et al. "EdgeRec: Recommender System on Mobile Taobao." arXiv preprint arXiv:2005.08416 (2020).
[2] Qingyao Ai, et al. "Learning a Deep Listwise Context Model for Ranking Refinement." arXiv preprint arXiv:1804.05936 (2018).
[3] Changhua Pei, Yi Zhang, et al. "Personalized Re-ranking for Recommendation." arXiv preprint arXiv:1904.06813 (2019).
[4] Cheng Ling, et al. "Deep Feedback Network for Recommendation." (IJCAI-2020).
[5] The practice of Transformer in Meituan search ranking. Meituan technical team blog.
[6] Xiao Yao, Jia Qi, et al. Deep learning ranking practice of Dianping search based on knowledge graph. Meituan technical team blog.
[7] Qingyao Ai, Daniel N. Hill, et al. "A Zero Attention Model for Personalized Product Search." arXiv preprint arXiv:1908.11322 (2019).
[8] Teo CH, Nassif H, et al. "Adaptive, Personalized Diversity for Visual Discovery." (RecSys-2016).
[9] Eugene Ie, Vihan Jain, et al. "SlateQ: A Tractable Decomposition for Reinforcement Learning with Recommendation Sets." (IJCAI-2019).
[10] Zhou, Guorui, et al. "Deep Interest Network for Click-Through Rate Prediction." (SIGKDD-2018).
[11] Zhou, Guorui, et al. "Deep Interest Evolution Network for Click-Through Rate Prediction." (AAAI-2019).
[12] Chen, Qiwei, et al. "Behavior Sequence Transformer for E-commerce Recommendation in Alibaba." arXiv preprint arXiv:1905.06874 (2019).
Recruitment information
The Search Technology Center of the Meituan/Dianping Business Division is committed to building a first-class search system and search experience, meeting Dianping users' diverse search needs and supporting the search needs of every business in the Dianping App. Interested candidates are welcome to send their resumes to: [email protected].
This article was produced by the Meituan technical team, and its copyright belongs to Meituan. You are welcome to reprint or use its content for non-commercial purposes such as sharing and communication; please credit "Content reprinted from the Meituan technical team." This article may not be reproduced or used commercially without permission. For any commercial use, please email [email protected] to request authorization.