Author | Wang Lingyu
New Retail Products | Alibaba Tao Technology
Introduction: I took over the front end of Tmall search before Double 11 in 2017, developing its first requirement, an H5 single page. Since then, the front-end system of Tmall search has gone through quite large changes. Today I would like to share a phased summary, recording the past and present of Tmall search front-end technology, as well as my thinking about its future as the only front-end engineer on the business.
Roughly speaking, based on the evolution of front-end technology, the development and future direction of the Tmall search front end can be divided into the following eras:
- The PC era
- The H5 era
- The MV* era
- The Weex era
- The build era
- The deep build era
- The intelligence era
This division mainly follows the major shifts in the front-end technology direction of Tmall search and in the front-end technology system of Tmall and the Tao department. From it we can distill a keyword for each group of eras:
- Closed: the PC era, the H5 era, the MV* era
- Open: the Weex era, the build era, the deep build era
- Intelligent: the intelligence era
The following sections introduce the technical state of the Tmall search front end in each era, along with some of my thinking.
The PC era
The PC era was the ancient age of the Tmall search front end. It was the 2G/3G era, when mobile data was very expensive, so the mobile side consisted mostly of simple WAP pages and most people were still used to shopping on PCs.
Technical solution
Modularization
The front-end technology stack of the PC era was KISSY + MUI 3, where MUI 3 followed KISSY's KMD module specification, and some very old YUI dependencies still lingered on the page. At a time when jQuery ruled the world, Tmall's PC pages basically adopted KISSY, the large, all-in-one framework developed within the group and modeled on YUI. KISSY covered almost all the basics the front end needed: a module loader, DOM manipulation, event handling, asynchronous requests, and so on.
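For readers who never used KISSY, a KMD-style module of that era looked roughly like the sketch below; the module name, DOM ids and logic are made up for illustration.

```ts
// A minimal sketch of what a KMD-style KISSY module roughly looked like
// (module name, DOM ids and logic are hypothetical).
declare const KISSY: any; // exposed globally by the KISSY <script> tag

KISSY.add('tmall-search/hotwords', function (S: any, Node: any) {
  var $ = Node.all;
  return {
    init: function () {
      // every module manipulates the shared page DOM directly
      $('#J_Hotwords a').on('click', function (ev: any) {
        $('#J_SearchInput').val(ev.currentTarget.text);
      });
    }
  };
}, { requires: ['node'] });
```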
Page rendering
PC Tmall search used synchronous rendering: the main content of the page was rendered through Velocity (VM) templates and output to the front end. The front end therefore had to maintain a large amount of Velocity template code to make sure the HTML matched its own JS. Whenever HTML changes were involved, the Velocity templates had to be modified, dropping us into the purgatory of synchronizing templates and publishing deployments through AOne.
Module management
Module management on the PC Tmall search page, however, was very rough. A large number of KISSY modules were divided by business logic; together they operated on one shared page DOM, and cross-modification happened from time to time. Communication between modules was realized by directly calling each other's instance methods, so the modules were seriously coupled.
Summary
Looking back, the major problems of that era were:
- The large, all-in-one framework made pages heavy and bloated with dependencies
- The front end spent a lot of effort maintaining unfamiliar Velocity code
- It depended on the application environment, so templates and deployments constantly had to be synchronized, and debugging became impossible whenever the application was unstable
- Module division on the page was rough and modules were seriously coupled
- DOM operations were heavy and DOM management was chaotic
The H5 era
With the popularity of smartphones and the development of 4G, the cost of mobile data dropped sharply, demand on the wireless side kept growing, and so did its traffic. The Tmall front end set a Mobile First direction. Obviously the PC search stack could not meet the requirements of H5 search, so H5 search adopted a new technical solution.
Technical solution
Modularization
H5 search used Zepto + MUI 4 as its modularization solution. Compared with MUI 3, the main change in MUI 4 was replacing the KMD specification with the AMD specification, which was more common in the industry. Front-end templates were also introduced to handle DOM updates. Zepto is far lighter than the large, all-in-one KISSY, and front-end templates let each module manage its own DOM, effectively reducing the confusion caused by cross-module DOM operations.
Page rendering
Page rendering in H5 search is still controlled by the application, but only a few pieces of DOM, such as the frame and the filter, are rendered synchronously; most of the DOM, such as the product list, is rendered asynchronously with front-end templates. The synchronous template is therefore fairly small and much cheaper to maintain. At the same time, the Wormhole app was introduced to separate the front end from the back end: instead of unfamiliar VM templates, the front end maintains XTPL templates with the same syntax as the asynchronous templates, further reducing template-maintenance cost.
Module management
Unlike the PC, where even turning a page is a full page jump, behaviors such as filtering and paging on H5 must be rendered asynchronously by the front end, so module communication is relatively complex. H5 search used the MDV framework: each module is wrapped as a model, and the subscription mechanism MDV establishes between models is used to manage module instances and realize module communication.
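I no longer have MDV's exact API at hand, so the sketch below only illustrates the pattern: modules wrapped as models that communicate through subscriptions rather than by calling each other's instance methods.

```ts
// Sketch of the pattern only (not MDV's real API): each module is wrapped as a
// model, and models talk through subscriptions instead of calling each other's
// instance methods directly.
type Listener = (payload?: unknown) => void;

class Model {
  private listeners: Record<string, Listener[]> = {};
  on(event: string, fn: Listener) {
    if (!this.listeners[event]) this.listeners[event] = [];
    this.listeners[event].push(fn);
  }
  emit(event: string, payload?: unknown) {
    (this.listeners[event] || []).forEach((fn) => fn(payload));
  }
}

// hypothetical modules
const filterModel = new Model(); // the filter bar
const listModel = new Model();   // the product list

// the list reacts to filter changes via subscription, not direct method calls
filterModel.on('filter:change', (conditions) => {
  listModel.emit('reload', conditions); // re-render its area via a front-end template
});
```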
Summary
The technology used by H5 search solved several problems of the PC search era:
- Dependencies were reduced to meet H5 performance requirements
- Front-end and back-end development were separated via the Wormhole app, reducing template-maintenance cost
- Front-end templates reduced the chaos of cross-module DOM management
- MDV further decoupled modules and standardized module communication
However, as the business developed, some problems remained:
- Despite the front-end templates, all updates and state synchronization were still manual, with frequent DOM manipulation
- Modules were still divided by logic rather than by DOM, so when one logical module needed to operate on several DOM regions, crossover still occurred
- Although a module was a black box externally, its internal state often became confused by tangled interaction and business logic, and the DOM was frequently not updated when the state changed
- It still depended on the application environment: because Wormhole was itself an application, templates and deployments had to be synchronized frequently
The MV* era
Against the backdrop of "All in Wireless", for reasons such as the relatively fixed structure of search and the abundant Native development resources on the wireless side, the Tmall search business inside the app was taken over entirely by Native, while H5 focused on scenarios both inside and outside the app. This also caused H5 search's capabilities to fall behind Native, retaining only the core filtering function and the call-up function. As a result, H5 search received almost no front-end investment for a long time. With the organizational adjustment in August 2017, however, I gradually took over the Tmall search front end after handing over the Tmall commodity business I had been responsible for. The first requirement I had to deliver was an H5 search order-collection page containing 80% of H5 search's functionality.
Technical solution
Modularization
My teammates and I adopted a Preact + MUI 5 scheme. MUI 5 is essentially an upgrade of MUI 4: modules are written in CommonJS style and the build tools compile and wrap them into AMD modules. At that time Vue on Weex was already widely adopted in the campaign venues, so why did we choose Preact? There were several considerations:
- The team had a good grasp of the React stack
- The growth and popularity of Rax, plus some adjustments the Weex team was making, convinced us that Rax would be the main DSL for Weex in the future
- Preact is lighter than the other frameworks, and the search page, while interactive, is simple enough for it to fit the bill
- H5 was the big trend for the future, and the extreme-H5 experience our sibling team had accumulated in Tmall Supermarket gave us sufficient confidence
Therefore, at this stage we built many of the pages surrounding search with Preact, such as the H5 order-collection page mentioned above, the coupon pop-up layer in Tmall search, the coupon-collection page, the search category page and so on.
Page rendering
Because Preact renders asynchronously on the client, we abandoned application-side page rendering and adopted the Zebra source-code page approach. The advantage is that the front end no longer has to deal with AOne constantly; the server only needs to expose one MTOP interface, and the front end can create the basic HTML on Zebra, pull in resources, and call the interface to fetch data for rendering, so it fully controls its own debugging and development rhythm. Since the dependency on the application is limited to a single MTOP interface, the front end can agree on the data contract in advance and mock it with local debugging tools while the server develops the interface, which greatly reduces the cost of integration.
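A rough sketch of what that workflow can look like, with a hypothetical interface name and data contract; the real call would go through the MTOP client rather than a bare fetch.

```ts
// Sketch of developing against the single MTOP dependency with a local mock.
// The interface name, response fields and the mock switch are all hypothetical.
interface SearchResponse {
  items: { itemId: string; title: string; price: string }[];
  hasMore: boolean;
}

const USE_MOCK = location.search.includes('mock=1');

async function fetchSearch(params: { q: string; page: number }): Promise<SearchResponse> {
  if (USE_MOCK) {
    // the data contract agreed with the server, filled with fake data
    return { items: [{ itemId: '1', title: 'mock item', price: '9.90' }], hasMore: false };
  }
  // the real call goes through the MTOP gateway (exact client API omitted here)
  const query = encodeURIComponent(JSON.stringify(params));
  const res = await fetch(`/h5/mtop.example.search.list/1.0/?data=${query}`);
  return (await res.json()).data as SearchResponse;
}
```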
Module management
React's ecosystem has a wide range of module-management solutions: Flux, Redux, Reflux and the others we researched could all meet our needs. After careful consideration, however, we chose to go without any framework for managing module communication, based mainly on the following reasoning:
- First, plan the module structure and the communication structure of the page
- If the communication structure differs greatly from the module structure and is not flat, introduce a module-communication framework to flatten it
- If the communication structure differs little from the module structure and is itself flat, there is no need to introduce a communication framework
- Keep the module structure clear and decoupled, and stay open to the future: if the business becomes more tangled later, a framework can still be introduced to handle module communication (a Preact sketch of the flat structure follows this list)
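A minimal Preact sketch of such a flat structure, with hypothetical components: the page acts as the only hub and children communicate purely through props and callbacks.

```tsx
// Sketch of a flat, framework-free communication structure in Preact
// (component names and props are hypothetical; assumes jsxFactory is set to `h`).
import { h, render, Component } from 'preact';

const FilterBar = (props: { value: string; onChange: (v: string) => void }) => (
  <input value={props.value} onInput={(e: any) => props.onChange(e.target.value)} />
);

const ItemList = (props: { items: string[] }) => (
  <ul>{props.items.map((t) => <li key={t}>{t}</li>)}</ul>
);

// The page itself is the only hub: children talk to it through props/callbacks,
// so no Flux/Redux layer is needed while the structure stays this flat.
class SearchPage extends Component<{}, { filter: string; items: string[] }> {
  state = { filter: '', items: [] as string[] };

  onFilterChange = (filter: string) => {
    this.setState({ filter });
    // a real page would request the MTOP interface here and setState({ items })
  };

  render() {
    return (
      <div>
        <FilterBar value={this.state.filter} onChange={this.onFilterChange} />
        <ItemList items={this.state.items} />
      </div>
    );
  }
}

render(<SearchPage />, document.body);
```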
Summary
With the introduction of an MV* framework, our technical solution improved further:
- Module division is no longer based on logic but on DOM; DOM operations no longer need manual handling, and cross-module DOM operation is completely eliminated
- Using Zebra source-code pages, we no longer depend on the application environment, data mocking and debugging are more convenient, and there are no templates to maintain
- The internal state of each module is managed by the MV* framework, state is no longer chaotic, and the consistency of DOM and module state is guaranteed
Some problems remained:
- Performance is always a challenge for H5
- It is still source-code development: not flexible enough, and it cannot be assembled by operators
- Every time a requirement arrives, the whole page has to be modified and released; for similar industry customizations, each one has to be developed again from scratch
Drawing on Tmall Supermarket's extreme-H5 experience, search made many performance-optimization attempts on the category page, such as:
1. Code optimization
- Split the list into cells for lazy loading
- Adopt Crossimage-optimized image-loading CDN suffixes
- Build a custom bundle and remove all unused dependencies from the generic solution
- Upgrade to the latest loader to improve computation performance
2. DNS optimization
- dns-prefetch: resolve DNS ahead of time
- Converge image and resource domain names
3. Interface optimization
- Use DLP to cache MTOP interfaces
4. Tracking optimization
- Defer Golden Arrow click-tracking points until after pageload
- Use a non-overwriting aplus script to leverage the browser cache
- Send via POST to trigger sendBeacon
5. Load optimization
- Add a Service Worker layer to cache JS, CSS and images (a sketch follows this list)
- Use ZCache to cache page templates, JS and CSS
6. Perceived-experience optimization
- Use Ranger to automatically append the hide-navigation-bar parameter to URLs, preventing the jitter caused by hiding the bar manually after page load
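As a reference for the Service Worker item above, here is a minimal cache-first sketch; the cache name and asset list are made up, not the actual search configuration.

```ts
// sw.ts: minimal sketch of the Service Worker cache layer (cache name is hypothetical).
const CACHE = 'search-static-v1';

self.addEventListener('install', (event: any) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/index.css', '/index.js']))
  );
});

self.addEventListener('fetch', (event: any) => {
  const url = new URL(event.request.url);
  // cache-first for static JS/CSS/images, network for everything else
  if (/\.(js|css|png|jpg|webp)$/.test(url.pathname)) {
    event.respondWith(
      caches.match(event.request).then(
        (hit) =>
          hit ||
          fetch(event.request).then((res) => {
            const copy = res.clone();
            caches.open(CACHE).then((cache) => cache.put(event.request, copy));
            return res;
          })
      )
    );
  }
});
```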
The Weex era
After Double 11 in 2017, the unchanging form of search also came under challenge. In a scene with huge traffic and strong user intent, search could support more scene-based, vertical and brand-oriented gameplay, and the product's need for faster trial and error created a growing demand for dynamism. Meanwhile, because of organizational changes, the Tmall search client team had shrunk to two engineers, one for iOS and one for Android, and client-side requirements could only be delivered iteratively with app releases. Out of this came the Native-embedded Weex pit (slot) solution and Weex templates delivered through the Oreo platform.
Technical solution
Modularization
Even before I took over search, the Weex scheme and Oreo were already in use on a small scale, including Weex 1.0 Vue modules written by some client engineers, as well as some other modules. With staff adjustments and departures, the repository sources of these modules had become hard to find, and they were Weex-only and completely unusable for H5. So after I moved into search, I proposed the Prax solution, which essentially used Rax 0.x (0.4 at the time) + Preact, compiled and transformed for both Weex and the Web. The reason for keeping Preact was that we had off-app H5 scenarios, Rax's H5 performance was genuinely worrying back then, and the Weex team was not investing much in H5 performance optimization at that time. In addition, as described in the MV* era above, we already had plenty of accumulation on the Preact + MUI 5 stack. In the end we decided to write Preact code and convert it into Rax code with automated tooling, achieving one source reused on both Weex and the Web. The front-end development flow became the following:
- Write a Prax module containing the basic, reusable, pure-UI code
- For a Native-embedded Weex pit, write a Rax module that references the Rax build of the Prax module, adds tracking, container linkage and other capabilities, and is published online through a dedicated template-publishing system for the pit to use
- For a Prax page, write a Zebra module or page that references the Prax module and publish it through the Zebra platform; inside the app the Weex build of the Prax page is used, outside the app the H5 build is referenced
Page rendering
Prax pages use Zebra for page rendering. Since source-code pages are bundled, the scaffolding contains two Webpack configurations, one for Web and one for Rax, and both Weex and Web bundles are generated at build time. The Web side is essentially the old Preact approach, so rendering follows the Preact page frame. For Rax, a new page framework was developed: Weex rendering is essentially a Weex bundle whose code generation is controlled by XTPL, so the bundle compiled from the source page only needs to be injected into the Weex bundle by XTPL.
For Native-embedded pits, template delivery is used. The front end compiles a self-contained, runnable Weex bundle, complete with header and tail, and publishes it to the machines serving the search application. When the client's search request reaches the server, the server decides which modules to use based on business logic and tells the client the module information, including module name, module position and module data. When rendering the Native page, the client creates a container at the module's position, requests the Weex code by template name, renders it into the Weex container and passes in the module data to complete the module's rendering. In the Rax code, window.weex_data is used to read the data the client stuffs into the container. Of course, these adaptations can be handled at the build level, and developers generally only need to focus on writing the pure-UI module.
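A small sketch of how the Rax-side entry might read that data, with a fallback for local debugging; the exact shape of weex_data is whatever the server/client contract defines.

```ts
// Sketch of reading the data the client stuffs into the Weex container via
// window.weex_data; the field names are hypothetical and depend on the
// server/client contract.
interface PitData {
  moduleName?: string;
  moduleData?: Record<string, unknown>;
}

function getPitData(): PitData {
  const g = globalThis as any;
  if (g.weex_data) return g.weex_data as PitData; // injected by the Native container
  return { moduleData: {} };                      // fallback for local / H5 debugging
}

const pitData = getPitData();
```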
Module management
Module management on Prax pages is basically the same as in the Preact days. For Native-embedded Weex pits, however, the scheme is completely different. What looks like a single Weex module shown in the pit is actually a complete Weex page rendered from a single module, so Weex modules cannot communicate with each other; they are not even in the same Weex container. For this reason, Native-embedded Weex mostly handles purely display-oriented requirements, while complex linkage requirements are still handed to the client. In addition, the client encapsulates some common capabilities for Weex modules, such as the following (a sketch of calling them follows this list):
- Get the current search filter conditions
- Merge new filter conditions and re-request
- Clear the filter conditions and re-request
- Replace the query term
- An MTOP cache mechanism within a single filter session
- ...
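A sketch of how a Weex module might call such a client capability; the bridge module name and method names are invented here, and only weex.requireModule itself is the standard Weex mechanism (Rax code often uses the @weex-module/* import form instead).

```ts
// Sketch of calling a capability the client exposes to Weex modules.
// The module name 'searchBridge' and its methods are hypothetical.
declare const weex: { requireModule: (name: string) => any };

const searchBridge = weex.requireModule('searchBridge');

// read the current filter, merge a new condition and re-request
searchBridge.getFilter((filter: Record<string, string>) => {
  searchBridge.mergeFilter({ ...filter, brand: 'HUAWEI' });
});
```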
Summary
After the Weex era, our technology improved again:
- One set of code is reused on both Weex and H5
- Introducing Rax solved the in-app performance problem
- For off-app scenarios, we also kept H5 performance under control and accumulated optimization levers
- The flexible Weex release mechanism breaks through the app-release restriction and meets the product's demand for dynamism
- For the search pits, we also settled on a mechanism that lets industry front-end teams participate in building them
But there are still a few unresolved issues:
- The problem of source-code page development remains: it is not flexible enough, cannot be co-built, and cannot be replicated at scale
- The display logic inside a search pit module is hard-coded on the server, so both scaling to new industries and connecting dynamic data sources require fresh coding
- When multiple modules target the same scene at the same time, their display logic can cross, introducing all kinds of problems
The build era
By this point we can see that the core problem of the search front end had gradually become co-building and scale, no longer the purely front-end technical problems of before. Since we want to solve co-building and scale, we can look at a classic example of successful scaling: the campaign venue. Venues also used to be pages hand-built by front-end engineers; later, with Zebra, operators could assemble them from modules, and the front end only needed to develop some basic modules to satisfy the demands of every industry. How wonderful! Can search be built the same way?
The answer is yes, but the scene is different after all, and we cannot simply stack up floors the way a venue does. Why?
First of all, the scene hierarchy is very different. When a user enters a venue, the current scene is already determined: one venue page is one scene, so under that page we only need to select the modules belonging to that scene. In search, every scene has to be presented within the single search page. So how do we define a scene in search?
- Query term: what did the user search for? A category term, a brand term, a broad term, or a specific model?
- Category: which category does the term belong to? Which primary category? Which leaf category? Or does it span many categories?
- Crowd: which crowd does the user belong to? A trendy young woman? A family man?
- Time: when does the user come in? Christmas? Chinese New Year? A birthday?
- LBS: where is the user? Is there a store offering services nearby?
- ...
One or more of these dimensions, interwoven together, define a scene, and within that scene one or more modules are presented.
Take the query term as an example (a simplified rule sketch follows this list):
- The user searches a brand term (e.g. Huawei): display the Minisite brand site
- The user searches a specific category term (e.g. perfume): display industry modules
- The user searches "gifts" (a broad term): display a gift-scene guide card
- ...
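Conceptually these dimensions boil down to rules that map a recognized query or scene to modules. The sketch below is only an illustration of that idea; the real matching logic lives on the server and algorithm side.

```ts
// Highly simplified sketch of scene-to-module rules (the shapes are hypothetical;
// real matching runs on the server / algorithm side, not in the front end).
type QueryType = 'brand' | 'category' | 'broad';

interface SceneRule {
  match: (q: { type: QueryType; text: string }) => boolean;
  modules: string[];
}

const rules: SceneRule[] = [
  { match: (q) => q.type === 'brand', modules: ['minisite'] },
  { match: (q) => q.type === 'category', modules: ['industry-card'] },
  { match: (q) => q.type === 'broad' && q.text.includes('gift'), modules: ['gift-guide-card'] },
];

function modulesForQuery(q: { type: QueryType; text: string }): string[] {
  return rules.filter((r) => r.match(q)).flatMap((r) => r.modules);
}
```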
Of course, the goods inside a venue module can be fully personalized, and venues now have personalized ("a thousand faces for a thousand people") module capability built on Kangaroo. The earlier venue above is used only as an example.
Based on this thinking, search launched the Giraffe project, which provides the brand Minisite as an entry point for brand merchants to run their brand operations.
As shown above, there have been three versions of Minisite, from left to right. As you can see, Minisite, as the place where brands set up shop inside search, used to work by providing fixed modules for brands to fill with data. Brands had little room for customization and little desire to maintain it, so its shopping-guide efficiency was never high. In the Giraffe project, search opens a new layer below the product list that displays a page built and fully customized by the brand itself. For large brands operating across categories, we also open up brand term + primary category term combinations, so they can build a dedicated page for each primary category.
After the success with brand terms, we extended the capability to the industry side and built a mind-share product on Meigao, the search middle platform, so that industries can operate category terms in the same way and build their own industry pages.
Mindsite and Minisite target category terms and brand terms respectively and do not cross each other. But platform-level requirements that span multiple categories, such as channel requirements and user-acquisition requirements, easily cross with other scenes. To solve this, we settled a module-placement scheme, Unbounded, on Meigao, which makes cross-scene placement possible at the module level: by simply defining a module's display rules, modules from different scenes can be shown on the same page.
Technical solution
Modularization
First come Mindsite and Minisite. Their core technical schemes are similar: both use Chiba (a page-building platform for merchants) as the module-building platform and wrap a layer of page-delivery strategy management on top of Chiba's page-building capability. The modularization scheme is in fact no different from the venue's, except that modules are implemented with Prax so as to be compatible with both Weex and H5:
1. Coding phase
- Each module defines the data structure it is fed
- Modules do not contain page-level common dependencies (such as Rax base components, MTOP, etc.)
2. Rendering phase
- A Solution controls the rendering of the page template and, for Weex, wraps the Weex header and tail
- Web module code can be fetched asynchronously, while Weex code is usually exported into the bundle synchronously; the data used by the modules is laid down on the page with a convenient way to read it
- The Solution introduces a PI (page-initialization script), which takes the module data, pulls the module code, and delivers the data from the page to each module (a sketch of the PI follows this list)
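A simplified sketch of the PI's job on the Web side, under the assumption that the Solution has serialized the module list and data onto the page; the loader here is a stand-in for the real asynchronous module loading.

```ts
// Sketch of what the page-initialization (PI) script does on the Web side.
// The global __pageModules and the in-memory registry are hypothetical stand-ins
// for the data the Solution lays down on the page and the real module loader.
type Renderer = (el: HTMLElement, data: Record<string, unknown>) => void;

interface ModuleConfig {
  name: string;                  // module identifier on the building platform
  data: Record<string, unknown>; // data filled in by the operator / merchant
}

// In reality module code is pulled from CDN; a registry stands in for it here.
const registry: Record<string, Renderer> = {
  'demo-banner': (el, data) => { el.textContent = `banner: ${data.title}`; },
};

async function loadModule(name: string): Promise<Renderer | undefined> {
  return registry[name]; // the real PI would load the module bundle asynchronously
}

async function runPI() {
  const configs: ModuleConfig[] = (globalThis as any).__pageModules || [];
  for (const cfg of configs) {
    const renderModule = await loadModule(cfg.name);
    if (!renderModule) continue;            // unknown module, skip
    const mount = document.createElement('div');
    document.body.appendChild(mount);
    renderModule(mount, cfg.data);          // deliver the page data to the module
  }
}

runPI();
```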
When we get to Unbounded, however, none of this works directly. Unbounded is module-level placement: there is only one container, and the placement scene is likely to cross with the Mindsite and Minisite scenes:
- If no Mindsite or Minisite page exists, we need to create a page that shows only the Unbounded modules
- If a Mindsite or Minisite page already exists, we need to insert the current module into that page
We use the following technical solution to insert an Unbounded placement module into the page (a sketch of the insertion follows this list):
- The user initiates a search and the request reaches the server
- When the server responds, if a Mindsite or Minisite scene is matched, the Chiba page URL of that scene is returned
- If no Mindsite or Minisite scene is matched but Unbounded modules are hit, an empty page URL is returned
- If Unbounded modules are matched, the server also returns their information, including module name, module version, module data and display rules
- The client creates the Giraffe Weex container, renders the Mindsite, Minisite or empty page, and passes the Unbounded module data into the Weex container
- Page rendering triggers the PI, which reads the Unbounded modules, pulls their code, and inserts their instances into the page according to the display rules
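On the front-end side, the insertion step can be sketched as: find the anchor described by the display rule and mount the module there. The rule shape below is a simplified assumption.

```ts
// Sketch of how the PI might insert an Unbounded module according to its display rule.
// The rule shape ("insert before the Nth matching element") is a simplified assumption.
interface UnboundedModule {
  name: string;
  data: Record<string, unknown>;
  rule: { anchorSelector: string; index: number }; // where to insert
}

function insertUnbounded(
  mod: UnboundedModule,
  render: (el: HTMLElement, data: Record<string, unknown>) => void
) {
  const anchors = document.querySelectorAll(mod.rule.anchorSelector);
  const anchor = anchors[mod.rule.index];
  if (!anchor || !anchor.parentElement) return; // scene not present, skip quietly

  const mount = document.createElement('div');
  anchor.parentElement.insertBefore(mount, anchor);
  render(mount, mod.data); // module code is pulled the same way as other modules
}
```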
In this way, all the module content for the current scene is finally rendered. In addition, we have a set of scene-intervention rules in the Unbounded back end to handle manual intervention when scenes cross. We also provide a dynamic data-source mechanism, so that third-party HSF and TPP data can be bound directly to modules, decoupling modules from their data.
Page rendering
Page rendering, as shown above, still goes through Zebra, but with some differences:
- Because the page is built from modules, it is uniformly controlled by the PI, and page rendering depends on the PI's behavior
- For the Giraffe container, it is still Native that creates the Weex container for rendering, but unlike the earlier single-module Weex pages, what the Giraffe container renders is a genuinely modular page
- For Unbounded modules, an insertion mechanism solves the display problem when scenes cross
Module management
For communication within the page, everything is managed uniformly through the PI, which means the various capabilities of a module need to be realized via the PI. Because the communication structure of this kind of floor-based building is generally flat, events basically bubble from a module up to the PI, and the PI dispatches them to the target module; direct communication between modules is generally not recommended.
For communication between the page and the container, we provide some encapsulated capabilities in the PI, and modules can also call container-provided methods themselves, just like the Weex-module solution for Native-embedded pits.
Summary
Since Giraffe's technical solution has moved beyond the traditional front end and involves a lot of client and server design, it is really the design of a whole product architecture, which is hard to explain fully here; a diagram may make it a bit clearer:
Here the front-end technology takes another step forward:
- Visual building gives merchants and industries a place to operate
- It lays the foundation for future brand and industry scaling
- It provides a better industry co-building mechanism
- Dynamic data sources plus Unbounded modules are more flexible and adaptable
But a few questions remain unanswered:
- We look more and more like the venue, yet our modules are not interoperable with venue modules (the venue finally moved to Rax, but they use Rax 1.0 while we are still on Rax 0.6)
- The building granularity is too coarse: floor-based building is not flexible enough to meet differentiated industry demands (imagine the search list being one module; changing the product card style means developing a whole new search list)
The deep build era
What search does best, of course, is the stream of products the user wants most and is most likely to buy. This capability can also be reused on the home page, channels, stores, albums and so on. Under floor-based building, what do we do when a list page has to be built in many different places?
- First, create a new XXX search-list module
- Find an infinite-list component
- Develop the product-pit display this business requires
- In the infinite-list component's loadMore, request MTOP to fetch the next page of data
- Publish the module for building
- When a new demand comes, add another flag and branch on it to switch data sources and product-pit displays
If we keep handling things this way, scaling across industries will run into the following problems:
- Different industries have different requirements for displaying goods: the home-appliance industry shows key product attributes, while the home-decoration industry highlights door-to-door installation services. Different product-pit displays mean a lot of development cost
- Because co-building inside one repository is difficult, it is easier to just take every demand onto yourself
- Because data sources differ, when business logic crosses, the conditionals in the front-end code grow exponentially
These are obviously things we don't want to see. But when we strip the components out of their different scenarios completely, some patterns emerge:
- If the infinite list can be pulled out as a component, it must be reusable
- Product display and data-source access are in fact very suitable for co-building: let each industry's front end connect to the industry's own data; professional people doing professional things
Further refinement:
- Interaction is reusable
- UI is customized per scenario
- Data is customized per scenario
So is it possible to build on this idea a little bit further?
Modularization
The answer is obvious: the concept of deep building is designed to solve exactly this problem. Modules on the page are redefined by function:
- Data module: has no DOM code of its own and is only responsible for processing data. It listens for events, loads new data, and passes it to the downstream modules that consume it, triggering a refresh
- Container module: is itself a container, responsible for receiving data and sending it to display modules for display. It does not process data itself; instead it tells the data module what it needs by triggering event interfaces the data module defines
- Display module: a pure UI module that handles no logic. The only thing it does is display the data the container module gives it. It does not even handle events internally; everything bubbles up to the container module
In this way the container module owns the interaction, the display module owns the UI, and the data module owns the business logic, for example a filter list plus a product stream. Put more concretely, in React-hooks terms (a sketch follows this list):
- The data module is essentially a reducer exposing a [state, dispatch] pair. State contains the filter information and product data, which the container module renders; dispatch defines loadMore, changeNav and other methods for the container module to call
- The container module defines the navigation and product pits as pure display areas, sends the filter information and product data to the associated display modules for rendering, and passes down callbacks such as click handlers. When a filter is clicked, it triggers the data module's changeNav via dispatch; when the page scrolls to the bottom, it triggers loadMore via dispatch
- Display modules are purely stateless UI components; different components display different areas
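A bare-bones sketch of the three roles under that hooks analogy; the names, fields and string-based rendering are simplifications, since real modules are Rax components.

```ts
// Sketch of the three module roles (names, fields and the string-based rendering
// are hypothetical simplifications; real modules are Rax components).

// ---- data module: essentially a reducer exposing [state, dispatch] ----
interface ListState { filter: string; items: string[]; page: number }
type Action = { type: 'loadMore' } | { type: 'changeNav'; filter: string };

function listReducer(state: ListState, action: Action): ListState {
  switch (action.type) {
    case 'changeNav':
      return { ...state, filter: action.filter, items: [], page: 1 };
    case 'loadMore':
      return { ...state, page: state.page + 1 }; // a real data module requests MTOP here
    default:
      return state;
  }
}

// ---- display module: stateless, only renders what it is given ----
const renderItemCell = (item: string) => `<div class="cell">${item}</div>`;

// ---- container module: owns the interaction and wires data to display ----
function renderList(state: ListState, dispatch: (a: Action) => void): string {
  // on a filter click the container would call: dispatch({ type: 'changeNav', filter })
  // on reaching the bottom it would call:       dispatch({ type: 'loadMore' })
  const cells = state.items.map(renderItemCell).join('');
  return `<div class="list" data-filter="${state.filter}">${cells}</div>`;
}
```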
Reasonable scene division is crucial. Each business scenario should be a combination of data module + container module + display modules. The container and display modules can be reused in most cases, while the data module usually needs to be redeveloped. Make the data module fit the container and display modules as far as possible: if the interaction does not change, the container module does not change; if the UI does not change, the display module does not change.
At this point modules were switched to Rax 1.0, aligning with the venue's development mode for reuse. Rax 1.0 is optimized for the Web, with lighter bundles and better performance, and has become the main way Tmall front-end business is developed, so Prax is no longer needed.
Page rendering
There is a very important difference from traditional building: there are associations between modules. For example:
- A data module needs to be associated with container and display modules, and its state and dispatch must be handed to the corresponding modules
- Display modules are managed by container modules: a container module defines areas, and different areas use different display modules
- A container module can even be associated with another container module to enable nested display, which is generally used for pure display that involves no interaction (such as the search-list filter or the white space in the middle of a waterfall stream)
So we need a global way to handle these associations. First, the associations must be declared when a module is built. For this I defined three fields:
- key: the unique identifier of a module, used by container and data modules
- data: for a container or display module, filled with the key of the data module it uses
- container: if the module sits inside another container, filled with the key of that container module
data and container take two-part values. For example, data = a:b means using endpoint b of the data module whose key is a, while container = a:b means this module is a display module placed in area b of the container module whose key is a. A hypothetical example follows.
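For illustration, a hypothetical module list using these three fields might look like this (all identifiers are invented; the real values live in schema.$attr):

```ts
// Hypothetical example of the three fields on a deep-build page.
const pageModules = [
  { key: 'listData',  type: 'data' },                                 // data module
  { key: 'listBox',   type: 'container', data: 'listData:list' },     // uses endpoint "list" of listData
  { key: 'itemCell',  type: 'display',   container: 'listBox:item' }, // placed in area "item" of listBox
  { key: 'filterBar', type: 'display',   data: 'listData:filter',     // fed by endpoint "filter"
    container: 'listBox:nav' },                                       // placed in area "nav" of listBox
];
```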
These fields are declared in the scaffolding and generated into schema.$attr at initialization time. It is the page-initialization script, the PI, that recognizes these fields and links the modules together. The PI does the following (a sketch of the wiring follows this list):
- Scan all data modules, instantiate them, store them together, and remove them from the render list
- Organize the remaining render modules into a tree structure based on container, associate each node with its data modules based on data, and render the top layer
- Pass the child-module data to the container module together with a default render method; the container module decides how to handle the child data (modify, merge, override) and then renders it with the render method. If there are child modules at the next level, repeat these steps
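The wiring the PI performs can be sketched as follows, reusing the hypothetical fields from the example above; this is only an outline of the steps, not the real implementation.

```ts
// Sketch of the PI wiring steps described above (module shapes are hypothetical,
// reusing the key/data/container fields from the earlier example).
interface Mod {
  key: string;
  type: 'data' | 'container' | 'display';
  data?: string;      // "dataKey:endpoint"
  container?: string; // "containerKey:area"
}

function wire(mods: Mod[]) {
  // 1. lift the data modules out of the render list and instantiate them
  const dataModules = new Map(
    mods.filter((m) => m.type === 'data').map((m) => [m.key, m] as [string, Mod])
  );
  const renderMods = mods.filter((m) => m.type !== 'data');

  // 2. organize the rest into a tree keyed by their container field
  const roots: Mod[] = [];
  const children = new Map<string, { area: string; mod: Mod }[]>();
  for (const m of renderMods) {
    if (!m.container) { roots.push(m); continue; }
    const [parentKey, area] = m.container.split(':');
    const list = children.get(parentKey) || [];
    list.push({ area, mod: m });
    children.set(parentKey, list);
  }

  // 3. render top-down: each container decides how to handle its children,
  //    and each node pulls its state from the data module named in its data field
  return { roots, children, dataModules };
}
```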
Module management
There are two kinds of module communication here: one based on a shared data module, and one based on the PI.
A page can have multiple data modules, and each data module can be connected to multiple container/display modules (which is why data takes a two-part value). So when multiple modules need to communicate, one of them can trigger the data module's behavior through dispatch, and the data module's refreshed data then affects the other modules.
PI-based communication is generally for special scenarios. A classic example is the sticky filter bar: when the page has many modules, the filter list does not fill the screen but scrolls with the whole page, and once the filter bar scrolls out of view it needs to be replaced by a simplified sticky bar at the top. In this scenario the sticky bar is an absolutely positioned layer while the page document flow is another layer, and the most reasonable way to share state between the two layers without DOM manipulation is through a data module. So the sticky-bar component defines both the sticky style and the normal style and exposes a method for creating a data module; when rendering it, the PI creates a dedicated data module for it to share the filter-bar state.
Summary
Why make it so complicated? Because we want the UI and the interaction to be as pure as possible. Once the UI and interaction of a single scene are simple enough, we can introduce tools to improve efficiency, such as:
- Defining interactions declaratively, with zero development, to generate container modules, e.g. Domagic
- Defining the UI declaratively, with zero development, to generate display modules, e.g. imgcook / Magic Cut
As a result, most of the content on the page becomes reusable and the cost of scaling is greatly reduced.
The intelligence era
Search is still exploring the deep build era, so what follows is my own speculation based on the team's direction, but I believe this era will come soon.
Still, the approaches above rely on people and require human input. Since tools already let us handle interaction and UI through configuration, could a machine automatically generate the data module or the interaction for us, without even needing configuration?
This is what our team is working on. Students on the team are currently trying to implement intelligent UI based on machine-learned models: you only need to supply some key information (for example, which fields of a product should be displayed), and the corresponding display or interaction module is generated automatically. These modules are then used in deep building, and the front end only needs to define the content source through a data module to finish building the whole page, at zero UI cost.
Thinking further: just as putting an elephant into a refrigerator takes three steps, intelligent search should also take only three steps:
- Step one: we tell the machine what content to show, and it shows it to the user in the most efficient way
- Step two: we tell the machine what kind of content to show, and it works out the most efficient content of that kind and shows it to the user
- Step three: we tell the machine our scenario, and it finds the content that fits the scenario and presents it to the user
To be specific, this might be:
- Step one: we have the product, and the model is trained to find the best way to display it
- Step two: we define a benefit to show, say a store coupon, and the model decides the most suitable content to display with it: showing some hot products alongside the coupon to induce an order, or combining the store coupon with category vouchers and subsidies at the product level to present a striking discount, or finding the store voucher whose threshold is closest to the total price the user expects to pay, and so on
- Step three: we define a scenario. For example, if we tell the machine this is a mother-and-baby channel, it automatically analyzes that the channel's main consumer group is young mothers, whose biggest pain point is the endless variety of child-rearing products and the worry about quality, and therefore recommends mainly trusted shopping-guide content, together with quality endorsements such as brand guarantees and platform services like user reviews
Therefore, we may need a three-tier model:
1. The first layer takes different content and finds the most reasonable UI to display it: the intelligent-UI model.
2. The second layer produces that content, which can be divided into three dimensions: industry & platform, brand & store, and goods.
- For differentiated industry demands, industry operations decide what type and form of industry content to show. For the benefit-with-goods mode just mentioned, an industry expression model is needed to find the most suitable benefit and the most suitable goods to combine with it
- For pure product display, it is about matching people and goods to the extreme, finding the goods the user wants most right now
- For brands and stores, brands need to shape their mind-share on the consumer side, and each brand has its own expression. For example, if a brand wants to make a new product a hit, it needs a brand expression model to assist its operations
3. The third layer is more complicated:
- The industry content model is really the distillation of operations experience: it breaks operational goals down into scenarios and, within each scenario, turns operations experience into content. This requires every type of industry content as input, and the model must, based on past experience, combine them into the content mix best suited to the current scene
- The brand content model is similar: it integrates and absorbs a brand's resources across the whole Tao ecosystem and offers effective operational suggestions (recommended modules) for brand and store operations, improving efficiency and reducing operating costs
- For matching people and goods, three models are split out to handle goods, scenarios and users respectively:
**Product model:** extracts a product's most attractive features, such as selling points, positive reviews and sales volume, and structures the product so it can be displayed in different places;
**Scene model:** defines the user's current buying scene from real-time behavior. Perhaps he wants to buy a TV and is browsing lots of TVs, or he is in a TV store checking the online price of the same model, or he plans to buy a TV for his parents before Spring Festival, and so on;
**User model:** classifies users into groups based on historical behavior for fine-grained operations. For example, a user who usually buys trendy, cool items popular with young people but buys gifts for elders around Mother's Day or at certain times is clearly a filial child; for this user, gift recommendations for elders can be made on other festivals and important days.
So what data do we have?
- Industry & platform: industries have their own accumulated experience, and the platform has all kinds of marketing methods
- Brands & stores: brands have new products, limited-time offers, special events, offline stores and so on
- Goods: the attributes, reviews, selling points, etc. of each product
- User, offline: the user's historical behavior, plus some manually collected user profiles
- User, real time: recent behavior, where the user entered the page from, their location, and the time of day