This article was written a year ago and is reprinted here. You can add Scott on WeChat (Codingdream), become a friend of a friend, and chat about anything and everything, haha.


Video of the first GraphQLParty

Good afternoon, everyone. My name is Scott, and I am the initiator of Hangzhou's first GraphQLParty.

Back to the subject of the meeting: why hold a meeting at all? When registration opened we hoped to cap attendance at 100 people, 120 at most, but more than 250 people signed up and the list was still full. We had no choice but to screen by work experience, selecting mostly senior engineers with more than three years of experience, plus some with two years and some with five or ten years or more.

If you know anything about domain-driven design, could you please raise your hand? (Audience interaction) Fewer than 10 people. If you use GraphQL in your work? Also fewer than 10. That is exactly why we are holding this meeting.

The reason is that when we wanted to try GraphQL, we found very few GraphQL resources in China (documentation, communities, conferences) and nowhere to exchange experience. We ended up reaching out to friends in the same industry, and found that everyone felt the same lack of communication. After we finally stepped over the potholes on this road ourselves, we thought that if we summarized what we had learned, we could share it with everyone.

In the past year, actually about 10 months, our team has put GraphQL into practice on some products and found that we really benefited from it. We feel this experience is worth discussing with everyone.

Going back to the theme of the conference, there are several key words: collaboration, efficiency, domain-driven design (which gets its own session), and the change in front-end and back-end roles. These are the words I feel most strongly about. The value a technology brings is not only the improvement in development efficiency itself, but also additional value, which may be better collaboration or a change in front-end and back-end roles. That is why GraphQLParty was held and why this theme was chosen. No company in China has really talked publicly about this topic, whether GraphQL or domain-driven design, let alone their combined value. We thought we could be the first to take a bite of it, and then work with everyone to push it forward.

This afternoon there will be five sharing sessions, covering both the front end and the back end, as well as a comparison of methodologies. Of course, some of it will be more practical and some less so. What we asked of the lecturers is not to talk about lofty things but about down-to-earth ones. At the same time, we do not know where the industry as a whole stands; we will simply present what we ourselves have done, whatever its level, and lay it all out for you.

We have also listed some questions here, which were posted when registration opened. This afternoon you may have different questions about the talks; your question may be among these or not, but it doesn't matter. If you want to understand GraphQL and domain-driven design, there are some questions you probably cannot avoid.

You can listen to this afternoon's five sessions with these questions in mind. Whether you will get the answers is not certain, but by the end you will have your own judgment.

As for when all of these will be solved: senior practitioners here today may face an ultimate question. Suppose every lecturer today convinces you and you decide to go domain-driven: what benefits will it bring your team, what technical challenges will you meet, and what pitfalls? The technical challenges and pitfalls will be introduced by my colleagues later; I will first tell you how much Song Xiaocai has gained and give some cases.

Before the cases, let's first look at our product background, since technology cannot be separated from its scenario. Song Xiaocai nominally has 10 front-end engineers, but actually fewer than 10, maybe 7, and several of them were only recruited in recent months. We maintain apps, a market research system, a reporting system, and so on. This may look like the workload of a big team, but we have at most ten front ends. That brings many problems, such as cooperation between apps and between people. We used an unstable version of React Native early on and made a lot of mistakes; the cost of development and cooperation was too high, so we had to build or adopt tooling to improve development efficiency. Just as on the PC side, we need to solve problems such as resource release, packaging, compilation, versioning, and caching. We are no exception: almost all of these internal tools were developed by those 7 people to solve specific problems.

Here's an example: the "Big Uncle" package-push system. The problem was that when we needed to release an APP package, if one person built the package and sent it to a colleague over DingTalk, the upload could go wrong, and such failures did happen. This scenario cannot be solved by throwing people at it, so we developed the Big Uncle push system and let machines talk to machines. Besides Big Uncle there are Big Cousin, Big Melon Seed and others. Why such names? Because we want the products to feel as down-to-earth as possible, plus the names are homophonic puns in Chinese; the packaging tool, for example, is a pun on da bao ("build a package").

After solving all those problems, we found it still wasn't enough. The internal efficiency of the front-end team improved and coordination costs within the team did come down, but the cost between teams could not be reduced. For example, we could not escape these three typical problems, all closely tied to front-end/back-end cooperation.

The first is synchronizing report-like pages across ends. We now develop mini programs, APPs, and PC pages, and for our company's scenarios the ends are even more diverse: the same report may appear in the ERP, in the APP, and in the mini program, and people with different permissions see the report in different dimensions. In essence, though, the underlying data source is the same, so we may end up developing many interfaces to serve different apps. This problem cannot be solved by the front end alone.

The second is sharing modules across multiple ends. "Module" is not a precise word, so let me explain: take a user module, an order module, a logistics module; each module may contain one or two components. Whatever the components are, the basic data they need is likely the same, or nearly so; the UI is just rendered differently on different ends. We want to share data between these modules, but it is difficult: we develop a component on one end, bind it to an interface, drop it into another module, and find it does not work there. That is also a big cost.

The third is that the business changes rapidly and the product always needs to be upgraded and iterated; inevitably the UI designer will find work for you. When a version is revised, a field is added, a field is removed, or several fields are combined, so the interface has to be upgraded or a new interface added, which also leads to high cooperation costs.

Why is that? Because, as we all know, a typical development process in the industry looks roughly like this. Ours may be displayed a bit longer here, yours may be shorter, but you cannot skip these few links. Look at the ones marked in red: system design, which involves how to set up the Java server project and its skeleton and how to split services; finally the server-side students design the database and table structure and decide what fields sit on those tables. For example, the server side provides you with five interfaces, 15 fields each. After the front and back ends agree on the contract, the server side helps you make mock data; the front end builds the page against the mock data, tunes the page interaction, and then switches to the real interface. That is roughly the routine. There are many tools in this stack, and plenty of third-party open-source tools can be used. But we found the front end is blocked at one point, and has been blocked for many years without resolution: interface design is in the server side's hands, and the front end does not know what interface it will be given. The interface review may last 15 minutes or an hour, and it is basically hard for the front end to understand the business meaning behind each field, so it has to repeatedly confirm with the server-side students afterwards. Because of the blockage at this point, the three problems above are not easy to solve.

The first is API design. Some server-side students are more or less forced into it: "I have to design APIs for your ever-changing UI." When those pages change, the APIs may have to be upgraded too. The second is overlapping responsibility for mocks. Many companies in the industry have built their own mock tools and platforms, and we have also used third-party ones. Sometimes the front and back ends jointly maintain the same mock interface, sometimes the server side maintains it, but either way there is a coordination cost, and who is ultimately responsible is never clear. The server-side student finally gets to focus on the underlying business development after finishing the mock, then temporarily needs to adjust a field or an interface but forgets to update the mock document; the front end doesn't know, and the two sides only discover it during joint debugging later. That is a typical workflow-collaboration problem. Another problem, whether for a toB or a toC company, is reports. A report may sound like just one requirement, but management actually needs to observe the whole picture of the company's transactions over the past week and month from different dimensions: scale, tonnage, logistics, inventory. When the business changes, the report dimensions change. Traditional report development involves both ends: the server side queries across different tables in the database and produces a standard field structure, and the front end renders it into a table. The work itself is simple, but it still has to be scheduled, one day or two or three, so the speed at which reports can be delivered is very limited.

In view of these problems, let me show you the technical solution Song Xiaocai adopted in its own business scenario. Our current solution is to integrate a GraphQL aggregation service at the gateway layer. In the second session our architect will talk about the specific architecture diagram, so I won't go into it separately here. Our ideal way to access GraphQL is to embed it as a pipe inside the gateway, but for now, to get running quickly, we have temporarily put it behind the gateway: we did not want authentication and security to cost too much development, so we leave those to the gateway, and the GraphQL service does only data aggregation.

So with the system transformed, we started to answer the question: what are the benefits? In 2016 and 2017 we spent almost two years developing 50 reports for the whole company. We never calculated the total development time in detail, and it is not that the whole company only needed those 50 reports; rather, we only had enough people to develop 50, so report development became a bottleneck. With GraphQL rendering on the client side, plus some assembly work on the server side, we now provide a visual report-editing system for product managers, operations staff, and server-side engineers, who configure reports through a visual interface.

Four months after the system went online, it had produced more than 200 reports, absorbing all of the company's demand for reports. What happens now is that the product manager has a meeting with the business side; the business side says, "I need to see a week's worth of data for a service station, plus this metric and that metric." Before the meeting is over, the product manager has finished configuring the report and published it. That is the rhythm of report delivery now. By building this system we found that GraphQL could bring us great convenience, so we kept digging, and its value kept sinking from the APP side towards the server side, which is how "Big Cousin" was born. The earlier report system was called "Da Biao Ge" ("big spreadsheet"), a homophonic pun on Excel tables; this one is "Da Biao Ge" ("Big Cousin"), another of the company's products. They are all relatives.

Big Cousin is what we are sharing today: the GraphQL aggregation service. This practice is less than 3 months old and has run in a few projects so far. Our current assessment is that it saves manpower: if an ordinary project used to need the front and back ends to develop together for 4 days, we can now reduce it to 3 days. And that is the single-person case; with more people involved, the saving in coordination cost becomes even more obvious through this system.

That was the benefit in business results; there are other benefits besides. This brings us to another key word of the day: the change in front-end and back-end roles. I don't know what the front end and server side look like in your minds, but we are now moving in this direction: the front end has a certain control over the data on the page. Only I know what data I need, and because I control the data I can deliver pages faster, including walking through business processes, which button to click, which event to trigger. To do that I have to understand the business meaning behind each field. I used to say I only need to understand the UI, the interaction, and the product, and I don't care about the business; whatever fields you give us, we just consume. Now the challenge on the front end is that I become responsible for some of these changes. On the server side it is great, because they are freed from designing an API for every kind of page.

This liberation raises two questions:

The first question: what should the server side do with the freed-up time? The second question: if the front end steps into this layer, what else does it want from me (the server side)?

For the first question: with the freed-up time, the server side can drop the glue code and the mocks, and spend the time saved on designing the underlying services to provide more stable data services. In turn, the front end also hopes that the server-side students provide interfaces designed per domain, stable designs rather than volatile ones, so that every front-end page revision does not cause an earthquake of back-end service changes. This is the change in the front and back ends.

So how do we actually obtain this benefit through code and engineering? Next, my colleague Chen Jinhui will share the concrete project with you. A round of applause for Chen Jinhui.

Chen Jinhui: Hi, everyone, I'm Chen Jinhui, a front-end engineer at Song Xiaocai. Scott has just told us about the results of Song Xiaocai's GraphQL practice over a period of time and the data aggregation system we built on GraphQL, Big Cousin, one more member of Scott's big family of in-house tools. Here's a brief overview of what I'm going to talk about: what is GraphQL? I'll start with a bit of popular science, then show what we are trying out and what still needs to be done, and finally open up the imagination about GraphQL.

Let's start by looking at where Big Cousin, the data aggregation system, sits in our overall architecture. You can see it sits between our gateway and the back-end data services. As Scott explained earlier, our gateway already exists and already handles things like authentication and security, so when we built the data aggregation service we put it behind the gateway. GPM, which stands for GraphQL Pipe Manager, is a system that assembles the data provided by back-end services.

This is a structure diagram of GPM's internal architecture, which is divided into two parts. One is the production service; you can see it is relatively simple, because we try to simplify it to ensure the stability of the production data service. The other part, which is a little more complicated, is the development service, where we can edit and manage types and test them, and then apply them to the production data service.

After this overview of how we use GraphQL, let me first introduce what GraphQL is, since some of you may not have been exposed to it before. GraphQL stands for Graph Query Language. The official tagline is "a query language for your API". Explained plainly: the client sends a query statement, your GraphQL service parses that statement and, through a set of rules, returns the query results from your "API database". GraphQL is the query language for that system, just as SQL is for MySQL.

After a short period of practice, Song Xiaocai found that GraphQL brings us five conveniences: a single entry point, documentation, avoiding data redundancy, data aggregation (the most important capability for our system), and finally data mocking as a nice bonus.

Let's start with the single entry point. With traditional RESTful APIs, both the front end and the back end need to do API management: version management on one side and path management on the other, which is troublesome and increases project-management complexity. With GraphQL you only need one entry. As mentioned earlier, GraphQL is like a database with only one entrance: we just access that entry and send it the statement we want to query to get the corresponding data, so it is a structure of a single endpoint plus diversified queries.
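As a rough illustration of what "single endpoint" can look like in Node.js, here is a minimal sketch using the express-graphql middleware; the schema, port, and field are invented for the example and this is not GPM's actual code.

```js
// single-endpoint.js -- minimal sketch, not GPM's actual code
const express = require('express');
// Note: in older express-graphql versions the middleware is the default export.
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// A tiny illustrative schema; the field is made up for this example.
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

const rootValue = { hello: () => 'world' };

const app = express();

// One endpoint serves every query; there is no /v1/users, /v1/orders, ...
app.use('/graphql', graphqlHTTP({ schema, rootValue, graphiql: true }));

app.listen(4000, () => {
  console.log('GraphQL endpoint at http://localhost:4000/graphql');
});
```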

The second point is documentation. Although GraphQL cannot completely replace traditional documents, it can help to a certain extent. For traditional RESTful API documentation there are many tools on the market, such as Swagger, Alibaba's open-source RAP, ShowDoc, and so on. But these documentation tools have a learning cost: Swagger, for example, may not be too complicated for an experienced developer, but a new developer takes a while to learn it. Then there is the headache of keeping the API and the document in sync when using these platforms; many teams end up writing their own synchronization plug-ins to solve it.

Using GraphQL solves some of these documentation problems to a certain extent. When defining a GraphQL type, we can add a description to the type and to each of its properties, which works like a comment on the type. When the type is compiled, we can see its details in the corresponding tools: its description is "article", and the meaning of each attribute is shown to everyone. As long as we write types in a standardized way during development, the resulting documentation display is quite standard.
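For illustration, here is a minimal, hypothetical SDL sketch showing descriptions attached to a type and its fields; the Article shape follows the example used later in this talk, and the exact strings are invented.

```graphql
"A user of the platform"
type User {
  id: ID!
  name: String
}

"A comment left under an article"
type Comment {
  id: ID!
  content: String
  author: User
}

"""
An article published on the platform; this description shows up in the docs panel.
"""
type Article {
  "Unique id of the article"
  id: ID!
  "Author of the article, resolved from the user service"
  author: User
  "Body text of the article"
  content: String
  "Comments attached to this article"
  comment: [Comment]
}
```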

Another great feature of GraphQL is that each type is roughly equivalent to a collection in Mongo or a Model in Mongoose, and the relationships between types can also be visualized vividly by tools. For example, when a model is used somewhere in the system, you can see all the models related to it.

Link: apis.guru/graphql-voy…

GitHub's GraphQL API v4 exposes all of GitHub's public types. On the left you can see the definition and explanation of each type, and the whole schema is rendered as a rather large and complex UML-style diagram. The core type we use most is the repository type; you can see how complex and central it is, how many types are associated with it, and the issues attribute under it. All of this can be explored directly against GitHub's open API v4.

The third benefit of using GraphQL is that you can avoid data redundancy. With traditional RESTful APIs there are roughly three ways to handle redundant data fields:

  • First, the front end chooses whether to display these fields or not;
  • Second, build an intermediate layer (BFF) that filters out these fields and then returns the result to the client for display;
  • Third, the most traditional and most troublesome way, which often still doesn't work, is for the front and back ends to negotiate: if a field in an interface is unused, you can discuss deleting it with the back end. But sometimes a redundant field simply cannot be removed, because the back-end students may have written a "universal interface": the same interface is used in this page, in another page of the same application, perhaps even in another application, with part of the data shared across ends. Over time the fields become very redundant, but deleting any of them could affect many places, so the interface becomes too big to touch and both ends just have to live with it.

Using GraphQL, the problem of redundant interface fields can be avoided, because the front end decides what data structure it wants returned. As I just explained, GraphQL is a query language: using it is just like querying a database. Whichever fields of a piece of data you write in the query statement are exactly the fields returned to you.

Take this as an example: I want to fetch the article with id 1. If I only want id and content and specify those two fields in the query, then only id and content come back; if I also want the author information in addition to id and content, I just add author to the query and GraphQL returns the author information as well. So the front end decides what structure of data it wants and gets back exactly that data.
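As a sketch, the two queries just described could look like this; the field names follow the hypothetical Article type above, and a Query field article(id: ID): Article is assumed.

```graphql
# Only id and content are requested, so only they come back.
query ArticleBasic {
  article(id: 1) {
    id
    content
  }
}

# Add author to the selection and the author object is returned as well.
query ArticleWithAuthor {
  article(id: 1) {
    id
    content
    author {
      id
      name
    }
  }
}
```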

The most important one, of course, is data aggregation. In the traditional RESTful world there are several ways to handle it:

One: the front end sends separate requests to multiple data sources for the same page and renders them one by one, which may cause parts of the page to load at different times. Two: build a data-assembly middle layer (BFF) that assembles the data provided by the back end and returns it to the front end. Three, the scheme Song Xiaocai used in the early days: the back-end students write an API per page, the so-called glue code, to splice together the data from each service and return it to the front end.

In the third case there is a lot of engineering and a lot of APIs for us to maintain. None of these problems exist with GraphQL, which supports data assembly natively.

Why does it support data assembly out of the box? Let me try to explain roughly how GraphQL executes. This is the general flow: the first step is to parse the query and validate it against the schema. The second step is to build the execution context. The third step is to collect, from the query statement, all the fields that need to be resolved; this is also how GraphQL avoids returning redundant data. The fourth step is to execute the resolver for each field and take the corresponding data from what the resolver returns. Finally, the result is formatted and returned.
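Those steps correspond roughly to functions that graphql-js itself exposes; here is a simplified sketch of the pipeline (not GPM's actual code; the schema is assumed to come from elsewhere).

```js
// execute-pipeline.js -- simplified sketch of the steps described above
const { parse, validate, execute } = require('graphql');

async function runQuery(schema, source, contextValue) {
  // Step 1: parse the query string and validate it against the schema.
  const document = parse(source);
  const errors = validate(schema, document);
  if (errors.length > 0) {
    return { errors };
  }

  // Steps 2-4: execute() builds the execution context, collects the
  // requested fields, and runs each field's resolver.
  // Step 5: the result comes back already shaped as { data, errors }.
  return execute({ schema, document, contextValue });
}
```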

In GraphQL there is a concept called type. Each type has one or more fields, and each field is bound to a resolver whose job is to fetch that field's data. For example, the Article type has four fields: id, author, content, and comment, and each field has its own resolver. The resolver can be redefined by the developer; if it is not, GraphQL supplies a default one. For instance, the author field of Article is of type User, and users can be obtained from the user service, so we can redefine the author resolver to fetch user information through UserService. The same goes for comments: we fetch comment data via CommentService. So a query for an article returns the article's own data, while the author and comment information are fetched through UserService and CommentService, assembled, and returned to the client. That is how GraphQL achieves data splicing.
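A sketch of what resolvers for such an Article type could look like; ArticleService, UserService, and CommentService stand in for the back-end services mentioned above and are assumptions, not Song Xiaocai's real interfaces.

```js
// article-resolvers.js -- illustrative only; the service clients are assumed
const resolvers = {
  Query: {
    // Fetch the article's own data from the article service.
    article: (parent, args, context) => context.ArticleService.getById(args.id),
  },
  Article: {
    // author is of type User, so fetch it from the user service
    // using the authorId carried by the parent article.
    author: (article, args, context) => context.UserService.getById(article.authorId),

    // Comments are fetched from the comment service; GraphQL assembles
    // them into the same response automatically.
    comment: (article, args, context) => context.CommentService.listByArticle(article.id),
  },
};

module.exports = resolvers;
```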

Fifth, as an additional point, we can mock data appropriately with GraphQL. How do you mock with GraphQL?

GraphQL types can be roughly divided into two kinds:

The first is scalar types: GraphQL provides Int, Float, String, and so on, just like ordinary programming languages. Scalar types are also resolved by resolvers, so we can mock them by redefining those resolvers: what range Int returns, what range Float returns, what format String returns, and so on. We can also define custom scalar types to mock simple but regular data commonly used in development, such as phone numbers, image URLs, and ID-card numbers. The second kind is object types, like the Article type in the example above. An object type can have multiple fields, each of which is either another object type or a scalar type, and it can be mocked too; mock data for something like Article is then generated automatically.

Another convenience of mocking with GraphQL is that canonical mock data can easily be reused. In the example above, the author field of Article is of type User, and the User type is used not only for the article's author but also for a comment's author, so once we mock the User data it can be used in both places in the query. We can also play tricks, such as returning a random user out of several, or returning dummy data based on the query arguments.
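A sketch of how this could be wired up with the graphql-tools of that era (addMockFunctionsToSchema); the Phone scalar, the type shapes, and the mock values are invented for the example.

```js
// mock-sketch.js -- illustrative mocks, not GPM's real configuration
const { makeExecutableSchema, addMockFunctionsToSchema } = require('graphql-tools');

const typeDefs = `
  scalar Phone

  type User    { id: ID, name: String, phone: Phone }
  type Comment { id: ID, content: String, author: User }
  type Article { id: ID, content: String, author: User, comment: [Comment] }
  type Query   { article(id: ID): Article }
`;

const schema = makeExecutableSchema({ typeDefs });

addMockFunctionsToSchema({
  schema,
  mocks: {
    // Mock scalar types by "redefining their resolver": every Int or Phone
    // generated anywhere in the schema follows these rules.
    Int: () => Math.floor(Math.random() * 100),
    Phone: () => '138****' + String(Math.floor(Math.random() * 10000)).padStart(4, '0'),

    // Mock an object type once and it is reused wherever User appears,
    // whether as Article.author or as Comment.author.
    User: () => ({ name: 'mock user' }),
  },
});
```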

Using GraphQL to mock data has several benefits:

  • One of the benefits is that the mock data follows the type. When we modify the type, the mock data is also synchronized. There is no case of the mock data being out of sync with the type.
  • The second advantage is that it is easy to implement fine-grained mock data, the principle of which was just explained, which greatly improves our development efficiency.
  • The third benefit is that mock data can be reused, saving development time.
  • Finally, responsibility for mock data can be shared between the front and back ends, or taken over by the front end entirely: since the front end is usually the consumer of the mock data, why not produce it itself and save a lot of communication cost?

Here is a simple demonstration (the live demo was operated online; only some screenshots are inserted here) of the BFF service we developed, GPM. It is still in a trial period, and its pages are still relatively simple.

In GPM, every type is created through a form; of course the code form is still available in a specific place, we have simply visualized each type. A newcomer just clicks the add-type button, specifies the type name, fills in its description, sets the cache validity period according to the type's actual situation, and chooses which of Song Xiaocai's applications it is bound to. You can then add fields to the type, specifying each field's name, type, description, cache validity, and mock data.

The system also supports testing and publishing directly online: after editing a type, we can deploy it to the development environment and debug it in the IDE to see whether the returned data is correct. If the front end doesn't want a field, we delete it from the query and execute again to get data without that field. We can also take mock data for this query straight from the IDE.

Based on the schema we have produced, the IDE automatically prompts us while writing queries, just like a normal desktop IDE, and the documentation of each type can be seen in the panel on the right. GPM separates the IDE into a production-service IDE and a test-service IDE; the production one queries online data. After testing new or modified types, we can deploy them to the production environment without re-releasing GPM.

As mentioned in the documentation section, GPM also integrates the document view: you can see what these types are, what they mean, and what the relationships between types are, all easily viewed here.

We also added some extra functionality to GPM. Because most of the back-end microservices we consume are called over RESTful interfaces, we built tracing specifically for RESTful requests, and here you can see every RESTful access.

The most important thing is the tracing of each GraphQL query. As you can see, when we execute such a query we can see the returned data, the execution time, the details of the query statement, and how fast each field was resolved. Another example is this interface, which is bound to two services: one is the inventory service and the other is the supplier-information service. This way we can see how efficiently each queried field executes and tell the back-end students what to optimize based on the result.

There are also some of our custom mocks. That is roughly what GPM looks like as a whole. In the interest of time, I'll only briefly cover how we implemented online editing and deployment of GraphQL.

GPM is built with Node.js, so this solution is specific to Node.js; solutions in other languages still need to be explored. There are several key points in implementing this feature.

One key point is replacing the schema. Schemas can in fact be swapped: as long as we pick up the schema in a specific way on each execution, every GraphQL execution uses the latest schema.

The second key point is how to modify an existing schema. We split a GraphQL schema into two parts: the type definitions and the resolvers. As mentioned above, each type has fields, and each field is bound to a resolver. We can keep the type definitions separate from the resolvers, and we can also layer the resolvers appropriately. This is GPM's layering; it is just our own scheme, and there are others, as we'll mention later. After the resolvers and types are defined, we bind them together with some development tools to generate a GraphQL schema.
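A sketch of that separation: the type definitions are plain strings that could be edited through GPM's forms, the resolvers are kept apart, and a helper such as makeExecutableSchema binds them together. Whether GPM uses exactly this helper is an assumption here.

```js
// build-schema.js -- sketch: definitions as strings, resolvers kept separate
const { makeExecutableSchema } = require('graphql-tools');

// In GPM the type definitions would be edited through forms and stored
// somewhere (e.g. a database); here they are just an inline string.
const typeDefs = `
  type Article { id: ID, content: String }
  type Query   { article(id: ID): Article }
`;

// Resolvers live apart from the definitions; the service client is assumed.
const resolvers = {
  Query: {
    article: (parent, args, context) => context.ArticleService.getById(args.id),
  },
};

// Bind the two halves together into an executable schema.
let currentSchema = makeExecutableSchema({ typeDefs, resolvers });

// Swapping in a freshly built schema on every edit/deploy lets each
// execution pick up the latest version without restarting the process.
function replaceSchema(newSchema) {
  currentSchema = newSchema;
}

module.exports = { getSchema: () => currentSchema, replaceSchema };
```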

The type definition is essentially a string; the real question is how to generate resolvers dynamically. That is the third key point, discussed briefly here.

First, we need to simplify the resolver. The form of a resolver is fixed; its function signature looks like this: under the type there is the field name, and the field's resolver takes four parameters. The first parameter is the resolved result of the parent type, from which we may take data the field depends on. The second is the query arguments. The third, and the most important one here, is the execution context we just mentioned, through which we can call the various bound services. So the general form of a resolver in GPM is: step one, assemble the parameters; step two, call the service through the execution context. That way resolvers, and thus data, can be generated dynamically.
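A sketch of that "assemble the parameters, then call the service through the execution context" pattern, generating a resolver from a stored field configuration; the config shape and service names are hypothetical.

```js
// dynamic-resolver.js -- sketch of generating a resolver from stored config
// fieldConfig might look like:
//   { service: 'UserService', method: 'getById', argsFrom: { id: 'parent.authorId' } }
function makeResolver(fieldConfig) {
  // The generated function has the standard resolver signature:
  // (parent, args, context, info)
  return async function resolver(parent, args, context, info) {
    // Step 1: assemble the call parameters from the parent result and the
    // query arguments, according to the stored mapping.
    const callArgs = {};
    for (const [name, path] of Object.entries(fieldConfig.argsFrom || {})) {
      callArgs[name] = path.startsWith('parent.')
        ? parent[path.slice('parent.'.length)]
        : args[path];
    }

    // Step 2: call the bound back-end service through the execution context.
    const service = context[fieldConfig.service];
    return service[fieldConfig.method](callArgs);
  };
}

// Usage sketch: bind a generated resolver to Article.author.
// const resolvers = { Article: { author: makeResolver({
//   service: 'UserService', method: 'getById', argsFrom: { id: 'parent.authorId' } }) } };
```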

There are some unavoidable problems to solve when using GraphQL. Here are two of them:

  • The first is security
  • The second is the problem of slow queries

Of course there are other issues to address; a later lecturer will talk about the security problem, so in the interest of time I won't go into it here.

For the slow-query problem there are many caching solutions on the client side, such as Apollo and Relay. Another approach is to cache inside the GraphQL service; Apollo's apollo-engine works this way, but for us it can only serve as a reference because using it requires getting over the wall. There is another kind of caching Song Xiaocai uses in GPM: using the directives GraphQL provides to wrap the resolver and cache the GraphQL service's data. There is also DataLoader for batching repeated queries, which a later lecturer will cover.
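For the DataLoader part, a minimal sketch of batching duplicate lookups within one query; the UserService batch method is an assumption.

```js
// user-loader.js -- sketch: batch duplicate user lookups within one query
const DataLoader = require('dataloader');

function createUserLoader(UserService) {
  // All userIds requested during one tick are collected and fetched in a
  // single batch call instead of N separate requests.
  return new DataLoader(async (userIds) => {
    const users = await UserService.getByIds(userIds); // assumed batch API
    const byId = new Map(users.map((u) => [u.id, u]));
    // DataLoader requires the results to line up with the input keys.
    return userIds.map((id) => byId.get(id) || null);
  });
}

// In a resolver, replace context.UserService.getById(article.authorId)
// with context.userLoader.load(article.authorId); duplicate authors in the
// same query are then fetched only once.
module.exports = createUserLoader;
```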

In the GraphQL ecosystem there is graphql-extensions, which is very handy: we can use it to write our own extension to trace the execution of GraphQL query statements. We can also use third-party tools for this, such as Apollo tracing.

Finally, let's use a little imagination.

GraphQL itself is actually a specification; there is no need to use the official GraphQL engine, and we can implement our own GraphQL according to our actual situation.

Returning to the GraphQL execution flow, we can implement our own GraphQL engine with the following optimizations:

The same query does not need to be validated every time, which saves a little query time; and since it is the same query statement, there is no need to collect its fields again either, so some simple means can avoid repeated field collection. In addition, to improve performance, the official graphql-js executes each field's resolver sequentially in a loop; could we, depending on the actual situation, execute resolvers in parallel where appropriate?
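A sketch of the first two optimizations: keep the parsed and validated document keyed by the query string, so a repeated query skips parsing and validation. The cache here is a plain in-memory Map and must be cleared whenever the schema is swapped; this is an illustration, not GPM's engine.

```js
// query-cache.js -- sketch: skip re-parsing/validation for repeated queries
const { parse, validate, execute } = require('graphql');

const documentCache = new Map(); // query string -> parsed & validated document

async function runCachedQuery(schema, source, contextValue, variableValues) {
  let document = documentCache.get(source);
  if (!document) {
    // Only the first occurrence of a query pays the parse/validate cost.
    document = parse(source);
    const errors = validate(schema, document);
    if (errors.length > 0) {
      return { errors };
    }
    documentCache.set(source, document);
  }
  // Note: documentCache must be cleared whenever the schema is replaced.
  return execute({ schema, document, contextValue, variableValues });
}
```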

To sum up, Song Xiaocai's use of GraphQL has the following six characteristics:

  1. Single entry point: a single endpoint simplifies front-end project management and avoids cumbersome API version management on the back end.
  2. Documentation, which to some extent solves the problems of keeping documents in sync and of front-end developers reading them.
  3. Avoiding data redundancy, which reduces the communication cost between the front and back ends.
  4. Data aggregation: GraphQL supports it natively. Since each field is bound to a resolver, different resolvers can fetch data from different services, assembling data from different sources.
  5. Mocking: responsibility for mocks can sit with the front end, or be shared by both ends, and mocks are relatively easy to maintain, which makes development convenient.
  6. Dynamic editing, enabling real-time deployment and agile development, so we can respond quickly to online data needs.

Scott: answers to questions from the audience (re-summarized version)

As for the collaborative workflow of the front and back end of Song Xiaocai, the following changes can be seen intuitively:

  1. The front end moves forward from the interface-design step into the server side's review of the database table structure. At that point it can not only learn the distribution and business meaning of the table fields, but also make suggestions on the table design, helping the server side output field types and structures that are friendlier to the front end; for example, whether precision and unit are stored as two separate fields or joined with a comma into one String makes a real difference.
  2. The server side saves the work of designing and maintaining mocks and glue APIs, and spends the saved time concentrating on splitting the underlying business systems, providing more stable data services and building a more robust, more compatible underlying architecture.
  3. Before the interface review, the front end can already abstract most fields into GraphQL custom types and mock them (once the server side has settled the table structure, the chance of later changes is small), then implement the page DOM and fill in most of the placeholder fields. At the final interface review, the two sides only need to check and adjust whatever is special about the interface.
  4. With the domain boundaries supported by the server side, the front end can encapsulate more flexible components for specific domains and combinations of domains. The extensibility of a component can be driven by configuration rather than by a single API, and this configuration is exactly GraphQL's aggregation capability.

The third point requires continuous running-in between the front and back ends; the fourth we are still exploring and trying. Finally, I want to express some thoughts on how our front-end team works, which I personally think is very important, especially for a start-up team: no matter which side a piece of work seems to belong to, in the end it belongs to the company. Whoever a technology's adoption affects, whoever's so-called position or original interests it shakes up, as long as it benefits the company's R&D efficiency and the evolution of its technology, it will help the business move faster, so be decisive about trying it. In the end it is the company that pays for all our actions, but in the end the ones who benefit are also us.