Problems caused by improper interface reuse

Interface reuse happens at three levels: cross-project, cross-page, and cross-end. Reuse done right improves efficiency; reuse done wrong only reduces it.


1. Reuse across projects

Interfaces should not be reused across projects unless explicitly abstracted as separate public services.

From the perspective of business modeling, the basic premise of reuse is synchronization: every iteration of the service must be applied to all dependent parties at the same time. Once that synchronization breaks down, with some dependents needing the new logic and others not, the reused interface inevitably becomes a mixture of multiple sets of logic. Since the projects have already been split, their business logic will most likely diverge into two branches; continuing to reuse the interface only adds to the system's liabilities.

From the perspective of organizational structure, after the project is split the maintainers may no longer be the same team. If reused interfaces remain, the boundary of authority and responsibility is unclear, and the incomplete decoupling can lead to mutual interference and failures.

Interface reuse across projects usually happens because a new project has to go online urgently: to save time, the old interface is reused directly in the new project, with the promise to untangle it later when there is time. A TODO without a clear owner and deadline is completely ineffective. The other situation is a project split that was not done cleanly; that hidden pit will be exposed sooner or later.

So every project, especially a new one split or cloned from an existing system, must clarify its dependencies to avoid problems caused by an incomplete cut. If reuse is kept and the shared logic keeps iterating, that amounts to accumulating debt, and splitting later will cost several times more.


2. Reuse across pages

Reusing interfaces across pages can also be problematic unless the scenarios are explicitly equivalent.

A typical example is merging common configuration into one shared interface because every page needs part of the information. Whenever a single page needs an extra piece of configuration, it has to be added to the public interface. As this goes on, the logic of the common interface becomes bloated and its performance deteriorates, affecting all pages. The maintenance cost of the public logic and the risk of changing it also keep growing: historical logic piles up, nobody dares to touch what they do not fully understand, and a single mistake affects every page.

In addition, static configuration information changes infrequently and is therefore well suited to caching. A common configuration interface shared by all pages also hurts caching, because changing even one configuration item for one page invalidates the entire cached payload.

Therefore, in interface design, reuse should be pushed down to the microservice or service layer, with assembly done in the Controller layer, which provides a separate interface for each page.
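As a rough sketch of this idea (assuming a Node/Express-style Controller layer; configService, userService, and their methods are hypothetical shared modules), reuse stays in the service layer while each page gets its own thin interface:

const express = require('express')
const app = express()

// Hypothetical shared service modules: reuse belongs here, not at the HTTP interface
const configService = require('./services/config')
const userService = require('./services/user')

// Controller layer: one interface per page, each assembling only what that page needs
app.get('/api/home/init', async (req, res) => {
  const [banner, profile] = await Promise.all([
    configService.getHomeBanner(),          // static config, easy to cache on its own
    userService.getProfile(req.query.uid)
  ])
  res.json({ code: 0, data: { banner, profile } })
})

app.get('/api/settings/init', async (req, res) => {
  res.json({ code: 0, data: { settings: await configService.getSettings() } })
})

app.listen(3000)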


3. Cross-end reuse

Once an interface is consumed by a native client, it should be considered a “sealed” interface. To stay compatible with historical versions, fields can only be added in subsequent development, never deleted or changed (unless the interface is versioned, which is its own kind of pain).

To a certain extent this benefits the front end, because the front end can often use the constraint to push logic to the back end: “Native has version problems, it cannot be changed once it ships 😁”. There is an even better reason, though: cross-end reuse. Compared with implementing the same logic twice in Native and on the web, moving it to the back end is more efficient overall, and the boundary for troubleshooting is clearer: nobody has to wonder whether a piece of logic lives in the front end or the back end, and the front end becomes a pure presentation layer over interface data.


Inconsistent interface formats

The front end sits at the end of the production chain, aggregating all kinds of resources. If it integrates with more than one back-end team, it is very likely to run into inconsistent interface formats. Typical symptoms:

  • Inconsistent data formats. The main issue is the definition of status codes and error information: not only is there no agreement on which Int value means success, there is not even agreement on using Int status codes at all; some teams use string status codes (see the example after this list)
  • Inconsistent Content-Type: some use application/x-www-form-urlencoded, some use JSON
  • Inconsistent parameter positions: some POST interfaces take parameters in the query string, some RESTful interfaces define parameters in the path, and some interfaces pass parameters in the request body when they could and should read them from cookies
  • Multiple different exceptions share the same status code
  • …
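For illustration, here is a hypothetical pair of “success” responses for the same kind of request from two different back-end teams (all field names invented):

// Service A: Int status code, 0 means success, payload under `data`
const responseFromServiceA = { code: 0, msg: 'ok', data: { list: [] } }

// Service B: string status code, '200' means success, payload under `result`
const responseFromServiceB = { status: '200', errMsg: null, result: { list: [] } }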

Given all this, how do you encapsulate a common network request method? How do you do unified exception handling and monitoring?

For page interfaces, the front end is the demand side, and the back end is the supply side that assembles data according to page needs. The supplier usually only cares whether the data is sufficient, not what format it is in, because the format is the demand side's concern. Following the principle of “whoever feels the pain solves it”, the front end has to set the standards and gate acceptance; having the front end draft the interface documents, for example, is one way to do this.

The front end can also write the interfaces itself (write the controllers), but that creates its own problems, not least the increased workload, as discussed in a later section.

The front end also needs some flexible fallbacks, such as making the common network request method configurable:

const requestServiceA = new Request({
    contentType: 'json',   // service A returns and accepts JSON
    normalStatus: 0,       // service A uses status code 0 for success
    commonParams,
    host
})
const requestServiceB = new Request({
    contentType: 'form',   // service B uses x-www-form-urlencoded
    normalStatus: 200,     // service B uses status code 200 for success
    commonParams,
    host
})
requestServiceA.get(...)
requestServiceB.post(...)
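The Request class itself is not shown above; purely as a sketch of what such a configurable wrapper might look like (assuming a fetch-capable environment; only contentType, normalStatus, commonParams, and host come from the usage above, everything else is an assumption):

class Request {
  constructor({ contentType, normalStatus, commonParams = {}, host }) {
    this.contentType = contentType === 'json'
      ? 'application/json'
      : 'application/x-www-form-urlencoded'
    this.normalStatus = normalStatus        // the status code this service treats as success
    this.commonParams = commonParams
    this.host = host
  }

  async get(path, params = {}) {
    const query = new URLSearchParams({ ...this.commonParams, ...params })
    const res = await fetch(`${this.host}${path}?${query}`)
    return this.unify(await res.json())
  }

  async post(path, body = {}) {
    const payload = { ...this.commonParams, ...body }
    const res = await fetch(`${this.host}${path}`, {
      method: 'POST',
      headers: { 'Content-Type': this.contentType },
      body: this.contentType === 'application/json'
        ? JSON.stringify(payload)
        : new URLSearchParams(payload).toString()
    })
    return this.unify(await res.json())
  }

  // Normalize each service's response envelope into one shape for the rest of the app,
  // so exception handling and monitoring can be unified in one place
  unify(raw) {
    if ((raw.code ?? raw.status) !== this.normalStatus) {
      throw new Error(raw.msg || raw.errMsg || 'request failed')
    }
    return raw.data ?? raw.result
  }
}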

Another option is to do the adaptation in a gateway or proxy layer, provided, of course, that a gateway or proxy capability exists.
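As a sketch of this gateway-side adaptation (hypothetical; assumes an Express-based gateway on Node 18+, with an invented upstream address and field names):

const express = require('express')
const app = express()

// Forward to the upstream service, then rewrite its envelope
// into the format the front end has agreed on
app.get('/api/serviceB/list', async (req, res) => {
  const upstream = await fetch('http://service-b.internal/list?' + new URLSearchParams(req.query))
  const raw = await upstream.json()
  // Service B uses a string status code and `result`; normalize to { code, msg, data }
  res.json({
    code: raw.status === '200' ? 0 : -1,
    msg: raw.errMsg || 'ok',
    data: raw.result
  })
})

app.listen(3001)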


Field reuse and ambiguity

Suppose three pages display the same content: is it reasonable to reuse the same field? If the content is configured in an admin backend, does the person changing the configuration know that the change affects not only page A but also pages B and C? And what happens when page B's content needs to change to something else?

Reuse done only to “improve efficiency and save trouble” is, with high probability, a problem in the making.

The other type of reuse, semantic reuse, is even more problematic. For example, suppose there are three types of bus cards: ordinary, student, and senior. The ordinary card is balance-based, the student card is pay-per-ride, and the senior card is free. Can we then use [card type] to determine [pricing model] and [free or not]?

const isFree = cardType === 'Senior citizen card'
const isPayPerUse = cardType === 'Student card'

If you do, a new requirement will come along and slap you in the face: non-free senior cards, pay-per-ride senior cards, balance-based student cards.

Each field should carry exactly one meaning; don't use one field to express two different things.
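A sketch of the single-meaning version of the bus card example (field names and values are hypothetical): instead of deriving pricing and free status from cardType, give each fact its own field:

// One field, one meaning: cardType identifies the card,
// while pricing model and free status each have their own field
const card = {
  cardType: 'senior',       // 'ordinary' | 'student' | 'senior'
  pricingModel: 'per-ride', // 'balance' | 'per-ride'
  isFree: false
}

// Business logic reads the field that carries the semantics it needs,
// so a non-free senior card or a balance-based student card needs no code change
const isFree = card.isFree
const isPayPerUse = card.pricingModel === 'per-ride'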


GET or POST

Sometimes the back end reasons: GET is basically the same as POST, and POST is more secure, so let's just use POST.

Opera browser CTO Luo Ziyu has explained the difference between POST and GET requests. The basic conclusion: use GET whenever you can, because it has several advantages that POST does not:

  • You can directly enter the URL in the address bar to open the request result
  • Can be cached to improve performance
  • GET requests have a higher success rate on poor networks, because they can be safely retried when the network fails (see the sketch below)
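Since GET is expected to be idempotent, the request layer can retry it on network failure without side effects. A minimal sketch (the retry count and the use of fetch are assumptions):

// Retry an idempotent GET up to `retries` extra times on failure.
// Only safe because repeating a GET must not change server state.
async function getWithRetry(url, retries = 2) {
  try {
    const res = await fetch(url)
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    return await res.json()
  } catch (err) {
    if (retries <= 0) throw err
    return getWithRetry(url, retries - 1)
  }
}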


Deduplication, throttling, and race issues with interface requests

If a button is only bound to the request event and nothing else, then clicking it several times in quick succession sends multiple asynchronous requests and easily causes problems. For a create interface, that means duplicate creation; for a GET interface, you may run into race conditions.

In principle, non-idempotent interfaces should have request deduplication; idempotent interfaces should consider throttling and race handling.
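A minimal front-end sketch of both ideas (the endpoints are hypothetical): an in-flight guard for a non-idempotent create interface, and a “latest request wins” guard for an idempotent GET:

// Deduplication for a non-idempotent create interface: ignore clicks while a request is in flight
let creating = false
async function createOrder(payload) {
  if (creating) return
  creating = true
  try {
    const res = await fetch('/api/order/create', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload)
    })
    return await res.json()
  } finally {
    creating = false
  }
}

// Race handling for an idempotent GET: only the latest response is used
let latestSeq = 0
async function searchStations(keyword) {
  const seq = ++latestSeq
  const res = await fetch(`/api/station/search?keyword=${encodeURIComponent(keyword)}`)
  const data = await res.json()
  if (seq !== latestSeq) return null   // a newer search has started; discard this stale result
  return data
}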

From a monitoring perspective, the front end should consider proactively reporting a large number of repeated requests within a short period of time, as this could indicate an infinite loop or some other failure.


Breaking changes and version control issues

What counts as a breaking change in an interface? Technically, all of these do:

  • Changes to the data structure or field types of request parameters
  • A newly added parameter field
  • Changes to the data structure or field types of the response
  • Deletion of a response field
  • Changes to the method, Content-Type, or other header fields

From a business logic perspective, a change in the semantics of a field is also a breaking change. For example, an interface returns a list of bus stations, stationList, and one day the product wants to add subway stations. If they are added directly into stationList, then stationList is no longer the original stationList: its meaning changes from “bus stations” to “public transport stations”, and different semantics require different business logic.

As most native developers know, fields cannot be changed or deleted, only added, because a breaking change is incompatible with the historical versions already live. Whenever a breaking change happens on an interface and multiple versions exist among users, versioning must be considered.

Both Native and RN ship multiple versions, and the Web may have multiple versions too. For example, if the front end uses grayscale releases, the grayscale page and the currently live page constitute the new and old versions.

So how do you do version control while staying compatible with older versions? There are three main approaches:

  • Control versions directly with a version parameter on the interface. One limitation of this method is that it is hard to handle an interface with multiple callers (such as several different apps), each with its own version number
  • Distinguish the new and old versions by the presence or absence of a parameter (rather than its specific value): requests that carry the new parameter get the new behavior, requests without it get the old behavior. This avoids the limitation of the previous method (see the sketch after this list)
  • Add a new interface: new clients call the new interface, old versions keep calling the old one. This approach has its own limitations; for example, some operations depend on the interface path, and if the path changes, existing functionality (caching, preloading, alarm configuration, automated test cases, and so on) is affected
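A sketch of the second approach, combined with the stationList example above (Express-style; the parameter name includeSubway and the service module are hypothetical):

const express = require('express')
const app = express()

// Hypothetical station service
const stationService = require('./services/station')

// Old clients never send `includeSubway`, so they keep the original bus-only stationList;
// new clients opt in with the parameter, and subway stations arrive in their own field,
// so the semantics of stationList never silently change
app.get('/api/stations', async (req, res) => {
  const stationList = await stationService.getBusStations()

  if (req.query.includeSubway === undefined) {
    return res.json({ code: 0, data: { stationList } })                 // old version
  }

  const subwayStationList = await stationService.getSubwayStations()
  res.json({ code: 0, data: { stationList, subwayStationList } })       // new version
})

app.listen(3002)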


Interface document management and maintenance problems

Interface documentation that depends on manual upkeep will, after a while, most likely stop being updated. And as interfaces multiply, the right document becomes hard to find. In general, the back end does not care much about interface documentation (it exists mainly for the front end), because the back end can just read the code. And the back end does not like writing documentation, because programmers in general do not like writing documentation.

Interface documentation is highly standardized and well suited to automated management with tools; it should no longer be maintained by hand. Yet in many teams this is still not fully in place, for several reasons: the pain is not acute enough, the tools are too hard to use, path dependence means nobody drives the change, the code has to be written before the documentation can be generated…

This is one of the extra information costs brought by separating the front end and the back end. Some of the problems are not technical but a matter of team culture: if everyone can tolerate the state of the interface documentation, chances are things will stay that way.


Problems with a front-end BFF layer

There are many possible reasons for front-end BFF:

  • The front end wants to call the Thrift services and assemble the interfaces itself
  • The front end does interface proxying and forwarding to avoid cross-domain problems
  • The front end builds services and takes over some of the back end's work

No matter how many reasons are given, in my view there is really only one: the front end cannot find enough to do and wants to do something other than writing pages.

Unless the business is genuinely front-end-heavy (the server side is technically light and the front-end logic and configuration are heavy), it is best to keep the front end and the back end separate. For a small business or some independent features, the front end can experiment with services; but for a large back-end business, the front end had better restrain its creative impulses and not build a BFF layer tightly coupled to the back-end business.

It is hard to point to any real additional benefit (writing controllers for the back end is definitely not one), while the maintenance costs clearly grow:

  • Deployment: The front end handles not only the deployment of static resources, but also the deployment of Node services
  • Monitoring: how does the Node layer ensure the availability and stability of services, and how to monitor anomalies?
  • Maintenance: whoever built this at the beginning will quite likely stop maintaining it and move on to build wheels elsewhere, very probably leaving no documentation and no comments. How is the person who takes over supposed to maintain it? How do they know what the Node layer is doing? When a requirement changes somewhere in the Node layer, how do they know where to change it?
  • Troubleshooting: with a Node hop in the chain, there is one more link to investigate. The front end says it is an interface problem, the back end says it is a Node problem, and in the end both sides have to dig through logs and trace the whole link
  • Organizational capability: writing interfaces on the front end means stepping into the back-end service model. If there are many back-end modules and the business logic is complex, this places higher demands on the people involved (they must be both willing and able to take on the development and maintenance of the BFF layer)