Introduction

In my first year working in e-commerce I did client-side development. The team was small, yet the people working on the middle layer we integrated with amounted to nearly a quarter of the team. In my fourth year I joined a well-known domestic e-commerce company whose main business is online sales; there the middle-layer team accounted for nearly a third of the whole team. The team I lead now is still in its early stages, and its middle-layer team is close to the same proportion. All three are business teams serving a large C-end user base with high concurrency requirements, built on a microservice architecture in which a middle platform provides the various e-commerce services (such as orders and inventory) and general services (such as search). The middle-tier team therefore has to go through all kinds of authorization and authentication to call each BU's services and assemble and adapt interfaces for the front end. Because the C-end business needs so many interfaces, the middle tier eats up valuable manpower, and the longer a team exists, the more interfaces accumulate; managing such a large number of interfaces effectively is a headache.

A series of problems in the middle layer

Development and debugging issues

The middle layer sits near the front of a site's architecture: it is generally deployed behind the firewall and Nginx and serves C-end users directly, so it has high performance and concurrency requirements, and most teams choose an asynchronous framework for it. Because it is directly exposed to the C end, it changes a lot; most of the code and configuration that needs frequent changes lives at this layer, and it is released very frequently. In addition, many teams write it in a compiled language rather than an interpreted one. These three factors combine to make development and debugging very painful. For example, we once chose the Play2 framework, an asynchronous Java framework that requires developers to write asynchronous code fluently, yet not many colleagues were familiar with the debugging skills it needs. Configuring the various request parameters and result handling in code may look simple, but the time spent waiting for Java to recompile after every adjustment, unit test, or configuration change adds up enormously. And when the asynchronous coding conventions are also problematic, it becomes real pain for developers.

public F.Promise<BaseDto<List<Good>>> getGoodsByCondi(final StringBuilder searchParams, final GoodsQueryParam param) {
	final Map<String, String> params = new TreeMap<String, String>();
	final OutboundApiKey apiKey = OutboundApiKeyUtils.getApiKey("search.api");
	params.put("apiKey", apiKey.getApiKey());
	params.put("service", "Search.getMerchandiseBy");
	if (StringUtils.isNotBlank(param.getSizeName())) {
		try {
			searchParams.append("sizes:" + URLEncoder.encode(param.getSizeName(), "utf-8") + ";");
		} catch (UnsupportedEncodingException e) {
			e.printStackTrace();
		}
	}
	if (param.getStock() != null) {
		searchParams.append("hasStock:" + param.getStock() + ";");
	}
	if (param.getSort() != null && !param.getSort().isEmpty()) {
		searchParams.append("orderBy:" + param.getSort() + ";");
	}
	searchParams.append("limit:" + param.getLimit() + "; page:" + param.getStart());
	params.put("traceId", "open.api.vip.com");
	ApiKeySignUtil.getApiSignMap(params, apiKey.getApiSecret(), "apiSign");
	String url = RemoteServiceUrl.SEARCH_API_URL;
	Promise<HttpResponse> promise = HttpInvoker.get(url, params);
	final GoodListBaseDto retVal = new GoodListBaseDto();
	Promise<BaseDto<List<Good>>> goodListPromise = promise.map(new Function<HttpResponse, BaseDto<List<Good>>>() {
		@Override
		public BaseDto<List<Good>> apply(HttpResponse httpResponse) throws Throwable {
			JsonNode json = JsonUtil.toJsonNode(httpResponse.getBody());
			if (json.get("code").asInt() != 200) {
				Logger.error("Error :" + httpResponse.getBody());
				return new BaseDto<List<Good>>(CommonError.SYS_ERROR);
			}
			JsonNode result = json.get("items");
			Iterator<JsonNode> iterator = result.elements();
			final List<Good> goods = new ArrayList<Good>();
			while (iterator.hasNext()) {
				final Good good = new Good();
				JsonNode goodJson = iterator.next();
				good.setGid(goodJson.get("id").asText());
				good.setDiscount(String.format("%.2f", goodJson.get("discount").asDouble()));
				good.setAgio(goodJson.get("setAgio").asText());

				if (goodJson.get("brandStoreSn") != null) {
					good.setBrandStoreSn(goodJson.get("brandStoreSn").asText());
				}
				Iterator<JsonNode> whIter = goodJson.get("warehouses").elements();
				while (whIter.hasNext()) {
					good.getWarehouses().add(whIter.next().asText());
				}
				if (goodJson.get("saleOut").asInt() == 1) {
					good.setSaleOut(true);
				}
				good.setVipPrice(goodJson.get("vipPrice").asText());
				goods.add(good);
			}
			retVal.setData(goods);
			return retVal;
		}
	});
	if (param.getBrandId() != null && !param.getBrandId().isEmpty()) {
		final Promise<List<ActiveTip>> pmsPromise = service.getActiveTipsByBrand(param.getBrandId());
		return goodListPromise.flatMap(new Function<BaseDto<List<Good>>, Promise<BaseDto<List<Good>>>>() {
			@Override
			public Promise<BaseDto<List<Good>>> apply(BaseDto<List<Good>> listBaseDto) throws Throwable {
				return pmsPromise.flatMap(new Function<List<ActiveTip>, Promise<BaseDto<List<Good>>>>() {
					@Override
					public Promise<BaseDto<List<Good>>> apply(List<ActiveTip> activeTips) throws Throwable {
						retVal.setPmsList(activeTips);
						BaseDto<List<Good>> baseDto = (BaseDto<List<Good>>) retVal;
						return Promise.pure(baseDto);
					}
				});
			}
		});
	}
	return goodListPromise;
}

The code above is just an excerpt of one processing function. As mid-tier scenarios become more complex, the problem is no longer just coding efficiency; coding quality and coding time suffer as well.

“Complex” scenario problems

Microservices are fine-grained. To keep front-end logic simple and reduce the number of service calls, most of what we output to the C end is an aggregated result. For example, the middle-tier logic for one of our search services is a process like this:

Get the member's information, membership card list, and points balance, because members of different levels see different prices;

Get the user's coupon information, which affects the calculated price; get the search results, which come from several sources: the inventory and prices of business-travel goods, of "guess you like" recommendations, of promoted goods, and of overseas goods.

The services involved include the intermediate (aggregation) service, the membership service, the coupon service, the recommendation service, the enterprise service, the overseas search service, and the search service, plus various caching facilities and database configuration services.

    public List<ExtenalProduct> searchProduct(String traceId, ExtenalProductQueryParam param, MemberAssetVO memberAssetVO, ProductInfoResultVO resultVO,boolean needAddPrice) {
        // configId of the coupon available to the user
    	String configIds = memberAssetVO == null ? null : memberAssetVO.getConfigIds();
    	// For special projects, you cannot use the coupon function
    	if(customProperties.getIgnoreChannel().contains(param.getChannelCode())) {
    		configIds = null;
    	}
    	final String configIdConstant = configIds;
    	// Main search list information
    	Mono<List<ExtenalProduct>> innInfos = this.search(traceId, param, configIds, resultVO);
    	return innInfos.flatMap(inns -> {
    		// Business travel product recommendations
        	Mono<ExtenalProduct> busiProduct = this.recommendProductService.getBusiProduct(traceId, param, configIdConstant);
        	// Member product recommendation (guess you like)
        	Mono<ExtenalProduct> guessPref = this.recommendProductService.getGuessPref(traceId, param, configIdConstant);
        	// Business related queries
        	String registChainId = memberAssetVO == null || memberAssetVO.getMember() == null ? null : memberAssetVO.getMember().getRegistChainId();
        	Mono<ExtenalProduct> registChain = this.recommendProductService.registChain(traceId, param, configIdConstant, registChainId);
        	// Store manager hot push products
        	Mono<ExtenalProduct> advert = this.recommendProductService.advert(traceId, param, configIdConstant);
    		return Mono.zip(busiProduct, guessPref, registChain, advert).flatMap(product -> {
        		// Recommended (advertising) packaging
        		List<ExtenalProduct> products = recommendProductService.setRecommend(inns, product.getT1(), product.getT2(), product.getT3(), product.getT4(), param);
        		// Set other parameters
        		return this.setOtherParam(traceId, param, products, memberAssetVO);
        	});
    	}).block();
    }

The Service layer was constantly being tweaked as product requirements and the underlying microservice interfaces changed, and the call sequence diagrams drawn during development no longer corresponded to the code because of those changes.

On top of that, the aggregation of asynchronous calls to multiple microservices was never handled properly: the Spring MVC coding style is synchronous, while the service layer uses the asynchronous Mono, so we had to call block() where we shouldn't. These changes, the missing documentation, and the code quality together make up the middle tier's code management problem.
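To make the mismatch concrete, here is a minimal sketch (illustrative only; ProductService, the endpoints, and the wiring are made up, not our production code) of the same call written in the blocking style we were stuck with versus the reactive style WebFlux allows:

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class ProductController {

    // Hypothetical reactive service used only for this illustration
    interface ProductService {
        Mono<List<ExtenalProduct>> search(ExtenalProductQueryParam param);
    }

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    // Spring MVC style: the handler has to hand back a plain List, so the reactive
    // pipeline is forced to block a request thread -- the "inappropriate block()" above.
    @GetMapping("/products/blocking")
    public List<ExtenalProduct> searchBlocking(ExtenalProductQueryParam param) {
        return productService.search(param).block();
    }

    // Spring WebFlux style: the Mono is returned as-is and the framework subscribes
    // to it without blocking.
    @GetMapping("/products/reactive")
    public Mono<List<ExtenalProduct>> searchReactive(ExtenalProductQueryParam param) {
        return productService.search(param);
    }
}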

The problem of unchecked growth

I took part in building a start-up technology team. At first, because we needed to move fast, we tended to build one fat service, but as the team grew we gradually split it into microservices, which produced middle-tier teams whose main purpose was to aggregate the underlying services.

But for a while we could not hire as fast as the number of services grew, and developers had to keep switching coding mindsets: besides writing the underlying microservices produced by the split, they also had to write the aggregated middle-tier services.

When I shut down certain projects and started reorganizing staff, I faced a hard truth: everyone owned dozens of middle-tier services, so I could not replace anyone. After the services had changed hands many times, colleagues were confused about how they related to each other.

In addition, because of the team's unchecked growth, all kinds of authentication methods were mixed together: some simple, some complex, some reasonable, some not. In short, no one on the team could untangle them.

After a period of development, when we went through the online services, we found a lot of wasted resources; sometimes only a single interface still used a microservice. In the early days these microservices handled large volumes of requests, but after the project was abandoned there was no traffic, yet the interface stayed running online. As the team manager, I did not even have statistics aggregated by interface on paper.

When my boss asked me to suspend service for a partner company, I could not cleanly cut it off at the logic level and return a business exception. As an upstream supplier with multi-channel integrations, we expose interfaces that carry many customer-specific requirements, and this logic generally lives in the middle-tier code; when a channel goes offline, nothing gets adjusted, because a developer would have to update the code requirement by requirement.

In addition, external joint debugging has long been a problem for middle-tier teams. I often hear complaints from front-end colleagues that they do not want to add data-processing logic, yet as the front end they end up writing a lot of code to transform the data to fit the interface. In environments with package-size limits, such as mini programs, moving this code around becomes a major problem late in development.

A failed gateway selection

At the time, there were two types of solutions on the market:

Middle-tier solutions. These generally provide bare asynchronous services, plus plug-ins and custom functionality built on demand. Some middle-tier services, after modification, also take on some gateway functions.

Gateway solutions. These are typically built around a microservice "family bucket", or stand on their own, providing general-purpose functionality such as routing. Of course, some gateways can be customized to add middle-tier business functions.

Our business changes very quickly. If an existing gateway on the market had met our requirements and we had the ability to do secondary development on it, we would have been happy to use it.

At the time, Eolinker was our API automated-testing provider and also offered a corresponding management gateway, but it was written in Go. Our team's technology stack is mainly Java, and our operations and deployment plans revolve around Java, which narrowed our choices, so we had to give up that idea.

We had also looked at the Kong gateway, but introducing a new and complex technology stack is not cheap; hiring for Lua and doing secondary development in it, for example, are pains that cannot be avoided. Gravitee, Zuul, and Vert.x are all gateways that had been used by different small teams. Their most talked-about features were:

1. Circuit breaking, flow control, and overload protection; 2. Support for extremely high concurrency; 3. Flash-sale (seckill) scenarios.

For the business, however, circuit breaking, flow control, and overload protection should be the last measures considered. Moreover, for a growing team, it takes a long accumulation of business before overload can actually break a service.

Besides, the traffic of our flash-sale business mostly stays at a normal level, and its occasional spikes are within our team's processing capacity. In other words, selection needs to be grounded in reality rather than assuming Alibaba-scale traffic; I only needed to consider a moderate scale and the ability to scale the cluster horizontally.

Previously, Vert.x was the most widely used gateway in our team, and its coding style is flashy and cool.

private void dispatchRequests(RoutingContext context) {
  int initialOffset = 5; // length of `/api/`
  // run with circuit breaker in order to deal with failure
  circuitBreaker.execute(future -> { // (1)
    getAllEndpoints().setHandler(ar -> { // (2)
      if (ar.succeeded()) {
        List<Record> recordList = ar.result();
        // get relative path and retrieve prefix to dispatch client
        String path = context.request().uri();

        if (path.length() <= initialOffset) {
          notFound(context);
          future.complete();
          return;
        }
        String prefix = (path.substring(initialOffset)
          .split("/"))[0];
        // generate new relative path
        String newPath = path.substring(initialOffset + prefix.length());
        // get one relevant HTTP client, may not exist
        Optional<Record> client = recordList.stream()
          .filter(record -> record.getMetadata().getString("api.name") != null)
          .filter(record -> record.getMetadata().getString("api.name").equals(prefix)) // (3)
          .findAny(); // (4) simple load balance

        if (client.isPresent()) {
          doDispatch(context, newPath, discovery.getReference(client.get()).get(), future); // (5)
        } else {
          notFound(context); // (6)
          future.complete();
        }
      } else {
        future.fail(ar.cause());
      }
    });
  }).setHandler(ar -> {
    if (ar.failed()) {
      badGateway(ar.cause(), context); // (7)
    }
  });
}

However, the Vert.x community offered little support, the entry cost was high, and the team could not find enough suitable colleagues to maintain the code. The failure of our gateway selection made us realize that there was no "Swiss Army knife" on the market, so we started our own research and began designing the Fizz gateway.

Setting out to build our own gateway

Do we need a gateway? What problems does the gateway layer solve? These two questions answer themselves. We need a gateway because it helps with load balancing, aggregation, authorization, monitoring, rate limiting, logging, permission control, and so on. At the same time we also need a middle layer, because fine-grained microservices force us to aggregate them. What we do not need is complex coding, redundant glue code, and a lengthy release process.

Design considerations for Fizz

To address these issues, we need to blur the boundary between the gateway and the middle tier, bridge the gap between them, let the gateway support dynamic coding for middle-tier logic, and require as little deployment and releasing as possible. To do this, we can keep a clean gateway model and use low-code features to cover as much of the middle tier's functionality as possible.

Requirements, back at the origin

Reviewing the earlier selection, I need to restate the requirements from the origin:

1. Java technology stack, supporting the Spring family bucket;
2. Easy to use; deployable with zero training;
3. Dynamic routing, so a new API can be opened anytime, anywhere;
4. High performance and horizontal cluster scaling;
5. Strong hot service orchestration, supporting both front-end and back-end coding, with APIs updatable anytime, anywhere;
6. Support for online coding of logic;
7. Extensible security authentication and convenient logging;
8. API audit functionality, with control over all services;
9. Scalability, with a powerful plug-in mechanism.

Technical selection of Fizz

After settling on Spring WebFlux, colleagues suggested naming the project Fizz because of its strong single-target power (Fizz is a hero in the competitive game League of Legends: a melee mage with some of the best single-target AP burst, which lets him counter most other mages).

WebFlux is a typical non-blocking asynchronous framework whose core is based on the Reactor API. It can run on containers such as Netty, Undertow, and Servlet 3.1+, so it has far more runtime options than traditional web frameworks.

Spring WebFlux is an asynchronous, non-blocking web framework that takes full advantage of multi-core CPUs to handle large numbers of concurrent requests. It builds on Spring's technology stack, and the code style looks like this:

    public Mono<ServerResponse> getAll(ServerRequest serverRequest) {
        printlnThread("Get all users");
        Flux<User> userFlux = Flux.fromStream(userRepository.getUsers().entrySet().stream().map(Map.Entry::getValue));
        return ServerResponse.ok()
                .body(userFlux, User.class);
    }

Core implementation of Fizz

For us this was a project starting from zero, and many colleagues were not confident at the beginning. I wrote the first core service-orchestration package of Fizz myself and wrote the commit message as "Get started".

I intended to define all service aggregation in one configuration file. The user request is the input and the response is the output; that whole flow is a Pipe. Inside a Pipe there are different Steps, which run in series; inside each Step there is at least one Input, which receives the output of the previous Step. All Inputs within a Step are independent and can execute in parallel. A single Context holds the intermediate state throughout the Pipe's lifetime.
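As a minimal sketch of that execution model (names and types are simplified for illustration; this is not Fizz's actual implementation), a Pipe runs its Steps one after another, while the Inputs inside a Step are subscribed to together and their outputs merged back into the shared context:

import java.util.List;
import java.util.Map;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Simplified model of the Pipe -> Step -> Input hierarchy described above.
public class Pipe {

    private final List<Step> steps;

    public Pipe(List<Step> steps) {
        this.steps = steps;
    }

    // Steps run in series: each Step starts only after the previous one
    // has written its results into the shared context.
    public Mono<Map<String, Object>> run(Map<String, Object> context) {
        Mono<Map<String, Object>> result = Mono.just(context);
        for (Step step : steps) {
            result = result.flatMap(step::run);
        }
        return result;
    }

    public static class Step {
        private final String name;
        private final List<Input> inputs;

        public Step(String name, List<Input> inputs) {
            this.name = name;
            this.inputs = inputs;
        }

        // Inputs inside one Step are independent, so they are subscribed to in parallel
        // and merged; each writes its output back into the context under its own name.
        // (A thread-safe map should be used for the context in practice.)
        public Mono<Map<String, Object>> run(Map<String, Object> context) {
            return Flux.fromIterable(inputs)
                    .flatMap(input -> input.run(context)
                            .doOnNext(out -> context.put(name + "." + input.name(), out)))
                    .then(Mono.just(context));
        }
    }

    public interface Input {
        String name();
        Mono<Object> run(Map<String, Object> context);
    }
}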

In each Input's input and output I added dynamic-script extension points; JavaScript and Groovy are currently supported, so logic written in JavaScript on the front end can be extended on the back end. Our configuration file just needs a script like this:

// Aggregate interface configuration
var aggrAPIConfig = {
    name: "input name", // User-defined aggregation interface name
    debug: false, // Debug mode, false by default
    type: "REQUEST", // Type, REQUEST/MYSQL
    method: "GET/POST",
    path: "/proxy/aggr-hotel/hotel/rates", // Format: /aggr/ + service name + path; a group name starting with aggr- indicates an aggregation interface
    langDef: { // If input validation fails, a prompt message is returned in the language selected by this configuration; Chinese and English are currently supported
        langParam: "input.request.body.languageCode", // The input parameter field that carries the language
        langMapping: { // The mapping between field values and languages
            zh: "0", // Chinese
            en: "1"  // English
        }
    },
    headersDef: { // Optional. Defines the header parameters of the aggregation interface using the JSON Schema specification (see: http://json-schema.org/specification.html); used for parameter validation and interface document generation
        type: "object",
        properties: {
            appId: { type: "string", title: "ID", description: "Description" }
        },
        required: ["appId"]
    },
    paramsDef: { // Optional. Defines the query parameters of the aggregation interface using the JSON Schema specification; used for parameter validation and interface document generation
        type: "object",
        properties: {
            lang: { type: "string", title: "Language", description: "Description" }
        }
    },
    bodyDef: { // Optional. Defines the body parameters of the aggregation interface using the JSON Schema specification; used for parameter validation and interface document generation
        type: "object",
        properties: {
            userId: { type: "string", title: "Username", description: "Description" }
        },
        required: ["userId"]
    },
    scriptValidate: { // Optional. Used for scenarios that headersDef, paramsDef and bodyDef cannot cover
        type: "",  // groovy
        source: "" // The script returns an object: null means validation passed, a List holds the list of error messages
    },
    validateResponse: { // The response returned when input validation fails, handled in the same way as dataMapping
        fixedBody: { // Fixed body
            "code": -411
        },
        fixedHeaders: { // Fixed headers
            "a": "b"
        },
        headers: { // Referenced headers
        },
        body: { // Referenced body
            "msg": "validateMsg"
        },
        script: {
            type: "",  // groovy
            source: ""
        }
    },
    dataMapping: { // Data conversion rules of the aggregation interface
        response: {
            fixedBody: { // Fixed body
                "code": "b"
            },
            fixedHeaders: { // Fixed headers
                "a": "b"
            },
            headers: { // Referenced headers; the source data type is kept by default, to convert prefix the target type + a space, e.g. "int"
                "abc": "int step1.requests.request1.headers.xyz"
            },
            body: { // Referenced body; the source data type is kept by default, to convert prefix the target type + a space, e.g. "int"
                "abc": "int step1.requests.request1.response.id",
                "inn.innName": "step1.requests.request2.response.hotelName",
                "ddd": { // Script; when the object returned by the script contains a _stopAndResponse field whose value is true, the request is terminated and the script result is returned to the client
                    "type": "groovy",
                    "source": ""
                }
            },
            script: { // Script that computes the body value
                type: "",  // groovy
                source: ""
            }
        }
    },
    stepConfigs: [{ // Step configuration
        name: "step1", // Step name
        stop: false,   // Whether to return after the current step is executed
        dataMapping: { // Data conversion rules of the step response
            response: {
                fixedBody: { // Fixed body
                    "a": "b"
                },
                body: { // Step result
                    "abc": "step1.requests.request1.response.id",
                    "inn.innName": "step1.requests.request2.response.hotelName"
                },
                script: { // Script that computes the body value
                    type: "",  // groovy
                    source: ""
                }
            }
        },
        requests: [ // Each step can call multiple interfaces
            { // A custom interface name
                name: "request1", // Interface name, in the form request+N
                type: "REQUEST",  // Type, REQUEST/MYSQL
                url: "",          // Default url, used when the environment url is null
                devUrl: "http://baidu.com",
                testUrl: "http://baidu.com",
                preUrl: "http://baidu.com",
                prodUrl: "http://baidu.com",
                method: "GET",    // GET/POST, default GET
                timeout: 3000,    // Timeout in ms, range 1-10000; if unset or below 1 ms the default is 3 seconds, and values above 10 seconds are capped at 10 seconds
                condition: {
                    type: "",  // groovy
                    source: "return \"ABC\".equals(variables.get(\"param1\")) && variables.get(\"param2\") >= 10;" // If the script returns TRUE the interface is called; if FALSE it is skipped
                },
                fallback: {
                    mode: "stop|continue", // Whether to continue when the request fails
                    defaultResult: ""      // When mode=continue, the default response (a JSON string) can be set here
                },
                dataMapping: { // Data conversion rules
                    request: {
                        fixedBody: {},
                        fixedHeaders: {},
                        fixedParams: {},
                        headers: { // The source data type is kept by default, to convert prefix the target type + a space, e.g. "int"
                            "abc": "step1.requests.request1.headers.xyz"
                        },
                        body: {
                            "*": "input.request.body.*", // * passes through the whole JSON object
                            "inn.innId": "int step1.requests.request1.response.id" // The source data type is kept by default, to convert prefix the target type + a space, e.g. "int"
                        },
                        params: { // The source data type is kept by default, to convert prefix the target type + a space, e.g. "int"
                            "userId": "input.requestBody.userId"
                        },
                        script: { // Script that computes the body value
                            type: "",  // groovy
                            source: ""
                        }
                    },
                    response: {
                        fixedBody: {},
                        fixedHeaders: {},
                        headers: {
                            "abc": "step1.requests.request1.headers.xyz"
                        },
                        body: {
                            "inn.innId": "step1.requests.request1.response.id"
                        },
                        script: { // Script that computes the body value
                            //type: "",  // groovy
                            source: ""
                        }
                    }
                }
            }
        ]
    }]
}

Run context format:

// Runtime context, which holds the client input and the input/output of each step
var stepContext = {
    // Whether DEBUG mode is on
    debug: false,
    // Elapsed time
    elapsedTimes: [{
        [actionName]: 123 // Operation name: time taken
    }],
    // Input data
    input: {
        request: {
            path: "",
            method: "GET/POST",
            headers: {},
            body: {},
            params: {}
        },
        response: { // Response of the aggregation interface
            headers: {},
            body: {}
        }
    },
    // Step name
    stepName: {
        // Request data of the step
        requests: {
            request1: {
                request: { url: "", method: "GET/POST", headers: {}, body: {} },
                response: { headers: {}, body: {} }
            },
            request2: {
                request: { url: "", method: "GET/POST", headers: {}, body: {} },
                response: { headers: {}, body: {} }
            }
            // ...
        },
        // Step result
        result: {}
    }
}

Once I regard an Input as nothing more than an input and an output plus an intermediate data-processing step, it has a lot of room for extension. For example, in code we could even write a MySQLInput class that extends Input:

public class MySQLInput extends Input {}

It only needs to override a few of the Input class's methods to support MySQL as an input source, even dynamically parsing SQL and transforming the resulting data.

public class Input {
	protected String name;
	protected InputConfig config;
	protected InputContext inputContext;
	protected StepResponse lastStepResponse = null;
	protected StepResponse stepResponse;

	public void setConfig(InputConfig inputConfig) {
		config = inputConfig;
	}
	public InputConfig getConfig() {
		return config;
	}
	public void beforeRun(InputContext context) {
		this.inputContext = context;
	}
	public String getName() {
		if (name == null) {
			return name = "input" + (int) (Math.random() * 100);
		}
		return name;
	}
	/**
	 * Check whether the Input needs to be run
	 * @param stepContext step context
	 * @return TRUE: run
	 */
	public boolean needRun(StepContext<String, Object> stepContext) {
		return Boolean.TRUE;
	}
	public Mono<Map> run() {
		return null;
	}
	public void setName(String configName) {
		this.name = configName;
	}
	public StepResponse getStepResponse() {
		return stepResponse;
	}
	public void setStepResponse(StepResponse stepResponse) {
		this.stepResponse = stepResponse;
	}
}

The extension code does not have to deal with asynchronous processing itself, because Fizz already handles the asynchronous logic in a friendly way.
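As a rough sketch of what such an extension could look like (illustrative only; it assumes a reactive MySQL driver via R2DBC, the connection string and SQL are placeholders, and this is not the input shipped with Fizz):

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;
import reactor.core.publisher.Mono;

// Hypothetical MySQL-backed Input built on the base class above.
public class MySQLInput extends Input {

    // Placeholder connection string; a real implementation would take it from the config
    private final ConnectionFactory connectionFactory =
            ConnectionFactories.get("r2dbc:mysql://localhost:3306/demo");

    @Override
    public Mono<Map> run() {
        // In practice the SQL would come from this Input's config; hardcoded here for brevity
        String sql = "SELECT name FROM hotel LIMIT 10";
        return Mono.from(connectionFactory.create())
                .flatMapMany(conn -> conn.createStatement(sql).execute()) // connection close omitted for brevity
                .flatMap(result -> result.map((row, meta) -> row.get(0)))
                .collectList()
                .map(rows -> {
                    Map<String, Object> out = new HashMap<>();
                    out.put("data", rows); // downstream steps read the rows from the context
                    return (Map) out;
                });
    }
}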

Service Orchestration of Fizz

Fizz service orchestration is done through a visual back office. Although the core code above is not complex, it is enough to abstract the whole process. For now, the visual interface of fizz-manager simply generates the corresponding configuration file and lets it be updated and loaded quickly. We define a Request Input through the request headers, request body, and query parameters it declares, together with validation rules or custom scripts for complex validation logic, and by defining its fallback; by assembling a few Steps, a service orchestrated online can be used immediately. For a read-only interface we even recommend testing it live directly; of course the test interface can be kept separate from the formal one, and a returned context is supported so you can inspect the input and output of every step and request throughout execution.

Fizz script validation

When the built-in script validation is not enough to cover a scenario, Fizz also offers more flexible scripting.

// The name of the javascript function cannot be changed
function dyFunc(paramsJsonStr) {
  // For the context data structure, see context.js
  var context = JSON.parse(paramsJsonStr)['context'];
  // common is a built-in tool class for working with the context; see common.js. For example:
  // var data = common.getStepRespBody(context, 'step2', 'request1', 'data');
  // do something
  // If the returned object contains _stopAndResponse=true, the request is terminated and the script result is returned to the client (mainly used to terminate the request when an exception occurs)
  var result = {
    // _stopAndResponse: true,
    msgCode: '0',
    message: '',
    data: null
  };
  // If the result is an Array or Object, it must be converted to a JSON string
  return JSON.stringify(result);
}

Fizz data processing

Fizz can transform the input and output of a request. It makes full use of JSON path expressions, loading the definitions from the configuration file to change a request's input and output.
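Conceptually, each entry in a dataMapping block pairs a target field with a path into the runtime context shown earlier. A toy version of that path resolution (just an illustration, not Fizz's implementation) looks like this:

import java.util.HashMap;
import java.util.Map;

// Toy illustration of resolving a dataMapping reference such as
// "step1.requests.request1.response.id" against the runtime context.
public class PathResolver {

    @SuppressWarnings("unchecked")
    public static Object resolve(Map<String, Object> context, String path) {
        Object current = context;
        for (String segment : path.split("\\.")) {
            if (!(current instanceof Map)) {
                return null; // the path walks off the structure
            }
            current = ((Map<String, Object>) current).get(segment);
        }
        return current;
    }

    public static void main(String[] args) {
        // Build a tiny context shaped like the stepContext shown earlier
        Map<String, Object> response = new HashMap<>();
        response.put("id", 42);
        Map<String, Object> request1 = new HashMap<>();
        request1.put("response", response);
        Map<String, Object> requests = new HashMap<>();
        requests.put("request1", request1);
        Map<String, Object> step1 = new HashMap<>();
        step1.put("requests", requests);
        Map<String, Object> context = new HashMap<>();
        context.put("step1", step1);

        // The mapping entry "abc": "step1.requests.request1.response.id" would set abc to 42
        System.out.println(resolve(context, "step1.requests.request1.response.id")); // prints 42
    }
}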

Powerful routing for Fizz

Fizz's dynamic routing is also designed to be practical: it provides a smooth gateway-replacement path. Initially, Fizz could coexist with other gateways, such as the Vert.x-based gateway mentioned earlier, because Fizz has an Nginx-like reverse-proxy capability, a purely route-based implementation. So, early in the project, traffic coming through Nginx was forwarded untouched to Fizz and then on to Vert.x, so Fizz fronted all of the Vert.x traffic. After that, traffic was progressively forwarded directly to the back-end microservices, part of the custom common code on Vert.x was pushed down into the lower-level microservices, Vert.x and the old mid-tier services were completely retired, and the number of servers was cut by 50%. Once these adjustments were made, the middle-tier and server problems that had been plaguing me were finally resolved, we could pare down each colleague's list of owned services, and people could focus on more valuable projects. When that became visible, the project naturally showed its value.

The routing features are also very useful for channels. Fizz's service group concept lets different channels be given different groups, which solves the problem of channel differentiation. In fact, multiple sets of different API versions can be online at the same time, which also solves the problem of API versioning.

Extensible authentication for Fizz

Fizz also has a dedicated solution for authentication. Our company was founded early, the team carries code written over many years, and the codebase contains a variety of authentication methods. There are also constraints from external platforms; for example, code running in the App and in WeChat needs different authentication support.

Validation is configured through the management interface. Fizz offers two options: a common built-in checker and custom plug-in checkers, and the user can conveniently select between them from a drop-down menu.

Fizz plug-in design

From the beginning of Fizz's design we considered plug-ins important, so we designed a plug-in standard that is easy to implement. It does require developers to have a solid understanding of asynchronous programming, and it suits teams with customization needs. A plugin only needs to inherit PluginFilter, and only two functions need to be implemented:

public abstract class PluginFilter {
    private static final Logger log = LoggerFactory.getLogger(PluginFilter.class);
    public Mono<Void> filter(ServerWebExchange exchange, Map<String, Object> config, String fixedConfig) {
       return Mono.empty();
    }
    public abstract Mono<Void> doFilter(ServerWebExchange exchange, Map<String, Object> config, String fixedConfig);
}

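For example, a trivial plugin that checks an API key header might look roughly like this (a made-up example, not one shipped with Fizz; we assume here that returning an empty Mono lets the remaining filter chain continue):

import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

// Hypothetical example plugin: rejects requests that are missing an API key header.
public class ApiKeyCheckPluginFilter extends PluginFilter {

    @Override
    public Mono<Void> doFilter(ServerWebExchange exchange, Map<String, Object> config, String fixedConfig) {
        String apiKey = exchange.getRequest().getHeaders().getFirst("X-Api-Key");
        if (apiKey == null || apiKey.isEmpty()) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete(); // short-circuit with a 401
        }
        return Mono.empty(); // assumed to let the remaining filters run
    }
}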

Fizz management features

Resource protection also matters a great deal to medium and large enterprises. Once all traffic passes through Fizz, the corresponding routes must be set up in Fizz, and the accompanying API audit system is a major feature: every API resource in the company is conveniently protected, and a strict audit mechanism ensures that each API is reviewed by the team's management. Fizz can also take an API offline quickly and return degraded responses.

Other features of Fizz

Fizz, of course, works with the Spring family bucket and supports the Apollo configuration center, load balancing, log access, whitelist access, and a host of other features we think a gateway should have.

Performance of Fizz

Performance is not our selling point, but that does not mean Fizz performs badly. Thanks to WebFlux, when we compared Fizz with the official Spring Cloud Gateway in the same environment and under the same conditions on a single node, our QPS was slightly higher than Spring Cloud Gateway's. Of course, there is still room for optimization.

Intel(R) Xeon(R) CPU X5675 @ 3.07GHz Linux version 3.10.0-327.el7.x86_64

Condition                      QPS (/s)    90% Latency (ms)
Access the back end directly   9087.46     10.76
fizz-gateway                   5927.13     19.86
spring-cloud-gateway           5044.04     22.91

Application and Achievements of Fizz

Fizz was designed with the complexity of an enterprise's middle tier in mind: it can intercept all traffic and replace existing gateways gradually, in parallel, so the internal rollout went smoothly. In the initial phase we chose C-end business as the target and replaced only some of the complex scenarios at launch. After a quarter of trial, we solved various performance and memory problems. Once the release stabilized, Fizz was rolled out across the BU's business lines to replace the previously numerous application gateways, and then to all applicable business in the company. The C-end and B-end development teams were freed up to work on the underlying business; although the middle-layer headcount was reduced, development efficiency improved greatly. For example, a group of replicated services that used to take several days to develop now takes one seventh of the time. With Fizz we were also able to merge services, reducing the number of middle-tier servers by 50% while increasing service capacity.

Open source and community development of Fizz

In the early days Fizz scaled with configuration alone, but as the number of users grew, writing and managing configuration files required us to expand the project. Currently, Fizz contains two main back-end projects, fizz-gateway and fizz-manager; fizz-admin is the front-end configuration interface. Together, fizz-manager and fizz-admin provide a graphical configuration interface for Fizz, and all Pipes can be written and brought online from the interface.

Fizz offers a community edition, fizz-gateway-community, whose core technical implementation is open-sourced under the GNU v3 license as a way of exchanging with other technical teams. All APIs of fizz-gateway-community are published for secondary development. Because fizz-gateway-professional is tied to our team's business, it remains commercially closed. The corresponding management platform, fizz-manager-professional, is available as a free binary download of the commercial version and is free to use for projects under the GNU v3 open-source license (if your project is commercial, please contact us for authorization). In addition, we will pick the right time to open up Fizz's rich plug-ins.

Whether or not our project can help you, we sincerely hope to get your feedback. Whether or not the technology is impressive or perfect, we remain true to our original vision: Fizz, a management gateway for medium and large enterprises.