Submit code with confidence, so requirement releases no longer mean overtime

Background

Agile development centers on the evolution of user needs and builds software iteratively, step by step. Xianyu currently uses a swimlane task model for iterative development, with a development cycle of one version every two weeks. The release frequency is relatively high, many business requirements are developed in parallel, and testing resources are relatively scarce, so how do we ensure the quality of client development? At the same time, iterative building, integration, and testing all require human intervention, which leads to high communication costs and error rates.

How can these problems be solved effectively? The first thing that comes to mind is continuous integration: automate building, integration, and testing, and respond to problems promptly, so as to reduce development and testing costs and improve the team's engineering efficiency. Xianyu has done some exploration and practice on a client-side continuous integration scheme. Taking the iOS multi-bundle project as an example, this article explains how to use Spring Boot and Vue to implement a continuous integration scheme that links requirements, code, and tests, making integration structured and continuous.

1. Data model

1.1 Swimlane model

First, let's take a look at the swimlane model to get a feel for it, starting with a picture:

When a requirement needs to be developed, the corresponding Feature branch is pulled from the Develop branch, and the Feature branch is merged back into Develop once development is done; the Release branch is then pulled from Develop. Since the Master branch is not used much, Xianyu does not use it.

This is the single-repository case, which is easy to understand. As mentioned in the background above, the iOS project is being split into multiple libraries, and the split roughly looks like this:

On iOS there is a main project that manages these sub-libraries: 8 sub-libraries plus the main project means 9 Git repositories. During requirement development, most changes are concentrated in the IFMatrix and IFContainer bundles. If more than one library is changed, each changed library needs its own Feature branch, and once development is done, merging them all back into the Develop branch is a lot of work.

That is just one requirement; with N requirements, the integrator faces 9*N units of work. And that is only the iOS project: there is also an Android project, and there may be Weex and Flutter projects in the future. In the worst case the workload becomes 4*9*N, which is a big challenge for any developer, tester, or PM.

Therefore, automatic integration is extremely urgent for Xianyu. To implement automatic client integration, several problems lie in front of us:

  1. With multiple requirements, how do we ensure orderly integration?

  2. How can integration be made continuous, and how should the solution be designed?

  3. When integration is complete, how are the tests triggered?

Let's start by standardizing the development process and linking requirements, code, and integration, so that the whole flow can be automated from source to end.

1.2 Associating requirements with code

Requirements are managed on the Aone platform, and each requirement has an ID. How do we associate a requirement with its code?

Note: Aone is a requirement management platform

The solution is to add the requirement information, such as the requirement ID, to the git commit: git commit events are intercepted and the related requirement information is appended to the commit message. The next question is how to obtain that requirement information.

There are two methods:

  1. A unified branch naming convention, such as task/task_<AoneId>_

  2. Explicitly entering the requirement information when committing, such as fix ##

At commit time, the requirement ID can be obtained and the code is associated with the requirement, as shown below:

The second line is the link to the associated requirement; each commit carries the requirement information, mainly so that the test scope can be located later. After the requirement passes testing, the platform listens for the MetaQ message of the requirement state change, first merges the branch code, and then deletes the branch automatically.

See this article: hooking git commits to associate them with requirements
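As a rough illustration of method 1 above, here is a minimal sketch, in Java to match the platform's server side, of how a requirement ID might be parsed from a branch name that follows the task/task_<AoneId>_ convention; the class, method, and regular expression are assumptions for illustration rather than the hook's actual implementation.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BranchAoneIdParser {

    // Matches branch names like task/task_123456_desc and captures the Aone requirement ID
    private static final Pattern BRANCH_PATTERN = Pattern.compile("task/task_(\\d+)_.*");

    /**
     * Extract the Aone requirement ID from a branch name,
     * or return null if the branch does not follow the convention.
     */
    public static String parseAoneId(String branchName) {
        if (branchName == null) {
            return null;
        }
        Matcher matcher = BRANCH_PATTERN.matcher(branchName);
        return matcher.matches() ? matcher.group(1) : null;
    }
}

A commit hook can then use the extracted ID to append the requirement link to the commit message whenever the branch name matches the convention.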

1.3 Associating requirements with integration items

As mentioned in the background, the Xianyu test team expects to trigger the relevant integration and test jobs both in the development stage and in the integration stage. This requires us to associate requirements with integration items, with one requirement corresponding to one Ferris Wheel project.

Note: Ferris Wheel is a build platform that can be configured with dependent modules

In the database, products, requirements, and Ferris Wheel projects can be associated: each projectId corresponds to one Ferris Wheel project, and each Ferris Wheel project corresponds to many configuration items. In this way the database holds the relations between the integration items, including the dependency relationships between projects.
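To make the mapping concrete, here is a hedged sketch of the kind of records that might be stored; the class and field names are assumptions for illustration, since the article does not show the actual table structure.

// Hypothetical mapping records; names are illustrative only.
class ProductRequirementMapping {
    long productId;          // the client product (e.g. the iOS app)
    long aoneId;             // the Aone requirement ID
    int projectId;           // the associated Ferris Wheel project
}

class FerrisWheelProjectConfig {
    int projectId;           // one Ferris Wheel project ...
    int configId;            // ... has many build configuration items (one per bundle)
    int dependsOnConfigId;   // single-level dependency between configuration items
}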

Next comes the second question: how can integration be made continuous, and how should the solution be designed?

2. Automatic integration framework

The relationships between requirements and Ferris Wheel projects are now stored in the database. How can the associated code be used to achieve continuous integration? Xianyu currently uses a web service to carry this series of services, forming a pipeline.

2.1 Platform Architecture

The platform is built with Spring Boot and adopts a front-end/back-end separation design. The server exposes RESTful interfaces; the front end is written with Vue and communicates with the server by sending AJAX requests through axios. In addition to GitLab, Ferris Wheel, and Aone as data sources, a relational mapping table is stored in a local database.

As the figure shows, the platform is mainly divided into several parts: the data layer, the business layer, the interface layer, and the front-end UI plus client. The whole platform acts as one large client, so the GitLab, MTL, Aone, and Jenkins base services form the data layer. The local database mainly stores the requirement-code-build mapping relationships; for example, when a sub-library's code changes, the corresponding Ferris Wheel project build needs to be triggered, which requires a reverse lookup.

This service runs on a server in the daily environment, but there is another problem: Ferris Wheel runs in the pre-release environment, and the daily and pre-release environments are network isolated, so the HSF service provided by Ferris Wheel cannot be invoked directly. To solve this, we set up a bridge service in the pre-release environment and relay the calls through VIPServer.

2.2 Event-driven

The entire platform is event-driven, with three main event sources: merge requests, GitLab pushes, and mechanical triggers (manual and scheduled).

GitLab provides a very convenient interface for listening to code changes, and the configuration is also very simple, as shown in the following figure:

For push events and merge requests, the platform provides a RESTful POST interface, which is then configured in the GitLab project as a webhook to listen for code changes.

                                                                        
     
/**
 * Main entry for listening to the GitLab webhook
 * @param payload
 */
@RequestMapping(value = "webhook", method = RequestMethod.POST)
public void webhooks(@RequestBody String payload) {
    logger.info(payload);
    GitlabHookEvent event = JSON.parseObject(payload, GitlabHookEvent.class);
    eventService.dispatchGitlabEvent(event);
}


Note: GitLab push and merge events may be delivered repeatedly, so they need to be deduplicated
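As a rough illustration of that deduplication, here is a minimal sketch; the fingerprint contents and the cache size are assumptions for illustration, not the platform's actual logic.

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class GitlabEventDeduplicator {

    // Remember recent event fingerprints; the oldest entries are evicted automatically
    private final Set<String> seen = Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>(16, 0.75f, false) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 1000;
                }
            });

    /**
     * Returns true the first time an event is seen and false for repeated deliveries.
     * The fingerprint could be built from fields such as the event kind, the project id,
     * and the last commit id of the push or merge request (assumed for illustration).
     */
    public synchronized boolean isFirstDelivery(String fingerprint) {
        return seen.add(fingerprint);
    }
}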

In the webhook entry above, the GitlabHookEvent is parsed and then dispatched by the event service, which triggers the integration module. Next, let's look at the continuous packaging solution.

2.3 Continuous Packaging

For continuous building, the first priority is resolving the dependencies between bundles; currently only single-level dependencies are supported. A copy of the mapping relationship is saved in the database, so the dependent libraries can be resolved when a Ferris Wheel project build is triggered. The main flow is as follows:

After receiving the list of sub-bundles to be built, check whether each sub-bundle needs to be rebuilt. The rule is based on the time difference between the latest commit and the last successful integration: if the difference exceeds a threshold, the sub-bundle does not need to be rebuilt; otherwise it joins the packaging queue.
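As a hedged sketch of this check, the helper below follows the rule just described and reuses the MTLBuildConfig type from the code further down; getLastCommitTime, getLastSuccessTime, and the threshold value are assumptions for illustration.

// Sketch of isNeedRebuildForConfig following the rule above; the helper methods
// and the threshold value are assumptions for illustration.
private static final long REBUILD_THRESHOLD_MS = 10 * 60 * 1000L;

private boolean isNeedRebuildForConfig(MTLBuildConfig moduleConfig) {
    long lastCommitTime  = getLastCommitTime(moduleConfig);   // latest commit of the bundle
    long lastSuccessTime = getLastSuccessTime(moduleConfig);  // last successful integration
    // If the last successful integration happened well after the latest commit,
    // the bundle is already up to date and does not need to be rebuilt.
    if (lastSuccessTime - lastCommitTime > REBUILD_THRESHOLD_MS) {
        return false;
    }
    // Otherwise the bundle joins the packaging queue
    return true;
}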

A set consisting of the main project plus its sub-bundles is added to the packaging task queue as one overall integration task. Since there was no way to receive the MetaQ message callback when Ferris Wheel finishes packaging, we have to poll for the result: first check whether the sub-bundles have finished, and if so, trigger the main project build; if no sub-bundles are being built, check whether the main project has finished.


     
private void triggerMTLBuildInterval(FMPackageTask task, MTLProduct product, int mtlProjectId) {
    // Resolve the sub-module build configurations
    ArrayList<MTLBuildConfig> modulesConfigs = gitlabMTLBridge.getModuleBuildConfigList(mtlProjectId);
    if (modulesConfigs != null) {
        for (MTLBuildConfig moduleConfig : modulesConfigs) {
            boolean rebuild = isNeedRebuildForConfig(moduleConfig);
            if (!rebuild) {
                continue;
            }
            // If the module is currently being packaged, cancel the running build before retriggering
            MTLBuildResult latestBuildResult = mtlService.getLatestBuildResult(moduleConfig.id, null);
            if (latestBuildResult != null) {
                String status = latestBuildResult.buildStatus;
                if (status.equals(MTLBuildStatus.RUNNING.getValue()) ||
                        status.equals(MTLBuildStatus.WAITING.getValue())) {
                    mtlService.cancelBuildTask(product.rpc_key, latestBuildResult.id);
                    logger.info(moduleConfig.toString());
                }
            }
            // Perform the packaging operation
            int resultId = triggerBuildWithConfig(moduleConfig);
            if (resultId != 0) {
                task.moduleConfigs.add(moduleConfig);
            }
        }
    }
    // If there are sub-projects, they are built first and the main project afterwards;
    // if there are none, build the main project directly
    if (task.moduleConfigs.isEmpty()) {
        triggerBuildWithConfig(task.mainConfig);
    }
}

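The method above only triggers the builds. Since the result has to be polled, here is a hedged sketch of what the polling step described earlier might look like; pollPackageTask and its scheduling are assumptions for illustration, while the types and service calls are reused from the snippet above.

// Hedged sketch of the polling step; the method itself and its scheduling are assumptions.
private void pollPackageTask(FMPackageTask task) {
    if (!task.moduleConfigs.isEmpty()) {
        // Sub-bundles are still in flight: check whether every one of them has finished
        for (MTLBuildConfig moduleConfig : task.moduleConfigs) {
            MTLBuildResult result = mtlService.getLatestBuildResult(moduleConfig.id, null);
            if (result == null
                    || result.buildStatus.equals(MTLBuildStatus.RUNNING.getValue())
                    || result.buildStatus.equals(MTLBuildStatus.WAITING.getValue())) {
                return; // keep polling
            }
        }
        // All sub-bundles have finished: trigger the main project build
        task.moduleConfigs.clear();
        triggerBuildWithConfig(task.mainConfig);
    } else {
        // No sub-bundles in flight: check whether the main project build has finished
        MTLBuildResult mainResult = mtlService.getLatestBuildResult(task.mainConfig.id, null);
        if (mainResult != null
                && !mainResult.buildStatus.equals(MTLBuildStatus.RUNNING.getValue())
                && !mainResult.buildStatus.equals(MTLBuildStatus.WAITING.getValue())) {
            sendApplicationEvent(task); // broadcast the final result, see below
        }
    }
}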

Exceptions also need handling: if any sub-bundle fails to build, the entire build task is cancelled. When the build completes, the result is broadcast through an ApplicationEvent, and the services that need it listen for the event and do their own processing.

                                                                                
     
/**
 * Broadcast the build event
 * @param task
 */
private void sendApplicationEvent(FMPackageTask task) {
    ApplicationEventMTLPackage event = new ApplicationEventMTLPackage(context);
    event.task = task;
    context.publishEvent(event);
}

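For reference, here is a hedged sketch of how a downstream service, for example the one that kicks off CI tests, might consume this event through Spring's event listener mechanism; the class name and the handling inside are assumptions for illustration.

import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Service;

@Service
public class CITestTriggerService {

    /**
     * Listens for the build result broadcast above and triggers the related CI tests.
     */
    @EventListener
    public void onPackageFinished(ApplicationEventMTLPackage event) {
        FMPackageTask task = event.task;
        // Derive the change scope from the task and trigger the related CI tests;
        // the actual downstream call is omitted here.
    }
}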

Next comes the third question: how can CI tests be triggered when automatic integration ends?

3. Integration test

Now that we have the build result, the relevant CI tests are triggered regardless of success or failure. How do we determine the scope of these test jobs so as to improve test efficiency?

First of all, two problems should be solved:

1. How to define the test scope?

For clients, page-based regression is appropriate, so the protocol with the test system is to run regression by page scheme. The advantage is that related parameters can be customized, and Weex and Flutter pages are also supported.

2. How to determine the test scope?

As mentioned earlier, requirements, code, and builds are now associated, and each integration has its related driving events.

  1. Merge request: from the related MRs, a list of commits can be obtained

  2. Push: each push also yields a list of commits

  3. Mechanical trigger: a list of commits within a specified time interval can be obtained

From each of these three sources we can get lists of commits, and from those, the changed files, the people who changed them, and the associated requirements. With this information in hand, we can determine the scope of the code changes.

Every time a CI test runs, we know which requirement's changes are being tested.

Example code is as follows:


     
/**
 * Get the modification scope
 * @param projectId
 * @param commits
 * @return
 */
public FMCITriggerParam getChangeScope(int projectId, String branch, ArrayList<GitlabCommit> commits) {
    // Get the platform information
    Repo projectRepo = repoMapper.getRepoByProjectId(projectId);
    String platform = "ios";
    if (projectRepo != null) {
        platform = projectRepo.platform;
    }
    // The committers
    ArrayList<String> authors = getCommitsAuthors(commits);
    // The changed files
    ArrayList<String> changeFiles = commitService.getCommitsChangeFiles(projectId, commits);
    // The modification scope (changed pages)
    ArrayList<String> pages = getPagesByFiles(projectId, changeFiles);
    // The trigger types
    ArrayList<String> triggerTypes = new ArrayList<>();
    triggerTypes.add("uiauto");
    triggerTypes.add("monkey");
    FMCITriggerParam change = new FMCITriggerParam();
    change.projectid = projectId;
    change.platform = platform;
    change.mergerequestid = 0;
    change.branchName = branch;
    change.userlist.addAll(authors);
    change.pages = String.join(";", pages);
    change.triggertype.addAll(triggerTypes);
    return change;
}


With the committer and modification-scope information in hand, the tests can report the relevant errors in a targeted way.

4. Result statistics

Here are the integration counts for one week, using the period July 8 to July 15 as an example (iOS project):

From the top, the first chart shows statistics by branch dimension:

  1. The integration branch Develop triggers the related integrations on a daily basis

  2. The requirement branches have a high trigger volume during the development cycle

Next, by the dimension of the triggering event:

  1. Scheduled triggers account for the majority

  2. The number of push triggers increases during the development cycle

At present relatively few requirements go through integration, so the numbers are small. However, the overall scheme is stable, and related statistics, such as build time and requirement development time, will be refined later.

5. Summary of pitfalls

Some pitfalls were hit during development, and a few notes were taken.

5.1 Axios Network Request

Because of the front-end/back-end separation design, cross-origin problems occur during debugging. The solution is to configure the relevant proxy settings in the Vue config.


     
proxyTable: {
  '/fishci': {
    target: 'http://127.0.0.1:8090', // API port
    // changeOrigin: true, // allow cross-origin requests
    pathRewrite: {
      '^/fishci': '/'
    }
  }
}


The cross-origin problem was solved, but once BUC authentication was integrated, requests needed to be redirected, and the front-end axios cannot intercept a 302 response. To solve this, the server does some extra processing: it converts the 302 response into a 200 response and puts the redirect target into the response body, which the front-end axios interceptor then handles.

First, add the relevant filter configuration: registration.addInitParameter("HTTP_302_JSON_RESPONSE", "json"). When the front end sends a request whose path ends in json and the request needs to be redirected, it receives a 200 response, with the redirect content carried as text in the response body, which is then intercepted on the front end.
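To make the server-side idea concrete, here is a hedged sketch of how a servlet filter could turn a redirect into a 200 response that carries the target in the body; this is illustrative only, not the actual SSO filter, and the hasError/redirectUrl fields are assumptions chosen to match the front-end interceptor shown below.

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import org.springframework.web.filter.OncePerRequestFilter;

public class Redirect302ToJsonFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Wrap the response so that a redirect is captured and rewritten as a 200 JSON body
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
            @Override
            public void sendRedirect(String location) throws IOException {
                setStatus(HttpServletResponse.SC_OK);
                setContentType("application/json;charset=UTF-8");
                getWriter().write("{\"hasError\":true,\"redirectUrl\":\"" + location + "\"}");
            }
        };
        chain.doFilter(request, wrapper);
    }
}

The front-end interceptor below then checks the hasError flag and performs the jump itself.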

                                                                                        
     
axios.interceptors.response.use((response) => {
  if (response.status === 200 && response.data.hasError) {
    return window.location = "<redirected link>";
  }
  return response;
}, function (error) {
  return Promise.reject(error);
});


5.2 Integration rate limiting

The packaging logic for a single requirement is relatively simple, but because a push or merge triggers a full build, the frequency is high, so rate-limiting logic is needed, as shown in the following figure:

Two queues hold the current packaging tasks: an execution queue and a wait queue.

  1. A new Feature2 arrives while a task for the same feature is already being integrated, so it is put into the wait queue; any duplicate task already in the wait queue is removed

  2. Feature5 has no similar task currently running, so it is added directly to the execution queue

  3. The execution queue has reached its maximum capacity of 5, so Feature6 is added to the wait queue

When a task in the execution queue finishes, a task that has not yet been built is taken from the wait queue and moved into the execution queue.
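As a hedged sketch of this two-queue scheme, the class below mirrors the rules described above; the class, field names, and the build call are assumptions for illustration, not the platform's actual implementation.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;
import java.util.Queue;

public class PackageTaskScheduler {

    // Hypothetical task with a key identifying the feature/requirement it belongs to
    public static class IntegrationTask {
        final String featureKey;
        IntegrationTask(String featureKey) { this.featureKey = featureKey; }
    }

    private static final int MAX_RUNNING = 5;                       // execution queue capacity
    private final Queue<IntegrationTask> running = new LinkedList<>();
    private final Deque<IntegrationTask> waiting = new ArrayDeque<>();

    public synchronized void submit(IntegrationTask task) {
        boolean duplicateRunning = running.stream()
                .anyMatch(t -> t.featureKey.equals(task.featureKey));
        if (duplicateRunning || running.size() >= MAX_RUNNING) {
            // Same feature already integrating, or the execution queue is full:
            // drop any older duplicate in the wait queue and enqueue the new task
            waiting.removeIf(t -> t.featureKey.equals(task.featureKey));
            waiting.addLast(task);
        } else {
            running.add(task);
            startBuild(task);
        }
    }

    public synchronized void onTaskFinished(IntegrationTask finished) {
        running.remove(finished);
        // Move the next waiting task, if any, into the execution queue
        IntegrationTask next = waiting.pollFirst();
        if (next != null) {
            running.add(next);
            startBuild(next);
        }
    }

    private void startBuild(IntegrationTask task) {
        // Trigger the Ferris Wheel build for this task (placeholder)
    }
}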

6. Conclusion

This article mainly sorts out our practice of effectively improving client engineering efficiency under the swimlane development mode. Now that the main process is linked up end to end, we can collect targeted statistics, such as build time and measures of test effectiveness, to get an overall measure of client integration efficiency, and in turn optimize the client integration solution.

Test verification is very important in the whole scheme, so how do we test efficiently? In principle, tests with relatively low cost can run more frequently, such as code checks and unit tests; tests with relatively high cost can run less frequently, such as UI automation. In general, requirements, code, and builds can now be correlated, so we can count the number of code commits, builds, and bugs for a requirement. In reverse, this gives metrics for the requirements themselves, such as how well a requirement was split and the length of its development cycle. The ultimate goal, as a client team, is to iterate on the business quickly, improve collaboration efficiency between teams, and thereby improve overall effectiveness. At Xianyu we advocate solving problems in an automated, unattended way; if you are an engineer who pursues technology, you are welcome to join us.

Resume: [email protected]

Scan the QR code to follow the [Xianyu Technology] official account

Reference documentation

  1. Git Workflow Guide

  2. iView – A high quality UI Toolkit based on Vue.js

  3. Spring Boot

  4. GitHub – axios/axios: Promise based HTTP client for the browser and node.js

  5. Vue.js – https://cn.vuejs.org/