Author: Bot Girl, Dali Intelligence QA team
BytePush: grounded in business scenarios, reaching the people who can solve problems, and delivering intelligent early warning
Background
Delivering value quickly while guaranteeing quality has always been the goal our intelligent QA team pursues through continuous test improvement, and efficient value delivery is the direction that practices such as agile development and continuous delivery keep pushing toward.
At this stage, quality-assurance tool platforms have sprung up across Byte as needed, and shifting testing left has become increasingly normal: code coverage, branch inspection, performance testing, and so on have all moved left into the RD self-test phase, with QA carrying out risk assessment after testing. Risk assessment currently works like this: RD and QA jointly define the entry and exit criteria, and at each entry/exit node QA collects whether the indicators meet the standard. To reach the standard as soon as possible, the responsible RD and QA keep reminding and chasing people, so every iteration costs 1 QA plus 1 RD, two full people, to follow the whole process.
To solve these pain points, our QA team built a process control center plus a quality-and-efficiency model that runs through the whole development and testing process. With reasonable process configuration, it can adapt to various development modes.
Goal
Acting as a risk-control layer (over process, people, and system), it runs through the whole R&D life cycle: it discovers and exposes quality and process problems through various cards combined with measurement standards, converges the data through quality operations, continuously analyzes and upgrades the standards, and solidifies repeated experience into a closed loop.
Difficulties
- When every data-source platform supports business-specific configuration, how do we generalize the data sources so that each business can customize the content its cards push?
- When different businesses have different requirements for push policy, push target, and push mode, how do we let each business customize its own push policy?
Pain points
The biggest pain points are at the code level:
- First impression of a robot: call an open API, parse the response, push a card
- Consequences:
  - It only meets the current business requirement; it cannot support diversified, customized requirements or respond to changing business needs in time
  - Once multiple business directions are supported, large amounts of redundant code pile up, making requirement changes difficult
  - Because of the first two points and that first impression, many people write a card in a rush and then run away from maintaining it
To solve the above pain points:
Change the database usage mode
- Original: hand-writing SQL for every query

```python
def search_branch_message(branch_name, rid, rd_name, version, src):
    # one more near-identical raw SQL string for every query variant
    sql = ("SELECT * FROM branch_message "
           "WHERE branch_name = %s AND rid = %s AND rd_name = %s "
           "AND version = %s AND src = %s")
    args = (branch_name, rid, rd_name, version, src)
    rows = db_base.search_mysql_args(sql, args)
    return len(rows)
```
- Current: BytePush wraps a thin ORM layer on top of aiomysql; callers only need to pass parameters according to the rules and call the methods

```python
def insert_by_params(mode=None, model=None, data=None):
    """Write entry point.

    mode:  required, "add", "update" or "delete"
    model: required, database table name
    data:  required, the row data to write
    """
    s = DBORM.Select()
    task = DBORM.insert_task_pool(mode=mode, models=model, data=data)
    result = s.insert_many(task)
    s.close()
    return result


def get_search_result(self, need_type=None, dimension_type=None, dimension_param=None):
    """Query entry point.

    need_type:       required, table names to query
    dimension_type:  optional, dimension (person, requirement, or version)
    dimension_param: optional, query parameter for that dimension
    """
    em = exception_message(mode="group")
    tasks = []
    for need_type_ in need_type.keys():
        try:
            if hasattr(em, need_type_):
                exp_message = getattr(em, need_type_)
                if dimension_type and dimension_type in ["rid", "manager"]:
                    self.select_params[dimension_type] = dimension_param
                tasks += exp_message(self.select_params)
        except AttributeError:
            print("no such item")
    result = self.s.select_many(tasks)
    return result
```
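The benefit of the ORM layer is easiest to see in miniature. Below is a minimal, self-contained sketch of the idea: the caller passes a table name and a filter dict, and the helper builds the parameterized SQL, so nobody hand-writes (and mis-writes) WHERE clauses any more. The names here (`build_select`) are illustrative, not BytePush's actual API.

```python
def build_select(model, params):
    """Build a parameterized SELECT for table `model` from a filter dict.

    Returns (sql, args) ready to hand to a DB-API cursor; the column
    values are never interpolated into the SQL string itself.
    """
    if not params:
        return f"SELECT * FROM {model}", ()
    clauses = " AND ".join(f"{col} = %s" for col in params)
    return f"SELECT * FROM {model} WHERE {clauses}", tuple(params.values())


# illustrative usage: filter values are hypothetical
sql, args = build_select(
    "branch_message",
    {"branch_name": "feature/login", "rid": 42, "src": "meego"},
)
```

Every query variant now differs only in its parameter dict, which is exactly what makes a single generic entry point like `get_search_result` possible.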
Revamp card generation
- Original: a single "get bugs" interface was reused across many cards, each with its own ad-hoc parsing
- Now: set up an indicator factory, for example:
- To generate a version summary card, just pick indicators from the indicator factory: Meego defects: bug_message; Slardar memory leaks: mom_message; Slardar crashes: crash_message
- To generate an open_bug timeout card, just pick the indicator from the indicator factory: Meego bugs: bug_message
```python
need_exceptions_rid = {
    "h_owner_schedule_notice": ["node_name"],
    "bug_message": ["priority_name"],   # aggregated
    "crash_message": ["crash_type"],    # aggregated
    "mom_message": [...],               # aggregated
}


class Base_Message(object):
    @staticmethod
    def just_show_search_by_params(model=None, message_select_args=None):
        s = DBORM.Select()
        task = DBORM.task_pool(models=[model], message_select_args=message_select_args)
        return s.select_many(task)

    @staticmethod
    def many_together_contain(model=None, message_select_args=None):
        return DBORM.task_pool(models=[model], message_select_args=message_select_args)


class Mode_Message(Base_Message):
    def __init__(self, mode=None):
        Base_Message.__init__(self)
        self.mode = mode

    def mode_message(self, model, args):
        if self.mode == "single":
            return self.just_show_search_by_params(model, args)
        elif self.mode == "group":
            return self.many_together_contain(model, args)


class exception_message(Mode_Message):
    def __init__(self, mode):
        Mode_Message.__init__(self, mode)
        self.mode = mode

    def bug_message(self, args=None):
        return self.mode_message("bug_message", args)

    def crash_message(self, args=None):
        return self.mode_message("crash_message", args)

    def mom_message(self, args=None):
        return self.mode_message("mom_message", args)

    def code_coverage_message(self, args=None):
        return self.mode_message("code_coverage_message", args)
```
Support user-defined configuration parameters
- Original: parsing on demand led to huge amounts of redundant code
- Now: create a field factory
- For generic fields, just pass the key:value of the required field
- For user-defined configuration fields, pass the corresponding dict of fields
```python
class MeegoPayload(CommonInterface):
    def __init__(self, src, condition_params_dict, SearchType, defined_condition_params=None):
        CommonInterface.__init__(self, src)
        self.SearchType = SearchType
        self.defined_condition_params = defined_condition_params
        self.condition_params = condition_params_dict.keys()
        if "planning_version" in self.condition_params:
            self.planning_version_value_list = condition_params_dict["planning_version"]  # planned version
        if "actual_online_version" in self.condition_params:
            self.actual_online_version_value_list = condition_params_dict["actual_online_version"]  # actual release version
        if "business" in self.condition_params:
            self.bussiness_value_list = condition_params_dict["business"]  # business line
        if "stage" in self.condition_params:
            self.view_value_list = condition_params_dict["stage"]  # stage view
        if "work_item_status" in self.condition_params:
            self.state_value_list = condition_params_dict["work_item_status"]  # work-item status
        if "resolve_version" in self.condition_params:
            self.resolve_version_value_list = condition_params_dict["resolve_version"]  # resolved-in version
        if "linked_story" in self.condition_params:
            self.linked_story_value_list = condition_params_dict["linked_story"]  # linked story
        if "priority" in self.condition_params:
            self.priority_value_list = condition_params_dict["priority"]  # priority
        if "discovery_version" in self.condition_params:
            self.discovery_version_value_list = condition_params_dict["discovery_version"]  # version the defect was found in
```
Build a shell project
- Upper layer: configuration acquisition (common configuration parameters, version, business line)
- Middle layer: data collection (all kinds of atomic data gathering)
- Lower layer: data consumption (pushing cards along each dimension)
```python
def process_slave(self, start_time, card_config):
    # -- upper layer: read the configuration --
    need_params = card_config["need_params"]
    trigger_mode = card_config["trigger_mode"]
    send_mode = card_config["send_mode"]
    params = {"stage": ["all_story"]}
    if self.mode not in trigger_mode:
        print("this trigger is not supported by the configuration")
        return
    need_func, params_process, need_type = self.process_master(need_params)

    # -- middle layer: data collection --
    version = self.get_version_by_config()
    params[need_params["version_type"]] = self.mv.get_work_items_version(version)
    if not self.business or len(self.business) == 0:
        params["business"] = self.mv.get_bussinesses()
    else:
        params["business"] = self.business
    self.insert_gather_result(need_func, params_process, params)

    # -- lower layer: data consumption, card push --
    select_params_1 = {"task_id": start_time}
    if "group" in send_mode:
        self.send_list_to_chat(select_params_1, card_config, need_type)
    if "single" in send_mode:
        self.send_person_chat(select_params_1, card_config, need_type)
```
Plan
Overall flow chart
Stage 1 (Automation)
- Goals:
- Automatically identify the project form, current version, and current phase
- Automatically identify the process quality status
- Create an initial rule model: people write the rules in advance, and the tool makes decisions based on those established rules
- Path to reach:
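The initial rule model of Stage 1 can be pictured as a table of hand-written checks that the tool evaluates against the indicators it has collected. The sketch below is hypothetical: the rule names, thresholds, and metric keys are illustrative assumptions, not BytePush's actual rule set.

```python
# Stage 1: people define the rules up front, the tool only evaluates them.
# All rule names and threshold values here are made up for illustration.
RULES = {
    "open_bug_timeout": lambda m: m["open_bug_hours"] <= 48,
    "crash_rate_ok":    lambda m: m["crash_rate"] < 0.001,   # below 1 per mille
    "coverage_ok":      lambda m: m["line_coverage"] >= 0.60,
}


def evaluate(metrics):
    """Return the names of all rules the current metrics violate."""
    return [name for name, check in RULES.items() if not check(metrics)]


# hypothetical snapshot of one version's indicators
violations = evaluate({"open_bug_hours": 72, "crash_rate": 0.0004, "line_coverage": 0.65})
# a card would then be pushed for each violated rule
```

Keeping the rules in one declarative table is what later allows them to be replaced by learned thresholds (Stage 3) without touching the evaluation loop.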
Stage 2 (Platformization)
- Goal:
- Upper-layer policy market: provides process indicators and baseline values, and discovers and exposes existing risk points through the bound cards
- Lower-layer policy market: each business configures its own card push policies (push time, push mode, push target)
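A lower-layer policy-market entry might look like the dict below: one record per business per card, saying when, how, and to whom it is pushed. The keys and values are an assumed shape for illustration, not BytePush's real schema.

```python
# Hypothetical policy-market record: every key here is an assumption.
push_policy = {
    "business": "learning_lamp",
    "card": "open_bug_timeout",
    "push_time": ["09:30", "19:30"],           # morning and evening
    "push_mode": ["group", "single"],          # group list plus personal urgent
    "push_target": {"group": "qa-rd-chat", "single": "bug_owner"},
    "trigger_mode": "cron",
}


def should_push(policy, now_hhmm, mode):
    """Decide whether this policy fires at time `now_hhmm` for the given mode."""
    return now_hhmm in policy["push_time"] and mode in policy["push_mode"]
```

Because the policy is pure data, adding a new business or changing a push window is a configuration edit, not a code change, which is the whole point of the platformization stage.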
Stage 3 (Intelligence)
- Goal: automatically predict risk based on business status
- Automatically set standard thresholds based on the project form
- Automatically estimate the required staffing ratio from the number of version requirements and the R&D estimates
- Automatically identify risk (i.e., whether the project can move to the next phase, stage, or version)
- Path to reach:
- Collect data from each version and calibrate it manually
- Rules are no longer fixed or hand-written but continuously learned from data; once the rule model has accumulated enough data, a model algorithm is introduced to make intelligent judgments
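As a toy illustration of "thresholds learned from data instead of set by people", a first step could be deriving each threshold from the accumulated per-version history, e.g. as a percentile. The function name, the percentile choice, and the sample data below are all illustrative assumptions, not the team's actual algorithm.

```python
# Sketch: derive a rule threshold from history instead of hand-setting it.
def learned_threshold(history, percentile=0.9):
    """Return the value below which `percentile` of past observations fell."""
    ordered = sorted(history)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index]


# made-up calibration data: hours-to-resolve for bugs across past versions
past_resolution_hours = [10, 12, 15, 18, 20, 24, 30, 36, 48, 72]
threshold = learned_threshold(past_resolution_hours)
# anything slower than `threshold` would be flagged as a risk
```

Once enough versions have been calibrated this way, a percentile can be replaced by a proper model without changing how the rules are consumed.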
Technical Architecture Diagram
Landing strategies
- Push cards
- Normal push, urgent push
- Personal push, group push, leader push
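The two push dimensions above (urgency and audience) compose freely, which can be sketched as a tiny routing function. The channel names and signature are hypothetical, chosen only to show the combination.

```python
# Sketch of routing one card by urgency and audience.
# Channel names ("urgent_dm", "normal_feed") are illustrative assumptions.
def route_card(urgency, audiences):
    """Map a card's urgency and audience list to (channel, target) pairs."""
    channel = "urgent_dm" if urgency == "urgent" else "normal_feed"
    return [(channel, audience) for audience in audiences]


routes = route_card("urgent", ["single", "group"])
```

Keeping urgency and audience orthogonal means a new audience type (e.g. "leader") needs no new push code path.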
- Generate quality reports
- Weekly reports, monthly reports, import into the measurement platform
- Evaluation
- Continuously analyze core data and output quality reports, forming a closed loop of [process planning - formulation - inspection - improvement and promotion]
Benefits
- Manpower and target benefits
- BytePush has been adopted by 16+ projects, saving each project 0.5 head count for developing and maintaining robot cards
- BytePush has been connected to 16+ projects, saving each project 1-2 full head count for following up business iterations
- The fill-in rate for schedules and owners is 100%, and the on-time transition rate is 95%, which makes the efficiency-measurement data reliable enough to calibrate against
- The timely bug-resolution rate increased by 10%
- The proportion of changes merged on the scheduled integration day increased by 50%
- The overall crash rate stayed below 1‰, and the timely resolution rate increased by 50% month over month
- QA no longer needs to track code status during the integration-day quality check, which greatly frees up QA engineers
- Usage frequency
- Schedule reminders, transition reminders, and owner reminders: 32 uses per day
- Open_bug timeout cards: 32 uses per day
- Requirement execution progress: 10 uses per day
- Requirement execution progress around integration day: 1000 uses per day
- Grayscale monitoring cards: 16 uses per day
- Data cards: 16 uses per day on average
Best practices for landing based on template cards
Take the Dali Smart Learning Lamp as an example
Throughout the R&D process, give reminders at key nodes, track the data continuously, and keep converging it
- Process:
- Push schedule reminders, completion reminders, and owner fill-in reminders to individuals and groups at fixed times every day
- During development, rebase automatically at midnight every day
- Push each business's requirement execution progress every morning
- When a feature branch submits an MR, trigger the requirement quality check in the pipeline
- Bug data:
- From 2 days before integration day until the regression stage, push unresolved timeout bugs at fixed times (group list, personal urgent push)
- From 2 days before integration day to integration day itself, push the feature combination card and the list of unmerged branches every morning and evening
- From the day after integration day until grayscale, push the version code card every morning and evening
- During grayscale, push grayscale monitoring cards every morning and evening
- During the release period, push online monitoring cards every morning and evening
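The push cadence above is stage-driven, so it can be expressed as a small schedule table that a scheduler iterates over. The stage and card names below paraphrase the list above; the data shape itself is an illustrative assumption, not the real configuration format.

```python
# Hypothetical stage-driven schedule table mirroring the process above.
SCHEDULE = [
    ("development",      "schedule_reminder",     ["daily"]),
    ("development",      "requirement_progress",  ["morning"]),
    ("pre_integration",  "open_bug_timeout",      ["timed"]),
    ("pre_integration",  "feature_combination",   ["morning", "evening"]),
    ("post_integration", "version_code_card",     ["morning", "evening"]),
    ("grayscale",        "grayscale_monitoring",  ["morning", "evening"]),
    ("release",          "online_monitoring",     ["morning", "evening"]),
]


def cards_for(stage, time_of_day):
    """All cards to push at this point in the R&D process."""
    return [card for s, card, times in SCHEDULE
            if s == stage and time_of_day in times]
```

With the cadence as data, moving a project into the next stage automatically switches which cards fire, with no per-project push code.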
- Quality Report:
- Continuously iterate and repeatedly polish the standards
Scope of application
The current Learning Lamp best practices are already supported in other project versions:
- Release-train ("bus") systems
- Versioned systems
- Non-versioned systems
Future plans
- Develop various quality operation templates, e.g. a test-effectiveness performance section, etc.
- Open up the shell project, providing custom collection and custom output functions
- Use machine-learning algorithms and historical data to achieve intelligent business early warning
We're hiring
- 👉 [Experienced hires] Test Development (Senior) Engineer
- 👉 [2022 early-batch autumn campus recruitment] Test Development Engineer