Software Engineering 60′

I. Software process

1. Concept of software process

A:

1) **The software process describes who does what, when, and how in order to achieve the specific goal of developing the software that customers need.** ISO 9000 defines a process as: "a system of activities that uses resources to transform inputs into outputs". (Introduction to Software Engineering P14)

2) Processes define the sequence of methods to be applied, the documentation to be delivered, the management actions to be taken to ensure software quality and coordinate change, and milestones to mark the completion of tasks in each phase of software development. (Introduction to Software Engineering P14)

3) A software process is a series of related processes within the software life cycle. A process is a collection of activities, and an activity is a collection of tasks. (Software Engineering PPT of Fudan University)

Software processes have three meanings:

Individual meaning:

Refers to a particular kind of activity in the life cycle of a software product or system, such as the software development process or the software management process;

Overall meaning:

Refers to the totality of all such processes of the software product or system over its life cycle;

Project meaning:

Applies the principles and methods of software engineering to construct a software process model, instantiates it according to the specific requirements of the software product, and runs it in the user's environment, so as to improve software productivity and reduce cost.

2. Characteristics of classic software process model (waterfall model, incremental model, evolution model, unified process model)

A:

The waterfall model

  • The waterfall model divides the software life cycle into six basic activities: requirements analysis, specification, design, coding, testing, and operation and maintenance. It fixes their top-down order, like water cascading down a waterfall.

  • The waterfall model emphasizes the role of documentation and requires careful validation at each stage.

  • The waterfall model's linear process is too idealized; it is no longer suitable for modern software development and has been almost abandoned by industry. The main problems are as follows:

    1. The division of each stage is completely fixed, and a large number of documents are generated between stages, which greatly increases the workload;

    2. As the development model is linear, users can only see the development results at the end of the whole process, thus increasing the risk of development;

    3. Early errors may not be discovered until later in the development testing phase, resulting in serious consequences.

The incremental model

Like building a building, software is built step by step.

In the incremental model, software is designed, implemented, integrated, and tested as a series of incremental builds; each increment consists of multiple interacting modules that together provide a specific function.

  • The incremental model does not deliver a complete product at each stage, but rather a working product that satisfies a subset of the customer’s needs.
  • The incremental model focuses on delivering a runnable product for each increment.
  • The whole product is broken down into components, and developers deliver the product component by component. The benefit of this is that software development can better adapt to change, and customers can constantly see the software being developed, thus reducing development risk.

However, the incremental model also has the following drawbacks:

1. Since each component is gradually integrated into the existing software architecture, the addition of components must not destroy the constructed part of the system, and the software is required to have an open architecture.

2. Changes in requirements are inevitable during development. The flexibility of the incremental model makes it far more capable of handling such changes than the waterfall and rapid-prototyping models, but it can also easily degenerate into a build-and-fix model, losing the integrity of software process control.

When using the incremental model, the first increment is often the core product that implements the basic requirements. After the core product is delivered to the user, the next incremental development plan is formed after evaluation, which includes the modification of the core product and the release of some new functions. Each process is repeated after each incremental release until the final, polished product is produced.

Evolution model

The evolutionary model is a life-cycle model covering the whole software (or product) life cycle and belongs to the iterative development approach.

Based on the users' basic needs, an initial runnable version of the software, usually called the prototype, is constructed through rapid analysis. Then, based on the users' feedback and suggestions from using the prototype, it is improved to obtain a new version. Repeating this process eventually yields a software product that satisfies the users.

The development process of adopting evolutionary model is actually the process of gradually evolving from the initial prototype to the final software product. Evolution models are particularly useful when there is a lack of accurate understanding of software requirements.

Disadvantages:

1. If all product requirements are not completely clear at the beginning, it will bring difficulties to the overall design and weaken the integrity of product design, thus affecting the optimization of product performance and product maintainability;

2. Without rigorous process management, the lifecycle model is likely to degenerate into a primitive, unplanned trial-and-error model;

Unified Process model

RUP/UP (Rational Unified Process) is a use-case-driven, architecture-centric, iterative and incremental software process model supported by UML methods and tools; it is widely used in all kinds of object-oriented projects.

RUP was developed and maintained by Rational Corporation and is tightly integrated with a family of software development tools. RUP contains a number of good practices, known as “best practices.”

Best practices include:

  • Develop software iteratively
  • Manage requirements
  • Use component-based architectures
  • Model software visually
  • Continuously verify software quality
  • Control software changes

RUP’s static structure consists of six core workflows (business modeling, requirements, analysis and design, implementation, test, deployment) and three core supporting workflows (configuration and change management, project management, and environment).

RUP divides the software lifecycle into four consecutive phases. Each stage has clear goals, and milestones are defined to evaluate whether these goals are being met. The goals of each phase are accomplished through one or more iterations.

The four phases are: the inception phase, the elaboration phase, the construction phase, and the transition phase.

The RUP model adopts iterative development: the different workflows are executed many times, so that one part of the requirements is analyzed, its risks identified, and that part designed, implemented, and validated; then the next part of the requirements is analyzed, designed, implemented, and validated, and so on until the whole project is complete. In this way the requirements are understood gradually and a robust architecture is built step by step.

3. Basic concepts of process assessment and CMM/CMMI

A:

1) The Capability Maturity Model (CMM), established by the Software Engineering Institute (SEI) of Carnegie Mellon University (US) with the support of the US Department of Defense in the late 1980s, is used to evaluate the software process capability maturity of software organizations. When it was first established, the main purpose of the model was to provide a method for evaluating the capability of software contractors and to provide a comprehensive, objective basis for evaluating bids on large software projects. Later, it was also used by software organizations to improve their software processes. (Software Engineering PPT of Fudan University)

The CMM provides a maturity level framework:

  • Level 1 – Initial
  • Level 2 – Repeatable
  • Level 3 – Defined
  • Level 4 – Managed
  • Level 5 – Optimizing

2) The U.S. Department of Defense, the U.S. Defense Industry Board, and SEI/CMU launched the CMMI project in 1998, hoping that CMMI would be the synthesis and improvement of several process models and a systematic and consistent process improvement framework supporting multiple engineering disciplines and fields, which could adapt to the characteristics and needs of modern engineering and improve process quality and efficiency. (Software Engineering PPT of Fudan University)

3) The CMMI model provides two representations for each combination of disciplines: a staged representation and a continuous representation.

4. Characteristics of the Agile Manifesto and agile processes

1) Agile Manifesto:

(Software Engineering PPT of Fudan University)

(1) Individuals and interactions over processes and tools

This does not deny the importance of processes and tools, but emphasizes the role of people and communication in software development. Software is developed by a team of people; only through full communication and effective cooperation can the various people involved in a software project successfully develop software that satisfies users.

Even with well-defined processes and advanced tools, it is difficult to develop software successfully if the people involved have poor skills or fail to communicate and collaborate well.

(2) Working software over comprehensive documentation

It is much easier to understand what the software does from a working piece of software than to read thick documents.

Agile software development emphasizes the constant and rapid delivery of working software (not necessarily complete software) to users for acceptance.

Good necessary documentation is still needed to help us understand what the software does, how it does it, and how to use it, but the main goal of software development is to create working software.

(3) Customer collaboration over contract negotiation

Only the customer can specify what kind of software is needed, however, extensive practice shows that customers often do not fully express all their requirements in the early stages of development, and some of the requirements identified early may change later.

It is often difficult to pin down requirements through contract negotiations.

Agile software development emphasizes collaboration with customers and responds to their needs by communicating and working closely with them.

(4) Responding to change over following a plan

The development of any software project should include a project plan that identifies priorities and start and end dates for development tasks. However, as the project progresses, requirements, business environment, technology, etc., and task priorities and start and end dates can change for a variety of reasons.

Therefore, the project plan should be flexible, leaving room for change. Respond to changes as they occur and revise the plan to adapt to them.

2) Characteristics of agile processes

(Software Engineering PPT of Fudan University)

(1) Top priority is to satisfy users by delivering valuable software early and continuously

(2) Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

(3) Deliver working software frequently, at intervals ranging from a couple of weeks to a couple of months

(4) Business people and developers must work together every day throughout the project

(5) Establish a project team with positive employees as the center, give them the environment and support they need, and give full trust to their work

(6) The most efficient and effective way of information transmission in the project group is face-to-face communication

(7) The primary basis for measuring project progress is working software

(8) Agile processes promote sustainable development, where project sponsors, developers and users should be able to maintain a constant speed for a long time

(9) Always focus on technical excellence and good design to enhance agility

(10) Simplicity is essential. It is the art of minimising unnecessary work

(11) The best architectures, requirements, and designs come from self-organizing teams

(12) The team should periodically reflect on how it can be more effective and adjust its behavior accordingly

II. Software requirements

1. Concept of software requirements

A:

1. A condition or capability needed by a user to solve a problem or achieve an objective;

2. A condition or capability that a system must meet or possess in order to satisfy a contract, standard, specification, or other formal document;

Functional requirements:

  1. Services or functions the system needs to provide: e.g., book retrieval;
  2. How the system handles certain inputs: e.g., a prompt on illegal input;
  3. The system's behavior in specific situations: e.g., starting a screen saver after a long period of no operation;

Non-functional requirements:

  1. Additional quality constraints on system functions or services, such as response time, fault tolerance, security, etc. – customer concerns (external quality);
  2. Quality attributes from a system development and maintenance perspective, such as understandability, extensibility, etc. – concerns of software developers or maintainers (internal quality, held by software)

2. Basic process of requirements engineering

A:

Requirements acquisition, requirements analysis and negotiation, system modeling, requirements specification, requirements verification, requirements management

3. Hierarchical data flow model

A:

Design method of hierarchical data flow graph

The first step is to draw the system's input and output

By regarding the entire system as one big process, an input/output diagram can be drawn according to which external entities the system receives data flows from and which external entities it sends data flows to. This diagram is called the top-level diagram.

The second step is to draw the system's interior

The process in the top-level diagram is decomposed into several processes connected by data flows, so that the input data flows of the top-level diagram become its output data flows after passing through several processes. This diagram is called the layer-0 diagram. Drawing a data flow diagram from a process is the decomposition of that process. Processes can be identified by finding where a change in the composition or value of a data flow occurs: a process is drawn whose function is to bring about that change. Data that arrives together and is processed together can be treated as a unit, i.e., as a single data flow. As for data stores, data that will be used at a later time can be organized into a data store.

The third step is to draw the inside of the processing

Each process is regarded as a small system, and the input and output data flows of the process are regarded as the input and output flows of that small system. The DFD for each small system can then be drawn in the same way as the layer-0 diagram was drawn.

The fourth step is to draw the decomposition diagram of sub-processing

Repeat the decomposition process of step 3 for each process in the DFD diagram decomposed in step 3 until the undecomposed processes in the diagram are simple enough (i.e. cannot be decomposed again). At this point, a set of hierarchical data flow diagrams is obtained.

The fifth step is to number the data flow diagrams and processes

For a software system, the data flow diagram may have many layers, and each layer may have many diagrams. In order to distinguish different processes and different DFD subdiagrams, each diagram should be numbered for ease of management.

  • There is only one top-level diagram and only one process in it, so it need not be numbered.
  • There is only one layer-0 diagram, and the processes in it are numbered 0.1, 0.2, …, or simply 1, 2, ….
  • A subdiagram's number is the number of the process that was decomposed in its parent diagram.
  • A process in a subdiagram is numbered with the diagram number, a dot, and a serial number, such as 1.1, 1.2, 1.3, and so on.

4. Use case and scenario modeling and UML representation (use case diagrams, activity diagrams, swimlane diagrams, sequence diagrams)

Use case diagram:

A use case is a textual description of a sequence of interactions in which an actor uses some function of the system. As a description of the user's requirements on the system, it expresses the functions and services the system provides. It describes only what the actors and the system do during the interaction, not how they do it.

Activity diagrams:

Activity diagrams are a special case of state diagrams, used to simplify the description of the working steps of a process or operation. An activity is represented by a rounded rectangle, narrower and more oval than the state symbol in a state diagram. Once the processing in one activity completes, it automatically triggers the next activity; an arrow indicates the move from one activity to the next. As in a state diagram, the starting point is represented by a solid circle and the end point by a concentric circle whose inner circle is solid. Activity diagrams can have decision points, where one set of conditions triggers one execution path and another set triggers another, the two paths being mutually exclusive. Decision points are usually represented by small diamond icons, with the condition that causes each path to be taken enclosed in square brackets near that path. (Exercise: use an activity diagram to describe making a phone call.)

Swimlanes:

Swimlanes divide the activities in an activity diagram into groups, and each group is assigned to the business organization responsible for that group of activities. In an activity diagram, swimlanes are drawn with vertical solid lines.

Sequence diagram:

A sequence diagram is an interaction diagram that emphasizes message timing; it describes the chronological order in which messages are passed between objects, representing the sequence of actions in a use case. In this two-dimensional diagram, objects are arranged from left to right and messages are arranged chronologically along the vertical axis. When constructing the diagram, keep the layout simple.

(Example: a sequence diagram for buying a car.)

5. Data Model Modeling and UML Representation (Class Diagram)

A:

A class diagram describes classes, interfaces, collaborations, and their relationships; it shows the static structure of the classes in a system. Class diagrams are the basis for other diagrams: on top of them, state diagrams, collaboration diagrams, component diagrams, and deployment diagrams can further describe other aspects of the system.

In UML, a class is expressed as the descriptor of a group of objects that have the same structure, behavior, and relationships, with attributes and operations attached to the class. A class defines a set of objects that have state and behavior. State is described by attributes and associations: attributes are typically represented by data values that have no identity, such as numbers and strings, while associations are represented by relationships between objects, which do have identity. Behavior is described by operations, and a method is the concrete implementation of an operation. The life cycle of an object is described by a state machine attached to the class.

In UML's graphical notation, a class is represented as a rectangle consisting of three parts: the name of the class, the attributes of the class, and the operations of the class. The class name is at the top of the rectangle, the attributes are in the middle, and the operations are at the bottom. The middle part shows not only the attributes but also their types and initial values; the bottom part can also show each operation's parameter list and return type, as shown in Figure 1.

You should also include information about the class’s responsibilities, constraints, and notes.

6. Behavior Model Modeling and UML Expression (State machine diagrams)

A:

State machine diagrams are used to model the states of an object and the events that cause its state to change. In UML, the state machine diagram is mainly used to establish the dynamic behavior model of an object class or object, representing the sequence of states an object passes through, the events that cause state or activity transitions, and the actions that accompany those transitions. State machine diagrams can also be used to describe use cases and the dynamic behavior of the system as a whole.

III. Software design and construction

1. Concept of software architecture and architecture style

A:

Software architecture (Baidu version): a collection of structural elements with a certain form, i.e., a collection of components, including processing components, data components, and connection components. Processing components are responsible for processing the data; data components are the information being processed; and connection components join the different parts of the architecture into groups and combinations. This definition focuses on distinguishing processing components, data components, and connection components, an approach largely retained in other definitions and methods. Because software systems share certain common characteristics, such a model can be transferred between systems, especially systems with similar quality attributes and functional requirements, and can promote system-level reuse of large-scale software.

Software architecture (specified textbook version) : The software architecture of a program or computer system refers to one or more structures of a system, including software components, externally visible properties of components, and their relationships.

An architecture is not a running program. It is a representation that serves three purposes:

1) Analysis of the effectiveness of the design in meeting established requirements

2) Consider possible architectural options when design changes are relatively easy

3) Reduce risks associated with software builds

Architectural styles: research and practice on software architectural styles promotes design reuse, allowing proven solutions to be reliably applied to new problems. The invariant parts of an architectural style let different systems share the same implementation code, and as long as a system is organized in a common, normative way, its architecture can easily be understood by other designers. For example, if someone describes a system as being in the "client/server" style, we immediately understand how the system is organized and how it works without being given the design details.

Here is Garlan and Shaw’s classification of common architectural styles:

1) Data-flow styles: batch sequential; pipes and filters

2) Call-and-return styles: main program/subroutine; object-oriented style; layered hierarchy

3) Independent-component styles: communicating processes; event systems

4) Virtual-machine styles: interpreters; rule-based systems

5) Repository styles: database systems; hypertext systems; blackboard systems

2. Concept of Design Pattern (from Baidu)

A:

A software design pattern, also simply called a design pattern, is a summary of code design experience that is used repeatedly, known to most developers, and has been classified and catalogued. Design patterns are used to make code reusable, easier for others to understand, and more reliable, and to promote program reuse.
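As a concrete illustration, here is a minimal Python sketch of the classic Strategy pattern; the names `SortStrategy`, `AscendingSort`, and `Report` are hypothetical, not from the text:

```python
from abc import ABC, abstractmethod

class SortStrategy(ABC):
    """Abstract strategy: clients depend only on this interface."""
    @abstractmethod
    def sort(self, data):
        ...

class AscendingSort(SortStrategy):
    def sort(self, data):
        return sorted(data)

class DescendingSort(SortStrategy):
    def sort(self, data):
        return sorted(data, reverse=True)

class Report:
    """Context class: the sorting behavior can be swapped at runtime."""
    def __init__(self, strategy: SortStrategy):
        self.strategy = strategy

    def render(self, values):
        return self.strategy.sort(values)

print(Report(AscendingSort()).render([3, 1, 2]))   # [1, 2, 3]
print(Report(DescendingSort()).render([3, 1, 2]))  # [3, 2, 1]
```

Because the pattern is catalogued and widely known, a reader who sees the name "Strategy" immediately understands the structure without further explanation, which is exactly the point of the definition above.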

3. Basic ideas and concepts of modular design (abstraction, decomposition, modularization, encapsulation, information hiding, function independence) (from textbooks)

1) Modularization: the idea of modular design is to divide the whole software into several independently named and independently accessible components (modules) that together satisfy the requirements of the problem. In this way a large, complex software system can be divided into a simple modular structure that is easy to understand.

Modularization requirements: the information hiding principle must be satisfied, modules should be independent of each other, and the design should achieve high cohesion and low coupling as far as possible.

2) Encapsulation: a kind of information hiding technique. Users can only see the information on the object's encapsulation interface; the internal implementation of the object is hidden from them. The purpose of encapsulation is to separate the use of an object from its production, that is, to separate an object's definition from its implementation. An object usually consists of three parts: its name, its attributes, and its operations.

3) Information hiding: the implementation details of each module are hidden from (inaccessible to) other modules; that is, the information (data and procedures) contained in a module may not be used by other modules that do not need it.

Information hiding means that, during design, factors likely to change are hidden inside a module, which improves maintainability and reduces the propagation of errors.
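The points above can be sketched in Python; the `BankAccount` class below is a hypothetical example of encapsulation and information hiding:

```python
class BankAccount:
    """Clients see only the public interface; the balance field is hidden."""

    def __init__(self):
        self.__balance = 0  # double underscore: name-mangled, hidden from outside

    def deposit(self, amount):
        # The validity rule for deposits lives inside the module
        # that owns the data, so errors cannot propagate outward.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    @property
    def balance(self):
        """Read-only view of the hidden state."""
        return self.__balance

acct = BankAccount()
acct.deposit(100)
print(acct.balance)                 # 100
print(hasattr(acct, "__balance"))   # False: the internal field is not exposed
```

If the internal representation later changes (say, to a list of transactions), only `BankAccount` itself needs to be modified; callers that use the interface are unaffected.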

4. Concept of software refactoring; UML modeling of software architecture (package diagram, class diagram, component diagram, sequence diagram, deployment diagram); (From blogs and textbooks)

A:

1) Refactoring: The process of modifying a software system in such a way as to improve its internal structure without changing the external behavior of the code. This is a rigorous approach to cleaning up the code to minimize the introduction of errors.
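A small illustrative sketch of refactoring, here an "extract function" refactoring applied to a hypothetical invoice function, showing that the external behavior stays unchanged:

```python
# Before refactoring: one function mixes summation and discount logic.
def invoice_total_v1(items):
    total = 0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total *= 0.9  # bulk discount
    return total

# After an "extract function" refactoring: the same external behavior,
# but a clearer internal structure built from small, named pieces.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def apply_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total_v2(items):
    return apply_discount(subtotal(items))

# Behavior preservation is the essence of refactoring:
items = [(20, 3), (30, 2)]
assert invoice_total_v1(items) == invoice_total_v2(items)
```

The assertion at the end is the rigor the definition calls for: every refactoring step is verified against the old behavior so that errors are not introduced.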

2) Package diagram (omitted, not important): a kind of static diagram

3) Class diagram (emphasis): shows the static structure of the system's classes, i.e., the relationships between classes.

For their specific meaning and drawing method, see: blog.csdn.net/monkey_d_me…

4) component diagram

The component diagram shows the static structure of the code. It uses code components to show the physical structure of the code.

5) Sequence diagram (key points)

Used to display the order in which messages are sent between objects, as well as the interactions between objects.

6) Deployment diagram (omitted, not important)

Represents the physical structure of the hardware and software in the system

Illustration: blog.csdn.net/wangyongxia…

5. The concept of interface; object-oriented design principles (open-closed principle, Liskov substitution principle, dependency inversion principle, interface segregation principle)

Open-closed principle:

Software entities should be open for extension but closed for modification: changes to a class are made by adding new code, not by modifying existing source code.

Liskov substitution principle:

Wherever a base class (abstraction) appears, it can be replaced by one of its subclasses (implementation classes); this is essentially what the language-level virtual-method mechanism of object orientation makes possible.

Dependency inversion principle:

Rely on abstractions (interfaces), not concrete implementations (classes), that is, programming for interfaces.

Interface segregation principle:

Users should not be forced to rely on interface methods they do not need. An interface should provide only one external function and should not encapsulate all operations into one interface.
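The principles above can be seen together in one minimal Python sketch; the `Notifier` and `OrderService` classes are hypothetical illustrations, not from the text:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstraction: high-level code depends on this interface,
    not on concrete senders (dependency inversion)."""
    @abstractmethod
    def send(self, message: str) -> str:
        ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

class OrderService:
    """New channels are added by writing new Notifier subclasses,
    without modifying this class (open-closed principle).
    Any subclass can stand in for Notifier (Liskov substitution)."""
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def confirm(self, order_id: int) -> str:
        return self.notifier.send(f"order {order_id} confirmed")

print(OrderService(EmailNotifier()).confirm(7))  # email: order 7 confirmed
print(OrderService(SmsNotifier()).confirm(7))    # sms: order 7 confirmed
```

Note also that `Notifier` exposes a single narrow method, in the spirit of interface segregation: clients are not forced to depend on operations they do not use.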

6. Concepts of cohesion and coupling, common types of cohesion and coupling

A:

“High cohesion, low coupling”

Rationale: module independence means that each module performs only the independent sub-function required by the system and has minimal contact with other modules through simple interfaces. There are two qualitative metrics: coupling and cohesion.

Coupling, also called inter-module connection, is a measure of how closely the modules of a software system are interconnected. The closer the connection between modules, the stronger the coupling and the worse the modules' independence. The degree of coupling between modules depends on the complexity of the inter-module interfaces, the way modules are invoked, and the information transmitted.

Coupling classification (low to high): no direct coupling; data coupling; stamp coupling; control coupling; common coupling; content coupling.

1) No direct coupling: there is no direct connection between the two modules, and the connection between them is completely realized by the control and call of the main module;

2) Data coupling: refers to the call relationship between two modules, passing simple data values, equivalent to the value transfer of high-level languages;

3) Stamp coupling: a data structure is passed between the connected modules, such as an array name, record name, or file name; in a high-level language these names are stamps, and what is actually transmitted is the address of the data structure;

4) Control coupling: when one module calls another, it passes control variables (such as switches or flags), and the called module selectively performs one of its functions according to the value of the control variable;

5) Common coupling: coupling between modules that interact through a common data environment. The complexity of common coupling increases with the number of coupled modules.

6) Content coupling: the highest and worst degree of coupling, which occurs when one module directly uses the internal data of another module, or jumps into another module through an abnormal entry.

Cohesion, also called intra-module connection, is a measure of the functional strength of a module, i.e., how closely the elements within the module are related to one another. The more closely the elements of a module (statements, program segments) are related, the higher its cohesion.

Cohesion classification (low to high): coincidental cohesion; logical cohesion; temporal cohesion; communicational cohesion; sequential cohesion; functional cohesion.

1) Coincidental cohesion: there is no meaningful connection among the processing elements in the module.

2) Logical cohesion: a module performs several logically similar functions, and a parameter of the module determines which function is executed.

3) Temporal cohesion: a module that combines actions that must be performed at the same time is called a temporally cohesive module.

4) Communicational cohesion: all processing elements in the module operate on the same data structure (this is sometimes called informational cohesion), or all of them use the same input data or produce the same output data.

5) Sequential cohesion: it means that all processing elements in a module are closely related to the same function and must be executed in sequence. The output of the previous function element is the input of the next function element.

6) Functional cohesion: this is the strongest cohesion, which means that all elements in the module work together to complete a function. Coupling to other modules is weakest.

Coupling and cohesion are two qualitative standards of module independence. When dividing software system into modules, we should try our best to achieve high cohesion and low coupling, improve module independence, and lay a foundation for designing high-quality software structure.
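To make the coupling types concrete, here is a hypothetical Python sketch contrasting control coupling (the caller passes a flag that steers the callee's behavior) with the looser data coupling (each function receives only the simple data it needs):

```python
# Control coupling: the caller passes a control flag, and the called
# module selects one of its functions according to that flag.
def format_amount_controlled(amount, as_cents):
    if as_cents:
        return int(amount * 100)
    return round(amount, 2)

# Data coupling: each function does one thing and exchanges only
# simple data values with its callers. The modules are more
# independent and easier to reuse and test.
def to_cents(amount):
    return int(amount * 100)

def to_rounded(amount):
    return round(amount, 2)

print(format_amount_controlled(1.5, True))  # 150
print(to_cents(1.5))                        # 150
print(to_rounded(1.239))                    # 1.24
```

The split version also has higher cohesion: each function is functionally cohesive, since all of its elements serve a single purpose.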

IV. Software testing

1. Concept of software testing and test cases;

A:

Software testing is the process of operating the program under specified conditions to find program errors, measure the quality of software, and evaluate whether it can meet the design requirements.

A test case is a set of test inputs, execution conditions, and expected results written for a specific goal to test a program path or verify that a specific requirement is met.

2. Concepts of unit testing, integration testing, validation testing, system testing and regression testing;

A:

Unit testing: also known as module testing, focuses on verifying the smallest unit of software design (the program module). Unit testing exercises the critical control paths according to the design description to find errors within each module. Unit testing is usually white-box, and tests for multiple modules can be carried out in parallel.
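A minimal sketch of a unit test, assuming Python's standard unittest module and a hypothetical clamp() function as the "smallest unit" under test; each test method exercises one control path through the module:

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of the design.
def clamp(value, low, high):
    """Restrict value to the closed interval [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampUnitTest(unittest.TestCase):
    """White-box unit test: one test method per control path through clamp()."""
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)   # path: value < low
    def test_above_range(self):
        self.assertEqual(clamp(15, 0, 10), 10)  # path: value > high
    def test_in_range(self):
        self.assertEqual(clamp(7, 0, 10), 7)    # path: fall-through

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method is a test case in the sense defined above: a specific input, an execution condition, and an expected result.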

Integration testing: also known as assembly testing or joint testing. After unit testing, each module works on its own, but modules assembled together often do not; integration testing assembles the modules according to the design and checks the interfaces and interactions between them.

Validation (confirmation) testing: based on the software requirement specification, checks whether the functions, performance, and other characteristics of the software are consistent with the user's requirements, including all functions and performance stipulated in the contract, the documentation (correct and reasonable), and other requirements such as portability, compatibility, error recovery, and maintainability.

System testing: the software that has passed validation testing is integrated, as one element of the whole computer-based system, with the other system components (hardware, peripherals, supporting software, data, personnel, etc.), and a series of integration and validation tests are carried out on the complete system in its actual operating environment. The purpose of system testing is to compare the system against its requirement definition and find inconsistencies or contradictions between the software and the system definition.

Regression testing: re-testing after existing code has been modified, to confirm that the change neither introduces new errors nor causes errors in other code.

3. The concept of debugging and the relationship between debugging and testing;

A:

The purpose of testing is to find errors; the purpose of debugging is to determine the cause and exact location of the errors found, and to correct them.

The relationship between debugging and testing: testing and debugging differ in goals, methods, and mindset. Testing is a process whose aim is to show that software errors exist; it is usually carried out by software test engineers. Debugging is a means whose aim is to find the cause of an error and fix it; it generally follows testing and is usually carried out by development engineers.

4. The concept of test coverage

A:

Test coverage evaluation is an important means of measuring the execution status of a software test cycle and of determining whether testing meets the preset completion criteria. Test coverage is the quantitative expression of this evaluation; it is generally calculated against the tested product's requirements, function points, test cases, or program code.
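The "quantitative expression" can be sketched as a simple ratio; the statement numbers below are made up purely for illustration:

```python
def statement_coverage(executed, all_statements):
    """Coverage (%) = statements executed by the tests / total statements."""
    covered = set(executed) & set(all_statements)
    return 100.0 * len(covered) / len(all_statements)

# Hypothetical run: the module has statements 1..10, the tests executed 1..8.
print(statement_coverage(range(1, 9), range(1, 11)))  # prints 80.0
```

The same ratio applies to other coverage items (requirements, function points, branches); only the counted unit changes.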

5. Concepts of white-box testing and black-box testing

A:

White-box testing, also known as structural testing, regards the test object as a transparent box. Based on the internal logical structure and related information of the program, the tester designs test cases and checks whether all logical paths in the program work correctly according to the predetermined requirements.

Black-box testing, also called functional testing, treats the test object as a black box. The tester does not consider the program's internal logical structure or internal characteristics at all, but only checks, against the requirements specification, whether the program's functions conform to its functional description.

6. Calculation method of code cyclomatic complexity

A:

Cyclomatic complexity is a measure of code complexity. In software testing it measures the complexity of a module's decision structure, expressed as the number of linearly independent paths, i.e., the minimum number of test cases needed to reasonably guard against errors. High cyclomatic complexity suggests code that is likely to be of low quality and hard to test and maintain; experience shows that the number of potential errors in a program correlates strongly with high cyclomatic complexity. It is calculated as V(G) = e - n + 2p, where e is the number of edges in the control flow graph, n is the number of nodes, and p is the number of connected components of the graph. Since a control flow graph is always connected, p is always 1.

V(G) = number of regions = number of decision nodes + 1

V(G) = R, where R is the number of regions into which the control flow graph divides the plane.
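The edge/node formula can be checked with a small helper; the flow-graph numbers below are hypothetical:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = e - n + 2p; p is 1 for a single connected control flow graph."""
    return edges - nodes + 2 * components

# Hypothetical flow graph of a module with two decision nodes:
# 9 edges, 8 nodes -> V(G) = 9 - 8 + 2 = 3, matching "decision nodes + 1" = 2 + 1.
print(cyclomatic_complexity(edges=9, nodes=8))  # prints 3
```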

7. Basic path test method in white box test

A:

Basic path testing is a white-box testing technique. The method first draws a control flow graph from the program or design and computes its cyclomatic complexity (the number of regions), then determines a set of linearly independent execution paths through the program (the basis paths), and finally designs a test case for each basis path. In practice, even a moderately complex program can have an enormous number of paths, especially when loops are involved, so the number of paths to be covered must be compressed to a manageable limit.
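A sketch of the method on a hypothetical function: counting each if statement as one decision node gives V(G) = 2 + 1 = 3, so three basis paths and three test cases:

```python
def grade(score):
    """Hypothetical unit under test with V(G) = 3."""
    if score < 0 or score > 100:   # decision node 1
        return "invalid"
    if score >= 60:                # decision node 2
        return "pass"
    return "fail"

# One test case per basis path:
assert grade(-1) == "invalid"   # path 1: decision 1 taken
assert grade(75) == "pass"      # path 2: decision 1 false, decision 2 taken
assert grade(40) == "fail"      # path 3: both decisions false
print("all basis paths covered")
```

Note that a stricter condition-level metric would count the compound condition in decision 1 as two predicates; the node-level count is used here for simplicity.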

8. Equivalence class division method in black box test

A:

Because it is impossible to exhaustively test all possible input data, only a few typical inputs can be chosen, with the aim of revealing as many bugs as possible. The equivalence class partition method divides all possible input data into several equivalence classes and then selects one representative datum from each class as a test case. Test cases are composed from both valid and invalid equivalence classes to ensure their completeness and representativeness.
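A sketch on a hypothetical requirement -- "the input is a month number, valid from 1 to 12" -- which yields one valid class and two invalid ones:

```python
def is_valid_month(month):
    """Hypothetical function under test: accepts integers 1..12."""
    return 1 <= month <= 12

# One representative test case per equivalence class:
assert is_valid_month(6) is True      # valid class: 1 <= month <= 12
assert is_valid_month(0) is False     # invalid class: month < 1
assert is_valid_month(13) is False    # invalid class: month > 12
print("equivalence class cases pass")
```

Three test cases stand in for the entire input space, on the assumption that every value within a class is processed by the same logic.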
