I. The Problems
Many testing books devote long chapters to use-case design techniques such as equivalence partitioning, boundary value analysis, error guessing, and cause-effect graphing. In practice, however, these theories rarely give us clear guidance on what to do, especially when the business is complex, modules are tightly coupled, and there are many paths between the input conditions and the output results. Following these methods to the letter may satisfy us psychologically, but it does not effectively improve testing efficiency.
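To be fair, for a single input these textbook techniques do yield concrete cases. Here is a minimal sketch in pytest, assuming a hypothetical validate_age function that accepts integers from 0 to 120; the trouble described above begins when dozens of such fields interact along many business paths, and the techniques say nothing about how to combine them:

```python
# A minimal sketch of equivalence partitioning and boundary value analysis
# applied to one input. validate_age is a hypothetical example function
# that accepts ages in the range 0..120 and rejects everything else.
import pytest

def validate_age(age: int) -> bool:
    return 0 <= age <= 120

# Boundary values sit just inside and just outside each edge of the valid
# range; a mid-range value stands in for the whole valid equivalence class.
@pytest.mark.parametrize("age,expected", [
    (-1, False), (0, True), (1, True),        # lower boundary
    (119, True), (120, True), (121, False),   # upper boundary
    (60, True),                               # representative valid value
])
def test_validate_age(age, expected):
    assert validate_age(age) == expected
```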
Sometimes we simply fall back on our experience (or habits) from writing use cases in previous projects, hoping to be more disciplined in the current one. But most of the time we only have a specification on paper, and the problems that existed in our use-case design persist.
And just when the use-case suite was nearly complete, we found the test cases suddenly in a very awkward position, thanks to the many region-specific features and new requirements that came with the project:
⊙ Once written, the use cases are rarely executed
⊙ Few bugs are found when executing the use cases
⊙ There is no time to add use cases for new functional requirements
⊙ Even when there is time to add them, the use-case structure becomes increasingly messy
⊙ The connection between a feature's use cases and the common use cases is never articulated (each new requirement lists all the changes it involves as the main line, but the data and business links between the feature and the common functionality are diluted across the use cases)
⊙ We know how to execute a use case, but what is it trying to demonstrate? (Most use cases leave the impression that we cannot see the forest for the trees: each covers a single function and cannot be strung into a business flow)
From this series of problems it may seem that test cases bring us more trouble than benefit, and it is precisely the accumulation of such problems in practice that gives us seemingly good reasons to ignore or reject them.
But how confident would we be testing without use cases, or with only hastily sketched ones? It goes without saying that no one wants to go backwards.
II. The Causes
In fact, the problems we run into when writing and designing test cases are only the surface; I think there are several underlying causes:
1. There is no suitable specification
By an “appropriate specification” I mean a “localized specification”. This is the first problem we encounter in testing, and it is usually easy to get used to and then forget. We have plenty of process documents and textbook definitions, but are they appropriate for our current project?
Every test engineer is introduced to testing concepts and terminology at the start of his or her career, along with documents on how to write specifications, how to define bug severity levels, and the main business aspects the software implements. But when the test manager assigns us the use cases for a particular module, how many of us actually know what to write and how to write it well?
In testing forums you can often see posts introducing use-case writing methods, and many replies confused about how to apply them in practice. Why can't we find clear, appropriate standards inside the company and the project team? So we end up copying from books or from previous use cases, with the structure and approach depending on past experience. I am not saying this is wrong, but written-down experience alone does not help.
We have plenty of experience, but no suitable standards.
2. Separation of function and business
We know how to enumerate use cases for an input field, but we rarely ask what the input field is actually for. Look closely and it is not hard to see that this separation of function and business is becoming more and more common in our use cases.
Boundary value analysis, equivalence partitioning, and cause-effect graphing are highly refined techniques that are inherently function- and code-oriented, so we have no theoretical reference for how to write business use cases.
A complex business process runs through the entire piece of software, touches many functional points, and has an even larger number of branch combinations.
Test cases need to be concise and unambiguous, which is also somewhat “at odds” with the business. Functional use cases depend on the program's interfaces, while business descriptions depend on the requirements documents. So we prefer to write functional use cases against the implemented interface, enumerating boundary values and equivalence classes.
When we drive the workflow from experience and business understanding, we find the most bugs, yet we cannot map those bugs back to any use case (clicking a button sometimes triggers errors that live neither in the button nor in the form that contains it). Because we have not accumulated use cases on the business side, we feel that executing the use cases does not find many bugs.
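One way to capture that workflow knowledge is to write business-level use cases that string several functions into a single scenario, instead of checking each field in isolation. A minimal sketch follows; login, create_order, and pay are hypothetical stand-ins for the application under test:

```python
# A business-flow use case sketch. The helpers below are hypothetical
# stand-ins for the real application API; the point is that one test
# follows a business scenario across several functions, so its assertions
# check the data links between steps rather than a single form.
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    total: float

@dataclass
class Receipt:
    order_id: int
    total: float

def login(user: str, password: str) -> dict:                    # stub
    return {"user": user}

def create_order(session: dict, item: str, qty: int) -> Order:  # stub
    return Order(id=1, total=9.5 * qty)

def pay(session: dict, order: Order, method: str) -> Receipt:   # stub
    return Receipt(order_id=order.id, total=order.total)

def test_purchase_flow():
    session = login("alice", "secret")                  # step 1: authenticate
    order = create_order(session, item="book", qty=2)   # step 2: place an order
    receipt = pay(session, order, method="card")        # step 3: pay
    # Business assertions: the data must stay consistent across steps,
    # which isolated field-level cases never check.
    assert receipt.order_id == order.id
    assert receipt.total == order.total
```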
The way we structure use cases also contributes to this separation: we create folders by interface module and file the use cases inside them, which makes it difficult to connect use cases to one another structurally.
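One possible mitigation, sketched below with pytest (the “flow” marker name is my own invention, not a standard one): keep the folders organized by interface module, but tag each functional use case with the business flow it serves, so the business connections are recorded even though the structure stays module-based.

```python
# conftest.py — register a custom "flow" marker so functional tests can
# declare which business flow they participate in.
import pytest

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "flow(name): business flow this use case participates in"
    )

# tests/orders/test_order_form.py — lives in the "orders" interface folder,
# but the marker preserves its link to the purchase flow.
@pytest.mark.flow("purchase")
def test_order_quantity_boundaries():
    ...
```

Running pytest -m flow then selects every use case that belongs to some business flow, which at least keeps the function/business link queryable instead of lost in the folder tree.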
3. Testing fails to keep up with change
Imagine what testing must do as more and more developers chant “embrace change” and “agile development”. Can testing respond to the growing number of region-specific features and software versions? Change is the biggest challenge we face, and I think testing's failure to keep up with change is the main cause of the problems and contradictions we run into.
Testers feel changes in requirements and code most keenly; testing always runs behind requirements and development, and all the risk lands on it. Shrinking time and resources force us to give up the “unnecessary” work: test as fast as possible and find bugs as fast as possible, rather than look at the software's overall quality and strategy.
The immediate consequences of merely coping are that program quality cannot be measured accurately, progress cannot be controlled, and risk cannot be estimated. Use cases drift out of sync with the program, and new use cases are confused or missing. In the long run we give up modifying and adding use cases, and eventually abandon all the work we have accumulated. Use cases degenerate into documents that merely summarize changes to the program. With no test data retained, test steps and priorities go unrecorded; new features gradually “divorce” themselves from the original program, and conflicts may arise that we cannot detect quickly.