Agile Testing Strategies
In an agile environment we work in short sprints or iterations, with each sprint focused on only a few requirements or user stories, so the documentation tends to be lighter in both volume and content.
We concluded earlier that, due to time constraints, we may not need an extensive test plan for every sprint, but we do need a high-level Agile test strategy to guide the Agile team.
The purpose of the Agile test strategy document is to list best practices and provide some form of structure for teams to follow. Remember, agile does not mean unstructured.
Here, we look at a sample Agile testing strategy and what is included in the documentation.
Mission:
A test strategy usually has a mission statement that may be related to broader business goals and objectives.
A typical mission statement might be:
Continuously deliver working software that meets customer needs by providing fast feedback and preventing defects rather than detecting them.
Guiding principles:
No code is written for a story until its acceptance criteria/tests have first been defined. A story may not be considered complete until all of its acceptance tests pass. It is also worth reminding everyone what quality assurance means:
Quality assurance is a set of activities designed to ensure that products satisfy customer requirements in a systematic, reliable fashion.
In Scrum, QA is everyone’s responsibility, not just the testers’. Quality assurance covers all the activities we undertake to ensure the correct level of quality when developing new products.
Test levels
Unit testing
Why: Ensure the code is built correctly
Who: Developer/Technical Architect
What: All new code, plus refactoring of legacy code, and JavaScript unit testing
When: As soon as new code is written
Where: Local dev environment + CI (as part of the build)
How: Automated; JUnit, TestNG, PHPUnit
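The strategy names JUnit, TestNG, and PHPUnit; as an illustration only, here is an analogous sketch using Python's built-in unittest, with a hypothetical `apply_discount` function standing in for production code:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Note that the tests cover the happy path, a boundary value, and an invalid input — the same shape applies whichever xUnit framework your stack uses.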
API/service testing
Why: Ensure components communicate with each other correctly
Who: Developer/Technical Architect
What: New web services, components, controllers, etc.
When: As soon as a new API is developed and ready
Where: Local dev environment + CI (as part of the build)
How: Automated; SoapUI, REST clients
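At this level the test exercises the contract between components rather than a single function. A minimal sketch (hypothetical product API and field names, with the HTTP transport injected so it can be faked in tests):

```python
import json

def fetch_product(product_id, http_get):
    """Hypothetical service-layer code: call a product API through an
    injected http_get(url) callable returning (status, body), and
    translate the JSON response into the shape our component expects."""
    status, body = http_get(f"/api/products/{product_id}")
    if status == 404:
        return None  # contract: missing product maps to None, not an error
    if status != 200:
        raise RuntimeError(f"unexpected status {status}")
    data = json.loads(body)
    return {"id": data["id"], "name": data["name"], "price": data["price"]}

# Contract checks against a faked transport, no server required:
ok = lambda url: (200, '{"id": 7, "name": "mug", "price": 4.5}')
missing = lambda url: (404, "")
assert fetch_product(7, ok) == {"id": 7, "name": "mug", "price": 4.5}
assert fetch_product(99, missing) is None
```

In a real suite the same assertions would run against a deployed service via SoapUI or a REST client; faking the transport keeps the test runnable inside the build.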
Acceptance testing
Why: Ensure customer expectations are met
Who: Developer/SDET/manual QA
What: Acceptance tests that validate stories, functional validation
When: Once functionality is developed and unit tested
Where: CI/test environment
How: Automated (Cucumber)
System testing/regression testing/UAT
Why: Ensure the whole system works when integrated
Who: SDET/manual QA/operations analyst/Product Owner
What: Scenario testing, user flows and typical user journeys, performance and security testing
When: After acceptance testing is complete
Where: Staging environment
How: Automated (WebDriver); exploratory testing
The product backlog
The most common cause of software development failure is unclear requirements, with different team members interpreting the requirements differently.
Stories should be simple, concise, and unambiguous. As a good guideline, it is best to write user stories according to the INVEST model.
A good user story is:
I ndependent (of all others)
N egotiable (not a specific contract)
V aluable (or vertical)
E stimable (to a good approximation)
S mall (so as to fit within an iteration)
T estable (in principle, even if there isn’t a test for it yet)
User stories should be written in the following format:
As a [role]
I want [feature]
So that [benefit]
It’s important not to forget the “benefit” part, because everyone developing the story should understand what value it adds.
Acceptance criteria
Each user story must include acceptance criteria. This is perhaps the most important factor in encouraging conversation among the different members of the team.
Acceptance criteria should be written at the same time as the user story is created, and should be embedded in the body of the story. All acceptance criteria should be testable.
Each acceptance criterion should have a number of acceptance tests, written in Gherkin format, for example:
Scenario 1: Title
Given [context]
And [some more context]…
When [event]
Then [outcome]
And [another outcome]…
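To make the template concrete, here is a hypothetical filled-in scenario for an e-commerce login story (all names invented for illustration):

```gherkin
Scenario 1: Registered customer logs in successfully
  Given a registered customer with the email "jo@example.com"
  And the customer is on the login page
  When the customer submits the correct email and password
  Then the customer is redirected to their account dashboard
  And a welcome message is displayed
```

A Given/When/Then scenario like this doubles as an acceptance test once it is wired to step definitions in a BDD tool such as Cucumber.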
Story workshops/sprint planning
In each story workshop, everyone on the team learns the details of the story so that developers and QA understand the scope of the work. Everyone should have the same understanding of what the story is about.
Developers should have a good understanding of the technical details involved in providing the story, and QA should know how to test the story and if there are any barriers to testing the story.
Defect prevention
Participation by the PO, BA, developers, and QA in the story workshop is mandatory.
Scenarios (valid, invalid, and edge cases) should be considered and written into feature files; this is where QA can add great value, by thinking abstractly about the story.
It is important to note that this activity prevents defects rather than finding them later when testing the product, so the more effort and time spent on it, the better the end result.
Since most defects are caused by unclear and ambiguous requirements, this activity also helps prevent the wrong behavior from being implemented, because everyone should come away with the same understanding of the story.
Also, the estimate given for a story during the sprint planning meeting should include testing effort, not just coding effort. QAs (manual and automation) must also attend the sprint planning meeting to provide estimates for testing the story.
Development
At the start of development, new production code and/or changes to legacy code should be supported by unit tests written by the developer and peer reviewed by another developer or a capable SDET.
Any submission to the code repository should trigger the execution of unit tests from the CI server. This provides a quick feedback mechanism for the development team.
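The document does not name a CI server, so as one hypothetical illustration, a minimal GitHub Actions workflow that runs the unit test suite on every push might look like this (action names and the test command are assumptions; any CI server follows the same trigger-and-run pattern):

```yaml
# Hypothetical CI workflow: run unit tests on every push to the repository.
name: unit-tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Any test failure fails the build, giving the team fast feedback.
      - run: python -m unittest discover -s tests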
Unit tests ensure that the system works at the technical level and that there are no errors in the logic.
Developer testing
As a developer, act as if there were no QA in your team or organization. Admittedly, QAs have a different mindset, but you should still do your best to test your own work.
You may think you are saving time by moving quickly on to the next story, but when the bugs are found and reported, fixing the problem takes far longer than the few minutes it would have taken to make sure the feature works.
Any new code and/or refactoring of legacy code should have appropriate unit tests that become part of the unit regression pack.
Automated acceptance testing and non-functional testing
Automated acceptance tests, including integration and service tests and UI tests, are designed to demonstrate that the software works at the functional level and meets user requirements and specifications.
Automated acceptance tests are typically written in Gherkin and performed using BDD tools such as Cucumber.
Because these tests typically need to communicate over HTTP, they must be run against a deployed application rather than as part of the build.
Non-functional testing (performance and security) is just as important as functional testing and therefore needs to be performed with every deployment.
Performance testing should examine the performance metrics of each deployment to ensure that performance does not degrade.
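One lightweight way to gate each deployment on performance is a timing budget; the sketch below is an illustrative assumption (the operation, sample count, and 500 ms budget are invented), not a substitute for a real load-testing tool:

```python
import time

def check_response_budget(operation, budget_seconds, samples=5):
    """Minimal sketch of a deployment-time performance gate: time an
    operation several times and pass only if the worst sample stays
    within the budget, so a slow outlier still fails the check."""
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_seconds

# Example: gate a (hypothetical) search operation on a 500 ms budget.
assert check_response_budget(lambda: sorted(range(1000)), 0.5)
```

Tracking the measured worst-case per deployment, rather than just pass/fail, also lets the team spot gradual degradation before the budget is breached.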
Security testing should check for basic security vulnerabilities, such as those catalogued by OWASP.
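One small, easily automated slice of such checks is verifying that every response carries basic security headers. The header names below are standard, but which ones you require is a policy choice — treat this set as an illustrative assumption:

```python
# Security response headers we choose to require (illustrative policy).
REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
}

def missing_security_headers(response_headers):
    """Return, sorted, the required security headers absent from a
    response. Header-name comparison is case-insensitive, as HTTP
    header names are."""
    present = {name.title() for name in response_headers}
    return sorted(h for h in REQUIRED_HEADERS if h.title() not in present)

# A hardened response passes; an empty response reports every gap.
assert missing_security_headers({
    "content-security-policy": "default-src 'self'",
    "x-content-type-options": "nosniff",
    "strict-transport-security": "max-age=63072000",
}) == []
```

A check like this runs in seconds per deployment; deeper OWASP-style testing (injection, authentication flaws, etc.) still needs dedicated tooling.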
Crucially, this should be a fully automated process with minimal maintenance, to get the most out of automated deployments. That means no intermittent test failures, no test-script problems, and no environment instability.
Failures should only ever be caused by genuine code defects, not script problems, so any failing test that is not due to a genuine defect should be fixed immediately or removed from the automation pack, so that results stay consistent.
Regression testing
Regression tests are not expected to find many defects; their purpose is simply to provide feedback that we have not broken major features. Very little regression testing should be manual.
Smoke pack – should not exceed 15 minutes
This pack contains only high-level functionality, to confirm that the application is stable enough for further development or testing.
For example, for an e-commerce site, the tests included in this package might be:
product search, product review, purchasing an item, account creation, and account login
Complete regression pack – should not exceed 1 hour
This package contains the complete regression test suite and everything else that is not included in the Smoke pack.
Here the goal is to get quick feedback from a larger set of tests; if the feedback takes longer than an hour, it is not quick. Reduce the number of tests by using pairwise testing techniques, building test packs based on risk, or running the tests in parallel.
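To show the idea behind pairwise reduction, here is a naive greedy sketch: it keeps only those parameter combinations that cover at least one not-yet-seen value pair. It is deliberately simple and not optimal; a real tool (e.g. PICT or allpairspy) produces smaller sets:

```python
from itertools import combinations, product

def pairwise_cases(parameters):
    """Greedy sketch of pairwise (all-pairs) test reduction: walk the
    full cartesian product and keep a case only if it covers a value
    pair no earlier case has covered, stopping when all pairs are hit."""
    names = sorted(parameters)
    # Every pair of parameter values that must appear in some test case.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    cases = []
    for values in product(*(parameters[n] for n in names)):
        case = dict(zip(names, values))
        pairs = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
        if pairs & uncovered:  # keep the case only if it covers something new
            cases.append(case)
            uncovered -= pairs
        if not uncovered:
            break
    return cases

# Hypothetical regression matrix: 2 browsers x 2 OSes x 2 user types.
matrix = {
    "browser": ["chrome", "firefox"],
    "os": ["linux", "mac"],
    "user": ["guest", "admin"],
}
reduced = pairwise_cases(matrix)
assert len(reduced) < 8  # fewer cases than the full cartesian product
```

Every pair of values still appears in at least one kept case, which is the coverage guarantee pairwise testing trades exhaustiveness for; the savings grow quickly as parameters and values multiply.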
UAT and exploratory testing
There is no reason why UAT and exploratory tests cannot be run in parallel with automated acceptance tests. After all, they are different activities aimed at finding different problems. The goal of UAT is to ensure that the features developed make business sense and help customers.
The PO (product owner) should run the user acceptance or business acceptance tests to confirm that the product that has been built meets business expectations and user needs.
Exploratory testing should focus on user scenarios and catch the errors that automation misses. Exploratory testing should not be hunting for trivial errors, but for subtle problems.
Definition of done
Once all of the above activities are complete and no problems have been found, the story is done!
These are some guidelines for what can be included in an Agile test strategy document. Obviously, this needs to be customized to your organization’s needs, but hopefully this template will help you create your own Agile test strategy document.