This post is a continuation of my series on adopting healthy practices that enable an organization to make the agile transformation. You can read the first eight parts of this series here:
Part I: Introduction
Part II: Vision and Risk
Part III: Backlog Management
Part IV: Key Players
Part V: Sprint Execution
Part VI: Key traits of Customer Champions
Part VII: Key traits of Development Leads
Part VIII: Key traits of Schedule Facilitators
In Part IV (Key Players) we discussed that one of the roles someone at your organization must play on every project is champion of software quality for the customer: the Quality Assurance (QA) Lead. People staffed in this role often have job titles such as Lead Test Engineer or QA Manager, or will be the same person running all of the tests at a startup or small company. They are tasked with ensuring that the software features developed in each Sprint are free of bugs to the point where they can be delivered to their respective customer.
Four key practices help a QA lead shine in agile organizations.
Extract Testable Scenarios
In most agile projects, just enough documentation is created to implement features. In this situation, a QA lead cannot use the excuse “the docs didn’t say the software was supposed to do that!”, since it should be apparent that the software will do quite a bit that isn’t in the documentation. A primary challenge is to discover, through conversations, reading, and freestyle testing, exactly what the software does. The result should be a list of scenarios categorized by feature, with a record of which have been tested and whether each passed or failed. This extraction process includes answering questions like:
- What should the default values of controls (or parameters, in a service) be when starting the scenario?
- How does the user get to the scenario (or what methods must be called first, in a service)?
- Which fields are required, and what validation rules (including field-specific ones) apply to each?
- What should the result(s) be when the scenario is completed?
- What other factors could affect the outcome of this scenario (another user has locked a resource; certain fields or settings are time-, user-, or environment-specific; etc.)?
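One way to keep track of extracted scenarios is a small structured registry. The sketch below is illustrative only: the features, scenario names, and preconditions are hypothetical examples, not part of any real project.

```python
from dataclasses import dataclass
from enum import Enum

class TestStatus(Enum):
    NOT_RUN = "not run"
    PASS = "pass"
    FAIL = "fail"

@dataclass
class Scenario:
    feature: str            # feature the scenario belongs to
    name: str               # short description of the scenario
    preconditions: list     # default values, navigation path, locks held, etc.
    expected_result: str    # what should happen when the scenario completes
    status: TestStatus = TestStatus.NOT_RUN

# Hypothetical scenarios extracted from conversations and freestyle testing
scenarios = [
    Scenario("Login", "empty password rejected",
             ["user on login page", "username filled, password blank"],
             "validation error shown, no session created"),
    Scenario("Checkout", "cart item locked by another user",
             ["two sessions open", "item reserved in session A"],
             "session B sees an 'item unavailable' message"),
]

# Which features still have unexercised scenarios?
untested = [s for s in scenarios if s.status is TestStatus.NOT_RUN]
by_feature = {}
for s in untested:
    by_feature.setdefault(s.feature, []).append(s.name)
```

Keeping scenarios as data (rather than only in testers' heads) makes the pass/fail status of each one reportable at any time.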
Create Reproducible Tests
Once scenarios begin to be extracted, the steps needed to test them should be recorded in a document, tool, or script so that each test can be reproduced exactly every time. The excuse that “it’s too time-intensive to write down the steps” is a common one, and it leads to inconsistent results from one test cycle to the next. The only way to have strong conviction about what is passing is to know you are testing the same way each time. If you don’t take the time to write down the steps needed to test a scenario, you pay that time back in full through extra conversations with developers to clarify steps, bugs found in edge cases not covered in a prior run (with no guarantee you’ll find them in the future, since they aren’t written down!), and completely fictional reports of stability.
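One way to guarantee a scenario runs the same way every cycle is to record its steps as data and replay them through a driver. This is a minimal sketch: `FakeDriver` and its methods are hypothetical stand-ins for whatever interface the team actually drives (a real team might use Selenium, Playwright, or an API client).

```python
# Steps recorded as data, so every run is identical and reviewable.
login_empty_password = {
    "scenario": "empty password rejected",
    "steps": [
        ("open", "/login"),
        ("type", "username", "alice"),
        ("type", "password", ""),
        ("click", "submit"),
    ],
    "expect": ("error_shown", "Password is required"),
}

def execute(recorded, driver):
    """Replay the recorded steps in order, then evaluate the expectation."""
    for action, *args in recorded["steps"]:
        getattr(driver, action)(*args)
    kind, detail = recorded["expect"]
    return getattr(driver, kind)(detail)

class FakeDriver:
    """Hypothetical stand-in for a real UI or API driver."""
    def __init__(self):
        self.fields = {}
        self.error = None
    def open(self, path):
        self.page = path
    def type(self, field, value):
        self.fields[field] = value
    def click(self, button):
        # Simulate server-side validation of the password field
        if not self.fields.get("password"):
            self.error = "Password is required"
    def error_shown(self, message):
        return self.error == message

passed = execute(login_empty_password, FakeDriver())
```

Because the steps live in version control rather than in a tester's memory, two different people (or the same person six months apart) exercise the scenario identically.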
Coverage at all Costs
Some tools are great at unit testing. Some do a great job of testing use cases. Others set up sophisticated scenarios that are difficult for developers to test. Regardless of the tool or approach used, the goal should be 100% coverage of user- (or system-) exercised features. If a tool only allows automation of 70% of scenarios, then create (and document!) manual test cases for the remaining 30%. The last words of a failed QA lead are “the tool won’t let us test that, so we’re skipping it since it’s not very complicated”.
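The accounting behind the 70%/30% split can be made explicit: subtract the automated scenarios from the full scenario list, and whatever remains is exactly the set that needs documented manual test cases. The scenario names below are hypothetical.

```python
# All scenarios extracted for the product (hypothetical names)
all_scenarios = {
    "login", "logout", "checkout", "refund", "export",
    "search", "password-reset", "bulk-import", "concurrent-edit", "audit-log",
}

# What the automation tool can exercise
automated = {"login", "logout", "checkout", "refund",
             "export", "search", "password-reset"}

# The remainder must become written, reproducible manual test cases --
# not silently skipped.
manual_needed = all_scenarios - automated
automated_fraction = len(automated) / len(all_scenarios)

print(f"Automated: {automated_fraction:.0%}; "
      f"write manual cases for: {sorted(manual_needed)}")
```

Total scenario coverage is then automated plus documented-manual, and anything outside both sets is visible as a gap rather than an unspoken omission.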
Estimate Quality with Experience
With the arrival of tools that watch the number of lines of code executed during tests, there has been an unfortunate cultural shift in QA toward watching the overall code coverage statistic. Folks declare “90% coverage and all passing tests means ship it!” Yet this number is quite misleading. Many of these tools do a good job, but due to the dynamic nature of code (and especially the sophisticated nature of most modern software architectures and their pluggable configuration), it is not possible to determine true code coverage with a single automated tool. A QA lead should spend time with developers to understand the variance and flexibility in their implementation well enough to know where additional, case-driven testing is required. QA leads should report their own estimate of code coverage, based on the results of automated tests, known scenarios that have not yet been tested, and personal conviction. As the final arbiter of quality, a QA lead should never report higher or lower stability for a project or feature due to political pressure. Let a manager or developer fight the truth, but never let lies be spoken from the mouth of QA.
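One possible way to make such an estimate concrete is to blend the tool-reported line coverage with scenario coverage and then apply the lead's own judgment as an adjustment. The weights and numbers below are purely illustrative assumptions, not a standard formula.

```python
def qa_coverage_estimate(line_coverage, scenarios_total, scenarios_tested,
                         judgment_adjustment=0.0):
    """Blend tool-reported line coverage with scenario coverage,
    then apply the QA lead's judgment (positive or negative).
    The 50/50 weighting is an illustrative assumption."""
    scenario_coverage = scenarios_tested / scenarios_total
    blended = 0.5 * line_coverage + 0.5 * scenario_coverage
    return max(0.0, min(1.0, blended + judgment_adjustment))

# Tool reports 90% line coverage, but only 28 of 40 known scenarios
# have actually been tested, and the lead knocks off 5% for pluggable
# configuration paths the tool cannot see.
estimate = qa_coverage_estimate(0.90, 40, 28, judgment_adjustment=-0.05)
```

The point is not the particular arithmetic but that the reported number reflects automated results, the untested-scenario backlog, and honest judgment together, rather than the tool's statistic alone.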