I recently had the opportunity to sit on a panel that discussed the possibilities and limitations of test automation, especially in the context of agile projects. The conversation focused primarily on automated test tools used for functional testing rather than the tools used for unit testing.

There were five panelists and more than 80 audience members. The discussion covered a wide range of topics and views within the realm of automated testing. For the purposes of this blog post, I’d like to mention a few of the concepts that I found particularly interesting. Please note that a variety of views were espoused on each of these topics, so the conclusions I write about here reflect only my own views based on the discussion.

Pareto Principle

In an earlier post, I mentioned that the Pareto principle, also known as the 80/20 rule, is a good way to determine the point of diminishing returns when identifying the features to include in a release. I have also found that it is a good rule of thumb for determining how much testing is enough.

Based on the Pareto principle and experience, I have found that you can typically test approximately 80% of the functionality that people actually use by focusing on 20% of the possible test combinations. I think that most people could intuitively grasp this concept by thinking about what percentage of available features they regularly use in Microsoft Word. Although there are a lot of excellent features available for power users of Microsoft Word, the vast majority of users would never notice if there were bugs in 80% or more of the features.

In many cases, I believe that testing resources are used most efficiently when they’re focused on the 20% of features that are most used and most important. Of course, this doesn’t mean that the remaining features should go completely untested, but it does provide a good rule of thumb for how much of your testing budget and effort should be spent on each feature.

To me, one of the most interesting comments came from a member of the audience who said that when his team did a code-coverage analysis of their manual testing efforts, they found that their manual regression tests were exercising only 25% of the code. To some people, this implied that much more rigorous testing was needed to reach 100% code coverage. However, assuming the software is currently working well in production, it is actually a great illustration of the power of the 80/20 rule.
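For readers curious how a team might gather that kind of number, here is a minimal sketch using Python’s coverage.py: the application is started under coverage, manual testers exercise it by hand, and a report is printed when the session ends. The `myapp` module and its `run()` entry point are placeholders of my own; the audience member didn’t say what tooling his team actually used.

```python
# Minimal sketch: measuring how much code a manual testing session exercises
# with coverage.py (https://coverage.readthedocs.io/). "myapp" and run() are
# hypothetical stand-ins for the application under test.
import coverage

cov = coverage.Coverage(source=["myapp"])
cov.start()

import myapp          # import after start() so module-level code is measured
myapp.run()           # testers exercise the running app by hand; returns when the app exits

cov.stop()
cov.save()
percent = cov.report()    # prints a per-file table and returns the total percentage
print(f"Manual session exercised {percent:.0f}% of the measured code")
```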

Automated Testing vs Manual Testing

One of the primary topics of the discussion was how much automated testing is too much. Based on my experience and research, I’ve found that automated functional testing is valuable as long as everyone is aware of the limits.

Early on in the discussion, a panelist asked the audience whether any of them had experienced too much functional test automation. Roughly a third of the audience raised their hands, including me, which surprised some others in the room. How can there be too much automated functional testing?

Complexity

One thing that people tend to forget is that automated tests = additional code. Oftentimes, people who sell commercial tools like QTP or TestComplete will demonstrate record-and-playback features that imply functional test automation is as simple as point and click. In reality, any useful tests that come out of these tools require additional programming. These automated tests are programs in their own right and need to be debugged as well. When a team runs buggy automated tests against buggy code, identifying the root cause of any error that turns up becomes dramatically harder. Odds are that the developers will default to blaming any failures on bugs in the automated tests.
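To make the “tests are code” point concrete, here is a minimal sketch of what a hand-written functional test might look like with Selenium WebDriver and pytest. The URL, element IDs, and expected heading are all hypothetical, and a real suite would also need explicit waits, shared fixtures, and error handling on top of this, which is exactly the extra code that record-and-playback demos tend to gloss over.

```python
# Illustrative only: a small browser-based functional test written with
# Selenium WebDriver and pytest. The URL and element IDs are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()   # requires a local Chrome/chromedriver setup
    yield driver
    driver.quit()


def test_login_shows_dashboard(browser):
    browser.get("https://example.test/login")                     # hypothetical URL
    browser.find_element(By.ID, "username").send_keys("demo")
    browser.find_element(By.ID, "password").send_keys("secret")
    browser.find_element(By.ID, "submit").click()
    heading = browser.find_element(By.TAG_NAME, "h1").text
    assert "Dashboard" in heading   # the assertion itself can be wrong, i.e. a bug in the test
```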

In fact, one audience member described a phenomenon that came to be known as greenwashing at his company: the practice of simply changing the automated tests to show up as passed (green) whenever errors appeared. Actual bugs were ignored because the team’s first inclination was to blame any failed test on a bug in the automated test itself.

Maintenance

Because automated tests = additional code, it’s clear that any automated tests that are created also need to be maintained. Each time the application code that a test exercises changes, the automated test is likely to need changes as well. And whenever a test is changed, there is another opportunity to introduce bugs into the test itself and another opportunity for greenwashing.

Several people pointed out that maintenance of manual tests can also be difficult and time-consuming. However, I’ve found that good manual testers can adjust to outdated tests and update them as needed. On the other hand, automated tests do exactly what they’re coded to do and cannot make obvious adjustments on the fly.
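One common way to keep that maintenance burden contained, offered here as a sketch rather than a prescription, is the page-object pattern: the locators and interactions for a screen live in one class, so a UI change means editing that class once instead of every test that touches the page. The class name, URL, and element IDs below are illustrative.

```python
# Illustrative page-object sketch (Selenium, Python). Locators live in one
# place, so a UI change is absorbed here rather than in every test.
from selenium.webdriver.common.by import By


class LoginPage:
    URL = "https://example.test/login"     # hypothetical URL
    USERNAME = (By.ID, "username")         # if the ID changes, update it once here
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

A test would then call `LoginPage(browser).open().log_in("demo", "secret")` and make its assertions, keeping the fragile locator details out of the test body.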

Resources

Many managers seem to be drawn to test automation on the premise that it will allow them to eliminate human test resources and replace manual testers. However, in my experience, adding functional test automation typically requires an increase in Quality Assurance staff. People still need to design the tests to be effective, execute the tests, record bugs, research the root cause of the bugs, prioritize the bugs, verify bug fixes, write the automated tests, and maintain the automated tests.

In fact, because writing and maintaining automated functional tests is a special skill that requires a combination of strong Quality Assurance and programming skills, it can be difficult to find qualified people to do the work.

Acceptance Testing

Some people at the meeting brought up the possibility of automating User Acceptance Testing. However, most people who have worked in software development have experienced the pain of delivering software that the users hate despite the fact that it meets every stated requirement. It isn’t realistic to anticipate every possible usability issue, and automated tests can only validate that software meets the stated requirements.

One person at the meeting mentioned that Microsoft moved from manual testing to automated testing when building the Vista operating system. He had read that some of the problems with Vista were blamed on the reliance on automated testing and that Microsoft has since added more manual testing back into the mix.

Quality Assurance is a Skill

One thing that most people seemed to agree on was that Quality Assurance is a skill that requires training and experience to do well. I was glad to hear such wide acceptance of that concept since I have met many managers who seem to think that Quality Assurance can be done by any entry-level employee. I was also glad because one of the basic concepts of Embedded Quality is that “QA is performed by qualified experts”.

One audience member who did not have prior automated testing experience shared a story from his company: he was asked to evaluate and implement automated functional testing in a short amount of time, but he had trouble meeting the deadline and test automation was never implemented. He asked what resources and timeline would normally be needed to implement test automation successfully. There was wide agreement that a person with test automation training and experience would be necessary to have any chance of introducing test automation quickly. Most of the panel and audience agreed that test automation is a specialty within the realm of Quality Assurance skills. I’d add that performance testing and security testing are also Quality Assurance specialties that require special training and experience to be done well.

Your Experiences

I’ll definitely address some of these topics in more depth in the future, but I wanted to make sure to capture some of the conversation points from the panel discussion right away. I found it particularly interesting to hear about everyone else’s experiences. I’d also be interested in hearing about your experiences. Have you had success with automated functional testing? What tools did you use? What challenges did you face?