Tracking Automated Testing Coverage

So you’ve made the leap to automated testing. You saw the writing on the wall and realized QA would soon be physically incapable of testing 100% of the product manually, so you initiated a test-writing spree, and now you want to know how those automated tests have impacted your product quality.

This is where things tend to get tricky.

Measuring the effectiveness of your automated tests is difficult if you're not sure what to measure it against. Do you go by the number of issues found? If manual testing surfaces more issues than your automations do, it becomes hard to justify the time spent writing automated tests, even when those tests are quietly preventing regressions. Failure rate, then, isn't the best measure of automation effectiveness. But without some way of gauging effectiveness, you run the risk of ending up with a vast, unstructured, unscalable test suite that's time-consuming to maintain and riddled with duplicate tests.

Instead of basing test value on failure rate, consider basing it on product coverage. Test coverage models break your product up into area categories and define the rules your automated tests must follow. With models, you can clearly define goals, track progress, and significantly reduce the chances of duplicating your test cases. They also help you avoid many of the pitfalls that will likely arise as your automated test library grows.

Follow these steps to create a testing coverage model for your QA team.

1. Partition the Product. Take a step back and look at your application from a bird’s-eye view. Split the application into unique high-level categories; these will serve as your models. Your models should be relatively permanent, meaning adding new features to a particular area would not change the overall categorization. For example, a website consists of, but is not limited to, its overall security and user interface. We have identified two models: Security and User Interface. A test case validating that a user’s input is correctly sanitized would fall into the Security model. Verifying that a link navigates to a proper location when clicked would go under the User Interface model. Simple, right?
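
To make that concrete, here's a minimal sketch of how models might map onto code, assuming a Python suite run with pytest; the marker names and the sanitize/resolve_link helpers are illustrative stand-ins for your real application code, not a prescribed API:

```python
import html
import pytest

# Hypothetical stand-ins for real application code.
def sanitize(text: str) -> str:
    return html.escape(text)

def resolve_link(name: str) -> str:
    return {"home": "/landing"}.get(name, "/404")

# Each test is tagged with the model it belongs to.
@pytest.mark.security
def test_user_input_is_sanitized():
    # Security model: raw user input must come back escaped.
    assert sanitize("<script>") == "&lt;script&gt;"

@pytest.mark.ui
def test_home_link_navigates_correctly():
    # User Interface model: the link must resolve to the proper location.
    assert resolve_link("home") == "/landing"
```

(With pytest, you'd also register the security and ui markers in pytest.ini so the run stays warning-free, and you could then run a single model with `pytest -m security`.)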

2. Define Test Requirements. For each model, set rules for how tests should be implemented. Ensure that all of your tests conform to a common structure so that they're easier for your team to read. Also be sure to define what makes a valid test in a specific model (e.g., each test must contain an assertion statement).
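
Rules like these can even be enforced mechanically. Below is a rough sketch, assuming Python test files and the assertion rule mentioned above, that scans a module's source for test functions containing no assert; a real checker would also need to accept idioms like pytest.raises:

```python
import ast

def tests_missing_assertions(source: str) -> list[str]:
    """Return names of test functions that contain no assert statement."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            if not any(isinstance(n, ast.Assert) for n in ast.walk(node)):
                offenders.append(node.name)
    return offenders

# A test with no assertion violates the rule and gets flagged.
print(tests_missing_assertions("def test_noop():\n    pass\n"))  # ['test_noop']
```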

3. Create a Test Dictionary. After you've defined all the models in your application, create a dictionary of all possible test cases within each model. Be as granular as possible. This dictionary gives you the total number of possible test cases to measure your coverage against and prevents your engineers from writing duplicate tests.
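
The dictionary itself can be as simple as a mapping from model to enumerated case IDs. A minimal sketch, where the model names echo step 1 and the case IDs are placeholders for your own granular enumeration:

```python
# An illustrative test dictionary; swap in your own models and case IDs.
TEST_DICTIONARY = {
    "Security": [
        "input-sanitization",
        "session-timeout",
        "password-complexity",
    ],
    "User Interface": [
        "home-link-navigation",
        "form-validation-messages",
    ],
}

# The dictionary's size is the denominator for coverage in step 7.
total_cases = sum(len(cases) for cases in TEST_DICTIONARY.values())
print(total_cases)  # 5
```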

4. Verify Test Scope. A single automated test should validate a single test case, not several. This ensures that a test's result accurately depicts the state of the case it's testing and saves you from spending time figuring out why a test failed. If a test fails, it should be because of a legitimate issue with its underlying test case.
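
A short illustration of the difference, with hypothetical login/logout helpers standing in for real application calls:

```python
# Hypothetical application calls standing in for real ones.
def login(username: str, password: str) -> bool:
    return password == "secret"

def logout(session: object) -> bool:
    return session is not None

# Too broad: a failure here could mean a login bug OR a logout bug.
def test_login_and_logout():
    assert login("qa", "secret")
    assert logout(object())

# Correctly scoped: each test validates exactly one test case, so a
# failure points directly at one underlying issue.
def test_login_with_valid_credentials():
    assert login("qa", "secret")

def test_logout_ends_active_session():
    assert logout(object())
```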

5. Document. Properly document your test coverage models. Descriptions of your models and the rules associated with each set of tests should be kept in a central document location for easy access. This will ensure that everyone stays on the same page and help new engineers quickly get up to speed with your practices.

6. Specialize. Assign each QA engineer individual models to write tests against. The work can still be spread out, but creating areas of expertise and defining ownership makes the workload easier to manage and leaves no question about who covers what when new features are developed.

7. Set Goals. With a proper structure in place and a clear understanding of what needs to be covered, set realistic goals based on your team size and strive to meet them. To determine your coverage of a model, take the total number of test cases defined in that model's dictionary and compare it to the number of tests that have actually been implemented. It's that simple! Tackle each model strategically, prioritizing the areas that contribute most to coverage and to confidence in the overall quality of your application.
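
The arithmetic is easy to automate. A sketch, assuming the dictionary shape from step 3 and that implemented case IDs are tracked per model (how you collect them is up to your tooling):

```python
def model_coverage(dictionary, implemented):
    """Percent coverage per model: implemented cases over defined cases."""
    report = {}
    for model, cases in dictionary.items():
        done = sum(1 for case in cases if case in implemented.get(model, set()))
        report[model] = 100.0 * done / len(cases)
    return report

dictionary = {"Security": ["a", "b", "c", "d"], "User Interface": ["e", "f"]}
implemented = {"Security": {"a", "b", "c"}, "User Interface": {"e"}}
print(model_coverage(dictionary, implemented))
# {'Security': 75.0, 'User Interface': 50.0}
```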

With these processes in place, you'll be able to compare percent coverage of both individual models and the entire application between sprints, releases, etc. Combining percent coverage with failure-rate metrics will give you and your team a more accurate picture of your automations' success. If you reached 70% coverage with a 7% failure rate in one sprint, strive for 73% coverage and a 6% failure rate in the next. (Keep in mind that your percent coverage may go down as new features add test cases to your dictionaries.)
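
As a toy illustration of that sprint-over-sprint comparison (the snapshot format is an assumption; the numbers mirror the example above):

```python
# Illustrative sprint snapshots pairing the two metrics.
sprints = [
    {"name": "Sprint 12", "coverage": 70.0, "failure_rate": 7.0},
    {"name": "Sprint 13", "coverage": 73.0, "failure_rate": 6.0},
]

for prev, curr in zip(sprints, sprints[1:]):
    print(f"{curr['name']}: coverage {curr['coverage'] - prev['coverage']:+.1f} pts, "
          f"failure rate {curr['failure_rate'] - prev['failure_rate']:+.1f} pts")
# Sprint 13: coverage +3.0 pts, failure rate -1.0 pts
```

We hope these testing tips help streamline your QA automations and boost your overall productivity.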

Authored by
Chris Etuk
Quality Assurance Manager