It's pretty hard to manage what is out of your control, and in this case, to control means to measure. Having dealt a lot with testing and reviews, I've noticed some all-too-common and insidious symptoms that many regression tests suffer from: they are repetitive, redundant, or reuse the same steps over and over, thereby limiting code coverage. This affects both manual and automated strategies. We are misled by the apparent simplicity of regression testing.
Let's go further and focus on the essentials of regression test management.
Identifying the value of regression tests
First, it's necessary to figure out what exactly this value consists of and what deserves our attention. The main ingredients are:
Reliability: how confident you are, in the end, that the system under test works properly.
Cost: how much time it takes to create, run, and maintain the tests.
The ideal formula is evident: a test takes little time and provides high reliability.
Also keep in mind that 'reliability' does not mean 'no work'. Although running automated regression tests doesn't require an engineer's constant presence or attention, analyzing the resulting data and maintaining the tests still takes effort. So, be alert.
Managing regression tests
More is not necessarily better: avoid executing entire regression areas just in case. Assess the potential weak points of the system and use targeted tests to reveal them. For this, it's very useful to have the cases tagged by functional block.
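One common way to tag cases by functional block is with pytest markers. A minimal sketch, assuming a Python codebase; the marker names ("checkout", "auth") and the test bodies are purely illustrative:

```python
# Two regression tests tagged by functional block (pytest markers).
# Marker names "checkout" and "auth" are illustrative examples.
import pytest

@pytest.mark.checkout
def test_discount_applied_to_cart_total():
    cart_total, discount = 100.0, 0.1
    # Stand-in for a real checkout assertion.
    assert round(cart_total * (1 - discount), 2) == 90.0

@pytest.mark.auth
def test_locked_account_rejects_login():
    account = {"locked": True}
    # Stand-in for a real login check against a locked account.
    assert account["locked"] is True
```

With markers registered in `pytest.ini`, a single block can then be run with `pytest -m checkout` instead of the whole suite.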
Control release regression time leaks. There is no need to run the whole regression pack for every new release or configuration change when a handful of regression tests per user story would be enough. By the time you have done change-impact analysis or reviewed new commits, you already know which parts of the regression suite to run.
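The change-impact selection above can be sketched as a simple mapping from changed modules to regression tags. The module names and the mapping itself are hypothetical; in practice this table would come from your own architecture knowledge or dependency analysis:

```python
# Minimal change-impact sketch: map changed modules to regression tags.
# IMPACT_MAP contents are illustrative assumptions, not a real system.
IMPACT_MAP = {
    "billing": {"checkout", "invoicing"},
    "accounts": {"auth"},
    "search": {"search"},
}

def tags_for_change(changed_modules):
    """Return the set of regression tags to run for a given change set."""
    tags = set()
    for module in changed_modules:
        tags |= IMPACT_MAP.get(module, set())
    return tags

# A commit touching billing and accounts selects three tags, not the whole pack.
print(sorted(tags_for_change(["billing", "accounts"])))
# → ['auth', 'checkout', 'invoicing']
```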
Control time since first run: as the system under test grows more complicated, it may require more sophisticated tests to maintain the necessary level of test value – look into the integration and system levels.
Changes in functionality: when a new feature or piece of functionality is added, it should be reflected in the related tests. That doesn't necessarily mean creating new tests, but reviewing the existing ones for reliability would be wise.
Test level: your choice of regression tests will depend on the stage of the software development lifecycle itself. That is, some new tests may be included in the pack, while some older, less valuable ones should be retired.
Test retirement: each time you archive a piece of the system or an application, mark the related tests accordingly. It will contribute a lot to test management: distinguishing between "active" and "retired" regression tests will save you time and effort.
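Marking retired tests explicitly, rather than silently deleting them, keeps the distinction visible in the suite itself. A pytest sketch; the test names and the reason string are illustrative:

```python
# Keep retired tests visibly marked instead of deleting them (pytest).
# The reason string and test names are illustrative examples.
import pytest

@pytest.mark.skip(reason="retired: the covered export module was archived")
def test_legacy_csv_export():
    ...  # body kept for history; never executed

def test_current_json_export():
    # Still active: stand-in for a real export check.
    assert {"format": "json"}["format"] == "json"
```

A run report will then show the retired test as skipped with its reason, so the suite's "active" size stays honest.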
Develop a flair for tracking down the coverage of your regression tests. Be aware of the value of the tests you create, run, and maintain. Measure and analyze all the time.
These simple steps will let you get the most out of your work:
Decompose: it should be clear which requirement is under test, so you can state a clear business priority.
Measure: if you don't know the coverage of your tests, you have no idea about their actual value.
Monitor the runtime: the time required to run the tests either adds value to your QA or takes it away. Be aware of that.
Update: make sure your tests not only work correctly but also reflect the actual state of the app.
Manage: know the time and cost required to create, run, and maintain your regression tests.