
The Automated Regression Suite. Part 1 of 3. When to create the tests for regression



What do I mean by “automated regression testing”? I am not one for debating for hours what this means, so let me give you my interpretation (not definition), so that we are on the same page: whenever you are performing a new release, you need to make sure the features you released some time ago still work properly. For that, you will need to run some kind of tests to ensure those features are still working as expected. You could do that manually, but running the same manual test cases repeatedly, for each release, takes a lot of time and, quite frankly, becomes boring or even frustrating at some point. Hence, the suite of automated tests comes in handy. Having these in place will allow you to verify plenty of scenarios while you do something more enjoyable during the test run.
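To make this concrete, here is a minimal, self-contained sketch of what one check in such a suite could look like, assuming JUnit 5 as the test framework; the DiscountCalculator class is a hypothetical stand-in for a feature that was released in an earlier version.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRegressionTest {

    // Stand-in for a feature that already shipped to production
    static class DiscountCalculator {
        double priceAfterDiscount(double price, double discountPercent) {
            return price - price * discountPercent / 100.0;
        }
    }

    @Test
    void previouslyReleasedDiscountCalculationStillWorks() {
        DiscountCalculator calculator = new DiscountCalculator();

        // The expected value encodes the behaviour the feature had when it shipped;
        // a failure here on a new release signals a regression.
        assertEquals(90.0, calculator.priceAfterDiscount(100.0, 10.0), 0.001);
    }
}

Run once per release, ideally from the build pipeline, a collection of checks like this one does the repetitive verification for you.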

When to create the tests for the regression suite

There is an aspect to consider regarding ‘when’ the automated regression tests can be created: whether the feature you are writing the tests for is currently being developed, or whether it was developed some time back and has already been released to production.

For a feature under development

The first case is probably the happiest one. As you are currently developing the feature, you also need to test it. Therefore, the stakeholders are more likely to accept automation as part of the Definition of Done for the user stories being worked on. So instead of manually testing the feature, create the automated tests right away. Of course, you need to evaluate which scenarios need automation, but that is a totally different discussion. Having automation as part of the DoD implies that a user story can only be closed once the corresponding automated tests have been created and have run successfully.
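As a sketch of what that can look like in practice (assuming JUnit 5; the tag value "STORY-123" is a hypothetical ticket ID and GreetingService is a stand-in for the feature being developed), the tests written for a story can be tagged with that story, which makes it easy to show the DoD was met before closing it:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

@Tag("STORY-123")
class GreetingServiceTest {

    // Stand-in for the feature currently under development
    static class GreetingService {
        String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

    @Test
    void greetsUserByName() {
        assertEquals("Hello, Maria!", new GreetingService().greet("Maria"));
    }
}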

In this case, at the beginning, these tests are seen as just automated tests for a new feature. Once the feature has been successfully released into production, you can consider the tests part of the regression suite you have available for running. Basically, once you release the feature, you will have a stable version of the tests that does not need to change unless the feature itself changes. You can also think about it this way: once the tests have run successfully and require no updates for the current production version of the feature, they can be included in, or considered part of, the regression suite.
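One possible way to express that, again assuming JUnit 5, is to add a dedicated tag once the feature is live, so the build can run exactly the tests marked as regression (for example via Maven Surefire's groups setting or Gradle's includeTags); the Cart class below is a hypothetical example of an already released feature:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

@Tag("regression")
class CheckoutTotalRegressionTest {

    // Stand-in for a feature that is already in production
    static class Cart {
        double total(double... prices) {
            double sum = 0;
            for (double price : prices) {
                sum += price;
            }
            return sum;
        }
    }

    @Test
    void cartTotalStillAddsUp() {
        assertEquals(30.0, new Cart().total(10.0, 20.0), 0.001);
    }
}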

With this approach you are creating your regression suite bit by bit, whenever you are developing something that needs testing.

For a previously released feature

In the frequent situation where a feature was released to production a long time ago but no automated tests were ever created for it, the challenge will be finding the time to get the regression suite in place. It is always difficult, since the company's priority is to deliver new features rather than dwell on old, already proven ones. You can create the regression suite incrementally, since nobody will give you ten sprints back to back to work only on this task.

You will need to do some convincing and plenty of analysis, so the place to start is gathering data about how much time you spend doing the work manually for each release, or for each timeframe when it needs to be done. Then, gather a list of all the features for which you need to create automated tests. You can put this information in a spreadsheet. Ideally, also write down which scenarios you need to automate, so you have a clearer understanding and vision of what needs to be done. Based on this information, you can also estimate how long the automation will take; keep the estimates at a very high level, so you have room to work with. Finally, prioritize the features that need to be tested.
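For illustration only (every feature name and number here is made up), such a spreadsheet could look like this:

Feature            | Scenarios to automate | Manual effort per release | Automation estimate | Priority
Checkout           | 8                     | 4 hours                   | 5 days              | High
User registration  | 5                     | 2 hours                   | 3 days              | Medium
Report export      | 3                     | 1 hour                    | 2 days              | Low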

Then, show the spreadsheet to your stakeholders and discuss: why you need to automate those scenarios; how much time it takes to test them manually for each release (together with how many releases you perform in a month or a year, to get the big picture of the time spent on this task); and finally, the timelines for doing the automation. You should do the work in chunks, so that new features can still be delivered, which is what the stakeholders care about. Start by automating the most important scenarios, so that you get more value out of the automation process early on. Don't pick the feature with the fewest scenarios, or the one that is easiest to automate; instead, pick the most important ones first, from a business point of view.
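A quick, purely hypothetical calculation shows why this conversation is worth having: if the manual regression pass takes two days per release and you release twice a month, that is roughly 2 x 2 x 12 = 48 days of manual effort per year, which is time the automated suite progressively gives back.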

This approach not only helps convince your stakeholders that you need to allocate time to creating the automated regression suite, but also helps them better understand what is needed, what is missing, and what you already have in terms of testing the software. As you progress with the automation, mark the progress in the spreadsheet and keep an eye on it. I like highlighting the completed items in the spreadsheet in green, to make the progress more visible.




