
The Automated Regression Suite. Part 2 of 3. When to run the tests.


Once you have your automated regression suite in place, you can set up a scheduler to run the tests periodically, without any manual intervention. Most often you will use Jenkins jobs (or a similar CI tool) to trigger them and have them run on an environment of your choice. Just because they are called “regression tests” does not mean they are only meant to be run once before a release. They are in place to help validate your system, so you can run them as often as you want.
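As a minimal sketch, a declarative Jenkins pipeline can drive such a schedule through its cron trigger. The repository URL, branch, environment name and Maven goals below are hypothetical placeholders, so adjust them to your own setup:

// Jenkinsfile - minimal declarative pipeline sketch for a scheduled run.
// Repository URL, branch and Maven goals are hypothetical placeholders.
pipeline {
    agent any

    triggers {
        // Run once a week, Mondays around 02:00; H spreads the exact
        // minute so scheduled jobs don't all start at the same moment.
        // Adjust the schedule to your own release cadence.
        cron('H 2 * * 1')
    }

    stages {
        stage('Checkout') {
            steps {
                git url: 'https://example.com/your-org/regression-suite.git',
                    branch: 'main'
            }
        }
        stage('Run regression suite') {
            steps {
                // Point the suite at the environment of your choice.
                sh 'mvn clean test -Denv=staging'
            }
        }
    }

    post {
        always {
            // Publish JUnit-style results so failures are visible per run.
            junit 'target/surefire-reports/*.xml'
        }
    }
}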

But how often do you want to run them? That depends on several factors: how often you release, how often you commit, how many other services your software depends on, how often those services are released, and so on.

Your release cadence determines how much new or changed code reaches production at once. If you release once a year, that amount will be huge. If you release every two weeks, there is far less of it, and if a failure occurs in the software, it is much easier to pinpoint the root of the problem than with a one-year release cycle.

If you release rarely, with long gaps between releases, you should still run your tests at least once a week. In my experience this frequency makes the runs worthwhile: a week’s worth of commits is enough changed code to be worth testing, but not so much that tracking down a faulty commit becomes painstaking. Think of the needle in the haystack: there are plenty of new commits that could have introduced an error, so the runs are justified, yet the haystack you have to search for the faulty commit stays a manageable size.

If you are releasing, let’s say, once every sprint, you still need to run the regression suite prior to the release, but the timeframe does not necessarily justify running the entire suite constantly. You can run the full regression at the end of the week, when the suite has enough time to complete, and run a subset of the tests throughout the week, depending on which features are being changed. If all the work focuses on updating just one feature out of the entire system, you can run only the tests that deal with that feature, daily or at least every few days, depending on the frequency of commits. You can even analyze before the release whether you really need to run the whole suite when only one feature was touched. However, from experience I have noticed that even when the tester and the developer evaluate which parts of the code will be affected by the current work and some seem immune to the changes, in reality they might not be. So better run the regression entirely one week before the release, and again right before it, just to make sure. The sooner you identify an issue, the better; way better than finding it in production.
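One way to support those targeted runs is a parameterized Jenkins job that accepts a feature tag. The sketch below assumes the tests carry JUnit 5 @Tag annotations (for example @Tag("checkout")) and run through Maven Surefire; the parameter name and tag values are hypothetical:

// Jenkinsfile - parameterized sketch for running a per-feature subset.
// Assumes tests are marked with JUnit 5 @Tag annotations.
pipeline {
    agent any

    parameters {
        // Empty default means "run everything"; a tag restricts the run
        // to the tests covering the feature currently being changed.
        string(name: 'FEATURE_TAG', defaultValue: '',
               description: 'JUnit 5 tag of the feature under change')
    }

    stages {
        stage('Run tests') {
            steps {
                script {
                    if (params.FEATURE_TAG?.trim()) {
                        // Subset run: only tests tagged with the feature.
                        sh "mvn test -Dgroups=${params.FEATURE_TAG}"
                    } else {
                        // Full regression run.
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}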

Regression tests should also be run when a service your software depends on gets released, even if your own code doesn’t change. Changes made by such a “third party” are not always visible to the consumers of that service, so you need to reach an agreement with them: either you show them how to run the regression tests and agree that it is their responsibility to validate that they haven’t broken anything in your software, or they get in touch with you ahead of each upcoming release and let you run whatever tests you see fit. I find that the latter approach works best, since you are fully aware of what tests you have, are more familiar with them, and are better able to figure out what happened if they fail. Before you start the regression run, check that their changes are actually deployed to the environment you are testing against, so that you don’t accidentally run the tests against a version of their service that has already been proven to work.
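That pre-flight check can be automated as an extra pipeline stage. The sketch below assumes the dependent service exposes a plain-text version endpoint; the URL, parameter name and response format are all hypothetical:

// Jenkinsfile - verify the dependency's deployed version before the run.
// Endpoint URL and parameter names are hypothetical placeholders.
pipeline {
    agent any

    parameters {
        string(name: 'EXPECTED_VERSION', defaultValue: '',
               description: 'Version of the dependent service to test against')
    }

    stages {
        stage('Verify dependency version') {
            steps {
                script {
                    // Ask the test environment which version is deployed.
                    def deployed = sh(
                        script: 'curl -sf https://staging.example.com/service/version',
                        returnStdout: true
                    ).trim()
                    if (deployed != params.EXPECTED_VERSION) {
                        error "Deployed version '${deployed}' does not match " +
                              "expected '${params.EXPECTED_VERSION}'; aborting run."
                    }
                }
            }
        }
        stage('Run regression suite') {
            steps {
                sh 'mvn test'
            }
        }
    }
}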

Of course, you will need to let your product owners know to allocate some time for investigating any failures the tests throw and for reporting any newly discovered bugs, should they occur.
