
Write automated tests with repeatable results


Writing automated tests is no longer the biggest challenge in the testing community. Writing reliable automated tests is. Too often, tests that were once written end up in the garbage bin or fade into oblivion. They are unreliable, and people simply ignore them when they run, because they have a history of failing for various random, invalid reasons.

One reason such useless tests exist is that they are written in haste, or carelessly. People often have tasks to accomplish within a pre-allocated timeframe, usually to finish the testing during the sprint, or they have to write a specified number of tests in a day. In these cases, once a test is written and passes at least once, people are happy to cross it off their list and move on to the next one. Even if, out of 3 test runs, it only passed once. Managers are probably happier hearing that 100 new tests were written than that 5 sturdy and reliable ones were. They like math and reporting.

But we are testers, and we should not be concerned with these numbers. Our main focus should be to write tests that are reliable, which means a lot of things. Among others, a good automated test must have repeatable results. This means that if the software under test does not change, each test run must have the same outcome. If you create a test that checks the happy flow, and all the code behind the happy flow is correct, that test should pass every time it runs on the same code version.


How to achieve repeatable results?


Well, to start with, run each test you write more than three times. I usually run mine about 20 times around the time I finish writing them. But I also run them several times throughout the day, to see how they are impacted by the different activity going on in the environment where they run. And I run them several times throughout the week, simply to have them running against different 'states' of the environment. If you are doing continuous delivery, that should be no problem, as you can schedule periodic CI jobs to run them. Even if you don't, you can just pick them up and run them yourself.
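Repeating a run and tallying the outcomes is easy to script. Below is a minimal sketch in Python; the `run_repeatedly` helper and the command you pass it are illustrative, not part of any specific framework — point it at whatever command launches one of your tests.

```python
import subprocess

def run_repeatedly(command, times):
    """Run a test command several times and tally pass/fail outcomes.

    A test with repeatable results should report times passes here
    (assuming the software under test has no bug in that flow).
    """
    results = {"passed": 0, "failed": 0}
    for _ in range(times):
        outcome = subprocess.run(command, capture_output=True)
        if outcome.returncode == 0:
            results["passed"] += 1
        else:
            results["failed"] += 1
    return results
```

Anything other than a 100% pass rate on unchanged code is a signal worth investigating, not a result to shrug off.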

Doing that will quickly reveal any issues in your tests that need addressing. If you are doing front-end testing, for example, several test runs will highlight timing issues and any tweaks you need to make because pages do not load at the same speed each time you interact with them.
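The usual cure for those timing issues is to replace fixed sleeps with a polling wait, the way explicit waits work in UI frameworks. Here is a generic sketch of such a helper; the name `wait_until` and its parameters are my own, not from any particular library.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll a zero-argument condition until it returns a truthy value.

    Returns that value, or raises TimeoutError once the timeout expires.
    Unlike a fixed sleep, this finishes as soon as the page (or any
    resource) is actually ready, and fails loudly when it never is.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

In a browser test you would pass a condition such as "the element is visible"; the same helper works for APIs, queues, or databases that need a moment to settle.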

When you run tests enough times, it will seem like each failure has a different reason. Look at them one by one and see why they happen: is it a bug, an environment issue, or an inevitable delay or similar event? If it's a bug, raise it with the team. If it's an environment issue, raise it with the relevant people. If it's just a timing issue, update the test to handle the timing. Don't make the test pass no matter what; make it pass by addressing its bottlenecks.
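That triage can even be made explicit in code. The sketch below is purely illustrative — the exception types are stand-ins for whatever your framework raises — but it captures the idea that each failure category gets its own action, rather than a blanket re-run.

```python
# Hypothetical triage helper: the exception names here are generic
# Python built-ins standing in for your framework's failure types.

def triage(failure):
    """Map a test failure to the action it needs."""
    if isinstance(failure, AssertionError):
        return "raise a bug with the team"        # the product misbehaved
    if isinstance(failure, ConnectionError):
        return "raise an environment issue"       # infrastructure, not code
    if isinstance(failure, TimeoutError):
        return "fix the test's waiting strategy"  # timing, handle in the test
    return "investigate further"
```

The point is the classification, not the code: a failure is never just "red", it is a bug, an environment problem, or a weakness in the test itself.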

The update might be tricky at times; you may need plenty of debugging and lots of patience to get to a green test. But consider it a challenge, and once the challenge is solved, you will feel good about having created a reliable test that will be consistent across all its future runs.

The best thing about having a reliable test with repeatable results is that it will require less maintenance in the future and no re-running, so it will not add any extra run time when you are running a huge suite of tests at once (like when you are preparing for a release).

Therefore, if you see a test that does not pass on every run when it should, don't hesitate to take a look at it and transform it into a beautifully reliable test. Managers are also happy to hear that you spend extra time writing a good test only once, instead of spending even more time every time the test randomly fails when it shouldn't.

