
As a tester, I should not find THESE bugs in the system

I'm not saying this JUST because creating a new Jira entry for a bug takes a lot of time (making sure you typed the correct steps in the correct order, grabbing screenshots, downloading logs, and so on). And no, I'm not referring to not finding any bugs at all. But instead...

I'm talking about finding the most obvious ones, like the happy flow of a new feature not working. I have been in this situation many times, and the story is almost identical each time: I prepare myself for the journey of testing a new feature. I sit comfortably in my chair, sip my tea, open the Jira ticket and start reading the acceptance criteria. Once I know what the 'happy flow' looks like, I start looking at the feature by exercising exactly that: the happy flow. Not the complex negative scenarios that can be deduced from the requirements. No, the happy flow. Twenty seconds later, I'm not sitting comfortably in my chair anymore. I am probably shouting the name of the developer who worked on the feature. Why? Because...well...the happy flow is not working! The developer comes over to see what I am talking about, always saying: 'but this should work'.

At this point, I am already moving the Jira ticket back to 'in progress', and I am not spending any more time testing the feature. Why? Because the fix might change the code so significantly that any testing I do while it is being worked on could become obsolete. Basically, I could be testing code that will not exist anymore once the fix is in place.

But coming back to the initial problem: I should not have found this bug in the first place. The happy flow should have been bug free by the time the ticket was assigned to QA. Why? In order to be assigned for testing, the feature under test was supposed to go through certain phases, and any of them should have caught that the most important part of the feature, the happy flow, was not working. These phases include, but are not limited to:

  1. Analysis of the requirements before implementation. In order for the happy flow to work properly, the developer working on the task needs to understand what the happy flow really is. Based on that understanding of the requirements, the correct feature can be implemented. This is the first checkpoint in getting the happy flow to work properly.

  2. 'Manual' testing of the feature by the developer. Before writing unit tests, the developer would usually test the feature themselves, perhaps as a walk-through of the steps in the acceptance criteria, or just to confirm that all the needed code was written. That is, of course, unless the developer writes the unit tests first, in which case see point 3.

  3. Creating and running unit tests on the local machine. Once the unit tests are in place, the developer can run them locally. They should also be part of the build process, so that if the happy flow is not working properly, the build fails well before the code ends up in a test environment. A build failure on the local machine means no deploy, and an opportunity to fix the problem early.

  4. Code review. Although developers would argue that code review only checks for code correctness and for things like proper usage of language constructs, I believe a reviewer should also be able to spot hints that the feature won't work properly.

  5. Running the unit tests on the machine that deploys the code to a test environment. As part of the deployment, the code needs to be built by a Jenkins-like CI server, which runs the required commands. Since these include the unit tests, the build should fail and report that the functionality is not working properly. At that point, the feature will not even reach the test environment, due to the test failure.

  6. In the totally undesirable situation where unit tests do not exist, the deployment to the test environment will succeed. Once this happens, the developer should take a look at the deployed feature to make sure everything is OK. They don't need to run every single test scenario, but the most basic one, you know...the happy flow, should be looked at before the Jira ticket is moved to the QA Ready column. Even if unit tests are in place, a quick check on the test environment only takes a few minutes, so it is advisable to perform it anyway.

How come the feature still ends up in QA Ready without really being QA ready? Some of the steps above were probably skipped for some reason. What can we testers do about it? I have some ideas:

  1. Make it known to the team when this happens. Create visibility over the situation, so that the entire team is aware of it and tries to avoid it happening in the future.

  2. Present your belief that the above steps should be taken for each newly implemented feature, and get an agreement from the other team members that such a situation should not occur again.

  3. Don't shame the people who got it wrong this time, but try to encourage them to follow a standard development process.

  4. Create a 'happy flow' jar. Developers can drop candies in there whenever their feature has the 'happy flow not working' problem. They never will, but you can only hope. The least they can do is sweeten the situation a little bit.
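To make step 3 from the first list concrete, here is a minimal sketch of a happy-flow unit test. The feature itself (an `apply_discount` function and its codes) is entirely hypothetical, invented for illustration:

```python
import unittest

# Hypothetical feature code: apply a discount code to an order total.
# Both the function and the discount codes are illustrative assumptions.
def apply_discount(total, code):
    """Return the order total after applying a known discount code."""
    discounts = {"SAVE10": 0.10, "SAVE25": 0.25}
    if code not in discounts:
        raise ValueError(f"unknown discount code: {code}")
    return round(total * (1 - discounts[code]), 2)

class HappyFlowTest(unittest.TestCase):
    """Covers only the main scenario from the acceptance criteria --
    exactly the thing that should never reach QA broken."""

    def test_valid_code_reduces_total(self):
        self.assertEqual(apply_discount(100.0, "SAVE10"), 90.0)
```

Running `python -m unittest` both locally and as part of the build gives two of the checkpoints above a chance to catch a broken happy flow before a tester ever sees it.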
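And step 5's rule, 'test failure means no deploy', boils down to checking the exit code of the test command before ever running the deploy command. A minimal sketch of that gate, with placeholder commands (a real CI server such as Jenkins applies this same logic for you):

```python
import subprocess

def gate_deployment(test_cmd, deploy_cmd):
    """Run the unit tests; deploy only if they all pass.

    Returns True if the deploy command was run, False if failing
    tests blocked it -- which is what a Jenkins-like server does.
    """
    # A non-zero exit code from the test runner means a red build.
    if subprocess.run(test_cmd).returncode != 0:
        return False  # red build: the feature never reaches the test environment
    subprocess.run(deploy_cmd, check=True)
    return True
```

For a Python project, `test_cmd` might be `["python", "-m", "unittest"]` and `deploy_cmd` whatever script pushes the build to the test environment; both are assumptions here, not commands from the article.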

