The tester and the code review

Code review, although very important and frequent in the software development world, is not as common in the automation testing world. Normally, it would be part of the whole process: someone writes code and opens a review, the rest of the team reviews the changes, any needed changes are made, and the improved code becomes available to the team again. This helps produce better code and keeps the team aware of what is being implemented.

Code is still code, whether it implements a feature or tests one, so there should be code reviews for all of it.

Who should participate in code reviews?

For the developer's code

If a developer creates a review for some code he/she wrote, normally only other developers are asked to take a look at the changes. However, it is equally useful for a tester to read the code, to identify possible testing scenarios or to understand how the implemented feature will affect other features. Sure, the tester might not understand all the code, but at least he/she will have the chance to read the implementation and, with a little help from the developer, gain more insight into how and why the code was written.

For the tester's code

When it comes to automation test code, not many testers feel confident enough, or even want, to create code reviews, let alone include developers in them. Testers often fear a negative response to showing their code to others, when in fact code review is nothing more than a learning process and a way to improve. Sure, sometimes feedback can be written in a somewhat negative tone, but the idea is not to take it personally. Instead, testers should see that code review can offer good suggestions on how to improve what was already written, leading to more documentation reading and to learning new things.

That will help in finding a good approach whenever similar code needs to be written in the future. Of course, in the best-case scenario, nothing will need to be changed in the initial test implementation. But having code reviews can make the testing team aware of what new tests were written and what testing helper code has been added (code that can be reused in future tests for other functionalities under test).

Consider the code review a process for improving code, learning new things, and getting everyone up to speed on what tests and utilities are now available.

Types of feedback from reviews

Personally, I consider that feedback comes in several flavors and can be categorized according to severity and the actions to be taken.

Level 0

Or the "I should have fixed it before committing the code" level

This category includes things that should have been corrected before committing the code to the repository. These are common-sense issues that the IDE you are using will most likely highlight, in case you missed them yourself. Such things might be: unused imports, variables that are declared but never used, errors that prevent the code from compiling, incorrect naming (for example, a method name starting with an upper-case letter), and so on.
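As a hypothetical Java illustration (the class and method names below are invented for this sketch, not taken from any real project), here is what those Level-0 issues look like, and the same helper after a quick self-review:

```java
// Typical Level 0 issues an IDE would flag immediately:
//
//   import java.util.List;            // unused import
//   public boolean IsLoggedIn() {..}  // method name starts with upper case
//   int retries = 3;                  // variable declared but never used
//
// The same hypothetical helper after the pre-commit cleanup:
public class LoginChecker {

    // lowerCamelCase method name; no unused imports or variables remain
    public boolean isLoggedIn(String sessionToken) {
        return sessionToken != null && !sessionToken.isEmpty();
    }

    public static void main(String[] args) {
        LoginChecker checker = new LoginChecker();
        System.out.println(checker.isLoggedIn("abc123")); // true
        System.out.println(checker.isLoggedIn(""));       // false
    }
}
```

None of these issues stop the cleaned-up code from compiling, which is exactly why they are so easy to let slip into a review if you skip the self-check.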

Tip: before committing the code, do a proper review of it yourself. Depending on the IDE you are using, warnings of some sort are shown when you try to commit the code. Additionally, in the case of IntelliJ, you can use the Inspect Code feature to identify changes you need to make before sharing the code with your peers. It's better to fix these issues yourself than to let this kind of code get as far as the code review process.

Level 1

Or the "I should have studied this a bit more" level

To put it nicely, these items will get a bad review if they are found in the committed code. Developers would freak out if they saw them, simply because they are bad coding. One example would be a 'for' inside a 'while', inside which there are ten 'if's and a few try/catch blocks: gross code that contradicts every coding best practice and defies logic.
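To make the contrast concrete, here is a hedged sketch of one common fix for that kind of nesting: extracting each decision into a small, named method. The names (`countValidRows`, `isRowValid`) are invented for this example and don't come from any real framework.

```java
import java.util.List;

public class RowValidator {

    // Instead of a 'for' inside a 'while' wrapping a pyramid of 'if's,
    // the loop becomes one readable line over a named predicate:
    public long countValidRows(List<String> rows) {
        return rows.stream().filter(this::isRowValid).count();
    }

    private boolean isRowValid(String row) {
        // One guard clause per line replaces the nested 'if' pyramid
        if (row == null) return false;
        if (row.isBlank()) return false;
        return row.contains(",");
    }

    public static void main(String[] args) {
        RowValidator validator = new RowValidator();
        System.out.println(validator.countValidRows(List.of("a,b", "", "c,d", "x"))); // 2
    }
}
```

The behavior is identical to the deeply nested version; the difference is that a reviewer can now read each rule in isolation, which is usually what developers suggest in this situation.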

Tip: when you need to write code that seems too complex or difficult, read some documentation on the topic, or feel free to discuss it with your developers. They can advise you on a solution that is efficient and elegant. There is no shame in asking for advice in order to find the best solution for your automation code. It is far better to talk to them than to commit code that contradicts every good coding practice.

Level 2

Or the "there's another way to do it" level

This category contains suggestions rather than clear indications that something needs to be changed. In the world of code, most tasks can be accomplished in several different ways. People participating in a code review might therefore lean towards using a particular library, or towards writing the code from scratch, so there can be a difference of opinion between the person who wrote the code and the reviewers. The code the initial writer produced is not bad, but it could be written in a different way.

Tip: when a suggestion comes to write code in another manner, first evaluate the differences between the approach you, the code writer, took and the approach the reviewer suggested. You need to understand the pros and cons of both approaches; if you feel that what you wrote initially is the better choice, you can leave it as it is. However, if the reviewer's suggestion seems a better fit, don't hesitate to refactor your code accordingly. Things to consider when choosing one approach over the other include: how much code needs to be written, how long the test takes to run, any negative impact on the resources of the machine where it runs, and so on.

Level 3

Or the "looks are everything" level

Sometimes the feedback from a code review relates not to how the code was written but to what it looks like. Many projects set their own standards for structuring the code, which might include: naming conventions for certain types of files, patterns for messages written to the console, how many spaces to indent the code with, what to name utility classes, and so on. These are all meant to give the code a uniform, consistent look.
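As a small hypothetical example of such project-wide "look" rules, suppose a project requires utility classes to carry a "Utils" suffix and every console message to start with a "[STEP]" prefix. The class name and prefix below are invented to illustrate the idea, not a standard anyone must follow:

```java
// Hypothetical project conventions: utility classes end in "Utils",
// are final with a private constructor, and console messages share a prefix.
public final class LogUtils {

    private LogUtils() { }

    // Convention: every console line starts with the "[STEP]" prefix
    public static String stepMessage(String message) {
        return "[STEP] " + message;
    }

    public static void main(String[] args) {
        System.out.println(stepMessage("Opening the login page"));
    }
}
```

Rules like these do not change what the code does at all; they only make every file in the project look like it was written by the same hand.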

Tip: for such projects, get familiar with all the rules regarding code structure before writing any code. This will help you avoid having to fix the appearance of the code later on. Also, if such feedback is received during code review, the code should be updated to follow the project's "design" guidelines.

Level Na-ah

Or the "this is not a valid point" level

Sometimes, it just happens that the points specified as feedback are not valid.

Tip: if you feel that the feedback received is not valid, again, check the documentation and get in touch with the developers. If it turns out the feedback is indeed invalid, just disregard it, and possibly provide some feedback back: explain to the people who raised the points why they were not valid.
