

As a developer adding or changing functionality in a large piece of “legacy”, untested code, you generally have three choices.
Make the change and rely on manual testing
The decision to take this path is not an unreasonable one, given the work required for the other two options. It is ultimately an ROI decision: there are benefits to making a quick change and moving on to something “more important”. Perhaps it’s an area that’s well covered by manual tests, or a less important or low-risk area. Or it could be a simple change that can be made with a certain degree of confidence. There are many reasons this might be the approach taken.
Overall, this approach is far from ideal. While the change itself will likely be tested manually, it could have unforeseen impacts elsewhere, due to the size and complexity of the code you’re working in.
This is the path of least resistance, so leadership should expect it to be the default in the majority of situations in a pressurised environment.
Add tests to the existing code to test the new functionality
Adding tests to a large method when making a change to a complex area of code usually means a lot of work.
Imagine you have a method which gets a list of food items from a database, maps them to meal objects, modifies a selection of the food items based on the availability of an ingredient, then returns the modified list of meal domain objects. There are a lot of issues with this code, but it is no doubt a scenario you will have encountered many times: a method with too many responsibilities. The method is hundreds of lines long and there are no existing tests.
Your task as a developer is to cater for a scenario where, if one of the fields is set to “apple”, you have to change another field on that item during mapping - you need to set its category to “Fruit”.
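To make this concrete, here is a heavily compressed sketch of what such a method might look like. Everything here is hypothetical (the module name, open_database_connection, the food_items table, and so on), invented purely for illustration; the real method would be hundreds of lines long:

```python
# meals.py - a compressed, hypothetical sketch of the legacy method.
from dataclasses import dataclass
import sqlite3


@dataclass
class Meal:
    name: str
    category: str


def open_database_connection():
    # Real data access: the reason tests can't run without a database.
    return sqlite3.connect("food.db")


def ingredient_available(meal: Meal) -> bool:
    # Stands in for many lines of embedded availability-checking logic.
    ...


def get_meals() -> list[Meal]:
    # Data access, mapping, business rules and filtering, all in one method.
    rows = open_database_connection().execute(
        "SELECT name, category FROM food_items").fetchall()

    meals = []
    for name, category in rows:
        meal = Meal(name=name, category=category)  # row-to-domain mapping

        # The new requirement ends up buried in the middle of the loop:
        if meal.name == "apple":
            meal.category = "Fruit"

        # ...hundreds more lines of modification and output logic...
        if ingredient_available(meal):
            meals.append(meal)

    return meals
```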
If you have decided to add tests to this existing code as part of your approach, there are a number of things you need to do just to write one test around it. Once that is done, writing further tests becomes easier, but there is a large amount of setup to get through first.
To avoid the tests requiring a database connection, you need to mock the data access code to return a desired list of food items.
In this instance, you’re not interested in the logic which checks ingredient availability; you’re assuming it works as expected, or at least will be tested separately later. So you write code in your test to simulate this check and give the result you want.
You’re not worried about testing every possibility of meal result, or indeed any meal result, so again you simulate whatever output you need here.
Now that the rest of the scenario is set up, you write a few tests: one for the situation where the particular field is “apple”, one where it isn’t “apple”, and maybe another to check null or exception scenarios. The “apple” test confirms the output object has the category “Fruit”.
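Putting that together, a minimal test class might look like the sketch below, written against the hypothetical module above using Python’s built-in unittest.mock. Notice how much of it is simulation that exists only so the test can reach the one assertion we actually care about:

```python
# test_meals.py - minimal sketch using Python's built-in unittest.mock.
import unittest
from unittest.mock import MagicMock, patch

import meals  # the hypothetical module sketched above


class AppleCategoryTests(unittest.TestCase):
    def run_get_meals(self, rows):
        # Simulation 1: a fake connection, so no database is needed.
        fake_connection = MagicMock()
        fake_connection.execute.return_value.fetchall.return_value = rows

        # Simulation 2: availability always passes; we assume that logic
        # works, or will be tested separately later.
        with patch.object(meals, "open_database_connection",
                          return_value=fake_connection), \
             patch.object(meals, "ingredient_available", return_value=True):
            return meals.get_meals()

    def test_apple_is_categorised_as_fruit(self):
        # Simulation 3: hand-picked input rows to drive the output we want.
        result = self.run_get_meals([("apple", "")])
        self.assertEqual(result[0].category, "Fruit")

    def test_other_items_keep_their_category(self):
        result = self.run_get_meals([("bread", "Bakery")])
        self.assertEqual(result[0].category, "Bakery")

    def test_no_rows_returns_empty_list(self):
        self.assertEqual(self.run_get_meals([]), [])


if __name__ == "__main__":
    unittest.main()
```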
In this contrived example, there are at least three pieces of simulation to set up in your tests before you can get to testing the piece of logic you are changing or implementing. In the real world, in a really badly written piece of code, you can expect to have to simulate many pieces of logic in order to test the functionality you’re interested in.
Also note that you are doing the bare minimum to ensure your change is tested. No tests are being performed on the database access, the ingredient availability logic, or the rest of the output logic of this code. You have simply simulated enough to get the test to work its way through the code and reach the bit you’re interested in.
The decision to take this approach is often driven by leadership pressure to “add tests to all PRs”, or by an obsession with code coverage or other metrics. Often it will also be seen as a good middle ground between the three options in this article: you’re at least adding some tests, without having to make any significant, possibly dangerous changes to the existing code.
Refactor the code to allow it all to be tested
Taking this choice usually involves the most work. Sometimes, though, it will be less work than option 2, in which case it’s certainly the best decision.
Much of the time, however, it will be a significant effort, so business pressures usually dictate that one of the other two options be taken. You might think this option also carries the most risk, but done correctly it leads to more robust and well-tested code.
For our example above, we would break the code up according to the Single Responsibility Principle, which I have discussed at length in other articles. Once the code is sufficiently separated, each piece can be well tested. You can then build your new functionality using a TDD approach, writing the new code in a much cleaner and more testable way, instead of making difficult patch changes to existing bad code.
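As a sketch of what that separation might look like, assuming the same hypothetical names as earlier, each responsibility becomes a small function, and the orchestrating method receives its dependencies instead of creating them:

```python
# A hypothetical sketch of the same behaviour after separation.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Meal:
    name: str
    category: str


def apply_category_rules(meal: Meal) -> Meal:
    # The new requirement lives in one small function, written test-first.
    if meal.name == "apple":
        meal.category = "Fruit"
    return meal


def map_row_to_meal(name: str, category: str) -> Meal:
    # Pure mapping: trivially testable, no database required.
    return apply_category_rules(Meal(name=name, category=category))


def get_meals(fetch_rows: Callable[[], Iterable[tuple[str, str]]],
              is_available: Callable[[Meal], bool]) -> list[Meal]:
    # Orchestration only: data access and availability checking are
    # injected, so tests can pass in plain fakes rather than layered mocks.
    return [meal
            for meal in (map_row_to_meal(name, category)
                         for name, category in fetch_rows())
            if is_available(meal)]
```

With this shape, a test for the new rule needs no mocking at all: passing a Meal named “apple” to apply_category_rules and asserting its category becomes “Fruit” is a one-line test, and the data access and availability logic can each be tested in isolation as well.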
This is the best approach overall and the one that should be taken in most situations, but it often takes the longest, so it is rarely the direction chosen. That is a shame, as it has the most long-term benefits.
Cleaning the code up using SOLID principles makes it better to work with:
Future development is considerably faster and easier
Future maintenance, and bug fixing is quicker, easier and safer
Bugs are less likely to appear in this code moving forward
Test coverage is increased with valuable tests
The code is more enjoyable for developers to work on
Bugs may be found in the refactoring process
Correct functionality is confirmed and documented during the test creation process
Given all these advantages, taking this approach should be a no-brainer. Unfortunately, there are many obstacles to making this choice over one of the others:
Making such large changes can be intimidating to developers
The time involved is usually prohibitive in a pressurised environment
This work is seen as less valuable by leadership, as it is not directly adding features or fixing bugs
This approach can be risky if not done correctly - in taking apart existing code you need to add a lot of tests to make sure you don’t break anything
Conclusion
There are definitely pros and cons to each of these approaches. For the first option, manual testing has its place, and there may be more value in spending the time elsewhere. For the second, tools like GPT can assist in quickly bootstrapping tests.
Business pressures and ROI decisions will play a part, but developers should always endeavour to do their work to the best of their ability. That means making the code right, as far as they are allowed to. This may often be unpopular, but the benefits of the third approach will always win out in the long run.
The question may come down to how much life is expected of the product. There’s no point refactoring a large piece of code in a product which is unlikely to see much further development, or which runs with only infrequent issues.
In an ideal situation, code you’re working on shouldn’t violate the Single Responsibility Principle in the first place, and writing tests should be easy, every time. But we all know that just doesn’t happen in the real world.