If you find yourself maintaining a bunch of untested code, with no documentation whatsoever, that nobody else on the team knows anything about, you are probably in legacy hell.
Here are some common aspects of a legacy system and why it is so difficult to work with one.
A legacy system has usually been around for a number of years and has become hard to maintain.
Too Many Cooks
Different generations of developers have worked on it. The overall knowledge of the design and purpose of each module has been lost, and you find yourself digging in many places to figure out the logic of it all.
The bigger the project, the greater the chance of finding bugs. Even more so if there is co-dependency between the modules.
A large project tends to remain legacy for life because updating it is hard and risky.
Documentation is usually incomplete or non-existent, so you may be forced to reconstruct the intentions of the original design.
Even when there is documentation, it cannot always be trusted. Programmers generally hate to write and update technical documentation, so you cannot fully trust it without looking at what the code actually does. It’s a vicious cycle.
No Test Suite
Legacy systems usually lack reliable tests. Well-written tests can actually function as documentation, since tests usually remain synchronized with the code. Unfortunately, this is not always the case.
Developers often write code without realizing the importance of testing, and in a legacy system it is much harder to add tests once the functional part is over and done with.
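One practical way to start is with characterization tests: assertions that pin down what the code currently does, whether or not that behavior was ever specified. The sketch below uses a hypothetical `legacy_discount` function (the function and its rules are illustrative, not from any real system) to show how such tests double as executable documentation.

```python
# Hypothetical legacy function whose behavior we want to pin down
# before touching it; the rules below were recovered by reading the code.
def legacy_discount(total, is_member):
    discount = 0.10 if is_member else 0.0  # members get 10% off
    if total > 100:
        discount += 0.05                   # large orders get a further 5% off
    return round(total * (1 - discount), 2)

# Characterization tests: each assertion records what the code
# *currently* does, serving as documentation that cannot go stale
# without the test suite noticing.
assert legacy_discount(50, True) == 45.0
assert legacy_discount(200, False) == 190.0
assert legacy_discount(200, True) == 170.0
```

With this safety net in place, later refactorings can be verified against the recorded behavior instead of guesswork.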
Oftentimes, in the rush to get code to production, much-needed refactoring is not done when the original code is written. This becomes even harder to do as time goes by and new people get to work on the code.
Sometimes we resort to quick or temporary solutions, which are not always the most effective ones. Each time we do this, we accumulate technical debt.
Over time, the quality of the product diminishes and it becomes harder to enhance.
According to Andrew Morton, one of the lead maintainers of the Linux kernel, code reviews can improve the quality of your code and help prevent security flaws.
As you get to understand different areas of a legacy system, it is a good idea to have your code reviewed by your peers.
Doing this will help other members of the team understand more of the overall system.
Also, the people who participate in these reviews will have an easier time working with the code when the need arises in the future.
Hard to Configure
Legacy systems are usually difficult to configure. It may take hours or even days to download and install a system.
Then you have to learn how to use scripts or tools to create the necessary environment on your own machine. Only after all that can you start working on pending tasks. No effort is made to improve this process, and the next person will waste just as much time setting things up.
However, there are quite a few good reasons to keep the configuration of a project as streamlined as possible. The easier a project is to configure, the more people are available to work on it.
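Even without a full automation pipeline, encoding the setup steps in a small script saves the next developer from rediscovering them. The sketch below is a minimal example assuming two hypothetical requirements (Python 3.9+ and a `DATABASE_URL` environment variable); a real project would check whatever its own setup actually needs.

```python
# A minimal bootstrap check: one command tells a newcomer exactly
# what is missing from their environment. The specific requirements
# here are illustrative assumptions, not from any particular project.
import os
import sys

def check_environment():
    problems = []
    if sys.version_info < (3, 9):
        problems.append("Python 3.9 or newer is required")
    if not os.environ.get("DATABASE_URL"):
        problems.append("DATABASE_URL is not set")
    return problems

issues = check_environment()
if issues:
    print("Setup incomplete:")
    for issue in issues:
        print(f"  - {issue}")
else:
    print("Environment looks good.")
```

Each requirement the script checks is one fewer thing the next person has to learn by trial and error.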
The other work that is needed is to update dependent libraries to their latest and greatest version. This means you need to understand the implications of doing so to make sure you are not creating more problems than you are fixing.
This type of update must be done regularly. Otherwise, you might be facing security risks, for example.
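A cheap first step before upgrading anything is to find out where the declared dependencies have drifted from what is actually installed. The sketch below assumes pins are supplied as a simple name-to-version mapping (a stand-in for a parsed `requirements.txt`) and uses the standard-library `importlib.metadata` to compare against the environment.

```python
# A minimal dependency-drift audit. The pin format is an assumption
# for illustration; real projects would parse their own lock or
# requirements file.
from importlib import metadata

def audit(pins):
    """pins: dict mapping package name -> pinned version string.
    Returns packages whose installed version differs from the pin
    (installed is None when the package is missing entirely)."""
    report = {}
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != pinned:
            report[name] = (pinned, installed)
    return report
```

Running such an audit regularly, for example in CI, surfaces drift early, when the fix is still a small one.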
All too often in a legacy project, the development, testing, and production environments drift apart from each other.
Keeping these environments aligned, even with the aid of automated processes, makes a programmer’s life easier when replicating behaviors. Otherwise, mismatched environments can generate confusion.
Ignorance and Fear
Legacy systems are usually complex, poorly documented, or poorly tested. Those who run them are not always aware of what goes on behind the scenes. This lack of knowledge may lead teams to be afraid of making changes. They start to look at every change as “unnecessary” or “risky.” This, in turn, causes the project to enter a state where programmers defensively try to protect it from external agents, instead of taking steps to improve it.
Ironically, this fear makes the system even more of a legacy one. As time moves on, the lack of new features and updates makes it fall behind the competition. This may even create financial risks.
If you are working on a system with these characteristics, you are working on a legacy system.
On the other hand, there are systems that have been around for a while but have managed to stay up to date. Take the Linux kernel, first released in 1991: it has been maintained by several generations of developers, and it is big, estimated at over 15 million lines of code. Yet it keeps a high quality standard and a low defect density, and Linux users and developers maintain a continuous, open flow of communication.
Other examples worth mentioning are LibreOffice, Hadoop, Samba, Cassandra, and Tomcat, each with more than one million lines of code and a low defect density.
We don’t have to fix everything in one attempt; we need to revitalize our project one step at a time. The first step is to acknowledge that the problem exists.
Then, do small refactorings and get ready for bigger ones.
You may consider whether it is necessary to rewrite entire modules, split larger modules into smaller ones, and add more effective tests.
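The "small refactorings" above can be as modest as extracting one well-named helper from a tangled routine. The before/after sketch below is entirely hypothetical (the order-processing functions are invented for illustration), but it shows the pattern: once a rule has a name and a home of its own, it becomes easy to test in isolation.

```python
# Before: a validation rule buried inside a larger routine,
# testable only by exercising the whole function.
def process_order_before(order):
    if not order.get("items") or order.get("total", 0) <= 0:
        raise ValueError("invalid order")
    return {"status": "accepted", "total": order["total"]}

# After: the rule is extracted into a named helper with its own tests.
def is_valid_order(order):
    return bool(order.get("items")) and order.get("total", 0) > 0

def process_order(order):
    if not is_valid_order(order):
        raise ValueError("invalid order")
    return {"status": "accepted", "total": order["total"]}

# The extracted rule can now be verified directly.
assert is_valid_order({"items": ["book"], "total": 20})
assert not is_valid_order({"items": [], "total": 20})
```

Each extraction like this shrinks the scary, untestable core of the system a little, and the behavior stays pinned down by the tests along the way.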
The most important thing is to keep communication open with the whole team, to disseminate knowledge and establish a coherent design to follow.
If we do all this and clean things up, the code will become more readable, we will be better able to write tests, introducing new features will become easier, and the fear of updating the code will diminish.