Writing Better Code: Keeping It Maintainable

This article provides guidance for writing better, more maintainable code.

The Developer's Dilemma

There are two main ideas we need to keep in mind and follow in order to keep our code maintainable.

1. Complexity is the enemy!

2. Fragility is the enemy!

The problem we encounter as developers is that there is an inverse relationship between these two opposing forces: creating more flexibility to reduce fragility in our code requires more complex code.

So what can we do? We basically have two options.

We can write very (very) simple code, but it will not respond well to change and will be very (very) rigid and fragile. The extreme case would be procedural code that does not take advantage of any object-oriented constructs. But what happens when something has to change? We need to go back into the code and rewrite large pieces of it, and we can't finish before our deadline.

We can write code that takes every possible change into account and uses every fancy design pattern we have ever found. When we first run across a cool new pattern, we naturally look for places to throw it in. If we're really smart, we can even invent a few new ones to add to the code stew just for flavor. But what happens when something has to change? The code is so difficult to understand that we can't finish before our deadline.
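
To make the two extremes concrete, here is a hypothetical sketch (in Python, with invented names) of the same trivial requirement written both ways:

```python
from abc import ABC, abstractmethod

# Requirement: apply a 10% discount to an order total.

# Extreme 1: dead simple, but rigid -- the rate and the rule are baked in.
def discounted_total(total):
    return total * 0.9

# Extreme 2: "pattern fever" -- a strategy hierarchy plus a factory,
# for a single rule that has never needed to vary.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, total): ...

class PercentageDiscount(DiscountStrategy):
    def __init__(self, rate):
        self.rate = rate

    def apply(self, total):
        return total * (1 - self.rate)

class DiscountStrategyFactory:
    @staticmethod
    def create():
        return PercentageDiscount(0.10)

# Both produce the same answer; the second costs far more to read and maintain.
assert discounted_total(100.0) == DiscountStrategyFactory.create().apply(100.0)
```

Neither extreme is wrong in itself; the point is that every layer of indirection in the second version is a bet that the discount rule will change.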

What Should We Do?

Since we know that we (and our fellow developers) will need to maintain the code we write, we need to find the "Sweet Spot" that balances these two opposing forces, maximizes maintainability, and keeps our code (and us) healthy. To create maintainability we need to minimize both complexity and fragility.

How do we code in the "Sweet Spot"?

Developers are Habitual Gamblers

As developers, when we run across a potential place where code may change (which is nearly every single line of code), we need to take a gamble. If we put our money (and time) into adding extra code to handle a future change and the change never occurs, then we have unnecessary complexity and we lose the gamble. If we put our money (and time) on keeping it simple and the change does come up later, then we have less flexible code (more fragility) and again have lost the bet.

As with all gambles, sometimes we'll win and sometimes we'll lose. The trick is to maximize our gains and minimize our losses. If we take the gamble and win, everything is good. So the first thing we need to do is be able to see the future (like the main character in the movie "Next") and know what changes are coming down the road. If you are like me and don't have the required superpower or a crystal ball readily available, you can do the next best thing: rely on experience. Pay attention to when things change and keep this in mind for the next time you need to gamble.

Since we know we will sometimes (or even often) place losing bets, the next thing we should do is try to minimize our losses. There are two potential "losing" bets we can place: (1) we code for change that never happens, and (2) we don't code for change that does happen. What do we lose in each case?

In the first scenario, where we code for change that never happens, we lose two (or three) things. We lose the initial time and effort it took to add flexibility that is never used. We also lose maintainability, because we now have code that is much more complex than it needs to be. And we may lose any additional time required to factor out the complexity if it becomes unbearable.

In the second scenario, where we don't code for changes that actually happen, we lose only one thing: the time it takes to go back and refactor our code to adapt to the change.

If we look at the effects over the long-run, what happens?

If we write code that is more complex than necessary, the "Complexity" side of the balance will grow and make it harder and harder to achieve maintainability. For example, if we have the dreaded "design-pattern fever," keep coding for changes that never happen, and never return to the operating room to fix our code, it will become more (and more) complex and less (and less) maintainable. If this continues long enough, eventually our code will be so sickly that the only option will be euthanasia (which is always sad): the functionality will need to be completely rewritten or the carcass will simply be discarded.

If we keep missing flexibility that later turns out to be required, then we will need to continuously go back and refactor our code to handle new requirements, but at least we will keep the balance between fragility and complexity, and our code will maintain a healthy state.

"When in doubt, leave it out"

To keep our "code garden" alive and well, we need to tend to it constantly through the changing seasons, maintaining the balance between complexity and fragility. This means weeding out unnecessary complexity and adding required flexibility through refactoring. To strategically minimize the amount of work we need to do, I have found one simple rule that works well: "When in doubt, leave it out." If we are not at least 80% sure that a change will happen, we shouldn't waste our time coding for it; if the flexibility goes unused (and we want our code to be maintainable), we'll need to go back and factor out the complexity, which is wasted time on both ends. Of course, the other extreme is to never plan for change, in which case we'll be less efficient in the long run and will spend an inordinate amount of time refactoring our code.

Another situation arises as we are coding: sometimes we get a bit dizzy and light-headed as tempting ideas start floating in front of our eyes to the tune of "Hey! Wouldn't it be cool if the software could...?" Although it's fun to daydream about all the cool things we could do, we shouldn't lose sight of what we have to do. The dizziness and light-headedness are usually symptoms of a hard-to-contain disease that most developers have caught at one time or another. If we give in to the disorientation and take the scenic route, we can end up with some really cool feature-rich software that does everything: driving the kids to school, scouring the web for the research materials we need for the next meeting at work or school, even solving all the problems preventing world peace. All of this is really cool, but if we just need something that helps us keep track of our record collection, the coolness will never be used (well, maybe except for the world peace thing), and we'll again have additional complexity from unused features making our code unhealthy and harder to maintain. So again our rule applies: "When in doubt, leave it out." Once we have a specific requirement that says we need to solve world peace, we should tackle it; but if it is not required for keeping track of our albums, we should wait until the "world peace features" are required. At that point we should consider making them a plug-in, because they will probably consist of a substantially large (and complex) chunk of code.
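
As a hypothetical illustration of the rule (invented names, Python for brevity), here is a record-collection helper written with speculative hooks nobody asked for, next to the version that leaves them out:

```python
# Speculative version: hooks for requirements nobody has asked for yet.
def format_album(title, artist, year,
                 style="plain",          # only "plain" is ever passed
                 locale=None,            # never used by any caller
                 post_process=None):     # never used by any caller
    text = f"{artist} - {title} ({year})"
    if style == "plain":
        result = text
    else:
        raise NotImplementedError(style)
    if post_process is not None:
        result = post_process(result)
    return result

# "When in doubt, leave it out": the same behavior, without the unused hooks.
def format_album_simple(title, artist, year):
    return f"{artist} - {title} ({year})"

assert (format_album("Kind of Blue", "Miles Davis", 1959)
        == format_album_simple("Kind of Blue", "Miles Davis", 1959))
```

If a caller ever does need a custom format, adding a parameter back is a cheap, well-understood refactoring; carrying the unused hooks forever is not.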

Tests, Tests, and More Tests

It is important, even crucial, to have tests in place to help us manage change and ensure we are not breaking anything. As our code matures, testing becomes increasingly important to ensure stability. The more useful testing we have in place, the easier it is to maintain our code. In a perfect world, we would have a separate test project (or two) for each assembly we write.

While not always possible, it is a good idea to have our high-level business logic tests and our method-level tests separated into different test projects, or at least different test classes. The business logic tests are usually more involved; they are coded against multiple public objects and public interfaces and test the interactions among objects. Method-level tests check things like property getters and setters and simple methods, and can even check internal objects, properties, and methods. If we keep our tests organized this way, then when our code changes we can easily remove tests that are no longer needed while still keeping our business logic pinned down. Also, other developers can look at our high-level tests to help them understand how to consume the functionality we are exposing.
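
A minimal sketch of this split, using Python's unittest purely for illustration (the Account class and all names here are invented; in .NET the same shape applies with separate test projects or classes):

```python
import unittest

# Hypothetical object under test.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

    def transfer_to(self, other, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        other.deposit(amount)

# Method-level tests: individual methods and simple state, in isolation.
class AccountMethodTests(unittest.TestCase):
    def test_deposit_increases_balance(self):
        a = Account()
        a.deposit(50)
        self.assertEqual(a.balance, 50)

    def test_deposit_rejects_non_positive_amounts(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)

# Business-logic tests: interactions among objects through the public API.
class TransferBusinessLogicTests(unittest.TestCase):
    def test_transfer_moves_money_between_accounts(self):
        a, b = Account(), Account()
        a.deposit(100)
        a.transfer_to(b, 40)
        self.assertEqual((a.balance, b.balance), (60, 40))
```

Run with `python -m unittest`. Because the two kinds of tests live in separate classes, method-level tests can be pruned when implementation details change while the business-logic tests stay pinned down.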

There are a few primary situations where we have opportunities to write tests: as we code (or even before we code; if you are not familiar with this approach, check out my TDD article), as we refactor, and as we fix bugs. We can also write tests to help us figure out existing code that we did not write while we are poking at it. Sometimes we need to change code that does not have unit tests. In this case, we should feel almost obligated to write some tests to pin down the functionality of the code we are about to change. It takes time and effort to build the tests, but we can be more confident our changes don't break the existing functionality if we can run the same tests successfully before and after our changes. And when we run across a bug, we have an opportunity to write a unit test that reproduces the buggy behavior and fails as a result. This way we can be sure that we (or someone else) do not re-introduce the same bug in a later iteration.
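
For example (a hypothetical bug with invented names), here is a regression test that was written to reproduce a bug, and which now stays in the suite to keep the bug from coming back:

```python
# Hypothetical bug: paginate() silently dropped the final, partial page.
def paginate(items, page_size):
    # Fixed implementation; the buggy version iterated over
    # range(0, len(items) - page_size + 1, page_size) and lost the tail.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

# Regression test written when the bug was found: it failed against the
# old implementation, and now pins the behavior down so the same bug
# cannot quietly return in a later iteration.
def test_paginate_keeps_partial_last_page():
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

test_paginate_keeps_partial_last_page()
```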

While it is probably not worth the effort (or the required complexity) to get 100% code coverage, we should pin down as much of our code's functionality with tests as we can. Beware: it is easy to get distracted and caught in the trap of chasing 100% coverage for the sake of coverage, which is not really the point. Besides, many tests will cover code without truly testing it. If we have 70%-80% functional coverage then we are doing pretty well; anything higher usually comes with diminishing returns. Unit tests should not be an end in themselves. Rather, they should be put in place to test the functionality we are providing and to ensure we can easily handle change in the future. For example, it doesn't make sense to put forth the effort to unit test the functional details of an assembly other than our own if the only purpose is a higher coverage percentage. What does make sense is if we happen to exercise the details of another assembly while testing the code that we wrote.

Abstraction and Encapsulation

The final thing we can do to manage complexity is to encapsulate it and "hide" it from the consuming code. This way we have "pockets" of more maintainable code, and the complex, hard-to-maintain code may be bypassed entirely when change comes our way. If we can offer a simple interface (an easy-to-use API) at the proper level of abstraction, exposing the simplest possible hooks into the more complex functionality underneath, then we can avoid mixing the necessarily complex code in with the simpler wiring code.

To get this type of abstraction, we need to continually think from the consuming code's perspective. We need to keep the surface area of our assembly as small and concise as possible. We can do this by ensuring no artifacts (classes, interfaces, enums, and so on) are publicly exposed from an assembly unless they are absolutely necessary for consuming its functionality. Making sure all the classes in our assembly are either internal or private until they need to be consumed is a good practice. Also, if we are diligent about exposing only abstract classes and interfaces (except for maybe some factories), then consumers don't need to deal with the nitty-gritty details of how the code works, and we get the added bonus of loose coupling.

Personally, I've often found it useful to put all the public interfaces and abstract classes in a separate assembly from the actual implementation. If I then make sure to expose nothing publicly from the implementing assembly except the factory classes, I can be sure not to accidentally make life difficult and/or confusing for anyone wanting to consume the utility. The resulting loose coupling also makes the code more flexible, simpler, and easier to consume, and it truly isolates the consumer from complexity, so I see it as a win-win situation. This approach also makes it easier to implement late-binding/pluggable functionality if we want to.
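
The shape of this approach can be sketched in Python (the article's context is .NET assemblies; here a leading underscore stands in for internal visibility, and all names are invented for the example):

```python
from abc import ABC, abstractmethod

# Public contract: this is all a consumer ever programs against.
class MessageSender(ABC):
    @abstractmethod
    def send(self, recipient, body): ...

# Public factory: the only concrete entry point consumers touch.
def create_sender():
    return _SmtpSender()

# Implementation detail, kept off the public surface
# (in .NET this would be an internal class in the implementing assembly).
class _SmtpSender(MessageSender):
    def send(self, recipient, body):
        # Real delivery details would live here; we just report success.
        return f"sent to {recipient}: {body}"

# Consumers stay loosely coupled to the abstraction:
sender = create_sender()
assert isinstance(sender, MessageSender)
```

Because callers only ever see MessageSender and the factory, the concrete class can be rewritten, or swapped for a pluggable implementation, without touching any consuming code.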

Another way of thinking about it is the car analogy. No matter who manufactures a car, we don't need to know the mechanical details in order to drive it. All the implementation details are covered up by a nice dashboard, and we get just a couple of pedals and a steering wheel to interface with this complex machine, which is especially convenient when all we want to do is get to work in the morning. We need to find ways to make our code usable in this same way.

Factoring out needless complexity (and sometimes entire unused features), along with factoring in new features and providing flexibility where it is required, is necessary to keep our code simple, flexible, healthy, and alive. Code health and maintainability come from an ongoing thought process and discipline as we make architectural and more granular implementation decisions while coding (such as cohesiveness). Ensuring maintainability also entails cleaning up the small messes we run across as we are making changes. Finally, we need to have as much functionality as possible pinned down by unit tests to help us manage the refactoring process so we don't break functionality when code must change.

Until Next Time,

Happy Coding