2010/06/03

Rewarding maintainable code

In many software projects, developers are mainly rewarded for implementing new features. However, besides functionality, quality attributes like maintainability are also a key factor for staying competitive in the software industry. If maintainability decreases over time, the cost of introducing new features increases, and in the worst case the project becomes unable to respond to new market demands in a timely manner.
In my opinion, developers should be explicitly and transparently rewarded for assuring the maintainability of the software, e.g. through refactorings or excellent technical design.
Unfortunately, the positive effect of new features is visible much earlier than the negative consequences of bad design or weak code quality. However, if we understand that investing in maintainability pays off in the long run, we can employ known methods for assessing the maintainability of the software and reward developers for producing and sustaining maintainable code.

Code Reviews

Code reviews are an effective method for finding defects in software. Unfortunately, they are not very common in practice. Often they are omitted due to time pressure and/or cost reasons.
However, it has been known for a long time that the later a defect is found in the software lifecycle, the more costly it is to fix. Reviews can be used to find defects early, even in the requirements elicitation or design phase. I think reviews should be used in conjunction with tests, since they are complementary: each is more effective at finding certain types of defects than the other.
In my opinion, many software projects could benefit from employing a lightweight review technique like peer reviews using state-of-the-art review tools like Atlassian Crucible.

2009/05/27

Common Unit Testing Problems

In my opinion, the following issues are among the most common problems (in order of severity) one can encounter in unit testing.

-> Nondeterministic tests
Tests that fail and succeed at random can be a real problem. In fact, a nondeterministic test can be considered worthless, since you cannot rely on its result. One possible cause is that the test asserts on behavior influenced by a (pseudo-)random value. A more complicated cause can be a race condition between two threads. In that case, the only options you have are to track down the race condition or, if possible, to eliminate the multi-threadedness of the test.
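As a minimal sketch of the first cause (all names are made up): a hypothetical Shuffler receives its Random from the outside, so the test can pin the seed and the outcome becomes reproducible.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

import org.junit.Test;

public class ShufflerTest {

    // Hypothetical class under test: it receives its source of randomness
    // from the outside instead of creating one internally.
    static class Shuffler {
        private final Random random;

        Shuffler(Random random) {
            this.random = random;
        }

        List<Integer> shuffle(List<Integer> input) {
            List<Integer> copy = new ArrayList<Integer>(input);
            Collections.shuffle(copy, random);
            return copy;
        }
    }

    @Test
    public void shuffleIsReproducibleWithFixedSeed() {
        List<Integer> input = Arrays.asList(1, 2, 3, 4);
        // Two shufflers seeded identically must produce identical results,
        // so this assertion can never fail at random.
        List<Integer> first = new Shuffler(new Random(42L)).shuffle(input);
        List<Integer> second = new Shuffler(new Random(42L)).shuffle(input);
        assertEquals(first, second);
    }
}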

-> Tests running into deadlocks (usually in conjunction with the first one)
To me this is the worst problem I can think of: your build machine hangs and the value of continuous integration decreases radically. A good first step is to set a timeout for the test runner, if possible. Beyond that, one again has to either find the competing threads or eliminate the multi-threadedness.
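As a small illustration: JUnit 4 supports a per-test timeout, which turns a hanging test into a failing one. The worker thread below is of course contrived.

import java.util.concurrent.CountDownLatch;

import org.junit.Test;

public class TimeoutGuardTest {

    // The timeout (in milliseconds) fails the test instead of letting it
    // block the build machine forever.
    @Test(timeout = 1000)
    public void workerFinishesInTime() throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        new Thread(new Runnable() {
            public void run() {
                // Real code would do actual work here; if a deadlock
                // prevented countDown(), the timeout would strike.
                done.countDown();
            }
        }).start();
        done.await();
    }
}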

-> Interdependent tests
Sometimes a test fails after a change to another test. Especially irritating is that each of the tests may pass when executed individually, while the failure occurs only when the whole test suite is executed. This indicates that the tests (possibly unintentionally) share resources between test runs. Ideally, a test should create all its resources from scratch and destroy them afterwards. However, there are cases where this is not pragmatic because creating and/or destroying the resource is too expensive. In that case, one has to make sure that each test leaves the resource in the state that every other test using it expects.
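A minimal sketch of the fresh-fixture approach; a temporary file plays the role of the shared resource so the example stays self-contained.

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class FreshFixtureTest {

    private File scratchFile;

    @Before
    public void setUp() throws IOException {
        // Fresh fixture: every test gets its own file, so no state can
        // leak between tests and the execution order does not matter.
        scratchFile = File.createTempFile("fixture", ".txt");
    }

    @After
    public void tearDown() {
        // Destroy the fixture even if the test failed.
        scratchFile.delete();
    }

    @Test
    public void newFixtureStartsEmpty() {
        assertEquals(0L, scratchFile.length());
    }

    @Test
    public void writingIsOnlyVisibleToThisTest() throws IOException {
        FileWriter writer = new FileWriter(scratchFile);
        writer.write("hello");
        writer.close();
        assertEquals(5L, scratchFile.length());
    }
}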

-> Long-running tests
Again, long-running tests decrease the value of continuous integration. Mostly this indicates that a test is testing too much at once, i.e. it is too coarse: an integration test rather than a unit test. Of course, such tests might still be valuable; however, they should not be executed in the continuous integration build. A nightly build is ideally suited for these kinds of tests.

-> Redundant tests
If a small change in the system causes a huge number of tests to fail, this can be a sign of redundant tests. This means that too many tests are exercising the same aspects of the system. Such redundancy causes high effort in maintaining the test suite, while there might be yet untested parts of the system where the effort could be spent more reasonably. Obviously, the solution is to carefully identify and remove the redundancy in the test suite.

-> Overly complicated tests
Tests that are hard to understand suffer from the same problem as complicated code: developers will have a hard time maintaining them. If you reason about a test for two minutes and still have no idea what it is actually testing, the test is very likely just too complicated.
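One common remedy, also described by Meszaros (see the book recommendation at the end): hide noisy setup behind an intention-revealing creation method. The Customer class here is a made-up stand-in.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical class under test.
    static class Customer {
        private final boolean premium;

        Customer(String name, String email, boolean premium, int loyaltyYears) {
            this.premium = premium;
        }

        double getDiscount() {
            return premium ? 0.1 : 0.0;
        }
    }

    // Creation method: the constructor noise that is irrelevant to the
    // test gets a name that states exactly what matters.
    private static Customer aPremiumCustomer() {
        return new Customer("Alice", "alice@example.com", true, 5);
    }

    @Test
    public void premiumCustomerGetsTenPercentDiscount() {
        // The test now reads as a single, obvious statement of intent.
        assertEquals(0.1, aPremiumCustomer().getDiscount(), 0.001);
    }
}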

-> Tests with only weak assertions
This is a slightly more subtle issue. Sometimes a test does a lot of work but ends with an assertion that verifies only a tiny part of the functionality that was exercised. In other words, one could introduce lots of errors into the production code and the test would still pass. This is not a severe problem in itself; however, as stated before, each test has to be maintained and therefore has to provide decent value in return.
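To illustrate, with StringBuilder standing in for real production code: the first test would pass for almost any buggy implementation, while the second pins down the complete result.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertionStrengthTest {

    @Test
    public void weakAssertion() {
        StringBuilder builder = new StringBuilder();
        builder.append("Hello, ").append("world");
        // Weak: verifies almost nothing about the exercised behavior.
        assertTrue(builder.length() > 0);
    }

    @Test
    public void strongAssertion() {
        StringBuilder builder = new StringBuilder();
        builder.append("Hello, ").append("world");
        // Strong: verifies the complete observable result.
        assertEquals("Hello, world", builder.toString());
    }
}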

-> Trivial tests
This issue is very similar to the previous one. Again, trivial tests (think of a test that merely asserts that a getter returns the value just passed to the corresponding setter) provide only a small value/effort ratio and should therefore be avoided.

What about you? Have you experienced other issues?
A great book covering so-called test smells is xUnit Test Patterns by Gerard Meszaros.

2009/05/26

Eclipse Plug-in Dependency Graph

In an earlier post I wrote about the PDE Incubator project Dependency Visualization, which displays the dependencies of Eclipse plug-ins as a graph using the Zest library. Since there seems to be considerable interest in this topic and the project has made some progress, I decided to take a second look at it.

By now it is also possible to toggle between two modes on the toolbar: you can show either the callers or the callees of a selected plug-in. As mentioned in the earlier post, I wanted to be able to exclude Eclipse plug-ins in cases where the focus is solely on the dependencies between the self-developed plug-ins. Therefore I added a very simplistic filter capability. I provide the modified and updated version of this plug-in here (Eclipse 3.4.x) for your convenience.

[Update 3.1.2010] Download for Eclipse 3.5.x

2009/04/18

When to write a unit test?

In my opinion, the following are situations that call for another unit test.
  • Before writing new production code (TDD)
  • Before fixing a bug (exposing the bug with a test; see the sketch after this list)
  • In order to get to know an API whose documentation doesn't help much
  • When encountering untested yet critical code (e.g. using a coverage tool)
  • In case you are bored and have nothing else to do ;-)
Of course, when strictly following TDD, the fourth situation cannot occur. However, in real life we have to cope with untested legacy code.
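For the second item in the list above, here is a sketch of a bug-exposing test, built around a hypothetical rounding bug (the roundToCents method and the bug report are made up):

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.Test;

public class RoundingRegressionTest {

    // Hypothetical production method. The fictional bug report claimed
    // that half-cent amounts like 0.125 were rounded down instead of up.
    static double roundToCents(double amount) {
        return new BigDecimal(Double.toString(amount))
                .setScale(2, RoundingMode.HALF_UP)
                .doubleValue();
    }

    // Written before the fix: back then it failed and exposed the bug;
    // now it documents the expected behavior and guards against regressions.
    @Test
    public void roundsHalfCentsUp() {
        assertEquals(0.13, roundToCents(0.125), 0.0001);
    }
}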

2009/03/20

Build times

When using Continuous Integration (CI), one common problem is that the build simply takes too long.

In this case, the first thing to do is to ensure that you have a decent build machine. This should not be an old developer box someone left in a dark corner of your office. Instead, use a true high-end system that is dedicated to CI and nothing else. This alone can speed things up considerably.

Another very common problem is that the tests take too long. Usually this originates from tests that do too much; mostly these are system tests rather than small unit tests. The best thing here is to divide your tests into two groups: the fast tests are executed in each build, while the slow tests run in a second build that executes only once at night.
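A minimal sketch of such a division with plain JUnit 4 suites; the referenced test classes are made up and each suite would live in its own file.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

// FastTestSuite.java: executed by the CI build on every commit.
@RunWith(Suite.class)
@SuiteClasses({ ParserTest.class, ValidatorTest.class })
public class FastTestSuite {
}

// NightlyTestSuite.java: executed once per night, holds the coarse,
// long-running tests.
@RunWith(Suite.class)
@SuiteClasses({ FullDatabaseImportTest.class, EndToEndOrderTest.class })
public class NightlyTestSuite {
}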

Also, the CI build only has to do what's necessary to gain confidence in the code you just committed to the VCS, that is: compile and test. Packaging and the like might be time-consuming parts of your build which do not necessarily have to run in the CI build. Again, the nightly build would do all that and provide you with a complete build every morning.

2009/01/16

Iteration Retrospectives

The idea of iteration retrospectives is to continuously improve the way your team develops software. Put simply, you sit together with your team at the end of each iteration and discuss what worked during the iteration and what needs improvement. Additionally, you derive a list of things you want to do differently in the next iteration.
To me, the benefit is obvious. You leverage the experience and creativity of the whole team for improving your process.
There is an entire book on this topic from the Pragmatic Programmers: Agile Retrospectives: Making Good Teams Great