In my opinion, the following issues are among the most common problems one encounters in unit testing, listed in order of severity.
-> Nondeterministic tests
Tests that fail and succeed at random are a real problem. In fact, a nondeterministic test can be considered worthless, since you cannot rely on its result. One possible cause is that the test asserts on behavior that is influenced by a (pseudo-)random value. A more complicated cause is a race condition between two threads. The only options you have are to track down the race condition or, if possible, to eliminate the multi-threadedness of the test.
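For the random-value case, injecting the random source into the code under test lets you pin the test to a fixed seed. A minimal JUnit 4 sketch, where Shuffler is a hypothetical class under test that accepts a java.util.Random:

    import java.util.Random;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShufflerTest {

        @Test
        public void pickIsDeterministicForAFixedSeed() {
            // Two generators seeded identically produce the same sequence,
            // so the (hypothetical) Shuffler must behave identically too.
            Shuffler first = new Shuffler(new Random(42L));
            Shuffler second = new Shuffler(new Random(42L));
            String[] items = {"a", "b", "c"};
            assertEquals(first.pick(items), second.pick(items));
        }
    }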
-> Tests running into deadlocks (usually in conjunction with the first one)
To me this is the worst problem of all: your build machine hangs and the value of continuous integration drops radically. A good first step is to set a timeout in the test runner, if possible. Beyond that, one again has to either find the competing threads or eliminate the multi-threadedness.
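If the tests run under JUnit 4, for instance, a timeout can be declared per test or, via a rule, for the whole class, so a deadlocked test fails instead of freezing the build. A minimal sketch:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.Timeout;
    import static org.junit.Assert.assertEquals;

    public class BlockingQueueTest {

        // Aborts ANY test in this class after ten seconds, so a deadlock
        // turns into a test failure instead of a hanging build machine.
        @Rule
        public Timeout globalTimeout = Timeout.seconds(10);

        @Test(timeout = 2000) // per-test timeout in milliseconds
        public void takeReturnsThePutElement() throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
            queue.put("item");
            assertEquals("item", queue.take());
        }
    }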
-> Interdependent tests
Sometimes a test fails after a change to another test. Especially irritating is the fact that each of the tests may pass when executed individually, while the failure occurs only when the complete test suite is executed. This indicates that the tests (possibly unintentionally) share resources between test runs. Ideally, a test should create all its resources from scratch and destroy them afterwards (see the sketch below). However, there are cases where this is not pragmatic, because creating and/or destroying the resource is too expensive. In that case one has to make sure that each test leaves the resource in the state that every other test using it expects.
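With JUnit 4, the create-from-scratch, destroy-afterwards discipline maps directly onto @Before and @After; a sketch using a per-test temporary directory as the shared resource:

    import java.io.File;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    public class WorkDirTest {

        private File workDir;

        @Before
        public void createFreshWorkDir() {
            // Every test gets its own directory; nothing can leak
            // from a previous test run into this one.
            workDir = new File(System.getProperty("java.io.tmpdir"),
                    "work-" + System.nanoTime());
            assertTrue(workDir.mkdir());
        }

        @After
        public void destroyWorkDir() {
            for (File file : workDir.listFiles()) {
                assertTrue(file.delete());
            }
            assertTrue(workDir.delete());
        }

        @Test
        public void startsWithAnEmptyDirectory() {
            assertEquals(0, workDir.listFiles().length);
        }
    }

JUnit 4 also ships a TemporaryFolder rule that does the same bookkeeping for you.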
-> Long-running tests
Again, long-running tests decrease the value of continuous integration. Mostly this indicates that a test is testing too much at once, i.e. it is too coarse: an integration test rather than a unit test. Of course, such tests might still be valuable, but they should not be executed in the continuous integration build. A nightly build is ideally suited for these kinds of tests.
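One way to keep such tests out of the CI build without deleting them is to tag them; JUnit 4's categories, for example, let the nightly build include what the CI run excludes. A sketch, with a hypothetical import pipeline as the slow subject:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    public class ImportPipelineTest {

        // Plain marker interface used as the category tag.
        public interface SlowTests {
        }

        @Category(SlowTests.class)
        @Test
        public void fullImportRoundTrip() throws Exception {
            // ... exercises the whole (hypothetical) import pipeline ...
        }

        @Test
        public void parserRejectsEmptyInput() {
            // fast unit-level test, runs in every CI build
        }
    }

The CI runner is then configured to skip the tagged tests (e.g. via excludedGroups in the Maven Surefire plugin), while the nightly build runs everything.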
-> Redundant tests
If a small change in the system causes a huge number of tests to fail, this can be a sign of redundant tests: too many tests are exercising the same aspects of the system. This drives up the effort of maintaining the test suite while there might be yet untested parts of the system where the effort could be spent more reasonably. Obviously, the solution is to carefully identify and remove the redundancy in the test suite.
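As an illustration, with hypothetical Invoice and Checkout classes, the two tests below exercise exactly the same tax calculation through different entry points; a change to the tax rate breaks both, yet the second adds no coverage:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceTest {

        @Test
        public void totalIncludesNineteenPercentTax() {
            assertEquals(119.0, new Invoice(100.0).total(), 0.001);
        }

        @Test
        public void checkoutTotalIncludesNineteenPercentTax() {
            // Redundant: Checkout merely delegates to Invoice.total(),
            // so this re-exercises exactly the calculation tested above.
            assertEquals(119.0, new Checkout(new Invoice(100.0)).total(), 0.001);
        }
    }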
-> Overly complicated tests
Tests that are hard to understand have the same problem as complicated code: developers will have a hard time maintaining them. If you think about a test for two minutes and still have no idea what it is actually testing, the test is very likely just too complicated.
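The usual cure is one behavior per test, named after that behavior, with a visible arrange/act/assert structure. A sketch with a hypothetical Order class:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {

        @Test
        public void ordersOverOneHundredEuroGetTenPercentDiscount() {
            Order order = new Order(120.0);             // arrange
            double price = order.priceAfterDiscount();  // act
            assertEquals(108.0, price, 0.001);          // assert
        }
    }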
-> Tests with only weak assertions
This is a slightly more subtle issue. Sometimes a test does a lot of work and ends with an assertion that verifies only a tiny fraction of the functionality it executed. In other words, one could introduce lots of errors into the production code and the test would still pass. This is not as severe as the issues above; however, as stated before, each test has to be maintained and therefore has to provide decent value in return.
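A sketch of the difference, assuming a hypothetical CsvParser; the first assertion alone would survive almost any bug in the parsing code:

    import java.time.LocalDate;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    public class CsvParserTest {

        @Test
        public void parsesAllFieldsOfARecord() {
            CsvRecord record = CsvParser.parse("42;Smith;2023-01-01");

            // Weak: still passes if id, name, and date are all parsed wrong.
            assertNotNull(record);

            // Stronger: pins down the complete observable result.
            assertEquals(42, record.getId());
            assertEquals("Smith", record.getName());
            assertEquals(LocalDate.of(2023, 1, 1), record.getDate());
        }
    }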
-> Trivial tests
This issue is very similar to the previous one. Again, trivial tests provide only a small value/effort ratio and should therefore be avoided. A typical specimen follows below.
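Here is such a specimen, with a hypothetical Customer bean:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CustomerTest {

        @Test
        public void getNameReturnsTheName() {
            // Trivial: exercises nothing but a field access. The cost of
            // keeping this test compiling exceeds anything it can catch.
            assertEquals("Smith", new Customer("Smith").getName());
        }
    }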
What about you? Have you experienced other issues?
A great book covering so-called test smells is xUnit Test Patterns by Gerard Meszaros.