Pesky pesticides

One of the best lessons I ever learnt from the ISEB Foundation course was the "pesticide paradox": if you're not familiar with it, the principle is that continually treating a field with the same pesticide will eventually lead to it being overrun with bugs that are resistant to it. A simple statement, but one with plenty of implications.

The first use is in the obvious analogy with software bugs: continually running the same set of tests over and over will not find all the issues in a piece of software, for two reasons. First, no set of real-world tests can ever be exhaustive, so there will always be cracks for defects to slip through. Secondly, the power of existing tests decreases with time: once a test has been passed it is far less likely to fail in the future (although it's not impossible, otherwise there would be no need for regression testing), as you are starting from a position of having a solution that works.

The lesson to take from this is that on a project that runs over many months (or years) it's important to keep revisiting and updating your tests, even if it is just to vary the data used. The same principle applies to automated tests: rerunning the same tests over and over may hide an error in one of them, so that you think something is working when in fact it is not. For example, say you have a text input box that should allow text strings only, and you have a test where you enter "@#$" and expect to see a validation message. A (bad) code fix might look for that particular string only and then show the message, so that other strings like "#$@" slip through. You won't find this unless you vary your original test.
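As a sketch of that scenario (the function names, error message, and "letters only" rule are all made up for illustration), a bad fix that special-cases the one string the test used will pass the original test but fail under any variation:

```python
import re

# Hypothetical "bad fix": reject only the exact string the original
# test happened to use, instead of validating the class of input.
def validate_bad(text):
    if text == "@#$":
        return "Letters only, please"
    return None  # accepted

# A proper fix rejects the whole class of invalid input.
def validate_good(text):
    if not re.fullmatch(r"[A-Za-z ]+", text):
        return "Letters only, please"
    return None

# The original test passes against both fixes...
assert validate_bad("@#$") is not None
assert validate_good("@#$") is not None

# ...but only a varied test exposes the bad fix:
assert validate_bad("#$@") is None        # bug slips through
assert validate_good("#$@") is not None   # properly rejected
```

Rerunning only the original "@#$" test would report green forever; varying the data is what reveals the difference between the two fixes.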

Another scenario where the paradox can apply is with test metrics. If you always apply the same check to make sure testing is progressing as expected (e.g. rate of raising vs rate of closing), you may be unintentionally hiding other problems with your test process: the rates may be improving while the severities get worse, leading to an overall dip in product quality. There isn't a metric that can't be artificially bolstered; the focus should be on improving overall efficiency rather than just hitting a target number. Metrics can be great for identifying issues but can also throw up false positives, and should be treated in context rather than as hard and fast limits.
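To make that concrete (the weekly figures and the 1–5 severity scale below are entirely made-up illustrations), a single raised-vs-closed check can trend in the right direction while an unwatched metric quietly worsens:

```python
# Hypothetical weekly defect snapshots: defects raised, defects closed,
# and average severity on a 1 (cosmetic) to 5 (critical) scale.
weeks = [
    {"raised": 20, "closed": 15, "avg_severity": 2.1},
    {"raised": 18, "closed": 17, "avg_severity": 2.8},
    {"raised": 15, "closed": 16, "avg_severity": 3.6},
]

close_rates = [w["closed"] / w["raised"] for w in weeks]
severities = [w["avg_severity"] for w in weeks]

# The metric we always check (close rate) improves every week...
assert close_rates == sorted(close_rates)

# ...while the metric we never check (severity) is also rising,
# i.e. product quality is getting worse despite the "good" numbers.
assert severities == sorted(severities)

for rate, sev in zip(close_rates, severities):
    print(f"close rate {rate:.2f}, avg severity {sev}")
```

The point is not these particular numbers but that any single check, applied unchanged, only ever measures one dimension of quality.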

One final way in which the paradox manifests itself is with your own professional development. If you continue to tread the same path there may be deficiencies in your testing armoury of which you are unaware. It's only by pushing yourself into trying new ways of working that you can identify areas of improvement (as well as pitfalls to avoid); the need to fail so you can succeed is one of the biggest paradoxes of them all.
