Monday 30 March 2009

Testing and the scientist-programmer

[Image: photo by dullhunk]

You do test your code, right?

Well?  We certainly hope so.

Testing can be a pain.  But it should also be a habit.  And we think it's pretty uncontroversial to say that testing your code is a Good Thing.  The Scientist-Programmer often finds a tension between allocating effort to testing their code and getting on with the science, but skimping on testing would be a mistake, and (if you write them well) the tests themselves can have scientific value.

Does it really work?
This is the bottom-line reason we test our code: if you can't prove that your code does what it's supposed to, how can you possibly trust the results it's generating?  We hope this is obvious, but we know that it can sometimes be tempting to think "I'm pretty sure it works".  No good!  It's far too easy for that to be untrue.  Test your code until you know that it works.
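
For instance, here's a minimal sketch in Python using the standard unittest module (the function and the numbers are invented for illustration):

    import unittest

    def redshift_to_velocity(z):
        """Recession velocity (km/s) from a small redshift, using v = c * z."""
        c = 299792.458  # speed of light in km/s
        return c * z

    class TestRedshiftToVelocity(unittest.TestCase):
        def test_zero_redshift_gives_zero_velocity(self):
            self.assertEqual(redshift_to_velocity(0.0), 0.0)

        def test_small_redshift(self):
            # v = c * z, so z = 0.01 should give about 2997.9 km/s
            self.assertAlmostEqual(redshift_to_velocity(0.01), 2997.92458, places=4)

    if __name__ == "__main__":
        unittest.main()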


10% errors
Some bugs are more subtle than others, and small numerical errors (of order 10%, say) are a good example.  If your code returns the age of the universe as 12.5 billion years, that sounds plausible, right?  Perhaps, but maybe it should be returning 13.7 billion years and you've got a bug in your code.  Scientific code very often involves numerical calculation, whether you're analysing data, running simulations or something else, so the Scientist-Programmer should be very concerned about this type of bug and test accordingly.  We have some suggestions about testing your number-crunching code.
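
For instance, check your number-crunching against cases where the right answer is known exactly.  Here's a minimal sketch in Python (the trapezoid integrator is a stand-in for whatever your own code computes):

    import math

    def trapezoid(f, a, b, n):
        """Numerically integrate f over [a, b] using n trapezoids."""
        h = (b - a) / float(n)
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return total * h

    # The integral of sin(x) from 0 to pi is exactly 2, so demand close
    # agreement; a sign error, a dropped factor or a 10% bug all fail loudly.
    result = trapezoid(math.sin, 0.0, math.pi, 1000)
    assert abs(result - 2.0) < 1e-5, \
        "expected 2.0 for integral of sin on [0, pi], got %r" % result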


Proving it in a group meeting
If your testing is rigorous enough, you should be able to prove to your colleagues and collaborators that your code works.  We think this is a great measure of whether or not you've tested your code well enough.  Imagine presenting your results in a group meeting.  Can you prove to your colleagues that your code is working?  This is not only a good sanity check, but we think it's a necessary step in the scientific process.  If your science relies on your software doing the right thing, then it's vital that you can prove that it really does do the right thing.  Think of it in the same way as an experimental set-up or a telescope configuration.  If you can't prove to a set of skeptical scientists that what you're doing is robust, then it's questionable whether your results have any value.


The conference test
This is our favourite way of judging whether our testing is rigorous enough.  Imagine that you're at a major international conference.  You're presenting your latest results to an audience of 200 of the most eminent scientists in your subject area.  You're showing the results from your software analysis when the world expert in your field puts up his hand and says, "I think there might be a numerical error in your code".  Are you confident that this situation won't arise?  If you aren't, then you need to do some more testing!


An upside: automate the plots for your paper
We think there's an upside to testing scientific code, because you will ultimately want to publish your results somewhere (and/or put them into your conference talk).  Why not set up your tests so they automatically generate the plots you'll want for your paper?  They're probably the sorts of plots that are useful for testing in any case (as you ultimately care what your output looks like), and if you've invented a new statistical method (for example), then tests on synthetic data are a really great basis for the "our model really works" section of your paper.
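
Here's a minimal sketch of the idea in Python (assuming NumPy and matplotlib; the straight-line fit stands in for your real analysis):

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render off-screen; we only save to file
    import matplotlib.pyplot as plt

    def fit_line(x, y):
        """Least-squares straight-line fit; returns (slope, intercept)."""
        return np.polyfit(x, y, 1)

    def test_fit_recovers_known_slope():
        # Synthetic data with a known slope and a little noise.
        rng = np.random.default_rng(42)
        x = np.linspace(0.0, 10.0, 50)
        y = 2.5 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

        slope, intercept = fit_line(x, y)

        # The test: the fit must recover the true slope to within 2%.
        assert abs(slope - 2.5) / 2.5 < 0.02

        # The bonus: the same run saves a figure ready for the paper.
        plt.plot(x, y, ".", label="synthetic data")
        plt.plot(x, slope * x + intercept, "-", label="fit")
        plt.xlabel("x")
        plt.ylabel("y")
        plt.legend()
        plt.savefig("fit_validation.pdf")

    test_fit_recovers_known_slope()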


Doing this also gives you the chance to develop good ways of presenting the outputs from your code.  Because you'll be working with the outputs repeatedly, as well as showing them at group meetings and so on, you may well find that good ideas come not only from you but also from your various audiences.

In conclusion
Testing is important.  Vital even.  Do you really want to use code when you're not confident that it really works?  We thought not.  While good testing takes time, the sorts of outputs it produces can also be the sorts of things you'll need for your paper/group meeting/conference talk, so why not combine the two?

2 comments:

  1. It is very easy to say 'test your code', but the problem is that when you are a master's student, nobody teaches you programming, and people expect you to produce big results without effort.

    After a year or two of programming, I have found a good solution in three Python tools:
    - doctest, which allows you to write quick, easy-to-understand tests that double as documentation; I use it for all my 'on the fly' scripts, often without even noticing that I am leaving tests behind (a minimal example below);
    - unittest, Python's standard framework, which lets you define fixtures, essential for bigger tests;
    - nose, a tool for automatic test discovery in Python code (it looks for all functions whose names start with 'test_' and executes them).
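
    For example, a minimal doctest looks like this (a made-up function, just to show the shape):

        def celsius_to_kelvin(t):
            """Convert a temperature from Celsius to Kelvin.

            >>> celsius_to_kelvin(0.0)
            273.15
            >>> celsius_to_kelvin(-273.15)
            0.0
            """
            return t + 273.15

        if __name__ == "__main__":
            import doctest
            doctest.testmod()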

    I don't know the testing solutions in other programming languages.  I have tried some Perl modules a bit, but didn't like them very much.

  2. Hi Gioby - thanks for the comment!

    >It is very easy to say 'test your code', but
    >the problem is that when you are a master's
    >student, nobody teaches you programming, and
    >people expect you to produce big results
    >without effort.

    We hope our articles help a bit :-)

    I completely sympathise, having been there myself. I'm afraid the solution is often to teach oneself the relevant skills, as you have done.

    I'm still working on how to communicate to collaborators/supervisors etc that producing high quality code takes time. I think that because scientists are naturally skeptical, trying to provide proof/evidence is a good way forward.
