{ by david linsin }

November 27, 2007

Profiling Follow-up

InfoQ published a nice article about performance testing a couple of days ago. They argue it should be addressed the same way components are tested these days: with continuous, automated unit tests.
With continuous performance testing we need to focus on more granular aspects of our systems, components and frameworks. Just as is the case with unit testing, we can only expect to find certain classes of problems when we test these artifacts in isolation. A case in point is the contention between components or misuse of frameworks resulting in response times higher than expected; these are things that will only come out in a full integration test. However, understanding how much CPU, memory, disk and network I/O we need can help us predict and take preventive action (rather than apply a premature optimization).
I'm not too sure what to think of this. On the one hand, it makes perfect sense to test performance with a unit test, in isolation, since that makes it easier to identify potential bottlenecks. On the other hand, I tend to think that an integration test makes more sense. You only know your overall response time if you test your components together - and that's what the user will get.
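
To make the integration side concrete, here's a minimal sketch of what I mean - assuming a hypothetical OrderService wired up with its real collaborators: time the complete call and check it stays within the response-time budget the user would accept.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class OrderServiceResponseTimeTest {

        // hypothetical service, wired up with its real collaborators
        private final OrderService service = new OrderService();

        @Test
        public void placeOrderStaysWithinResponseTimeBudget() {
            long start = System.nanoTime();
            service.placeOrder("some-order"); // hypothetical call spanning several components
            long elapsedMillis = (System.nanoTime() - start) / 1000000;
            assertTrue("placeOrder took " + elapsedMillis + "ms", elapsedMillis < 500);
        }
    }

A single measurement like this is crude - JIT warm-up and machine load will skew it - but it's enough to flag a regression in a continuous build.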

My strategy: don't go overboard. I wouldn't write overly fine-grained performance tests from the beginning. If your integration test highlights performance problems, you can still take apart your components and write tests for individual units.
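
If it comes to that, JUnit 4's timeout attribute is the cheapest way to pin down a single unit. A sketch, with a trivial stand-in calculator so the example is self-contained:

    import org.junit.Test;

    public class PriceCalculatorPerformanceTest {

        // trivial stand-in for the real component; in practice this would be
        // the isolated unit you pulled out of the slow integration scenario
        static class PriceCalculator {
            double calculate(int items) {
                double total = 0;
                for (int i = 0; i < items; i++) {
                    total += i * 0.19; // pretend tax calculation
                }
                return total;
            }
        }

        // fails if the isolated unit takes longer than 50ms
        @Test(timeout = 50)
        public void calculateIsFastInIsolation() {
            new PriceCalculator().calculate(1000);
        }
    }

The timeout is a blunt instrument, but in a continuous build it's often all you need to catch a unit that suddenly got slow.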

The article makes the same point I did in my blog post about profiling last week: people tend to put off performance until the end, treating it as a non-functional requirement, and therefore mostly address it when it's too late.
