Friday, May 14, 2010

Bugged software checking bugged software for bugs

Some time ago I had a nice conversation with Madis Jullinen about test automation. We started wondering how business people understand automated testing. If they understand that any software has bugs (why else would they approve testing at all), then how come they still believe that automated tests don't have bugs themselves? Let's set aside the theory of models ("All models are wrong, some are useful") and the checking-vs-testing debate ("automated tests are only useful for checking") to simplify matters, and concentrate only on automated tests as just another piece of software.

We assume that any software contains an unknown number of bugs. Testers are there to find some of them (usually as many as possible within a given amount of time). Automated tests are another piece of software. Ergo, automated tests contain an unknown number of bugs, testers should test them as well, and someone has to fix the bugs that are found.

So I started approaching this subject mathematically (and financially)... Creating automated tests has to be approached as another development project, with its own analysis, development, testing and support phases (plus some project management to fit it into timetables). You have to analyze what to automate and how to do it, develop the necessary tests, test whether the tests themselves work properly, and support the automated tests throughout the project lifecycle (since the project may change over time). I added up the numbers. It takes n hours to analyze what to test, m hours to develop it, p hours to test that it works, r hours to fix bugs in it and s hours per iteration to keep the tests up-to-date. So in general it takes n+m+p+r hours to prepare the tests, plus s hours every iteration to maintain them. Sadly, most business people see the cost of test automation as only the m hours of development.
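The cost model above can be sketched in a few lines. The numbers below are purely hypothetical placeholders for n, m, p, r and s; the point is only that the total over a project's lifetime dwarfs the development hours alone:

```python
def automation_cost(n, m, p, r, s, iterations):
    """Total hours spent on test automation: one-off preparation
    (analysis n + development m + testing p + bug fixing r)
    plus s hours of upkeep for every iteration."""
    return n + m + p + r + s * iterations

# Hypothetical figures: 10h analysis, 40h development, 8h testing the
# tests, 6h fixing them, 4h maintenance per iteration, 10 iterations.
print(automation_cost(10, 40, 8, 6, 4, 10))  # 104 hours, not just the 40h of development
```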

Another bad trend I've seen lately is creating automated tests simply because it is considered a good thing to do. Why is it a good thing to do? To spare the tester some time on regression testing? How do you know it saves time? Has anyone calculated the ROI to see whether it would even be worthwhile to automate? Why bother writing test automation at all if you are not sure how much time and money it would actually save? The worst case is when testers are required to create automated tests because some procedure dictates that a certain number of tests must be automated.
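The ROI question has a simple break-even form: automation pays off once the manual regression hours saved exceed the up-front cost plus ongoing upkeep. A minimal sketch, reusing the n/m/p/r/s symbols from the cost discussion and adding a hypothetical h for manual regression hours per iteration:

```python
import math

def break_even_iterations(n, m, p, r, s, h):
    """Smallest number of iterations after which automation becomes
    cheaper than manual regression testing taking h hours per iteration.
    Returns None if it never pays off (upkeep s costs as much as or
    more than running the regression by hand)."""
    if s >= h:
        return None
    return math.ceil((n + m + p + r) / (h - s))

# Hypothetical: 64h up-front, 4h upkeep vs 12h of manual regression per iteration.
print(break_even_iterations(10, 40, 8, 6, 4, 12))  # pays off after 8 iterations
```

If the project won't live long enough to reach that break-even point, the time is better spent on actual testing.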

I won't say I'm against test automation. But everything must be done not because some procedure says so, but because it actually helps get better results in the long run. So instead of building one bugged system atop another bugged system and hoping it finds some new bugs... Well... I think it would be wiser to spend that time either actually testing, or calculating whether it's worth automating those tests after all.

Thursday, May 6, 2010

Having fun with a computer terminal in a bank

I heard this story from my tester Rasmus. He went to a bank to complain that their web application didn't work properly when he tried logging in with an Estonian ID-card. The bank worker showed him a local computer terminal so he could try whether it worked there. He inserted his card into the reader and watched what happened. The program waited a little and opened a pop-up asking for the card's PIN1 to log in. But... there was no OK or Cancel button... And when he tried to enter numbers into the field, that didn't work either.

So he thought he would have a little fun (read about that kind of fun on James Bach's blog here and watch here). He removed his card from the reader, hoping for something fun to happen. And... the computer started shutting down by itself... oops... the whole system started behaving really oddly just because an ID card was removed...

That made me think... I hope they are not testing all their systems the way they tested this terminal... Who knows what else he might have stumbled upon there if that computer hadn't crashed completely...

Test Camp: puhkaeestis.ee

Another month has passed and it was time to call for a Test Camp again. This time I tried to do a little more: add more people and more different approaches. One of the participants from the previous session (Madis) recommended a very interesting application to test - a web application without a database. The date of the event was set for 17.04.2010.

I got permission to use the application (www.puhkaeestis.ee) and started putting together the participant list. Sadly Tanel from Fraktal couldn't come, though he wanted to, but the rest of the group were Oliver and Rasmus from Celeg Hannas, and Madis, Artur, Eero and Polina from Webmedia.

This time the participants had really different approaches and backgrounds - testers, a QA manager, a test manager and even a .NET programmer (Eero). So the results were also quite varied. Eero (the only participant who actually works as a programmer) produced some nice results, and he and Rasmus made a great team, generating ideas and building on each other's - both had programming backgrounds but currently do different kinds of work, and thus think differently.

Ideas were generated so fast that Madis had a hard time keeping up while everyone was asking additional questions. Again, the average rate of issues found per participant was around 2 per hour. Nonetheless it was fun, and Webmedia is thinking of starting similar events in-house as well. Glad to hear that my idea was well received.

Hopefully everyone also learned some new tricks to try on their own projects. At least I know I did.