3 minutes  |  in Service Management

I spent an hour today with the IFS person in charge of Quality Assurance in our products. It was a most pleasant, and interesting, conversation. I’ll share some of it with you. One of the basic points of understanding is: all software has bugs. Ours, Microsoft’s, SAP’s, the video game you picked up for your kids – all software. Some bugs are known, some are not. And some that are known get reclassified as ‘features’ (that’s a topic for a separate essay). So given that, what can and should be done?

Our guru walked me through the IFS QA framework. At a very high level, he explained that automated testing, where pre-formatted data inputs and simulated keystrokes are fed into the system by a testing program, can reasonably exercise 75 to 80% of the code. The outputs of these exercises are compared, also automatically, against a pre-built expected-outcome reference file. When they match, life is good; the programs are functioning as designed. When they don’t match, something is wrong: an error exception report is generated and routed to, among others, the QA manager for further investigation.
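The compare-against-a-reference step described above can be sketched in a few lines. This is only an illustration of the idea, not IFS's actual tooling; `run_case`, the case records, and the reference file contents are all invented for the example.

```python
def run_case(case):
    # Stand-in for driving the system under test with pre-formatted inputs.
    # Here it just totals an invoice's line amounts.
    return {"total": sum(case["amounts"])}

def compare_to_reference(cases, reference):
    # Run every case and diff the actual output against the pre-built
    # expected-outcome reference; mismatches become the exception report.
    exceptions = []
    for case in cases:
        actual = run_case(case)
        expected = reference[case["id"]]
        if actual != expected:
            exceptions.append({"case": case["id"],
                               "expected": expected,
                               "actual": actual})
    return exceptions  # a non-empty list gets routed to the QA manager

cases = [{"id": "INV-1", "amounts": [10, 20]},
         {"id": "INV-2", "amounts": [5, 5]}]
reference = {"INV-1": {"total": 30}, "INV-2": {"total": 11}}

report = compare_to_reference(cases, reference)
print(report)  # only INV-2 appears: expected total 11, actual 10
```

The point is that the comparison itself is mechanical, which is what makes this style of testing cheap to rerun on every build.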

So far, so good.

Manual test scripts are another piece of the plan. They are step-by-step instructions followed by a person at a device or workstation, using the system. The tester can visually observe the system’s reaction as the keys are hit, and record the results. This type of testing is great for catching items that aren’t bugs per se, but may make the system annoying or difficult to use. An example: a user hits backspace in a field on a web page, and the page fully resets, clearing all of the values already entered. This is annoying (has it happened to you?) but the type of thing that is not easily built into automated test scripts.

These manual test scripts should cover 20 to 25% of the system, making sure that each function type (add, delete, inquire, report etc.) is represented, as well as key functions.
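One way to picture a manual test script is as a numbered list of actions and expected observations, with each step tagged by function type so coverage gaps are easy to spot. The step records and the tagging scheme below are invented for the sketch, not IFS's actual format.

```python
# Hypothetical manual test script: a human follows each step at a
# workstation and records whether the observed result matches "expect".
manual_script = [
    {"step": 1, "type": "add",
     "action": "Create a new customer record and save it",
     "expect": "Record saved; confirmation message shown"},
    {"step": 2, "type": "inquire",
     "action": "Look up the record just created",
     "expect": "All entered values display correctly"},
    {"step": 3, "type": "delete",
     "action": "Delete the record",
     "expect": "Record no longer appears in lookup"},
]

def uncovered(script, function_types):
    # Crude coverage check: flag any function type that no scripted
    # step exercises.
    covered = {step["type"] for step in script}
    return [ft for ft in function_types if ft not in covered]

missing = uncovered(manual_script, ["add", "delete", "inquire", "report"])
print("uncovered function types:", missing)  # -> ['report']
```

Even this crude tally shows why the article stresses representing every function type: the "report" path above would otherwise slip through unscripted.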

A third component is called scenario testing. This may be driven by a particular client need. For example, we know that IFS client ‘X’ uses the Repair Center module in a certain atypical way. We can have them provide us with a scenario, or set of testing scripts, which can be run before any release or upgrade patch is applied to their environment. This technique typically tests 10 to 15% of the overall system. (The percentages add up to more than 100% because of overlap; some items get tested in multiple ways.)

Scenario testing is also useful for exercising integrations.
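A scenario test can be thought of as one client's end-to-end flow captured as a replayable chain of steps. The sketch below imagines a client whose Repair Center usage skips triage, plus a hand-off to billing standing in for an integration; every function and module name here is a placeholder, not a real IFS API.

```python
def receive_unit(state, unit_id):
    # Step 1: unit arrives at the repair center.
    state["units"][unit_id] = "received"
    return state

def skip_triage_and_repair(state, unit_id):
    # Step 2: the client's atypical usage, units go straight to repair
    # without the usual triage stage.
    state["units"][unit_id] = "repaired"
    return state

def invoice(state, unit_id):
    # Step 3: hand-off to billing, representing an integration point.
    state["invoiced"].append(unit_id)
    return state

def run_scenario(unit_id):
    # Replay the whole flow in order, exactly as it would be run before
    # each release or patch reaches the client's environment.
    state = {"units": {}, "invoiced": []}
    for step in (receive_unit, skip_triage_and_repair, invoice):
        state = step(state, unit_id)
    return state

result = run_scenario("RC-42")
assert result["units"]["RC-42"] == "repaired"
assert "RC-42" in result["invoiced"]
print("scenario passed")
```

Because the steps cross module boundaries, a scenario like this exercises the seams between components, which is exactly where integration defects tend to hide.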

Finally, perhaps the most ‘fun’ testing is ad hoc. It is usually done by two types of individual: 1) a very, very knowledgeable person who knows the system well and thus knows where the fragile parts are; and 2) a complete novice who might put funny characters in a date field because they aren’t sure what the software is looking for. Both of these testers treat the task as a game, trying to ‘out-smart’ the programs (and the programmers). This process covers 10 to 15% of the system, and serves as a statistical proof step in the quality assurance process.
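The novice's "funny characters in a date field" can be turned into a repeatable probe: throw odd strings at the field's validator and confirm each one is rejected gracefully rather than crashing. `validate_date` below is a made-up stand-in for the real input handler, and the strict ISO format is an assumption for the sketch.

```python
from datetime import datetime

def validate_date(text):
    # Stand-in for a date field's input handler: accept strict
    # YYYY-MM-DD, reject everything else without raising.
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

odd_inputs = ["2024-02-30",              # impossible calendar date
              "banana",                  # not a date at all
              "12/31/1999",              # wrong format
              "🙂",                      # non-ASCII surprise
              "",                        # empty field
              "2024-13-01",              # month out of range
              "'; DROP TABLE orders;--"] # hostile input

for text in odd_inputs:
    # A quiet False is a pass; an unhandled exception here would be
    # exactly the kind of bug ad-hoc testing hunts for.
    assert validate_date(text) is False

print("all odd inputs rejected gracefully")
```

The knowledgeable tester's version of the same game simply swaps this list of naive inputs for ones aimed at the system's known fragile spots.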

Our conversation covered many other points, but I thought these items were interesting enough in general to share. Does your group engage in a QA process similar to IFS’s? If you have feedback to share, please leave a comment. Thanks.
