One reason a test takes so much longer to perform than the equivalent real-life use of the system is that the tester must carefully observe the results and record them as the test is performed. Both tasks, observing and recording, can be difficult, depending on the application and environment. Difficulties arise when the results of a test are not easily observable, as in the case of an internal calculation that the end-user never sees but that drives the functions the user subsequently performs.
There is a basic rule in testing: if you can't observe something, you can't test it. You need a tool or some other way to see the outcome of a test to know whether or not it worked. Dirty systems tend to hide a lot of functionality from the user; much goes on behind the scenes that the user is not aware of and may not be able to access. To observe results in these cases, a tool such as an interactive test and debug tool is needed. An alternative, which may or may not be an available option, is to place special code into the application being tested to display the data values the tester needs to see to verify a test result. There are problems with this method.
One problem with added code is that it is intrusive to the application; its presence can cause unintended negative effects. Another problem is that in the process of removing the code, defects can be injected into the application.
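As a minimal sketch of what such intrusive debug code looks like (the function, the discount calculation, and the `TEST_DEBUG` flag are all hypothetical, not from the text), the tester-visible output is wired directly into the production logic, which is exactly why it is risky to add and later remove:

```python
import os

def compute_discount_rate(order_total: float) -> float:
    """Hypothetical internal calculation the end-user never sees directly;
    it only drives behavior the user observes later."""
    rate = 0.05 if order_total < 100 else 0.10
    # Intrusive debug output added solely so the tester can observe the
    # hidden intermediate value. It must be disabled or removed before
    # release, and both steps risk introducing new defects.
    if os.environ.get("TEST_DEBUG"):
        print(f"DEBUG: order_total={order_total}, rate={rate}")
    return rate

print(compute_discount_rate(250.0))
```

Guarding the output behind an environment variable at least avoids editing the source again to remove it, but the extra branch still ships with the application unless it is stripped out.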
Then there is the task of reporting what is observed during the test. If the test is a simple, observable cause-and-effect test, printing a couple of screen images may handle the matter. However, if the test involves a series of steps or a large volume of data, the job becomes very difficult. There is also the consideration of transmitting and storing large volumes of test documentation, either physically or electronically.
One solution would be to minimize the amount of information reported for each test. However, the tester never knows when a defect will manifest, and it may be impossible to recreate the circumstances exactly so the developer can see them. The problem is twofold: recreations can be faulty due to human error on the part of the tester, and they can be faulty due to the high complexity of dirty systems. That complexity almost ensures that the tester will not get precisely the same results twice in a row unless the environment is very tightly controlled, and tight control is not characteristic of dirty systems.
A solution we have used in the test lab is an image capture tool, such as Hypercam or Camtasia, to capture a series of images. If the test passes, the image file can be overwritten with the next test; if a defect is observed, you have it documented. Hypercam and Camtasia only work in the Microsoft Windows environment, so if you are testing outside that environment, you will probably have to resort to taking a series of screen prints.
Capture/playback tools are another way to document the test, but most of these tools record keystrokes and user actions to be replayed against the software. This means you will need to recreate the test conditions exactly to replay the captured script and see the same results as before. The expense of capture/playback tools is another issue if all you are using them for is documenting test results.
With any tool for documenting test results, consider whether you will be able to view the electronic version of the results at a later date. Consider how quickly operating system and test tool versions have changed over the past five years, and it's easy to see how test documentation could become inaccessible simply through technical obsolescence. For some people this may not be an issue; however, if you are in a regulated industry, you may be required to produce the results of tests from years past.
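One hedge against technical obsolescence is to keep at least a summary of each result in plain text, which stays readable no matter how tools and operating systems change. The sketch below is an illustration, not a prescribed format; the file name, test ID, and field layout are all assumptions for the example:

```python
import datetime

def log_result(logfile: str, test_id: str, expected, actual) -> str:
    """Append one test observation as a tab-separated plain-text line.
    Plain text remains viewable years later, unlike a proprietary
    tool's capture format tied to a specific product version."""
    status = "PASS" if expected == actual else "FAIL"
    line = (f"{datetime.datetime.now().isoformat()}\t{test_id}\t"
            f"{status}\texpected={expected}\tactual={actual}\n")
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(line)
    return status

# Example: record one observation in the running log.
log_result("results.log", "TC-042", 0.10, 0.10)
```

A log like this does not replace screen captures for defect evidence, but it gives a durable, searchable record of what was run, when, and with what outcome.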
Solution Strategies
- Use an interactive test/debug tool for observing hidden results in interactive applications. While this takes a tester down to the coding level, it may be the only way to see some results.
- Use screen capture tools for capturing a series of screen images. I prefer a tool like Windows Media Encoder for test documentation because a) it is cheaper than a capture/playback tool (it is free) and b) it doesn't depend on the current state of the software for playback, so it is an accurate record of what happened during the test. If you would like more information on Windows Media Encoder, you can find it at http://www.microsoft.com.
Conclusion
Be sure to allow extra time in the testing process for evaluating and reporting results. Also, make sure you have adequate physical and electronic storage capacity for the reporting files. It's impossible to know in advance where and when a defect will manifest, so it pays to record each test as it runs rather than spend an hour or more trying to recreate the defect.
The better your test reporting, the better the developers will be able to isolate and, hopefully, fix the problem!