Communications of the ACM 50 (8): 78-85 (2007)

Testing of Object-Oriented Industrial Software
without Precise Oracles or Results 1

T.H. Tse 2, F.C.M. Lau 3, W.K. Chan 4, P.C.K. Liu 5, and C.K.F. Luk 6


 ABSTRACT

From 30% to 50% of the resources in an average software project are typically allotted to testing. Still, inadequate software testing costs industry $22 billion to $60 billion per year in the U.S. alone. We would all spend less if software engineers could use more effective testing methods and automated testing tools. On the other hand, testing is very difficult in real-world projects. Software testing is commonly accomplished by defining the test objectives, selecting and executing test cases, and checking results. Although many studies have concentrated on the selection of test cases, checking test results is not trivial. Two problems are often encountered:

How to determine success or failure. A test oracle is the mechanism for specifying the expected outcome of software under test, allowing testers to check whether the actual result has succeeded or failed [1]. In theory, test oracles can be determined by the software specification. In practice, however, a specification may provide only high-level descriptions of the system and cannot possibly include all implementation details. Hence, software testers must also rely on domain knowledge and user judgment to evaluate results. Such manual efforts are often error prone.

Hidden results. In engineering projects, even when oracles are present, the results of embedded software may last only a split second or be disturbed by noise or hidden behind a hardware-software architecture, so they are not easily observed or recorded. Moreover, the observation or recording process may introduce further errors or uncertainties into the execution results. It is therefore impractical for testers of engineering projects to expect to have predetermined, precise test oracles in every real-world application. How to capture and evaluate test results poses another problem. Automating the testing process amid these uncertainties is especially difficult.
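The contrast between the two situations above can be sketched in a few lines of code. The example below is purely illustrative and is not drawn from the project described in this article: `sort_records` stands in for arbitrary software under test, and the property-based check shows one common way testers judge a result when no precise oracle is available, by verifying necessary properties of the output rather than its exact value.

```python
def sort_records(records):
    """Stand-in for the software under test (hypothetical)."""
    return sorted(records)

def exact_oracle_check(actual, expected):
    """Usable only when the specification yields a precise expected outcome."""
    return actual == expected

def property_check(inp, out):
    """With no precise oracle, verify necessary properties of the output:
    here, that it is a permutation of the input and is in order."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return sorted(inp) == sorted(out) and ordered

data = [3, 1, 2]
result = sort_records(data)
assert exact_oracle_check(result, [1, 2, 3])   # precise oracle available
assert property_check(data, result)            # judgment without an oracle
```

Property checks of this kind are weaker than a precise oracle (an output may satisfy every checked property and still be wrong), which is why they are a complement to, not a replacement for, domain knowledge and user judgment.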

In this paper, we share our experience addressing these issues in a technology-transfer project funded in 2002-2004 by ASM Assembly Automation Ltd. and the Innovation and Technology Commission in Hong Kong, examining the application of advanced testing methodologies to alleviate these problems.

1. This project is supported in part by a matching grant of ASM Assembly Automation Ltd. and the Innovation and Technology Commission in Hong Kong (project no. UIM/77).
2. (Corresponding author.) Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong. Email:
3. The University of Hong Kong.
4. Hong Kong University of Science and Technology.
5. ASM Assembly Automation Ltd.
6. ASM Technology Singapore (Pte) Ltd.
