SigTest User's Guide
Version 2.2


4 Introduction to API Coverage Tool

The API Coverage tool is used to estimate the test coverage that a test suite under development provides for implementations of its related API specification. It does this by determining how many of the public class members in the API specification the test suite references. The tool uses a signature file representation of the API specification as the source of its analysis; it does not process a formal specification in any form.

The API Coverage tool code is contained in both the sigtestdev.jar and sigtest.jar files; no additional installation is required. See Chapter 5 for details about running the API Coverage tool.

4.1 Static API Coverage Analysis

The tool operates on the principle that a reference to an API member from within a test class indicates a test of that member by the test suite. The ratio of referenced class members to the total number of class members calculated for an API yields a percentage of test coverage for the API.
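The ratio described above is simple arithmetic. The following sketch is illustrative only (the class and method names are not part of SigTest) and shows how the coverage percentage is derived:

```java
// Illustrative sketch of the coverage ratio described above.
// Class and method names are hypothetical, not part of the SigTest tool.
public class CoverageRatio {

    // Percentage of API class members referenced by the test suite.
    public static double percent(int referencedMembers, int totalMembers) {
        if (totalMembers == 0) {
            return 0.0; // avoid division by zero for an empty API
        }
        return 100.0 * referencedMembers / totalMembers;
    }

    public static void main(String[] args) {
        // e.g. a test suite referencing 150 of 200 public members
        // yields a 75% estimated coverage figure
        System.out.println(percent(150, 200) + "%");
    }
}
```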

This method is called static API coverage analysis because it does not actually run any tests from the test suite. Since it does not dynamically determine which API members are actually accessed by the tests, the coverage calculation expresses only an estimated percentage of test coverage.

4.1.1 Major Source of Error

Static analysis cannot correctly predict the outcome of virtual calls to overridden methods that are resolved at runtime through dynamic method dispatch. The frequency of this type of overridden method can vary between differing implementations of the same API specification. This makes it difficult to formulate an exact percentage of test suite coverage when using static analysis (in spite of the fact that the implementations may all be binary compatible and correctly implement the specification).

Tests that make dynamic calls to API members are not recognized by the API Coverage tool. As a result, some test calls are not measured, and test coverage may be underreported.
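The dispatch problem described above can be seen in a small sketch. The classes Q and SubQ follow the notation used in Table 4-1; the code is illustrative, not part of SigTest:

```java
// Illustrative sketch of the dynamic dispatch limitation: the compiled
// call site names only Q.m, yet SubQ.m may be what actually runs.
public class DispatchExample {

    public static class Q {
        public String m() { return "Q.m"; }
    }

    public static class SubQ extends Q {
        @Override
        public String m() { return "SubQ.m"; } // resolved at runtime
    }

    public static String call(Q x) {
        // The invokevirtual instruction compiled here references Q.m in the
        // constant pool, so static analysis marks Q.m as covered regardless
        // of the runtime type of x.
        return x.m();
    }

    public static void main(String[] args) {
        System.out.println(call(new Q()));    // actually exercises Q.m
        System.out.println(call(new SubQ())); // actually exercises SubQ.m
    }
}
```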

4.1.2 Advantages of Static Coverage Analysis

Static API coverage testing provides the following advantages over dynamic coverage methods.

  • Static testing is more lightweight. Tests are easier to set up and quicker to run than with dynamic methods.

  • Static testing is easier to automate and provides more consistent results because it is not affected by external conditions such as machine load or network traffic.

  • Static testing is more practical for gathering results for very large APIs. A test suite and its associated API might include many thousands of tests and associated API class members, making it very cumbersome to instrument dynamic tests.

  • Static testing allows you to quickly estimate the quality of test coverage for all APIs, especially APIs that are difficult to test dynamically.

4.2 How It Works

The static API coverage algorithms examine precompiled test suite test classes to determine the members that they reference. This includes inner classes and fields as well as constructors, although constructors cannot be inherited.

The algorithms are based on the fact that the constant pool of any class file holds all of its external class references. The references appear in the following related constant pool records:

  • CONSTANT_Fieldref

  • CONSTANT_Methodref

  • CONSTANT_InterfaceMethodref

Each of these records contains the fully qualified name of an external class and the name of the class member referenced. For a method, this includes the signature of the method and its return type.
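The description above can be checked with a minimal constant-pool scanner. The sketch below is an illustration, not SigTest's implementation; it reads a class file and counts its Fieldref, Methodref, and InterfaceMethodref records:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal constant-pool scanner (illustrative, not SigTest's code).
// Counts the member-reference records described in the text.
public class PoolScan {

    // Returns the number of Fieldref (tag 9), Methodref (tag 10), and
    // InterfaceMethodref (tag 11) entries in the class file read from 'in'.
    public static int countMemberRefs(InputStream in) throws IOException {
        DataInputStream d = new DataInputStream(in);
        if (d.readInt() != 0xCAFEBABE) throw new IOException("not a class file");
        d.readUnsignedShort();             // minor version
        d.readUnsignedShort();             // major version
        int count = d.readUnsignedShort(); // constant_pool_count
        int refs = 0;
        for (int i = 1; i < count; i++) {
            int tag = d.readUnsignedByte();
            switch (tag) {
                case 9: case 10: case 11:  // Fieldref, Methodref, InterfaceMethodref
                    refs++; skip(d, 4); break;
                case 7: case 8: case 16: case 19: case 20:
                    skip(d, 2); break;     // Class, String, MethodType, Module, Package
                case 15:
                    skip(d, 3); break;     // MethodHandle
                case 3: case 4: case 12: case 17: case 18:
                    skip(d, 4); break;     // Integer, Float, NameAndType, (Invoke)Dynamic
                case 5: case 6:
                    skip(d, 8); i++; break; // Long, Double occupy two pool slots
                case 1:
                    skip(d, d.readUnsignedShort()); break; // Utf8
                default:
                    throw new IOException("unknown constant pool tag " + tag);
            }
        }
        return refs;
    }

    // Convenience wrapper that scans a loaded class's own class file.
    public static int countMemberRefsOf(Class<?> c) {
        try (InputStream in = c.getResourceAsStream(c.getSimpleName() + ".class")) {
            return countMemberRefs(in);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private static void skip(DataInputStream d, int n) throws IOException {
        d.readFully(new byte[n]); // readFully guarantees all n bytes are consumed
    }

    public static void main(String[] args) {
        // This class references at least System.out.println, so its own
        // constant pool contains member-reference records.
        System.out.println("member references: " + countMemberRefsOf(PoolScan.class));
    }
}
```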

4.2.1 Level of Accuracy During Analysis

Table 4-1 lists example scenarios encountered during static coverage analysis and their related potential for error. The table references the following objects:

  • Q.m is a method referenced by the test suite.

    Where Q is the fully qualified class name, and m is the descriptor of the called method (including the name, list of arguments, and return type).

  • SubQ is a subclass of, and SupQ is a superclass of class Q.

  • x is an object on the stack referenced by either the invokevirtual or invokeinterface instruction.

The main potential for an inaccurate coverage measurement exists when a great many members are overridden in subclasses of an implementation (as described in condition #2 of Table 4-1).

Table 4-1 Example Scenarios and Potential Errors

Condition #1: Object x is of type Q

  • Scenario: Class Q declares method m, and Q.m is the method called.
    Result: Q.m is correctly marked as covered.

  • Scenario: Method m is inherited from superclass SupQ and is not declared in Q; method SupQ.m is called.
    Result: Either Q.m or SupQ.m is correctly marked as covered, depending on the calculation mode in use (described later in Section 4.2.2, "Coverage Analysis Modes").

Condition #2: Object x is of type SubQ, a subclass of Q

  • Scenario: No subclass or superclass of Q overrides method m; Q.m is called.
    Result: Q.m is correctly marked as covered.

  • Scenario: A subclass of Q overrides method m, and the overriding method is called; if multiple subclasses inherit m, exactly which method is called cannot be determined before runtime.
    Result: Q.m is incorrectly marked as covered; this scenario is the main source of analysis errors.

Condition #3: A method is called by means of reflection

  • Scenario: The call uses Method.invoke(Object, Object[]).
    Result: No method is marked as covered, assuming that java.lang.reflect is not in the API under test; this case cannot be correctly resolved.
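Scenario #3 from Table 4-1 can be sketched as follows. The class Q and method m are the illustrative names used in the table; the code is not part of SigTest:

```java
import java.lang.reflect.Method;

// Illustrative sketch of a reflective call that static analysis cannot see:
// the constant pool of this class contains a Methodref to Method.invoke and
// the string "m", but no Methodref to Q.m itself.
public class ReflectiveCall {

    public static class Q {
        public String m() { return "Q.m"; }
    }

    public static String callByReflection() {
        try {
            Method m = Q.class.getMethod("m"); // method looked up by name at runtime
            return (String) m.invoke(new Q()); // Method.invoke(Object, Object[]) varargs form
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(callByReflection());
    }
}
```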

4.2.2 Coverage Analysis Modes

The API Coverage tool uses these two modes of analysis:

  • Real World Mode: Returns a fairly accurate estimate based on input from one specific API implementation, such as a reference implementation. You can then compare the real world results to those of the worst case mode.

  • Worst Case Mode: Returns an estimate based on a hypothetical API in which every class overrides or hides all members of its superclass. This scenario is highly unlikely in actual practice. Compare its results with those of real world mode to estimate the range of test coverage that a test suite can provide in the field across differing implementations.

Chapter 5 describes how to set up and use the API Coverage tool.

4.2.3 Filtering Coverage By Marking Up Signature Files

Typically, the API Coverage tool measures coverage for an entire API or for one or more packages. However, in some use cases it is convenient to track the coverage of a particular feature that spreads throughout multiple packages and classes.

For example, you might want to track a new feature that adds a few methods or fields to preexisting classes, or you might want to track a feature that adds an extra interface implemented by a preexisting class. You will want to know which of those newly added methods are tested.

The way to get this information is to mark up a signature file with directives that indicate which API members belong to the feature that you want reported in the API Coverage tool report. A fragment of a signature file with filtering directives is shown in Example 4-1.

Example 4-1 Signature File Mark-Up

#APICover file v4.1

CLSS public final java.awt.SplashScreen 
meth public boolean isVisible() 
#coverage on 
meth public java.awt.Dimension getSize() 
meth public java.awt.Graphics2D createGraphics() 
#coverage off 
meth public java.awt.Rectangle getBounds() 
meth public java.net.URL getImageURL() 
meth public static java.awt.SplashScreen getSplashScreen() 
meth public void close() 
meth public void setImageURL(java.net.URL) throws java.io.IOException,java.lang.IllegalStateException 
meth public void update() 
supr java.lang.Object 
hfds image,imageURL,log,splashPtr,theInstance,wasClosed 

Filtering Markup Format

The following steps describe how to mark up a signature file to designate classes and members you want to track.

  1. Copy the signature file.

  2. In a text editor, change the first line of the copy of the signature file from #Signature file v4.1 to #APICover file v4.1.

  3. Use the #coverage on and #coverage off directives to mark the classes and members.

    Classes or members between the #coverage on and #coverage off directives are tracked during coverage calculation. All the other classes and members are excluded.
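Steps 1 and 2 above can be automated with a short program. The sketch below is illustrative and not part of SigTest; it copies a signature file and rewrites its header line, leaving the #coverage directives of step 3 to be added by hand:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative helper (not part of SigTest) for steps 1 and 2 of the
// markup procedure: copy a signature file and change its first line from
// "#Signature file v4.1" to "#APICover file v4.1".
public class MarkupCopy {

    // Pure transformation of step 2: rewrite the header line if present.
    public static List<String> convertLines(List<String> lines) {
        List<String> out = new ArrayList<>(lines);
        if (!out.isEmpty() && out.get(0).startsWith("#Signature file")) {
            out.set(0, "#APICover file v4.1");
        }
        return out;
    }

    // Step 1: write the converted copy alongside the original.
    public static void convert(Path signatureFile, Path markupCopy) throws IOException {
        Files.write(markupCopy, convertLines(Files.readAllLines(signatureFile)));
    }

    public static void main(String[] args) throws IOException {
        // Demo with temporary files; real file names would be chosen by the user.
        Path src = Files.createTempFile("api", ".sig");
        Files.write(src, List.of("#Signature file v4.1",
                                 "CLSS public final java.awt.SplashScreen"));
        Path dst = Files.createTempFile("api", ".cov");
        convert(src, dst);
        System.out.println(Files.readAllLines(dst).get(0));
    }
}
```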