JUnit
Although many techniques can be used to implement unit testing, the most popular tool for Java development is JUnit. This tool provides a framework for developing unit tests, which fits very nicely into the Ant build and deployment process. You can find JUnit, along with a lot of documentation and add-on tools, at http://www.junit.org.
Installation of JUnit
To install JUnit, follow these steps:
Download JUnit from http://www.junit.org
Add the junit.jar file to your CLASSPATH or to the Ant lib directory. Because <junit> is an optional task, the optional.jar file that ships with the Ant distribution must be in the Ant lib directory as well. (A per-buildfile alternative is sketched after these steps.)
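If you prefer not to copy junit.jar into the Ant lib directory, recent Ant versions (1.7 and later) also accept it on a nested <classpath> of the <junit> task itself. The following is a minimal sketch, assuming a junit.jar property that points at your copy of the jar; the path shown is illustrative:

<property name="junit.jar" value="/usr/software/junit/junit.jar" />

<junit>
    <!-- Supplies JUnit (and the compiled test classes) to the task. -->
    <classpath>
        <pathelement location="${junit.jar}" />
        <pathelement path="${dirs.source}" />
    </classpath>
    <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
</junit>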
Sample Class
Let's take a look at how to incorporate JUnit into your development process and seamlessly integrate it into the Ant buildfile. Listing 3.1 is a sample Java class for which you will create a JUnit unit test. This is a simple class with a constructor, along with setter and getter methods. This class is a simple domain object to store the information about a single sales item in the eMarket application.
Listing 3.1 Sample Class for Use with JUnit Unit Test
/*--------------------------------------------------------------------------
    File: salesItem
  ------------------------------------------------------------------------*/
package com.networksByteDesign.eMarket.inventory;

public class salesItem
{
    /* =====================================================================
        salesItem Constructor
       ================================================================== */
    public salesItem(int id, String name)
    {
        mId = id;
        mName = name;
    }

    /* =====================================================================
        getId
       ================================================================== */
    public int getId()
    {
        return mId;
    }

    /* =====================================================================
        setId
       ================================================================== */
    public void setId(int id)
    {
        if(id <= 0)
        {
            throw new IllegalArgumentException("Id must be a valid id #");
        }

        mId = id;
    }

    /* =====================================================================
        getName
       ================================================================== */
    public String getName()
    {
        return mName;
    }

    /* =====================================================================
        setName
       ================================================================== */
    public void setName(String name)
    {
        if(name == null || name.length() == 0)
        {
            throw new IllegalArgumentException("Name must be populated");
        }

        mName = name;
    }

    private int    mId   = 0;
    private String mName = null;
}
Sample Unit Test
Let's create a unit test to demonstrate how JUnit hooks into Ant. Listing 3.2 is the JUnit test that was written to test the sample class shown in Listing 3.1. We have included tests for both the constructor and the setter method. The getter method is tested as part of the other two tests. This JUnit class also includes a main() method for running the unit test from the command line.
Listing 3.2 JUnit Test for Sample Class in Listing 3.1
/*--------------------------------------------------------------------------
    File: salesItemTest
  ------------------------------------------------------------------------*/
package com.networksByteDesign.eMarket.inventory;

// Internal libraries
import com.networksByteDesign.eMarket.inventory.salesItem;

// Third party libraries
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class salesItemTest extends TestCase
{
    ////////////////////////////////////////////////////////////////////////
    // salesItemTest(String)
    ////////////////////////////////////////////////////////////////////////
    /**
     * <p>
     * This is the constructor for the <code>salesItemTest</code>
     * class. It calls the super class and configures the instance.
     * </p>
     *
     * @param testName the name of the test to construct
     *
     */
    public salesItemTest(String testName)
    {
        super(testName);
    }

    ////////////////////////////////////////////////////////////////////////
    // main(String[])
    ////////////////////////////////////////////////////////////////////////
    /**
     * <p>
     * This is the mainline for the <code>salesItemTest</code>
     * class. It runs the test suite that has been established.
     * </p>
     *
     * @param args any command line arguments to the test program
     *
     */
    public static void main(String[] args)
    {
        junit.textui.TestRunner.run(suite());
    }

    ////////////////////////////////////////////////////////////////////////
    // suite()
    ////////////////////////////////////////////////////////////////////////
    /**
     * <p>
     * This is the static method that defines the specific tests that
     * comprise the unittest.
     * </p>
     *
     * @return the test suite that has been established
     *
     */
    public static Test suite()
    {
        TestSuite suite = new TestSuite();

        // test constructor()
        suite.addTest(new salesItemTest("testConstructor"));
        suite.addTest(new salesItemTest("testSetter"));
        return suite;
    }

    ////////////////////////////////////////////////////////////////////////
    // testConstructor()
    ////////////////////////////////////////////////////////////////////////
    /**
     * <p>
     * Test for constructing a salesItem object
     * </p>
     *
     */
    public void testConstructor()
    {
        int    id   = 123;
        String name = "Router";

        // Does "happy path" work?
        salesItem test1 = new salesItem(id, name);
        assertEquals("Happy Path id test failed", id, test1.getId());
        assertEquals("Happy Path name test failed", name, test1.getName());

        // Is negative id handled?
        try
        {
            salesItem test2 = new salesItem(-123, name);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is zero id handled?
        try
        {
            salesItem test3 = new salesItem(0, name);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is empty string handled?
        try
        {
            salesItem test4 = new salesItem(id, "");
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is null string handled?
        try
        {
            salesItem test5 = new salesItem(id, null);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}
    }

    ////////////////////////////////////////////////////////////////////////
    // testSetter()
    ////////////////////////////////////////////////////////////////////////
    /**
     * <p>
     * Test for setter for the salesItem object
     * </p>
     *
     */
    public void testSetter()
    {
        int    id   = 123;
        String name = "Router";

        salesItem test = new salesItem(456, "Another");

        // Does "happy path" work?
        test.setId(id);
        assertEquals("Happy Path id test failed", id, test.getId());

        test.setName(name);
        assertEquals("Happy Path name test failed", name, test.getName());

        // Is negative id handled?
        try
        {
            test.setId(-123);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is zero id handled?
        try
        {
            test.setId(0);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is empty string handled?
        try
        {
            test.setName("");
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}

        // Is null string handled?
        try
        {
            test.setName(null);
            fail("Expected exception was not thrown");
        }
        catch(IllegalArgumentException e) {}
    }
}
The unit test has two test methods: testConstructor() and testSetter(). These two tests are added to the test suite in the suite() method. Before each test method is called, JUnit calls the setUp() method if one is defined. The optional setUp() method can be used to perform any initialization the test needs, such as instantiating a particular class or making a database connection.

Next, the testConstructor() method is called. Within this test method, several checks are run to verify the correct behavior of the tested class under different conditions. Correct behavior is verified with the various assertXXX() methods, such as assertEquals() and assertTrue(). The assertXXX() methods compare actual values against expected values and fail the test if they don't match. In our example, the setter methods should throw an IllegalArgumentException if the parameter is invalid; if the exception is not thrown, we call the fail() method to indicate that the test has not behaved as expected and has failed.

Finally, if we had implemented the optional tearDown() method, it would be called after each test method completes. tearDown() is used to clean up after a test, for example by closing a database connection. The cycle then repeats for the next test method.

In the JUnit test class, the suite() method is used to add unit-test methods to the suite of tests to be run. There are two ways to add tests. The first way is shown here:
public static Test suite()
{
    TestSuite suite = new TestSuite();

    // test constructor()
    suite.addTest(new salesItemTest("testConstructor"));
    suite.addTest(new salesItemTest("testSetter"));
    return suite;
}
In this approach, each unit test is explicitly added in the suite() method, so every new test method must be added by hand as it is created. The other technique takes advantage of the fact that JUnit uses Java reflection: if we name all of our unit-test methods so that they start with "test", we can add all of the tests with one statement, as shown here:
public static Test suite()
{
    TestSuite suite = new TestSuite(salesItemTest.class);
    return suite;
}
The advantage of the second technique is that you don't have to add each new unit-test method to suite() as it is created. With the first approach, you have more control over which tests are run, in case you want to turn some of them off temporarily while debugging a problem.
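Returning to the test life cycle described earlier: the salesItemTest class doesn't need a fixture, but when a test does, you override setUp() and tearDown() in the TestCase subclass. The following is a minimal sketch using a hypothetical scratchFileTest class (not part of the eMarket code) that gives each test its own temporary file:

package com.networksByteDesign.eMarket.inventory;

import java.io.File;

import junit.framework.TestCase;

public class scratchFileTest extends TestCase
{
    private File mScratchFile = null;

    public scratchFileTest(String testName)
    {
        super(testName);
    }

    // Called by JUnit before each test method runs.
    protected void setUp() throws Exception
    {
        mScratchFile = File.createTempFile("eMarket", ".test");
    }

    // Called by JUnit after each test method, even if the test failed.
    protected void tearDown() throws Exception
    {
        if(mScratchFile != null)
        {
            mScratchFile.delete();
        }
    }

    // Each test starts with a fresh scratch file created by setUp().
    public void testScratchFileExists()
    {
        assertTrue("Scratch file should exist", mScratchFile.exists());
    }
}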
Command-Line Unit Testing
Before we hook JUnit into Ant, let's begin by running the unit test interactively at the command line. We first need to compile both the sample class and the unit test, using the compile target. In order to run the unit test at the command line, junit.jar must be in the CLASSPATH. Listing 3.3 shows the output of running the unit test at the command line.
Listing 3.3 Output of Running Command-Line JUnit Test
% java com.networksByteDesign.eMarket.inventory.salesItemTest
.F.
Time: 0.033
There was 1 failure:
1) testConstructor(com.networksByteDesign.eMarket.inventory.salesItemTest)
   "Expected exception was not thrown"

FAILURES!!!
Tests run: 2,  Failures: 1,  Errors: 0
Simple JUnit Target
Of the two tests run, one failed. The constructor does not perform the same checks as the setters, so it silently accepts invalid arguments such as a non-positive id or a null or empty name. Before fixing this problem, let's hook the unit test into Ant to see how failed tests are handled.
Listing 3.4 is a unittest target that simply calls the JUnit class shown in Listing 3.2. This is about as simple as a unit-testing target can be.
Listing 3.4 Simple unittest Target
<?xml version="1.0" ?>
<project name="eMarket" default="compile" basedir=".">

    <property name="dirs.source" value="/usr/projects/eMarket/src" />
    <property name="dirs.backup" value="${user.home}/backup" />

    <!-- compile target -->
    <target name="compile" description="Compile all of the source code.">
        <javac srcdir="${dirs.source}" />
    </target>

    <!-- unittest target -->
    <target name="unittest" description="Run the unit tests for the source code.">
        <junit>
            <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
        </junit>
    </target>

</project>
As you can see, this target simply calls the unit test just as we did from the command line. In Chapter 4, "The First Complete Build Process," we will change this task to include sets of tests rather than listing each test individually. The output of this target appears in Listing 3.5.
Listing 3.5 Output of Simple unittest Target with Broken Test
% ant unittest
Buildfile: build.xml

unittest:
    [junit] TEST com.networksByteDesign.eMarket.inventory.salesItemTest FAILED

BUILD SUCCESSFUL
Total time: 2 seconds
Although running the unittest target shows the test class that failed, the output does not tell which test within the class failed, nor does it contain any other useful detail. If we fix the class by having the constructor call the setters rather than setting the member variables directly (a sketch follows Listing 3.6), all the unit tests should pass. Listing 3.6 shows the output when all tests pass. As you can see, there is no output from the target.
Listing 3.6 Output of Simple unittest Target with No Broken Tests
% ant unittest
Buildfile: build.xml

unittest:

BUILD SUCCESSFUL
Total time: 2 seconds
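Here is a minimal sketch of that fix. Only the constructor of the salesItem class changes; it delegates to the setters so that construction performs the same validation they do:

public salesItem(int id, String name)
{
    // Delegating to the setters means an invalid id or name now
    // throws IllegalArgumentException from the constructor as well.
    setId(id);
    setName(name);
}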
haltonfailure/haltonerror
Let's begin enhancing the unittest target by looking at what happens when you run multiple tests. JUnit has a concept of failures and errors. Failures are tests that do not pass, but in anticipated ways. For example, the sample unit test reported a failure because it checked that a particular exception was thrown, and the exception never appeared.
Errors are unanticipated problems. An exception that is thrown but not caught by the class or the test is a common example. For the purposes of the <junit> task, errors are considered failures as well.
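A minimal sketch of the distinction, using a hypothetical test class (not part of the eMarket code): the first method is reported as a failure because an assertion does not hold, while the second is reported as an error because an exception escapes the test.

import junit.framework.TestCase;

public class distinctionTest extends TestCase
{
    public distinctionTest(String testName)
    {
        super(testName);
    }

    // Reported as a FAILURE: the assertion intentionally does not hold.
    public void testReportsFailure()
    {
        assertEquals("Addition check intentionally fails", 4, 2 + 3);
    }

    // Reported as an ERROR: an uncaught exception escapes the test.
    public void testReportsError()
    {
        String missing = null;
        missing.length(); // throws NullPointerException
    }
}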
The JUnit task has two attributes for determining how Ant should behave if a failure or error occurs:
haltonfailure
haltonerror
If haltonfailure is set to yes, the build stops as soon as a test fails; if haltonerror is set to yes, the build stops as soon as a test reports an error.
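For example, based on the unittest target shown earlier, a run that should stop at the first problem might look like this (both attributes are optional and default to no):

<target name="unittest" description="Run the unit tests for the source code.">
    <junit haltonfailure="yes" haltonerror="yes">
        <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
        <test name="com.networksByteDesign.eMarket.inventory.customerTest" />
    </junit>
</target>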
Listing 3.7 shows a unit test run with haltonfailure set to no. Even though a test fails, the subsequent tests are still run.
Listing 3.7 Output from Unit Test Run with haltonfailure Set to No
% ant unittest
Buildfile: build.xml

unittest:
    [junit] TEST com.networksByteDesign.eMarket.inventory.salesItemTest FAILED
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED

BUILD SUCCESSFUL
Total time: 1 second
Listing 3.8 shows the same test run, but with haltonfailure set to yes. Notice that when the first test fails, the build ends. No further tests are run, and no opportunity exists to clean up after the test.
Listing 3.8 Output from Unit Test Run with haltonfailure Set to Yes
% ant unittest
Buildfile: build.xml

unittest:

BUILD FAILED
file:/usr/projects/eMarket/build.xml:15: Test com.networksByteDesign.eMarket.inventory.salesItemTest failed

Total time: 1 second
Sometimes you want the build to fail, but not until you have cleaned up after the unit test. In this case, you can use the attributes failureproperty and errorproperty. If you supply a property name to these attributes, the property will be set automatically if a failure or error occurs. Your Ant target can check these properties after cleaning up, to determine whether the build should be halted. Listing 3.9 shows an example of how failureproperty can be used.
Listing 3.9 Example of Using failureproperty to Clean Up After a Failed Test
<?xml version="1.0" ?>
<project name="eMarket" default="compile" basedir=".">

    <property name="dirs.source" value="/usr/projects/eMarket/src" />
    <property name="dirs.backup" value="${user.home}/backup" />
    <property name="dirs.temp" value="/tmp" />

    <!-- compile target -->
    <target name="compile" description="Compile all of the source code.">
        <javac srcdir="${dirs.source}" />
    </target>

    <!-- unittest target -->
    <target name="unittest" description="Run the unit tests for the source code.">
        <junit haltonfailure="no" failureproperty="unittestFailed">
            <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
            <test name="com.networksByteDesign.eMarket.inventory.customerTest" />
        </junit>

        <antcall target="cleanupUnittest" />

        <fail if="unittestFailed" message="One or more unit tests failed."/>
    </target>

    <!-- cleanupUnittest target -->
    <target name="cleanupUnittest">
        <delete>
            <fileset dir="${dirs.temp}">
                <include name="*${ant.project.name}.test" />
            </fileset>
        </delete>
    </target>

</project>
The unittest target makes use of the <antcall> task. The <antcall> task is used to invoke another target within the same buildfile. Doing this creates another instance of a project. All of the properties in the current project will be passed to the new project unless the inheritAll attribute is set to false. The nested <param> element can also be used with <antcall> to pass new property values.
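For example, here is a sketch of an <antcall> that passes its own value for the dirs.temp property to the cleanup target; the inheritAll setting and the value shown are illustrative:

<antcall target="cleanupUnittest" inheritAll="false">
    <!-- Supplies (or overrides) dirs.temp for the called target only. -->
    <param name="dirs.temp" value="/tmp" />
</antcall>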
The <fail> task is used to inform the Ant build process that the build should fail and to provide an appropriate message. In this case, we use the if attribute of the <fail> task so that the build fails only if the unittestFailed property is set. This allows us to fail the build, but only after the appropriate cleanup code has run.
printsummary
By default, Ant displays only the tests that fail or have an error. Although this is often desirable, some feedback on how things are progressing can be helpful when the unit testing process takes a long time. Listing 3.10 shows the output of the unittest target when multiple tests are run. As you can see, two classes are shown in the unittest section of the output.
Listing 3.10 Build Output with Multiple Unit-Test Classes
% ant unittest
Buildfile: build.xml

unittest:
    [junit] TEST com.networksByteDesign.eMarket.inventory.salesItemTest FAILED
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED

cleanupUnittest:
   [delete] Deleting 2 files from /tmp

BUILD FAILED
file:/usr/projects/eMarket/build.xml:26: One or more unit tests failed.

Total time: 2 seconds
To show the output from the tests being run, whether the tests fail or not, use the printsummary attribute. When this attribute is set to yes, all tests are summarized in the build output. Listing 3.11 shows the output using the same classes as in Listing 3.10. A third class now appears. This class was being tested before, but because its tests passed, it did not show up in the output. Setting the printsummary attribute shows all tests regardless of whether they pass or fail.
Listing 3.11 Build Output with printsummary Set to Yes
% ant unittest
Buildfile: build.xml

unittest:
    [junit] Running com.networksByteDesign.eMarket.inventory.salesItemTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.03 sec
    [junit] TEST com.networksByteDesign.eMarket.inventory.salesItemTest FAILED
    [junit] Running com.networksByteDesign.eMarket.inventory.customerTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.015 sec
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED
    [junit] Running com.networksByteDesign.eMarket.inventory.companyTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.169 sec

cleanupUnittest:

BUILD FAILED
file:/usr/projects/eMarket/build.xml:31: One or more unit tests failed.

Total time: 2 seconds
Using the printsummary attribute, you see not only all the classes, but also the number of tests run, how many failures and errors occurred, and the time elapsed for each test. You might want to experiment with both approaches and see which style works best for your team.
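For reference, a sketch of the <junit> element from the unittest target with the attribute turned on, assuming the same three test classes:

<junit haltonfailure="no" failureproperty="unittestFailed" printsummary="yes">
    <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
    <test name="com.networksByteDesign.eMarket.inventory.customerTest" />
    <test name="com.networksByteDesign.eMarket.inventory.companyTest" />
</junit>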
showoutput
If your classes make use of logging, whether with a logging tool such as log4j or a simple System.out.println(), that information can be displayed when the unit tests run. By setting the showoutput attribute to yes, anything written to stdout and stderr is displayed in the unit test's output. Listing 3.12 shows the sample class with a logging statement in each setter method.
Listing 3.12 Sample Class with Logging Statement
/*--------------------------------------------------------------------------
    File: salesItem
  ------------------------------------------------------------------------*/
package com.networksByteDesign.eMarket.inventory;

public class salesItem
{
    /* =====================================================================
        salesItem Constructor
       ================================================================== */
    public salesItem(int id, String name)
    {
        mId = id;
        mName = name;
    }

    /* =====================================================================
        getId
       ================================================================== */
    public int getId()
    {
        return mId;
    }

    /* =====================================================================
        setId
       ================================================================== */
    public void setId(int id)
    {
        System.out.println("ID = " + id);

        if(id <= 0)
        {
            throw new IllegalArgumentException("Id must be a valid id #");
        }

        mId = id;
    }

    /* =====================================================================
        getName
       ================================================================== */
    public String getName()
    {
        return mName;
    }

    /* =====================================================================
        setName
       ================================================================== */
    public void setName(String name)
    {
        System.out.println("Name = " + name);

        if(name == null || name.length() == 0)
        {
            throw new IllegalArgumentException("Name must be populated");
        }

        mName = name;
    }

    private int    mId   = 0;
    private String mName = null;
}
Listing 3.13 displays the output of the unittest target with showoutput and printsummary set to yes. This can be useful in debugging or when you create your nightly unit test process, later in this chapter.
Listing 3.13 Output from unittest Target with showoutput Set to Yes
% ant unittest
Buildfile: build.xml

unittest:
    [junit] Running com.networksByteDesign.eMarket.inventory.salesItemTest
    [junit] ID = 123
    [junit] Name = Router
    [junit] ID = -123
    [junit] ID = 0
    [junit] Name =
    [junit] Name = null
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.034 sec
    [junit] TEST com.networksByteDesign.eMarket.inventory.salesItemTest FAILED
    [junit] Running com.networksByteDesign.eMarket.inventory.customerTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.009 sec
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED
    [junit] Running com.networksByteDesign.eMarket.inventory.companyTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.186 sec

cleanupUnittest:

BUILD FAILED
file:/usr/projects/eMarket/build.xml:32: One or more unit tests failed.

Total time: 2 seconds
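A sketch of the <junit> settings that produce this style of output, again assuming the same test classes as before:

<junit haltonfailure="no" failureproperty="unittestFailed"
       printsummary="yes" showoutput="yes">
    <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
    <test name="com.networksByteDesign.eMarket.inventory.customerTest" />
    <test name="com.networksByteDesign.eMarket.inventory.companyTest" />
</junit>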
Formatter
The <junit> task provides formatter classes to facilitate handling the output from unit tests. These classes listen to all of the output and act as both filters and formatters in presenting the final results. A formatter is added by nesting a <formatter> element inside the <junit> task. Three basic formatters are provided:
Plain
Brief
XML
It's also possible to develop custom formatters. The following target shows how the formatter is set.
<!-- unittest target -->
<target name="unittest" description="Run the unit tests for the source code.">
    <mkdir dir="${dirs.test}"/>

    <junit haltonfailure="no" printsummary="yes" showoutput="yes">
        <formatter type="brief" usefile="true" />
        . . .
    </junit>
</target>
Plain Formatter
Plain is a flat-file text format that provides information about both the tests that failed and those that succeeded. If the output does not need to be parsed by another process and information on successful tests is desired, this is probably the formatter type to select. Listing 3.14 shows the output of a unit test run with the formatter set to plain.
Listing 3.14 Sample JUnit Output File with JUnit Task Set to Plain Formatter
% ant unittest
Buildfile: build.xml

unittest:
    [junit] Running com.networksByteDesign.eMarket.inventory.salesItemTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.025 sec
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.salesItemTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.025 sec
    [junit] Testcase: testConstructor took 0.017 sec
    [junit] Testcase: testSetter took 0 sec
    [junit] Running com.networksByteDesign.eMarket.inventory.customerTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.113 sec
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.customerTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.113 sec
    [junit] Testcase: testConstructor took 0.003 sec
    [junit]     FAILED
    [junit] Expected exception was not thrown
    [junit] junit.framework.AssertionFailedError: Expected exception was not thrown
    [junit]     at com.networksByteDesign.eMarket.inventory.customerTest.testConstructor(Unknown Source)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] Testcase: testConstructor
    [junit] Testcase: testSetter took 0 sec
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED
    [junit] Running com.networksByteDesign.eMarket.inventory.companyTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.033 sec
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.companyTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.033 sec
    [junit] Testcase: testConstructor took 0.003 sec
    [junit] Testcase: testSetter took 0.001 sec

cleanupUnittest:

BUILD FAILED
file:build.xml:80: One or more unit tests failed.

Total time: 2 seconds
Brief Formatter
Brief is the same as the plain formatter, except that detailed information on successful tests is filtered out. If the output does not need to be parsed by another process and detailed information only on failed tests is desired, this is probably the formatter type to select. Listing 3.15 shows the output of a unit test run with the formatter set to brief.
Listing 3.15 Sample JUnit Output File with JUnit Task Set to Brief Formatter
% ant unittest
Buildfile: build.xml

unittest:
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.salesItemTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.005 sec
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.customerTest
    [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0.007 sec
    [junit] Testcase: testConstructor(com.networksByteDesign.eMarket.inventory.customerTest): FAILED
    [junit] Expected exception was not thrown
    [junit] junit.framework.AssertionFailedError: Expected exception was not thrown
    [junit]     at com.networksByteDesign.eMarket.inventory.customerTest.testConstructor(Unknown Source)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] TEST com.networksByteDesign.eMarket.inventory.customerTest FAILED
    [junit] Testsuite: com.networksByteDesign.eMarket.inventory.companyTest
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.001 sec

cleanupUnittest:

BUILD FAILED
file:build.xml:82: One or more unit tests failed.

Total time: 2 seconds
XML Formatter
The XML format provides the most information and should be used whenever the output will be parsed by another process, such as an XSLT to generate an HTML report. However, because some constructs are illegal in XML, the output from your unit tests may be filtered to prevent the inclusion of information that would invalidate the XML file. Listing 3.16 shows the output of a unit test run with the formatter set to xml. The XML formatter can be used to supply the test results in XML to the <junitreport> task, which provides HTML reports and is discussed in Chapter 5, "Creating the Automated Nightly Build."
Listing 3.16 Sample JUnit Output File with JUnit Task Set to XML Formatter
% ant unittest
Buildfile: build.xml

unittest:
    [junit] <?xml version="1.0" encoding="UTF-8" ?>
    [junit] <testsuite errors="0" failures="0" name="com.networksByteDesign.eMarket.inventory.salesItemTest" tests="2" time="0.169">
    [junit]   <properties>
    [junit]     <property name="dirs.temp" value="/tmp"></property>
    [junit]     <property name="java.vm.version" value="1.4.1_01-12"></property>
    [junit]     <property name="java.io.tmpdir" value="/tmp"></property>
    [junit]     <property name="os.name" value="Mac OS X"></property>
    [junit]     <property name="ant.home" value="/usr/software/ant/"></property>
              ...
    [junit]   </properties>
    [junit]   <testcase name="testConstructor" time="0.0050"></testcase>
    [junit]   <testcase name="testSetter" time="0.0"></testcase>
    [junit]   <system-out><![CDATA[]]></system-out>
    [junit]   <system-err><![CDATA[]]></system-err>
    [junit] </testsuite>
    [junit] <?xml version="1.0" encoding="UTF-8" ?>
    [junit] <testsuite errors="0" failures="1" name="com.networksByteDesign.eMarket.inventory.customerTest" tests="2" time="0.027">
    [junit]   <properties>
    [junit]     <property name="dirs.temp" value="/tmp"></property>
    [junit]     <property name="java.vm.version" value="1.4.1_01-12"></property>
    [junit]     <property name="java.io.tmpdir" value="/tmp"></property>
    [junit]     <property name="os.name" value="Mac OS X"></property>
    [junit]     <property name="ant.home" value="/usr/software/ant/"></property>
              ...
    [junit]   </properties>
    [junit]   <testcase name="testConstructor" time="0.0070">
    [junit]     <failure message="Expected exception was not thrown" type="junit.framework.AssertionFailedError">junit.framework.AssertionFailedError: Expected exception was not thrown
    [junit]       at com.networksByteDesign.eMarket.inventory.customerTest.testConstructor(Unknown Source)
    [junit]       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit]     </failure>
    [junit]   <system-out><![CDATA[]]></system-out>
    [junit]   <system-err><![CDATA[]]></system-err>
              ...
    [junit] </testsuite>

cleanupUnittest:

BUILD FAILED
file:build.xml:82: One or more unit tests failed.

Total time: 4 seconds
Direct Output with the usefile Attribute
Another attribute of the <formatter> element is usefile. Normally, all information published by a formatter is sent to a file. Especially in the case of the XML formatter, where later processing is planned, this file-based approach is usually best. However, if the intent is to provide feedback directly to the user or to add the information to a build log, setting usefile="no" sends the formatter information to the screen instead.
The files created by the <formatter> are named TEST-<name of the class>.txt for the plain and brief formatters and TEST-<name of the class>.xml for the XML formatter. For example, the file for the companyTest class would be TEST-com.networksByteDesign.eMarket.inventory.companyTest.txt or TEST-com.networksByteDesign.eMarket.inventory.companyTest.xml.
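A sketch combining both styles follows; the dirs.test property and the todir attribute (which controls where the report files are written) are shown here as one reasonable arrangement, not the only one:

<junit haltonfailure="no" printsummary="yes">
    <!-- Send brief results straight to the build output. -->
    <formatter type="brief" usefile="no" />

    <!-- Also write full XML results for later processing. -->
    <formatter type="xml" />

    <test name="com.networksByteDesign.eMarket.inventory.companyTest"
          todir="${dirs.test}" />
</junit>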
Alternative TestRunners
JUnit uses a class called TestRunner to execute the individual tests and display the results. JUnit provides alternative TestRunner classes to meet the needs of various users. All of the tests we have been running have used the textui TestRunner, as can be seen in Listing 3.17.
Listing 3.17 The Output from Running a Unit Test Using the textui TestRunner
% java junit.textui.TestRunner com.networksByteDesign.eMarket.inventory.salesItemTest
.F.
Time: 0.006
There was 1 failure:
1) testConstructor(com.networksByteDesign.eMarket.inventory.salesItemTest)
   "Expected exception was not thrown"

FAILURES!!!
Tests run: 2,  Failures: 1,  Errors: 0
JUnit offers an AWT TestRunner that provides a graphical user interface for the test results. It can be started with java junit.awtui.TestRunner com.networksByteDesign.eMarket.inventory.salesItemTest and can be seen in Figure 3.1.
Figure 3.1 The AWT TestRunner provides a graphical user interface for the test results.
JUnit also offers a Swing version of the TestRunner, which provides a more modern graphical interface. It can be started with java junit.swingui.TestRunner com.networksByteDesign.eMarket.inventory.salesItemTest and can be seen in Figure 3.2.
Figure 3.2 The Swing version of the TestRunner provides a more modern graphical interface.
Forking Unit Tests
It's possible for unit tests to have undesirable side effects on the build process. For example, if someone's code called System.exit() under a certain condition, it could cause the entire build process to exit. We would rather have just the unit test fail while the build process continues to completion. Ant insulates the build process from such unintended side effects by allowing unit tests to be forked into their own JVM (see the sketch after this list). There are several reasons why you might want to fork the JUnit tests into a separate process:
As just discussed, forking isolates a unit test from the build process. If the unit test causes a condition that prompts its process to exit, the build process is unaffected because the unit test executes in a separate process.
Forking will allow the JVM to take advantage of a multiprocessor platform.
Forking unit tests into separate processes can also insulate them from picking up side effects that might occur in the build process. For example, if a unit test modifies the state of a singleton class, another unit test running in the same JVM might execute differently than if the singleton's state had never been altered.
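Here is a sketch of a forked run based on the earlier unittest target. The fork attribute runs the tests in a separate JVM; attributes such as maxmemory and timeout apply only to forked tests, and the values shown are illustrative:

<junit fork="yes" haltonfailure="no" failureproperty="unittestFailed"
       printsummary="yes" maxmemory="128m" timeout="60000">
    <!-- timeout is in milliseconds and kills a runaway forked test. -->
    <test name="com.networksByteDesign.eMarket.inventory.salesItemTest" />
    <test name="com.networksByteDesign.eMarket.inventory.customerTest" />
</junit>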