Testing Applications
Being able to build Ajax applications with Java gives you many tools for maintaining larger applications with less work. One very important aspect of maintaining a large application is being able to easily create unit tests for most, if not all, of its functionality. This need comes from a common problem in software development: the code base grows to a point where small changes can have cascading effects that create bugs.
It has become common practice to incorporate heavy testing into the development cycle. In the traditional waterfall development cycle, you would write code to a specification until the implementation of the specification was complete. Then the application would be passed to testers who would look for bugs, and developers would respond to bug reports by fixing them. Once all the bugs were fixed, the product would be shipped. Figure 4-39 illustrates the steps in traditional software development testing.
Figure 4-39 Old-style testing = bad
The problem with this type of development cycle is that during the bug-finding and bug-fixing phase, code changes can easily cause more bugs. To catch these, testers would need to start testing again from the beginning after every code change to ensure new bugs weren't created and old bugs didn't reappear.
One successful testing methodology has developers write automated unit tests before they write the features. The tests cover every use case of the new feature to be added. The first time the test is run, it will fail for each case. The development process then continues until each test case in the unit test is successful. Then the unit test becomes part of a test suite for the application and is run before committing any source code changes to the source tree. If a new feature causes any part of the application to break, other tests in the automated test suite will identify this problem, since every feature of the application has had tests built. If a bug is found at this point, it is relatively easy to pinpoint the source since only one new feature was added. Finding and fixing bugs early in the development lifecycle like this is much easier and quicker than finding and fixing them at the end. The test suite grows with the application. The initial investment in time to produce the unit tests pays off over the long run since they are run again on every code change, ensuring each feature's health. Figure 4-40 illustrates this process.
Figure 4-40 Test-first testing = good
In practice, when comparing this approach to the one illustrated in Figure 4-39, finding bugs earlier saves a large amount of time, and there is less need for a large testing team since the developer is responsible for much of the testing.
This technique is relatively novel for client-side web applications. With traditional web applications, testing is reduced to usability testing and making sure that different browsers render pages properly. This is one of the great things about HTML: it's a declarative language that leaves little room for logic bugs, so it's easy to deploy HTML web pages that work (browser-rendering quirks aside). However, using JavaScript introduces the possibility of logic bugs. This wasn't much of a problem when JavaScript was used lightly, but for Ajax applications that use JavaScript heavily, logic bugs are a real problem. Since JavaScript is not typed and does not have a compile step, many bugs can only be found by running the application, which makes the creation of unit tests difficult. Furthermore, it is difficult to test an entire application through its interface. Many simple bugs, such as calling an undefined function, cannot be caught without running the program and executing the code that contains the bug, whereas with Java these bugs are caught immediately in the IDE or at compile time. From a testing perspective, it does not make sense to build large Ajax applications with JavaScript.
Using JUnit
JUnit is another great Java tool, one that assists in creating automated tests for your application. It provides classes that help you build and organize tests, such as assertions to check expected results, a test-case base class that lets you set up several tests, and a mechanism for joining tests together in a test suite. To create a test case for JUnit you would typically extend the TestCase class, but since GWT applications require a special environment, GWT provides a GWTTestCase class for you to extend instead.
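For example, a minimal GWT test case might look like the following sketch (the class and module names here are illustrative, not from this book's applications):

import com.google.gwt.junit.client.GWTTestCase;

public class ExampleTest extends GWTTestCase {

    // Required by GWT: names the module whose environment the test runs in.
    public String getModuleName() {
        return "com.gwtapps.example.Example";
    }

    // An ordinary JUnit 3-style test method; JUnit finds it by its "test" prefix.
    public void testAddition() {
        assertEquals( 4, 2 + 2 );
    }
}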
Let's walk through the creation of a test case for the Multi-Search application in Chapter 7. The first step is to use the GWT junitCreator script to generate a test case class and some scripts that can launch the test case. The junitCreator script takes several arguments to run. Table 4-1 outlines each argument.
Table 4-1. junitCreator Script Arguments
| Argument | Description | Example |
| --- | --- | --- |
| junit | Lets you define the location of the junit jar file. You can find a copy in the plugin directory of your Eclipse installation. | -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar |
| module | Specifies the GWT module that you'll be testing. It is required since the environment needs to run this module for your test. | -module com.gwtapps.multisearch.MultiSearch |
| eclipse | Specifies your Eclipse project name if you want to generate Eclipse launch configurations. | -eclipse GWTApps |
| (test class name) | The last argument should be the class name for the test case. You would typically use the same package as the one being tested. | com.gwtapps.multisearch.client.MultiSearchTest |
To run this script for the Multi-Search application we can use the following command:
junitCreator -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar -module com.gwtapps.multisearch.MultiSearch -eclipse GWTApps com.gwtapps.multisearch.client.MultiSearchTest
Figure 4-41 shows the output from this command. The script created two scripts, two launch configurations for launching the test in web mode or hosted mode, and one test case class that is stored in the test directory. In Eclipse the test case class will look like Figure 4-42.
Figure 4-41 Using junitCreator to generate a test case
Figure 4-42 A generated test case in Eclipse
The generated test case has two methods. The first, getModuleName, is required by GWT and must specify the module that is being tested. The junitCreator script has set this value to the Multi-Search module because it was specified with the module command line argument. The second method, a test case, is implemented as a simple test that just asserts that the value true is true. You can build as many test cases as you like in this one class.
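The generated class looks roughly like the following sketch (the exact name of the placeholder test method may differ slightly between GWT versions):

package com.gwtapps.multisearch.client;

import com.google.gwt.junit.client.GWTTestCase;

public class MultiSearchTest extends GWTTestCase {

    // Required by GWT: identifies the module under test.
    public String getModuleName() {
        return "com.gwtapps.multisearch.MultiSearch";
    }

    // The placeholder test generated by junitCreator.
    public void testSimple() {
        assertTrue(true);
    }
}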
You can run the tests by running the scripts generated by junitCreator. Alternatively, you can launch JUnit inside Eclipse for a visual representation of the results. Running inside Eclipse also lets you debug the JUnit test case, which can greatly assist in finding bugs when a test case fails. Since junitCreator created a launch configuration for Eclipse, we can simply click the Run or Debug icon in the Eclipse toolbar and select the MultiSearchTest launch configuration from the drop-down menu. After launching this configuration, the JUnit view automatically displays in Eclipse. When the test has completed, you will see the results in the JUnit view, as shown in Figure 4-43. Notice the familiar check marks, displayed in green in Eclipse, next to the test case, indicating that the test case was successful.
Figure 4-43 Running a JUnit test case from Eclipse
Now let's create a test case for each type of search engine that the application uses. Adding the following code to the test class creates four new tests:
protected MultiSearchView getView(){
    MultiSearchView view = new MultiSearchView( new MultiSearchViewListener(){
        public void onSearch( String query ){}
    });
    RootPanel.get().add( view );
    return view;
}

protected void doSearchTest( Searcher searcher ){
    searcher.query( "gwt" );
}

public void testYahoo() {
    doSearchTest( new YahooSearcher( getView() ) );
}

public void testFlickr() {
    doSearchTest( new FlickrSearcher( getView() ) );
}

public void testAmazon() {
    doSearchTest( new AmazonSearcher( getView() ) );
}

public void testGoogleBase() {
    doSearchTest( new GoogleBaseSearcher( getView() ) );
}
The first two methods, getView and doSearchTest, are helpers for the tests in this test case. The getView method simply creates a view, the MultiSearchView defined in the application, and adds it to the RootPanel so that it is attached to the document. The doSearchTest method then sends a query through a Searcher class implementation. Each test method instantiates a different Searcher implementation and passes it to the doSearchTest method. When JUnit runs, each test submits a query to the respective search engine. Figure 4-44 shows what the result looks like in the Eclipse JUnit view.
Figure 4-44 Running several tests in one test case
If any search failed by an exception being thrown, then the stack trace for the exception would display in the right pane of this view and a red X icon would display over the test case.
The problem with this test case is that it doesn't verify the results. JUnit provides many assertion helper methods that compare actual results to expected results. In this case, however, our results are asynchronous; they don't arrive until after the test method completes. Since much of Ajax development is asynchronous, GWT provides help here with the delayTestFinish method.
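As a minimal sketch of the general pattern (independent of the Multi-Search application), an asynchronous test delays its own completion and then explicitly finishes from the asynchronous callback; the Timer from com.google.gwt.user.client here merely stands in for any asynchronous call:

public void testSomethingAsynchronous() {
    // The Timer stands in for any asynchronous operation, such as an RPC call.
    Timer timer = new Timer() {
        public void run() {
            // Validate the asynchronous result here, then tell GWT that the
            // asynchronous part of the test has completed.
            finishTest();
        }
    };
    timer.schedule(100);
    // Tell JUnit not to treat the test as finished yet; the test fails if
    // finishTest is not called within 500 milliseconds.
    delayTestFinish(500);
}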
To use this method we need a way of validating an asynchronous request; once we have validated that the request completed, we call the finishTest method. In the case of the MultiSearch test, we will validate the request when we receive one search result. To do this we need to hook into the application to intercept the asynchronous event, which requires a bit of knowledge about the application and may seem a little obscure otherwise. We will create a mock object, an object that pretends to be another object in the application, to simulate the SearchResultsView class. By simulating this class we can extend it and override the method that receives search results. The class can be declared as an inner class on the test case like this:
private class MockSearchResultsView extends SearchResultsView {
    public MockSearchResultsView( SearchEngine engine ){
        super(engine);
    }
    public void clearResults(){}
    public void addSearchResult( SearchEngineResult result ){
        assertNotNull(result);
        finishTest();
    }
}
The class overrides the addSearchResult method, which one of the Searcher classes calls when a search result has been received from the server. Instead of adding the result to the view, this test case uses one of JUnit's assert methods, assertNotNull, to assert that the search engine result object is not null. Then it calls GWT's finishTest method to indicate that the asynchronous test is complete.
To run this test we need to change the doSearchTest method on the test case to insert the mock view and tell JUnit to wait for an asynchronous response:
protected void doSearchTest( Searcher searcher ){
    searcher.setView( new MockSearchResultsView( searcher.getView().getEngine() ) );
    searcher.query( "gwt" );
    delayTestFinish(5000);
}
In this code we set the view of the searcher to the mock view that we've created, and then call the delayTestFinish method with a value of 5,000 milliseconds (5 seconds). If the test does not complete within 5 seconds, it will fail. If the network connection is slow, you may want to consider a longer value here to properly test for errors.
At this point these tests exercise the application code in the proper GWT environment, with asynchronous events occurring. Use testing methods like these as you build your application so you end up with a solid regression testing library.
Benchmarking
When using GWT to create Ajax applications, user experience almost always comes first, and part of creating a good user experience is making the application perform well. Fortunately, since GWT has a compile step, each new GWT version can generate faster code, an advantage that you don't have with regular JavaScript development. However, you shouldn't rely solely on the GWT team to improve performance; you should also aim to make your own code perform better. Starting with release 1.4, GWT includes a benchmarking subsystem that assists in making smart performance-based decisions when developing Ajax applications.
The benchmark subsystem works with JUnit. You can benchmark code through JUnit by using GWT's Benchmark test case class instead of GWTTestCase. Using this class causes the benchmarking subsystem to kick in and measure the duration of each test. After the tests have completed, the benchmark system writes the results to disk as an XML file. You can open the XML file to read the results, but it is easier to view them in the benchmarkViewer application that comes with GWT.
Let's look at a simple example of benchmarking. We can create a benchmark test case by using the junitCreator script in the same way we would for a regular test case:
junitCreator -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar -module com.gwtapps.desktop.Desktop -eclipse GWTApps com.gwtapps.desktop.client.CookieStorageTest
In this code we're creating a test case for the cookie storage feature in Chapter 6's Gadget Desktop application. The application uses the CookieStorage class to save large cookies easily while taking browser cookie limits into account. In this test we're going to measure cookie performance. First, we extend the Benchmark class instead of GWTTestCase:
public class CookieStorageTest extends Benchmark {

    public String getModuleName() {
        return "com.gwtapps.desktop.Desktop";
    }

    public void testSimpleString(){
        try {
            CookieStorage storage = new CookieStorage();
            storage.setValue("test", "this is a test string");
            assertEquals( storage.getValue("test"), "this is a test string" );
            storage.save();
            storage.load();
            assertEquals( storage.getValue("test"), "this is a test string" );
        } catch (StorageException e) {
            fail();
        }
    }
}
You can run this benchmark from the Eclipse JUnit integration or the launch configuration generated by the junitCreator script. The test simply creates a cookie, saves it, loads it, and then verifies that it hasn't changed. The generated XML file will contain a measurement of the time it took to run this method. At this point the benchmark is not very interesting. We can add more complex benchmarking by testing with ranges.
Using ranges in the benchmark subsystem gives you the capability to run a single test case multiple times with different parameter values. Each run will have its duration measured, which you can later compare in the benchmark report. The following code adds a range to the cookie test to test writing an increasing number of cookies:
public class CookieStorageTest extends Benchmark {

    final IntRange smallStringRange = new IntRange(1, 64, Operator.MULTIPLY, 2);

    public String getModuleName() {
        return "com.gwtapps.desktop.Desktop";
    }

    /**
     * @gwt.benchmark.param cookies -limit = smallStringRange
     */
    public void testSimpleString( Integer cookies ){
        try {
            CookieStorage storage = new CookieStorage();
            for( int i=0; i < cookies.intValue(); i++ ){
                storage.setValue("test"+i, "this is a test string"+i);
                assertEquals( storage.getValue("test"+i), "this is a test string"+i );
            }
            storage.save();
            storage.load();
            for( int i=0; i < cookies.intValue(); i++ ){
                assertEquals( storage.getValue("test"+i), "this is a test string"+i );
            }
        } catch (StorageException e) {
            fail();
        }
    }

    public void testSimpleString(){
    }
}
This code creates an IntRange. The parameters in the IntRange constructor create a range that starts at one and doubles until it reaches 64 (1, 2, 4, 8, 16, 32, 64). GWT passes each value in the range into a separate run of the testSimpleString method. GWT knows to do this from the annotation in the Javadoc comment before the method, which identifies the parameter and the range to apply.
Notice that there is also a version of the testSimpleString method without any parameters. You need to provide this no-argument version because JUnit does not support test methods that take parameters. The benchmark subsystem is aware of this and is able to choose the correct method.
After running this code we can launch the benchmarkViewer application from the command line in the directory that the reports were generated in (this defaults to the Projects directory):
benchmarkViewer
The benchmarkViewer application shows a list of reports that are in the current directory. You can load a report by clicking on it in the list. Each report contains the source code for each test along with the results as a table and a graph. Figure 4-45 shows the result of the testSimpleString test.
Figure 4-45 Benchmark results for the cookie test
The benchmark system also recognizes beginning and ending methods. Methods like these let you separate setup and teardown code for each test that you don't want measured. For example, to define a setup method for the testSimpleString test, you would write the following code:
public void beginSimpleString( Integer cookies ){
    /* do some initialization */
}
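Following the same naming convention, a teardown method for this test might look like the following sketch (the body is only a placeholder):

public void endSimpleString( Integer cookies ){
    /* clean up state created by the test; time spent here is not measured */
}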