- Frameworks for Test Automation
- Switching from Ad Hoc to Framework
- Switching from Framework to Ad Hoc
- Getting Started with Ad Hoc Scripting
Switching from Framework to Ad Hoc
On another project, I ran across an interesting bug that had to do with the way in which you traversed screens in the application. The application had more than 30 screens, and you could go from any screen to another, but typically a user would go from screen A to screen B to screen C, and so on, navigating in order from the first screen to the last. The bug I found was that if I navigated from screen G back up to screen B, the system would crash with a fatal exception. Oops.
Most of our testing focused on that primary path, and all of our automation focused on that primary path. We had built a data-driven test automation framework for regression testing that moved from the first screen to the last, entering the information supplied in the data files. Because we were looking for volume of data entry in our regression tests, it didn't make sense to design the framework to navigate other paths easily. It could do so, but it would result in scripts that would be very difficult to maintain.
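For readers who haven't built one, a data-driven framework of this shape boils down to a loop over data rows wrapped around a loop over screens. Here is a minimal sketch; the file name, screen list, URL, and `fill_screen` helper are all hypothetical stand-ins, not the project's actual code:

```ruby
require 'csv'
require 'watir'

# Hypothetical ordered screen list; the real application had more than 30.
SCREENS = %w[screen_a screen_b screen_c]

# Placeholder for the framework's real data-entry logic: enter this row's
# values on the given screen, then move forward to the next screen.
def fill_screen(browser, screen, row)
  browser.button(name: 'next').click
end

browser = Watir::Browser.new
browser.goto 'http://app.example.com/screen_a'  # hypothetical URL

# One pass through the whole primary path per data row: lots of data entry,
# always along the same first-to-last navigation order.
CSV.foreach('regression_data.csv', headers: true) do |row|
  SCREENS.each { |screen| fill_screen(browser, screen, row) }
end
```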
After finding the bug and logging the initial defect, I asked myself the following questions:
- How can I write a script, one that we can run on every release, that will test all possible pairs of page navigation to look for these types of errors? Note that this meant around 900 (30 × 30) page transitions: a script that would execute for about five hours, best case. (See the sketch following this list.)
- What’s the minimal data I need to enter in each screen to allow the page transitions to execute successfully, and how will I know that a page transition didn’t fail? (I’m looking for positive feedback from the application to tell me that it’s where I think it should be.)
- How can I design the test to be maintainable going forward, but still keep it in our standard framework?
- How will we set this script apart from our other scripts (which run for an average of 20 minutes per script) so that when we run a regression we can sequence our scripts in a way that makes sense for our environment?
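To make the first two questions concrete, here is roughly what that pairwise enumeration could look like as a Watir script. Everything specific in it is invented for illustration: the screen names, the URL scheme, the `nav_` link IDs, and the page-title check standing in for "positive feedback." At roughly 20 seconds per transition, 900 transitions works out to about five hours, which is where that estimate comes from.

```ruby
require 'watir'

# Thirty hypothetical screen identifiers ('a'..'z', then 'aa'..'dd').
SCREENS = ('a'..'dd').map { |s| "screen_#{s}" }

browser = Watir::Browser.new
failures = []

# All 900 (30 x 30) ordered pairs, including a screen back to itself.
SCREENS.each do |from|
  SCREENS.each do |to|
    browser.goto "http://app.example.com/#{from}"  # hypothetical URL scheme
    browser.link(id: "nav_#{to}").click            # hypothetical nav control
    # Positive feedback: verify we landed where we expect to be, not merely
    # that no error page appeared.
    failures << [from, to] unless browser.title.include?(to)
  end
end

puts failures.empty? ? 'All transitions passed.' : failures.inspect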
I answered these questions as best I could, and started writing the script using our standard framework. While I was working on the script (a slow process, since I was using the framework), the developer resolved the initial defect and sent it back to me for retesting. His resolution comments made me suspicious, so I gave him a call:
Mike: "Hey, Tim. I was looking at the defect you just sent back to me for retesting, and I saw a note you made that said the problem was related to the architectural changes we made for this release. Can you tell me more about that?"Tim: "Yeah, we changed the way we were using Struts on some of the pages. The error you saw was because each page had an object on it with the same name, and it didn’t know how to handle that. You’ll only see that problem if two pages happen to have two objects with the same name. All I had to do was change the name."
Mike: "Interesting. Have you checked the other pages for a similar problem?"
Tim: "I thought about it, but for the most part we all used the naming conventions for each page when we developed them. The one you found just happened to be an exception where the naming convention failed us because the pages are so similar. For me to test all the pages would take hours."
Mike: "Is it something you can tell just by looking at the code?"
Tim: "Sure, but the way the code’s structured, it would be faster to test it."
Mike: "So what’s the likelihood of this problem coming back in a future release?"
Tim: "Slim to none. It’s not something we change. And even so, it should be caught in a code review."
Mike: "If I could write a script in about 30 minutes and leave it running in the background to test all the other pages, do you think that would be useful?"
Tim: "Yeah, if it only takes you half an hour, I do. Go ahead and do that before you close the defect. I think it’s a good idea just so we can be sure there are no other similar problems. But really, man, if it’s more than thirty minutes, forget about it."
Mike: "Cool, man. Thanks. I’ll let you know how it goes."
Armed with this new information, I scrapped the work I was doing in our regression test framework and instead turned out a quick Watir script that ran in the background for the rest of the day. No other problems were found. I closed the defect and threw away the script. Based on the information I had, it didn’t make sense to run that test for five hours on every release.
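That throwaway script was essentially a stripped-down version of the enumeration sketched earlier: no framework, no data files, just every transition in a loop with enough error handling to keep going unattended. Again, the specifics here (screen names, URLs, link IDs, log format, and the error text it watches for) are invented:

```ruby
require 'watir'

SCREENS = ('a'..'dd').map { |s| "screen_#{s}" }  # 30 hypothetical screens

browser = Watir::Browser.new
File.open('nav_check.log', 'w') do |log|
  # Every ordered pair of screens; trap errors and keep going, so the
  # script can run in the background for hours without babysitting.
  SCREENS.product(SCREENS).each do |from, to|
    begin
      browser.goto "http://app.example.com/#{from}"  # hypothetical URLs
      browser.link(id: "nav_#{to}").click
      log.puts "FAIL #{from} -> #{to}" if browser.text.include?('fatal exception')
    rescue StandardError => e
      log.puts "ERROR #{from} -> #{to}: #{e.message}"
    end
  end
end
```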