Mobile Application Testing on a Shoestring
Perhaps you want your company to be more than simply "online"; you want it to be available while people wait for the dentist. Maybe you want to equip your sales force to give customer demos on the iPad. Or possibly the CEO's nephew called him a "poser" over dinner because your company's software doesn't work on the iPhone. If none of these scenarios applies to you right now, one probably will eventually. For whatever reason, suddenly you'll have to develop a mobile strategy.
In this article, I'll lay out a few tips, tricks, and suggestions that can help to accelerate the pace of your mobile testing effort while setting reasonable expectations. To keep things simple, we'll focus on web-based mobile applications, but many of these techniques will also work on native iOS, Android, or BlackBerry applications.
In the worst case, you have major production issues right now. But then the blocking issue isn't testing, it's fixing—and you can use the time spent fixing to develop a strategy and get a lot of testing done.
What Does 'Support' Mean? Which Devices?
In a recent keynote address at the Software Test Professionals Conference, Matt Johnston of uTest laid out the combinatorial problem of mobile application testing. Starting with testing each feature in each OS and browser, he added handset makers and models, wireless carriers, and geolocation. Unless your company name is Google, Nokia, or Research in Motion, and unless the application is critical to your company's success, it's unlikely that you'll get the time to test all these combinations exhaustively.
The trick is to pick just one combination (your most common configuration) and come up with strategies to test and retest it very quickly. Over time, you can expand the strategy to address other systems and combinations, especially if problems are reported from the field.
In the past, when asked to test new systems, I've used a variation of Jerry Weinberg's famous "orange juice test". [1] That is, I reply, "Yes, we can do that, and it's going to cost you..." time, staff, and sometimes money for equipment and licenses.
To determine what systems to test first, you could take a hard look at your users. Sure, general browser statistics are available all over the Internet, and you could go with the most popular devices, but in most cases your particular users will have specific adoption patterns—different patterns than those of the general public. You could be better off pulling statistics from your own server logs, or getting to know the customer, than going with general-use statistics.
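Your own logs can answer the device question directly. Here's a minimal sketch in Python that tallies device families from the User-Agent field of a web server access log; it assumes the standard Apache/nginx "combined" log format, and both the file name and the keyword list are placeholders to adjust for your own site:

```python
# Tally device families from the User-Agent field of a combined-format
# access log. The log path and the device keywords are assumptions;
# adjust both to match your own servers and audience.
import re
from collections import Counter

# The User-Agent is the last quoted field in the combined log format.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

DEVICE_KEYWORDS = ["iPhone", "iPad", "iPod", "Android", "BlackBerry"]

counts = Counter()
with open("access.log") as log:
    for line in log:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        user_agent = match.group(1)
        for keyword in DEVICE_KEYWORDS:
            if keyword in user_agent:
                counts[keyword] += 1
                break
        else:
            counts["Other/desktop"] += 1

for device, hits in counts.most_common():
    print(f"{device}: {hits}")
```

Ten minutes with a script like this beats a week of arguing from industry-wide browser statistics.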
In many cases, you can combine coverage. For example, you might test functionality on the iPhone heavily and then "skim test" the iPad and iPod. (You could also choose popular Android and BlackBerry devices for a similar strategy, as those platforms have their own massive expansion of combinations that most companies reduce to a small set.)
Beyond that first heavily supported device, you might see testing on a few very different devices as a major risk. The challenge is determining how much effort to put toward which devices, and when to stop.
Want to know whether a feature is used "much"? Check your server logs for that URL signature and count how many times it occurs. If you can, sort by the number of uses within a week, and start testing with the features people use most often.
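The same log-mining trick answers that question. Here's a companion sketch, again in Python and again assuming the combined log format, that ranks URL paths by hit count; the log file name is a placeholder:

```python
# Rank URL paths by hit count so testing can start with the most-used
# features. Assumes the Apache/nginx combined log format; extend the
# method list (HEAD, PUT, and so on) if your application uses them.
import re
from collections import Counter

# The request line is the first quoted field: "GET /path HTTP/1.1"
REQUEST_PATTERN = re.compile(r'"(?:GET|POST) (\S+) HTTP')

hits = Counter()
with open("access.log") as log:
    for line in log:
        match = REQUEST_PATTERN.search(line)
        if match:
            # Strip the query string so /search?q=a and /search?q=b count together.
            hits[match.group(1).split("?")[0]] += 1

print("Most-used features this week:")
for path, count in hits.most_common(10):
    print(f"{count:8d}  {path}")
```

Run it against a week's worth of logs, and the top of the list is your test priority order.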
Common Failure Modes
People with testing experience can usually test basic functionality, but mobile devices have an entirely new set of failure modes to check. Here are a few:
- Low power and memory thrashing. Low memory can cause system crashes and bizarre behavior; loss of power while in use can create all sorts of havoc. Most simulators have an option to force a low-memory condition, but try it on the real thing first. One common difference between the iPhone and the iPod Touch is that the iPod generally has a weaker processor and less memory. Depending on your application's customer base, the iPod could be more popular than the iPhone; plenty of parents won't trust Junior with a cell phone, but gave him Mom or Dad's "old" iPod when the iPhone came out.
- Loss of signal. Like loss of memory or power, low signal can cause painful delay. Complete loss of signal (as I experienced recently in the Smith Center in Las Vegas) might leave your application spinning forever. This result might be acceptable, or it might not. In some cases, the operating system can seize up—that's probably not acceptable. It's best to find out about this problem and inform management before customers start calling.
- Changes of orientation and user experience. Sure, the application works fine on your lowest supported resolution. Great. Now turn it sideways and—whoa, look how badly that's cut off! This sort of situation is just a small part of the change to mobile, which brings with it an entirely new user-interface metaphor.
- Security and privacy. Leaving a laptop in a bar is rare, and leaving a desktop PC behind almost never happens. Even when it does happen, most corporate laptops are secured. iPhones and other mobile devices really aren't; the default security position of these devices is no password required. In most cases, consumer apps such as Gmail, Twitter, and Facebook remember passwords automatically, to save users the hassle of multiple logins. Yet mobile devices are left all over the place; remember the Gizmodo report on the iPhone 4 prototype a few years ago? If your application stores critical information, you may want to reconsider your timeout period, or the "keep me logged in" checkbox in your application. If it's an internal application, say one that equips your sales force and stores proprietary information on the device, you may want the ability to "wipe" the device remotely, or to push new builds of native applications automatically. This is known as mobile device management, and a host of companies have sprung up to help with the problem, most notably BoxTone, whose chief product officer I quote below.
A problem related to loss of signal is testing with a much stronger signal than the typical customer has. Study the customer; find out what kind of wireless plan is in use by the median customer and by the bottom quartile, and test according to those standards. In other words, don't test on the company's free wireless; do it from home, on a personal device, or in a cyber café. Try a few. Take the device with you and test while walking under a bridge. Experiment.
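When a field trip isn't practical, you can approximate a weak connection from your desk. Here's a rough sketch using Selenium's network-throttling hook for Chrome; the latency and throughput numbers are guesses at a poor cellular link, not measurements of any real carrier, and the URL is a placeholder:

```python
# Load the application under an artificially poor connection using
# Chrome's network throttling, driven through Selenium. The numbers
# are rough stand-ins for a weak cellular link, not measured values.
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.set_network_conditions(
    offline=False,
    latency=300,                     # extra round-trip delay, in ms
    download_throughput=250 * 1024,  # bytes per second
    upload_throughput=100 * 1024,
)

start = time.time()
driver.get("https://example.com/")  # placeholder; use your application's URL
print(f"Page loaded in {time.time() - start:.1f}s under throttling")

# Then cut the connection entirely and watch for infinite spinners.
driver.set_network_conditions(offline=True, latency=0,
                              download_throughput=0, upload_throughput=0)
driver.quit()
```

It's no substitute for standing under a bridge with the real handset, but it makes the slow-network case cheap enough to rerun on every build.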
A second example, this one about user experience, comes from Brian Reed, chief product officer of BoxTone: hovering the mouse pointer over an item to get a description or pop up a menu. Intuitive on a personal computer, that approach simply doesn't work on a device with no mouse pointer. In the same way, you'll want to put more space between elements that users can tap, making sure that those elements are large, because most human fingers just aren't very precise. Test this change on the actual device, too; I've had great success tapping things in a simulator that my finger just can't hit on the real thing.
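You can automate a crude version of the tap-target check. Apple's human interface guidelines suggest touch targets of roughly 44 points on a side; the sketch below treats that as 44 CSS pixels, which is a simplification, and the selector list and URL are placeholders:

```python
# Flag clickable elements that may be too small for a fingertip.
# The 44-pixel minimum approximates Apple's ~44-point guidance.
from selenium import webdriver
from selenium.webdriver.common.by import By

MIN_SIZE = 44  # CSS pixels; a simplifying assumption

driver = webdriver.Chrome()
driver.get("https://example.com/")  # placeholder; use your application's URL

for element in driver.find_elements(By.CSS_SELECTOR, "a, button, input, select"):
    if not element.is_displayed():
        continue
    size = element.size  # {'width': ..., 'height': ...}
    if size["width"] < MIN_SIZE or size["height"] < MIN_SIZE:
        print(f"Small tap target: <{element.tag_name}> "
              f"{size['width']}x{size['height']}px {element.text[:40]!r}")
driver.quit()
```

A flagged element isn't automatically a bug, but it's a fine candidate for a finger test on real hardware.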
This list should get you started, but if you want to take a deeper dive into new testing ideas for mobile applications, I highly recommend Jonathan Kohl's article "Test Mobile Applications with I SLICED UP FUN."
Tools and Tricks
Not many mature test-automation tools are available for mobile devices. One thing you might try for iOS is http://www.iphonetester.com, an iPhone simulator that runs in a personal computer's browser. Driving a PC browser allows you to test the application with a tool like Selenium. This approach won't help you with sudden drops in bandwidth, low power or memory, or any of the other new challenges of mobile devices, but if your version of Safari matches the version on the handheld device, you might at least be able to find major application failures and rendering issues.
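Along the same lines, a desktop browser can masquerade as a handheld well enough to catch gross failures. Here's a minimal Selenium sketch using Chrome's mobile-emulation option; note that this is a stand-in (it renders with Chrome's engine, not mobile Safari's), and the viewport numbers, user-agent string, and URL are all illustrative placeholders:

```python
# Drive a desktop Chrome with an iPhone-sized viewport and an
# iPhone-style user agent. Good for catching gross rendering and
# functional failures; no substitute for a real device.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {
    "deviceMetrics": {"width": 375, "height": 667, "pixelRatio": 2.0},
    "userAgent": ("Mozilla/5.0 (iPhone; CPU iPhone OS like Mac OS X) "
                  "AppleWebKit (KHTML, like Gecko) Mobile Safari"),
})

driver = webdriver.Chrome(options=options)
driver.get("https://example.com/")  # placeholder; use your application's URL
assert driver.title, "Page failed to load a title at mobile size"
driver.save_screenshot("mobile_view.png")  # eyeball this for cut-off layouts
driver.quit()
```

Hook a handful of these checks into your build, and the most embarrassing "it's unusable on my phone" bugs get caught before a human ever picks up a device.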
Beyond driving a browser on a personal computer, most vendors make simulators, and testing with simulators for the iPhone, Android, and BlackBerry may save you a fair bit of headache and pain over testing on the physical devices.
If you need more power, you may be able to test real devices, not simulators, in different places in the world on real wireless networks. There are two basic ways to do this: crowdsource your test efforts, or use a mobile host-device provider.
To do crowdsource testing, you hire a third party, such as uTest or Mob4Hire, to collect a huge number of testers in a database. When your QA environment is up, you make it public to the world, give the login keys to your host, and they "turn on the tap" of testers. One common pattern is to turn on the tap on Friday night, to get final information Sunday night for a Monday go-live. This is extra, bonus testing that won't slow down the project. A second approach is to turn on the tap at go-live, hoping your testers find the bugs before the customers do.
Mobile device-hosting companies have data center space all over the world, connecting real physical devices on live wireless networks to a video camera and control software. That setup allows you to rent devices by the minute and explore them from your office, home, or anywhere else with an Internet connection. Perfecto Mobile is the first company I know of in this growing space; it won't be the last.
Putting It All Together
We've identified a core system, figured out how to test it, and negotiated schedule, scope, and expectations for support. We've built a test strategy that combines traditional testing, the common failure modes of mobile devices, and a few emerging technologies you might use to accelerate your testing or gain confidence faster.
In the end, the problem of any new testing regimen is the same as with traditional testing: We always have too many combinations of things to test, and not enough time. Our challenge is to look at the time we do have, run the most effective tests right now, determine what those tests can tell us about the software, and plan for what and how to approach tomorrow.
For now, mobile application testing will be about rolling with the punches, learning new techniques and technologies—and trying to keep one step ahead of chaos.
Did someone just say "Kindle Fire"? "iPad 4"?
If you'll excuse me, I gotta go test.
A consulting software tester and self-described software process naturalist, Matt Heusser tests, manages, and develops on software projects. Lately Matt has been coaching testers while contributing in short bursts, a sort of "boutique test consulting" that actually leads to quicker time-to-market right now, on this project right here. A contributing editor for Software Test & Quality Assurance Magazine, Matt blogs at "Creative Chaos," sits on the board of directors of the Association for Software Testing, and recently served as lead editor for How to Reduce the Cost of Software Testing (Taylor and Francis, 2011). You can follow Matt on Twitter, email him, or read more on his blog.
References
[1] Gerald M. Weinberg, The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully. Dorset House Publishing, 1986.