Video: Continuous Delivery: Architecture and Process for Acceptance Tests
Transcript
The first principle in test automation is that tests are first-class citizens of your project. This is important in a couple of ways. Firstly, in terms of preventing decay in test code. We've already spoken about how important it is to treat your test code as if it were real production code. Test code is at least as hard to write, maintain and debug as production code, so you need to treat it with the same love and care and respect. That means refactoring relentlessly so that your code is readable, expresses intent clearly and is maintainable. Don't repeat yourself. Anywhere you have code that is duplicated across tests, find ways to remove that duplication, but again, not at the expense of readability. Keeping duplication out is vital to creating well-maintained test code.
Then, perhaps controversially, don't use record-and-playback tools to build your test suites. With these kinds of tools you record yourself clicking through the UI and then play it back to run the test. The problem with those tools is twofold. Firstly, they tightly couple your tests to the way the UI is organized. If you start moving UI elements or changing the layout of screens, suddenly your test suites fail and you have to go back and fix them. Secondly, as I mentioned earlier, many of these tools have licenses which prevent developers from using them. In general I've found that unless you're very, very careful, suites of tests built with record-and-playback tools are very painful and expensive to maintain, and it's very hard for developers to actually reason about them and reproduce them.
One idea that's been very important in the development of test automation is the idea of executable specifications. This is the idea that we can use natural language to express the acceptance criteria for our system under test while using a general-purpose programming language to express the test mechanics. Using a general-purpose programming language is important because there are tools that help you maintain that code and keep it refactored: you've got IDEs which make it very easy to refactor your code base. We should be applying those tools to our test suites in order to keep them maintainable. Then use a tool that allows you to operate in either domain transparently.
There are a number of tools that let you operate in this way: Cucumber, Gherkin, Concordion, which are all open source, and I'm going to show you a relatively new open source tool right now called Gauge. Gauge is a tool that lets you write scenarios in plain text in Markdown format. What you can see here is a scenario I wrote just for this example. Then you write the code which actually drives the application under test in the language of your choice. Gauge has runners for Java, C#, Ruby, and you can write your own runners. What Gauge lets you do is run a command on the command line and it executes these specifications.
What you can see here on screen is a specification for buying a book. There's an initial step which sets up a user with a balance of $39.99 in their account, and then I have a scenario which is to buy a product: we're going to buy the Continuous Delivery book and make sure we have zero dollars left in our account. When I run the tool, it's going to look for some code that will actually execute these steps. This is some Java code I wrote which does the real work, and you can see that each method in this code has an annotation. The annotation matches the text in the scenario file. Creating the user runs this method, which creates a new account manager and a new shopping cart; buying the product goes and looks for a product with the specified title and then buys it. Then validating at the end that the user has zero dollars left executes an assertion to make sure there's no money left in the account.
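To make the mechanics concrete, here is a minimal sketch of what Gauge step implementations of that kind could look like in Java. The spec wording quoted in the comments and the AccountManager and ShoppingCart helpers are illustrative stand-ins for the code shown in the video, not the actual example.

```java
import com.thoughtworks.gauge.Step;
import java.math.BigDecimal;

// Sketch of Gauge step implementations. AccountManager and ShoppingCart are
// hypothetical test-side domain objects standing in for the real ones.
public class BuyProductSteps {

    private AccountManager accountManager;
    private ShoppingCart cart;

    // Matches a spec step such as: * Create a user with a balance of "39.99"
    @Step("Create a user with a balance of <balance>")
    public void createUser(String balance) {
        accountManager = new AccountManager();
        accountManager.createUser("test-user", new BigDecimal(balance));
        cart = new ShoppingCart(accountManager);
    }

    // Matches: * Buy the book "Continuous Delivery"
    @Step("Buy the book <title>")
    public void buyBook(String title) {
        cart.addProductByTitle(title);
        cart.checkout();
    }

    // Matches: * The remaining balance should be "0.00"
    @Step("The remaining balance should be <expected>")
    public void assertBalance(String expected) {
        if (accountManager.balanceOf("test-user").compareTo(new BigDecimal(expected)) != 0) {
            throw new AssertionError("Expected remaining balance of " + expected);
        }
    }
}
```

The annotation text is the contract: Gauge matches each step in the Markdown scenario against an @Step annotation, passing quoted values in the spec as the angle-bracketed parameters.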
This is very powerful. Testers and business people can collaborate on editing the scenario files without having to know anything about the code, and the developers can focus on making sure the code works and does what it's supposed to do. We can execute these specifications quite literally. However, there is an overhead to doing this. I need to make sure that the annotation text matches up with the text in the scenarios, and as you change those scenarios you're going to need to make sure that the text in the annotations stays up to date. If I took this test and expressed it just as a JUnit method, you'd see that the code is much shorter and more concise. I've just got a method, buying products debits account, which executes exactly the same code, and it's shorter and easier to read.
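For comparison, here is a sketch of the same scenario written directly as a JUnit test, using the same hypothetical AccountManager and ShoppingCart helpers as above:

```java
import java.math.BigDecimal;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The same scenario without the indirection through annotated step text.
class BuyProductTest {

    @Test
    void buyingProductsDebitsAccount() {
        AccountManager accountManager = new AccountManager();
        accountManager.createUser("test-user", new BigDecimal("39.99"));
        ShoppingCart cart = new ShoppingCart(accountManager);

        cart.addProductByTitle("Continuous Delivery");
        cart.checkout();

        assertEquals(0, accountManager.balanceOf("test-user").compareTo(BigDecimal.ZERO),
                "account should be fully debited");
    }
}
```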
The executable specifications methodology has some real benefits. It means that if you have non-technical users who you're working with to define your specifications, you can work with them using just the text. That's a very powerful communication tool. If you need to have your specifications clearly listed as part of your product delivery process, or you're working in a regulated domain, being able to take what's in the code and turn it into actual, real specifications programmatically is also very, very powerful. You really want to avoid the situation where the specifications are in a document that's not regularly updated and doesn't stay in sync with the code. It's very easy for those kinds of documents to drift out of date as we introduce changes to the code, and this is a really nice solution to that problem. But if you don't need any of those things, if you're just knocking up a quick product, or if you don't have non-technical users involved in developing your product or service, then the overhead might not be worth it and it might be better to use a simple tool like JUnit to write your automated acceptance tests.
One thing which is definitely essential, whichever of these methods you use, is to decouple the test scripts from the code that actually drives the system under test. Underneath both this layer and the layer in the executable specifications example we've got a product catalog object which knows how to interact with the system under test by actually clicking on text boxes and buttons and reading back what happens in the browser. Here we can see that there's no interaction with the system under test in this layer of the code base. All we're doing is interacting with the domain objects on the test side, feeding them information about the initial state and validating that the system is in the expected state at the end. All the interaction with the system under test is in this separate layer.
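As a rough illustration of that layering, the scenario-level code might look something like the sketch below. ProductCatalog here is a hypothetical driver object that hides all the browser interaction; nothing at this level touches the UI directly.

```java
import java.math.BigDecimal;

// Scenario layer: talks only to test-side objects. All clicking on text boxes
// and buttons lives inside the ProductCatalog driver, not here.
class BuyProductScenario {

    private final ProductCatalog catalog;    // driver layer: knows about the UI
    private final AccountManager accounts;   // test-side domain object

    BuyProductScenario(ProductCatalog catalog, AccountManager accounts) {
        this.catalog = catalog;
        this.accounts = accounts;
    }

    void buyTheBook() {
        accounts.createUser("test-user", new BigDecimal("39.99"));   // set up initial state
        catalog.buy("Continuous Delivery");                          // domain-level action
        if (accounts.balanceOf("test-user").compareTo(BigDecimal.ZERO) != 0) {
            throw new AssertionError("Expected the account to be fully debited");
        }
    }
}
```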
This is an example of an application of a pattern called the page object pattern. In the page object pattern we write a separate class for every page in the system that we want to interact with. This example is for a login page. We have a method to log in with a username and password, and another method to log in where we're expecting an error, where again we pass in the username and the password. The idea is to make sure that if we make changes to the UI of the system under test, we only need to change our test code in one place, right here. I don't need to go and update all the test scripts, specifications or the top-level test code where we actually go through the scenario; I just need to change the one place in the code which interacts with those particular UI elements.
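A minimal sketch of such a page object using Selenium WebDriver might look like this; the element locators are assumptions for illustration, not the ones from the video:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the login page: the only place in the test code base that
// knows which UI elements exist on that page and how to drive them.
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Log in and expect it to succeed.
    public void loginWith(String username, String password) {
        submitCredentials(username, password);
    }

    // Log in with bad credentials and return the error message shown to the user.
    public String loginExpectingError(String username, String password) {
        submitCredentials(username, password);
        return driver.findElement(By.cssSelector(".error-message")).getText();
    }

    // If the layout changes, only these locators need updating.
    private void submitCredentials(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}
```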
That's one very powerful benefit of the page object pattern. Another benefit is that it allows me to reuse the same test scripts and scenarios against multiple different implementations of the system. For example, I could have a login page, as you can see here, which interacts with a web front end via Selenium. I could write another implementation which interacts via a service layer over HTTP, not via the UI. By turning this into an interface and having multiple different implementations, I can use the same tests and scenarios against both the UI layer and the service layer. I could even write an implementation which talks to, say, an iPhone app or an Android app, and reuse the same scenarios against multiple different devices. It's a pattern that allows you to reuse test scripts against multiple different endpoints.
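One way to sketch that: extract the login operations into an interface, let the Selenium-backed LoginPage above be one implementation, and add an HTTP-backed one alongside it. The /login endpoint and form payload used below are assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The operations the scenarios need, independent of how they are carried out.
// A Selenium-backed implementation (like LoginPage above) and this HTTP-backed
// one can both sit behind the same interface.
interface LoginActions {
    void loginWith(String username, String password);
    String loginExpectingError(String username, String password);
}

// Drives the service layer over HTTP instead of the browser.
class ApiLoginActions implements LoginActions {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    ApiLoginActions(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public void loginWith(String username, String password) {
        if (post(username, password).statusCode() != 200) {
            throw new AssertionError("Expected login to succeed");
        }
    }

    @Override
    public String loginExpectingError(String username, String password) {
        return post(username, password).body();   // error message returned by the service
    }

    private HttpResponse<String> post(String username, String password) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "username=" + username + "&password=" + password))
                    .build();
            return http.send(request, HttpResponse.BodyHandlers.ofString());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```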
I'm going to change tack a bit and talk about the process behind implementing automated tests. When you come up with a new idea, story or requirement, you'll start by describing what the value of that story is and also defining acceptance criteria, the statements by which you know that the work will be done. Those are typically written by customers, analysts and testers working together. However, actually implementing the test code is going to be done by developers and testers working together. In some traditional organizations all the test scripts are written and maintained by testers. This is very problematic, because unless developers and testers work together it's very hard for developers to learn how to write maintainable, testable code. Having the developers see the system from the point of view of the tester, the person interacting with the system, and helping to design the tests is really powerful in helping them build testable software.
If you write the software and then the testers come along afterwards and build the test suites, it's often quite hard to build test suites against software that's not been designed to be testable. Having that feedback loop of developers and testers working together is really powerful in making sure the software is testable and that the test suites are maintainable. Having the testers in a different room, a different building or a different country, or even outsourced, in my experience always reduces quality. It always makes for poorer quality software. Having the testers and developers on the same team collaborating on developing the tests leads to much better outcomes, because it means the software is more testable, the developers understand how to write maintainable test suites, the test suites are better maintained and the developers pay attention to them.
My recommendation is that developers and testers should actually pair-program on test implementation. This has the advantage that the testers don't need to already understand test automation. One of the problems I've seen is that testers are afraid of adopting test automation because it's a whole bunch of new skills they have to learn. If you pair them up with developers, that's a great way for them to work together, learn those new skills and still add a ton of value to the process. Developers also learn a lot about what it means to test your code effectively, and they learn a lot about the user's viewpoint, which is what the tester brings to that process.
I want to talk a little bit more about the role of a tester or quality analyst. The tester is a role, not a person. You don't necessarily need people who are dedicated to testing; you can have developers play the testing role on other people's work, typically because people are bad at testing their own work. That's a perfectly acceptable thing to do. Testers are not failed developers. Unfortunately this is a very commonly held view, but when you consider that maintaining test code is as hard as maintaining production code, the idea that you should put somebody less skilled on that job is crazy. Testers have a unique set of skills, and it's important that we treat them with the same respect as we treat developers.
The main job of a tester is to advocate for the user and to make the quality of the system transparent, so the team can make effective decisions around their test automation and their test strategy. They're not primarily working on manual regression testing. This is a case where the computers are laughing at us if we have human beings manually doing regression testing against the system under test; that's something that should be automated. What testers should be doing is focusing on exploratory testing and helping to build and maintain the suites of automated tests. That should be their main work.
Takeaways from this section of the course. It's important to make sure that your acceptance tests are running before you can say that you're done with any functionality; being able to say that you're done requires that you complete the tests. I'm not religious about writing acceptance tests before you write the code. I've seen it work well either way, writing the acceptance tests before or afterwards, but the important thing is that you should finish the acceptance tests before you say you're done with the functionality. It's very important to use a nicely layered architecture for your test harness and to encapsulate interaction with the system under test using a pattern like the page object pattern. Those acceptance tests must be owned and maintained by the whole team; they're not just the responsibility of the testers. Where tests are left solely to the testers, the developers often just don't pay attention to them, and then they degrade, become very painful to maintain and often end up having zero value.