- 12.1 Understanding
- 12.2 Defects
- 12.3 Bisection
- 12.4 Conclusion
12.2 Defects
I once started a new job at a small software startup. I soon asked my co-workers if they’d like to use test-driven development. They hadn’t used it before, but they were keen on learning new things. After I’d shown them the ropes, they decided that they liked it.
A few months after we’d adopted test-driven development, the CEO came by to talk to me. He mentioned in passing that he’d noticed that since we’d started using tests, defects in the wild had significantly dropped.
That still makes me proud to this day. The shift in quality was so dramatic that the CEO had noticed. Not by running numbers or doing a complex analysis, but simply because it was so significant that it called attention to itself.
You can reduce the number of defects, but you can’t eliminate them entirely. Do yourself a favour, though: don’t let them accumulate.
Zero bugs isn’t as unrealistic as it sounds. In lean software development, this is known as building quality in [82]. Don’t push defects in front of you to ‘deal with them later’. In software development, later is never.
When a bug appears, make it a priority to address it. Stop what you’re doing and fix the defect instead.
12.2.1 Reproduce Defects as Tests
Initially, you may not even understand what the problem is. Once you think that you do, treat that understanding scientifically: it should enable you to formulate a hypothesis, which in turn enables you to design an experiment.
Such an experiment may be an automated test. The hypothesis is that when you run the test, it’ll fail. When you actually do run the test, if it does fail, you’ve validated the hypothesis. As a bonus, you also have a failing test that reproduces the defect, and that will later serve as a regression test.
If, on the other hand, the test succeeds, the experiment failed. This means that your hypothesis was wrong. You’ll need to revise it so that you can design a new experiment. You may need to repeat this process more than once.
When you finally have a failing test, ‘all’ you have to do is to make it pass. This can occasionally be difficult, but in my experience, it usually isn’t. The hard part of addressing a defect is understanding and reproducing it.
I’ll show you an example from the online restaurant reservation system. While I was doing some exploratory testing I noticed something odd when I updated a reservation. Listing 12.1 shows an example of the issue. Can you spot the problem?
The problem is that the email property holds the name, and vice versa. It seems that I accidentally switched them around somewhere. That’s the initial hypothesis, but it may take a little investigation to figure out where.
Haven’t I been following test-driven development? Then how could this happen?
Listing 12.1 Updating a reservation with a PUT request. A defect is manifest in this interaction. Can you spot it?
PUT /reservations/21b4fa1975064414bee402bbe09090ec HTTP/1.1
Content-Type: application/json

{
  "at": "2022-03-02 19:45",
  "email": "pan@example.com",
  "name": "Phil Anders",
  "quantity": 2
}

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "id": "21b4fa1975064414bee402bbe09090ec",
  "at": "2022-03-02T19:45:00.0000000",
  "email": "Phil Anders",
  "name": "pan@example.com",
  "quantity": 2
}
This could happen because I’d implemented SqlReservationsRepository as a Humble Object [66]. This is an object so simple that you may decide not to test it. I often use the rule of thumb that if the cyclomatic complexity is 1, a test (also with a cyclomatic complexity of 1) may not be warranted.
Even so, you can make mistakes when the cyclomatic complexity is 1. Listing 12.2 shows the offending code. Can you spot the problem?
Given that you already know what the problem is, you can probably guess that the Reservation constructor expects the email argument before the name. Since both parameters are declared as string, though, the compiler doesn’t complain if you accidentally swap them. This is another example of stringly typed code [3], which we should avoid.
Listing 12.2 The offending code fragment that causes the defect shown in listing 12.1. Can you spot the programmer error? (Restaurant/d7b74f1/Restaurant.RestApi/SqlReservationsRepository.cs)
using var rdr = await cmd.ExecuteReaderAsync().ConfigureAwait(false);
if (!rdr.Read())
    return null;

return new Reservation(
    id,
    (DateTime)rdr["At"],
    (string)rdr["Name"],
    (string)rdr["Email"],
    (int)rdr["Quantity"]);
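Stronger types catch this class of mistake at compile time. The Email and Name types that appear later in listing 12.3 do exactly that; the implementations below are my own simplified assumptions, sketched only to show the principle:

```csharp
using System;

// Minimal wrapper types. Each holds a single string, but because
// Email and Name are distinct types, the compiler now rejects a
// swapped argument list instead of silently accepting it.
public sealed record Email(string Address);
public sealed record Name(string Value);

public sealed record Reservation(
    Guid Id,
    DateTime At,
    Email Email,
    Name Name,
    int Quantity);

// The equivalent of the defect in listing 12.2 no longer compiles:
// passing a Name where an Email is expected is a type error.
```

With this design, the swapped-arguments bug becomes impossible rather than merely unlikely.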
It’s easy enough to address the defect, but if I can make the mistake once, I can make it again. Thus, I want to prevent a regression. Before fixing the code, write a failing test that reproduces the bug. Listing 12.3 shows the test I wrote. It’s an integration test that verifies that if you update a reservation in the database and subsequently read it, you should receive a reservation equal to the one you saved. That’s a reasonable expectation, and it reproduces the error because the ReadReservation method swaps name and email, as shown in listing 12.2.
That PutAndReadRoundTrip test is an integration test that involves the database. This is new. So far in this book, all tests have been running without external dependencies. Involving the database is worth a detour.
12.2.2 Slow Tests
Bridging the gap between a programming language’s perspective on data and a relational database is error-prone, so why not test such code?
In this subsection, you’ll see an outline of how to do that, but there’s a problem: such tests tend to be slow, often orders of magnitude slower than in-process tests.
Listing 12.3 Integration test of SqlReservationsRepository. (Restaurant/645186b/Restaurant.RestApi.SqlIntegrationTests/SqlReservationsRepositoryTests.cs)
[Theory]
[InlineData("2032-01-01 01:12", "z@example.net", "z", "Zet", 4)]
[InlineData("2084-04-21 23:21", "q@example.gov", "q", "Quu", 9)]
public async Task PutAndReadRoundTrip(
    string date,
    string email,
    string name,
    string newName,
    int quantity)
{
    var r = new Reservation(
        Guid.NewGuid(),
        DateTime.Parse(date, CultureInfo.InvariantCulture),
        new Email(email),
        new Name(name),
        quantity);
    var connectionString = ConnectionStrings.Reservations;
    var sut = new SqlReservationsRepository(connectionString);

    await sut.Create(r);
    var expected = r.WithName(new Name(newName));
    await sut.Update(expected);

    var actual = await sut.ReadReservation(expected.Id);

    Assert.Equal(expected, actual);
}
The time it takes to execute a test suite matters, particularly for developer tests that you continually run. When you refactor with the test suite as a safety net, it doesn’t work if it takes half an hour to run all tests. When you follow the Red Green Refactor process for test-driven development, it doesn’t work if running the tests takes five minutes.
The maximum time for such a test suite should be ten seconds. If it’s much longer than that, you’ll lose focus. You’ll be tempted to look at your email, Twitter, or Facebook while the tests run.
You can easily eat into such a ten-second budget if you involve a database. Therefore, move such tests to a second stage of tests. There are many ways you can do this, but a pragmatic way is to simply create a second Visual Studio solution to exist side-by-side with the day-to-day solution. When you do that, remember to also update the build script to run this new solution instead, as shown in listing 12.4.
Listing 12.4 Build script running all tests. The Build.sln file contains both unit and integration tests that use the database. Compare with listing 4.2. (Restaurant/645186b/build.sh)
#!/usr/bin/env bash
dotnet test Build.sln --configuration Release
The Build.sln file contains the production code, the unit test code, and the integration tests that use the database. I do day-to-day work that doesn’t involve the database in another Visual Studio solution called Restaurant.sln. That solution contains only the production code and the unit tests, so running all tests in that context is much faster.
The test in listing 12.3 is part of the integration test code, so it only runs when I run the build script, or when I explicitly choose to work in the Build.sln solution instead of Restaurant.sln. It’s sometimes practical to do that, if I need to perform a refactoring that involves the database code.
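A second solution isn’t the only way to stage slow tests. As a sketch of an alternative, you could tag the slow tests with an xUnit.net trait and filter on it from the command line; the trait name SlowTests here is my own invention, not a convention from the book’s code base:

```csharp
using Xunit;

// Hypothetical convention: tag database-bound test classes with a
// trait, so the test runner can include or exclude them by filter.
[Trait("Category", "SlowTests")]
public class SqlReservationsRepositoryTests
{
    // ...integration tests that hit the database...
}

// Day-to-day, run only the fast tests:
//   dotnet test --filter "Category!=SlowTests"
// In the build script, run everything:
//   dotnet test
```

The trade-off is that everything stays in one solution, at the cost of having to remember the filter argument.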
I don’t want to go into too much detail about how the test in listing 12.3 works, because it’s specific to how .NET interacts with SQL Server. If you’re interested in the details, they’re all available in the accompanying example code base, but briefly, all the integration tests are adorned with a [UseDatabase] attribute. This is a custom attribute that hooks into the xUnit.net unit testing framework to run some code before and after each test case. Thus, each test case is surrounded with behaviour like this:
1. Create a new database and run all DDL scripts against it.
2. Run the test.
3. Tear down the database.
Yes, each test creates a new database only to delete it again some milliseconds later. That is slow, which is why you don’t want such tests to run all the time.
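The details of [UseDatabase] are in the example code base, but the general shape is worth sketching. xUnit.net’s BeforeAfterTestAttribute base class is the framework’s extension point for running code around each test case; an attribute built on it might look like this (the DatabaseHelper methods are placeholders of my own, not the book’s actual code):

```csharp
using System.Reflection;
using Xunit.Sdk;

// Sketch of a [UseDatabase]-style attribute. Deriving from xUnit.net's
// BeforeAfterTestAttribute makes the framework invoke Before and After
// around every test case the attribute decorates.
public sealed class UseDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        // Hypothetical helper: create a fresh database and run all
        // DDL scripts against it.
        DatabaseHelper.CreateDatabaseWithSchema();
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // Tear the database down again after the test case.
        DatabaseHelper.DeleteDatabase();
    }
}

// Placeholder for the real work; the book's example code base
// contains the actual implementation.
internal static class DatabaseHelper
{
    internal static void CreateDatabaseWithSchema() { /* CREATE DATABASE + DDL */ }
    internal static void DeleteDatabase() { /* DROP DATABASE */ }
}
```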
Defer slow tests to a second stage of your build pipeline. You can do it as outlined above, or by defining new steps that only run on your Continuous Integration server.
12.2.3 Non-deterministic Defects
After running the restaurant reservation system for some time, the restaurant’s maître d’ files a bug: once in a while, the system seems to allow overbooking. She can’t deliberately reproduce the problem, but the state of the reservations database can’t be denied. Some days contain more reservations than the business logic shown in listing 12.5 allows. What’s going on?
You peruse the application logs and finally figure it out. The overbooking is possible because of a race condition. If a day is approaching capacity and two reservations arrive simultaneously, the ReadReservations method might return the same set of rows to both threads, indicating that a reservation is possible. As figure 12.2 shows, each thread then determines that it can accept the reservation, so each adds a new row to the table of reservations.
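The essence of that race is a check-then-act sequence that isn’t serialised. You can model it in miniature; the following sketch is my own, not part of the restaurant code base. The lock serialises the read-check-write sequence, which is exactly the property the production code in listing 12.5 lacks:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Miniature model of the defect: many clients concurrently perform
// 'read current count, check capacity, write'. Serialising those
// three steps with a lock guarantees the count never exceeds capacity.
public static class TinyBookingModel
{
    private static readonly object Gate = new object();

    public static int Book(int capacity, int clients)
    {
        var reservations = 0;
        var tasks = Enumerable.Range(0, clients).Select(_ => Task.Run(() =>
        {
            lock (Gate) // serialises the read-check-write sequence
            {
                if (reservations < capacity)
                    reservations++;
            }
        }));
        Task.WaitAll(tasks.ToArray());
        return reservations;
    }
}
```

Remove the lock statement and the same code can occasionally overbook, which is the behaviour the maître d’ reported.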
This is clearly a defect, so you should reproduce it with a test. The problem is, however, that this behaviour isn’t deterministic. Automated tests are supposed to be deterministic, aren’t they?
It is, indeed, best if tests are deterministic, but do entertain, for a moment, the notion that nondeterminism may be acceptable. In which way could this be?
Tests can fail in two ways:

- A test may indicate a failure where none is; this is called a false positive.
- A test may also fail to indicate an actual error; this is called a false negative.
Listing 12.5 Apparently, there’s a bug in this code that allows overbooking. What could be the problem? (Restaurant/dd05589/Restaurant.RestApi/ReservationsController.cs)
[HttpPost]
public async Task<ActionResult> Post(ReservationDto dto)
{
    if (dto is null)
        throw new ArgumentNullException(nameof(dto));

    var id = dto.ParseId() ?? Guid.NewGuid();
    Reservation? r = dto.Validate(id);
    if (r is null)
        return new BadRequestResult();

    var reservations = await Repository
        .ReadReservations(r.At)
        .ConfigureAwait(false);
    if (!MaitreD.WillAccept(DateTime.Now, reservations, r))
        return NoTables500InternalServerError();

    await Repository.Create(r).ConfigureAwait(false);
    await PostOffice.EmailReservationCreated(r).ConfigureAwait(false);

    return Reservation201Created(r);
}
Figure 12.2 A race condition between two threads (e.g. two HTTP clients) concurrently trying to make a reservation.
False positives are problematic because they introduce noise, and thereby decrease the signal-to-noise ratio of the test suite. If you have a test suite that often fails for no apparent reason, you stop paying attention to it [31].
False negatives aren’t quite as bad. Too many false negatives could decrease your trust in a test suite, but they introduce no noise. Thus, at least, you know that if a test suite is failing, there is a problem.
One way to deal with the race condition in the reservation system, then, is to reproduce it as the non-deterministic test in listing 12.6.
Listing 12.6 Non-deterministic test that reproduces a race condition. (Restaurant/98ab6b5/Restaurant.RestApi.SqlIntegrationTests/ConcurrencyTests.cs)
[Fact]
public async Task NoOverbookingRace()
{
    var start = DateTimeOffset.UtcNow;
    var timeOut = TimeSpan.FromSeconds(30);
    var i = 0;
    while (DateTimeOffset.UtcNow - start < timeOut)
        await PostTwoConcurrentLiminalReservations(
            start.DateTime.AddDays(++i));
}
This test method is only an orchestrator of the actual unit test. It keeps running the PostTwoConcurrentLiminalReservations method in listing 12.7 for 30 seconds, over and over again, to see if it fails. The assumption, or hope, is that if it can run for 30 seconds without failing, the system may actually have the correct behaviour.
There’s no guarantee that this is the case. If the race condition is as rare as hen’s teeth, this test could produce a false negative. That’s not my experience, though.
When I wrote this test, it only ran for a few seconds before failing. That gives me some confidence that the 30-second timeout is a sufficiently safe margin, but I admit that I’m guessing; it’s another example of the art of software engineering.
It turned out that the system had the same bug when updating existing reservations (as opposed to creating new ones), so I also wrote a similar test for that case.
Listing 12.7 The actual test method orchestrated by the code in listing 12.6. It attempts to post two concurrent reservations. The state of the system is that it’s almost sold out (the capacity of the restaurant is ten, but nine seats are already reserved), so only one of those reservations should be accepted. (Restaurant/98ab6b5/Restaurant.RestApi.SqlIntegrationTests/ConcurrencyTests.cs)
private static async Task PostTwoConcurrentLiminalReservations(
    DateTime date)
{
    date = date.Date.AddHours(18.5);
    using var service = new RestaurantService();

    var initialResp =
        await service.PostReservation(new ReservationDtoBuilder()
            .WithDate(date)
            .WithQuantity(9)
            .Build());
    initialResp.EnsureSuccessStatusCode();

    var task1 = service.PostReservation(new ReservationDtoBuilder()
        .WithDate(date)
        .WithQuantity(1)
        .Build());
    var task2 = service.PostReservation(new ReservationDtoBuilder()
        .WithDate(date)
        .WithQuantity(1)
        .Build());
    var actual = await Task.WhenAll(task1, task2);

    Assert.Single(actual, msg => msg.IsSuccessStatusCode);
    Assert.Single(
        actual,
        msg => msg.StatusCode == HttpStatusCode.InternalServerError);
}
These tests are examples of slow tests that ought to be included only as second-stage tests as discussed in subsection 12.2.2.
There are various ways you can address a defect like the one discussed here. You can reach for the Unit of Work [33] design pattern. You can also deal with the issue at the architectural level, by introducing a durable queue with a single-threaded writer that consumes the messages from it. In any case, you need to serialise the reads and the writes involved in the operation.
I chose to go for a pragmatic solution: use .NET’s lightweight transactions, as shown in listing 12.8. Surrounding the critical part of the Post method with a TransactionScope effectively serialises the reads and writes. That solves the problem.
Listing 12.8 The critical part of the Post method is now surrounded with a TransactionScope, which serialises the read and write methods. The highlighted code is new compared to listing 12.5. (Restaurant/98ab6b5/Restaurant.RestApi/ReservationsController.cs)
using var scope = new TransactionScope(
    TransactionScopeAsyncFlowOption.Enabled);

var reservations = await Repository
    .ReadReservations(r.At)
    .ConfigureAwait(false);
if (!MaitreD.WillAccept(DateTime.Now, reservations, r))
    return NoTables500InternalServerError();

await Repository.Create(r).ConfigureAwait(false);
await PostOffice.EmailReservationCreated(r).ConfigureAwait(false);

scope.Complete();
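A detail that listing 12.8 relies on: unless you specify otherwise, TransactionScope uses the Serializable isolation level, which is what actually prevents the two interleaved read-then-write sequences. If you prefer to make that choice explicit, you can pass TransactionOptions; here’s a sketch:

```csharp
using System.Transactions;

// Sketch: the same scope, with the (default) Serializable isolation
// level spelled out explicitly via TransactionOptions.
using var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = IsolationLevel.Serializable,
        Timeout = TransactionManager.DefaultTimeout
    },
    TransactionScopeAsyncFlowOption.Enabled);

// ...the reads and writes go here, as in listing 12.8...

scope.Complete();
```

Spelling it out protects you against a teammate ‘relaxing’ the isolation level without realising that the overbooking defect would then return.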
In my experience, most defects can be reproduced as deterministic tests, but there’s a residual that eludes this ideal. Multithreaded code infamously falls into that category. Of two evils, I prefer nondeterministic tests over no test coverage at all. Such tests will often have to run until they time out in order to give you confidence that they’ve sufficiently exercised the test case in question. You should, therefore, put them in a second stage of tests that only runs on demand and as part of your deployment pipeline.