An Interview with Dave Thomas and Andy Hunt, the Pragmatic Programmers
Larry O’Brien: The name “Pragmatic Programmers” is “opinionated,” to use a word that Rails popularized. Why “programmer” rather than “software developer” or “software engineer”?
Dave Thomas: Software developer is a $10 word for a $2 job: the inflation in programming job titles that started in the '80s and has continued through to today has always amused me. When asked, I'll typically say that I'm a coder, but the alliteration in Pragmatic Programmer works better.
People then say “but a Software Developer is a more responsible person: they do design (or analysis, or whatever).” But saying that just shows they don't understand what coding is. The very idea that there are different roles--and that different people have to fulfill those roles--makes me shudder. Everyone involved should be 100% involved. Of course people will specialize, but hopefully it will be by doing what they're good at, and not because their titles fall into a predefined “tester/coder/programmer/developer” bin. Also, conventional wisdom is that “designer” is a promotion from “developer,” but that's not the way I see it. I'd like to see career paths where a really good coder could stay a coder and still get the recognition and rewards they deserve, while someone who was an OK coder but who loved (say) testing or integration work could specialize and rise through the ranks that way. We shouldn't promote into different roles like that.
Andy Hunt: I actually like the term "software developer," but it's longer to type. I'll accept “programmer” as a suitable synonym.
However, the concept of "engineer" is barking up the wrong tree. There is an engineering component to development, surely, but it's not the dominant aspect. And we oft get into trouble when we pretend that it is, or that some sort of pristine engineering solution lies at the end of a mythical rainbow. It doesn't. This isn't engineering, it's gardening. But "software gardener" as a job title hasn't quite caught on yet, despite it being a more accurate description.
Dave: The idea that software is in any way yet an engineering discipline would be funny if it wasn't for the sad fact that some people actually believe it. :)
Larry: And as far as “pragmatic,” what distinguishes a “pragmatic” programmer or approach to programming from different approaches?
Andy: I view the opposite of pragmatic to be "dogmatic." That covers a whole host of rigid, inflexible approaches (including those that give lip-service to being flexible). It's very easy to slip into a dogmatic approach: we used the techniques before, so surely they'll work now. I read about this methodology in a book, so surely it will work for me. My boss told me to do this, so I will. It's the first thing I thought of, and I always go with that.
A dogmatic approach follows the "rules" (whether individually or institutionally imposed), at the expense of understanding the present context. Context rules the day, and that's at the heart of pragmatism: seeking that which works in the here and now, and an unflinching ability to change approach as needed.
Dave: We say that pragmatic means "doing what works and working at what to do." It means you don't approach every job as a nail just because you know how to hammer, but that instead you look at what you have on hand, what needs doing, and then experiment in applying one to the other.
Being pragmatic isn't a different approach from the others: it means that you know what the other approaches are, and then you choose the one that works for the situation at hand.
Larry: But what about a pragmatic decision that incurs technical debt? To switch to the gardening metaphor, if you don’t rather dogmatically insist on keeping up with the pruning, isn’t it common for management to ask “So we can trade off those things for work on lovely new features?” “Yes, but we can achieve better velocity over the long term...” “Great. Let’s do the lovely new features.” And doesn’t that generally lead to the same-old problems, finger-pointing, and condemnation?
Dave: Then I'd argue that it isn't working, and you shouldn't be doing it (“Doctor, it hurts when I do this”). Even if one believes that there's some objective "best way," the pragmatic approach doesn't seek to use it. Instead, pragmatism is always contextual.
Andy: At the risk of getting tons of hate-email, sure, you may well make a decision to deliberately incur some level of technical debt. Technical debt is like financial debt; you need to manage it well. That's different from eliminating it altogether. The risks are great -- not only are you running with the debt load, but you run the risk of collateral damage to other areas of the project as well. So it's certainly not something to take lightly, but that doesn't mean it might not be the right answer.
Dave: Having said that, I also think it's not pragmatic to adopt the "I was just following orders" defense and produce bad software. Developers have a responsibility to their profession as well as to their employers. If you're in a situation where your employer doesn't let you do what needs doing, and if you've tried your level best to fix the situation, then maybe it's time to find a new situation.
Larry: Is it fair to say that one of the major characteristics of “agile practices” is the elevation of “code” above representations of code -- analysis and design diagrams, requirements documents, etc.? Or is that just a side-effect of a more fundamental aspect?
Andy: I don't think "elevation" of code is the right way to look at it. And it's not that "representations of code" take a particular back seat; it's just a recognition of the true purpose and value that all these pieces serve.
Analysis and design serve as a cognitive aid to creating code. They advise the creation of code; they don't dictate it. Once the code has matured past a certain critical mass, the value of these initial thought-doodles evaporates. They were a plan, not a map.
And that's the critical difference between agile and plan-based methodologies. Agile methods follow the terrain as it unfolds, in the spirit of the plan but not valuing the plan over the actual current context of the project, the team, the market, the organization, and so on.
Dave: I don't think it's a fair characterization, simply because "code" really means less and less as programming becomes higher and higher level. When, in Rails, I say "class Customer; has_many :orders; end", I'm coding Ruby, but I'm really making a declarative statement.
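To make that concrete, here is a minimal, framework-free sketch -- not ActiveRecord's actual implementation, just an illustration -- of how a class-level call like has_many can generate behaviour on the spot, which is what makes the line above a declaration rather than step-by-step instructions:

    # A hypothetical stand-in for ActiveRecord: a class-level "macro" that
    # generates an accessor when the class is declared.
    module Associations
      def has_many(name)
        # Define a reader that lazily creates and memoizes a collection.
        define_method(name) do
          @associations ||= {}
          @associations[name] ||= []
        end
      end
    end

    class Customer
      extend Associations   # make has_many available at class level
      has_many :orders      # declarative: says *what* the class has, not *how*
    end

    customer = Customer.new
    customer.orders << "one dozen rubber chickens"
    puts customer.orders.inspect   # => ["one dozen rubber chickens"]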
The key point is that, if they are to be useful, abstractions have to be capable of being validated. And, in practice, that means they have to be executable--if an abstraction can't be executed, it can't easily be validated. And if it can't be validated, why would I want to spend my time, and why would my customer spend their money, coding to it?
It's a cliché, but we've all had projects where we've worked incredibly hard to deliver exactly what the customer asked for. We got them to sign off on every document and every diagram. We tested everything. We were godlike in our CMM adherence. And yet, when we showed them the working system, the customer said "but that's not what I thought it would do."
This isn't a symptom of a stupid customer. No one really knows what they want at the level of detail required by a programmer. I know I don't. When we were developing our online store, working with other developers, I was constantly asking for one thing and then when I got to play with it, changing my mind. It's just being human--we need to kick the tires to know what we think.
And that's where "executable" comes in. If I can rapidly code what the customer asks for, I can show it to them. I say "it's ugly, and it's missing validation (or whatever) but here's a skeleton" and then get their feedback. I haven't built the whole house of cards, and so it's easy to make changes and to learn incrementally. And when I say code here, I don't necessarily mean Java, or Ruby. It could be diagrams. It could be intelligent lego blocks. It could be a scripted performance. But it has to be executable, and to produce results in a way that's meaningful to the end user, the customer.
So the real point of agile on this axis is the elevation of "validatable" above "abstract."
Larry: Even though as a developer I know the hard part is going to be the business logic, I’ve become hesitant about saying “here’s a running skeleton app, let’s use it to validate some logic...” More than once I’ve seen people frustrated at the naturally difficult task of working out a complex piece of business logic and, in their frustration, lashing out at the shortcomings of the skeleton app. The fact that one can see that it’s transferred frustration doesn’t help the very real damage that can result. A developer knows it’s fine to be wrong a hundred times a day, but isn’t there a danger that a domain expert will be threatened by the kind of black-and-white stark results of a skeleton app?
Andy: Perhaps. But if that's the case, the customer or domain expert will be even more frustrated to be working with the finished app that still contains the same domain shortcomings. Find the errors early, and they're cheaper. There's also less skin in the game, so the discussions can be less heated.
Larry: Fewer programmers are spending their days fine-tuning the layout of class structure diagrams--
Andy: I'd call that a code smell!
Larry: -- but UML is semantically precise. When you think of, say, sequence or timing diagrams, is there an argument to be made for tools that work at abstraction levels higher than that of code blocks?
Dave: Sure, as long as they can be executed up front. Some MDD stuff provides that, and that's cool. Circles and arrows are great if they help, and if they let you test what you're doing WITH THE CUSTOMER as you're doing it. (And, remember, no customer really understands UML, so simply showing them an acre of boxes isn't validating the design.)
Andy: Unless the tool directly creates an executable representation, then you'll still be faced with a certain amount of semantic drift between your glorious cherubic visions and the axle-sucking mud of reality. Given that choice, you're better off working with a tool that's in the actual mud.
Dave: The other issue here is that, while UML allows you to be precise, it's irrelevant unless you know precisely what's required. And I say you don't--you'll always be iterating towards Nirvana. So choose whatever tools you find that let you incrementally try out your understanding with the customer.
Larry: Agile approaches emphasize tight collaboration between the client and the developer: iterations that are much shorter than the 18-24 months that were standard in the 80s and 90s, moving away from the developer toiling at odd hours in the darkened office down in the basement, etc. On the other hand, we’ve seen the rise in distributed teams, global outsourcing, and FOSS succeeding with non-hierarchical teams and non-standard management. What are the pragmatics of two truisms: people work best when they’re co-located, but the best people don’t necessarily live in the same place?
Dave: I think that's a false dichotomy. FOSS works with large groups of (often) dispersed programmers. I think a key reason it works is because of the transparency: people can say and do things in an open source project that would get them fired from a conventional job. But, interestingly, when open source teams want to get really productive, you know what they do? They arrange to meet up. They get together for release parties or bug bashes. They take over the hallways of conferences.
Andy: The "best people" are the most motivated people. A crack team of highly skilled, highly motivated people can successfully launch a cardboard and duct-tape spaceship, all from their home offices around the globe. An unmotivated team of average or low skill won't succeed even if they're in the same room with the best equipment and free food.
FOSS succeeds because one of the greatest freedoms is the freedom to throw out code. With enough contributors, you can be picky and choose among the best of several possible implementations or architectures -- not just the first one that someone happened to code. This implicit attitude of abundance over scarcity is critical: too many projects fail because of an inherent attitude that code is scarce. It was hard to write the first time, it will be impossible to write the second time. This leads to brittle, unmaintainable code.
Dave: You could probably argue that the lack of colocation in FOSS projects has driven the development of substitutes for colocation. FOSS projects pioneered the use of version control, social tools, and the like in order to give them the communication they felt they were lacking by not being in the same room.
Andy: Distributed teams of any sort have to make a special point to emphasize communications. Just making that extra effort can make all the difference: a co-located team that doesn't talk to each other, or doesn't communicate well, is at a greater disadvantage than a geo-dispersed team that stays in close communication.
Larry: Fred Brooks argued that there were essential challenges in software development that no management method or tool could automate away -- no silver bullets. Do you agree?
Dave: No one argues with Fred Brooks, so yes.
But, at a deeper level, that's exactly the point of agility (and any other successful method). There are no cookie-cutter solutions. Software is never going to be a deterministic process--at some level, any technology that amplifies the capabilities of its users is bound to have positive feedback loops, and is therefore bound to be unstable. The trick is to embrace it, and to put other feedback systems in place to stop things running away from you.
Andy: Dr. Brooks was right in that what we’re doing is essentially hard. There is no single solution that could magically make it orders of magnitude easier, because again you have to look at the larger context. A successful software project encompasses much more than just the code.
Also, speaking of ancient wisdom in our industry, everyone should go back and read Edsger W. Dijkstra's papers, especially "The Humble Programmer" (his Turing Award lecture from 1972). The conclusion of that talk says it all: "We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers."
Larry: When the automatic response to “How do I...?” is to search on Google, how does the role of the publisher/author/teacher change?
Andy: I don't think it does. Google is a reference, not a teacher. You can find some great papers, great presentations on YouTube, etc., but by itself that isn't well curated or vetted in any meaningful way. You still need a trusted, interactive source for real learning.
Dave: To ask "How do I...?" you have to know that you are missing some piece of information. The role of a teacher is to help you see what's out there, and to inspire you to take risks, combining those things in new ways. The mechanical "how" questions that can be answered by Google are best answered by Google.
Think of Google in this context as a bit like the book of formulas that kids are now allowed to take into science and math exams. It simply means they don't have to memorize the exact equations, and they can instead focus on the correct application of those equations to solving problems.
Andy: As a publisher, one of our chief responsibilities to our readers is to curate the information presented: to vet the authors and the material, to select topics we feel are important and relevant (not merely popular), and so on. Google helps us spread those ideas that we feel are worth spreading, to borrow a phrase from TED, but that's because we're seen as a trusted source.
Larry: One response to the Web seemed to be to make books longer and longer. Your books tend to be shorter. Is that something that you consciously push for or is it something that just naturally arises from the focus you choose for the books?
Andy: Another important aspect of curation is distillation; we very deliberately coach our authors to distill initial, meandering prose into a laser-sharp focus. A good analogy is a fine sauce made by a chef: you start out with a rich stock and reduce it, boiling it down into an incredibly wonderful essence with nothing left to remove.
Dave: We try hard to keep them short. (Of course you're talking to the person who wrote the 900-page doorstop that's the latest Ruby book.) Every subject has a core, an essence. One of the values an author brings is to work to find that core, and then to express it as concisely as possible.
Larry: In the aughts--
Andy: Are we really calling them that? Really?
Larry: If by “we,” you mean lazy writers who have grown sick of saying “In the first decade of the century,” then the answer is “yes.” But if you prefer... In the first decade of the century, we saw the rise of dynamic languages and a rejection of the common wisdom that strong typing was necessary for reliability--
Andy: What's the line? "Strong typing is for people with weak minds." I'd say a fair number of old hands at Lisp and Smalltalk never accepted that "conventional" wisdom, and I'd agree with them.
Larry: Let’s follow that for a second... When you look at Haskell, what do you see? Wouldn’t most of us be happy to be half as “weak-minded” as people like Simon Peyton-Jones, Phil Wadler, and Erik Meijer? Does strong typing really have nothing to recommend it?
Andy: Ok, that was a bit tongue-in-cheek. The reality is that every tool and technique has its place. Even the punching bag that is the Waterfall approach has environments and situations where it really is the best solution. Not many, granted, but they are out there. Nothing, not even agile methods, works in all environments all of the time.
The real risk with any language feature or design, be it the type system, exception handling, inheritance, etc., lies in relying on it as a crutch. Languages don't kill projects, people do. Relying on static typing is tantamount to relying on the compiler to validate the goodness of your code. That's a slippery slope; it's a nice start, but it's nowhere near sufficient, and beginners can easily be lulled into a false sense of security because "it compiled."
Dave: Let me jump in: to my mind, it isn't an issue of strong vs. weak typing at all; the real distinction is static vs. dynamic (both Ruby and Java are strongly typed, but Java is mostly statically typed while Ruby is dynamically typed). The issue is simply one of choosing the tool that's most effective at getting the job done.
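To illustrate the distinction Dave is drawing, here is a tiny, hypothetical Ruby example: the typing is strong (incompatible types raise an error rather than being silently coerced) but dynamic (the error only shows up when the offending line actually runs):

    # Strong typing: mixing incompatible types raises an error.
    # Dynamic typing: that error appears at run time; nothing checks it earlier.
    def add(a, b)
      a + b
    end

    puts add(1, 2)      # => 3

    begin
      add(1, "2")       # nothing complains until this line executes...
    rescue TypeError => e
      puts e.message    # => "String can't be coerced into Integer" (wording varies by Ruby version)
    end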
When I was working on the discounter for our online store, I wrote the prototype in Haskell, because it let me express the complex tree search algorithms effectively. But I couldn't easily deploy that chunk of Haskell into the actual store, so I translated the code into Ruby.
Good developers try hard not to become bigoted about their tools. I happen to love using Ruby, but I'm always trying new languages, and before any project I consciously ask "what are the best tools for this job?"
Larry: We now face the many-core era and a lot of uncertainty about what that’s going to mean in terms of programming. To what extent will developers have to adjust their approaches in the many-core era? Is the hardware reality going to drive a change, or are developers going to be able to largely ignore it and trust in the OS and runtimes to keep their performance up?
Dave: The hardware is already driving a change. Just about every web application runs on multiple processors (either multicores or different CPUs). And the underlying statelessness of HTTP drives developers to handle some of those concurrency issues manually. That's pretty coarse-grained, but it's still a change.

But that's just the tip. Write an application on the Mac, and you'll be expected to use Grand Central Dispatch to make that application concurrent--it's considered bad taste for an application to display a wait cursor. But even that's pretty coarse-grained.

In the future, when our processors have so many cores that we'll no longer consciously count them, every program will be highly parallel. If you're writing scientific calculations using vector Fortran, you might be able to rely on the runtime environment to manage all that transparently. But for most cases, you, the programmer, will be in the thick of it. Expect to see new languages and architectures that make that easy.
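Grand Central Dispatch itself is Apple's C and Objective-C API, but the coarse-grained idea translates to almost any language. As a rough, generic sketch in plain Ruby -- an analogue, not GCD -- hand the slow work to another thread so the main flow never sits behind a wait cursor:

    # A stand-in for a slow fetch or calculation, run off the main thread.
    slow_job = Thread.new do
      sleep 1                          # simulate the slow part
      (1..10_000).reduce(:+)           # ...which eventually produces a result
    end

    print "Working"
    until slow_job.join(0.2)           # poll: join returns the thread once it has finished
      print "."                        # the foreground stays responsive meanwhile
      $stdout.flush
    end
    puts
    puts "Result: #{slow_job.value}"   # => Result: 50005000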
Think back to the '70s and the Unix shell. The pipeline construct allowed users of the shell to string programs together in arbitrary chains. But here's the cool thing: those programs run in parallel--they are implicitly concurrent. And yet each individual program is a simple sequential filter.
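As a small illustration of that idiom -- a sketch that assumes a Unix-like system where Ruby's fork is available -- here are two trivially sequential "filters" connected by a pipe, running as concurrent processes just as they would in a "producer | consumer" shell pipeline:

    # Each child process is a plain sequential filter; the pipe makes them concurrent.
    reader, writer = IO.pipe

    producer = fork do          # filter 1: emits lines, one at a time
      reader.close
      (1..5).each { |n| writer.puts "line #{n}" }
      writer.close
    end

    consumer = fork do          # filter 2: transforms lines as they arrive
      writer.close
      reader.each_line { |line| puts line.upcase }
    end

    reader.close                # the parent just wires things up and waits
    writer.close
    Process.wait(producer)
    Process.wait(consumer)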
There've been many attempts at implementing "pipelines for the internet." None has quite got the idiom right. One day someone a lot cleverer than I will do something so smart that everyone will go "well, obviously..."
Andy: I don't think that you really can "just ignore it" and trust in the platform. At a certain point, you have to change your thinking, and probably your programming language. Perhaps Scala, or Erlang, or maybe something that you haven't even heard of yet.
And it's not just multicore that’s driving the change. The world is a very different place from what it was 5, 10, or 20 years ago. I was doing some spring cleaning recently, and looking at the magazines, magnetic media and hardware from years past really impressed on me the sea change that's been happening. Each little change happens slowly, though, so just like the boiled frog, you don't necessarily notice until it's too late.
But consider the iPad, and the inevitable copycat multitouch portable devices that will follow it. Will that effect changes in how we program? In how we use computers? You bet your sweet bippy it will.
Larry: What practices do you see out there now that you think may become significant pragmatically?
Andy: Well that gets to the heart of pragmatism: what I think about it, or any other pundit thinks about it, is irrelevant. What matters is what works for you. I am pleased to see basic hygiene practices such as version control, unit testing, and use of automated and continuous builds taking hold. I'm pleased to see people embracing refactoring, and more willing to try alternate approaches or even throw code away (I've always said the best thing to improve code quality is a good hard disk crash ;-)
I think what we'll see in the future is an increased adoption of practices that really work, whether they bear the Agile™ mark or not. Unit testing really works in practice; Scrum stand-up meetings are a very powerful communication tool; reflecting on and correcting performance at the team level (as with frequent retrospectives) is the only way you'll improve; and so on.
So regardless of what we or anyone else says, people who stay in business will ultimately do what works well for them. That's pragmatism, and yes, that is the future, because folks who ignore the reality on the ground and stick to their pre-conceived plan will fail. The rest of us will improvise, adapt, and eventually, overcome.
Dave: Programming can be one of the most challenging tasks out there. It's hard, being caught between an ambiguous, changing real world and an unforgiving, mechanical machine. Sometimes the job gets so difficult that we give in: we take shortcuts, compromise on what we know we should do, and deliver work we're not proud of. So the practices that I think should be applauded are being courageous and showing integrity. When you're courageous, you'll try new things. You'll make mistakes, but your lower-level project practices will catch them before they get out of hand. But without courage and integrity, you'll never attempt the really hard stuff, and you'll never stick with it in the face of all the obstacles reality will throw at you. Practice those two, and the rest will follow.