What are you buying with tests?
Hello, once again, and welcome. I'm Nat Bennett, and you're reading Simpler Machines, a weekly newsletter that is sometimes about software design, sometimes about talking to people, and today is about talking to people about software design.
Here's a trick for talking to VPs and managers: Talk about what they're "buying" with the engineer-time they're spending on a problem, or whatever other resource they're allocating.
Managing a "P&L" (a profit and loss sheet) is a big status symbol in manager-world, so even managers who don't have budgets directly tend to pick up that frame. And it's just fun: it makes management feel more like a strategy video game. So if you adopt that frame, you can signal credibility and make it more likely for managers to hear what you're saying.
This frame is especially useful in areas where people's ideas about the value of an activity are often wrong, like testing.
What are we buying, when we write a test?
I think a lot of folks implicitly believe that you're buying quality. "Polish."

I've written about this before, but I think most people are basically wrong about what testing is. They think it has something to do with quality, and their model of "quality" is something like "fit and finish" or "build quality." The artifact matches the spec. There are no visible seams.
More tests, in this model, equals more quality. If you want a better experience, you write more tests and you spend more time testing. If you're willing to tolerate some visible seams, or a less glossy surface, you can get away with less testing.
But testing is actually a part of design. Testing is a way to understand what you're building and whether you're building the right thing. (The test-driven development folks are half right about this, incidentally, when they say that TDD is a design practice. They're wrong when they say that this means it's not a testing practice.) This means it's related to quality, since quality is "value to some person," and building the right thing is usually valuable. But it's not primarily a quality practice. It's primarily a design practice.
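As an illustration of what "testing as design" can look like in the small, here's a sketch in Python, with hypothetical names that aren't from this piece. The point is that the struggle to write a test is itself design information.

```python
# A sketch of testing as a design probe, with hypothetical names.
# When a test is painful to set up, that pain is design feedback.

# Hard to test: checking even the report header means standing up
# a database and a clock.
#
#   report = Report(db=ProductionDB(), clock=SystemClock())
#   assert report.header() == ...
#
# That friction pushes toward a design where the inputs are plain
# values, which is also a design that's easier to reuse and change.
def build_report_header(as_of: str, row_count: int) -> str:
    return f"Report as of {as_of} ({row_count} rows)"

def test_report_header():
    assert build_report_header("2024-01-01", 3) == "Report as of 2024-01-01 (3 rows)"
```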
When you spend more time testing during development (whether or not you recognize what you're doing is testing), what you're largely "buying" with that time is better design.
And automated testing is a meta-design practice. Everyone tests while they're writing code. The differences in testing practice are mostly whether you do that initial testing in a way that needs to be thrown away afterwards, or whether you do it in a way that you can keep later.
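Here's a rough sketch of that difference in Python, using a hypothetical parse_price helper: the same check, done once in a form you throw away and once in a form you keep.

```python
def parse_price(text: str) -> int:
    """Parse a price string like "$1,299" into integer cents."""
    return int(float(text.replace("$", "").replace(",", "")) * 100)

# The throwaway version: poking at the function in a REPL (or with
# a print statement), checking the output by eye, then moving on.
#
#   >>> parse_price("$1,299")
#   129900
#
# The kept version: the same check written down, so it runs again
# on every future change for free.
def test_parse_price_handles_commas_and_dollar_signs():
    assert parse_price("$1,299") == 129900
```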
Which brings us back to that opening question. When we write a test down, and we save it, and we run it many times during later stages of software design, what are we buying with the time to write and maintain that test?
What we're buying is flexibility. We're buying the ability to safely change our designs later. We're making a little downpayment in our ability to understand our future designs, and to understand the impact of changes that we make.
(This does make one big assumption, which is that the tests we've written are any good!)
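What "good" mostly means here is coupling. A sketch, with a hypothetical Cart class: a test that reaches into internals buys no flexibility at all, while a test that goes through the public interface is precisely the thing we're paying for.

```python
class Cart:
    def __init__(self):
        self._items = []  # internal detail, free to change later

    def add(self, price_cents: int) -> None:
        self._items.append(price_cents)

    def total(self) -> int:
        return sum(self._items)

# Brittle: asserts against the private list, so replacing it with,
# say, a running total breaks the test without breaking any
# behavior a caller can observe. This test buys no flexibility.
def test_cart_brittle():
    cart = Cart()
    cart.add(500)
    assert cart._items == [500]

# Flexible: asserts only through the public interface, so the
# internals can be redesigned freely as long as behavior holds.
def test_cart_flexible():
    cart = Cart()
    cart.add(500)
    cart.add(250)
    assert cart.total() == 750
```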
So if you try to deprioritize testing (when you say, "well, let's not write tests for this part of the code, let's try to get this out a bit faster"), you're probably not buying what you think you're buying. You might think that you're buying a little bit more capability for a little bit less fit-and-finish, which is often a very good tradeoff to make, especially early in the life of a product. But what you're probably giving up instead is flexibility: the ability to make changes later.
Which, if you're developing software, is probably not the tradeoff you want to make. Because "later" is when you know more about what the software needs to do! More people will have used it, you'll have gotten more feedback, done more research. So by not writing tests early on, you're often sacrificing engineering efficiency right at the point when you can use it the most effectively.
Oddly, the "fit and finish" tradeoff is one that I think TDD makes, because in TDD we often test less before and during design than we might without TDD. TDD is really a practice of testing minimalism (you write just enough test to drive out the next tiny change), and TDD'd projects often spend a long time technically functional but missing a lot of affordances.
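For a feel of that minimalism, here's a sketch with a hypothetical slugify function: one tiny failing test, then only enough code to make it pass, with everything else deferred to future tests.

```python
# Step 1 (red): write the next tiny requirement as a failing test.
def test_slugify_replaces_spaces():
    assert slugify("hello world") == "hello-world"

# Step 2 (green): write just enough code to pass, and no more.
# Lowercasing, punctuation, unicode, and so on each wait for the
# test that drives them out.
def slugify(text: str) -> str:
    return text.replace(" ", "-")
```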