How I write code
I've been writing a ton of code the last few weeks, and I am just about ready to come up from that for air, properly, and attend to other things. Still not quite ready to share what I've been working on with the world in general, but: soon.
This project, codenamed Zero, is the most substantial bit of code I've ever put together on my own. As of this writing, it contains:
- 78 commits
- 52 tests, which finish in 0.7 seconds
- ~4200 lines of code (including tests, not including dependencies)
A few thousand lines of that came in from the framework, Phoenix, but it's still a decent chunk of work.
I've had to make a lot of decisions, decisions I would normally rely on a team for. What language to use? What to test? How should I structure the tests? When to refactor? What database? It's given me time to consider why I make the decisions I make. This has provided some clarity.
I test first. Usually I write those tests as code, because why go to the trouble of figuring out how to test something and not keep that test around? But even when I don't write a test, I always start thinking about a change by figuring out how I'll know that I've made it correctly.
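Concretely, the "how will I know I've made it correctly" step usually becomes an ExUnit test before the code exists. A sketch of the shape, with a hypothetical Zero.Slug module:

```elixir
defmodule Zero.SlugTest do
  use ExUnit.Case, async: true

  # Written before Zero.Slug.from_title/1 exists: it fails first,
  # and "done" means making it pass.
  test "slugifies a title to lowercase with dashes" do
    assert Zero.Slug.from_title("Hello World") == "hello-world"
  end
end
```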
I make small commits. Not the smallest commits – I'll often have multiple tests in one commit, and wait until an entire "atomic unit" is finished before I push – but small. On a good day I make 7 or 8 commits. It keeps the work manageable.
I git stash a lot. The way I use git stash is sometimes called "the Mikado method." I start making a change, and then I discover a reason why the change is hard. I stash, fix the problem that makes the change hard, and then unstash. I also stash and unstash failing tests a lot, since often I want to make intermediate commits, but I always want to see the tests pass before I commit.
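At the command line, the loop looks roughly like this:

```sh
git stash                 # park the half-finished change
# ...fix the thing that was making the change hard, on a clean tree...
git commit -am "extract helper so the real change is easy"
git stash pop             # resume the original change on top of the fix
```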
I keep my tests very fast. Tests should run in less than a second. Ideally even faster than that. This drives a lot of my technical decisions, especially language choice.
I keep my tests consistent. If a test fails I stop and fix it. If a test fails and then succeeds, I will figure out why and fix the underlying issue immediately. Often this involves changing the design to make it easier to test deterministically. Inconsistent test suites are a special circle of hell, and once even one intermittent failure creeps in, it will slowly take over.
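The most common design change, for me, is taking a hidden input like the clock and making it an explicit argument. A sketch, with made-up names:

```elixir
defmodule Zero.Tasks do
  # Deterministic: the caller supplies "now" instead of the function
  # reading the wall clock, so a test can pin time exactly.
  def overdue?(task, now), do: DateTime.compare(task.due_at, now) == :lt
end

# In a test, time is just data:
# assert Zero.Tasks.overdue?(%{due_at: ~U[2024-01-01 00:00:00Z]},
#                            ~U[2024-01-02 00:00:00Z])
```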
I try to write one assertion per test. When I write more than one assertion per test it's usually because there's a step that must pass early in the test for an assertion later in the test to be valid. I want it to be very clear exactly why a test has failed when it fails, so I'd rather write many tests with different assertions, and some repeated setup, when I want to make many assertions about a particular behavior.
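In ExUnit terms, that means trading one dense test for several named ones (reusing the hypothetical Zero.Slug from above):

```elixir
defmodule Zero.SlugTest do
  use ExUnit.Case, async: true

  # Two tests instead of two assertions in one test: the test name
  # alone tells me which behavior broke.
  test "lowercases the title" do
    assert Zero.Slug.from_title("Hello") == "hello"
  end

  test "replaces spaces with dashes" do
    assert Zero.Slug.from_title("a b") == "a-b"
  end
end
```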
I start with "big" tests, and over time shift to "small" ones. This is one way that I'm different from some folks who take test-driven development very seriously, but I find I usually need to work myself up to writing the smallest tests, the kind that cleanly separate anything that touches IO from the rest of the program. I start by writing mostly pretty big, pretty slow tests, that exercise most of the real program. When I don't have many tests yet this is fine, because I can still get the test suite to run in under a second, and it makes it easier for me to refactor as I discover the real structure of the program. I only start to test with fake IO once my test suite runtime starts to creep up to a second or so.
I refactor a little bit at a time, all the time. I refactor for comprehensibility a lot. This is one of the core reasons I'm so serious about testing. I have to make changes to code to be able to understand what it is, or what it could be. Conversely, though, I don't really do "big" refactors – the kind that take more than a day. Someday I'm sure I'll encounter a situation where that seems correct but right now I'd almost always rather live with something kind of awkward than make only design changes for that long.
I don't DRY tests. I repeat myself a lot in tests, and I tend not to refactor that repetition out until I actually have to change something. And even then, I'm relatively likely to just use Vim's equivalent of find-and-replace to make the same change in a bunch of places. I think it's more important for test code to be very clear and avoid indirection than to make it easy to change a lot of similar tests at once. "Lots of similar tests" are usually a sign that I'm not approaching the architecture right, honestly.
I don't write tests for UI interactions. I'm not sure that this is correct but, man, I do not like dealing with Selenium or any of its siblings. So I choose technology like Phoenix that lets me test most of a program's behaviors without needing to run JavaScript, and I test interactions by actually using the program.
I use hexagonal architecture, mostly. I separate IO and logic. I separate different kinds of IO (user input, database access, filesystems, memory) from each other. I'll sometimes start early in a project by mixing application logic into the view layer or the data access layer, but as the project evolves the refactoring direction usually involves extracting more and more calculation and comparison operations into a module that owns "application" or "domain" logic. But I also try not to spend too much time thinking about architecture – at least until it starts slowing down the tests.
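The shape I end up with looks something like this (a sketch, with hypothetical module and field names):

```elixir
# Pure domain logic: no Repo, no conn, no clock. Fast to test.
defmodule Zero.Domain.Pricing do
  def total(line_items) do
    Enum.reduce(line_items, 0, fn item, acc ->
      acc + item.unit_price * item.quantity
    end)
  end
end

# Thin boundary: does the database IO, then hands plain data to the domain.
defmodule Zero.Orders do
  import Ecto.Query

  def order_total(order_id) do
    Zero.Repo.all(from li in Zero.LineItem, where: li.order_id == ^order_id)
    |> Zero.Domain.Pricing.total()
  end
end
```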
I avoid writing 'for' loops. I dislike 'em. Why should I be telling the computer how to track where it is in a data structure? Ruby was my first language, so I discovered ".each" early on and never looked back. This is a big part of why I've been working in Elixir, and why I'm drawn to functional programming in general.
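In Elixir that mostly means reaching for Enum:

```elixir
names = ["ada", "grace", "joan"]

# Say what should happen to the collection, not how to index into it.
Enum.map(names, &String.capitalize/1)
#=> ["Ada", "Grace", "Joan"]

Enum.filter(names, &String.starts_with?(&1, "g"))
#=> ["grace"]
```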
I try to do things in the database. In general I live by the heuristic that code that's running in the database will be better/faster/more optimized/whatever than code that I write for my application. So when I can put something in a query, and not application logic, I do.
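With Ecto, that usually means pushing aggregation into the query itself. A sketch against a hypothetical items table:

```elixir
import Ecto.Query

# Postgres does the counting; the application never sees individual rows.
query =
  from i in "items",
    group_by: i.status,
    select: {i.status, count(i.id)}

Zero.Repo.all(query)
#=> [{"open", 42}, {"done", 136}]
```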
I'm happy to do stupid things if they're well-tested and they keep the code simple. There's a thing in Zero right now where it makes tons of repetitive database queries in response to people typing into a particular field, because, basically, every Zero session resets its entire state every time anything changes. This will almost certainly need to change if Zero starts handling more traffic, but right now it's handling basically no traffic, so making tons of database queries is fine. The alternatives I can think of all involve some kind of caching fanciness, which I'm happy to put off for as long as I can, until I know more about the workload Zero will actually be handling. For now, the design is simple, so it'll be easy to make improvements on it in whatever direction I need to go.
I'm willing to be flexible on most of this if I'm in a pair or on a team, but this is how I work when it's just me.