Software specs are not the enemy.
When it comes to discussions about writing software specs, I really like to quote Sen-Rikyu, the master of the modern Japanese tea ceremony: "Tea is naught but this. First you make the water boil. Then infuse the tea. Then you drink it properly. That is all you need to know." But if you know anything about the actual practice of the tea ceremony, there is -- for laypeople who have not mastered the Zen of simply boiling the water and infusing the tea -- an immensely complex protocol, replete with symbolism, that must be adhered to.
Writing specs is simple. All you do is document what has been decided and capture known uncertainties, and underpin everything with a giant disclaimer that until the code is written, you won't know exactly how it's going to work.
In fact, perhaps we shouldn't think of specs as specifications of software. They are specifications of goals for the software.
I'm about to start working on a UI spec for some upcoming changes to the Google Base UI. I'm always looking to improve, and this is a pretty complex series of workflows that we have to get just right. So I want to make sure that I capture and communicate all of the decision points that I've traversed up until now, and that I specify behavior that may not be fully obvious in the prototype that I built.
Let me highlight some key concepts from that last paragraph: Capture. Communicate. Spell out corner cases.
So I searched for "write UI specs." The first result is 37signals explaining why specs don't work. I read this type of article -- along with some Agile and XP techniques -- as a radical overreaction to the other extreme (of overplanning, and of presuming you can precisely specify software on a piece of paper before anything is written). In that sense, the 37signals article is irresponsible: it speaks to a particular historical context of software engineering without acknowledging that there are a lot of ways specs can be written, usefully, in the context of more modern software engineering practices. Moreover, when you're not working at a small startup, it is irresponsible *not* to write specs. Specs help other teams anticipate and plan for what's happening down the line. When you don't have the same 10 people doing everything -- when you have a marketing department, a customer service team, writers who have to produce help documentation -- you need to let them know as early as possible what you think is coming down the pipe, so they have maximum time to plan their own resources and schedules.
1. Software specs can create a false sense of security.
And as we all know, a false sense of security is worse than no sense of security. If you get signoff on a spec -- whether early- or late-stage -- and the signers-off don't understand that some things are subject to change, your budget and schedule will be in for some very nasty surprises.
One function of a spec is to document and decide on everything that it's possible to decide, and to minimize (not eliminate) uncertainty. Sometimes you just have to put a stake in the ground so you can make a decision and get something done, even if you know it's probably going to be wrong down the road. Decide what you're going to do and do it, and when you need to fix it, fix it.
The fact that a lot of organizations cram unreasonable, un-vetted features into the spec and call it a "spec" is not a problem with specs. It is an organizational problem. Blaming the existence of specs for problems caused by bad management is not fair to the spec.
2. A software spec must neither overspecify nor underspecify. To be useful, it has to specify just right.
Any overspecification wastes time because it could have just been implicitly specified in the code (or otherwise downstream). Underspecification wastes time because downstream people have to wonder how the hell it's supposed to work, or what the goal is.
3. Specs can indeed be written in the face of uncertainty, and they can be useful in the face of uncertainty.
For example, you can double-spec a feature: "If it turns out that it's cheap and easy to hook up to the live WhizBangerNiftySearch engine, then do (A). The fallback is (B)." Then everyone sits down and someone says "If we can't get (A) working in two weeks, then we'll fallback to (B)." You document this in the spec, so the team remembers what the decision was, and you stick with a plan long enough to see things through to fruition.
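As a sketch, a double-specced feature might be captured in the spec roughly like this (WhizBangerNiftySearch is the hypothetical engine from the example above; the dates, owners, and two-week time-box are illustrative placeholders, not a prescription):

```text
Feature: Search results page
  Plan A (preferred): Query the live WhizBangerNiftySearch engine
                      and render results inline.
  Plan B (fallback):  Serve results from the nightly static index.
  Decision point:     If Plan A is not working end-to-end within
                      two weeks of kickoff, we switch to Plan B.
  Decided:            <date>, by <names of the people in the room>
```

The point of writing the time-box and the decision-makers down is exactly what the paragraph above says: the team remembers what was decided, and nobody relitigates it mid-sprint.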
4. Voice-of-God specs are generally disconnected from reality.
You cannot write a spec without input from every department: you can't make estimates about how hard it'll be to write that new piece of functionality without some sort of estimate from your engineers, and without cost estimates, you can't make decisions about whether or not the functionality is actually worth its expense for the current release. But after enough back-and-forth, you somehow have to say "we agree that this is the minimum functionality that we need for this release, this other stuff is nice to have, this other stuff is version n+1."
The goal with the spec is to iterate *quickly* on paper -- or any other medium that is faster and less expensive than code -- to get buy-in and agreement from all relevant parties. If you know on Day 2 that senior management isn't going to let you mention "commerce" in the product, you can build around that without much hassle. If they tell you on Day 180 that they didn't want you to refer to "commerce" or commerce-like activities, and the entire thing is a product feed engine, you're screwed, and you've just wasted a lot of time.
Paper is cheap. Code is absurdly expensive. Prototype-based specifications help people know what is being built before it actually gets built. It takes me 10 minutes to do a paper flow sketch and maybe a day to build out a lightweight dynamic PHP prototype of, say, 5 HTML pages. It'll take engineering four weeks, plus another two weeks to send all the strings to translation. Every change in code is vastly more expensive than the same change in a prototype.
5. Dynamic prototypes can't capture everything and communicate it as fast as a spec.
Yeah, I could make my engineers click on my dynamic prototype until they run into every single error case, or I could give them a screenshot and a tabular view of error conditions and associated error messages.
You prototype until you start writing the details that don't need to be tested in a prototype -- like form validation, error messages, and corner cases. You can leave those to the actual code. Write a spec for how they should be handled and forget about it. Use the prototype to capture the dominant 80% use cases and leave the rest to paper.
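To make that concrete, here's the kind of tabular error-condition spec I mean -- the field names, rules, and messages below are invented for illustration, not drawn from any real Google Base form:

```text
Field: Email address
  Validation:    must contain "@" and a domain
  Error message: "Please enter a valid email address."

Field: Price
  Validation:    numeric, greater than zero, at most two decimals
  Error message: "Please enter a price greater than zero, such as 12.50."

Field: Title
  Validation:    1-80 characters, no HTML
  Error message: "Titles must be 80 characters or fewer."
```

An engineer can implement every row of that in one pass; clicking through a prototype to discover the same information would take far longer and still miss cases.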
22 minutes later: Of course, now I find out that Joel on Software said it better.