The story of the Conficker worm is a fascinating one. On the face of it, it was just another worm infecting susceptible computers, but it turns out to be well thought out, carefully engineered and incredibly hard to remove. It’s a story worth telling. However, this article in The Atlantic is not that story. These are the opening two paragraphs to the article:
The first surprising thing about the worm that landed in Philip Porras’s digital petri dish 18 months ago was how fast it grew.
He first spotted it on Thursday, November 20, 2008. Computer-security experts around the world who didn’t take notice of it that first day soon did. Porras is part of a loose community of high-level geeks who guard computer systems and monitor the health of the Internet by maintaining “honeypots,” unprotected computers irresistible to “malware,” or malicious software. A honeypot is either a real computer or a virtual one within a larger computer designed to snare malware. There are also “honeynets,” which are networks of honeypots. A worm is a cunningly efficient little packet of data in computer code, designed to slip inside a computer and set up shop without attracting attention, and to do what this one was so good at: replicate itself.
There’s plenty wrong with the second paragraph, the first meaningfully sized one. The unnecessary attention to detail when specifying the date. The arse-backwards explanation of what a honeypot is, before explaining what a worm or malware is. But most of all, it’s that third sentence. I mean, just look at it:
Computer-security experts around the world who didn’t take notice of it that first day soon did.
One of the first sentences to hook the reader in, and it’s a paragon of inanity and tautology. Either people noticed it on the first day, or another day soon after? Really? Knock me down with a feather.
Nevertheless, the jumbled nature of the sentences and useless cruft are not what make this an especially awful tech article. It’s the endless conflicting analogies used to make the article more accessible for the lay reader that are the real problem. Take this analogy of computers as starships:
Imagine your computer to be a big spaceship, like the starship Enterprise on Star Trek. The ship is so complex and sophisticated that even an experienced commander like Captain James T. Kirk has only a general sense of how every facet of it works.
Taking something not in the layman’s terms of reference and turning it into something else which is also not in the layman’s terms of reference. Bravo. But no, wait, computers aren’t just starships. Later on in the article, they’re also castles. With maps!
When there is only one fort, and it is well policed, the lock is fixed and the vulnerability disappears. But when you are defending millions of forts, and a goodly number of the people responsible for their security snooze right through Patch Tuesday, the security bulletin doesn’t just invite attack, it provides a map!
As for the battle between security white hats and the bot’s authors, it’s a chess game:
The struggle against this remarkable worm is a sort of chess match unfolding in the esoteric world of computer security.
Except when it’s not:
In chess, when your opponent checkmates you, you have no recourse. You concede and shake the victor’s hand. In the real-world chess match over Conficker, the good guys have another recourse.
(Aside – I also dislike the typically American tendency to use the phrases “bad guys” and “good guys” in articles like this, but let’s save that for another blog post)
But by far the best awful analogy in the piece is provided in an interview with a security expert, who is trying to explain that whoever wrote Conficker had to be an expert at the very top of their game:
“Not only are we not dealing with amateurs, we are possibly dealing with people who are superior to all of our skills in crypto,” he said. “If there’s a surgeon out there who’s the world’s foremost expert on treating retinitis pigmentosa, he doesn’t do bunions. The guy who is the world expert on bunions—and, let’s say, bunions on the third digit of Anglo-American males between the ages of 35 and 40, that are different than anything else—he doesn’t do surgery for retinitis pigmentosa.”
I have to admit I was in the process of Googling what the hell retinitis pigmentosa is, before realising this is the worst analogy I have ever read. It manages to turn the simple statement: “there are only a handful of people in the world who can do this” – which does not need an analogy to put it into layman’s terms – into some sort of arcane and utterly irrelevant reference to bunions.
To be fair to the author of the article, the last analogy is from someone he interviews, not himself. But to include it verbatim in the article, rather than judiciously excising and summarising it, is a grave error.
Now you might at this point argue that the author is a professional journalist, and I am a nobody blogger who writes long and sometimes very boring blog posts with the odd spelling mistake, so I have no business criticising his writing. And you would be absolutely correct. I have very few stones to aim at his particular glass house.
But please bear me out, for there is a general point which this article exemplifies, and it is by no means the only one. In technological writing for the lay reader, there is an overarching tendency to patronise the reader by breaking everything down into analogy. Computers become starships, or castles, or cars, or anything you like in your attempt to explain one single aspect of their working. Google for the phrase “Imagine your computer is…” to see what people have come up with.
To be fair, there are good reasons to analogise, from time to time. A lay audience isn’t necessarily going to be familiar with the intricacies of computer security, or even computer anything. The growth in the computer and information industry has far outstripped the natural evolution of the English language. Analogies themselves are not inherently bad; they can be a way of illuminating a difficult or novel subject.
But analogies have their shortcomings. Typically, they can only explain one aspect of how something works, hence why authors are forced to use more than one analogy, and the inevitable contradictions, in an article. Even by themselves, these analogies can only be stretched so far before they reach a breaking point, and become not just useless but dangerously inaccurate.
Despite these shortcomings, all too often an author lacks the effort or skill to explain something for what it is, in plain language. This is to the detriment of our discourse; by defaulting to analogy for everything, the writer adopts a consistently patronising attitude to the reader. How are people expected to learn anything about a subject if it is continually abstracted away from them, to the point of infantilism?
Furthermore, the long-term consequence of bad analogies is even worse: they become part of the vernacular themselves. Coming back to the example above, once you look at the language involved, it’s only fair to ask why something designed to attract worms is called a “honeypot”; real-life worms aren’t known for either producing or being attracted to honey. Rather than make the topic more accessible to the layperson, we risk making it even more illogical and confusing.
This doesn’t just contaminate our thinking in writing, but also in everyday speech. Over at the excellent Clients From Hell blog, two recent stories (here and here) show this taking place in the wild. In both conversations, the designer or developer is struggling to justify why their client should pay the price they promised to pay (apart from it being a breach of the law and all). Yet despite having right on their side, they end up using baffling analogies, involving lobsters and plumbers, that antagonise the person they’re trying to negotiate with.
This isn’t just a problem in technology (you see it also in finance and science) but it is one of the fields of writing where it’s most prevalent. While trying to dictate how to use language has always been futile, it’s especially futile in an Internet age where neologisms and jargon can be adopted by millions near-instantly. So as much as I’d like to see more intelligent and readable tech writing, I’m not holding my breath. But hopefully this blog post will go a little way to raising awareness and provoking a little self-questioning.
So a message to all tech writers writing for a lay audience: in an age where column space no longer usually dictates the length of articles, the reasons against writing things out in a more rounded way have diminished. Spend a little more time thinking about how to write clearly rather than reaching for the first vaguely similar concept in your mind. Don’t just use jargon mindlessly; question it, and use it only if it makes sense (both logically and semantically) to the reader. And it’s not just writers’ responsibility. Readers of all abilities need to start adopting better filters against shoddy analogies and other figures of speech in tech writing, and we need to be prepared to call bullshit on them when we see them.