Last week I finally, finally, signed up as a member of the Open Rights Group and attended a talk chaired by Cory Doctorow and Charlie Stross on privacy and the digital age. Thanks to the red wine and general conviviality on the night, plus my own badly-taken notes, it’s taken me a little while to distill a few thoughts on the matter.
The talk was slightly disappointing in that it was a bit too general and strayed into unrelated aspects of the security debate as well. For example, the observation that if Richard Reid the shoe bomber had been successful, we would not be taking our shoes off at airports now, as nobody would have known how he did it, is a lovely anecdote but didn’t do much to inform the debate. Ditto the complaints about school web filtering or the state of the tabloid press.
Still, there were plenty of good points and interesting notes. Charlie kicked things off by pointing out that privacy is a relatively recent social construct: in medieval times peasants lived communally and the nobility were constantly under the eyes of their servants; only through the industrial age and the rise of a middle class, segregated in their own houses, has privacy become a norm we expect.
Spin forward a couple of centuries, and we are no longer merely industrial, but sufficiently technically advanced that we produce data about everything, which spills out of our houses and lives. As a result the normative privacy we expect is an ever-more elusive ideal, now teetering on the edge of a paradox. On the one hand, people publish vast amounts of information willingly, whether it be on blogs, Twitter, Flickr or whatever. On the other, we clamour for the government to stop snooping on us, gathering massive databases or performing mass-mining operations.
Are the two incompatible – are we just hypocrites? Not quite. As our information-generating capacities have evolved, privacy has become part of a more sophisticated framework; it is no longer a simple norm, but a beneficial side-effect of a much greater good – the ability to control what information about us is dispersed. With this control, we choose what we do and do not publish (for example, you’ll read my views on tech and politics here, but only very rarely will I disclose information about my family).
But Cory very astutely pointed out that this choice is rarely informed. We are very bad at valuing the impact of information at the time we release it, compared to how we might value it in future; a teenager posting pictures of himself drinking and smoking weed on Bebo might not care now, but when he’s applying for a job or running for office in a few decades’ time, they may come back to haunt him. Add into the mix the fact that taboos and social conventions change over time. Charlie gave the example of parents taking photos of their young children playing in the nude – regarded as innocent in the 1970s, but an activity now tainted by the anti-paedophile hysteria of the present day.
So, we’re producing shitloads of personal data, the impact of which we are unable to judge accurately. That’s part one of the problem. Part two is that the data may be inaccurate or misleading in the first place; if your browser history is full of pages about HIV, is it because you are a sufferer, or just researching on behalf of a friend, or helping your child with their biology homework? Without wider context, it’s easy to draw the wrong conclusion.
It gets really bad in part three – what happens when governments and corporations start using these vast pools of data, possibly mistakenly disclosed, possibly inaccurate, to make judgements about us? Would a health insurer, getting hold of the browser history in that example, force you to undergo an HIV test or increase your premiums?
Seen this way, privacy – in the sense of safeguarding information about us that is true, but not widely in the public domain, and which we’d rather not let people know about – is not the real problem. The problem is what institutions do with all information about us, true or not, publicly available or not, embarrassing or not. And that’s a bigger question than privacy – we get back to control again.
As Charlie put it at the end: “the relationship between privacy, security and the state is broken.” The social contract we once relied on for privacy is being pulled and reshaped into a wider one about information control. But while personal control of all our own data is an ideal goal (one that us geeks love to profess), it’s fraught with complications – if we ourselves can’t make the right judgements about what to do with our personal information, what hope do institutions have?
Doubt is the key to reworking this social contract – a healthy dose of uncertainty, and a warning about context, whenever we deal with personal information. In some ways this means accommodating conflicting ideas: a “Digital Britain” made more reliable and efficient through IT excellence is underpinned by an assumption of good-quality data, but we must also entertain the possibility that the data is wrong or inappropriate, and build mechanisms around it so that we don’t make mistaken judgements, and so that mistakes can be easily corrected or reversed. Oddly, the closest equivalent I can think of right now is how any sane person should read Wikipedia – treat what you read as plausible, but never accept it as truth on blind faith alone; the community behind it does its best to keep it accurate, while knowing full well that mistakes will always be there.
Never perfect, but honest with yourself about your fallibility. That might just bring some sanity to the situation. But good luck recommending it to anyone in a position of power.
Footnote: Better and more coherent posts on the Doctorow/Stross talk can be found from Richard King and Chris Swan.