Do cancer scare stories give you the Daily Mail?

One of the little things I’ve been running since the New Year is The (New) Daily Mail Oncological Ontology Project, a project devoted to tracking “the Daily Mail’s classification of inanimate objects into two types: those that cause cancer, and those that cure it.” It is a resurrection of The Daily Mail Oncological Ontology Project, started by an anonymous person (I have no idea who) and sadly defunct.

While this may look spectacularly anoraky, thanks to mah geek skills, it doesn’t take up much of my time; I have a simple Google News feed for anything with the word ‘cancer’ in it from the Daily Mail, and I autopost any relevant ones to Tumblr with a simple bookmarklet. Two minutes of my time, most days.
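For the curious, a bookmarklet like that is just a snippet of JavaScript stuffed into a bookmark’s URL. Here’s a minimal sketch of how one might work – the share endpoint and parameter names below are assumptions for illustration, not my actual bookmarklet:

```javascript
// Build the share URL a "post this page to Tumblr" bookmarklet would open.
// The endpoint and parameter names here are illustrative assumptions.
function tumblrShareUrl(pageUrl, pageTitle) {
  return "https://www.tumblr.com/share/link" +
    "?url=" + encodeURIComponent(pageUrl) +
    "&name=" + encodeURIComponent(pageTitle);
}

// Wrapped up as a bookmarklet, the same idea reads roughly:
// javascript:window.open('https://www.tumblr.com/share/link?url='
//   + encodeURIComponent(location.href)
//   + '&name=' + encodeURIComponent(document.title))
```

The only real work is URL-encoding the page address and title so they survive being passed as query parameters.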

I’ve been going at this since the start of the new year, and I’ve realised after three months that there’s some interesting data. Most pertinent is the frequency of these stories at particular times. Here beginneth the geekery, after manually counting through the archives:

For the period January 12th (when I started the Tumblelog) to February 14th (34 days, inclusive), there were 26 cancer scare or cure stories on the Daily Mail website – that’s 0.76 a day. In the equivalent period afterwards – February 15th up to and including March 21st – there were only 14 (0.41 a day). In the month of March alone, there were just 9 (0.29 a day).
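For anyone wanting to check my sums, the per-day rates are just the raw story counts divided by the days in each window:

```javascript
// Recompute the per-day story rates quoted above from the raw counts.
const rate = (stories, days) => (stories / days).toFixed(2);

console.log(rate(26, 34)); // Jan 12th - Feb 14th -> "0.76"
console.log(rate(14, 34)); // Feb 15th - Mar 21st -> "0.41"
console.log(rate(9, 31));  // March as a whole    -> "0.29"
```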

What happened on February 14th? That day, this news broke:

Jade Goody ‘has months to live’
Jade Goody has been told she has only months to live, her publicist Max Clifford has said. Mr Clifford said doctors at the Royal Marsden Hospital in London broke the news to the former reality television star on Friday

Undoubtedly, Jade Goody’s plight had been charted in the press since her initial diagnosis back in August 2008. But with the news of her imminent death, the volume of Jade-related coverage shot up – from 28 stories mentioning her between January 12th and February 14th, to 66 between February 15th and March 21st.

So, as coverage of Jade’s cancer shot up, coverage of the speculative pseudo-science of “will x cure/cause cancer?” plummeted. Did the cancer researchers, whose findings are swallowed and regurgitated as cast iron fact by Mail hacks, suddenly stop publishing their research, out of respect for Jade? Or was it business as usual – the research still being published, but the hacks, with a much juicier, PR-friendly story on their hands, were too busy to write them up or even care? Not much chance of it being the former, I reckon.

If that is the reason, it absolves the Daily Mail of at least one accusation laid against the press (as outlined in last week’s Charlie Brooker’s Newswipe): that the excessive coverage of Jade Goody’s death would only alarm and upset cancer sufferers and their families, hampering them in their battle against the disease.

But the Daily Mail weren’t irresponsible in their sensational Jade Goody coverage. No, they’re irresponsible all the bloody time. The truth of the matter is that the Daily Mail loves to scare the fuck out of you about cancer no matter when. And if there isn’t a celebrity slowly dying in the news for them to gawp at, then they’ll resort to publishing anything they can find with the word ‘cancer’ in it. It doesn’t matter if it cures it or gives you it. It doesn’t matter if it’s a dying reality TV star or a paper on the dietary effects of cabbages. As long as the spectre of the disease is there to keep you on your toes (that is, if you haven’t lost them to cancer), then they’ll use it against you.

Keeping calm

Amidst all this talk of war, international terrorism and financial meltdown, one of the odd curiosities of the world today is the Keep Calm and Carry On meme, as covered recently by both the BBC and the Guardian. It’s been rich pickings for academics falling over themselves to analyse quite what its appeal is – nostalgia for a more sedate and less emotional time, possibly, or a counterpoint to the usual hysteria we’re continually exposed to.

Not everybody likes it. James Graham dislikes the message it conveys:

“Carrying on” is a much overrated concept. The fact is we can’t carry on as we have done for the past twenty, thirty years. The economic collapse was caused by people spending far too much time “keeping calm and carrying on” instead of questioning what they were doing. Climate change is a similar tragedy waiting to happen. In whose interest is all this “calm” supposed to serve?

It depends on what you mean by “calm” – it could mean sedated, blissful, oblivious, or it could mean unflappable, clear-headed, rational, thoughtful. The designers of the poster probably had the former in mind: with a putative invasion of Britain under way, it was meant to reassure the population. But in an age where the government and media both love to terrify the fuck out of us with tales of non-existent threats, keeping calm has come to mean the latter. Propaganda from the last genuine threat to this country’s existence has been reclaimed by geeks to rebuff the insanity surrounding a much lesser threat today.

“Carrying on” I have much less time for; James is absolutely correct in pointing out that carrying on – whether it’s racking up debts to fund our fetish for property, burning away every last hydrocarbon in the Earth’s crust, or thinking democracy can be inflicted with the barrel of a gun – is simply not an option unless we want to bring down civilisation with it. But all the changes we are going to have to make must be orderly, not made in panic or anger – so keeping calm is essential.

There may be other reasons why the poster is so popular. Its rediscovery has coincided with a trend for sans serif fonts, in particular Gill Sans and Johnston, both of which strongly resemble the font on the poster. As a result designers love it – its boldness and minimalism reflect a current trend, while its simple design lends itself to being remixed or mashed up with ease. There’s a Flickr pool devoted to variants on the poster, and Ben Terrett has summed it up: “We might as well admit we’re addicted”. Not only is it easy to replicate or parody, but the web allows us to pass on homages and pisstakes effortlessly. It’s ironic that a paper propaganda poster, supposedly the antithesis of Web 2.0 and digital, conversation-led media, ends up being so popular.

As a result, it was only a matter of time before the meme went full circle and ended up being used for the current government’s propaganda. And lo and behold, the Home Office are using the design in their current “You have the right NOT to remain silent” campaign. Unfortunately for them, it bombs horribly. The adverts are too clever for their own good, relying on a play on words: “We’d like to give you a good talking to” in this case means they want to share more information with local communities and get their feedback. I’m sure there was much back-slapping at the ad agency when they came up with it.

When you have a design that is all about one simple, bold message, smug irony and nuance go out of the window. At first glance I thought the posters were just a new level of Home Office authoritarianism with a design twist; the real message behind them wasn’t clear until I looked it up on the Home Office website. I wonder what the 99% of the population who can’t be bothered to read the small print think the campaign is really about. The perils of trying to co-opt something that has been subverted and taken away from you have been laid bare.

A final point on the art form: even when your message is simple, it can create more questions than clarity. Matt Jones’ retort to Keep Calm And Carry On – “Get Excited and Make Things” – has had an enthusiastic reaction from bloggers, suggesting that not everyone is content with keeping calm either. But it leaves me with more questions than answers. Excited about what, exactly? Should we really be making more stuff (physical stuff, that is)? Doesn’t this go against the we’re-consuming-too-many-resources concept of unproduct? Making things is cool, but perhaps we need to be a bit, er, calmer about it.

Blast from the past

Five years ago I found a website called Futureme, which allows you to send emails on a very long time delay. My interest piqued, I sent one to myself. And five years later – this weekend – it arrived.

To be brutally honest, it’s not that interesting, and I’m not going to reveal all the contents (mainly as it’s mostly about my friends rather than me, and I want to respect their privacy). But here’s a snippet of what the 22-year-old me told myself:

Currently you’re looking forward to starting your Master’s at Edinburgh, your 23rd birthday next month, Arsenal winning the Champions League and not much else. Hope your love life isn’t as shit as it is now, and that you have a good job that isn’t boring software engineering.

It’s charmingly glib and hopeful – definitely a little less wordy than how I write now. But despite the change of style, the main questions I ask are about my job, my love life and how well Arsenal are doing – pretty much the same three things that preoccupy my mind now (though not in that order). And it’s a nicely hopeful email – perhaps more positive in outlook than I am now – and in retrospect 2004 turned out to be a pretty awesome year, so there’s something to be said for positivity.

I’m not the only one to have emailed myself, and it’s interesting to wonder why we do things like this. Partly out of hope, but also partly out of fear; the letter from the past acts as a marker in time, crystallising what we were like in case we ever forget. And it scratches a certain itch – for all the ability of blogs and Facebook to track every day and every minute of our lives, there are still things we can and should only tell ourselves.

Right, I’m off to write another letter to myself in the future…

Finding Ada in Lyon’s tea shops

As part of Ada Lovelace Day, I’ve signed up to the pledge “I will publish a blog post on Tuesday 24th March about a woman in technology whom I admire”, so here it is…

…actually I am going to cheat, ever so slightly, as my post is going to be about many women, not one. This post is about the women of the J. Lyons company.

Lyons’ tea shops were the Starbucks of their day – one of the first national chains of teahouses, with over 200 branches nationwide at their peak, offering uniform pricing, food and drinks. 150 million meals were sold a year, and that meant a huge overhead in processing receipts, orders and stock distribution. Enter LEO, the Lyons Electronic Office, the first computer specifically designed for business applications.

Part of LEO’s responsibilities was to manage each tea shop’s daily orders – time-sensitive (food went off quicker in those days), high-volume (thousands of meals a day) and sensitive to fluctuations such as weather or local events. As a result, the work was never-ending, involved a mountain of paperwork, and was prone to mismatches between supply and demand (if a shop over-ordered one day, it would typically under-order the next, and so run empty too early in the afternoon).

By computerising the tea shops, LEO was not just a triumph of engineering but of user-focused design; the system’s programmers toured the country’s tea shops, which were usually not only staffed but also managed by women. They would talk to the manageresses and area supervisors about their daily work, the problems they faced, and how they could design the system to help. The result was a system designed not just from the top down but the bottom-up, one that fit the needs of its users. Forms were designed to be as easy to fill in as possible. Orders were telephoned in to a dedicated professional computer operator (also usually female). And data was sent back to manageresses about their shop’s performance allowing them to make informed decisions.

Among them was Jean Cook, […] area supervisor for five teashops in the City of London. ‘To me it looked like a row of kitchen cabinets,’ she says. Soon though, like other manageresses, she was converted to the advantages of working with the computer. Today she comments that later in her career, when she had moved to another company and computers were far more advanced, she was never provided with the quality of information she had been able to extract from LEO.

The result was that far from being technophobic or hostile to the newfangled computer and its new ways, the manageresses embraced it – after a little initial apprehension:

Ethel Bridson, who had only recently been promoted from manageress to assistant area manager, was apprehensive when she heard that her London shops were the first to be computerised. ‘I’ve stepped into trouble!’ was her first thought. But she was pleased to find that ‘her’ manageresses made the transition easily, and she appreciated the computer’s advantages. ‘We got everything we wanted in a much shorter time,’ she says. Soon the daily reports filed by manageresses to their supervisors began to include paeans of praise to LEO. ‘This is a great timesaver, work saver, and we are grateful for it,’ wrote the Wembley manageress.

By the way, did I mention the year? This all happened in 1954. Yes, 1954.

It could be argued that, despite the co-operation of the manageresses and their hand in shaping the technology, LEO’s construction and programming was still a male-only affair. Even this is not true – one of the programmers on the LEO team was Mary Blood (later Mary Coombs), who worked on LEO and its successors throughout the 1950s.

Mary Blood did not come from a mathematical Cambridge background like many of her colleagues – she studied modern languages but was accepted onto Lyon’s programming team after passing the internal computer course. She eventually became one of the team leaders on the project, staying with LEO Computers until Lyon’s sold off its computer division in 1964.

By any standard, women did not play a predominant role in LEO’s development, yet they still played an essential one – whether as programmers like Mary Blood or tea shop supervisors like Ethel Bridson, and many more besides – breaking down barriers and winning over Lyons’ mostly-male management. While it may not look like much today, it stood in marked contrast to the typical gender politics of the time. Over half a century ago, this was one of the first progressive (if now little-remembered) projects involving women in technology, and a lesson to us all today.

Notes & References: Most of this account of LEO is sourced from Georgina Ferry’s excellent A Computer Called LEO: Lyons Tea Shops and the World’s First Office Computer. The quotes above are from p. 127 of the paperback edition. There’s also a Radio 4 documentary on it and a dedicated webpage to its history.

WWF Earth Hour

A quick intermission to tell you something I’ve been getting up to at work this week. We’re working with WWF Earth Hour which is this weekend at 8.30pm – people from around the world will be turning off their lights for an hour to make their stance known against global warming and to lobby world governments to take more effective measures against climate change. There’s a stack of things you can do to help – including signing up, resources for spreading the word on your blog or website (such as the light switch widget you can see top right) and sharing your experience on YouTube and Flickr. Do check them all out – it’s for a very good (and very important) cause. Thanks.

Distracting ourselves to attention

The origins of this post came from a question I was asking myself: Why is Twitter, a ludicrously low-bandwidth service, popular now and not in 2000 or 2004? And at the same time I read a lot of posts about how Web 2.0 was going to turn our brains to goo. This post attempts to tackle both topics.

It seems you can’t move right now for excitement about Twitter and other forms of microcontent creation. Everything else is following Twitter’s “go small, go often” philosophy as well – just look at Facebook’s new lifestream interface. And with Twitter dominating current web thinking come fears of digital overload, warnings of destroyed attention spans (as Maggie Jackson argues in Wired), and even ludicrous claims that it will make you ill. On the other hand, this “snack culture” could be handled by our brains just fine, ultimately making us mentally fitter.

One problem: these are the wrong questions to be asking. The dichotomy of “brief and shallow” media vs. “long and deep” media is a false one. As Steven Berlin Johnson argued last year:

If we’re truly living in a snack culture, how come so many forms of entertainment – TV shows, games, movies – are getting longer? Most of us, I suspect, have had this experience lately: You tell a friend that they simply have to start watching one of the new long-format dramas, like Heroes or The Wire.

Right now, the movie everybody on the web is talking about is Watchmen, which clocks in at an impressive 2 hours 42 minutes – itself an adaptation (for better or worse) of one of the most complex and layered graphic novels of the 20th century, the only such book to make TIME’s all-time 100 novels list, and one held in almost impossibly high esteem by the exact same demographic (geeks) that also loves to be on Twitter and other Web 2.0 sites.

Watchmen (and the other examples Johnson cites and expounds upon in Everything Bad Is Good For You) shows that when consuming media, depth and brevity are not irreconcilable; you can concentrate on something difficult and concrete as well as enjoy content 140 characters at a time. And yet Twitter often gets demonised as the posterboy for the inanity of Web 2.0. Perhaps that’s no surprise, given that its chief characteristics are brevity and ephemerality, the exact opposite of how we have consumed media in the past. Given that the “value” of old media was often measured by its length (writers being paid by the word) or durability (all those books and records on your shelves), what’s produced in new media is often characterised as comparatively worthless, particularly by those who cut their teeth in the old media.

It’s tempting to say this was inevitable – the Internet coming in and destroying business models, rational thought, the very fabric of the universe as we know it. Yet Twitter didn’t just become popular because of all that free bandwidth, server power and cute pictures of whales. In fact, we were technically capable of producing Twitter five years ago, with broadband adoption relatively high among early adopters (geeks), and Twitter’s paltry bandwidth needs easily handled by WAP or SMS on mobile. It arose for two reasons: firstly, because people wanted to take part in this communal self-expression – giving us the so-called “adrenaline shot” of information. But we are not rats being stimulated by electrodes at a lab assistant’s will – and so, secondly, Twitter is popular not just because we wanted to take part, but because we are now able to as well.

What if the vast rise in information volume and variety was not an inevitable result of the digital landscape, but rather a consequence of how we behaved socially and digitally? What if the “early adopters” – the bloggers and wiki editors and geeks of the mid-2000s – as they got more used to dealing with information (continually writing, creating, photographing, rating, tagging, chatting, texting, poking, etc.) and improved their brevity and clarity, simply became more efficient at producing information? And once they became more efficient, they sought (or created) channels and media that better exploited that efficiency.

I’ll lead from my own personal experience here. Looking over this blog now and comparing it with 2004, you’ll see I write a lot less than I used to. What thought would have taken me an entire blog post back then, I can now Tweet once, or post to Delicious with a little extended commentary. How? Because I got better at communicating more briefly, and what used to take a blog post now can take a linklog post or a Tweet.

This can even improve the quality of one’s writing – once you’re confined to 140 characters (often less, if there are hashtags or a username to put in too) – and in a world where a lot of people are Tweeting, you have to make it more interesting, not less, to stand out from the crowd. Extraneous words, useless adjectives and sucking up have to get the boot, if you want to fit what you’re saying in succinctly. This is the exact opposite of growing sentences with David Foster Wallace, and while it may send fans of dense prose into despair, it does on the other hand encourage more simplicity and less pretension.

Of course, not everything on Twitter is gold – let’s face it, a lot of it is shit – but then again, a lot of everything is shit. Those of us who are fans of Twitter are comfortable with the levels of shit and have filtering mechanisms that can deal with them, so that we can find the stuff we really do enjoy. This comes back to what Clay Shirky called “filter failure”: the idea that “information overload” is not the real problem – it has been with us since the invention of the printing press – it’s just that our filters have been radically realigned, and are now the responsibility of consumers rather than publishers.

The key is this: early adopters have not been distracted to death by consuming services such as microblogging, because we only moved onto them once we’d become publishers ourselves, mastering the production of information online through other forms of media. By doing so – immersing ourselves in those information flows and working out what works and what doesn’t – we became better consumers of information as well. That took a little time, which is why Twitter lagged several years behind the rise in blogging.

Discussion here is dominated by Twitter, but for “Twitter”, read anything new, distracting and disturbing in Web 2.0 that is emerging or will emerge in the near future: the same principle should hold – we choose to adopt these things because we are able to as well as because we’re willing. This can appear puzzling to the outsider, and the only way to join in (if you want to join in) is to do what the geeks did: produce and consume more information to help train up one’s filters. As I have discussed before, we will still need some technological help in the filtering process. But no technology can relieve the brain of all of its filtering burden. It’s somewhat ironic that the best way of preventing information overload is to indulge in even more of it – but who said anything in this world was that straightforward?