Archive for March, 2009

Blast from the past

31 March 2009

Five years ago I found a website called Futureme which allows you to send emails on a very long time delay. My interest piqued, I sent one to myself. And five years later – this weekend – it arrived.

To be brutally honest, it’s not that interesting, and I’m not going to reveal all the contents (mainly as it’s mostly about my friends rather than me and I want to protect their privacy). But here’s a snippet of what the 22-year-old me told myself:

Currently you’re looking forward to starting your Master’s at Edinburgh, your 23rd birthday next month, Arsenal winning the Champions League and not much else. Hope your love life isn’t as shit as it is now, and that you have a good job that isn’t boring software engineering.

It’s charmingly glib and hopeful – definitely a little less wordy than how I write now. But despite the change of style, the main questions I ask are about my job, my love life and how well Arsenal are doing – pretty much the same three things that preoccupy my mind now (though not in that order). And it’s a nicely hopeful email – perhaps more positive in outlook than I am now – and in retrospect 2004 turned out to be a pretty awesome year, so there’s something to be said for positivity.

I’m not the only one to have emailed myself, and it’s funny to think about why we do things like this. Partly out of hope, but also partly out of fear; the letter from the past acts as a marker in time, to crystallise what we were like in case we ever forget. And it scratches a certain itch – for all the ability of blogs and Facebook to track every day and every minute of our lives, there are still things we can and should only tell ourselves.

Right, I’m off to write another letter to myself in the future…


Finding Ada in Lyons tea shops

24 March 2009

As part of Ada Lovelace Day, I’ve signed up to the pledge “I will publish a blog post on Tuesday 24th March about a woman in technology whom I admire”, so here it is…

…actually I am going to cheat, ever so slightly, as my post is going to be about many women, not just one: the women of the J. Lyons company.

Lyons Tea Shops were the Starbucks of their day – one of the first national chains of teahouses, with over 200 branches nationwide at their peak and uniform pricing, food and drinks. They sold 150 million meals a year, and that meant a huge overhead in processing receipts, orders and stock distribution. Enter LEO, the Lyons Electronic Office, the first computer specifically designed for business applications.

One of LEO’s responsibilities was to manage each tea shop’s daily orders – time-sensitive (food went off quicker in those days), high-volume (thousands of meals a day) and sensitive to fluctuations such as weather or local events. As a result, the work was never-ending, involved a mountain of paperwork and was prone to mismatches between supply and demand (if a shop over-ordered one day, it would typically under-order the next, and so run out too early in the afternoon).

In computerising the tea shops, LEO was not just a triumph of engineering but of user-focused design; the system’s programmers toured the country’s tea shops, which were usually not only staffed but also managed by women. They would talk to the manageresses and area supervisors about their daily work, the problems they faced, and how the system could be designed to help. The result was a system designed not just from the top down but from the bottom up, one that fit the needs of its users. Forms were designed to be as easy to fill in as possible. Orders were telephoned in to a dedicated professional computer operator (also usually female). And data was sent back to manageresses about their shop’s performance, allowing them to make informed decisions.

Among them was Jean Cook, [...] area supervisor for five teashops in the City of London. ‘To me it looked like a row of kitchen cabinets,’ she says. Soon though, like other manageresses, she was converted to the advantages of working with the computer. Today she comments that later in her career, when she had moved to another company and computers were far more advanced, she was never provided with the quality of information she had been able to extract from LEO.

The result was that far from being technophobic or hostile to the newfangled computer and its new ways, the manageresses embraced it – after a little initial apprehension:

Ethel Bridson, who had only recently been promoted from manageress to assistant area manager, was apprehensive when she heard that her London shops were the first to be computerised. ‘I’ve stepped into trouble!’ was her first thought. But she was pleased to find that ‘her’ manageresses made the transition easily, and she appreciated the computer’s advantages. ‘We got everything we wanted in a much shorter time,’ she says. Soon the daily reports filed by manageresses to their supervisors began to include paeans of praise to LEO. ‘This is a great timesaver, work saver, and we are grateful for it,’ wrote the Wembley manageress.

By the way, did I mention the year? This all happened in 1954. Yes, 1954.

The charge could be levelled that, despite the co-operation of the manageresses and their hand in shaping the technology, LEO’s construction and programming were still a male-only affair. Even this is not true – one of the programmers on the LEO team was Mary Blood (later Mary Coombs), who worked on LEO and its successors throughout the 1950s.

Mary Blood did not come from a mathematical Cambridge background like many of her colleagues – she studied modern languages but was accepted onto the Lyons programming team after passing the internal computer course. She eventually became one of the team leaders on the project, staying with LEO Computers until Lyons sold off its computer division in 1964.

By any standard, women did not play a predominant role in LEO’s development, yet they still played an essential one – whether as programmers like Mary Blood or tea shop supervisors like Ethel Bridson, and many more besides – breaking down barriers and winning over Lyons’ mostly-male management. While it may not look like much today, it stood in marked contrast to the typical gender politics of the time. Over half a century ago, this was one of the first progressive (if now little-remembered) projects involving women in technology, and a lesson to us all today.

Notes & References: Most of this account of LEO is sourced from Georgina Ferry’s excellent A Computer Called LEO: Lyons Tea Shops and the World’s First Office Computer. The quotes above are from p. 127 of the paperback edition. There’s also a Radio 4 documentary on it and a webpage dedicated to its history.


WWF Earth Hour

24 March 2009

A quick intermission to tell you about something I’ve been getting up to at work this week. We’re working with WWF on Earth Hour, which takes place this weekend at 8.30pm – people around the world will be turning off their lights for an hour to take a stand against global warming and to lobby world governments to take more effective measures against climate change. There’s a stack of things you can do to help – including signing up, using the resources for spreading the word on your blog or website (such as the light switch widget you can see top right), and sharing your experience on YouTube and Flickr. Do check them all out – it’s for a very good (and very important) cause. Thanks.


Distracting ourselves to attention

17 March 2009

The origins of this post lie in a question I was asking myself: Why is Twitter, a ludicrously low-bandwidth service, popular now and not in 2000 or 2004? At the same time, I was reading a lot of posts about how Web 2.0 was going to turn our brains to goo. This post attempts to tackle both topics.

It seems you can’t move right now for excitement about Twitter and other forms of microcontent creation. Everything else is following Twitter’s “go small, go often” philosophy as well – just look at Facebook’s new lifestream interface. And with Twitter dominating current web thinking come fears of digital overload, warnings of destroyed attention spans (as Maggie Jackson argues in Wired), and even ludicrous claims that it will make you ill. On the other hand, our brains might handle this “snack culture” just fine, ultimately making us mentally fitter.

One problem: these are the wrong questions to be asking. The dichotomy of “brief and shallow” media vs. “long and deep” media is a false one. As Steven Berlin Johnson argued last year:

If we’re truly living in a snack culture, how come so many forms of entertainment – TV shows, games, movies – are getting longer? Most of us, I suspect, have had this experience lately: You tell a friend that they simply have to start watching one of the new long-format dramas, like Heroes or The Wire.

Right now, the movie everybody on the web is going on about is Watchmen, which clocks in at an impressive 2 hours 42 minutes and is itself an adaptation (for better or worse) of one of the most complex and layered graphic novels of the 20th century – the only such book to make TIME’s 100 best novels of all time, and one held in almost impossibly high esteem by the exact same demographic (geeks) that also loves to be on Twitter and other Web 2.0 sites.

Watchmen and the other examples Johnson cites and expounds upon in Everything Bad Is Good For You show that when consuming media, depth and brevity are not totally irreconcilable; you can concentrate on something difficult and concrete as well as enjoying content 140 characters at a time. And yet Twitter often gets demonised as a poster boy for the inanity of Web 2.0. Perhaps that’s no surprise, with its chief characteristics of brevity and ephemerality being the exact opposite of how we have consumed media in the past. Given that the “value” of old media was often measured by its length (writers being paid by the word) or durability (all those books and records on your shelves), what’s produced in new media is often characterised as comparatively worthless, particularly by those who cut their teeth in the old media.

It’s tempting to say this was inevitable, the Internet coming in and destroying business models, rational thought, the very fabric of the universe as we know it. Yet Twitter didn’t just become popular because of all that free bandwidth, server power and cute pictures of whales. In fact, we were technically capable of producing Twitter five years ago, with broadband adoption relatively high among early adopters (geeks) and Twitter’s paltry bandwidth needs easily handled by WAP or SMS on mobile. Instead, it arose for two reasons: firstly, because people wanted to take part in this communal self-expression – giving us the so-called “adrenaline shot” of information; and secondly, because we are now able to take part. We are not rats being stimulated by electrodes at a lab assistant’s will – Twitter is popular not just because we wanted it, but because we are now capable of it as well.

What if the vast rise in information volume and variety was not an inevitable result of the digital landscape, but rather a consequence of how we behaved socially and digitally? What if, as the “early adopters” – the bloggers and wiki editors and geeks – of the mid-2000s got more used to dealing with information – continually writing, creating, photographing, rating, tagging, chatting, texting, poking, etc. – and improved their brevity and clarity, they became more efficient at producing information? And once they became more efficient, they sought (or created) channels and media that better exploited that efficiency.

I’ll speak from my own personal experience here. Looking over this blog now and comparing it with 2004, you’ll see I write a lot less than I used to. A thought that would have taken me an entire blog post back then, I can now Tweet once, or post to Delicious with a little extended commentary. How? Because I got better at communicating briefly, and what used to need a blog post now fits into a linklog post or a Tweet.

This can even improve the quality of one’s writing. Once you’re confined to 140 characters (often fewer, if there are hashtags or a username to put in too), and in a world where a lot of people are Tweeting, you have to make what you say more interesting, not less, to stand out from the crowd. Extraneous words, useless adjectives and sucking up have to get the boot if you want to fit what you’re saying in succinctly. This is the exact opposite of growing sentences with David Foster Wallace, and while it may send fans of dense prose into despair, it does on the other hand encourage more simplicity and less pretension.

Of course, not everything on Twitter is gold – let’s face it, a lot of it is shit – but then again, a lot of everything is shit. Those of us who are fans of Twitter are comfortable with the level of shit and have filtering mechanisms that can deal with it, so that we can find the stuff we really do enjoy. This comes back to what Clay Shirky called “filter failure”: the idea that “information overload” is nothing new – it has been with us since the invention of the printing press – it’s just that our filters have been radically realigned and are now the responsibility of consumers rather than publishers.

The key is this: early adopters have not been distracted to death by consuming services such as microblogging, because we only moved onto them once we’d become publishers ourselves, mastering the production of information online through other forms of media. By doing so – immersing ourselves in those information flows and working out what works and what doesn’t – we became better consumers of information as well, although that took a little time; which is why Twitter lagged several years behind the rise of blogging.

The discussion here has been dominated by Twitter, but for “Twitter” read anything new, distracting and disturbing in Web 2.0 that is emerging or will emerge in the near future; the same principle should hold – we choose to adopt these things because we are able to as well as because we’re willing. This can appear puzzling to the outsider, and the only way to join in (if you want to join in) is to do what the geeks did: produce and consume more information to help train up your filters. As I have discussed before, we will still need some technological help in the filtering process. But no technology can relieve the brain of all of its filtering burden. It’s somewhat ironic that the best way of preventing information overload is to indulge in even more of it – but who said anything in this world was that straightforward?