Thinking Digital live(ish)blog #2

Thinking Digital’s been really good. One of the things that has amazed me today has been the variety of topics and speakers. Kicking off was Paul Miller, the man behind School of Everything (matching people who want to learn with people who want to teach) and Social Innovation Camp (bringing together innovators and hackers to solve social problems). Paul was really quite inspired and energetic, calling out that “cyberspace is dead” and meatspace is all there is now (cf. my own discussion on how the barrier between ‘real’ and ‘virtual’ has come down) – technology should be more urgently directed at social problems. The kicker is: where is the money going to come from? School of Everything has a half-decent business model of taking commission, but SI Camp is still volunteer- and sponsor-powered; as the recession kicks in, will this be a problem? (Or maybe it won’t be – unemployed geeks volunteering to keep themselves sharp and improve their CVs, maybe?)

In a similar vein, there was also an interesting talk by Jim TerKeurst, director of the Institute of Digital Innovation, something I must confess has never really been on my horizon. Jim showcased some of the IDI’s fellows, who have worked on a diverse range of projects in the arts and technology – both art and artefacts. As a things geek, I was initially more impressed with the artefacts, such as the chandeliers made from recycled plastic, the silver nanotechnology or the chairs that change according to your mood & clothing – annoyingly the IDI’s site doesn’t seem to showcase much, which is a pity. That said, later on one of the performers supported by the IDI, The Sancho Plan, gave us a great demonstration of their combination of live percussion controlling weird and wonderful computer animation, which I really liked – check out one of their videos:

The morning sessions weren’t just about targeting social problems or supporting the arts and creativity, but also about cold hard business. Well, with a cuddly side. Alex Hunter of the Virgin Group (though a committed Diet Coke drinker) talked about how he’s reshaping the Virgin website for a Web 2.0 & social media outlook. It was for the most part a well-presented Cluetrain Manifesto, but it still had some interesting lessons; Alex regards Digg’s blog as the best corporate blog – not just because it’s written by the guys at the top, but because it’s a multiplicity of voices and they respond to their fans. Geeks with fans, who would have thought it? But then, Digg know the audience they’re blogging for, and it’s harder for non-tech brands, so be careful of using them as an example.

Still, Alex was evangelistic about embracing social media in the business world, and made it clear it works for brands big and small (citing Qype and Zappos as examples). We also got some insights into the Virgin process – they have Virgin Eye, a beautiful visualisation of mentions of their brands on the web (from over 5,000 sources), and other “labs”-style projects at Explore Virgin. They have a new website on the way, more of a community platform, which they’ve spent a year and a half listening, researching and creating – an impressive level of care and attention (although in a world where online fads come & go in days, it risks being stale on the day it launches).

From another business point of view, Harry Drnec talked about his experiences as MD of Red Bull. His philosophy was from the emotional end of the spectrum rather than the practical – find your consumer, touch them, thrill them. Marketing wank? Possibly. But there’s no denying how attached people are to Red Bull as a brand, despite the ridiculous price it sells at (Red Bull made it a policy not to cut prices to increase sales, preferring the premium cachet). Now he’s trying to do the same for computers – make them rely on as little skill on the user’s part as possible. A noble goal, but I hope they don’t confuse simplicity with intuitiveness; by making things too simple to use we risk destroying their power and potential. Intuitiveness is what counts.

Right, enough business. Next post – hardcore geekery and genuine leftfield afternoon weirdness.

Thinking Digital live(ish)blog – #1

First Thinking Digital post, here goes… This is a post adapted & extended from one I wrote this morning over at We Are Social.

Thinking Digital kicked off with a ‘social media masterclass’. Stowe Boyd, the chair of the session, opened with what he called the “strip-malling of the Web”. Controversially, he declared blogging ‘dead’, claiming it was a transitional stage between the traditional web and ‘social media’ – which he says doesn’t exist (at least not yet). There are valid points here – blogging’s format is derived from traditional news outlets’ own, and they have found it very easy to adapt to blogging as a result.

Boyd likens the takeover of blogging to “strip malling” – comparing the blogosphere to an urban landscape, where some big players in the mainstream media end up crowding out the smaller independent blogs. Those bloggers have since fled to streamed, more social and more egalitarian media such as Twitter – compare with the phenomenon of urban flight.

It’s a nice metaphor but I don’t agree with it – not least because blog platform traffic is steadily on the up. Some blog traffic will be disproportionately allocated to the big players, but this is just part of the long tail effect. And Twitter is no more egalitarian than blogs – some users, such as celebrities and news organisations, have tens or hundreds of thousands of followers, and with the exception of a few web gurus, ordinary users have several orders of magnitude fewer.

An aside on the growth thing – the blog platform with the most remarkable growth is Tumblr, which has shot up fivefold this past year to 2.5M unique users a year. Tumblr is sort-of blogging but also lifestreaming – short posts, asides & links are encouraged – maybe that’s where the future lies, as a hybrid (see also Friendfeed, or even Facebook, which now takes RSS feeds from your Flickr, delicious, blog, etc.).
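For the curious, here’s a rough sketch of the lifestream idea – merging a few RSS feeds into one date-ordered stream – in Python using the feedparser library. The feed URLs are just placeholders, and this is an illustration of the concept rather than how any of these sites actually do it:

```python
# A minimal lifestream sketch: merge several RSS/Atom feeds into one
# date-ordered stream. The URLs below are placeholders, not real feeds.
import feedparser
from time import mktime
from datetime import datetime

FEEDS = [
    "http://example.com/flickr.rss",
    "http://example.com/delicious.rss",
    "http://example.com/blog.rss",
]

def lifestream(urls):
    entries = []
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            # published_parsed is a time.struct_time; some entries lack it
            if getattr(entry, "published_parsed", None):
                when = datetime.fromtimestamp(mktime(entry.published_parsed))
                entries.append((when, entry.title, entry.link))
    # Newest first, like a Tumblr/Friendfeed-style stream
    return sorted(entries, reverse=True)

if __name__ == "__main__":
    for when, title, link in lifestream(FEEDS)[:20]:
        print(when.strftime("%Y-%m-%d"), title, link)
```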

Also there was Alex Hunter, head of web marketing at Virgin, who talked about the social networking site he is setting up around the Virgin brand, and Paul Smith, aka the Twitchhiker, who raised a lot of money for charity: water. In both cases, they were contrasted with what happens in ‘real life’ – strangers sitting next to each other on Virgin planes & trains rarely talk to each other in person, and old-fashioned hitchhiking is nowhere near as common as it used to be, over fears of kidnapping etc. In both cases we have a more atomised, social-capital-starved society, but interactions online with strangers have moved into this vacuum, giving context and building trust where there was none before.

As with many of these things, the best bits came in the free discussion at the end. JP Rangaswami talked about his desire for ‘biodegradable’ data – the idea that personal data should rot like dead matter – old blog posts, photos and so on should have a limited shelf life (this is related to the concept of bit rot for code). Interestingly this chimes with something Cory Doctorow said at the ORG privacy talk – that all data should either be less than two years old (so as to be accurate) or over 100 (so the person affected is long dead). Stowe Boyd chipped in that Twitter already does this, to a degree – it’s very hard to find Tweets older than three months. Needless to say, with my current fetish for preserving everything, I disagree.
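Just to make the ‘biodegradable data’ idea concrete, here’s a little sketch in Python of what giving a post a shelf life might look like – the two-year figure is only there to echo Cory’s number, not anything JP actually proposed:

```python
# A minimal sketch of 'biodegradable' data: each item carries a shelf
# life, after which it is treated as rotted and no longer served.
# The 2-year default is arbitrary, just echoing the figure above.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    body: str
    created_at: datetime
    shelf_life: timedelta = timedelta(days=2 * 365)

    def has_rotted(self, now=None):
        now = now or datetime.utcnow()
        return now - self.created_at > self.shelf_life

posts = [
    Post("Old conference writeup", datetime(2006, 5, 1)),
    Post("Yesterday's thoughts", datetime.utcnow() - timedelta(days=1)),
]

# Only the fresh posts survive; the rest quietly biodegrade.
fresh = [p for p in posts if not p.has_rotted()]
```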

There were also nice points on what happens when online media and the ‘real world’ collide. Thanks to sites like Meetup.com and events like Twestival, strangers are now using online tools to meet & make new friends in a social context (as opposed to Internet dating, which is usually one-on-one, unless you’re kinky/lucky). But there’s a downside as well – backchannels at real-life events can quickly lead to douchebaggery (think the rebellion against Sarah Lacy’s admittedly soft interview with Mark Zuckerberg at last year’s SXSW).

Right that’s part one for now. There’s more livetweeting of the conference over at @conferencebore. And if you’re here then don’t be shy – come up and say hello…

Twitter & fixing replies (aka “Why am I writing this?”)

So I got quoted in the Guardian Tech blog on the Twitter replies debacle. And quite frankly, nobody cares about this, but once your name’s on a national newspaper website it’s best to set the record straight, so here goes.

When someone replies to you in public on Twitter, they use the @ sign, like on IRC. When looking at others’ replies in your stream, you had the choice of either seeing every single @ reply that people you followed made, or just those to other people you also followed. 2% chose the former option, 98% chose the latter. Yesterday Twitter changed it to the latter only.

Cue backlash from the 2%. Cue backlash against the backlash (from me, amongst others). So Twitter freaked out, backed off and fessed up – discriminating between those you follow and those you don’t was proving not to be scalable. So the solution was to reverse the decision and make @s to everyone visible – unless they were sent through the “reply” button. You now have to rely on how others use the system rather than have control over it – the worst possible solution.

Intermission: Nobody gives a shit about this. Why am I writing about this? WHY?

A half-arsed solution if ever there was one, and one that caused me annoyance. I flippantly Tweeted my annoyance – which I still stand by – and this got a mention in the Guardian’s Tech blog (thanks guys). But this is one of those things that needs more than 140 characters to elaborate on, so here goes…

Twitter’s approach was a classic fail in consulting users. The @ was a community-created asset and Twitter messed with it for no apparent good reason. The cure was worse than the problem, and then they were forced into an icky compromise that suits no-one. The solution? They could have fessed up that it was causing problems, and announced a change well in advance. To help users prepare for it, they could extend the API to let clients like Twhirl & Tweetdeck know the user IDs of who you follow and who you don’t. Then the clients can make the discrimination between followed and not followed, instead of the server, and the choice can be exercised at that end. Scalable, user-chosen, none of the problems encountered above.
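To show what I mean by doing the filtering at the client end, here’s a rough sketch in Python. The tweet data and IDs are made up, and a real client would pull the followed-IDs list and the timeline from the Twitter API, but the test it applies locally would be the same:

```python
# A minimal sketch of client-side @reply filtering. The tweets and IDs
# here are made up; a real client would fetch the followed-ID list and
# the timeline from the Twitter API and apply the same test locally.

following = {101, 102, 103}          # user IDs the reader follows

timeline = [
    {"author_id": 101, "in_reply_to_user_id": None, "text": "Morning all"},
    {"author_id": 102, "in_reply_to_user_id": 103,  "text": "@carol agreed"},
    {"author_id": 102, "in_reply_to_user_id": 999,  "text": "@stranger hi"},
]

def visible(tweet, following, show_all_replies=False):
    """Show a tweet unless it's a reply to someone the reader doesn't follow
    (the old 98% default), or show everything if the reader opts in."""
    reply_to = tweet["in_reply_to_user_id"]
    if reply_to is None or show_all_replies:
        return True
    return reply_to in following

filtered = [t for t in timeline if visible(t, following)]
# With the default setting, the reply to user 999 is hidden client-side;
# flipping show_all_replies=True restores the old 2% behaviour.
```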

Right, that’s it. Of all the things I’m meant to be writing about, I didn’t expect to write at length about this. Better stuff to come, promise.

If you’re thinking about commenting, don’t – I’ve wasted enough of the planet’s time as it is, please don’t add to it.

Thinking Digital

From today till Friday I’m going to be at the Thinking Digital Conference as a guest blogger of the organisers (be sure to check out their blog too). Thinking Digital’s speakers include some of the top people in the digital sphere such as Russell Davies, Ben Hammersley, Adrian Hon, JP Rangaswami and Paul Smith (aka the Twitchhiker) and I’m really looking forward to it. I’m blogging about it both here (the more tech & geeky side) and on the We Are Social blog (the social media side), so keep your eyes peeled.

I’m planning to Tweet fairly extensively on a special dedicated @conferencebore account, with edited Match of the Day-style highlights of all the best bits on my usual @qwghlm account (to save overloading regular followers). And if you’re there too and want to meet up, feel free to @ or DM me, or just come up to me and say hi!

Going beyond privacy

Last week I finally, finally, signed up to be a member of the Open Rights Group and attended a talk chaired by Cory Doctorow and Charlie Stross on privacy and the digital age. And thanks to the red wine and general conviviality on the night, plus my own badly-taken notes, it’s taken me a little time to distill a few thoughts on the matter.

The talk was slightly disappointing in that it was a bit too general and strayed into unrelated aspects of the security debate as well. For example, talking about Richard Reid the shoebomber, and how if he had been successful we would not be taking our shoes off at airports now because nobody would have known how he did it, is a lovely anecdote but didn’t do much to inform the debate. Ditto complaining about school web filtering or the state of the tabloid press.

Still, there were plenty of good points and interesting notes. Charlie kicked things off when he pointed out how privacy is a relatively recent social construct; in medieval times peasants would live communally and the nobility would be constantly under the eyes of their servants; only through the industrial age and the institution of a middle class, segregated in their own houses, has privacy become a norm to expect.

Spin forward a couple of centuries, and we are no longer merely industrial, but sufficiently technically advanced that we produce data about everything, which spills out of our houses and lives. As a result the normative privacy we expect is an ever-more elusive ideal, now teetering on the edge of a paradox. On the one hand, people publish vast amounts of information willingly, whether it be on blogs, Twitter, Flickr or whatever. On the other, we clamour for the government to stop snooping on us, gathering massive databases or performing mass data-mining operations.

Are the two mutually incompatible – are we just hypocrites? Not quite. As our information-generating capacities have evolved, privacy has evolved into part of a more sophisticated framework; privacy is no longer a simple norm, but a beneficial side-effect of a much greater good – the ability to control what information about us is dispersed. With this control, we choose what we do and do not publish (for example, you’ll read my views on tech and politics here, but only very rarely will I disclose information about my family).

But Cory very astutely pointed out that this choice is rarely informed. We are very bad at valuing the impact of information at the time we release it, compared to how we might value it in future; a teenager posting pictures of himself drinking and smoking weed on Bebo might not care now, but when he’s applying for a job or running for office in a few decades’ time, they may come back to haunt him. Add into the mix the fact that taboos and social conventions change over time. Charlie provided the example of parents taking photos of their young children playing in the nude – regarded as innocent in the 1970s, but an activity now tainted by anti-paedophile hysteria.

So, we’re producing shitloads of personal data, the impact of which we are unable to judge accurately. That’s part one of the problem. Part two is that the data may not be accurate, or can be misleading, in the first place; if your browser history is full of pages about HIV, is that because you are a sufferer, or just researching on behalf of a friend, or helping your child with their biology homework? Without wider context, it’s easy to draw the wrong conclusion.

It gets really bad in part three – what happens when governments and corporations start using these vast pools of data, possibly mistakenly disclosed, possibly inaccurate, to make judgements about us? Would a health insurer, getting hold of the browser history in the example above, force you to undergo an HIV test or increase your premiums?

In this sense, privacy – in the sense of safeguarding information about us that is true, but which is not widely in the public domain and which we’d rather not let people know about – is not the real problem. The problem is what institutions do with all the information about us, true or not, publicly available or not, embarrassing or not. And that’s a bigger question than privacy – we get back to control again.

As Charlie put it at the end: “the relationship between privacy, security and the state is broken.” The social contract we once relied on for privacy is being pulled and reshaped into a wider one about information control. But while personal control of all our own data is an ideal goal (one that us geeks love to profess), it’s really one fraught with complications – if we ourselves can’t make the right judgements about what to do with our personal information, what hope do institutions have?

Doubt is the key to reworking this social contract – a healthy dose of uncertainty, and a warning about context, whenever we deal with personal information. In some ways this means accommodating conflicting ideas – a “Digital Britain” made more reliable and efficient through IT excellence is underpinned by an assumption of good-quality data, but we must also entertain the possibility that the data is wrong or inappropriate, and build mechanisms around it so that we don’t make mistaken judgements, and so that mistakes can be easily corrected or reversed. Oddly, the closest equivalent to this I can think of right now is how any sane person should read Wikipedia – treat what you read as plausible, but never be willing to accept it as truth on blind faith alone; the community behind it does its best to keep it accurate, while knowing full well that mistakes will always be there.

Never perfect, but honest with yourself about your fallibility. That might just bring some sanity to the situation right now. But good luck recommending that to anyone in a position of power.

Footnote: Better and more coherent posts on the Doctorow/Stross talk can be found from Richard King and Chris Swan.

Let no idea go to waste

On Wednesday night, I was at the London edition of the Digital Britain Unconference, a grassroots response to Lord Carter’s Digital Britain report, set up by Bill Thompson, Kathryn Corrick & others. Digital Britain is mostly concerned with preserving existing industries and interests, and fails to address many of the issues that matter to those of us who live & work in the online space – free software, free content, social media & user-generated content, net neutrality, privacy, government transparency and so on.

Truth be told, I was tired, had a terrible headache and was a rubbish contributor – I didn’t say much of any worth or interest. My only decent contribution was during a session on skills & training; I pointed out that people who need help to cross the digital divide can get as much training as they like, but if the OS on their computers keeps crashing or the websites they use have unusable interfaces, it will all be for naught. Sadly, I don’t know what the answer is – but educating & incentivising the elite (i.e. us geeks), training & research into UX design, and government leading by example is perhaps a pathway. It’s not all about the people at the bottom of the ladder.

I agreed with much of the stuff being said; some of it was a little woolly and wishy-washy, but that’s to be expected at these things, while consensus is still being sought. Encouraging the digitally excluded to take up technology and giving them the confidence not just to surf the web but to express themselves was one key point; fostering digital entrepreneurship and risk-taking and allowing startups to experiment more stuck with me even more.

That triggered a question: if we do let a thousand startups bloom, with all corners of the population participating & creating, producing all this content, all this code, all these ideas – what happens next?

The worst thing that could possibly happen is that it goes to waste. That someone comes up with a great piece of code as a proof of concept but hasn’t the time to bring it up to production level. Or a frustrated web user has a great idea for a piece of UX design but none of the coding skills to bring it about. Or a startup has a great idea two years too early and goes bust, and whoever tries following in their footsteps has to start from scratch again. Reinventing the wheel costs time and money, and given digital abundance and the interconnectedness of people online, it really shouldn’t happen.

Mechanisms such as open source and free content licensing (GPL, Creative Commons etc.) are great means of making sure others’ work gets improved upon, but they are still just mechanisms; getting people to coalesce around the work and support it is a separate challenge, which itself relies on making them aware of a project’s existence and encouraging & rewarding participation. For these to work properly, social and technical systems need to be set up – SourceForge does it for code, Wikipedia for knowledge.

But still, software projects die, data gets created and then wasted. To take an example in government – the ONS is busy creating a more accurate postcode geodata resource for the 2011 Census, which it is promptly going to destroy after it’s finished. Closer to home, I created an election map of the UK for the 2001 & 2005 elections, but I haven’t got the time or coding capability to extend it to 2010 (when there will be new boundaries). I’d be happy to pass it on to someone else – if there’s a willing taker – but it’s hard to find places other than this blog to advertise the fact.

To make the most of the ideas and data coming out of a more Digital Britain and prevent waste, might it be necessary to be a bit more proactive? Recently I’ve been reading about the great work the Archive Team do, and their efforts to preserve Geocities, and it’s been both interesting and inspiring. So here’s an idea: a library of lost ideas and data, with a dedicated staff to curate it. They would spend time going over submissions, wrapping them up with good metadata, categorising, rating, and promoting them to make it easy for visitors to find what they need.
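Purely to make the curation idea concrete, here’s a quick sketch in Python of what a catalogued submission might carry – the fields are my own guesses at useful metadata, not a worked-out schema:

```python
# A hypothetical record for the 'library of lost ideas' - the fields are
# illustrative guesses at the metadata curators might attach.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Submission:
    title: str
    summary: str                 # what the idea/data/code actually is
    state: str                   # e.g. "idea", "prototype", "dead project"
    licence: str                 # GPL, Creative Commons, etc.
    categories: list = field(default_factory=list)
    rating: int = 0              # curators' promotion score
    contact: str = ""            # who to talk to about taking it on
    submitted: date = field(default_factory=date.today)

# For instance, my own orphaned project from above might be catalogued as:
example = Submission(
    title="UK election map, 2001 & 2005",
    summary="Interactive constituency map; needs extending to the 2010 boundaries",
    state="needs a new maintainer",
    licence="unspecified",
    categories=["maps", "elections", "open data"],
    contact="original author via their blog",
)
```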

At the same time, we need better government legislation on freeing up the data the state collects, so it can be added in. Perhaps we should also look into whether data produced by failed and liquidated startups can be collected into this resource as well (assuming it couldn’t be sold on the open market, which may be hard if its future worth is difficult to ascertain).

Not only would the library take submissions, but the team would actively go out and seek takers, sharing submissions with their audience and ‘matchmaking’ to find people to take on projects others have passed on or left behind, asking “would anyone else like to take this on?”

Admittedly, there are more questions than answers. It may be a good idea to separate out data from ideas, to be honest. There are a lot of IP & copyright questions that I haven’t even begun to answer. It may not be possible to prove it would be a good return on taxpayers’ money, so maybe it’s best to run it with Lottery funding, as a creative experiment. And a lot of the best knowledge is tacit, in people’s heads, not written down anywhere – not to mention that knowing who is who is so important when trying to matchmake. So you’ll probably need a social networking function bolted on too, to take care of that.

And of course, someone might have already done this, and this entire post is just reinventing the wheel itself. But I’ll leave the idea out there (most of it hurriedly written before I forgot it on Wednesday night) and see what you think…