Why it’s not just about teaching kids to code

The Guardian have launched a Digital Literacy Campaign, led by an article entitled “Britain’s computer science courses failing to give workers digital skills”:

In higher education, although universities such as Bournemouth are praised by employers for working closely with industry, other universities and colleges have been criticised by businesses for running a significant number of “dead-end” courses in computer science, with poor prospects of employment for those enrolled.

And from my own anecdotal experience, that’s correct. For one reason or another, I’ve been reviewing CVs and interviewing candidates for developer roles at work over the last couple of months, and some of them were awful. They tended to have degrees or other qualifications from mid- and lower-tier universities and colleges, but had trouble telling the difference between PHP and JavaScript code, or were unable to provide even stock answers to well-worn problems such as sorting.

(Feel free to call me out as a snob on this one; I read my Bachelor’s in Computer Science at Cambridge, one of the few universities in this country where the majority of the course is spent not coding.)

Anecdotal though my own experience and many of the quotes in the article are, the Guardian’s campaign is laudable and I back teaching kids to code in schools. But there are two issues I have with the campaign – it’s not just about teaching, and it’s not just code that needs to be taught (or learned).

Firstly, “digital literacy” is as broad a term as “literacy” or “numeracy”, and there are a range of different issues at stake. Take this complaint in the above article:

Ian Wright, the chief engineer for vehicle dynamics with the Mercedes AMG Petronas Formula One team, said: “There’s definitely a shortage of the right people. What we’ve found is that somebody spot on in terms of the maths can’t do the software; if they’re spot on in terms of the software, they can’t do the maths.”

Kim Blake, the events and education co-ordinator for Blitz Games Studios, said: “We do really struggle to recruit in some areas; the problem is often not the number of people applying, which can be quite high, but the quality of their work. We accept that it might take a while to find a really good Android programmer or motion graphics artist, as these are specialist roles which have emerged relatively recently – but this year it took us several months to recruit a front-end web developer. Surely those sorts of skills have been around for nearly a decade now?”

In a highly critical report last month, school inspectors warned that too many information and communication technology (ICT) teachers had limited knowledge of key skills such as computer programming. In half of all secondary schools, the level many school leavers reach in ICT is so low they would not be able to go on to advanced study, Ofsted said.

Computer Science is not Programming, and Programming is not Web Development, and Web Development is not ICT. What we have is a whole spectrum of different demands and of different roles, all of which have technology in common but often little else; producing computer models for a Formula One team or a CGI studio is going to demand a PhD-level (or near) grasp of maths or physics, combined with knowledge of highly specialised programming. Developing a front-end for a website still demands a reasonable degree of intelligence, but also a wider knowledge of languages and coding, and a better appreciation of more subjective issues such as usability, browser standards (or the lack of them) and aesthetics. Meanwhile, being adept with ICT doesn’t mean you have to be a genius or an expert in code, but it needs to be more than knowing how to make a PowerPoint presentation: how to use a computer properly and not just by rote, how to be confident in manipulating and understanding data, how to automate tedious tasks, how to creatively solve a problem.

Today technology is integrated into our lives to a quite frankly frightening degree. Should that mean everyone has to learn how to code? No. Should it mean everyone has an understanding of the basics, an appreciation of what computers can and can’t do, and the ability to use that knowledge to solve problems by themselves? Yes. But making everyone code is not the answer, and to me the Guardian is taking a bit of an “if it looks like a nail” approach to the problem of digital illiteracy.

That said, from my experience of the graduate CVs I read, the teaching of coding, as a practice, does need to improve. University courses should be better assessed and monitored, and the “sausage factories” closed. Teaching how to code should be integrated into related subjects such as maths and physics wherever possible (and it’s worth noting many places do this well already). It shouldn’t just be coding that is taught, but how to define a problem, break it down, and solve it. If anything, that’s more important – programming languages and technologies change all the time (e.g. how many Flash developers do you think will be about in five years’ time?) but the problems usually remain the same.

Secondly, there’s a spectrum of challenges, but there’s also a spectrum of solutions. It’s not just schools and universities that need to bear the burden. As I said, coding is a practice. There’s only so much that can be taught; an incredible amount of my knowledge comes from experience. Practical projects and exercises in school or university are essential, but in my experience, none of that can beat having to do it for real. Whether it’s for a living, or in your spare time (coding your own site, or taking part in an Open Source project), the moment your code is being used in the real world and real people are bitching about it or praising it, you get a better appreciation of what the task involves.

So it’s not just universities and schools that need to improve their teaching if we want to produce better coders. Employers should take a more open-minded approach to training staff to code – those that are keen and capable – even if it’s not part of their core competence. Technology providers should make it easier to code on their computers and operating systems out-of-the-box. Geeks need to be more open-minded and accommodating to interested beginners, and to build more approachable tools like Codecademy. Culturally, we need to treat coding less like some dark art or the preserve of a select few.

On that last point, the Guardian is to be applauded for barrier-breaking, for making the topic a little less mysterious and for engaging with it in a way I’ve seen precious little of from any other media outlet. And the page on how to teach code is a great start – it should really be called how to learn code, because it’s a collection of really useful resources. For what it’s worth, I wrote a blog post nearly three years ago on things to get started on – though if I wrote it today I would probably drop the tip on regular expressions (what was I thinking?).

If I had one last thing to add, it would be that all of the Guardian’s campaign, and the support from Government, is framed around coding for work. Which is important – we are in the economic doldrums and the UK cannot afford to fall behind other nations. But, at the same time, the first code a beginner writes is going to be crap, and not very useful. Even when they get to a moderately competent level, it won’t be very useful beyond the unique task it was built for. Making really good code that is reusable and resilient is bloody hard work, and it would be off-putting to make the beginner judge themselves against that standard.

We need to talk a lot more about why we code as well as how we code. I don’t code for coding’s sake, or just because I can make a living out of it. I code because it’s fun solving problems, it’s fun making broken things work, it’s fun creating new things. Taking the fun out of it, making it merely a “transferable skill” for economic advantage, will suck the joy out of it just like management-speak sucks the joy out of writing. It doesn’t have to be like that. Emphasise the fun, emphasise the joy of making the infernal machine do something you didn’t think it was possible to do, encourage the “Isn’t that cool?” or “Doesn’t that make life easier?”. Get the fun bit right first, and the useful bit will follow right after.

@whensmybus gets a whole lot better

Wow. It’s been nine days since @whensmybus was released and the feedback has by and large been positive. It’s not all been plain sailing – the odd bug or two made it past my initial testing, and a database update I tried inadvertently corrupted it all. My thanks go to @LicenceToGil, @randallmurrow and @christiane who were all unlucky enough to manage to break it. As a result, I’ve ironed out some of the bugs, and even put in some unit testing to make sure new deployments don’t explode. I now feel this is A Proper Software Project and not a plaything.

Bugfixes are all very well, but… by far and away the most requested feature was to allow people to get bus times without needing a GPS fix, to allow use on Twitter via the web, a desktop app or a not-so-smartphone. And although using GPS is easier, and cool and proof-of-concepty, it’s plain to see that making access to the app as wide as possible is what makes it really useful. So, from now on you can check the time of a London bus by specifying the location name in the Tweet, such as:

@whensmybus 55 from Clerkenwell

This will try and find the nearest bus stop in Clerkenwell for your bus – in this case, the stops on Clerkenwell Road, which are probably what you’d want. The more precise the location given, the better; place names are OK, street names are better. It works great on postcodes and TfL’s SMS bus stop codes as well.
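For the technically curious, pulling the route number and the optional location out of a Tweet like that is the easy bit, and happens before any geocoding. A minimal sketch of the idea in Python – not the bot’s actual parsing code:

import re

# Sketch only, not the real @whensmybus parser: pull a route number and an
# optional "from <place>" location out of the text of a Tweet
TWEET_PATTERN = re.compile(
    r"@whensmybus\s+(?P<route>\w+)(?:\s+from\s+(?P<location>.+))?",
    re.IGNORECASE)

def parse_tweet(text):
    match = TWEET_PATTERN.match(text.strip())
    if not match:
        return None
    return match.group("route"), match.group("location")

print(parse_tweet("@whensmybus 55 from Clerkenwell"))  # ('55', 'Clerkenwell')
print(parse_tweet("@whensmybus 135"))                  # ('135', None)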

The geocoding that makes this possible is thanks to the Yahoo! PlaceFinder API, so my thanks go to them for making the service free for low-volume use. (Aside: you may ask, why not use Google Maps? Because the Google Maps API terms only allow it to be used to generate a map, not for other geo applications like this.)
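If you’re curious what that geocoding step looks like, here’s a rough sketch of a PlaceFinder call. I should stress the endpoint, parameters and response fields here are from my reading of the docs and may differ, and YOUR_APP_ID is a placeholder:

import json
import urllib.parse
import urllib.request

def geocode(place_name, app_id="YOUR_APP_ID"):
    # Bias the search towards London, and ask for JSON ("flags": "J") not XML
    params = urllib.parse.urlencode({
        "q": place_name + ", London",
        "flags": "J",
        "appid": app_id,
    })
    url = "http://where.yahooapis.com/geocode?" + params
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    results = data["ResultSet"]["Results"]
    if not results:
        return None  # geocoding is tricky and not foolproof, remember
    # PlaceFinder returns coordinates as strings
    return float(results[0]["latitude"]), float(results[0]["longitude"])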

So, play away, and let me know what you think. Of course, it may not always work – geocoding is tricky and not foolproof; if it doesn’t, please let me know in the comments here, or just ping me at @qwghlm on Twitter.

More information and FAQs can be found on the about page, and the technically-minded of you might want to check out the code on GitHub.

Introducing @whensmybus

A few weeks ago TfL put all their information from Countdown, the service they use to provide bus arrival times, online. There’s a TfL Countdown website where you can enter a bus stop name or ID number, and find out the next buses from that stop.

But, it’s a bit fiddly. The main website doesn’t automatically redirect you to the mobile version if you are on a phone. If you type in a location (e.g. my local Tube station, “Limehouse Station”), you have to pick a match for the location first (from two identically-named options), then go through a second screen asking you to find a bus stop, and only then do you get the relevant times. On a phone it just feels fiddly and frustrating, especially when I know my phone has GPS in it and knows my location anyway.

Update/correction There is, as it turns out, the ability to search by geolocation on the mobile site; it’s just that on a mobile browser I get the main website and don’t get redirected to the special mobile site, which is why I never knew about it (thanks to Ade in the comments for pointing this out).

If only there was a mobile-friendly, geolocation-aware, real-time way of fetching information. Oh wait. There is. It’s called Twitter. Twitter has geolocation on Tweets (if you opt in) and an API to fetch and send messages, so we already have a system in place for our needs.
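To show how little plumbing that needs, here’s a minimal sketch of a receive-and-reply cycle using the Tweepy library (which the bot uses – see the tech notes below). This is not the bot’s actual code: the keys are placeholders and look_up_bus_times is a hypothetical stand-in for the real lookup logic.

import tweepy

def look_up_bus_times(text, lat, lng):
    # Hypothetical stand-in for the real route-and-stop lookup
    return "135 to Crossharbour: 3 min"

# Placeholder credentials; real ones come from Twitter's developer site
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

for tweet in api.mentions_timeline():  # the real bot would pass since_id here
    if tweet.coordinates:  # GeoJSON point; only present if the user opted in
        lng, lat = tweet.coordinates["coordinates"]
        api.update_status(
            status="@{0} {1}".format(tweet.user.screen_name,
                                     look_up_bus_times(tweet.text, lat, lng)),
            in_reply_to_status_id=tweet.id)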

I owe a big debt of gratitude to Adrian Short, who wrote a Ruby script to pull bus times from TfL. TfL have not officially released an API for Countdown just yet, but Adrian found it, and it’s there and accessible – providing the data in JSON format for each stop. That got me thinking – if that data is available and can be parsed quickly and easily, why not make a Twitter bot for it?
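In outline, consuming that feed is just fetching a URL and decoding the JSON. In the sketch below the endpoint and field names are assumptions for illustration – the feed is unofficial and undocumented, so the real ones may well differ:

import json
import urllib.request

# Assumed endpoint for illustration; the real unofficial feed may differ
STOPBOARD_URL = "http://countdown.tfl.gov.uk/stopBoard/{stop_code}"

def arrivals_for_stop(stop_code):
    with urllib.request.urlopen(STOPBOARD_URL.format(stop_code=stop_code)) as resp:
        board = json.load(resp)
    # Returns e.g. [("55", "Oxford Circus", "3 min"), ...]; field names assumed
    return [(a["routeName"], a["destination"], a["estimatedWait"])
            for a in board.get("arrivals", [])]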

With that, @whensmybus was born, and is now in beta. Try it out now if you like. Make sure your Tweet has geolocation turned on (for which you’ll need a GPS-capable smartphone), and send a message like:

@whensmybus 135

Or whatever bus you are looking for. Within 60 seconds, you’ll get a Tweet back with the times of the next buses for that route, in each direction, from the stops closest to your location.

Why each direction? Specifying a direction is fiddly and ambiguous; bus routes wind and twist, and some of them are even circular, so “northbound” and “southbound” are not easy things to parse. The name of your destination can have ambiguous spellings, and I haven’t yet got round to tying it in with a geocoding service like Google Maps. So, at the moment the bot simply tells you buses in both directions from the stops nearest to you. I might change this in future, once I’ve got my head around geolocation services and fuzzy string matching and all that.

It’s still beta (thanks to an early unveiling by Sian ;) ) and I plan in future to add enhancements such as the ability to use it without GPS. I also need to write some proper documentation for it, and to stick the source code on GitHub later tonight once I am home. Update: the source code is now available on GitHub, but do bear in mind the codebase is a bit unstable right now. So, if you are a Londoner, please do use it and tell me what you think, either in the comments below or on Twitter. @ me, don’t @ the bot – it will think it’s a request for a bus service and get confused. :) All suggestions are welcome.

(And now, some tech stuff for the more interested)

The bot is a Python script, run every minute via a cronjob. It’s quite short – 350 lines including comments for the main bit. As well as the live data API, the service uses two databases officially provided for free by TfL’s syndication service: one of all the routes, and one of all the bus stop locations. I converted these from CSV format to sqlite so the bot can make SQL queries on the data. TfL use OS easting and northing coordinates for the bus stops, so I have to convert from the GPS latitude and longitude; I am indebted to Chris Veness and his lat/lng to OS conversion script, which I translated from JavaScript to Python; I am also now much more educated on subtleties like the difference between OSGB36 and WGS84.

Finally, I use the Tweepy library to receive and send the Tweets, which is really rather excellent and saves a lot of faff. And the whole project would not be possible without the ideals of open data and open source software behind it, so if you’ve written even a single line of free software, then thank you as well.
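To give a flavour of that data plumbing, here’s a sketch of loading the stop locations into sqlite and finding the nearest stop on a route (the column names are assumptions for illustration; TfL’s actual CSV headers differ). The nice thing about eastings and northings is that they are planar metres, so once a Tweet’s latitude and longitude have been converted to OSGB36, the nearest stop is just a minimum of squared Euclidean distance – no spherical trigonometry needed:

import csv
import sqlite3

def build_stops_db(csv_path, db_path="whensmybus.db"):
    # Column names below are illustrative, not TfL's actual headers
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS stops
                  (stop_code TEXT, name TEXT, route TEXT,
                   easting REAL, northing REAL)""")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            db.execute("INSERT INTO stops VALUES (?, ?, ?, ?, ?)",
                       (row["Stop_Code"], row["Stop_Name"], row["Route"],
                        float(row["Easting"]), float(row["Northing"])))
    db.commit()
    return db

def nearest_stop(db, route, easting, northing):
    # Eastings/northings are metres on a plane, so squared Euclidean
    # distance is enough to rank stops by closeness
    return db.execute(
        """SELECT stop_code, name FROM stops WHERE route = ?
           ORDER BY (easting - ?) * (easting - ?)
                  + (northing - ?) * (northing - ?)
           LIMIT 1""",
        (route, easting, easting, northing, northing)).fetchone()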

Some thoughts on quitting Facebook

I did an odd thing last night, for a social media webponce. I disabled my Facebook account, perhaps for good (at least that’s the intention).

Although this was not solely due to what came out of the latest Facebook f8* conference, it was probably some sort of straw that broke the proverbial camel’s back. At f8, Mark Zuckerberg announced the Facebook Timeline, a way of showing not just what you are up to right now, but your whole life as Facebook saw it, digitised and shown to all. And my reaction was along the lines of:

Fucking hell, I’m going to be spending the rest of my life tagging photographs of myself

I joined Facebook early in 2007 when they let ordinary civilians in, and at first I quite liked it. It was a cute way of tying in and aggregating one’s content, thoughts and photos, and keeping up with people I knew, or used to know. What a nice service. And for free! But over time, the fun faded. Facebook kept on quietly changing privacy settings and made a landgrab for copyright of uploaded photos (later rescinded).

So, I harrumphed, tightened my privacy (a tedious task), removed a lot of personal info and content (photos, imported blog posts) and despite my misgivings, carried on with a stripped-down profile to keep in touch with friends. But as Facebook matured, and my profile accrued information over time, another unwelcome feature came about.

The practice of “Friending” someone just because you met them at a party, or went to school ten years ago with them, or you work with them, seemed a good idea at the time; it’s nice, who doesn’t want more friends? Even if they are just Facebook friends. But these are people I do not see every day, for whatever reason; as sad as that may be, over time those social ties would normally fade. C’est la vie.

But Facebook ossifies these previously ephemeral social ties; they are there forever, reminding us of the past. Whereas before we could let these ties fade passively, now that they are laid down we have to actively “unfriend” people we no longer associate with. That’s not very nice, is it? After all, isn’t the opposite of a friend an enemy? So out of politeness, we accumulate these ossified ties, even after we change jobs, cities, relationships, as a form of digital clutter.

This was as bad as it got, until now. While social ties lingered, other content on Facebook would gradually drop off your timeline and fade away. Indeed, as online archivist extraordinaire Jason Scott observed in an excoriating critique of Facebook:

So asking me about the archiving-ness or containering or long-term prospect of Facebook for anything, the answer is: none. None. Not a whit or a jot or a tiddle. It is like an ever-burning fire of our memories, gleefully growing as we toss endless amounts of information and self and knowledge into it, only to have it added to columns of advertiser-related facts we do not see and do not control and do not understand.

Be careful what you wish for. Now our Facebook profiles will have everything we have ever done, dished up by default (and while Facebook’s UI has got easier to customise recently, I bet the default will still be everything). Now it’s impossible to escape your past. Everything you have ever done that has been digitally logged, by you or your friends, can now be potentially dished up as your very own digital This Is Your Life. There is, on Facebook, a photograph of me in my early twenties, passed out after drinking too much tequila on Mexican Independence Day (any excuse, my younger self would say). That’d be on my Timeline by default, no doubt.

But it’s not because of embarrassing photos that I’m off Facebook (far more cringeworthy ones exist, thankfully on analogue prints). It’s the sense that Facebook is very much about the past. The people you have known. The relationships you were in. The things you have done. And these hang around your neck and tie you down.

Whereas what’s really exciting about the web is the things you are going to do. The new fact you’re going to find out idly browsing Wikipedia. The amazing people you meet thanks to sharing a joke on Twitter. The inspiring blog post you’ll find via Delicious. The silly lolcat you’ll find on Reddit. Facebook isn’t offering any of what makes the Internet fun, and it’s taken this change to make me realise it.

With Timeline, we’re opening ourselves up to an ever-growing obsession with the past. A quote I saw last night was “We’re gonna need architectures for forgetting”. Poetic as that line is, it’s a cure when prevention might be better – for me, in any case.

I must stress that this is not to say Facebook is bad, or Timeline is going to be a failure. Plenty of people are happy to have ossified social ties – if you are in a small, close-knit social network that is relatively static, I can see why it is a boon. Timeline will be fantastic for you, if you have been on Facebook your entire adult life, and all that data is there and well-curated (which it will be, if you have been on Facebook your entire adult life). But it’s not for me; it’s not interesting to me as a user any more. So I’m out. Bye, Facebook.†

* Named for Fate, the all-knowing computer in V for Vendetta, right?
† Although I’m still keeping the Facebook Like button at the bottom, just for kicks and sheer hypocrisy ;)

Or… considering the documentary-maker as not really a documentary-maker

The third part of ALL WATCHED OVER BY MACHINES OF LOVING GRACE looked like it would take the form of its predecessors: taking contrasting stories, seemingly unconnected events, and trying to draw pencil-lines (or stronger) between them. But in the end, Curtis sprang a twist to make you realise this episode wasn’t really a documentary at all, but an attempt to produce high art as provocation.

The episode started in the Republic of the Congo (indeed, with some material lifted and extended from his piece It Felt Like A Kiss), and looked at the near-unimaginable scale of slaughter in Congo/Zaire and neighbouring Rwanda, and the role of Western interference in the region: the Belgians’ attempts to install the Tutsis in Rwanda as a political ruling class, the CIA’s anointing of Mobutu Sese Seko as leader of Zaire as a bulwark against communism, mining companies’ bloody landgrab for the mineral columbite-tantalite (used in the manufacture of chips in electronic devices such as the PlayStation) in the modern Congo, and even the naturalist Dian Fossey’s ongoing feud with Rwandans as she tried to protect gorillas in the rainforest. Little to do with machines, loving or not.

The other story was more conventional Curtis fare: charting the lives of geneticists Bill Hamilton and George Price; Hamilton came up with a theory of gene-centric evolution, which Price took further, formulating a model of altruism as a means of gene propagation. And why stop at altruism? Take it to its logical conclusion, and all manner of human behaviour can be described as evolutionary techniques and no more, and we end up being, er, gene-propagating machines. This is much more familiar territory – the scientific establishment reducing humanity to a mere aggregation of self-reproducing automata.

But unlike his usual form, Curtis didn’t attempt to draw causal links (the kind that usually infuriate) between the two stories. In fact, apart from Hamilton’s death in the Congo, there didn’t seem to be any link at all. Instead, as the episode unfolded it focused more and more on the ongoing slaughter in central Africa, spurred on by the West’s demand for Africa’s mineral wealth. When the Rwandan massacres happened in the 1990s, the developed world’s militaries stood by, while its NGOs (despite their good intentions) were powerless to stop the fighting spreading to refugee camps. Curtis played a series of increasingly distressing images of atrocities against a backdrop of incongruous music.

And you begin to realise Adam Curtis isn’t even trying to make a connection, and this isn’t even a documentary. Curtis is provoking you into feeling uncomfortable, into dwelling on the loss of control, disillusionment even, with modern liberal Western society, using the history of the West’s interaction with Africa – colonialism, decolonisation and the bloody end of mass capitalism – as emotional bait. Your grandparents’ generation subjugated them, your parents’ generation unleashed anarchy upon them, and now you’re fuelling the slaughter and chaos they created when you buy your PlayStation. This is not a documentary designed to educate, but an art project designed to provoke – riffing off the aforementioned It Felt Like A Kiss rather than The Power of Nightmares.

And when you’re done feeling uncomfortable, Curtis delivers the punchline to the film: this is why you feel so compelled to bury yourself in the embrace of the machines, because deep down you no longer have faith in humanity to solve its own problems or its own inhumanity.

This is a leap of logic, a provocative one, a sign of desperation even, but an intriguing change in style as well. Perhaps Curtis thought this the best way to make his point, perhaps he couldn’t think of any other. It’s certainly a departure from his usual form: a triptych of tightly-wound, interconnected episodes exploring a concept. And to be honest, it didn’t work; it just made the piece look disjointed and the two storylines out of place with each other. It’s not his style – Curtis is not a polemicist; his flat, laconic commentary and his refusal to appear on camera make it impossible for him to polemicise – and to use a tactic of deliberate provocation is as mechanistic a view of human beings as the very philosophy he is attacking.

And yet despite the flaws and non-sequiturs of the main story, I don’t find it easy to shake off the moral of the other tale in the episode: the demise of Price and Hamilton. The originators of the selfish gene theory were ultimately wracked by the implications of their own research: George Price’s fixation on biological determinism led to his descent into self-enforced poverty and Christian piety as a means of escaping it, and ultimately suicide when it offered him no salvation. Bill Hamilton swung the other way, growing mistrustful of modern medicine as an obstruction to natural selection, leading to a belief that HIV was a byproduct of vaccination programmes in the Congo, where he ended up dying. Beneath the provocation, Curtis’s underlying point, warning even, seems to be: this mechanistic view will drive you mad too, one day.

Coming up next: a review of episode two. Yes, I know that’s the wrong order. Consider it a homage to the man’s style…

Adam Curtis is a filmmaker who intrigues and frustrates. His Century of the Self and Power of Nightmares peeled back the layers on Freud and modern capitalism, and on the rise of neoconservatism and fundamentalist Islam, respectively, casting both in a new and interesting light. Curtis may not be right, he may not even be telling the whole story, but he offers an angle, a way of skewering and unsettling our preconceptions. However, with his 2007 series The Trap, he started falling off his usual run of form. It offered a frustrating account of the modern notion of liberty, from positive to negative, and the perniciousness of game theory, behavioural economic models and performance targets – connections a little too technical and forensic to explain with just a mashup of videos and laconic voiceover. A waffly final third culminated in a call to arms for positive liberty, at odds with his usual dispassionate tone of voice.

Four years on, we have his new series, ALL WATCHED OVER BY MACHINES OF LOVING GRACE, and the chance to see whether The Trap was an aberration from true form. Like its predecessor, it delves into the technical, not just the historical. Curtis’s basic thesis (I’m summing up) is as follows: the selfish Objectivist philosophy of Ayn Rand inspired a generation of Silicon Valley geeks to create computer systems with the aim of removing the shackles of government to create a utopia of free individuals; these same systems were then used by the creators of the New Economy (led by another Rand acolyte, Alan Greenspan) to create a new economic miracle controlled by the banks, Goldman Sachs and the Federal Reserve. But instead of creating a utopia they created chaos – the machines they had such faith in failed, and despite their producing an economic crisis in Asia in 1997, we kept faith in them, only for them to create an even larger worldwide crisis a decade later, from which we can see no way out.

The problem with this is that it sees too many links where there aren’t any. Not every Silicon Valley company was inspired by Rand (Curtis named one or two examples at best, neither of them leading lights on the scene), and the area’s philosophy arguably owes more to the countercultural movement and political climate of California than to Rand’s self-indulgent miserabilism. It’s certainly a long way from the conservative, East Coast, market-fundamentalist philosophy of Greenspan, Robert Rubin and indeed the entire neoconservative/Chicago School generation of politicians, economists and bankers who eventually assumed political control. Cyberlibertarianism envisions a future of individuals networked together, free of hierarchy or even the state; market fundamentalism celebrates harnessing the aggregate of individuals’ behaviour for greater prosperity and stability. In short, one coast’s philosophy created John Perry Barlow, the other Alan Greenspan.

Curtis’s other flaw is to confuse “machines” with what machines actually run. A computer is just a unit for processing numbers in any number of ways. They are just boxes, glorified calculators. It’s the software we run on them that makes them do “evil”, and this software is made by human beings. Having spent so long talking about a generation brainwashed by Rand, Curtis now attributes all the evils to the machines. But who programmed them? Who first thought of using them for automated trading, just-in-time manufacturing, supply chain management and all the things that are now taken for granted in the New Economy? For a storyteller who loves to peel apart the unknown and the people behind history, Curtis instead frustratingly wastes his time on peripheral figures such as Ayn Rand (who died a recluse in 1982) and Bill Clinton (distracted by the Lewinsky affair and powerless to stop the SE Asia crisis), rather than the people who built and shaped the information economy.

The result is a mess, with Curtis making oversimplified and hurried connections between various subplots. But that doesn’t mean there isn’t a story to be told amongst this clutch of different tales. How did a bunch of so-called geeks and slackers, growing from the midst of the counterculture, create a multibillion dollar paragon of capitalism? How did the conservative, sleepy institutions of Wall Street become seduced by the wonders of technology and grow hypertrophically on computer models, automated trading and complex financial instruments? In short, how did Barlow and Greenspan’s generations become allies, intertwined and taking on each other’s aspects and practices? And finally, how have we become so dependent on these systems, making them become so ubiquitous and invisible, that we didn’t notice things were going badly wrong until it was too late?

If you think this sounds familiar, it’s because Curtis used this intertwined-dichotomy style of filmmaking so well with The Power of Nightmares: the story of how the ideological descendants of Leo Strauss and Sayyid Qutb ended up as putative enemies, yet neither could live without the other, and both were grounded in similar grievances with individualism and liberalism. It’s odd that Curtis was able to portray the balance of similarities and differences in that film, yet with AWOBMOLG he struggles to make sense of it all, and ends up merely telling the “what” rather than the “how”, giving us numerous red herrings in the process.

It’s easy (but patronising) to say that’s because “technology is hard” and it’s difficult to comprehend it and history together rather than history alone. But Curtis is not a stupid man. It’s perhaps more charitable to say that when it comes to the relatively uncharted history of the information age, there is simply so much more information and so many possible narratives that it’s easier to see patterns where there are none. But this was just part one, and maybe parts two and three will be better, and a lot more coherent.

In the meantime, let this not detract from Curtis’s earlier works – if you haven’t seen them, The Power of Nightmares and Century of the Self are both available from archive.org. His KABUL: CITY NUMBER ONE blog is a collection of posts and archive clips about the Afghan capital, and well worth reading. And I retain a soft spot for It Felt Like A Kiss, an avant-garde experimental attempt at storytelling based on 20th-century history, commissioned by the Manchester International Festival. AWOBMOLG was disappointing, but don’t let it put you off entirely.