Alex Steer

Better communication through data

A wordcrunching manifesto

123 words | ~1 min

Historical note: this post is from back when this blog was coherent enough to have a name...

--

There's been a gap in posting to this blog as I've been moving from London to Cape Town. There are also a few changes. From now on, posts will be:

a) shorter; b) more frequent; and c) more focused on language and communications.

As well as being a strategic planner, I'm a linguist, so rather than writing Just Another Strategic Planning Blog, I'll be trying to take a recognisable 'language angle' on things.

Other than that, let's not kid ourselves, it's still just a blog.

And, in true communications fashion, with the new focus comes a new name. So goodbye Common Parlance, and hello Wordcrunching.

# Alex Steer (25/11/2009)


Online giving sites: why cheaper isn’t always better

493 words | ~2 min

Update: Third Sector reports that Help for Heroes has not stopped recommending Justgiving as the Times article suggests. It is, apparently, trying to keep its donors better informed about the funding models used by different online giving sites.

--

Just a brief post in response to this piece in the Times, which gets it right then gets it wrong about online donation websites.

It starts off by covering the decision by Help for Heroes to stop using Justgiving.com as its online fundraising site of choice. This is because Justgiving is a commercial organisation, not a charity, and takes a 5% commission on donations (taken out of Gift Aid where possible), which it uses to generate profits.
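To make the commission mechanics concrete, here is a sketch of a hypothetical £100 donation. The 25p-per-£1 Gift Aid figure is an assumption for illustration only, since the actual reclaim rate depends on the basic rate of tax:

```python
# Hypothetical £100 donation via a site taking 5% commission,
# with the commission deducted from the Gift Aid reclaim.
donation = 100.00
gift_aid = donation * 0.25    # assumed 25p reclaimed per £1 given
commission = donation * 0.05  # 5% of the donation

to_charity = donation + gift_aid - commission
print(to_charity)  # 120.0 of a gross 125.0 reaches the charity
```

On these assumed numbers the charity still receives more than the original gift, because the commission comes out of the Gift Aid uplift rather than the donation itself.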

However, it then seems to take commission-taking as some sort of benchmark. It compares Justgiving with Virgin Money Giving, which also takes a commission on donations, and contrasts both of these with a third site, Bmycharity.com, which takes no commission.

But this is misleading. A quick scan of its website reveals that Virgin Money Giving is a non-profit company (though not a registered charity) operated by Virgin Money. It is therefore fundamentally unlike Justgiving, and substantially more similar to Bmycharity. Assuming you've decided that it's preferable to operate donor websites on a non-profit basis (and you should think hard even before deciding this), all else being equal, you need to decide which non-profit site to use. How do you do this?

Looking at commission rates charged is one way, but it's inadequate on its own. What matters with a donation site, I'd argue, is how much of donated money gets to the charities you give money to support. So you do indeed need to know how much money is taken out to cover the costs of operating the company. But you also need to work out what those operating costs contribute to the effectiveness of the site as a mechanism for giving. Are they hiring better staff who will maintain the site's profile and so maximise donations, or technical wizards who will make the process of online giving easier and therefore less resistible? Or paying for server hosting to ensure uptime? Or controlling their finances in a way that will make sure their cash flow never hits zero, so they will still be around to deliver money to charities in weeks or months to come?

Virgin Money Giving operates on commission; Bmycharity is funded by donations and advertising revenue. Knowing this alone is not enough to judge by. Remember, when you give via an online giving site, you are donating money to the organisation that runs it, and trusting it to give, in turn, to the charity or cause you want to support. So check out where your money's going, and how well (not just in what quantities) it's being invested.

Here endeth today's bit of donor advice.

# Alex Steer (12/10/2009)


Reaching for the moon again

825 words | ~4 min

News that lunar soil holds more water than previously thought has got journalists looking again to a version of the future that has belonged to the past for a long time now. The Times piece is fairly typical:

The discovery has fanned dreams of establishing a manned Moon base. Scientists have long hoped that astronauts could be based on the Moon and use water found there to drink, extract oxygen to breathe and use hydrogen as fuel.

The idea of the moon as a stepping stone for manned space exploration is a familiar one, even though it has never happened, because it is part of a narrative of the future that became so ubiquitous in the 1960s and early 1970s that it turned into a mainstream part of public consciousness even in the UK, which played little part in the manned space race. From John F Kennedy's assertion that 'we choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard', the expectation that moon landings would be followed by 'other things' - specifically, other similar but more daring things - became an 'official future', a story about the shape of things to come that conditioned the thinking not just of those in Nasa or successive US administrations, but of all those who watched the moon shots and the progress of efforts to put people into space and onto other worlds on both sides of the Iron Curtain.

For a generation and more, manned spaceflight has been a future with the Right Stuff, even though it has failed to come about. It is so prevalent a cultural motif as scarcely to be worth mentioning. The idea that it might be an oddity itself seems odd. It's hard to imagine not caring about putting people into space, and the idea that it might never happen again is an uncomfortable one. As Catatonia sang in 1998, we all live in the space age, even though, on the evidence, there is a good chance that the space age is behind us.

Writing as one who is too young to remember the Challenger disaster, let alone the moon landings, but trying to think as a strategist, the moonwater story has got me wondering what signals we missed. Might it have been possible to predict that manned space exploration would fizzle out within a decade? Could someone have called out the space age rhetoric? Hindsight is easy, but the clues are there. They are there in that same Kennedy speech, where he says that the objective of US moon missions is to 'win' against the USSR; there in the makeup of the Right Stuff, dominated by air force personnel although Nasa was a civilian agency; there in the lack of any stated long-term strategic objectives beyond getting there, despite the gains to science that came almost incidentally. We are used to the idea that the space age was a product of the Cold War, but perhaps we need a more honest assessment: that the space age was a facet of that conflict; that it ended when tensions between East and West began to ease. If we accept that the official future is now in the past, not just delayed, we need to rethink the purpose and nature of work in and relating to space.

Beyond the space industry, the lesson is that ideas and enterprises that seem noble and good and worthwhile in themselves may fail because they become too bound up with contingent circumstances that for a time give them energy. It may be too late to save the space age, but what is next? Now, for example, much public rhetoric surrounds the notion of a 'green new deal' or even a 'green industrial revolution', in which the need to stimulate the recovery of the economy during the present downturn provides a stimulus for innovation that will make economic production more environmentally sustainable. But if economic growth becomes the impetus for sustainable innovation, as outstripping Moscow became the impetus for reaching the moon, what happens afterwards? The idea of a sustainable 'green age' could become unsustainable if it is pinned too fast to shorter-term aims.

If permanently stabilising industrial society's relationship with the natural environment is the work of the century - and it may be - then the energy for that work will need to be found in social and economic futures we cannot yet see. Planning for sustainability therefore means planning for change, finding ways to keep long-term needs on the policy agenda. Preventing the green age from becoming the next space age matters, especially since the demise of the last great dream means we do not have other worlds to escape to.

# Alex Steer (06/10/2009)


Risk and asymmetric metrics

576 words | ~3 min

Why should schools have high fences, security cameras and biometric pass cards? Why should employment contracts be conditional on the supply of fingerprints and retina scans checked against a central database? Why should DNA samples of every citizen be held by the state? Why should sharp-nibbed pens be banned in hand luggage on planes?

Given any of these questions, it's easy to think of plausible answers. To prevent identity theft or hijackings or deter illegal immigration; to stop sex offenders or mad gunmen or kidnappers from getting into schools. We can come up with these 'security narratives' regardless of how high or low the risk of the things we wish to prevent actually is, because it's so easy for us to measure the benefits of the interventions we are considering. If we know that mad gunmen typically get into schools because of lax physical security, it's obvious that tightening up physical security will directly reduce the number of mad gunmen getting into schools. This is, our thought processes tell us, a Good Thing. Up go the fences, on goes the CCTV. It doesn't matter how many or few mad gunmen get into schools, because fewer is always better, so anything that will reduce the risk automatically gets the nod.

But what if the question is the opposite: why should we not fence round our schools, store everyone's biometrics, or file off the corners of everything that goes onto a plane? Why might that be a bad idea? When faced with questions like this, it's a lot harder for us to come up with simple narratives of risk and reward. If there is the possibility of mad gunmen, doing nothing just seems negligent.

Choosing between the options is not a fair process, though, because we have so few good ways of measuring the benefits of not acting: there are no adequate measures of the impact of intrusive or controlling measures on individual wellbeing or social capital. These impacts may be likened to the externalities that economists talk about when assessing the full cost of a transaction. Monetary externalities - such as the true cost of producing a t-shirt that you buy for £3 - are easier to account for than non-financial ones - such as the social cost of the damage to the wellbeing of the child who makes that t-shirt instead of going to school. There is a growing recognition among economists that better accounting of non-financial externalities is needed to assess the desirability of different kinds of transaction: some are even discussing the end of GDP as a useful metric.

While these debates are already being had about trade and globalisation, they need to be had around risk assessment as well. We are now more able than ever to control risks in our lives and in society, but we are still bad at measuring the fallout from the controlling measures we take. Until we have ways of accounting for the effects (positive and negative) on social capital and individual happiness of every CCTV camera that goes up in a town centre, or indeed every moral panic that hits the headlines, we will be unable to make good decisions on whether or not to invest in intrusive behaviours. Until then, like bad economists, we will fail to count the full cost of our social transactions and will keep taking the safe option, which may be easier to account for but not necessarily for the best.

# Alex Steer (20/09/2009)


Why Seth Godin is wrong (and why he isn't)

616 words | ~3 min

Seth Godin's blog post of a few days back, in which he accuses non-profits of hating change, has caused quite a stir in some sections of the non-profit advisory sector (a sector which barely existed a few years ago) and the charities sector more generally.

Some of the best responses have challenged Godin's metrics for ambition and responsiveness to change. These are certainly controversial. He uses in support of his argument the fact that none of the 100 most followed users on Twitter are charities. This rather fatally equates ambition with scale, and assumes that free-to-use social media platforms really do provide a level playing field, completely ignoring the level of sophistication and the investment of time (and money to buy expertise) needed to build sustained social media presences. Unsurprisingly, charities that are stretched for time and resources often feel they have better things to be doing.

It has also been pointed out in many responses that resistance to change is not an organisational problem unique to, or even peculiar to, non-profits. The words 'Wall Street' have been chucked around quite a lot, though this seems a bit opportunistic given the events of the last year, and not quite right either. The boom years in the financial sector seem to have been fuelled by an astonishing fetishisation of novelty and risk, and the bust has been a consequence of that, not of risk-aversion.

Many counter-arguments, though, share a guiding assumption with Seth Godin's piece: that disruptive innovation is a desirable good. As a former non-profit analyst my instinct is to agree: my whole industry is in the business of shaking up (or shaking down) ineffective patterns of philanthropic and charitable activity. But, with the same hat on, I realise that's flawed. The aim of philanthropy is not social innovation but the improvement of people's lives and conditions and the tackling of social, economic or environmental challenges. A non-profit analyst's role is to help establish criteria for impact and measure effectiveness - to see whether attempts to 'make a difference' are, well, making any difference. All social action begins with a perceived need for change, but not all requires disruptive innovation. What matters is the change philanthropy creates, not the pace of change in how philanthropy does what it does. This is not rocket science, and neither is much philanthropy: sometimes the most impact comes from doing familiar things better.

None of this lets charities or philanthropy off the hook, though. Both charities and funders can be resistant to change because change - and planning for change - can be uncomfortable. This can lead to charities pursuing objectives that are coherent but do not meet real need, or funders structuring their giving in ineffective ways. Doing things the way they've always been done is not good enough, and neither is novelty for the sake of it. In this way Seth Godin is both right and wrong, but most of all just misses the point by confusing activities with results. In this, he has a lot in common even with those he criticises most fiercely.

Above all, this debate should remind us that there is a clear need for good non-profit analysis as an independent discipline: not just to annoy charities and funders with awkward questions, but to test both old and new strategies for social impact in a way that charities and even funders often don't have the time and resources to do. To find out, in other words, which changes are worth making.

# Alex Steer (17/09/2009)


The way brands talk

140 words | ~1 min

I'm looking for examples of brands that have a really distinctive tone of voice. Good examples are Innocent (which has spawned lots of imitators with its friendly tone) and Marks and Spencer (which has spawned lots of parodies and come to typify the speaking voice of 'food porn'). Do you know of any brands that sound a bit unusual in their marketing communications?

I'm trying to find examples to do some detailed analysis of the constituents of register (linguist-speak for 'tone of voice') in branding. In particular, I'd like to find examples of tone that used to sound distinctive but now don't, in the hope of modelling the life-cycle of tone a bit more clearly.

Answers by email, in the comments or @alexsteer on Twitter. Thanks!

# Alex Steer (13/09/2009)


Public knowledge about things you don't know

767 words | ~4 min

A month ago the Guardian reported on Shadow Education Secretary Michael Gove's plan to create an online library of past school examination papers going back 'to Victorian times'. The article included a quote from Michael Gove, saying:

It is vital that we restore public confidence in our exam system. Universities, businesses and academics say the system has been devalued.

This may be true (and in some cases it is demonstrably true), but publishing an archive of exam papers is more a way of stirring up fears of dumbing down than it is a way of restoring public confidence in the exam system.

In much the same way that it's always September somewhere on the internet, when it comes to stories about school standards it's always August. When this story broke, of course, it was actually August: that 'silly season' period of slow news when Parliament and the law courts are in recess, everyone's on holiday and, sadly for students everywhere, GCSE and A Level results are released.

This unhappy coincidence means that every year we are treated to a rush of news stories put together around DCSF statistical releases, magazine pieces consisting mainly of pictures of teenagers opening envelopes, and op-eds about how it's all going to the dogs. Followed by more op-eds about how it's not all going to the dogs and we should be proud of the continued average grade improvements. Followed by more op-eds from the authors of the original op-eds about why this is short-sighted and it actually is all going to the dogs. (Followed, inevitably, by blog posts like this one. Sorry.)

The proposal for an archive of exam papers feels like a way of enshrining this silly season attitude to education, capturing it and making it available to us at any time of year.

Start leafing through Victorian (or otherwise old) exam papers and you will pretty soon start to feel stupid. Compare them to some GCSE papers from last year and you might notice that you know (or can have a stab at) more of the answers from the recent papers than from the old ones. You might then conclude that this is because modern-day exam papers are easier than those set in the late 19th century, and raise a concerned eyebrow at the state of the nation's young minds. And hey presto, your own silly season has begun.

This is because ease, like other things, is in the eye of the beholder, and unfamiliar things seem difficult because they are unfamiliar. As you might imagine, historical (beyond a few years) and contemporary curricula are in no sense alike for the majority of subjects, and many subjects now exist which did not in the past. The expectations and demands of schooling have changed enormously, in response to changes in theories of learning and the economic and social requirements of education, and syllabuses have changed with them. As a result, old exam papers contain things you should not necessarily be expected to know. They will also have been marked according to a standard that you might not understand.

This is not to say that there is no such thing as grade inflation, or to take a particular side in the silly season debate, but it should be noted that uncontextual historical comparison by biased viewers does not make a fair test. Only in a very few cases, notably some areas of mathematics, is it possible to make relevant comparison, and even then the findings shouldn't be extrapolated too widely. (Grade inflation in some maths subjects does not imply easier exams across the board.)

This proposal is a way of damaging confidence in the existing system by making it look too easy, not of restoring confidence. The cognitive bias that it induces is also a way of affirming a conservative (small 'c') set of subject distinctions within education. Once you have convinced yourself that exams are getting easier, you may also convince yourself that new subjects (media studies, business studies) are inherently less valuable than traditional ones (history, Latin). Of course, you might conclude this anyway, but if you do, make sure it's for a cogent or at least honest reason, rather than because of private ignorance acquired under the rubric of public knowledge.

# Alex Steer (02/09/2009)


Wikipedia: growing up, not getting old?

485 words | ~2 min

There have been quite a few newspaper and online articles recently reporting that Wikipedia's growth rate is slowing. These have taken a pretty alarmist approach. One of the articles linked to above is headlined, 'Is Wikipedia a slow death?'. Another hypothesises that an increase in 'editorial control' over entries may be putting people off.

The research comes from the Augmented Social Cognition Research Group at Palo Alto Research Center (PARC), and is laid out on its blog. It's good work, and shows that the idea that Wikipedia's growth was exponential (as it looked for the last few years) was wrong. The level of editing activity has slowed in 2008 and 2009 (note: not the new-article creation rate - you'd expect that to drop anyway as the result of a backlog effect). The evidence is there, and it's robust and conclusive.

That doesn't mean Wikipedia is dying, though. The assumption in a lot of these articles is that would-be editors are either annoyed or bored with Wikipedia, and are deserting it. Having worked on a different-yet-similar collaboratively-driven editorial knowledge project, I think there's a better explanation.

Here's my hypothesis: Wikipedia is not dying but maturing. As it becomes a more comprehensive resource, glaring omissions and errors become fewer, so many of its readers no longer feel the need to alter it. They turn from active editors into users who have the power to edit, but do not necessarily exercise it.

The PARC research segments editors of Wikipedia according to how many edits they have made. It has found that the top two tiers (100-999 edits and 1000 edits plus) have grown their share of editing activity from about 55% in 2007 to about 60% today. Those who tend to edit less (call them the 'casuals') are editing even less; those who tend to edit more (what you might call the 'pros') are editing even more. Unfortunately I don't have the segment sizes, so can't tell if there are more or fewer casuals and pros, relatively or absolutely, than there were before. We'll have to wait for the data to be published at WikiSym 2009 in October.

Decreasing user activism need not mean users are getting fed up with a resource. It might mean they're becoming happier.

Update (18/08/2009): Newsy has a piece on the different angles on this story, including Alexa data showing site usage continuing to grow. Thanks to Daniel from Newsy for pointing me to this.

# Alex Steer (17/08/2009)


29% more boring stories about your gap year

571 words | ~3 min

Powerchex, a pre-employment screening company, has informed the world (via the Guardian) that under-21s 'told 29% more lies on job applications this year than last'.

Leaving aside the potential conflict of interest when a company that sells a service conducts research into the extent of need for that service, this headline sounds a bit odd. That's because it is.

Let's do the linguistics first. '29% more lies' is a phrase made up of an adverbial phrase ('29%'), an adjectival/determining phrase ('more') and a noun phrase ('lies'). The whole phrase is the object of the transitive verb 'told'. (We'll ignore 'on job applications this year than last' for now.)

The subject of 'told' is plural: 'under-21s'. The plural subject implies multiple instances of individual action (as in 'three poets died this year') or collective action by multiple actors (as in 'three teenagers stole my car today'). Taken as a whole, 'under-21s told 29% more lies' means that, through simultaneous or collective action, under-21s increased the number of lies told on CVs by 29%. If there were 100 lies on CVs last year, there were 129 this year.

Which is not what the report says. It says that 29% more under-21s than last year told lies on their CVs. In other words, if there were 100 liars last year, this year there were 129. The number of lies told is not specified.

Which means what?

Of 4,735 job applications from all age groups sent to finance firms between June last year and this May, 899 contained false information. Powerchex... found that of the 307 belonging to under-21s, 18% contained lies, an increase of 29% from last year, when only 14% of forms contained false information.

So in fact while the relative increase is 29%, the absolute increase is just four percentage points (18% - 14%). Given that we're talking about individuals here, let's put that in human terms. Of 307 under-21s applying to finance firms, 55 told lies on their CVs, compared to 43 the previous year.
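The relative-versus-absolute arithmetic above can be checked in a few lines of Python (last year's application count isn't given in the article, so the same 307-applicant base is applied to both years as a rough illustration):

```python
# Figures as reported: 307 under-21 applications, 18% containing
# lies this year against 14% last year.
applicants = 307
rate_now, rate_before = 0.18, 0.14

liars_now = round(applicants * rate_now)        # about 55 people
liars_before = round(applicants * rate_before)  # about 43 people

relative_increase = (rate_now - rate_before) / rate_before
absolute_increase = rate_now - rate_before

print(liars_now, liars_before)     # 55 43
print(f"{relative_increase:.0%}")  # 29%
print(f"{absolute_increase:.0%}")  # 4%
```

The headline 29% is the relative change; the absolute change is only four percentage points, which is why the two readings of the claim feel so different.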

Is this a significant change? Is it related (as opposed to just correlated) to the recession? It's hard to know. According to the Guardian piece:

Alexandra Kelly, managing director of Powerchex, said: "The pressure of the recession on job markets seems to have led more applicants to believe that they should lie or make embellished claims to get jobs."

Well, maybe. On the other hand, 12 people out of 307 could be insignificant variation, natural movement away from or towards the mean. Longer-term data is needed. So would a larger or more representative sample - this is a market (graduate finance intake) in which one large firm alone can still hire 1,000 graduates, despite 32% of organisations operating a graduate recruitment freeze this year. It's hard to tell from the article whether or not this trend can be taken seriously.

Yes, this is pedantry, but it's also marketing. Lots of organisations are throwing round 'recession mania' stories at the moment. If you think you have an insight, check your numbers, check your wording, and proceed with caution.

# Alex Steer (07/08/2009)


Plummeting to the mean

251 words | ~1 min

Clearly today is a bad day for the public understanding of simple numbers.

Could the author(s) of this article (headline: 'I don't: number of 'gay weddings' plummets') please look up the term 'backlog'.

Not that you need look far, since despite the headline the reason is given by Peter Tatchell halfway through the article.

'After civil partnerships were legislated there was a huge surge of couples who had been together for decades who suddenly wished to take advantage of the legal recognition.'

Which makes the opening paragraph's claim of 'speculation that, like heterosexual services, [civil partnerships] have fallen out of fashion' a bit fanciful.

The prize for the worst error of judgement goes to some unnamed 'government officials', though.

When Government officials drew up the new laws for civil partnership they estimated that five per cent of the population was gay or lesbian and predicted that 62,000 gay couples would register in the first five years of ceremonies.

62,000 is, as far as I can tell, an absolute back-of-the-envelope calculation that you get if you take the rough number of heterosexual weddings in the UK, take 5% of it, and multiply it by 5. This is not public-sector insight research at its best. (That said, if there is evidence that the government conducted some more research into the extent of demand for civil partnerships, I'd be delighted to be corrected.)
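Working backwards, the officials' apparent sum can be reconstructed roughly as follows; the annual weddings figure here is my assumption for illustration, not a number from the article:

```python
# Back-of-the-envelope reconstruction of the 62,000 estimate.
weddings_per_year = 250_000  # assumed rough UK annual figure
gay_share = 0.05             # the officials' 5% estimate
years = 5

estimate = weddings_per_year * gay_share * years
print(estimate)  # 62500.0, close to the reported 62,000
```

An assumed 250,000 weddings a year gets you to 62,500, which is close enough to 62,000 to suggest this is roughly the calculation that was done - and why it was a poor predictor of pent-up demand followed by decline.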

# Alex Steer (04/08/2009)