Alex Steer

Better communication through data

The evolution of reading

1022 words | ~5 min

The University of California at San Diego's 2009 How Much Information? report estimates Americans' total leisure-time consumption of information (measured in hours, words and bytes - see the report's methodology), based on existing household panel, census and survey data. One of the headlines making the news (see here, for example) is the estimate that since 1960 reading as a way of receiving information has grown in proportion and absolute volume. As the report puts it:

Conventional print media has fallen from 26 percent of [words received] in 1960 to 9 percent in 2008. However, this has been more than counterbalanced by the rise of the Internet and local computer programs, which now provide 27 percent of [words received]. Conventional print provides an additional 9 percent. In other words, reading as a percentage of our information consumption has increased in the last 50 years, if we use words themselves as the unit of measurement.

While this measure is not perfect, it reflects what is fairly obvious: since personal computing is dominated by written words (especially on the internet), and we spend more time using computers, we are reading more. The decline in reading traditional print media has been more than offset by the growth in reading from screens.

On the face of it, this suggests that the adage that interactive media are killing reading is no longer true. But headline numbers of the kind in this study are, if anything, the least interesting and the least revealing part of the story. The 'death of reading' story is a reflection of some deeper-running concerns which we are choosing to play out through our relationship with words.

There is a partly-hidden agenda to the 'death of reading' story, just as when we say that television is killing the art of conversation (to which I say a resounding 'meh') or when the Buggles sang that video killed the radio star. This is the domain of sensitive sociolinguistic analysis, not of simple counting, and this is where the limitations of the UCSD study become apparent. When you hear people talk about the death of reading they are not talking about a decline in the number of printed words being taken in. Nor are they necessarily expressing concern at declining literacy, though the argument is sometimes expressed rather weakly in this way. That version is demonstrably not true. In the UK, for example, literacy is at 99% (though around 16% of adults have literacy difficulties). An NFER longitudinal analysis completed in 1996 found that overall literacy levels had changed little since 1948. Attainment rates in Key Stage 2 English SATs have risen sharply since 1995, despite a slight and over-reported decline last year.

When people talk about a decline of reading, I suggest they are talking about a decline in a particular social model of reading. This is the model of solitary contemplation of printed matter for leisure, entertainment or instruction: the image of the man or woman in a room of one's own, from Ambrose of Milan (the first recorded silent reader) to the entire cast of Richard Hoggart's The Uses of Literacy. There is a sense that we give less time to a kind of quiet contemplative idea of reading that was formulated by the early Christian fathers, celebrated for its individualism as a facet of Romanticism and then feared in the nineteenth and twentieth centuries by those who worried about the disordering effect of working-class autodidacticism. We still love the idea of the idle reader, away from the world. Inevitably, sweeping changes will always be a threat to that quiet idea (as in the late eighteenth century, so now), and the technological changes that are affecting how we read are also, perhaps, making it harder to preserve the notion of reading as a discrete and disengaged activity. We are constantly confronted with the necessity of reading for information rather than for pleasure. Reading from a screen, even in our leisure time, feels less like play and more like work. Even if we are reading more, we feel like we are reading less.

Yet reading for pleasure, if printed book sales alone are any judge, is not noticeably in decline either. I think we are just increasingly aware of the relegation of printed matter to the margins of our reading experiences. I speculate that the new technology of screens as a medium for print is unsettling the stable social model of reading, and there remains something comfortable about printed books as physical objects - something whose sensory impression makes us feel that we are reading when we are reading from a page. This is, in other words, a period of technological disruption, when a new medium is thoroughly embedded in our lives but has not yet settled into a comfortable spot.

If there is anything correct in this analysis, what are the implications for people who need to push print at readers? Better, more focused and more thoughtful research is needed, but a starting point might be to suggest that screen could be the medium for making people not feel like they are reading, and print the medium for making them feel like they are. This insight may already have been reached by marketers trying to choose between direct mail and email, but a better understanding of our responses to reading experiences might help those decisions be made in ways that are more effective for communicators and more valuable to readers.

# Alex Steer (29/12/2009)


Sausages and statuses

495 words | ~2 min

Two of today's lighter news stories in the Telegraph turn on matters of language; both are slightly questionable.

The first is the bizarre, wonderful story of the man in Benxi, China who tried to convince a restaurant full of diners that he was a suicide bomber by going in with sausages strapped to his torso. A quote, attributed to one of the police officers attending the scene, is given:

It must have been terrifying for the customers but those things would only have gone off if you'd kept them past their sell by date.

Does this pun ('go off' = detonate/become putrid) even work in Mandarin? Has the gag (which is, admittedly, not bad) been inserted for the benefit of an English readership?

The other story is Facebook's list of the commonest terms in status updates this year. Helpfully, the list gives the topics (e.g. 'Facebook applications') and the specific terms used (e.g. 'Farmville, Social Living'). All are fine except the last:

15 - I
Specific words: I, is

This is far from clear in the Telegraph's coverage, but Facebook's blog entry on the status trends explains that the trend shows the increase in 'I' in status updates and the decline in 'is'.

Until March of 2009, people updated their status in a box that appeared next to their name on the home page and, consequently, many updates started with the word "is." Once that box no longer was shown next to people's name, the usage of "is" dropped off dramatically and usage of "I" doubled almost overnight. Prior to March of 2009, "is" represented about 9 percent of all words in status updates. With the change in interface, it remained high in absolute terms, but dropped all the way to about 1.5 percent recently while "I" increased from 1 percent to about 2.5 percent.

This is a pretty good measure. A more direct and plodding comparison would have been to look at the rise of 'am' against 'is', but this would have given a distorted picture, as it would exclude first-person statuses that don't use the verb to be, such as 'I love', 'I hate', 'I've just been', etc. Given Facebook's reputation, back in the days of the is-initial status, for producing syntactically mangled updates (e.g. 'John Smith is I hate Mondays'), it would be fun to know what proportion of posts begin 'is' and also contain 'I'.
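For the curious, here is a minimal sketch (in Python, over a handful of made-up statuses - the real exercise would obviously need Facebook's own data) of the sort of counting involved; contractions such as "I've" are left uncounted for simplicity.

```python
# A rough sketch of the counting involved, run over a toy sample of statuses.
from collections import Counter

statuses = [
    "is going to the shops",    # old-style, is-initial
    "is I hate Mondays",        # syntactically mangled is-initial status
    "I love this song",
    "I've just been to the cinema",
]

# Proportion of all words that are 'I' or 'is'
words = [w.lower() for s in statuses for w in s.split()]
counts = Counter(words)
total = len(words)
print("'I' share:  %.1f%%" % (100 * counts["i"] / total))
print("'is' share: %.1f%%" % (100 * counts["is"] / total))

# Proportion of is-initial statuses that also contain 'I'
is_initial = [s for s in statuses if s.lower().startswith("is ")]
mangled = [s for s in is_initial if "i" in s.lower().split()[1:]]
print("is-initial statuses containing 'I': %d of %d" % (len(mangled), len(is_initial)))
```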

Has anyone started building a corpus of social network status updates? Being able to run proper analyses on all that data would be fun. Maybe not that useful, but fun.

Happy Christmas.

# Alex Steer (23/12/2009)


Rage Against Social Media

1077 words | ~5 min

Let's start with the details before the analysis makes them pointless. On 20 December 2009 the winner in the annual competition for the UK Christmas Number One music single was announced. It was, rather unexpectedly, 'Killing In The Name' by Rage Against the Machine, a rap metal classic from 1992. The secret of the single's sudden festive success was a campaign on the social networking site Facebook. The 'Rage Against The Machine For Christmas No. 1' group, started by Jon and Tracy Morter, attracted over 960,000 members. Its aim was not just to get 'Killing In The Name' to the Christmas Number One spot. It was to prevent that spot from being taken, for the fifth year in a row, by the winner of the X Factor reality TV talent competition. This year's winner, 'The Climb' by Joe McElderry, was relegated to Number Two. As far as anyone can tell, this chart result has been driven exclusively by the hype generated around this Facebook campaign.

I write this on the evening of the 20th December with a sense of mild but mounting dread, because I know what's coming. Fair play to the Morters, who have pulled off something quite spectacular. I just know that, as a planner, it is going to be my fate for the next several months never to hear the end of this. This is going to be the social media victory to end all social media victories. This is the archetypal story of ordinary consumers using the power of social networks to stick it to the man. Never mind that Rage Against The Machine's back catalogue is owned by Sony BMG, the same company that released Joe McElderry's single, and that this competition has been like a licence to print money for them. Never mind that 'Killing In The Name' is not only a violently anti-corporatist song but one of the least festive things you'll ever hear in your life. This is going to be the story of the power of crowds that will run and run and run.

It will appear, I confidently predict, in every PowerPoint presentation on social media for the next year. It will be impossible to sit in any meeting about digital marketing without hearing about it. Every trends deck, channel plan and comms strategy will feel duty-bound to namecheck it. It will become part of the frame of reference for comms strategists as surely as the Iranian election and the T-Mobile Flash Mob. (If you don't know what these are, look them up. There'll be a PowerPoint presentation waiting to enlighten you.)

Does it deserve the fame it will inevitably garner? Maybe it does. It's quite an achievement, after all, despite the difficulties listed above. My problem is that it feels like such a hollow achievement. If good communications and brand planning should have as their aim to identify and meet people's wants, feelings, desires and hopes, then this seems a little small. It meets one desire - the desire to keep The X Factor off the Number One spot. But even in doing that it reminds us of the keen sense of loss we feel for the time when having a Christmas Number One was meaningful: when the last edition of Top of the Pops before Christmas was a national event; when the tail of singles releases was so short that you needed to sell a phenomenal number of records to be Number One; when music-buying, rightly or wrongly, felt like part of some collaborative effort, at Christmas time most of all. All of these drivers of behaviour are almost gone now. In the last ten years only two Christmas Number Ones have not arguably been propelled to the top by some sort of media (usually television) event. The charts have become a proxy, a sort of front organisation for other attempts to create common experiences through the media.

In its way, this Facebook victory is no different. This time the charts are acting not simply as a proxy for TV, as they have in recent years, but as a proxy for the battle between the perceived blandness of mass media and the perceived humanity and vitality of social media. In an interview, Zach De La Rocha, the Rage Against The Machine front man, described this as a battle between a 'sterile pop monopoly' and an 'incredible grassroots campaign'. I'm not so sure, myself, that this doesn't represent a rather sterile Facebook monopoly. Not on the part of the group's creators, but on the part of everyone who didn't want another X Factor Number One and who therefore got behind the Facebook campaign. Can almost a million people who joined the group, or the half million who downloaded the song, really have wanted 'Killing In The Name' at Number One? Or did they just want something other than The X Factor? Where was the creativity of intention that said that we don't need a Christmas Number One any more? Why didn't half a million people organise into flash mobs across Britain and sing songs, or sweep the streets, or hand out soup and tea on some of the coldest nights the country has seen for a long time? That would have been harnessing the power of social networks far more than joining in a rather reductive either/or game through the charts.

So congratulations to everyone who took The X Factor off its perch - it is, on its own, one hell of an achievement. But to those of us who make a living talking about consumers and brands, let's pause a bit before we decide that this is the best that hundreds of thousands of interconnected human beings can manage to do with their new medium.

(I realise this post has had very little linguistic content. The song is noteworthy for its rather sweary refrain, though, which is presumably the reason it was selected as the anti-X-Factor candidate. With that in mind, I can do no better than point you towards my former colleague Jesse Sheidlower, whose book and blog The F Word deal with the word in question in impressive detail. Let this be a starting point for anyone who feels tempted to wade in complaining about swearing in the Christmas Number One.)

# Alex Steer (20/12/2009)


From semantic webs to thesaurus worlds

633 words | ~3 min

I've spent a lot of time recently thinking (not blogging) about context-dependent search and location-based digital services of the Layar variety, and in particular what they mean for brands and consumers. There's too much to go into, but on the one hand there is enormous potential to improve opportunities for that most enjoyable of activities, random discovery; and on the other, there is the rather tiresome effect of putting another layer of intermediation between people and the world around them, even as we keep praising the disintermediating effects of the social web.

For now, I'd like to think about one of the newest real-world search applications, the frankly astonishing (if it works) Google Goggles. There has been no shortage of planners, social media commentators and others talking about GG, so I won't rehash those analyses. I will say, with my linguist's goggles on, that this is a particularly clear example of the trend towards making the real world a more semantically well-organised place.

A few years ago it felt like you couldn't move without hearing someone talking about the semantic web: the idea that data on the web should be well structured so it can be read and processed by machines as well as humans, meaning that data can be sliced, diced and re-presented as desired. Though it's less of a buzzword now, the idea of the semantic web (now it's not just mired in tagging) has proved phenomenally useful. We're used to websites that can cut and swap data via APIs and XML specifications - think of all those Facebook apps that use your data, or the various platforms for accessing Twitter. The other big development, of course, has been in natural language processing - helping computers improve their understanding of unstructured data.
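As a toy illustration of the 'sliced, diced and re-presented' point (the XML below is invented for the example, not taken from any real feed), a few lines of Python are enough to reshape well-structured data:

```python
# Once data is published in a structured form, reshaping it is trivial.
import xml.etree.ElementTree as ET

feed = """
<posts>
  <post date="2009-12-18" title="From semantic webs to thesaurus worlds"/>
  <post date="2009-12-15" title="Climate and other gates"/>
</posts>
"""

root = ET.fromstring(feed)
# Re-present the same data as a simple dated listing
for post in root.findall("post"):
    print(post.get("date"), "-", post.get("title"))
```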

All this has made the web a much more searchable place. We are now beginning to see the creation of a semantic world, which functions by interposing a layer of data between device-carrying human beings and physical objects. For individual users our semantic goggles can be switched on and off when need be (though in theory that's true of mobile telephones too), but collectively it means that we are going to start paying a lot more attention to the meanings and categories of things in the world. We are moving from the world as dictionary - a collection of items that we can only read one at a time and try to remember - to the world as thesaurus: a network of semantically interrelated items that we can cross-refer and make associations between. (For anyone interested in the implications of the web for this kind of thinking on actual dictionaries and thesauruses, I recommend my former colleague James McCracken's excellent paper.)

Inevitably, when a new visual or audio search technology comes along, someone somewhere proclaims the death of language as a medium on the web. Given my expressed scepticism about 'death of' arguments, I obviously have little sympathy, particularly in this case. Real-world search is going to require a massive expansion of both the semantic web and of natural language processing as we try to make sense of the poor stimuli and fuzzy logic (not to say fuzzy images) of the world on which we're trying to build a thesaurus. I can't help feeling John Wilkins and the other seventeenth-century theoreticians of philosophical languages should be the patron scientists of the context-dependent web.

# Alex Steer (18/12/2009)


Climate and other gates

1222 words | ~6 min

There can be something a bit precocious and annoying about new words, can't there? There's also no shortage of them: it seems to be a sport among journalists and tech people, in particular, to come up with new coinages. Few of them stick.

In my post on heatist, though, I made the point that new words, even if they don't last, can tell us a lot about the attitudes of the people who coined them. One way of finding those attitudes is to look for meaningful similarities to existing words. Linguists call these groupings of similar words classes. Classes may group words by their forms (often called lexical or morphological classes): words ending in -ing, for example, or words that pluralize in -en (e.g. ox, child). They may also group words by their functions: the parts of speech are classes. They may even group words by their meanings: thesauruses are lists of quite tightly-defined semantic classes.

Sometimes classes can be both formal and meaningful. 'Heatist' belongs to just such a 'lexical semantic class' - a class of words ending in -ist that denote prejudice or favouritism. Whenever someone coins a new word in this class, we know he or she is trying to encode some value judgements into the new word. What a heatist is doing is bad, like the actions or attitudes of a racist or a sexist or an ageist. (On a side note, we need to be careful not to confuse lexical-semantic classes with broader, purely lexical ones. A racist and a sexist belong to the class I've mentioned; a hypnotist and a dentist do not.)

The scandal over leaked emails from the UEA Climatic Research Unit (see my previous post and everywhere else) throws up yet another chance to think about the claims made by a certain lexical-semantic class: in this case, the class of words ending in -gate denoting scandal. This class takes its lead from Watergate, the name of the Washington, D.C. office complex that gave its name to the political scandal that erupted in the USA in 1972.

How many -gate words are there? Helpfully, Wikipedia has a list of scandals with the -gate suffix. Of course, Wikipedia is freely editable, and there's no guarantee of completeness, but it's probably the best resource available without spending a lot of time on a lot of corpora, which just isn't worth doing.

The Wikipedia -gate List (hereafter WGL) has 129 items, which suggests that the -gate class is pretty productive. None of them seem bogus (I've checked), and even though some of them refer to the same scandals (e.g. Irangate and Contragate) they're still separate lexical items. I went through the list and classified each scandal by the decade in which it emerged. (Note, this is not quite the same as saying these words were coined then, though it's a fairly safe approximation.)
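Here's a sketch of that decade classification in Python, using a few illustrative (word, year) pairs rather than the full 129-item list; the pairs shown are examples only.

```python
# Group -gate coinages by the decade in which the scandal emerged.
from collections import Counter

gates = [
    ("Watergate", 1972), ("Irangate", 1986), ("Contragate", 1987),
    ("Nipplegate", 2004), ("Spygate", 2007), ("Climategate", 2009),
]

by_decade = Counter(10 * (year // 10) for _, year in gates)
for decade in sorted(by_decade):
    print("%ds: %d" % (decade, by_decade[decade]))
```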

Here's the distribution of the WGL by decade:

Image lost in database transfer - sorry!

This tells us either that, despite its 1970s origins, -gate has been most productive in the last ten years, or that the list is incomplete and heavily biased in favour of recent examples. Possibly both.

Let's break down the 2000s by year:

Image lost in database transfer - sorry!

Again, it's heavily stacked towards the present. This may tell us more about the inherent biases of an encyclopedia that relies on what people know and can remember than it does about -gate, but that's interesting too.

So we've seen roughly when these -gate words were coined, but how successful have they been? Which ones never made it past the closed loop of journalist-speak, which now includes blogger-speak? To be fair to both journalists and bloggers, there's a reason to be sceptical: these are not just words, they are names of events, so we should expect them to have shelf-lives corresponding to the impact of those events. Still, it's good to wonder which ones were being read and repeated by people, rather than just committed to paper (or to the screen) never to be used again.

A half-decent test for this is to run each word through Google Trends, which tracks the volume of searches for words and phrases across time (for the last five years, anyway). Google Trends hits indicate that people are searching for a term, which at least suggests some limited uptake. Because Google Trends results are relative, I've used Watergate (which has unarguably stood the test of time) as the benchmark.
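If you wanted to reproduce this today you could script it with pytrends, an unofficial Google Trends client that did not exist when this was written, so treat the sketch below as illustrative only. Trends comparisons are limited to a handful of terms per request, which is why 'Watergate' has to sit in every batch as the common yardstick.

```python
# Benchmark a batch of -gate coinages against Watergate's relative interest.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
candidates = ["Climategate", "Spygate", "Nipplegate", "Troopergate"]

pytrends.build_payload(["Watergate"] + candidates, timeframe="2004-01-01 2009-12-31")
df = pytrends.interest_over_time()

# Peak relative interest for each term, expressed against the Watergate peak
for term in candidates:
    print(term, round(df[term].max() / df["Watergate"].max(), 3))
```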

Here's what you get for Watergate:

Image lost in database transfer - sorry!

The big spike in May 2005 was caused by Vanity Fair's revelation of the identity of Deep Throat, the informant who leaked details of the Watergate Scandal.

I ran the names of all of the scandals coined in or after 2004 through Google Trends. Most of the -gate words on the list do not even show up on the graph when compared to Watergate. Those that do are tiny by comparison. They are:

  • Bloodgate (2009)
  • Crashgate (2008)
  • Naftagate (2008)
  • Petrogate (2008)
  • Utegate (2009)
  • Nipplegate (2004)
  • Troopergate (2008)
  • Spygate (2007)
  • Climategate (2009)

Even among these, most had only one tiny spike on the Google Trends graph, enough to conclude that they had no significant shelf-life at all, even if they were searched for a short, intense period.

The ones with a more prolonged presence on the graph (shown zoomed in below) were:

Images lost in database transfer - sorry!

Even these have pretty short lives as words that are interesting enough to make people search for them in any significant numbers. The relative search volume for each, compared to Watergate, is tiny.

So what Climategate tells us is that although the lexical-semantic class -gate is very productive, most of the words within it have no staying power and little currency. This should not stop us from recognising the power of words in that class. However short a time they have, by bringing the taint of scandal, they can destroy reputations and bring down careers.

On a less serious note, here's a quote on the subject care of That Mitchell and Webb Look:

WEBB: Oh, the scandal in America. Yeah, that is interesting. That must be the biggest scandal since Watergategate.
MITCHELL: Watergategate? Isn't it just Watergate?
WEBB: No. That would mean it was just about water. No, it was a scandal or gate, add the suffix gate, that's what you do with a scandal, involving the Watergate Hotel. So it was called the Watergate scandal, or Watergategate.
MITCHELL: Well said.

# Alex Steer (15/12/2009)


Posting lions to Twitter

96 words | ~0 min

I worked this out the other day, and it made me smile.

If you were to take all the text posted to Twitter in a single hour at a peak time of day (around 1.5 million messages, according to Tweespeed at the time of looking), and print it out on normal 80gsm paper, single-sided, single-spaced in 12pt Courier New, that hour's worth of Twittering would weigh around 250kg.

Which is roughly the weight of a lion. Quite a large one.

Suddenly Twitter's bird logo looks a bit out of place.
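For anyone who wants to check my working, here is roughly how the sums go. The only figure taken from above is the 1.5 million messages per hour; the average tweet length, page capacity and sheet weight are my own assumptions, so the result is rough.

```python
# Back-of-envelope reconstruction. Only the hourly message volume comes from
# the post; everything else below is an assumption.
tweets_per_hour = 1_500_000
avg_chars_per_tweet = 120      # assumed average message length
chars_per_line = 65            # approx. for 12pt Courier New on A4 with normal margins
lines_per_page = 55            # approx. for single spacing
sheet_weight_g = 5.0           # 80gsm A4: one sixteenth of a square metre per sheet

lines = tweets_per_hour * avg_chars_per_tweet / chars_per_line
pages = lines / lines_per_page
weight_kg = pages * sheet_weight_g / 1000
print(f"~{pages:,.0f} pages, ~{weight_kg:,.0f} kg")   # roughly 50,000 pages, ~250 kg
```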

# Alex Steer (10/12/2009)


A note on ‘heatist’

1145 words | ~6 min

Andrew Gilligan's Telegraph piece on some of the operating costs of putting on the Copenhagen climate summit introduced me to a new word in its description of the climate protests taking place in the city.

In the city's famous anarchist commune of Christiania this morning, among the hash dealers and heavily-graffitied walls, they started their two-week "Climate Bottom Meeting," complete with a "storytelling yurt" and a "funeral of the day" for various corrupt, "heatist" concepts such as "economic growth".

'Heatist' seems to be new. If we assume that it's an addition to the set of words in -ist that contains words like sexist and racist, as well as newer formations like ageist and heightist, then at first sight it seems to imply that concepts like economic growth are prejudiced in favour of certain kinds of heat (just as sexist people are prejudiced in favour of certain sexes, racist people in favour of certain races, etc.).

Clearly, though, 'heatist' means something more specific than this, and that definition will not do. In context ('corrupt, "heatist" concepts such as "economic growth"'), and with the benefit of knowledge of the whole article, we can see that 'heatist' denotes something that promotes or permits global warming. Note that economic growth does not directly cause global warming, but because of its focus on production and progress rather than sustainability it acts as an enabler and driver of unsustainable environmental practices (burning fossil fuels, etc.).

So how old is 'heatist'? There are no hits for it on LexisNexis (barring Gilligan), in the British National Corpus (which runs to 1993), in the Corpus of Contemporary American English (to 2009), or in the TIME Corpus (to 2009). And so (with due apologies for method to ACL-SIGWAC) we turn to Google (Web and Usenet searches) and various searchable social media sources. Andrew Gilligan's piece is dated 5th December 2009, and there is very little evidence of the term in use online before this. There are already a few postdating examples, though all seem heavily influenced by the Gilligan article.
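Reduced to its simplest form, the antedating hunt looks something like the sketch below: scan a folder of dated plain-text documents (a stand-in for the corpora and search results named above - the folder and file-naming scheme are my own invention) and report the earliest hits.

```python
# Scan dated text files for the earliest occurrences of 'heatist(s)'.
import re
from pathlib import Path

pattern = re.compile(r"\bheatists?\b", re.IGNORECASE)

hits = []
for path in Path("corpus").glob("*.txt"):   # assume files named YYYY-MM-DD_source.txt
    text = path.read_text(errors="ignore")
    if pattern.search(text):
        date = path.name.split("_")[0]
        hits.append((date, path.name))

# Earliest five dated hits
for date, name in sorted(hits)[:5]:
    print(date, name)
```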

If looking for 'heatist' as an adjective runs into the ground, then the results are slightly better for 'heatist' as a noun. All the earlier examples I have found for 'heatist' are as nouns. However, the original sense (in contexts relating to global warming) is different. A heatist was (and is) a proponent of the idea that global warming is real or man-made. I've assembled a specimen quotation paragraph below. Only very recently does 'heatist' as a noun or adjective assume this new meaning - see the second and third quotation paragraphs. So far I have found no examples of the plausible relevant sense of 'heatism'.

A caveat: electronic texts are subject to change. Formally, many of these quotes should be dated 9 December 2009, as this is when I found them, so this is the earliest I can prove they were written in their current form. For the sake of ease I'm treating the stated dates of composition as genuine here. (Some former colleagues would kill me if I didn't mention this.)


heatist n.

1. A proponent of the idea that global warming is a genuine phenomenon, esp. one caused by human environmental activity.

[2006 Cerebrate's Contemplations (Electronic text) 11 June, Dear feminists, masculinists, leftists, rightists, blackists, whitists, straightists, gayists, heatists, coolists, pacifists, militarists, lifeists, deathists, and assorted other activists of all stripes...] 2008 Global Warming Nonsense in Pistonheads (Electronic text) 26 Apr., I'd say the heatists are in danger of doing more harm than good, however well intentioned their actions are. 2009 So where are the warmists now? in Harmony Central (Electronic text) 20 Nov., I just got back from the services at the Church of Global Warming. All the acolytes are warmists and the priests are heatists.

2. A proponent of practices contributing to global warming.

2009 T. White Twitter (Electronic text) 1 Dec., I've just been called a warm-racist and a heatist.

heatist adj.

Of a concept or practice: that indirectly promotes or contributes to global warming.

2009 Telegraph.co.uk (Electronic text) 5 Dec., They started their two-week "Climate Bottom Meeting," complete with a "storytelling yurt" and a "funeral of the day" for various corrupt, "heatist" concepts such as "economic growth". 2009 'Kardon' Twitter (Electronic text) 6 Dec., I declare myself a proponent of "heatist" concepts, anyone who does not like that can go freeze to death this winter.


The new sense of 'heatist' may have developed within the protest community organising around Copenhagen, and so deserves the 'new-word quotes' that Andrew Gilligan gives it. Without more evidence it's hard to be sure. It's not clear whether it was used adjectivally before Andrew Gilligan picked it up, or whether he is re-analysing a noun as an adjective. Any further evidence, especially antedatings, would be welcome. (Email alex@cantab.net.)

The sustainability debate is one of those areas that churns out a lot of new words and senses as well as a lot of feeling, though there's no guarantee of which new coinages will thrive. The new sense of 'heatist' is interesting for more than just lexicographical reasons, as it represents an attempt to reclassify unsustainable behaviour and concepts as kinds of hate activity by assigning them to a lexical-semantic class normally reserved for such things. I'll keep tracking 'heatist' for a while and post anything interesting here.


Update: As a side note, there is a contributor to the conservative blog Knowledge is Power who uses the handle 'Claire, Ideologically Stubborn Heatist Ruralite', and who has posts under that handle dating back to 2006. However, because under many blog systems changes to online handles are backdated, there's no guarantee that this handle was in use in 2006, so it can't really be counted as evidence of the term in use then (unless I hear different).

# Alex Steer (09/12/2009)


How dead is everything?

271 words | ~1 min

Taking my cue from Richard's post, I thought it would be fun to see the extent to which the blogosphere (as read by Blogpulse) has decided that things are dead over the last six months.

A warning first: like most untagged corpus searches, this probably returns quite a lot of noise, especially since Blogpulse tends to ignore stopwords. So don't take it as gospel.

Let's look at how dead email, print and Twitter have been recently. This search looks for the form '[medium] is dead' (which means '[medium] [any stopword] dead' to Blogpulse).

Image lost in database transfer - sorry!

Sorry, print, it's not looking good.

This is the same set of media queried as 'death of [medium]' (i.e. 'death [any stopword] [medium]').

Image lost in database transfer - sorry!

This version is hit quite badly by the death of Michael Jackson, which seems to have occasioned a lot of blog posts about the widespread reporting of his 'death on Twitter', which come into our results because 'on' is a stopword, so indistinguishable from 'of' to Blogpulse. This is a lot less reliable than the previous graph, then, but worth including as a warning to people not to take search results at face value.
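Blogpulse's internals aren't public, but the effect is easy to mimic: if stopwords are treated as interchangeable, 'death of Twitter' and 'death on Twitter' match the same pattern, which is roughly what is polluting the graph above. A toy sketch:

```python
# Illustrate how a stopword-blind phrase query conflates 'of' and 'on'.
import re

STOPWORDS = r"(?:of|on|in|the|is|was|to)"
death_of_medium = re.compile(r"\bdeath\s+" + STOPWORDS + r"\s+twitter\b", re.IGNORECASE)

posts = [
    "Is this the death of Twitter as a serious medium?",
    "Fans learned of Michael Jackson's death on Twitter before the news broke.",
]

for post in posts:
    print(bool(death_of_medium.search(post)), "-", post)   # both match
```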

And finally, given that Blogpulse is a corpus of blogs, this has to be my favourite:

Image lost in database transfer - sorry!

Does this suggest blogging is a winter sport?

# Alex Steer (03/12/2009)


Turn down the noise, turn up the silence

860 words | ~4 min

Sathnam Sanghera argues in the Times today that 'email is no longer a useful method of communication'. His reasons are, in short, that there is too much of it (the 'victim of its own success' argument), and that social networking sites 'allow you to message lots of people at the same time much more efficiently than e-mail' (and eliminate the need for some kinds of email) and 'encourage brevity of communication'.

The evidence for the death of email is presented as follows:

It is true, of course, that e-mail is still growing: according to a recent study, in the month of August this year, the number of e-mail users increased by 21 per cent. But social networking is growing faster: over the same period, the number of social network users grew by 31 per cent.

So continued growth means decline, apparently. Over the course of the article the position weakens, from the death of email to the 'peaking' of email to the assertion that 'e-mail is a long way from dying out... but it is beginning to fade'. The final sentence expects that email will '[end] up as the preferred method of communication for business users, the elderly, the helplessly middle-aged, the hopelessly nostalgic, estate agents, solicitors and credit card companies.' After all (we are asked a couple of paragraphs before) 'when was the last time a teenager sent you an email?'

Good question. The answer is: in about 2002. Because that was the last time any of my friends were teenagers. If I were a parent, I would not expect any form of electronic communication from my teenage children before they were 18 because they would, presumably, live in my house. Even if they didn't, teenagers are not known for their love of sending personal messages to their parents. There is some survey research from a few years back (not to mention the odd over-hyped thinkpiece) suggesting that teenagers use email rather less than older internet users, but all of this needs to be taken with a pinch of salt. Email remains outstandingly popular among teens, even if IM and social network messaging are jostling for first place.

The big unspoken assumption is that the habits of teenagers are supremely important. When last I checked, 'business users, the elderly, the helplessly middle-aged, the hopelessly nostalgic, estate agents, solicitors and credit card companies' were quite a large constituency. The comparative growth of social networking sites is also not a fair comparison. These platforms offer lots of features besides messaging, whereas email offers only messaging. They are also much newer (email was invented in 1965, social networking platforms in the late 1990s by the most generous measure), and so are at a higher point in the hype cycle.

Email is a language technology, like handwriting, print or SMS, and like any language technology it lives or dies by its applicability to the lives and needs of language users. But not all language users are the same. Teenagers, for example, have disproportionate amounts of free time and a fairly low need to archive what they write. Archive-poor environments like IM and attention-diverting environments like Facebook are therefore much more acceptable venues for using language technology to teenagers than they would be to busy professionals, for example. No-one would say that Twitter has never got off the ground because its teenage demographic is small compared to its twenty- and thirtysomething user base. Nor could the replacement of email by substantially email-like messaging systems embedded within social networks really be called a revolution in language technology. Email would not be dead even if teenagers were deserting it in droves.

Sathnam Sanghera began his piece with an anecdote about the agony of waiting for a reply to an emailed invitation. In doing so he completely forgets the sheer joy of delaying replying to an email so that one can get on with doing something more important. In this, his reaction is itself quite teenage. It might do him, and all of us, good to step back and ask whether there is any value in calling the 'death' of a still-flourishing language technology. If the constant always-on noise of modernity is too much, I recommend reading Harold Love (1993) Scribal Publication in Seventeenth-Century England for its account of how one language culture (manuscript) flourished long after another (print) had in theory superseded it. Love warns against 'blindness to the nature and persistence of this culture', which many early modern writers preferred for the sense of privacy - of quiet - that it gave their work.

# Alex Steer (01/12/2009)


Ole, Allah and all

583 words | ~3 min

Earlier today I had the chance to watch Elizabeth Gilbert's lively and interesting TED Talk on nurturing creativity. Her thoughts on the dangers of pinning too much on individual creative ability and ignoring its context (and transience) make good advice for those in creative industries of any kind.

Talking about the creative spark, though, she says:

And when this happened, back then, people knew it for what it was, you know, they called it by its name. They would put their hands together and they would start to chant, 'Allah, Allah, Allah. God, God, God.' That's God, you know. Curious historical footnote - when the Moors invaded southern Spain, they took this custom with them and the pronunciation changed over the centuries from 'Allah, Allah, Allah' to 'Ole, ole, ole', which you still hear in bullfights and in flamenco dances.

This claim for the etymology of ole (bravo) from Allaah (Allah, God) crops up a lot. It has a certain charm to it and draws on our fascination with the kingdom of Al-Andalus and the fusion of Muslim, Jewish and Christian cultures that thrived in the Iberian peninsula from the 8th century until the expulsion of Jewish and Islamic people and religious cultures from Spain in the 16th and early 17th centuries.

Unfortunately it has been discredited, mainly on the basis of the lack of evidence and the unlikely phonological shifts necessary to go from the low final vowel sound of Allaah to the higher final vowel of ole. For the best and most knowledgeable treatment, see the late Alan S. Kaye (2005) 'Two Alleged Arabic Etymologies', Journal of Near Eastern Studies 64:2 (subscription required). The historical record is not in favour of the Allaah etymology either. Spanish ole is not recorded before 1541, long after the Reconquista. The revised entry in OED Online (2003) suggests ole may derive from hola/ola (the greeting) but is cautious even on this.

Interestingly, there are lots of exclamations of greeting, surprise or despair in both Romance and Germanic languages that have a vowel-'l'-vowel pattern of some sort - compare alas, hela, weyla, hola, hail, heil, hello (and its earlier variants, including hallo and hullo), holla, etc. (Note: not French voilà, which is from vois là - 'look there'.) As the OED implies, these probably all developed around each other to a certain extent.

Busting open beautiful myths is not any linguist's favourite task (unless he or she's a really sociopathic linguist), and this definitely falls into the 'I hate to have to tell you' category. But I think creativity and beauty need truth and reason to balance them and make them better, so there you go.

# Alex Steer (30/11/2009)