Alex Steer

Better communication through data

The cloud without the silver lining

1145 words | ~6 min

I'm still amazed how much of the public conversation about cloud computing has been focused on the upside. Admittedly, the upside is pretty good: the promise of reliable, scalable, cheap computing services, where somebody else does the grunt work of maintaining the servers and connections, and you just pay for what you use, like a driver on a toll road.

But it's good to see some of the downside risks start to make their way to mainstream attention in the wake of the Amazon EC2 service meltdown and the Sony PlayStation Network hack. Bloomberg does a nice job of drawing the line between those two, showing that EC2 was used to bring down the PlayStation Network. Is that the first case of cloud-on-cloud crime?

Reliable, scalable, cheap computing services, people are starting to notice, may suffer from a Holy Roman Empire problem: they may be neither reliable, nor scalable, nor cheap.

It's always slightly unfair to bash the reliability of a whole infrastructure because of a few significant failures. Neither the Amazon nor the Sony incident means cloud computing as a model is necessarily flawed - but we should avoid the temptation to reach for the 'one bad apple' excuse. In these cases both Amazon and Sony were unreliable. That doesn't make all cloud computing unreliable, but it may show that organizations that rely on it suffer from low resilience.

Here's the difference between reliability and resilience. If Bob's Widget Company outsources all its computing needs to cloud providers, it probably gains computing reliability. As Adam Smith might tell you (and it's not often I cite Adam Smith), professionalization tends to drive up quality at least somewhat. Bob's decision to let computer professionals handle his computing needs is smart. In the hands of a pro, his services are less likely to topple over, and more likely to be fixed quickly if they do.

Economy-of-scale logic kicks in here. Rather than hire his own systems administrator, Bob can outsource to a big shared hosting company, which can do the job more cheaply with little apparent drop in reliability. For Bob, this is clearly an efficiency gain. He may also think of it as a gain in resilience. The economies of scale and the competitiveness of the shared hosting industry mean lots of good things like better backup systems, version control and proper firewalling. Bob's computing services are in safe hands.

So where's the drop in resilience? It's in the resilience of the whole system. A thousand computing services hosted in a thousand locations are less efficient and less reliable, but more resilient. If Bob and his widget-making rival, Tim, host their computing services in different places, and Tim's services are hit by a power outage or a bus crashing into the office, Bob isn't harmed, and may even gain the extra business from Tim's customers. If they're both in the same place, along with the services of lots of other widget-making rivals, then it's goodbye widgets.
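The trade-off is easy to put into numbers. Here's a back-of-envelope Python sketch with invented failure probabilities - Bob, Tim and all the numbers are hypothetical, purely to show the shape of the argument:

```python
# Invented, illustrative probabilities: suppose each small independent host
# fails in a given month with probability 0.01, while the big shared
# provider, being professionally run, fails with probability only 0.001.

p_small_host = 0.01   # Bob's (or Tim's) own hosting
p_shared = 0.001      # the shared provider

# Reliability: individually, each firm is ten times safer on the shared host.
p_bob_down_independent = p_small_host   # 1 in 100
p_bob_down_shared = p_shared            # 1 in 1,000

# Resilience: how often are Bob AND Tim down at the same time?
# On separate hosts the failures are roughly independent:
p_both_down_independent = p_small_host * p_small_host   # about 1 in 10,000
# On the same provider, a single outage takes out both:
p_both_down_shared = p_shared                           # 1 in 1,000

print("P(both down), separate hosts:", p_both_down_independent)
print("P(both down), shared host:  ", p_both_down_shared)
```

Each firm is individually more reliable on the shared provider, but the chance of the whole widget industry going dark at once is roughly ten times higher - which is the reliability/resilience distinction in miniature.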

In a low-resilience system the ripple effect from failure is greater. We've seen that with Amazon EC2, we've seen it with the attacks on Tumblr and PayPal, and we've seen it writ large with the subprime mortgage collapse. In that last case, in particular, it wasn't just the concentration of a single risk that caused so much trouble - it was the interaction of lots of risks. The assumption that all subprime mortgages were, like the unhappy families in Anna Karenina, each unhappy in their own way, led to a belief that a risk to one would not be a risk to all. Which meant that, a little like cloud services, all those mortgages could be bundled together into derivative products whose risk of going bad would supposedly be less than the risk of each individual mortgage defaulting. When the waves of foreclosures happened, it became clear not only that the risks to thousands of subprime mortgages were all much the same, but also that the interaction of risks - between mortgages, CDOs, credit-default swaps and all the other glamorous products hosted in the mortgage cloud - created a downward spiral, hastening the collapse.

Even more than shared hosting systems, cloud computing - where the underlying services are more fundamentally shared - is low in systemic resilience. And this means that all the talk of its scalability may also be pretty risky. Cloud services can be scaled in much the way that I can stack pennies on top of each other without much effort. I can keep going until I have an impressive tower of pennies for very little incremental cost. But if it falls over, I've got a lot of clearing up to do.

What about 'cheap', then? Again, this becomes a conversation about complex systems. The incremental cost of cloud computing to the end-user is extremely small. But that's because we're operating in an economy which assumes that a lot of the systems that let you access cloud services will continue to be extremely cheap, incrementally almost costless. As my colleague Andrew Curry has pointed out:

The technology industry has grown up in an age of cheap and abundant energy, and that has shaped, deeply and fundamentally, the way it sees the world, what it chooses to make, and how it designs what it does... But the age of cheap and abundant energy is coming to a close. It is about to become scarcer and more expensive.

The cloud computing model assumes cheap electricity and cheap bandwidth, assumptions that pretty much no scenario I've seen supports. This poses two problems: one about business continuity, the other about marketing. If the energy outlook or the bandwidth cost becomes more volatile, mass-scale cloud computing may suddenly look less attractive, since despite its low incremental cost it adds additional layers of cost in terms of data and the need for persistent high-bandwidth network connections.

The second problem is about changing attitudes. The available energy forecasts depend for their success on a reduction in resource use, especially in developed markets. After the economic downturn reduced disposable income, people learned how to tighten their belts, became more savvy and calculating about spending, and have grown angry with businesses and brands that seem to promote wasteful spending and financial irresponsibility. Might an energy downturn not prompt a similar anger at business and brands that seem to promote an over-indulgent approach to data and downloading?

So my question for marketers is: could we be heading for a cloud crunch? And if we are, could your enthusiasm for the cloud and for the always-on world turn from a strength into a vulnerability?

# Alex Steer (22/05/2011)


Splitting the future of libraries into two

481 words | ~2 min

In this post I'm not talking about mobile networks or attitudes to the cost of cloud computing. I will do in later posts, but this one is about libraries.

The BBC News website has a short and slightly overdramatic video piece asking whether e-books will spell the end of lending libraries. (It also shows lots of nice shots of the British Library, where I spent a summer scanning manuscripts and early printed books. Rock and roll.)

It doesn't exactly come to many conclusions, so I've come to a couple of my own.

It's hard to argue with the notion that digital or digitally scanned texts are more useful than printed books, at least in highly-connected environments like the UK. You can raise lots of other arguments for the superiority of printed books - as cultural totems, as art objects, as fetishes (in the broad sense - the smell, the feel, etc.), as gifts (though as I've said that perception may change, if slowly). But ebooks are weightless and searchable. In terms of sheer usability, for most of us, ebooks just win. As they become more familiar and the technology less baffling, we can expect an acceleration in their uptake.

Under those circumstances, taken to an extreme, there's no need for physical libraries as places to go and borrow books from, even ebooks. The idea, which I've heard mooted, of turning up to download an ebook is insane. It's suggested by some ebook pioneer in the BBC video, but it just makes no sense and will kill lending libraries even faster, especially if the cost of buying ebooks falls - and it might, as the ability to peg the price of ebooks to the price of physical books declines because people realize it's horribly unrealistic.

Libraries have another function, though. If you take the idea of a lending library, rather than its strict form (a building full of books you can read and borrow), it splits neatly into two:

  1. A resource for acquiring reading matter on a temporary basis at low or zero cost; and
  2. A publicly-accessible shared work space available at low or zero cost

Online ebook lending takes on the first function. As far as I know, nothing takes on the second function except coffee shops, which do so incidentally and with varying degrees of begrudgery.

For now, and probably for decades, there will be a need for book-lending libraries, though I reckon it will become increasingly marginal. It feels, though, like the idea of shared public workspaces has a lot of mileage. If I had to advise a library on where to invest, I'd say more desks, comfier chairs, and faster, simpler internet connections. Oh, and coffee machines.

# Alex Steer (13/03/2011)


Social media disaster response, 1906-style

215 words | ~1 min

Just quickly. The earthquake and tsunami in Japan have already started generating the usual hastily cobbled-together journalism on the role of social media during disasters.

I wondered if this was an entirely new phenomenon, or if people used to write the same sort of thing about previous leading technologies. Turns out the answer is yes, as this publication listing in a 1906 copy of Engineering Magazine shows:

Magazine clipping: Telegraphy and Wireless Telegraphy During the San Francisco Disaster

Anyone got a copy?

# Alex Steer (11/03/2011)


Switched-off futures - two quick thoughts

157 words | ~1 min

This is a note on a couple of things I'm thinking about. No, not Five Things; just two, to remind myself to write a bit more about them here.

I'm thinking, as I sometimes do, about the challenges that new media and technologies are going to run up against once the initial enthusiasm for them dampens a bit. Right now I'm wondering about a couple of things - social reactions to the energy cost of always-on mass distributed computing (i.e. what happens to digital channels if unlimited data becomes not cool?), and possible rejections of mobile in developing markets. Essentially the idea that the 'mobile boom' might come to be seen as parasitic for not just outstripping but preventing capital infrastructure development.

Both of these, as usual, have come out of great conversations, which I'll no doubt do endless damage to by trying to write them up into posts.

# Alex Steer (11/03/2011)


From script to print to ebook and back

347 words | ~2 min

I said here (and again here) that I'd write something about that other great flashpoint in the history of the book, the sixteenth century. In fact, for brevity and interestingness it's hard to top this short piece of film which includes an interview with Elizabeth Eisenstein, the historian who's done most to establish and quantify the 'printing revolution' of the fifteenth and sixteenth centuries. (It's from Eisenstein's book The Printing Revolution in Early Modern Europe that we get one of the most telling estimates of the sheer oomph of early print - that eight million books were produced between the printing press's invention in c1439 and the beginning of the sixteenth century.) So here goes.

If you're really interested in how media revolutions work, read Eisenstein's book. But also read Harold Love's Scribal Publication in Seventeenth-Century England. It describes how, despite the obvious onslaught of print, manuscript production didn't just die off. Instead it became a legacy system serving a small and devoted aristocratic audience who valued its uniqueness, intimacy and elite status. You can feel some of that dynamic if you read, say, Sir Philip Sidney's Astrophel and Stella poems (not printed until after his death), which rely for some of their effects on the assumption that these are handwritten poems designed to be circulated to an intimate readership; or in the prefatory poems to patrons and friends in Edmund Spenser's Faerie Queene, which treat the epic poem as if it's being sent around by hand rather than printed for a general public. It's the same dynamic you might feel now if tempted to buy someone a hardback book as a gift, or send a handwritten note rather than an email.

# Alex Steer (09/03/2011)


Kindles, iPads, and medieval readers

1474 words | ~7 min

Blogging's been a bit derailed by work, but I promised something on the history of the book in the cultural imagination, and in particular about a couple of points when existing ideas about what books mean were transformed.

As I mentioned before, there's been a lot of talk about the factors forcing a rethink of what written matter looks like and how it's used. Books, magazines and various other historically printed matter (dictionaries and encyclopedias, for instance) are shifting online. They have been for some years, but the pace of change has started accelerating as new structural and physical formats for written matter have been developed (both hardware like e-readers and encoding systems like the work of the TEI) together with new distribution and revenue models, from paywalling to The Domino Effect.

What books mean in culture

These are the physical and infrastructural changes. But since the physical form and infrastructural mechanisms of books and publishing have been reasonably stable for so long, the book as an object has built up a huge cultural back-story. The bound, printed volume has a whole network of associations bound up with it. Those associations form part of the reason academics, especially in the arts and social sciences, regard the book-length study as the most definitive kind of work; or why financiers feel the day starts incompletely without a newspaper under the arm; or why so many religions as they are now practised rely on the exchange and circulation of heavy bound paper objects (now a rather easy task in many parts of the world; still difficult and even fatal in others, as in the past). In a very broad sense we are people of the book, and much of our cultural life has been conditioned by printers' economies of scale or the limits of binding glue.

And then, in July 2010, Amazon announced that it had sold 143 Kindle ebooks for every 100 hardcover books in the previous three months, and we realized that digital text might not just lead to the generation of new literary forms (the blog, for example) but eventually to the extinction or extreme alteration of old ones. I still remember the genuine shock that used to come from many people when I would suggest the possibility that there might be no more print dictionaries (at least for UK readerships) within a few years.

Eventually this disruptive shock will generate cultural transformation as new written forms become part of the toolkit with which we think. The blog would have been difficult in the age of print - only the diaries of the famous or notorious were ever published - and the tweet would have been impossible, except as graffiti or marginal notes. Put like that, the scale of the cultural disruption begins to make sense. No wonder dictators and CEOs alike struggle to internalize the speed and reach of the web. Forget streaming video or mobile telephony. Words were never committed to paper so fast.

Western Europe has been in this situation before: what you might call disruptive bibliographical shock. A confluence of factors over the course of the twelfth and thirteenth centuries rearranged our cultural furniture by putting pressure on the status quo of the book, even then one of the most powerful ways of transmitting information and ideas and making culture. I won't go into them in detail, but I'll give a sketch.

Book futures with medieval monks

If you'd been a reasonably learned clergyman thinking about the future of book production in western Europe at the start of the 12th century, you might have fallen victim to the same kind of bubble thinking that sometimes trips us up today. To you, literacy would have meant Latin literacy, the copying and annotation of theological, philosophical and occasionally scientific texts in Latin. The twelfth century was a boom time for that practice of book production. The reconquest of parts of southern Europe from Arabs had allowed Christian Europe sudden access to a large wealth of Arabic texts and a decently-sized bilingual and literate population to help find and translate them into Latin. As well as the vast reserves of Arabic knowledge were many texts originally translated into Arabic from Greek, a language lost to the Western Roman Empire and its successor states. So a huge bulk of ancient and late antique learning, from the Church Fathers to Ptolemy's astronomical work, was suddenly up for translation. You could forgive the Latinists for feeling a bit like the film studios when DVDs replaced VHS, or like publishers no doubt feel now as they rush to reformat their back catalogues for the Kindle.

Historians used to call it the twelfth-century Renaissance, and sometimes they still do.

But drivers of change are complicated things, and one largely unforeseen consequence of the sudden boom time in translation and transmission of Latin texts was a boost to the profile of book-making as a cultural activity. No longer was book production the preserve of monasteries. Literacy and learning were in vogue. We know this from the unseemly competition among royal courts in western Europe to attract as many scribes and literate men as possible. The clear winners were the Anglo-Norman king Henry II and his wife, Eleanor of Aquitaine, who built up impressive networks of scribes, book-makers and authors around them, producing new works as well as copying and studying old ones. In the early years works on science were all the rage, including a surprising number of books on timekeeping.

The boom time of the 12th century wasn't just confined to books, though, and royal courts weren't just centres of learning. Like most seats of power in good times they were centres of fashion, passion and politics as well. They were also, for the first time, overstocked with powerful aristocratic women who had vastly more leisure time than their mothers or grandmothers had enjoyed, in part a consequence of greater political and economic stability in France. They were also far more likely to be able to read - but not in Latin.

Anyone who works in publishing knows a demand shift when it happens, and this was an unprecedented demand shift. The wealthy, leisured populations of the royal courts didn't want to read about timekeeping, intricate theology and maths. They wanted something a bit more Jilly Cooper and a bit less BBC Four.

The shift happened slowly at first. The first signs were the production of more Latin treatises - except these ones weren't on astronomy, but on how lovers should carry on at court. (Notable examples include Andreas Capellanus's On Love and Walter Map's On the Trifles of Courtiers.) Being in Latin and full of clerical humour, these first efforts were probably as much pieces of wry social commentary as they were an attempt to meet changing demand, but they would have managed a bit of both, at least for male Latin-reading audiences.

Romance is in the air

But the big, decisive, disruptive shift was the production of works of literature in Western Europe's vernacular languages, not in Latin, for the first time, sponsored by these powerful royal courts. Designed to meet the demands of a new audience, in terms of content as well as language, it's unsurprising that the catch-all term for all those not-quite-Latin languages (which we still use today) became a byword for racy exciting fiction.

They called them romance. The French still call book-length works of fiction romans (and we call them novels).

It was a kind of writing that didn't exist before, in languages like French and Occitan that had had very little tradition of book-making. Over the next century or so it would spread prolifically, driven by demand, surrounded by moral panics and even some rather clever PR attempts to lend religious instruction some of romance's excitement (probably the main driver behind the equally meteoric rise of vernacular saints' lives between the twelfth and fifteenth centuries, going head-to-head against romance for the hearts and souls of aristocratic readers).

Unlike the present shock to the system, the big drivers of change weren't technological - the form of books didn't change dramatically - but social, economic and linguistic. Still, the change permanently shifted the idea of what books were for, and introduced the idea of fiction in the West. So your holiday reading probably owes something to Henry II, his wife, and their set of well-connected writers.

This has gone on more than long enough, but sometime soon I'll talk about the 16th century, and a series of more technological shocks to the system, mainly print.

# Alex Steer (08/03/2011)


The future of (the idea of) the book

630 words | ~3 min

There's been a 'kaboom' moment in publishing journalism, probably because there's been a 'kaboom' moment in the history of the book. Newsweek offers quite a deft summary of the big factors changing books and publishing, and the changes themselves. They may not all be surprising, but the scale and speed have suddenly reached a decisive rate.

Those with an interest in the history and future of the book (let me point you towards James Bridle and Ben Hammersley, for instance) have been tracking the drivers of change in publishing, and the early expressions of a seismic shift, for several years now, but a lot of that remains under the radar of public attention. Futurists (think strategy consulting rather than crystal ball-gazing here) often use the 'seasonal' classification to think about awareness of change. I like it, I use it quite a bit, so I'll use it here. In essence it's a nice piece of shorthand, and it goes like this:

  1. Spring: Fringe issue. Only picked up on by specialists, scientists, radicals, and similarly esoteric and out-of-the-way sources.
  2. Summer: Specialist issue. Discussed in subject-specific sources such as journals, conferences, blogs for expert readerships.
  3. Autumn: Accelerating issue. Turns up in newspaper articles, popular magazines, blogs, TV shows, etc. as a widely-discussed new thing.
  4. Winter: Mainstream issue. Regularly featured in government documents, corporate strategies, etc.

(Side note: I love that government reports are the benchmark medium for issues as they enter the 'duh, well, obvious' phase of their emergence.)

It feels right now like the challenges to the established forms of the book are moving from Summer to Autumn, and we're starting to take them seriously. Most of the discussion, in pieces like the Newsweek one, is about direct challenges to form, economy and use: what books are, and how we buy and read them. Newsweek's talking heads are also smart enough to know that challenge doesn't equal overthrow when it comes to media forms (see my many rantings on the 'Twitter exists, therefore email is dead' type of fallacy), and that there's no reason to assume that paper books don't have significant vitality left for certain functions.

So let me float another question, one that may still be hovering in Spring. What will the emerging challenges to the form, economy and use of the book do to our idea of the book as a meaningful unit within culture?

If this sounds like a crazy question, there's a good reason for that. The form of the book had a reasonably (though not entirely) easy time of it between about the eighteenth century and the mainstreaming of the web in developed countries about fifteen years ago. It's now taking a bit of a beating, but not for the first time. I don't have time now, and this post is long enough (an idea which raises questions of its own, I know), but over the next few days I'll aim to write one or two more posts which look at a couple more flashpoints in the history of the book - specifically, in Europe in the 12th and the 16th centuries - and offer a way of thinking about how cultures and books make sense of each other.

Meanwhile, I'd love to get a bit of a thread going on the future of the book, one of my favourite topics. Please do comment, trackback, tweet (I'm @alexsteer) or similar if you have thoughts on the bigger implications of the changing nature of books. Thanks!

# Alex Steer (06/02/2011)


The internet, criticism and the culture wars

735 words | ~4 min

Malcolm Gladwell, mid-way through either stating the obvious or missing the point in a New Yorker piece on social media and activism, writes:

We would say that Mao posted that power comes from the barrel of a gun on his Facebook page, or we would say that he blogged about gun barrels on Tumblr—and eventually, as the apostles of new media wrestled with the implications of his comments, the verb would come to completely overcome the noun, the part about the gun would be forgotten, and the big takeaway would be: Whoa. Did you see what Mao just tweeted?

To me the most important word in that paragraph is 'whoa'.

'Whoa' is not a word you put in the mouths of others if you want to portray them as serious and independent-minded thinkers. Nor, let's be honest, is 'apostles'. Speaking of long words, rhetoricians (there's one) used to call this tactic 'prosopopoeia' (there's a second) - speaking in the voice of another, to make a point of your own. Gladwell's point here, I think, is that social media users are essentially shallow and vacuous.

Pause briefly to ponder the vacuities that print manages to bestow upon the world every day without anyone blaming the medium (yes, McLuhan, yes) - and then notice that this derisory 'whoa' speaks volumes about the ongoing vitality of the old debate about 'high' vs 'low' culture.

Only a few days ago, the Observer's Culture section went all 'O Tempora! O Mores!' about the shift in power from professional critics to amateur criticism. Unsurprisingly, social media was roped in for a whacking:

The real threat to cultural authority turns out not to be blogging but social networking... The point isn't that the traditional critics are always wrong and these populists are right, or even that these comments are overwhelmingly negative or invariably take on the critical consensus. More often than not, they aren't and they don't. The point is that authority has migrated from critics to ordinary folks, and there is nothing - not collusion or singleness of purpose or torrents of publicity - that the traditional critics can do about it. They have seen their monopoly usurped by what amounts to a vast technological word-of-mouth of hundreds of millions of people.

Those familiar with the history of debates on the function of criticism, particularly around the rise of English studies in the late 19th/early 20th century, might feel the icy breath of T.S. Eliot ('breathing the eternal message of vanity, fear, and lust', maybe) and others at this point. If you're not familiar, I recommend Chris Baldick's The Social Mission of English Criticism, or virtually anything by Stefan Collini. This is not a new story - mass participation is the death-knell of criticism, as surely as home taping kills music.

There is no doubting which side of the high/mass culture division the internet falls on. What's doubtful is the continued insistence that mass culture has to be low culture. (If you don't think you're prone to this, go on, ask yourself: which is really better, Classics or media studies? If you have an answer off the top of your head, either way, you're prone.) Look all over the web and you will find thoughtful, sustained engagements with the future of media and culture that make many 'function of criticism' arguments look paltry and narrow. For all the dumb, there is plenty of smart - and, whereas much high cultural aesthetics invites you to pay no attention to the man behind the curtain, the best critical theories of mass culture ask you to understand the smart in the context of the dumb. Not just with the aim of appreciation of the smart (as in many aesthetic formalisms), but with the aim of understanding various cultural systems as wholes (or interlocking sets, or whatever - pick your metaphor), and perhaps even of engaging and (dare I say) improving, of helping the smart outnumber the dumb.

It's enough to make you say 'whoa'. And, thanks to various miracles of technology and economy, it's in all of our hands (quite literally; or pockets). Which means it may be time to take criticism seriously as a discipline. Again.

# Alex Steer (03/02/2011)


Media and revolutions: Microscopes and megaphones

333 words | ~2 min

The current uprising in Egypt has prompted a lot of talk about the role of social media in political protest. The most sensible comments have been to the effect that, no, it's not an enabler, but yes, it can be an accelerant.

What about the relationship between 'social' and 'mainstream' media? Here are my thoughts. Take them to bits and see if they work.

At its best, social media provides a microscope. It plays the same role foreign correspondents traditionally play, giving a close-up view of what's happening. Social media provides many correspondents and allows for the capture and rapid sharing of lots of detail.

At its best, mainstream media provides a megaphone. It takes the important stories and gives them scale.

At their best, together, the microscope feeds the megaphone. Take Egypt. Without the microscope, today's news headline would be 'Rival factions battle in the streets of Cairo'. The microscope shows that the pro-Mubarak faction is a rent-a-mob of thugs and cronies. As a result, the megaphone tells a better story. Or should, at least.

Between the two there has to be a filter. The microscope is notoriously bad when it pretends to be a megaphone. Easy me-too functions like retweeting are great for spreading news, but also great for giving isolated observations an undue sense of scale (which is, after all, what microscopes are for). The megaphone is just as bad as a microscope, given the pressures of time and money which hinder true investigative journalism.

The filter is the journalist - at least, what journalists may be becoming. Not the sole unearther of facts and pursuer of leads, but the skillful observer of everything the microscope reveals; the maker of connections; the person who finds the story, and knows how to set it up for the megaphone. If professional news has a future it may be through this analytical function. Rehashing press releases is, after all, little more than retweeting.

# Alex Steer (02/02/2011)


Getting economic history wrong with Google NGrams

267 words | ~1 min

Listen very carefully, I shall say this only once (or more). Google Ngrams is lots of fun, but it is next to useless for analysing the history of language, or of ideas.

I'm talking to you, New York Times Economics Blog. An otherwise sound blog post on the flawed idea of an economic 'new normal' includes this chart, which is used to argue that the idea of the 'new normal' was also bouncing around in the past 'during some major economic shocks'.

Google NGram for 'new normal' since 1900

Right, so what's happened here? Clearly a forty-year period does not constitute a period of economic shock, and everything other than that massive hill of data does not constitute a meaningful change in usage. If you bother to go to Google Books and search for 'new normal' between 1900 and 1940, all becomes clear. You wade through pages and pages of hits describing 'new normal schools/colleges'. These are, for the most part, references to the Normal Schools (teacher training schools) that sprang up across the States in the late nineteenth and early twentieth centuries. Sorry to be a churl, but seriously. If you want to be an economic historian, do your homework properly.
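The homework is easy to sketch. Here's an illustrative Python snippet showing the kind of collocate check that deflates the chart - the phrase counts below are invented for the example, not real Ngram data, and the real exercise would use the actual Google Books results:

```python
# Invented, illustrative counts of phrases beginning "new normal" in
# 1900-1940 Google Books hits. These numbers are made up to show the
# method: check what the hits actually say before trusting the spike.
hits = {
    "new normal school": 820,          # teacher-training institutions
    "new normal college": 310,
    "new normal department": 95,
    "new normal level of prices": 40,  # the genuinely economic sense
}

educational = ("school", "college", "department")
edu_hits = sum(n for phrase, n in hits.items()
               if any(word in phrase for word in educational))
share = edu_hits / sum(hits.values())

print(f"{share:.0%} of the 'new normal' spike is really about Normal Schools")
```

On numbers anything like these, the 'hill of data' is overwhelmingly an artefact of teacher-training terminology, not an early sighting of the economic idea.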

# Alex Steer (14/01/2011)