Alex Steer

Better communication through data / about / archive

You either love it or... that's it

485 words | ~2 min

I seem to be having a lot of conversations with smart people at the moment about brands with well-established problems. Specifically, a problem with a product, service or organisation that gets the kind of persistent 'yes' response that planners can only dream of. You know the kind. Stella's for wife-beaters, Skodas break down, Belgium is boring.

Typically, you can do one of two things with a problem like this. The first is to ignore it, and develop all your brand messages without ever giving the slightest hint that lots of people already have a strong point of view about your brand. Let's call this the 'Stella' option, for obvious reasons.

It doesn't work, obviously. It's fresh wallpaper in a condemned house.

Option two is the surprise tactic. Don't talk about your problem, but find ways in which your brand is the exact opposite and talk about those. Let's call this the 'Visit Dangerous Belgium' option. (As far as I know there hasn't ever been a 'Visit Dangerous Belgium' campaign. But there should be.)

Unlike the 'Stella' option, this can work pretty well, but you've got to work really hard to show that popular perception is wrong (to convince us all that Belgium really is thrilling and dangerous).

So let's talk about option three. The one where you say 'yes, we know'.

This can be hard, especially if you've spent your whole history pursuing option one. But it can be a far more effective way of changing people's conversations about your brand, by taking away the covert character of the old conversations. You may feel like you're selling out your brand managers, but really, look at how well it can work. Marmite said yes, we know half of you hate the taste. Skoda said yes, we know you all think our cars are crap. Nationwide said yes, we know building societies are desperately boring. Now you're listening, let's talk.

People are basically pitiless when it comes to brands, mainly because brands deserve it. They will find and exploit weaknesses. And now they'll do it online, where everyone can hear. Who can blame them, when most brands spend so much time talking about how great they are? Nobody likes that. If you spend all your time alternating between telling people 'Brand X Baked Beans are great' and asking people to 'tell us how much you love Brand X Baked Beans', they will be praying for any chance to knock you down. (If you don't believe me, Google 'Skittles twitter' and see how the colourful candy brand died for your sins.)

I've seen this advice dressed up as 'creating a forum for brand discussions', or 'generating conversations around a movement'. Both those metaphors still imply that the brand remains in control. So I prefer to summarise what's above as 'dealing gracefully with the fact that some people hate you, working out why, and fixing it'.

Catchy, eh?

# Alex Steer (03/08/2010)


Cross my palm with data

483 words | ~2 min

Wired reports on the investment by Google and the CIA in Recorded Future, a predictive analytics tool. (There's a good piece in Fast Company too.)

Here's the showreel: [embedded video]

And here's the description of how it works:

  1. Scour the web
    We continually scan thousands of news publications, blogs, niche sources, trade publications, government web sites, financial databases and more.
  2. Extract, rank and organize
    We extract information from text including entities, events, and the time that these events occur. We also measure momentum for each item in our index, as well as sentiment.
  3. Make it accessible and useful
    You can explore the past, present and predicted future of almost anything. Powerful visualization tools allow you to quickly see temporal patterns, or link networks of related information.

It feels like there are a lot of missing steps in point 2 there, but that's understandable, as I expect there's a lot of proprietary smartness around data extraction, categorisation and analysis, and natural language processing (all of which I'd love to find out more about). The Fast Company article linked above makes the good point that, like a lot of 'predictive' analytics systems, it's more an assembly and connection tool than a predictor, and that 'the only way of really proving that the intel gathered is bona fide is common sense'. To that I'd add a caution about the data that's going in. Without guarantees about how the compilers judge the informative content of input data, no guarantees are possible about the dependability of outputs (as ever, x in, x out).
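
For what it's worth, here's the kind of thing step 2 might involve, reduced to a toy. This is a sketch of my own in Python, using crude regexes for entities and dates and keyword lists for sentiment; Recorded Future's actual methods are proprietary and doubtless far smarter.

```python
import re
from collections import Counter

# Toy keyword lists: stand-ins for whatever proprietary sentiment
# models the real system uses.
POSITIVE = {"growth", "success", "agreement", "breakthrough"}
NEGATIVE = {"crisis", "attack", "collapse", "fraud"}

# A crude date matcher ('12 July 2010', 'July 2010', 'July').
DATE_RE = re.compile(
    r"\b(?:\d{1,2} )?(?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December)(?: \d{4})?\b")

# Runs of capitalised words as candidate entities. Very crude.
ENTITY_RE = re.compile(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b")

def extract(text):
    """Pull candidate entities, event dates and a sentiment score from text."""
    entities = Counter(ENTITY_RE.findall(text))
    dates = DATE_RE.findall(text)
    words = re.findall(r"[a-z]+", text.lower())
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {"entities": entities.most_common(5), "dates": dates,
            "sentiment": sentiment}

print(extract("Acme Corp announced a breakthrough in London on 12 July 2010, "
              "days after a fraud scandal hit the markets."))
```

A real pipeline would swap the regexes for trained language models and the keyword lists for proper sentiment classifiers, but the basic shape (unstructured text in; entities, times and scores out) is presumably similar.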

But it's easy to bash a product you don't understand. Actually, my concern is that the whole thing feels over-branded. The name and the other associated terms ('Temporal Analytics Engine', for one) seem grossly out of step with reality. That's a shame, because it's clearly a fantastically smart system designed and analysed by equally smart people. Isn't that enough without forcing your marketing material to imply some sort of clairvoyance? All the 'news from the future' stuff is a bit off-putting.

# Alex Steer (30/07/2010)


Nee-argh, nee-argh, nzzzt

71 words | ~0 min

Writing the last post made me think: when was the last time you heard a modem?

Amazing how quickly technology changes, leaving a museum of sensory experiences in our memories that is no longer updated. What will be next to go? Will we look back fondly on that death-rattle noise you get when you're standing next to a speaker, just before your phone receives a text message?

Maybe not fondly, actually...

# Alex Steer (29/07/2010)


Haptic things and the post-signal world

417 words | ~2 min

haptic: of or relating to touch.

The last few years have seen a shift towards the haptic in personal computer interface design. For several decades now the main paradigm has been visual-spatial, based on the metaphor of seeing things in space and being able to control their movement and arrangement. Of course, the mouse involved using touch to direct movement, but was not really fully haptic because of the difference in location between the hand performing the action (below your eyeline) and the action performed (on the screen, in your eyeline). The new paradigm is visual-spatial-haptic, to coin a really awkward term. Touch-screen devices create the illusion of direct manipulation. Maybe it isn't even an illusion: you touch the screen, and changes take place beneath your fingers. This is not a new conversation.

But the idea of the haptic, as opposed to the purely visual-spatial, seems to be emerging now in media. Looking at (and cherry-picking, admittedly) some of this year's most awarded communications campaigns - think Nike Chalkbot, Tropicana Sun, KitKat deckchairs, that weird Nokia arrow-on-a-crane thing - the idea of the touchable world as a locus of interest seems to be on the rise.* The best communications, we seem to say, are those which touch the haptic, material world and change it, not those which just configure pixels on a screen or the vibrations of a speaker.

Perhaps the 'post-digital world' we're all beginning to think about now is actually a 'post-signal' world. Perhaps the last fifty or so years under the dispensations of television, radio and personal computers have been about deriving value and entertainment from the projection of electronic signals into our field of attention. Under those dispensations, visual-spatial interfaces with superficial manual manipulation (mice, buttons, switches, dials) were enough. Perhaps the next cycle of our relationship to communications technology will be about deriving value from the physical world and the things in it, with the insistence that interfaces extend our haptic reach (not just our sight and hearing) in ways that feel physically genuine. As telecommunications reach the post-scarcity phase, maybe we'll want our remote activities to feel more like our interactions with the things around us. And maybe the convergence of physical interaction and telecommunication will help us value both more. I hope so.

* I pinched most of these examples from @sarahewalker, who knows and thinks about them far better than I.

# Alex Steer (29/07/2010)


Facebook and the new errors of judgment

587 words | ~3 min

Another day, another Facebook privacy story. Well, sort of. BBC News reports on the publication by security consultant Ron Bowes of a data file containing the publicly-available details of 100 million Facebook users.

Predictably, there's anger and concern, much of it directed at Facebook. But I'm not sure the company itself is entirely to blame. I'm starting to think it's a generational thing among internet users.

This is, after all, publicly available information. Yes, Facebook's privacy settings are convoluted and off-putting, but it is possible to lock down virtually everything with some determination and a small amount of time. Or - and I know nobody wants to hear this - you can simply not use Facebook if you're deeply concerned about putting personal data online. In much the same way that, back in the olden days when I learned to use the interwebs, you might consider not having a web page.

So yes, there's still cause to grumble at Facebook for some of its points of view on privacy; but there's a more general problem. It's that a lot of social network users don't really know how computers work any more.

Specifically, they don't know how scripting works: the ease and speed with which a publicly-accessible site can be read by a script, and structured data extracted and written to a file. Once written, the script doesn't need supervision or amendment (as long as it's not buggy). It just runs and runs.
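
To make that concrete, here's a toy version of the kind of script I mean, written in Python. The URL and the markup it matches are invented for illustration (Bowes's actual harvester will have been more involved), but the principle holds: a dozen lines of code, and no further human effort required.

```python
import csv
import re
import time
import urllib.request

# Hypothetical public directory, standing in for any site that lists
# profiles at predictable URLs. Not a real endpoint.
BASE_URL = "http://example.com/directory?page={}"
PROFILE_RE = re.compile(r'<a class="profile" href="(/people/[^"]+)">([^<]+)</a>')

with open("profiles.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["name", "profile_path"])
    for page in range(1, 101):  # walk the listing pages
        html = urllib.request.urlopen(BASE_URL.format(page)).read().decode("utf-8")
        for path, name in PROFILE_RE.findall(html):  # extract structured fields
            writer.writerow([name, path])
        time.sleep(1)  # politely throttled, and it still never gets tired
```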

This breaks through what I call the 'effort threshold' for privacy. We tend to assume that there are some things people won't do to us because the cost in effort outweighs the possible benefits. For example, suppose someone who lives in south-west London doesn't like me and wants to punch me. That's a problem if I'm also in south-west London, and I might have a word with the local police. But I live in South Africa, so I assume that the effort required to give me a black eye is so great that the risk of him flying out is pretty low. I'm beyond the effort threshold.

The problem comes when we get the effort threshold wrong. That's easy to do online when you don't understand how tasks are performed by computers (and by people who know how to use them).

We're now well beyond the point at which most internet users had a pretty decent knowledge of computing generally. We can complain about this ('it's always September...'), or we can accept that it's a symptom of a maturing technology. It's probably the first time a media technology has had a cost of participation so low that this kind of 'mass error' has been a problem.

So by all means let's argue about privacy versus openness, and about settings on social networks (and whatever comes after). But let's also talk about what new kinds of knowledge are needed to avoid mass error with the technologies that increasingly define how the world works. That's a much bigger conversation. We're used to talking about a generation that's growing up digital as if that's a benefit, but it can also make that generation less well equipped to function than the members of the previous generation who had to learn their way into the future, one grep at a time.

(And if you don't get the pun in the previous sentence, ask a geek.)

# Alex Steer (29/07/2010)


Good, more advice on social media

260 words | ~1 min

Altimeter's paper on The 8 Success Criteria for Facebook Page Marketing makes for an interesting read.

Here are Altimeter's eight criteria:

  1. Set community expectations
  2. Provide cohesive branding
  3. Be up to date
  4. Live authenticity
  5. Participate in dialog
  6. Enable peer-to-peer interactions
  7. Foster advocacy
  8. Solicit a call to action

And here's their heuristic analysis of how different categories are performing.

[Chart: Altimeter's heuristic analysis of category performance]

Retail and consumer goods are doing best; regulated industries and luxury hotels are doing worst.

But hang on. If you had to rank these industries by the extent to which they needed to be good at meeting the eight criteria above, wouldn't you generate pretty much the ranking above?

The paper seems to assume that brands in all these industries are playing the same game in respect of social media: that retailers are in the Premier League and luxury hotels are in the Vauxhall Conference. But that assumes that social media is an equally useful and viable channel for them all, and that these objectives are equally necessary and desirable.

I had a great conversation last night about the Starbucks digital strategy, in which this thought came up: maybe it's time for everyone to stop writing opinion pieces on social media marketing. Maybe it's just time to concentrate on doing it, doing it well, and adapting it to specific business models. You know, like we do with TV and newspapers.

Reading this paper makes me think we had a point.

# Alex Steer (28/07/2010)


Monks and hypertext

943 words | ~5 min

The BBC News website's Tech Brief pointed me towards this post on Religion Dispatches, which revisits the argument for the similarity between hypertext and the literary practices of medieval monks:

The medieval books we admire so much today are distinguished by the remarkable visual images, in the body of a text and in the margins, that scholars have frequently compared to hypertexted images on internet “pages.” The function of these images in illuminated manuscripts has no small bearing on the hypertext analogy. These “miniatures”... did not generally function as illustrations of something in the written text, but in reference to something beyond it. The patron of the volume might be shown receiving the completed book or supervising its writing. Or, a scene related to a saint might accompany a biblical text read on that saint’s day in the liturgical calendar without otherwise having anything to do with the scripture passage.

My academic training was in medieval English bibliography, so it's always good to see medievalists clamouring for a bit of relevance. However, since medieval book-making and modern internet cultures are pretty much my specialist subjects, I have to disagree slightly with the claim.

It's true that glossing, whether pictorial or verbal, was fundamental to monastic text production. In contrast with the modern tendency to privilege originality of composition, the mainstream of scholarly thought in the medieval west saw thoughtful reproduction (copying) as an important end in itself. In part this was due to the cost in time and materials of reproducing texts. Don't think of illuminated manuscripts here, necessarily. Most manuscripts are much simpler affairs, but even so required patience, skill and financial outlay. The best work (at least until about the late fourteenth century) was done almost exclusively in monasteries, a fact which had a major influence on both the texts which were copied and the discipline of copying itself. The best scribes were not just photocopiers with human faces, but thoughtful textual critics.

As you'd expect when most copyists were monks, textual criticism owed a lot of its theoretical foundations to theology. And, in fact, with its roots in the Late Antique period, the theology of the monasteries had been intensely aware of its dependence on textual criticism and the status of its faith as a religion of the book. Some branches of theology arose explicitly from the need to reconcile the interpretative difficulties of the Bible, and particularly the complicated relationship of the Old Testament (thought of as shared with the Jews) and the New. This need - to show that the New Testament represented a fulfilment of the Old, superseding but not obviating the Jewish scriptures - is at the root of typology, the discipline of Christian reading popularised by the work of Origen, which saw Old Testament figures and events as prefigurations of New Testament ones.

With so much of Christian textual theology dependent on analogy, it is no surprise that monastic copyists took pains to discern and point out the analogies, figurations and lessons in the texts they copied. Typological thinking was not reserved for the relationship between testaments, either. It pervaded monastic understanding of the past, present and future too. The theology of time saw it as a linear progression marked with cycles of return and repetition. This was true in the short term - the cycle of feasts and fasts which marked the progress of the liturgical year - and in the long term. So there was a religious imperative to teach people to draw out the parallels between past and present as a way of negotiating the perils of the religious life into the future.

Students of theology were taught to find the typological significance of all sorts of events. They were taught, in other words, to make links between texts.

But does this mean medieval monks were the original hypertext makers? My answer's still no. This may sound obvious, but the defining feature of hypertext is the ability to move between texts. What matters is the immediacy of access to another resource.

Unsurprisingly, this is lacking in analogical glossing. And it's not just a technological barrier that separates a gloss from a hypertext. The glosses acted as reminders, not just as references. Unlike hyperlinks, they did not necessarily point to 'something else' that should be chased up as soon as convenient. Instead, they referenced ideas and images that were already familiar. Whereas hyperlinks provide a series of optional next steps to new things, analogical glosses provided a whole frame of reference for thinking about something new.

That said, copyists were masters of intertext and metatext - respectively the referencing of other resources and the creation of information structures capable of organising resources by various kinds of relation. Intertextual and metatextual thinking are fundamental to hypertext, and more widely to search engines and the semantic web.

They're not a rediscovery, either. Intertextual and metatextual thinking underpins the arrangement of libraries, dictionaries, thesauruses, the indexes in books, footnotes, object-oriented programming, file dependencies, and most of the other innovations that make managing information bearable. This tradition is long, and more or less unbroken. Medieval monks weren't prefigurations of the internet. They wouldn't even have seen themselves as the originators of this kind of information organisation. But they were there, way upstream, pointing readers in interesting and useful directions.

# Alex Steer (28/07/2010)


If Facebook were a country...

170 words | ~1 min

Of all the social media statistics, this one is the most loved. If Facebook were a country, it would be the third largest in the world: with 500 million members, it trails only China and India. But what kind of a country would it be?

A country governed by a small unelected elite.

A country which conducts routine intelligence on its populace.

A country in which the main sectors of industry are farming and organised crime.

A country without effective separation of powers.

A country which makes it hard to leave.

A country where the median age is 26.

So, if Facebook were a country... it would be Burma.

Great.

# Alex Steer (27/07/2010)


Predictive crime analytics, Victorian style

189 words | ~1 min

PSFK (along with others) reports on the adoption by two British police forces of IBM's CRUSH system. CRUSH (Criminal Reduction Utilising Statistical History) 'analyzes parameters like crime records, offender profiles and intelligence briefings to look for patterns and identify potential areas where a crime may occur'. It's interesting stuff.
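
The pattern-spotting itself needn't be exotic, either. Here's a deliberately crude sketch in Python of one basic idea, binning past incidents into grid cells and flagging the busiest as likely trouble spots. Everything in it is my own assumption; CRUSH will be doing far more, with far richer inputs.

```python
import math
from collections import Counter

# Toy incident log: (latitude, longitude) of past crimes, invented for illustration.
incidents = [
    (51.5014, -0.0920), (51.5017, -0.0925), (51.5015, -0.0921),
    (51.5102, -0.0731), (51.5016, -0.0923), (51.5105, -0.0729),
]

CELL = 0.01  # grid resolution in degrees (roughly 1km)

def cell(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# Count historical incidents per cell...
counts = Counter(cell(lat, lon) for lat, lon in incidents)

# ...and flag the busiest cells as predicted hotspots.
for (gy, gx), n in counts.most_common(2):
    print(f"hotspot near ({gy * CELL:.2f}, {gx * CELL:.2f}): {n} past incidents")
```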

This kind of predictive analytic cartography isn't new, though. Charles Booth showed a similar range of interests when making his London Poverty Map in the late 1890s. Booth was part of a generation of analytically-minded social reformers who gave Britain an insight into the geography of deprivation. His colour-coded classification of London streets shows that his interest extended beyond mapping poverty to understanding its effects. He describes the black-shaded areas as 'Lowest class. Vicious, semi-criminal'.

The image below shows the area round Borough High Street, where I used to work.

[Image: section of Booth's London poverty map]

# Alex Steer (27/07/2010)


Your friends are round the corner reading this post

685 words | ~3 min

Are location-enabled social networks a golden opportunity for marketers? Brian Solis says yes, by virtue of engagement potential; Forrester says no, by virtue of demography. So I'll say not yet, by virtue of network effects.

In a nutshell, my argument is this. For now, I'm not sure anyone's found the extra benefit that comes from marketing in an environment that combines geolocation and social network effects (as opposed to one or the other). In the future, marketers may, but I suspect they'll concentrate their efforts on already-massive social networks (e.g. Facebook) and sharing platforms (e.g. Twitter) as those add geolocation functions.

A couple of months back someone was poking round in the code that underlies the mobile version of Facebook, and noticed they'd built in support for location-based features. Facebook finally went public with this in late June, though so far not much has been revealed.

Twitter added a geo-location tagging option to tweets in November 2009, and last month extended this feature under the name Twitter Places, making it easier for mobile Twitter users to 'tag' the location they're tweeting from.

So far one of the big drivers of this behaviour (by social media channels and users alike) has been technology – geolocation is pretty straightforward on many smartphones, still novel enough to be interesting, and increasingly standard. Here in South Africa, more than 10% of phones sold this year had GPS built in. (I'd love to tell you the source for that, but I've lost it. Sorry.)

It still needs to be genuinely useful, though, and happily we're seeing plenty of geolocation applications that are. We're moving past the novelty-value stage.

However, with a few notable exceptions, like the dubiously brilliant 'Please Rob Me' site, the combination of geolocation and social networks doesn't yet offer much more, from a utility point of view, than you can achieve with an iPhone running Google Maps. Most of the best location-enabled apps make no use of social network effects, even if they make use of social sharing effects like tagging and notation.

So far geolocation + social networking hasn't meant business, aside from a few 'look, we're on this thing' promotions (e.g. Starbucks' Foursquare promotion). Foursquare's just hit two million members, but it's still functionally pretty niche, and others (e.g. Gowalla, Loopt) are way behind. No-one really knows what they're for, except fun.

Foursquare gets most of its inbound referred traffic from Facebook, which is one of several reasons for suspecting that it may get hit very hard by the arrival of Facebook location features.

That said, so far brand activity on Foursquare and the like has been very promotional (come here, buy stuff, get discounts, etc.). Compared to a lot of 'fan engagement' activity on Facebook, then, it's actually pretty measurable (for brands) and pretty useful (for consumers). So maybe geolocation will become a way for consumers to get better value out of brands in social media - effectively, by stalking them for deals.

I'm not sure I'd be rushing to tell any brand to get on Foursquare, though I'd certainly encourage brands who are interested in (or insistent on) 'doing social media' to see whether location features might provide a way of adding some utility. There's an obvious applicability of geolocation for retailers ('we're nearby') and FMCG ('available here'). I'm less sure how much value is added by social network effects ('5 people you know visited this bar last week').

I'm happy to be wrong, though. Anyone got any great examples of geolocation and social networking making themselves useful together?

# Alex Steer (27/07/2010)