Alex Steer

Better communication through data / about / archive

The need for speed in AI ethics

1219 words | ~6 min

A few weeks back LBB asked me and a few others for our views on the open letter calling for a pause to large-scale AI research. This is what I said:

Sometimes you need old wisdom to think about new things. In the eleventh century, King Cnut set his throne on the sea shore and commanded the tide to stop. Spoiler: it did not. He was demonstrating the futility of believing you can stop the unstoppable. It’s a good, if unusual, starting point for thinking about how to take ethical actions in the AI era.

There are good reasons to be suspicious of those AI enthusiasts who argue that there should be no checks and balances on the development of this new technology. This is the ‘move fast and break things’ mentality of Silicon Valley, hyperscaled to a scenario where the speed of movement is incredible, and the scale of breakage potentially immense. We urgently need applied AI ethics, and this should not be left to technology companies (many of whom have laid off their AI ethicists recently).

But I have little sympathy for this open letter. Demanding a halt to the development of new technology, and using crude scaremongering language to do so, is not a credible ethical position. We need AI ethics that can deal with the world as it is and as it will be. When change accelerates, that matters even more.

As King Cnut knew, the tide won’t stop because you want it to.

Technology ethics, which absolutely should not be left to technologists (of which I am sort of one), is fascinating because technology often proceeds in leaps, not steps. This means ethical frameworks may not exist for decisions about the implications of new technologies, or it may not be obvious which frameworks to apply. Often these implications arise from the intersection of capability and speed, and this is true of many of the ethical dilemmas arising from machine learning and AI at the moment. In much the same way that the ethics of the right to bear arms, as defined in the US Bill of Rights, were judged on the capability of a gun but not the speed of a modern automatic weapon, so the ethics of AI are founded on beliefs about the capability of prediction, but not the speed of modern cloud computing.

I've written before about the impact of computational speed on consent when it comes to data disclosure. Telling a random person that you like Hello Kitty has a low consent barrier because of the presumption that there is limited damage that person can do with that information. Telling Facebook you like Hello Kitty a few years ago should have had a high consent barrier because Cambridge Analytica used that information, together with lots of your other profile data, and that of millions of other people, to make predictions about your personality type and voting intentions. But almost nobody could have imagined that this was how the information was being used. Some of the ethical concerns around AI are an extrapolation of this: they expose the fact that some ethical frameworks are founded on the presumption that predictions are slow, broad and inaccurate. When predictions become fast, specific and precise, new forms of discrimination become possible in fields like media, advertising, insurance, recruitment, commerce, etc.

The ethics of decisioning and recommendation have, at least, seen some work done on them over the last decade or so, as the direction of travel has been reasonably clear. We have also seen specific ethical actions being taken, not least the proliferation of privacy policies designed (some well, some not) to curb the creation of consent-free identity graphs and unlimited trading of personal data. (Less attention has been paid to discrimination itself than to data trading, admittedly, indicating that privacy theory has been a more dominant force than theories about the ethics of prediction; hence why a single company collecting vast amounts of predictive data about a person is less frowned-upon than many companies collaborating to do the same.)

Generative AI presents a whole new set of challenges because, it turns out, computers have suddenly got very good at predicting not only the next best action but the next best pixel or bit. Methods like diffusion models have got very good, very fast, at learning how to output new images, videos, sounds etc that correspond to novel inputs (e.g. 'Abraham Lincoln playing a Wurlitzer') based on huge amounts of training data. Again, a lot of the ethical frameworks we have for these things, including those encoded in IP law, labour law, etc., have not previously been tested against such capability and such speed. This opens up whole new problematic areas: for example, whether the copyright holders of training data should have any ownership over generated outputs, or any right to prevent their work from being used as input; or whether people whose jobs involve tasks that generative AI can perform should have any protection from being replaced.

Hence my point, I guess. Technology ethics frameworks need to be designed to account for the current and potential future speed of technology, because the speed of technology is such a contributing factor to ethical decisions. Speed (or at least, speed-for-a-given-cost) determines scale and breadth of applications. Being recognised by your local policeman as you walk down the street once a week, and being continually tagged by face-recognition technology hundreds of times a day along with all your fellow citizens, are two variations of the same capability, but the vastly lower speed-for-cost of recognising each extra person using facial recognition AI may lead to wildly different outcomes. Hence the futility of banning research and development until ethical questions are worked out: doing so requires using the law to make rulings on things for which no ethical framework exists (saying, effectively, 'stop doing this in case we turn out to think it's bad'), and then asking ethicists and legislators to speculate about the likely speed increases of technology capabilities, and to 'sign off' on the resumption of R&D. Since speed gains for many technologies, including AI, seem to follow a power law, the consequences of getting these speculations even slightly wrong can be enormous. It's far better in these cases (I think) to make ethical decisions based on extreme possibilities rather than spend too much time trying to work out which are the likely ones. Imagine, for example, that AI becomes able to make incredibly accurate predictions about people, things and networks, faster and more accurately than people can. How should we live, and what should we value, in a world that looks like that?

That kind of ethical decision-making - using extremes, not just likelihood estimates - is far more productive and interesting, and allows us to make fundamental ethical decisions in advance more often. It's why Minority Report is such a fascinating story. It poses the ethical question: in a world where you have a system that can predict crimes, is it right to arrest people before they commit them? The narrative drama hinges on the question: what if the system is wrong? But the really interesting ethical question is: What if it's not?

# Alex Steer (27/05/2023)


Language models, truth and logic

1404 words | ~7 min

Much has been made of large language models and their struggle with truth. Specifically, LLMs like ChatGPT have a tendency to turn out statements that are plausible but demonstrably false. Some of these are logical (eg incorrect calculations or conversions), others ontological (eg making up academic references).

This probably won't matter much for very long. Computers are already good at logic and calculation (e.g. working out what 9 x 5 is) and search (e.g. finding articles about Norwegian beekeeping). Things that are hard for an LLM can be done easily by another type of algorithm. These are such fundamental computing tasks that most of us barely think about them any more. Assume that at some point fairly soon we'll see progress in the development of 'supervisory' systems that can identify what kind of computation is being asked for, and triage accordingly.
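To make the 'supervisory' idea concrete, here is a purely speculative sketch, not a description of any real system: the `route_query` function and its three categories are invented for illustration, and a real router would itself be a trained classifier rather than a handful of rules.

```python
import re

def route_query(query: str) -> str:
    """Toy 'supervisory' triage: guess which kind of system should handle a query.

    Keyword/regex rules stand in for what would really be a learned classifier;
    the point is simply sending arithmetic to a calculator, factual lookups to
    search, and everything else to a language model.
    """
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)x×]+", query.strip()):
        return "calculator"      # e.g. "9 x 5" -> exact arithmetic
    if query.lower().startswith(("find", "search for", "look up")):
        return "search"          # e.g. "find articles about Norwegian beekeeping"
    return "language_model"      # open-ended generation, summarisation, etc.

for q in ["9 x 5", "find articles about Norwegian beekeeping",
          "write a limerick about beekeeping"]:
    print(q, "->", route_query(q))
```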

Despite crowing from some corners, and legitimate concern from others (e.g. those worried about the spread of algorithmic misinformation), the flaws are interesting because they remind us what LLMs are optimised for. Beyond the usual constraints of speed and energy use, logical algorithms are optimised for accuracy; search algorithms are optimised for completeness then relevance. LLMs are optimised for plausibility, based on their resemblance to things written by people.

I think that may explain what I'd call the jittery enthusiasm towards them in professional services (e.g. marketing, consulting, law, PR). Go onto LinkedIn and you can't move for breathless commentary about the latest thing that ChatGPT has managed to do, much of it written by people who are paid to do that thing in real life. These sudden outpourings of just-in-time thought leadership, of human beings writing about machines that can write, have a distinctly 'I for one welcome our new insect overlords' feel to them. It's remarkable how many of them take the view that, actually, one shouldn't be scared of these innovations, because they'll help us work smarter. I would suggest that so many people suddenly shouting this out loud to strangers on the internet is not a symptom of mass calm.

I think LLMs represent such an affront to the professional services because those industries have previously treated plausibility as a moat against automation. Fifty years or so ago, computation began to bite into large sectors of the service economy. Many tasks like filing, document retrieval, calculation, measurement, bookkeeping and so on turned out to be trivially easy for computers to do. It bears remembering that, in the older sense of the word, a 'computer' was a person who did calculations for a living.

The professional services responded in two ways. First, of course, by embracing computerisation and automation of these kinds of tasks. (The 2021 US Census occupation data lists vast numbers of people who work with computers, but no longer any who are computers.) But they also responded by changing what they valued. They progressively but insistently made the case that the bits of their work that could be done by machines were not, really, the bits that truly mattered. (The process of doing this also marginalised and demeaned the people who did that work as it was progressively computerised - see Jennifer S Light’s paper ‘When computers were women’ for a survey of this.)

Take, for example, advertising media planning, a subject the maths of which are close to my heart. The foundations of this discipline are not hard to describe: they consist of identifying where to place advertisements (or other branded assets, content, etc) to maximise the attention received from a defined group of people, usually in defined circumstances (eg ‘when they are searching for a hammer’, ‘before next Wednesday’); and to do this for the best possible price for a given amount of attention and commercial return. As an outsider you would imagine that most of this consists of matters of fact, things that can be established one way or the other with sufficient effort. There may not be one single best way of delivering a given media budget against specified objectives, but there will certainly be one or more best ways. While any media plan is a prediction based on past results, it is a testable, measurable prediction; in other words, it is computable. Since $820 billion was spent worldwide on paid media last year, you would be right to imagine that even small improvements in these computations could be hugely commercially valuable.

Yet this represents a tiny fraction of what the media services industry talks about, and what it values. When media professionals talk to themselves and to their clients, more time and effort is spent on innovation, creative uses of media, consumer insights, future trends, and other things that tend to be topics of intelligent discussion but not matters of fact. The tendency of media procurement processes to dwell on these areas suggests that clients buying media services share the belief that the computable parts are not the important parts. Now, this might all be entirely reasonable if the fundamental computation questions mentioned above were solved problems with generally-agreed solutions, and the greatest source of differentiation between media organisations were their points of view on these more speculative and forward-looking matters. However, this is not true. Good analysis and optimisation routinely unlocks double-digit improvements in media delivery and return even for large advertisers.

Measurement and analysis are not everything; they are often hard; and ease of measurement can too often be mistaken for importance. But in professions whose main role is to advise businesses how to spend their money for growth, the matters of fact are too often systematically downplayed by both the buyers and the sellers of these services. Despite the enthusiasm for various exotic data techniques, the basic computable facts get too little attention. I suspect this is because people in many professional services industries have conditioned themselves to believe that the bits that are done by humans are the important bits... and many of the most human tasks in these industries are those that involve being plausible and convincing in the absence of matters of fact.

Being plausible, interesting and convincing are important, of course, when decisions need to be made and actions taken under conditions of uncertainty. We have to be able to weigh evidence, make judgement calls, build consensus, encourage and inspire others, and so on. But these are supplementary drivers of value in many professional services industries. When planning a national advertising campaign to sell dog food, the most important thing your media planner can do is not give you a point of view on emerging trends in immersive gaming; it is to tell you where, when and how to get the attention of the greatest number of eligible dog owners for your budget. Of course, a point of view on immersive gaming may help you do this better - but again, this is a testable prediction and requires an understanding of the existing facts. We tend not to like to dwell on this, because it reminds us that in many of the core value-driving parts of the professional services, computers have long since proved to be faster and more accurate than people.

And this is why, I think, generative AI represents such an affront, even if we’re all pretending it doesn’t. We have finally programmed computers to sound plausible: to come up with cogent-sounding rationales and summaries on the fly; even to produce writing and imagery that looks like it took original thought and care. In short, all the bits that we told ourselves were the important parts because they couldn’t be done by mere counting and sorting machines. Well, now many of them can - for now under supervision, perhaps soon without it. The parts that looked entirely safe turn out not to be at all.

The people claiming to be unconcerned have all the credibility of a horse salesman laughing at a Model T Ford. The expansion of AI will transform professional services the same way that the expansion of automated calculation and search did, and probably more so. We all have work to do, and we owe our colleagues and clients better than to brush it off.

# Alex Steer (05/02/2023)


‘Brand’ vs ‘activation’ advertising: a matter of timing

511 words | ~3 min

Reposted from LinkedIn

This by Mark Ritson is a good summary of, and thoughtful build upon, the ‘long vs short’ work of Les Binet and Peter Field.

It makes the point that communications that build brand knowledge, memorability and perceptions, rather than merely using them to prompt direct action, rapidly become more effective when done well. They are not so much ‘long-term’ as ‘sustained’.

But, to build a bit further: the piece reminded me of a tendency I see a lot when this topic is discussed: spending too much time thinking about the advertisements and the media formats, and too little thinking about the people encountering them.

One of the biggest choices we make is whether to talk to people who are likely to buy soon, or not. (‘Soon’ varies by category - minutes for milk, weeks for cars.)

Direct response communications (which now include a huge variety of digital media, search, and CRM) are immediately effective and valuable because they let us reach people who are about to buy and prompt them with actions we want them to take. Done well, this capitalises on existing brand associations to protect existing demand, and switch demand away from competitors. This can be done among known existing users (eg through CRM) or among intent non-users (eg through search). It’s an art in its own right and shouldn’t be undervalued or demeaned.

What we imprecisely call ‘brand’ communications are those largely shown to people who are not likely to buy soon. Is this a sensible thing to do? Yes, if it cost-effectively creates the associations mentioned above, that we capitalise on when prompting response. If selling a $10 product, it is better to spend $3 creating demand this month and $2 converting it next month than to spend $6 driving a response this month in the absence of any prior knowledge or association. Not for any great philosophical reasons, but because you get an extra $1 of profit. The only reason to do the opposite is if you’re in a hurry. ‘Short-term’ is an attribute best applied to marketers rather than their ads.
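The arithmetic behind that $10 example, spelled out as a minimal sketch using only the numbers in the paragraph above:

```python
price = 10

# Option A: spend $3 creating demand this month, then $2 converting it next month
profit_brand_then_activation = price - (3 + 2)   # = 5

# Option B: spend $6 driving an immediate response with no prior association
profit_activation_only = price - 6               # = 4

print(profit_brand_then_activation - profit_activation_only)  # the extra $1 of profit
```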

If we think of ‘brand’ ads as ‘ads shown to people unlikely to buy soon’, we see they are not fundamentally different in kind. You could show an ad saying ‘buy a Jeep’ to anyone… but you might expect better results from some people than others. So ‘brand’ ads tend to put something in your mind that will stay there until you are next ready to buy. One of those things is familiarity with a brand’s identity, products and services. An ad that says ‘buy some Spongles’ is unlikely to have much impact even on the most determined immediate buyer… whereas ‘buy some Pringles’ may. The other is some set of positive associations.

‘Brand’ ads also tend to benefit from the happy accident of being seen by imminent buyers as well, and by current brand users as well as non-users. So they can stimulate response among the intent, and prompt repeat buying… hence the immediate sales effect.

# Alex Steer (05/02/2023)


No, you shouldn’t cut your marketing budget to help with the cost of living crisis

476 words | ~2 min

Marketers, like all of us, are right to worry about the rising cost of living, and want to help. But suggestions that brands should cut advertising budgets as a way of lightening the load on their consumers are misguided.

I get it - nothing seems quite as self-serving as a marketer telling you to keep marketing during hard economic times. So let’s look at the numbers. (Yes, I need to add the sources for these.)

Unsurprisingly, businesses invest in marketing because the returns are greater than the outlay. The returns are what pay salaries (and capital costs, shareholder dividends, etc). In general, money spent on marketing leads to more money coming back in.

Across all categories, businesses spend around 3% of their revenues each year on paid marketing promotion through media (aka advertising). And each year, about 12% of sales revenue is driven by the short-term effect of this advertising pressure. In general - across categories, brands and markets - for every $1 a business spends on advertising, it gets $4 back on an annualised basis.

Across all industries, c30% of revenue goes on staff costs. In other words, for every $1 spent on marketing, about $1.20 of the $4 of revenue returned goes on staff salaries.

So on average, if you cut your advertising budget by $1 million, you will need to cut your staff costs by c$1.2 million over the next year. In other words, based on median salaries and staff cost overhead ratios, if you are a typical business in the US, for every $1 million reduction in advertising spend, you will need to lay off 17 people over the next year. (Yes, this is an average. Stronger brands which can sustain demand will be less affected; newer or weaker brands will be more affected.)
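Spelling that chain of averages out as a sketch using the figures above; the roughly $70k of fully loaded cost per employee is the assumption implied by the 17-person figure, not a quoted source:

```python
ad_cut = 1_000_000          # reduction in annual advertising spend ($)
short_term_roi = 4          # ~$4 of annual revenue per $1 of advertising (12% / 3%)
staff_cost_share = 0.30     # ~30% of revenue goes on staff costs
cost_per_employee = 70_000  # assumed fully loaded salary + overhead per person ($)

lost_revenue = ad_cut * short_term_roi                   # ~$4.0m of revenue at risk
staff_cost_reduction = lost_revenue * staff_cost_share   # ~$1.2m of staff cost to remove
layoffs = staff_cost_reduction / cost_per_employee       # ~17 people

print(round(staff_cost_reduction), round(layoffs))
```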

What about the wider effect on consumers? The argument here is that by cutting marketing you will suppress aggregate demand, helping consumers save money.

There’s some evidence for this. Based on econometric studies from the EU and UK, around 4% of annual GDP is due to the effect of advertising on consumption. Advertising does stimulate aggregate demand (it’s why we do it). But it also creates jobs. Around 2-3% of employment is due to the same demand-creation effects of advertising. So by removing advertising spend from the economy you are increasing the likelihood of people being made unemployed.

Taking the UK as an example, annual advertising spend is c.£32 billion, and there are c.32 million people in employment. That suggests UK adspend supports around 800,000 jobs (2.5% of the workforce). So for every £1 million of adspend cut, we can expect 25 people to lose their job.
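The same back-of-envelope logic for the UK numbers, as a sketch; all figures are the rounded ones quoted above:

```python
uk_adspend = 32_000_000_000   # c.£32bn annual advertising spend
uk_employment = 32_000_000    # c.32m people in employment
share_supported = 0.025       # ~2.5% of employment attributed to advertising's demand effect

jobs_supported = uk_employment * share_supported                   # ~800,000 jobs
jobs_per_million_cut = jobs_supported / (uk_adspend / 1_000_000)   # ~25 jobs per £1m cut

print(int(jobs_supported), round(jobs_per_million_cut))
```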

Advertising creates and sustains jobs, makes money flow in the economy through regular buying and selling of goods and services, and subsidises media and entertainment services. None of these are things we should want to reduce during a period of economic uncertainty.

# Alex Steer (17/09/2022)


Ethics and opportunity in a world without third-party cookies

919 words | ~5 min

I wrote this back in January 2021 for The Drum, and it was recently (unexpectedly) commended by the Atticus Awards, so pasting it here.

--

In January 2022 [scratch that, late 2023, late 2024 maybe...], as most marketers know, big changes will be in the air. Google Chrome, the most popular web browser in the world, will no longer allow tracking via third-party cookies (probably). This has sent brands, marketers, and advertisers scrambling, as third-party cookies have become a critical part of how online advertising works. Words like “walled gardens,” “cookieless tracking,” and “customer data platforms” have become common. But before rushing into replacement systems, perhaps the industry should stop and reflect on what got us here, what it means, and how we can shape a better future.

How we got here

For background, cookies are nothing more than small pieces of data that websites and other digital properties put on people’s browsers and devices. The industry has been using them for the last 25 years or so to track and remember user behavior. In practical terms, they give us a way to create memory and persistence — and distinguish between people.

As brand marketers, we’ve always felt that cookies are a good thing. They help us to be more relevant, less interruptive, and more helpful to consumers. Unfortunately, there’s another side to this. In order to create that relevance, we’ve also had to build a massive data ecosystem and technology infrastructure to connect brands and consumers. As a result, data about individuals is shared billions of times a day, not just with advertisers but with hundreds of thousands of intermediaries.

To give you an example of just how extensive that sharing is, whenever you log in to CNN, in the first 16 seconds your data is shared 950 times. Of course, it is used to create communications and media that are more relevant and valuable to you, but the extent of digital data collection has become so vast that many consumers and regulators feel the value exchange between users and data collectors has become imbalanced.

How we use data

The problem is not merely how much of our data has been shared, but also how it is being used — without our knowing it. Most people have a mistaken tendency to think of data collection as a form of memory. It’s much more than that.

When you share data with a brand, it is not only remembering you but also using that data to make predictions about other people. Those predictions are typically about two things: the next best action and the next best person. The next best action means deciding the best offer, opportunity, or action to recommend to an individual. The next best person means finding individuals who will likely respond well to messaging based on their similarity to existing customers.
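To make 'next best person' concrete, here is a minimal sketch of the kind of similarity scoring involved. The feature vectors, names and centroid approach are illustrative assumptions; real systems use far richer data and trained models:

```python
import numpy as np

# Hypothetical behavioural feature vectors (e.g. category interests, scaled 0-1)
existing_customers = np.array([[0.9, 0.1, 0.8],
                               [0.8, 0.2, 0.7]])
prospects = {"anna": np.array([0.85, 0.15, 0.75]),   # behaves like existing customers
             "ben":  np.array([0.05, 0.90, 0.10])}   # does not

centroid = existing_customers.mean(axis=0)

def lookalike_score(v, c=centroid):
    # Cosine similarity to the average existing customer: higher = better prospect
    return float(v @ c / (np.linalg.norm(v) * np.linalg.norm(c)))

for name, vec in prospects.items():
    print(name, round(lookalike_score(vec), 3))
```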

This gets into some uncomfortable questions, especially around the idea that data sharing is a personal choice. Many people think they choose what brands will share in exchange for a more personalized experience. But a more accurate analogy for data sharing is public health. If you consent to give information to a brand or a technology company, it can also use those things to make predictions about other people. As a result, we have a collective risk and responsibility about how we use data, much as we have a collective risk when it comes to infectious disease.

To give an extreme example, in the 2016 election, Cambridge Analytica built models about user behaviour on Facebook that made some extraordinary predictions. For example, one model found a relationship between whether you like curly fries and how intelligent you are. Another could determine political affiliation by whether or not you liked Hello Kitty. It’s really difficult to give consent to the use of your personal data when you have no idea what it is being used for.

Predictably, this has led to a backlash. Consumers, regulators, and other technology platforms have become concerned about the pervasive and potentially abusive nature of data collection, and as a result, the cookie itself is being forced to retire.

The implications moving forward

For marketers, this is a reality check. As an industry, we need to acknowledge that whatever type of media we are using or communications we are devising, we are always thinking about the next best action and next best person. We are always trying to be more relevant, whether that’s tracking people online or putting the right posters halfway down the escalator at Piccadilly Circus. We are always trying to incite change and influence behaviour. These are things we cannot run away from; rather, we must engage openly and honestly with them.

Before we start to think about what the future of cookie-less tracking looks like, we need to have a conversation about the ethics of data technology. We should ask whether there are ways to do it better than we have in the past. Few people thought about these things 25 years ago when the data industry was in its infancy. Over time, this lack of attention to data ethics has led us down paths that now threaten the basis of digital advertising. It is time to do the hard work of getting it right so that we don’t find ourselves in a similarly uncomfortable position a decade or so down the road.

# Alex Steer (27/01/2022)


Predicting the future of advertising: hits and misses

1939 words | ~10 min

Back in 2013, the Wharton School at the University of Pennsylvania ran a study exploring what the future of advertising might look like in 2020. I wrote about it at the time, and did some basic and only-partially-scientific word distribution analysis to look at whether the language being used to describe advertising seven years in the future was significantly different from, or more diverse than, language used in predictions of advertising over the next twelve months. I found, at the time, that 'predictions about 2020' looked eerily similar to 'predictions about 2013', and wondered whether they were underestimating change.

Which brings us to today. While the study's website has not survived to 2020 (see its archive), this blog just about has. So now that 2020 is the present (and what a present it's turned out to be), I thought it would be worth revisiting that analysis.

First off, no, I'm not going to deduct any marks for failing to predict a pandemic. We're better than that. Instead, let's re-run the analysis, to see how 2013's predictions of 2020 vary from 2019's predictions of 2020.

To do this, we need a new data set. The old one was built from:

  • The raw text of all the 2020 predictions submitted to the Wharton programme - a total of 39,405 words.
  • The raw text of a set of 2013 predictions from industry sources I found at the end of 2012 - a total of 33,041 words.

This time round, I've pulled together the raw text of a set of 2020 predictions from industry sources from the end of 2019 and January 2020 - a total of 37,422 words. As before, all are from reputable sources, mainly interviews/talking heads, appearing in the first two pages of Google search results for 2020 advertising predictions.

So here goes. The table below shows the top 20 words appearing in the 2013 predictions from 2012, the 2020 predictions from the Wharton study, and the 2020 predictions from 2019.

| Rank | 2013 trends | 2020 predictions | 2020 trends |
|------|-------------|------------------|-------------|
| 1    | marketing   | advertising      | data        |
| 2    | brands      | consumers        | marketing   |
| 3    | content     | social           | customer    |
| 4    | social      | brand            | brands      |
| 5    | data        | brands           | digital     |
| 6    | mobile      | media            | new         |
| 7    | consumers   | marketing        | media       |
| 8    | media       | consumers        | customers   |
| 9    | brand       | advertisers      | brand       |
| 10   | marketers   | future           | experience  |
| 11   | become      | content          | purpose     |
| 12   | companies   | digital          | companies   |
| 13   | business    | data             | human       |
| 14   | online      | mobile           | consumers   |
| 15   | big         | need             | business    |
| 16   | users       | ad               | people      |
| 17   | services    | like             | trends      |
| 18   | trend       | world            | global      |
| 19   | been        | today            | consumer    |
| 20   | digital     | technology       | content     |

You can see the top 500 words/two-word phrases by frequency here (common stopwords and obvious noise removed).
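For the curious, the counting itself is simple. Here is a minimal sketch of the kind of word-distribution analysis described above; the stopword list is an illustrative subset, not the cleaning actually applied:

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "for", "will",
             "be", "it", "on", "as", "are", "with", "this", "we", "more"}  # illustrative subset

def top_words(text: str, n: int = 20) -> list[tuple[str, int]]:
    """Lowercase, strip punctuation, drop stopwords, count the rest."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

# Usage: run separately over each corpus (the 2013 predictions, the Wharton 2020
# predictions, and 2019's predictions of 2020), then compare the resulting rankings.
print(top_words("data will be the new oil and data will eat advertising"))
```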

So, from this rather partial analysis, how do the current concerns and buzzwords of marketers compare to what the Wharton participants talked about seven years ago?

The first and most obvious thing is the dominance of data as the single most talked-about thing - up from fifth place in the predictions about 2013, and up from 13th place in the Wharton 2020 predictions. This suggests that the Wharton predictions underestimated the persistence and prevalence of data even at the time, or perhaps assumed it would decline as a topic of conversation and become a solved problem. Clearly not. Digital, which even in 2013 may have felt like a term nearing retirement, has also moved up into the top five.

But beyond the dominance of data, what's interesting looking down the new top 20 list, and indeed the top 500, is how thoroughly the language of business and marketing has come to supplant the familiar language of the advertising industry. Advertising, which unsurprisingly was the most frequent word in the Wharton predictions, doesn't even make the top 20. In the new 2020 data, advertisers are in 27th place, and advertising in 29th. Communications is only the 483rd most common word. The same shift is visible in the movement up of customer(s) vs consumers, a subtle shift that may reflect the increasing dominance of direct marketing and CRM thinking vs the language and ideas of brand marketing. Agencies is in 195th place, on a level pegging with customer data. Perhaps we shouldn't be surprised that the words companies and business are in the top twenty for 2020, as they were for 2013, since they are for the most part the funders of advertising... but they were not as well represented in the Wharton predictions.

There's some reassuring familiarity here too. Brands remain every bit as vital a part of the conversation as they were in 2013, much in line with the concerns of the Wharton respondents. This should act as a sense-check to those who persistently predict their demise. Media has hardly moved - though about a third of the usage is in the term social media.

What else was under-represented? Experience makes an appearance now, as does content, reflecting the new front line of concern for where brands show up and (rightly or wrongly) pushing advertising off its perch. Purpose is also a notable addition, but digging into the underlying text it is used in two different ways - the broad notion of brand purpose, on the one hand, and the nuanced and technical territory of data collection purpose(s), on the other - which concisely demonstrate the competing pressures on the modern marketing organisation, and the need to find a simplicity on the far side of so much complexity. The prevalence of human is exactly the cry for help that it appears to be.

Comparing the Wharton prediction text with the updated 2020 word list, what stands out? First, the extent to which the language used by the predictors back in 2013 suggests that they thought the future of advertising would be recognisably ad-like: advertising brands to consumers through media to achieve marketing goals, even while grappling with the changes and challenges of social, content, digital, data and mobile. These were the concerns of 2013 but (reading back through the original Wharton responses) they also came with a degree of confidence: that advertising was at heart a human discipline and that brands would adapt to a noisier, more democratised environment by learning to listen before they talked, and earning the right to be heard.

Little of that same confidence is found when listening to how we talk about 2020 now. Pandemics and downturns aside, there is a sense that advertising is a complex, technical, precarious and demanding discipline, using data to do digital marketing to customers of a business... while at the same time grasping for purpose and human connection for brands. The word social, aptly, is the 43rd most common word, just one step ahead of AI.

It sounds clever, it sounds complicated, it sounds like a high-wire act of technology, privacy and coordination.

It does not sound fun.

What feels like it's missing, between the data mines and the mountaintops of purpose, is any real discussion of how to create communications and experiences that people might actually enjoy. Or, indeed, how to get beyond the nuts and bolts of how to deliver advertising, to the trickier but more fulfilling question of how to get it to perform. The discourse of advertising feels like it has been co-opted by those who profit by making advertising sound hard, rather than those who profit by making it look easy.

For all that complexity, there is little sense of progress. Looking back at the next-twelve-month predictions of marketers in 2012, it takes an attentive reader to notice that they are eight years old. Of the dominant concerns of 2012 - marketing, brands, content, social, data, mobile, consumers - only mobile seems like a solved problem.

Agencies may look at the language of the industry today and feel sidelined. They bear a share of the responsibility. By ignoring these problems of capability and coordination - by under-investing in or divesting expertise in data, technology, media, production, etc. - many agencies have left their clients to solve them. It's no wonder they have become preoccupations, and that new sources of help have been sought.

What's driving those preoccupations? Ultimately, I think, the increased difficulty of finding growth from markets, due to more diffuse demand, oversupply of good-enough products and services, more competition, and shifts in the mix of channels where people discover brands, research purchases and buy. Many of the foundations of the economy of mass markets, mass production and mass media have changed. These changes were well underway when the last set of predictions were made. The fundamentals of marketing have become less familiar, and harder, for many businesses.

What's been lost? Perhaps the optimism with which the future of advertising was talked about in those Wharton predictions - the sense that the challenges facing advertising's next decade would be social rather than technical, and that brands would respond by adapting the arts of persuasion to a noisier and more complex environment. Too little effort has been spent since on learning to communicate more effectively, compared to the effort spent on increasingly sophisticated marketing technology and operations. Worse, these have too often been treated as separate conversations - numbers on one side, words and pictures on the other. Agencies should have shown more interest in the mechanics and the maths.

Where next? I suspect that if we asked the industry to predict advertising in eight years' time, the results would just reflect today's preoccupations again, as they did eight years ago. To take that at face value would, I think, be a mistake. Many of the technical challenges of marketing, media and commerce are now well understood by marketers, technology companies and consultants, if not well enough by agencies. As we converge on solutions, the competitive advantage of having certain technology, and even of operating it in certain ways, will begin to decline again. The 'tech stack years' are, probably, drawing to a close. What will become more valuable is what can be done uniquely: this includes building strong brands and creating distinctive and powerful communications, but it also includes analysis, prediction and innovation based on the information a brand has about its consumers (or customers), and the coordination of all marketing's assets to be more interesting and more effective.

A new golden age for agencies is far from guaranteed, but far from impossible. Those who succeed will be growth partners, finding advantage from all a brand's assets, mastering complexity rather than selling it or ignoring it, and making it all look easy again.

# Alex Steer (09/08/2020)


The media agency of the future will have no boundaries

1019 words | ~5 min

Note: this piece first appeared in Campaign in October.

Change can often be a shock but not a surprise. We love to talk about "black swans", but shifts in society, culture or business often announce themselves a long time in advance, if we know how to listen.

Our industry is no different. The narrative of media agencies as lumbering giants is flatly wrong – there are few sectors with media's track record of evolution and reinvention. It's happening again, driven by changes in technology, consumer behaviour and client need – but it's easy to miss the signs.

When things change, our response is often to look for patterns from the past rather than signals from the future. This may explain the recent spike in industry commentators asking whether we're returning to the era of the "full-service" agency after years of increasing specialisation.

I'd like to suggest that the change is real, but the analogy is wrong. The future isn't full-service, it's modular.

The connected customer experience goes beyond media

The traditional agency model, of offering a fixed scope of high-class outsourcing that doesn't change for years, is simply no longer sophisticated enough to suit the demands of today's clients.

That isn't just because clients have changed – it's because the world has changed.

The experiences people have with brands are more interlinked and less predictably linear, and this creates challenges and opportunities.

Marketers now have to look at the whole consumer experience journey, finding ways to get advantage out of each part without losing sense of the whole.

If that sounds obvious, it's because it has been coming for years. Any business that isn't organising its marketing around people, journeys and outcomes is behind reality.

The response from brands has been gaining momentum for some years. Most brands are fusing together brand advertising with performance marketing and bringing customer relationship, service, subscription, membership and digital behavioural data into the mix.

Marketers have long been frustrated with the fragmentation of the agency market into narrow, specialist service providers.

Being able to offer an opinion or service relating to only part of the picture may have worked once; now, it's a recipe for gaps and overlaps.

The re-engineering of brands' operational requirements, unified through data, is now bringing matters to a head and is forcing the media and advertising industry to respond.

Clients need agencies without boundaries

In this context, planning and buying media can no longer be a distinct stand-alone discipline. It is now part of a complex landscape, connected with content creation, influence, technology design and build, consumer engagement strategy, data creation, experience design and performance measurement.

Just to make things more exciting, we are nowhere near a settled answer to what the marketing function of the future looks like.

A fixed "full-service" scope is a risky bet, and a constellation of specialist agencies means you spend too much time managing your agencies, not focusing on your customers.

Clients need agencies without boundaries – ie an agency with many specialisms, along with the ability to assemble them and evolve them fast according to client needs. To become an agency without boundaries, you need to move on from focusing on service silos to understanding how to move people from a passive brand relationship to action.

And of the old breed of agencies – from creative to digital, CRM and branding – it's media agencies that have shown themselves the most able and willing to evolve and adapt.

When WPP created Wavemaker, for instance, we were asked to design the agency of the future now. It's not often you are given a challenge or an opportunity that big.

We decided from the outset that Wavemaker should not be another media agency in the mould of those that had come before, but a new model focused on unlocking growth by finding opportunities in people's paths to purchase and the huge array of influences on that.

Every one of us is on a perpetual journey in relation to many different brands. We see, engage with or ignore these brands in 100 different ways every day. The beginning of a journey may take years, days or minutes, or it may never start.

If it does start, it may take months of media, content or tech nudges, inspiration, persuasion and reflection before it reaches a conclusion. Or it could take 30 seconds from discovering a need to clicking "buy now".

Looked at this way, the scope for creativity and invention is boundless – from inspiring affinity with a brand to delivering a double-digit sales uplift in a matter of hours. But it requires a unity of service provision, thought and action to achieve meaningful impact.

Removing service boundaries has not only meant that we can align ourselves with a client's business strategies more completely, but it has unlocked our business model.

Media agencies are uniquely able to adapt

Clients are asking for flexibility, speed, problem-solving, rapid innovation and delivery. Inevitably, many briefs are not simple media planning and buying retainers; they could just as easily include a consultancy project or a content development initiative.

Rather than being marginalised, media agencies now have an incredible opportunity to enhance their relationship with brands as a business growth partner rather than a media service provider.

Historically, media agencies have shown themselves uniquely acquisitive and flexible in terms of absorbing skillsets and expertise across strategic planning, media performance technology, customer insight, content development and data utilisation.

You can see the difference in the diversity of skillsets and backgrounds in our teams. The mix of people involved in our agency and across the media agency sector as a whole is dramatically different from even two years ago.

I expect the breadth of expertise and delivery offered by agencies to continue to grow, reflecting the radical changes facing clients as they continue to digitise their business operations and respond to people's changing desires and needs.

Agencies have been, and always will be, a reflection of clients' needs. Technology has changed how brands and people interact profoundly. That's not sudden, but it's important.

Marketers need agencies to respond to this and help them see and act on the whole customer journey.

# Alex Steer (21/06/2020)


Machine learning needs people knowledge

1079 words | ~5 min

Written on behalf of UKOM and originally published on the Mediatel site.

If you were to write out a list of the data-related topics that marketers talk about, and arrange them in order of sexiness, it’s a fair bet that you would find ‘artificial intelligence’ at one end of that list, and ‘credible standards for audience measurement’ at the other.

This should come as no surprise. After all, we’ve lived through the best part of a decade of incredible advances in cloud computing which have given new life and (let’s face it) big new injections of cash into artificial intelligence and machine learning research. We’ve seen the rise of tech and media giants whose entire business models are built on predictive analytics, not to mention significant B2B marketing efforts directed at getting brands excited about the possibilities of ‘AI’ (a phrase whose definition ranges from the enormously specific to the borderline mystical).

Nor is this purely hype, the way so many other supposed marketing innovations are. (I’m looking at you, blockchain.) Artificial intelligence does, after all, offer the prospect of being able to make better decisions faster, and to update those decisions as the evidence changes. For an industry like ours, whose practitioners have gone from being parched for lack of data to drowning in the stuff, in little more than a decade, the promise of an approach that can help make sense and create value from all that data is hugely appealing.

The kinds of questions that AI, machine learning and cloud computing can answer read like the back of a self-help book for marketers. How do I understand my customers better? How do I make better predictions in close to real time? How do I create better customer experiences? How do I see how every marketing touchpoint contributes to penetration, customer acquisition, lifetime value, and so on? These are genuine and legitimate needs, and the application of data science (the well-branded union of statistics and software engineering) can offer meaningfully better answers.

Back in 2012, Harvard Business Review famously declared data science ‘the sexiest job of the 21st century’. It’s a fair bet that they were thinking about the AI end of the spectrum. Meanwhile, down at the other end of my list, away from the excitement, are those of us diligently working out whether the individuals who see all these new AI-powered ‘brand experiences’ are the right people, or indeed people at all. In a world of elegantly-architected walled gardens, some of us are checking the bricks.

Now, I’m a bit of a nerd, as the whole ‘let’s rank data topics by sexiness’ thing in the first paragraph has probably made clear, so I don’t think being on the boring-but-important end of the spectrum is anything to be ashamed of. But I think diligence can be its own worst enemy when it forgets to win hearts and minds – so now is the time to start shouting, loudly, about the importance of people measurement.

Why now? Because for the last ten years or so, we’ve been building a house of cards – building businesses and deploying billions of dollars of marketing spend on ‘good-enough’ proxies for people and audiences. From clicks, to cookies, to ‘roll-your-own’ customer IDs, to ‘trust-the-platform’ walled-garden reach and frequency estimates, we’ve created a digital economy that rewards audience scale and granularity, without insisting on independent validation of that information. As if that weren’t bad enough, we’ve taken a similar ‘good-enough’ approach to behavioural metrics such as ‘impressions’, ‘views’ or ‘engagement’, with the result that media planners are now routinely forced to compare apples to oranges, which is bananas.

So what happens when you pay for ‘people data’ without insisting the people are real? After a decade of fake news, disinformation, electoral interference, data breaches, echo chambers and unchecked hate speech, look me in the eye and tell me you don’t know.

But the real answer to the question, ‘why now?’, is that if those of us who control media budgets don’t insist on a higher standard now, we’re at the start of a catastrophe, not the end of one. We are only beginning to explore the ability of artificial intelligence and machine learning to classify, predict, decide and act, based on information about people. What do you think happens if we allow algorithms to make decisions based on fake people? The consequence for advertisers is a massive escalation of fraud. The consequences for people and for society as a whole are much worse. Optimising towards unverified engagement metrics such as clicks has already led to a noisy digital ad ecosystem which has prompted rampant ad-blocking. That will look like a fairly small problem compared to the reputational damage to our industry if we are seen as the major source of funding for platforms that enable fake news and disinformation.

There’s a wonderful scene in the film The Big Short where one of the lead characters, a hedge fund manager who’s been betting against dodgy loans, realises that for every dollar invested directly in those loans, there are thousands of dollars invested in exotic financial derivatives built on top of the value of those loans. Without a commitment to verified people data, many businesses will find themselves in a similar situation, making huge investment decisions based on machine learning algorithms which are trained on dodgy data about the behaviours of people who cannot be verified as real. The results will be like playing Jenga. With a hammer.

There is good news, and it’s simpler than you might think. Most of us think that these problems are inherent to the ‘black box’ nature of AI and machine learning. In fact, the vast majority of them are data problems, not algorithm problems. Marketers can take two actions that, if applied at scale, will drive the cleanup of the ecosystem. The first is to work directly with responsible businesses in a transparent media supply chain. The second is only to plan, measure and pay for media based on independently verified people data, such as that provided by UKOM. Do not accept machine learning, however sophisticated, in the absence of people knowledge.

Digital audience measurement may never be sexy. But now, more than ever, it’s important – to brands, to individuals, to societies. And that matters more.

# Alex Steer (30/07/2019)


When they go deep, we go wide: Why almost everyone is getting marketing science wrong

2571 words | ~13 min

I'm going to start the new year off on a controversial note – not with a prediction (predictions are overrated) but with an observation. I think most of the chatter and hype about data science in marketing is looking in the wrong direction.

This is a bit of a long read, so bail out now or brace yourself.

I've worked in marketing analytics, marketing technology, digital marketing and media for the last decade. I've built DMPs, analytics stacks, BI tools, planning automation systems and predictive modelling tools, and more than my fair share of planning processes. I am, it's fair to say, a marketing data nerd, and have been since back when jumping from strategy to analytics was considered a deeply weird career move.

My discipline has become, slowly-then-quickly, the focus of everyone's attention. The industry buzzwords for the last few years have been big data, analytics, machine learning and AI. We're starting to get to grips with the social and political implications of widespread data collection by large companies. All of this makes data-driven marketing better and more accountable (which it badly needs). But all of this attention - the press coverage, the frenzied hiring, the sales pitching from big tech companies, all of it – has a bias built into it, that means talented data scientists are working on the wrong problems.

The bias is the false assumption that you can do the most valuable data science work in the channels that have the most data. That sounds self-evident, right? But it is, simply, not true. We believe it's true because we confuse the size and granularity of data sets with the value we can derive from analysing them.

Happiness is not just a bigger data set

We're used to the idea that more data equals better data science, and therefore that by focusing on the most data-rich marketing channels, you will get the best results. We are told this every day and it is a simple, attractive story. But the biggest gains in marketing science come from knowing where to be, when to appear and what to say, not how to optimise individual metrics in individual channels. Knowing this can drive not just marginal gains but millions of pounds of additional profit for businesses.

This makes lots of people deeply uncomfortable, because it attacks one of the fundamental false narratives in marketing science: that the road to better marketing science is through richer single-source data. This narrative is beloved of tech companies, but it comes from an engineering culture, not a data science culture. Engineers, rightly, love data integrity. Data scientists are able to find value from data in the absence of integrity, by bringing a knowledge of probability and statistics that lets us make informed connections and predictions between data sets.

Marketing data is the big new thing, but from the chatter, you would believe that the front line of marketing analytics sits within the walled gardens of big data creators like Google, Facebook, Amazon or Uber. These businesses have colossal amounts of user data, detailing users' every interaction with their services in minute detail. There are, to be sure, massive amounts of interesting and useful work to be done on these data sets. These granular, varied and high-quality data resources are a wonderful training ground for imaginative and motivated data scientists, and some of the more interesting problems relate to marketing and advertising. For example, data scientists within the walled gardens can work on marketing problems like:

  • How do I make better recommendations based on people's previous product/service usage?
  • How do I find meaningful user segments based on behavioural patterns?
  • How do I build good tests for user experience features, offers, pricing, promotions, etc?
  • How do I allocate resources, inventory, etc., to satisfy as many users as possible?

All of which is analytically interesting and important, not to mention a big data engineering challenge. But if you're a data scientist and particularly interested in marketing, are these the most interesting problems?

I don't think they are.

These are big data problems, but they are still small domain problems. Think about how much time on average people spend in a single data ecosystem (say, Facebook or Amazon), and the diversity of the behaviours they exhibit there. You are analysing a tiny fraction of someone's behaviour; worse, you are trying to build predictive models from the slice of life that you can observe in minute detail. If you work in operations or infrastructure, almost all the data you need sits within the platform. But if you are doing marketing analytics, swimming in the deep but small pool of a single data lake can cause a serious blind spot. How much of someone's decision to buy something rests on the exposure to those marketing experiences that you happen to have tracked through that data set?

As a marketing scientist you have an almost unique opportunity among commercial data scientists: to build the most complete models of people's decision-making in the marketplace. Think about the last thing you bought: now tell me why you bought it. The answer is likely to be a broad combination of factors… and you're still likely to miss out some of the more important ones. As marketing scientists we're asked to answer that question, on a huge scale, every day in ways that influence billions of dollars of marketing investment.

We need bigger models, not just bigger data

Marketing analytics is a data science challenge unlike most others, because it forces you to work across data sets, often of very different types. The machine learning models we build have to be granular enough to allow tactical optimisation over minutes and hours, and robust enough to sustain long-range forecasts over months and years.

The kinds of questions we get to answer include:

  • What is the unique contribution of every marketing touchpoint to sales/user growth/etc?
  • Can we predict segment membership or stage in the customer journey based on touchpoint usage? How do we predict the next best action across the whole marketing mix?
  • How do touchpoints interact with each other or compete?
  • Are there critical upper and lower thresholds for different types of marketing investment?
  • How sensitive are buyers to changes in price? What other non-price features would get me the same result as a discount if I changed them?
  • How important is predisposition towards certain brands or suppliers? What is the cost and impact of changing this vs making tactical optimisations while people are in the market?

Yet we have a massive, pervasive blind spot. We are almost all acting as if marketing science applied only to digital channels. Do a quick Google for terms like 'automated media planning' or 'marketing optimization'. Almost all of the papers and articles you will find are limited to digital channels and programmatic/biddable media. I have had serious, senior people look me in the eye and tell me there is no way to measure the impact of brand predisposition on market share, no way even to account for non-direct-response marketing touchpoints like television, outdoor advertising or event marketing. This is, of course, wrong.

Everywhere you look, there is an unspoken assumption that the whole marketing mix is just too complicated to be analysed and optimised as a whole – that the messy, complex landscape of things people see, from telly ads to websites to shelf wobblers, needs to be simplified and digitised before we can make sense of it. It's little surprise that this idea, that anything not digital is not accountable, is projected loudest by businesses who make their money from digital marketing.

Again, this is an engineering myth, not a data science reality. Engineers, rightly, look at disunited data sets and see an integrity problem that can be fixed. Data scientists should (and I hope do) look at the same data sets and see a probability problem worth solving. The truth is that it is possible to use analytics and machine learning to build models that incorporate every marketing touchpoint, and show their impact on business results. The whole of media and marketing planning is a science and can be done analytically – not just the digital bits. Those who claim otherwise are trying to stop you from buying a car because all they know how to sell you is a bicycle.
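
To make that concrete, here is a minimal, purely illustrative sketch of the kind of cross-touchpoint model being described: a simple regression of sales against 'adstocked' spend for a mix of offline and digital channels. The channel names, decay rates, response curve and synthetic data are all assumptions for the example, not a description of any real model used in practice.

    # Illustrative sketch only: a tiny marketing-mix-style regression that puts
    # offline and digital touchpoints in the same model. Channels, adstock decay
    # rates and the synthetic data are invented for the example.
    import numpy as np

    rng = np.random.default_rng(0)
    weeks = 156  # three years of weekly data

    def adstock(spend, decay):
        """Carry a share of each week's spend over into the following weeks."""
        out = np.zeros_like(spend)
        carry = 0.0
        for t, s in enumerate(spend):
            carry = s + decay * carry
            out[t] = carry
        return out

    # Assumed weekly spend per channel (arbitrary units) and decay rates.
    channels = {"tv": 0.6, "outdoor": 0.4, "search": 0.1, "social": 0.2}
    spend = {c: rng.gamma(2.0, 50.0, weeks) for c in channels}

    # Synthetic 'true' sales: base + diminishing returns on adstocked spend + noise.
    true_coef = {"tv": 0.8, "outdoor": 0.3, "search": 0.5, "social": 0.2}
    sales = 200 + sum(true_coef[c] * np.sqrt(adstock(spend[c], channels[c]))
                      for c in channels) + rng.normal(0, 5, weeks)

    # Fit an ordinary least squares model on the transformed spends.
    X = np.column_stack([np.ones(weeks)] +
                        [np.sqrt(adstock(spend[c], channels[c])) for c in channels])
    coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)

    for name, est in zip(["base"] + list(channels), coefs):
        print(f"{name:8s} estimated coefficient: {est:.2f}")

Even a toy model like this attributes a contribution to television and outdoor alongside the biddable channels, which is the point: nothing about non-digital touchpoints puts them beyond analysis.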

This is the part that makes people uncomfortable – because it requires a more sophisticated data science approach. Being smart within a single data set is relatively easy – getting access to the data is a major engineering problem, but the data science problems are only moderately hard. As a data scientist within a single walled garden, it's easy to feel a sense of completeness and advantage, because only you have access to that data. Working across data sets, building models for human behaviour within the whole marketplace, needs a completely different mindset. There is no perfect data set that covers everything from the conversations people have with their friends to the TV programmes they watch to the things they search for online to the products they see in the shops – yet we need to build models that account for all of this.

Probability beats certainty… but it's harder

Making the leap from in-channel optimisation to cross-channel data science means having a better understanding of the fundamentals of probability theory and the underlying patterns in data. For example, I've built models that predict the likelihood that people searching for a brand online have previously heard adverts for the brand on the radio, and the optimum number of times they should have heard them to drive an efficient uplift in conversion to sales. If I had a data set that somehow magically captured individuals' radio consumption, search behaviours and supermarket shopping, this would be a large data engineering problem (because there'd be loads of data) but a trivial data science problem (because I'd just be building an index of purchasing rate for exposed listeners vs a matched control set of unexposed listeners, and so on). This is the kind of analysis that research and scanning panel providers have been doing for decades - it's only the size of the data set that's radically new.
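
As a purely illustrative aside, the 'trivial' single-source version of that analysis is just an exposed-vs-control uplift index. The field names and panel data below are invented for the example.

    # Illustrative sketch only: purchase-rate index for exposed vs unexposed
    # panellists. Column names and data are made up for the example.
    import pandas as pd

    panel = pd.DataFrame({
        "heard_radio_ad": [True, True, True, False, False, False, True, False],
        "bought_brand":   [True, False, True, False, True, False, True, False],
    })

    # Purchase rate within each exposure group, then index exposed against control.
    rates = panel.groupby("heard_radio_ad")["bought_brand"].mean()
    uplift_index = 100 * rates.loc[True] / rates.loc[False]

    print(f"Exposed purchase rate:   {rates.loc[True]:.0%}")
    print(f"Unexposed purchase rate: {rates.loc[False]:.0%}")
    print(f"Uplift index (100 = no effect): {uplift_index:.0f}")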

But of course, that data set doesn't exist. It's unlikely it'll ever exist, because the cost of building it would be far in excess of the commercial interests of any business. (Nobody is in the 'radio, online search and grocery shopping' industry… at least not yet. Amazon, I'm looking at you.) So what do we do?

The engineering response is to try to build the data set. This is a noble pursuit, but it can tip into an engineering culture response: trying to change human behaviour so that people only do things that can be captured within a single data set. An engineering culture will try to persuade advertisers to shift their spend from radio to digital audio, and their shopping from in-store to online, because then you can track all the behaviours end to end. So measurement becomes trivial - it's just that, to achieve it, you've had to completely change human behaviour and marketing practice, and build a server farm the size of the moon to capture it all.

The data science response is to look at it probabilistically - to create, for example, agent-based simulations of the whole population based on the very good information we have about the distribution of radio listening, online search and supermarket shopping across that population. To do this, you need to master the craft of fitting statistical models without overfitting them - building a model of exposure and response that is elegant: one that matches reality, but is also capable of predicting future change and dealing with uncertainty. When you do this, it's possible to build very sophisticated models that give a good guide to how the whole marketing mix influences present and future behaviour, without trying to coerce everything into a single data set.
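
A toy version of that probabilistic approach might look like the sketch below. Every marginal probability and effect size here is invented for illustration; a real model would calibrate them against survey, listening and panel data rather than hard-coding them.

    # Illustrative sketch only: a toy agent-based simulation that fuses assumed
    # marginal distributions (radio listening, brand search, supermarket shopping)
    # instead of waiting for a single-source data set. All parameters are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents = 100_000

    # Assumed marginal probabilities for a given week.
    p_hears_radio_ad = 0.35
    p_shops_at_retailer = 0.35
    p_searches_brand_base = 0.02
    radio_uplift_on_search = 1.8   # assumed multiplicative effect of exposure
    p_buys_given_search = 0.30
    p_buys_no_search = 0.05

    # Simulate exposure and shopping independently across the population.
    heard = rng.random(n_agents) < p_hears_radio_ad
    shops = rng.random(n_agents) < p_shops_at_retailer

    # Radio exposure lifts the probability of searching for the brand.
    p_search = np.where(heard,
                        p_searches_brand_base * radio_uplift_on_search,
                        p_searches_brand_base)
    searched = rng.random(n_agents) < p_search

    # Purchase requires access to the retailer; searching raises conversion.
    p_buy = np.where(searched, p_buys_given_search, p_buys_no_search) * shops
    bought = rng.random(n_agents) < p_buy

    print(f"Purchase rate, heard ad:     {bought[heard].mean():.3%}")
    print(f"Purchase rate, did not hear: {bought[~heard].mean():.3%}")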

Data science cultures are vastly better suited to transforming the future of marketing than engineering cultures. They see ambiguity as a challenge rather than an error, and they look hard for the underlying patterns in population-level data. They build models that focus on deriving greater total value from the marketing mix, through simulation and structural analysis across data sets, rather than just deterministic matching of individual-level identifiers. With apologies to Michelle Obama: when they go deep, we go wide.

Data science cultures may not be where you think

Marketing needs to change, and data is going to be fundamental to that change, as everybody has been saying for years. The discipline needs to be treated as a science, and the agencies, consultancies and platforms that want to survive in the next decade need to make a meaningful investment in technology, automation and product.

But while everybody is looking to the engineering cultures of Silicon Valley for salvation, I think the real progress is going to be made by data science cultures - the organisations that combine expertise in statistical data science, data fusion, research and software development, to create meaning and value in the absence of data integrity. Google, to its credit, is one of these. Some of the best original statistical research on the fundamental maths of communications planning is being done in the research group led by Jim Koehler and colleagues.

My employer, GroupM (the media arm of WPP), is another. Over the last few years we've quietly built up the largest single repository of consumer, brand and market data anywhere, of which the big single-source data sets are just one part. We are in the early years of throwing serious data science thinking at that data, building models and simulations for market dynamics that no single data set could hope to capture. Some of the other big media holding companies have strong data science cultures and impressive talent. There are a handful of funded startups, too - but vanishingly few, relative to the tidal wave of investment behind data engineering firms and single-source data platforms.

This is a deeply unfashionable thing to suggest, but a lot of the most advanced marketing science work is being done in media companies and marketing research firms, not in technology companies. There are two reasons for this. First, the media business model has supported a level of original R&D work for most of the last decade, even if it has not always been turned systematically into product. Second, media companies and agencies are ultimately accountable for decisions about what to invest, where, how much and when - the kind of large-scale questions that can't be solved simply by optimising each individual channel to the hilt. (On a personal note, this is why I moved from digital analytics into media four years ago - the data is more diverse, the problems harder and more interesting.)

While everybody is focusing on the data engineering smarts of the Big Four platforms, keep an eye on the data science cultures that are transforming a huge breadth of data into new, sophisticated ways of predicting marketing outcomes. And if you're a data scientist interested in marketing, look for the data science cultures, not just the engineering ones. They're harder to find because money and fame aren't yet flowing their way… but they have a big chance of transforming a trillion-dollar industry over the next few years.

# Alex Steer (04/01/2019)


Nets, spears and dynamite

804 words | ~4 min

This originally appeared in the 50th edition of Campaign. It's co-written by my colleague David Fletcher.

--

Nothing brings us together like a good theoretical disagreement, does it? For an industry built on a reputation for persuasion, we are rather fond of picking sides. This can be on almost any topic, but the prize fight of the year, possibly even the decade, is over the question of data and targeting.

This clash of advertising cultures has become so profound that those on either side no longer seem to be talking the same language. In the red corner are those who argue that the era of mass communication is dead, and that highly targeted, in-the-moment interventions to fragmented 'segments of one' will determine what people buy and why. In the blue corner, we have the defenders of scale, reminding us that broadcast media packs a bigger punch, that brands need to reach new buyers and that costly signalling makes them more desirable in a market driven as much by emotion as logic.

The problem is, we are asking our clients to referee – and that's an exhausting distraction from the 'day job' of solving business problems. Marketers know that the right answer is both: both shared experiences and precision, brand-building and demand fulfilment. They are looking to us – and others – for guidance on how to do both together, and do them well.

Marketing is more than a bit like fishing. Sometimes you fish with a net: there is value in catching lots of potential buyers all at once, even if some aren't ready yet and need to be thrown back for another day. Sometimes, you fish with a spear: you go after individuals because they're easy to spot and disproportionately appealing. And sometimes, you fish with dynamite: you throw something new into the water and blow everything up.

Our agency, Wavemaker, is only 10 months old, born in one of the most disruptive periods our industry has seen for decades. When we put the words "media, content and technology" outside our door, it was out of a sense of shared frustration with the "two tribes" thinking that leads to clients having to act as peacemakers and interpreters between their partners. We're building a large agency of specialists in different client verticals and marketing disciplines, none of whom claim a monopoly on the right answer.

We've now organised those specialists into three large disciplines: Wavemaker Media creates shared experiences for brands (fishing with a net); Wavemaker Content makes ideas and partnerships that shift brand perceptions (fishing with dynamite); and Wavemaker Precision brings all our digital marketing, ecommerce, analytics and technology experts into one team to deliver targeted relevance (fishing with a spear). Our insight, effectiveness, strategy and client-delivery teams operate across all three (fishing where the fish are). We've done this to simplify our offer to clients, and help them accelerate their own transformation by giving easier access to the right expertise, configured in the right way.

Clients' most urgent need is in precision marketing, as the fusion of digital media, search, ecommerce, CRM and tech is now known. Most businesses have digital transformation as a C-suite priority, and this means taking control of their data and technology investments, reorganising the marketing, sales and commerce functions around customer intelligence, and integrating media with digital user experience. Most agencies talk a good game with precision marketing but few deliver it in practice, and this includes many of the specialist performance agencies that make the most noise in this space.

We find there are three factors involved in making a successful leap from performance to precision. First, a focus on growth audiences. Too much digital targeting is based on reaching people who are easy to find or who respond well, rather than those who represent real and distinct sources of growth. This leads to bland demographic targeting instead of intelligent data use; or, worse, false optimisation towards digital hand-raisers.

Second, real data scrutiny. Much digital data makes promises it cannot possibly live up to (can you really target introverted low-fat cheese spread consumers in Leamington Spa?), and off-the-shelf attribution models give a wildly inaccurate view of marketing contributions. Precision means building up trusted data assets and measurement approaches, not just box-ticking.

Third, obsessive deployment. DIY digital buying is easier than ever. Clients need certified activation, tech and analytics experts who will work flexibly and collaboratively to squeeze every last bit of performance out of their tech stack and marketing platforms. Vague claims or mysterious proprietary tools are no substitute.

Clients and agencies who focus on these three things – growth audiences, data integrity and obsessive deployment – can avoid the theoretical debates and focus on finding new ways to do what great advertising has always done: building strong brands, delighting customers and driving growth.

# Alex Steer (27/10/2018)