MEASUREMENT STRATEGY
IN THE DIGITAL ERA:
A GREEN PAPER
This Green Paper and accompanying attachments have been produced as a provocative discussion piece on measurement strategy as part of the 2017 EffWeek Thought Leadership Conference organised by the IPA.
Co-produced with Gain Theory, the IPA and a selection of leading industry figures, the green paper was launched and debated at an interactive think tank at EffWeek and is intended to help shape the agenda for a larger cross-industry EffWorks research project in 2018.
For more information on this paper's contributors please visit:
www.ipa.org.uk
www.gaintheory.com
Introduction
The Green Paper has been written by Gain Theory with input from a cross-industry working group including:
- Alan Bloodworth & Jon Webb - Gain Theory
- Kathy Dykeman - Facebook
- Jonny Protheroe - Google
- Matt Hill - Thinkbox
- Janet Hull & Joyce Kelso - IPA
- Chris Hutchings - BBC (representing EffWeek Client Advisory Board)
- Vasileios Kourakis - L’Oreal (representing ISBA)
We have also consulted with a wider circle of clients, agencies, content owners and industry bodies, including Dixons, Direct Line, Mondelez, Macmillan and Asda.
Scope
We have asked a number of leading marketers to define what they mean by marketing effectiveness. Jan Gooding from Aviva helps set the challenge:
"Did it work?
Every organisation makes choices about how it invests money. For Marketing to get its share it needs to prove value."
Jan Gooding, Global Inclusion Director, Aviva
Our working definitions, for the purposes of this paper, are:
Marketing
We are focusing on the narrow definition of marketing, mainly surrounding direct communications with current and potential consumers. This covers the paid (P), owned (O) and earned (E) media space.
Wider considerations are covered where P, O, E communications have a clear impact on some other element of the marketing mix.
We are aware that, in time, we have the opportunity to stretch our discussion much wider, to encompass the 7Ps of classical marketing theory.
Effectiveness
There are many ways to demonstrate effectiveness. The debate here centres on marketing activity where the primary aim is either to build the brand for the long term or to drive sharp increases in sales now.
This was summarised nicely by Billy Ryan from Direct Line in the quote:
"I’m often asked how our media budget is split between digital and non-digital. This misses the point – they should really be asking how much is split between demand generation and demand activation."
Billy Ryan, Marketing Effectiveness Manager, Direct Line
In the course of our discussions we have also touched upon the following themes of relevance to the wider topic:
- Effectiveness versus efficiency
- Return on Investment (ROI)
- Manifold effects
- Short-term and long-term performance
- Correlation and causation
- Incrementality
- Comparability
- Key brand health measures
- Mapping tools and techniques
- What is new and different?
- Pain points
METRICS THAT MATTER
One of the key challenges we have identified in addressing marketing measurement is metrics overload: the increase in the number of channels available to marketers has led to an increase in the number of metrics used to report on activity. As one marketer commented in recent independent CMO research commissioned by Gain Theory:
"Too many data points = analysis paralysis"
CMO, Retail
Newer metrics are often not comparable with older ones; they are measured in silos and insufficiently linked to business outcomes. There can also be a mismatch between real-time metrics and reporting processes (marketing teams’ weekly reports, etc.).
In the same research, one senior marketer said ‘We can’t agree on which few metrics to measure or focus on’. Another one said: ‘We need one source of truth that will drive our metrics and resulting insights’.
So, what are the key metrics that marketing leaders could and should focus on?
Our diagnostics or KPIs are based on a number of sources:
- Research conducted on behalf of the IPA and CIMA by Fran Cassidy in the new IPA EffWorks publication ‘Culture First’
- IPA Databank evidence contained within Media in Focus, the latest IPA EffWorks publication in the series authored by Les Binet and Peter Field
- Feedback from the various contributors to this report
However, it is worth making five points at the outset that resonated with virtually all contributors.
- There is not a single metric that on its own captures all relevant factors, though some come close.
- A metric without context has very little use. For example, an increase in like-for-like sales on the back of a competitor exiting the market is not really indicative of the success of your marketing, at least not in any straightforward way.
- Unless there is buy-in from all stakeholder groups, metrics are meaningless. Integration of metrics is key to move beyond a ‘reporting culture’ to a ‘learning culture’.
- Broadly speaking, in current practice, metrics fall into three main buckets:
- Financial performance
- Brand health
- Customer service/engagement
- The volume of data available varies by market sector. Service businesses, particularly those in retail, tend to have a wealth of customer experience data from their many touchpoints, and correspondingly more complex analyses of their primary levers.
Setting priorities
Binet and Field’s most recent analysis of the IPA Databank ranks the shared characteristics of campaigns that successfully generated profitable growth. Their table is shown below:
Source: Media in Focus: Les Binet and Peter Field. (IPA EffWorks 2017)
The sorts of questions that came up in our working meetings were:
- What should guide the metrics we look at?
- Should they vary by department or should we all use the same ones?
- How often should they be reviewed?
- Who agrees them?
- Who owns them?
MEASUREMENT STRATEGIES: WHAT DOES GOOD LOOK LIKE?
Listing the key metrics to focus on is the relatively easy part. Much harder is implementing a measurement strategy that:
- Provides a framework within which KPIs make sense
- Results in the framework not changing every time a campaign changes
- Is consistent in approach i.e. each KPI is defined and measured in the same way
- Is consistent in activation e.g. we know that staff move around frequently, so we should plan with this in mind
Market Mix Modelling (MMM) and Digital Attribution are among the most popular methodologies deployed, but both have shortcomings.
MMM is a top-down approach that is great for measuring channel-level impacts, and perhaps one or two levels down, but struggles to go any further. On TV, for example, it is possible to look at the differential impact by network and by daypart, and to split the impact between demand generating and demand gathering creatives. But for advertisers running more than two or three creatives, it struggles to differentiate. Moreover, the focus of interest is often sales volume or value, which means upper funnel activity that does not directly affect sales will be discounted.
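A minimal sketch of this kind of top-down model, regressing sales on adstocked channel-level spend, is given below. The channel names, decay rates and data are illustrative assumptions, not figures from this paper.

```python
# Minimal MMM sketch: sales regressed on adstocked media spend (illustrative data).
import numpy as np

def adstock(spend, decay):
    """Carry a share of each week's media effect over into later weeks."""
    carried = np.zeros_like(spend, dtype=float)
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

rng = np.random.default_rng(0)
weeks = 104
tv = rng.gamma(2.0, 50.0, weeks)        # weekly TV spend (illustrative)
search = rng.gamma(2.0, 20.0, weeks)    # weekly search spend (illustrative)
base = 1000 + 2.0 * np.arange(weeks)    # slowly growing base demand
sales = base + 0.8 * adstock(tv, 0.5) + 1.5 * adstock(search, 0.1) + rng.normal(0, 30, weeks)

# Design matrix: intercept, trend and the two adstocked channels.
X = np.column_stack([np.ones(weeks), np.arange(weeks), adstock(tv, 0.5), adstock(search, 0.1)])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["base", "trend", "tv", "search"], np.round(coefs, 2))))
```

Note that the model stops at the channel level: splitting the TV coefficient further, for example by creative, quickly runs out of degrees of freedom, which is exactly the limitation described above.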
Digital Attribution, by way of contrast, is a bottom-up approach and is great at providing detailed, granular results within the realm of digital touchpoints. But it struggles to incorporate offline media and events in the “real world” – the weather, for example, or the actions of competitors. Other issues surround cross-device usage and, of course, the well-known debate around viewability, which may skew results. While the technique is great at determining the relative effectiveness of different activities, it has little to say on absolute impact.
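To illustrate the bottom-up nature of the approach, here is a minimal sketch of one common fractional scheme – position-based (“U-shaped”) credit – applied to a single converting path. The path, channel names and 40/20/40 weighting are illustrative assumptions, not a recommendation.

```python
# Position-based (U-shaped) attribution for one converting path (illustrative).
from collections import defaultdict

def position_based_credit(path, first=0.4, last=0.4):
    credit = defaultdict(float)
    if len(path) == 1:
        credit[path[0]] = 1.0
        return dict(credit)
    credit[path[0]] += first
    credit[path[-1]] += last
    middle = path[1:-1]
    if not middle:
        # with only two touches, split the remaining credit equally
        credit[path[0]] += (1.0 - first - last) / 2
        credit[path[-1]] += (1.0 - first - last) / 2
        return dict(credit)
    for touch in middle:
        credit[touch] += (1.0 - first - last) / len(middle)
    return dict(credit)

print(position_based_credit(["display", "paid_social", "search", "search"]))
# {'display': 0.4, 'paid_social': 0.1, 'search': 0.5}
```

The shares are purely relative: nothing in this calculation says how many of the conversions were truly incremental, which is the gap MMM or experiments must fill.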
Classically, the divide between the two techniques can be described as follows:
- Econometric analysis has a good record when it comes to identifying the overall impact of an activity on a dependent variable or KPI.
- But Digital Attribution is required to dig further – yes, Display had this impact, as measured through econometrics, but this combination of publisher, tactic, creative and positioning was most impactful, as measured through attribution.
Often, MMM would provide a boundary for the absolute impact of an activity on incremental sales, and Digital Attribution would then be used to fractionally allocate that impact between tactics. This is fine as far as it goes, but MMM analytics invariably lag behind Digital Attribution: assuming the results of an MMM study from three months ago still hold good can be a stretch. As a result, the industry has been moving towards Multi-Touch Attribution as well as towards more sophisticated unified measurement approaches.
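The hybrid pattern described above can be sketched very simply: the latest MMM read supplies the absolute incremental sales for a channel, and attribution shares allocate that total across tactics. All names and numbers below are illustrative assumptions.

```python
# Allocate an MMM-derived incremental total across tactics using attribution shares.
mmm_incremental_sales = {"display": 12000.0}           # from the latest MMM read (illustrative)
attribution_shares = {                                  # from the attribution tool (illustrative)
    ("display", "publisher_a", "creative_1"): 0.55,
    ("display", "publisher_a", "creative_2"): 0.25,
    ("display", "publisher_b", "creative_1"): 0.20,
}

allocated = {
    tactic: share * mmm_incremental_sales[tactic[0]]
    for tactic, share in attribution_shares.items()
}
for tactic, sales in allocated.items():
    print(tactic, round(sales, 1))
```

The weakness is visible in the first line: the MMM total may be months old by the time the attribution shares are refreshed.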
Other models also have important contributions to make. For example, other methodologies allow base-line sales to evolve, reflecting changing tastes and preferences – a key development. Path-to-purchase approaches allow us to explicitly evaluate the impact of upper funnel activity. What we need is a system that brings all these approaches together into a single framework – combining the holistic nature of econometric analysis with the granularity of digital attribution.
Unified Measurement – The Future
"Today’s increasingly complex media environment means once reliable marketing performance measurement techniques, such as marketing mix and attribution models, fail to properly credit marketing tactics with a customer action"
Forrester
Coined by Forrester in ‘The Marketing Measurement And Insights Playbook For 2017’, the label Unified Marketing Impact Analysis (UMIA) describes perfectly what many marketers are striving to achieve:
- Evaluate all marketing touchpoints, upper and lower funnel, online and offline, in as much granular detail as required
- Take account of all other factors that may have an impact on sales, from competitor activity to the weather
- Deliver results quickly enough for action to be taken in-campaign
Based on feedback from the correspondents who have contributed to this report, we expect these developments to play a leading role in the 2018 research.
So, what would a six-step plan towards a perfect marketing effectiveness measurement strategy look like? Our next stage of market research is designed to provide best practice cases to illustrate this. Based on current knowledge, the straw man that we will use as a framework for discussion is as follows:
From green paper to white
Gain Theory is a global marketing foresight consultancy that helps brands make faster, smarter business decisions via data, predictive analytics, technology and consumer-insight capabilities. The consultancy leverages WPP’s intellectual capital in media and marketing, helping brands move from insight to actionable foresight in order to positively impact the bottom line.
As a partner sponsor of the IPA EffWorks initiative, Gain Theory has agreed to lead a 2018 research programme which will consolidate and take forward this green paper, with best practice case examples. We will be conducting our research between November 2017 and June 2018 across the UK, the USA and China.
Appendix:
MORE DETAILED ANALYSIS ON KEY METRICS
This analysis should be read in conjunction with Measurement Strategy in the Digital Era: Green Paper.
Here, we review our discussion of the strengths, weaknesses and opportunities around the different metrics in current practice.
Like for Like Profitable Sales
This is an important metric, and it should be set against the growth rates achieved by key competitors to provide context. However, as one correspondent noted, although like-for-like growth is important, it is not always a straightforward comparison. Products and services might be withdrawn due to exogenous factors (risk, for example, within financial services), or tech developments might simply make such comparisons misleading. So, often, proxies are used.
Price Sensitivity
According to Binet and Field (Media in Focus; IPA EffWorks 2017), just 16% of cases in the IPA Databank make any mention of price effects. And yet price sensitivity has to be one of the key metrics. In a world where private label alternatives are often indistinguishable from branded products, the ability to charge a premium price is a key benefit of marketing. As Laura Mazur writes in the preface to Doyle’s Value-Based Marketing, ‘…it enables the purpose of marketing in commercial firms to be clearly defined. Its purpose is to build intangible assets that increase shareholder returns.’ And the effect is easy to show using econometric or MMM studies. A simple example is given below.
Brand example operating within UK Healthcare (Over the Counter).
In this example, the elasticity moves closer to zero over time. This shows that consumers are becoming less price sensitive and that future price increases will cause fewer consumers to defect. This effect is extremely unlikely to be seen in a few months; to be sure of a trend rather than a temporary blip, one should run the analysis and consider the effect over a number of years, as noted by Binet and Field.
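As a hedged illustration of how such an elasticity trend might be produced, the sketch below re-estimates a log-log price elasticity over rolling annual windows. The data are synthetic, not the OTC healthcare brand’s figures.

```python
# Track price elasticity over time with a rolling log-log regression (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
weeks = 4 * 52
price = 5.0 + rng.normal(0, 0.3, weeks).cumsum() * 0.01 + rng.normal(0, 0.1, weeks)
# the "true" elasticity drifts from -1.2 towards -0.6 as the brand strengthens
true_elasticity = np.linspace(-1.2, -0.6, weeks)
volume = 1000 * np.exp(true_elasticity * np.log(price) + rng.normal(0, 0.05, weeks))

for year in range(4):
    window = slice(year * 52, (year + 1) * 52)
    slope, intercept = np.polyfit(np.log(price[window]), np.log(volume[window]), 1)
    print(f"year {year + 1}: elasticity = {slope:.2f}")
```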
And this brings us on to one of the key problems facing marketing effectiveness measurement strategies in this, or any other, age: the relatively short tenure of key personnel. Some correspondents noted that brand managers move around an organisation fairly quickly, often staying in place for less than 12 months – this is the time frame within which they need to see and measure success.
So, an effect that might take years to materialise (and monetise) and needs regular support is unlikely to find favour where tenure is short.
Return on Investment (ROI)
Within marketing effectiveness studies, focus often defaults to the marketing return on investment, or ROI. But is this sensible? In its defence, ROI is a seemingly well-understood diagnostic that can be readily benchmarked against direct competitors in this and other markets, as well as across verticals. Great. And yet, as Binet and Field note (Media in Focus; IPA EffWorks 2017), just 15% of submissions to the IPA Effectiveness Awards showed both a high ROI and strong profitable growth. Moreover, the metric has a number of shortcomings.
First, what actually is the ROI? At its simplest, it is just the incremental revenue caused by some event divided by its cost.
But let’s look at this in more detail. What does cost actually mean? Does it include just the direct media cost, or does it include creative cost too? What about agency fees? And if the creative is re-used, how do we factor this into the calculation? And what about supplier funding? For some retailers, this is around 30% of their media budget. Is it included or stripped out?
And that’s just costs. The incremental benefit is also open to interpretation. For example, the media activity may well increase brand consideration and purchase intent, which surely leads to increased future sales revenue. Is this included? Many media are both activators and enablers – how is the impact on other media assessed? What about pure upper funnel media? And is the benefit a short-term result, or does it include the long term too? And if so, how? Does it include changes to base sales, or perhaps consider marketing’s role in decreasing price sensitivity?
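The sensitivity of the headline number to these definitional choices is easy to demonstrate. The sketch below recalculates the same campaign’s ROI under different treatments of cost and benefit; every figure is illustrative.

```python
# How the headline ROI moves depending on what "cost" and "benefit" include (illustrative figures).
def roi(incremental_revenue, cost):
    return incremental_revenue / cost

media_cost = 1_000_000
creative_and_fees = 250_000
supplier_funding = 300_000          # e.g. trade funding some retailers receive
short_term_revenue = 1_800_000
longer_term_revenue = 900_000       # e.g. modelled base/brand effects

print("media cost only, short term:   ",
      round(roi(short_term_revenue, media_cost), 2))
print("all-in cost, short term:       ",
      round(roi(short_term_revenue, media_cost + creative_and_fees), 2))
print("net of supplier funding, total:",
      round(roi(short_term_revenue + longer_term_revenue,
                media_cost + creative_and_fees - supplier_funding), 2))
```

Three defensible definitions, three materially different ROIs: the point is not which is “right” but that comparisons are meaningless unless everyone uses the same definition.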
Nor is comparability straightforward, even within what one might consider the same vertical. Within FMCG, the ROI is likely to vary markedly between alcoholic and soft drinks. For example, on a £1m media campaign one has to sell many more cans of Coke than bottles of Tanqueray gin before a positive ROI is achieved, simply because of the difference in price.
So how should we use ROI? Really, it is best used to track changes campaign by campaign – has it improved, and if so, why? How many of the factors driving the change are controllable, and how can we protect ourselves from those over which we have no control? The metric can also play a useful role in allocating the media budget between channels, taking account of increasing and diminishing returns. But should it be used to set the media budget? Our panel think not: that should be done by circling back to hard business targets.
Long Term Effects
One of the key aims of marketing lies in its ability to change the tastes and preferences of both current and future/potential customers, thereby making them more likely to buy. There are a number of ways in which this might visibly affect sales volumes over time, but one is through an evolving level of base sales. This idea was popularised by Simon Broadbent (Broadbent and Fry, ‘Adstock modelling for the long term’, IJMR, 1995), and techniques that allow this phenomenon to be investigated are now widely available and allow attribution back to specific marketing initiatives.
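A minimal sketch of the idea, assuming Broadbent-style adstocks with one fast and one very slow decay rate (both illustrative), is shown below: the slowly decaying adstock lets advertising gradually raise the base level of sales alongside its short-term activation effect.

```python
# Short-term activation plus a slow "base-building" effect from the same spend (illustrative).
import numpy as np

def adstock(spend, decay):
    carried = np.zeros(len(spend))
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

weeks = 156
spend = np.zeros(weeks)
spend[::13] = 100.0                      # a burst of activity every quarter

short_term = 0.8 * adstock(spend, 0.3)   # decays within a few weeks
long_term = 0.05 * adstock(spend, 0.98)  # decays over years, shifting the base
base = 1000 + long_term
sales = base + short_term

print("base at week 1:  ", round(base[0], 1))
print("base at week 156:", round(base[-1], 1))
```

The base creeps up over three years of sustained support, which is exactly the kind of effect that short-tenured teams, reporting quarter by quarter, are least likely to claim credit for.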
Extra Share Of Voice (ESOV)
Other metrics also have problems associated with them. For example, ESOV is often referenced, building on the work of John Philip Jones in the 1990s. Binet and Field, for example, report that for every additional 10 points of extra share of voice, one might expect to see annual share growth of around 50 basis points.
However, the world is a much more complicated place now, and both market share and share of voice are hard to define in a way that makes them useful. First, what actually is market share – does the data exist? For many UK retailers outside of grocery, the answer is “no”. Estimates can obviously be made, based partly on annual reports, but it is often difficult to separate out international business from domestic and to focus on just those categories that are of interest. Other estimates can be made using official statistics from the Office for National Statistics but, again, it is hard to zero in on the exact sector of interest.
Nor is share of voice immune from criticism. It is well known that publicly available spend data for online media are dramatically underestimated, simply because of the long tail of websites not included in surveys.
You might know data for your own brand, but you will not know it for competitors. And the increasing importance of digital in all its forms makes this a serious omission. So ESOV may have been a relatively good predictor of success in the past, but it is less clear that it can be used now.
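For completeness, the rule of thumb quoted above is simple arithmetic, assuming the purely illustrative share and share-of-voice figures below.

```python
# ESOV rule of thumb: ~0.5 share points of annual growth per 10 points of ESOV (illustrative inputs).
share_of_voice = 0.25      # brand's share of category media spend
share_of_market = 0.15     # brand's market share
esov_points = (share_of_voice - share_of_market) * 100     # = 10 points
expected_share_growth = 0.05 * esov_points                 # share points per year
print(f"ESOV: {esov_points:.0f} points -> ~{expected_share_growth:.1f} share points of growth per year")
```

The arithmetic is trivial; the hard part, as the paragraphs above argue, is trusting the two inputs.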
Brand Health
Fran Cassidy’s research conducted on behalf of CIMA and the IPA for the new publication ‘Culture First’ (IPA EffWorks 2017) shows that brand health measures are routinely included as metrics that matter, with the most commonly referenced listed below:
- Brand Awareness | Salience | Brand Love
- Brand Affinity | Relevance | Reputation
- Brand Consideration
These metrics are likely to be key where marketing campaigns are focusing primarily on demand generation rather than driving short term effects.
Market Mix Modelling
We know that 58% of IPA submissions to the Effectiveness Awards between 2014 and 2016 included Market Mix Models (MMM) or other forms of econometric analysis. Increasingly, the panel report, brand health indicators are playing a more powerful role within these forms of analysis.
Some vendors offer a more integrated approach to marketing response and effectiveness, built around the purchase funnel. This allows one to trace the impact of upper funnel (demand generating) activity through a set of intermediate factors onto sales conversion, allowing all stages to be accurately valued. A relatively straightforward example: OOH media regularly struggles to show a demonstrable impact on sales within a standard MMM, arguably because much OOH activity is explicitly upper funnel – generating demand and building awareness rather than gathering demand. Funnel analysis is required to identify this impact; MMM alone is unlikely to be sufficient.
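A hedged, two-stage sketch of this funnel logic is given below: upper funnel spend is allowed to move an intermediate measure (awareness), which in turn moves sales, so the upper funnel effect is valued via the chain of coefficients. All data and coefficients are synthetic assumptions.

```python
# Two-stage funnel sketch: OOH -> awareness -> sales, alongside a direct lower-funnel channel.
import numpy as np

rng = np.random.default_rng(2)
weeks = 104
ooh = rng.gamma(2.0, 30.0, weeks)                        # upper-funnel spend (illustrative)
search = rng.gamma(2.0, 20.0, weeks)                     # lower-funnel spend (illustrative)
awareness = 40 + 0.05 * ooh + rng.normal(0, 1.0, weeks)
sales = 500 + 8.0 * awareness + 1.2 * search + rng.normal(0, 20, weeks)

# Stage 1: awareness regressed on OOH
a_coef = np.linalg.lstsq(np.column_stack([np.ones(weeks), ooh]), awareness, rcond=None)[0][1]
# Stage 2: sales regressed on awareness and search
X = np.column_stack([np.ones(weeks), awareness, search])
s_coefs = np.linalg.lstsq(X, sales, rcond=None)[0]

print("sales per unit of OOH spend, via awareness:", round(a_coef * s_coefs[1], 2))
print("sales per unit of search spend, direct:    ", round(s_coefs[2], 2))
```

A single-stage MMM with sales as the only dependent variable would struggle to credit OOH at all, because its effect is routed through awareness rather than acting directly on sales.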
Machine Learning and Experimentation
Machine Learning is neither a KPI nor a methodology in itself, but the increasing availability of Machine Learning software and cloud-based processing is making it easier to roll marketing effectiveness measurement out into ever more granular situations. Although there are many types of Machine Learning, one that is commonly used is supervised learning. As the name suggests, the ‘inference algorithm’ is developed using data where the outcome is known (bought, did not buy) before being used to predict answers for unknown outcomes. In most cases, learning is dynamic, with continual feedback loops.
A common criticism of Machine Learning is that it is a black-box solution: it is often not clear to the end user how an answer was derived. However, if predictive power is more important than “data storytelling”, then techniques from the world of Machine Learning can be very powerful.
Consider a very simple example: a piece of static digital display. Some small fraction of all impressions will lead to an action – a visit to a website, potentially a sales conversion. By varying elements of the creative design, it is relatively straightforward for a supervised learning approach to zero in on the particular aspects of the creative that are most successful at driving an action. Does one word resonate slightly better than another? Which font works best? What about colour? Is this picture slightly preferable to another? The accretion of all these marginal gains can lead to big improvements in impact. Note that the technique does not necessarily tell you why for any of these answers; it just tells you what works best. And for a piece of display copy with a short life cycle, this is probably enough.
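A minimal sketch of this kind of creative-element analysis, assuming scikit-learn is available and using entirely synthetic response data, might look like the following: each impression is described by which creative variants it used, and a simple classifier estimates which elements shift the probability of a click.

```python
# Which creative elements shift click probability? (synthetic data, illustrative effect sizes)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
headline_b = rng.integers(0, 2, n)     # 1 = alternative headline wording
font_serif = rng.integers(0, 2, n)     # 1 = serif font
image_b = rng.integers(0, 2, n)        # 1 = alternative picture

# Assume (for illustration) the alternative headline genuinely lifts response a little.
logit = -4.0 + 0.30 * headline_b + 0.05 * font_serif - 0.10 * image_b
clicked = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([headline_b, font_serif, image_b])
model = LogisticRegression().fit(X, clicked)
for name, coef in zip(["headline_b", "font_serif", "image_b"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

The output ranks the elements by their estimated effect; it does not explain why one headline outperforms another, which is exactly the “black box” trade-off noted above.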
Carefully designed experiments can also reveal the impact that different elements have on the final conversion decision within a closed digital environment. For example, by matching test and control groups, Facebook has demonstrated a clear impact for paid ads on its platforms, and Google has observed a similar phenomenon. Note that test and control does not necessarily describe the route by which something has worked (directly, or by amplifying other steps on the path to purchase), but a clear impact is observed. Machine Learning techniques are not strictly required for straightforward experiments like these, but given the volume of data being analysed, they can provide a convenient analytics framework.
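A simple test-and-control read of this kind boils down to a conversion lift and a significance check. The sketch below uses illustrative counts, not any platform’s actual results.

```python
# Conversion lift for an exposed group versus a matched holdout, with a two-proportion z-test.
from math import sqrt

test_users, test_conv = 100_000, 2_300   # exposed group (illustrative counts)
ctrl_users, ctrl_conv = 100_000, 2_000   # matched holdout (illustrative counts)

p_test, p_ctrl = test_conv / test_users, ctrl_conv / ctrl_users
lift = (p_test - p_ctrl) / p_ctrl

pooled = (test_conv + ctrl_conv) / (test_users + ctrl_users)
se = sqrt(pooled * (1 - pooled) * (1 / test_users + 1 / ctrl_users))
z = (p_test - p_ctrl) / se

print(f"lift: {lift:.1%}, z-score: {z:.1f}")   # ~15% lift, clearly significant
```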
ABOUT IPA EFFWORKS
The IPA’s new Marketing Effectiveness initiative seeks to create a global industry movement, to promote a marketing effectiveness culture in client and agency organisations, and improve our day-to-day working practices in three key areas:
marketing marketing: developing the case for marketing and brand investment in the short, medium and long term and promoting the benefits to internal and external stakeholders
managing marketing: providing awareness and understanding of how marketing works and how to write the best brief, develop the best process for planning and executing marketing programmes and motivating marketing and agency teams
measuring marketing: delivering the best models and guidance on tools and techniques, to plan, monitor, direct and measure the impact of marketing activity, using holistic approaches to return on investment.
This initiative takes the IPA’s effectiveness programme to a new level, working in collaboration with client advisors and association partners to showcase best-in-class, evidence-based decision-making across the marketing function. By bringing together the best people in the industry, Effectiveness Week (EffWeek) provides a trusted source of new thinking to address the issues that matter and an invaluable learning resource, under the umbrella of Effectiveness Works (EffWorks), our online hub.
As a partner sponsor of the IPA EffWorks initiative, Gain Theory has agreed to lead a 2018 research programme which will consolidate and take forward this green paper, with best practice case examples. We will be conducting our research between November 2017 and June 2018 in the UK, the USA and China.