MORE DETAILED ANALYSIS ON KEY METRICS
Like-for-like growth is an important metric, and it should also be set against growth rates achieved by key competitors to provide context to the number. However, as one correspondent noted, although like-for-like growth is important, it is not always a straightforward comparison. Products and services might be withdrawn due to exogenous factors (risk, for example, within financial services), or technological developments might simply make such comparisons misleading. So proxies are often used.
According to Binet and Field (Media in Focus; IPA EffWorks 2017), just 16% of cases in the IPA Databank make any mention of price effects. And yet price has to be one of the key metrics out there. In a world where private label alternatives are often indistinguishable from branded products, the ability to charge a premium price is a key benefit of marketing. As Laura Mazur writes in the preface to Doyle’s Value-Based Marketing ‘…it enables the purpose of marketing in commercial firms to be clearly defined. Its purpose is to build intangible assets that increase shareholder returns.’ And the effect is easy to show using econometric or MMM studies. A simple example is given on the next page.
Brand example operating within UK Healthcare (Over the Counter).
As you can see, the elasticity is moving closer to zero. This shows that consumers are becoming less price sensitive and that future price increases will cause fewer consumers to defect. An effect like this is extremely unlikely to show up within a few months – to be sure of a trend rather than a temporary blip, one should run the analysis and consider the effect over a number of years, as noted by Binet and Field.
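As a minimal sketch of how such an elasticity is estimated, the snippet below fits a log-log regression of volume on price, where the slope is the price elasticity. The price and volume figures are synthetic numbers invented for illustration, not data from the healthcare brand above.

```python
import numpy as np

# Hypothetical weekly price and volume observations for an OTC brand
# (synthetic numbers for illustration only).
price = np.array([2.00, 2.05, 2.10, 2.20, 2.25, 2.35, 2.40, 2.50])
volume = np.array([1000, 985, 968, 940, 930, 905, 895, 870])

# In a log-log specification, the slope is the price elasticity:
#   log(volume) = a + b * log(price)  =>  b = elasticity
elasticity, intercept = np.polyfit(np.log(price), np.log(volume), 1)

# A negative value closer to zero means consumers are less price sensitive.
print(round(elasticity, 2))
```

In a real MMM, this regression would also control for media, seasonality and distribution; tracking the fitted elasticity year by year is what reveals the trend towards zero described above.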
And this brings us to one of the key problems facing marketing effectiveness measurement strategies in this, or any other, age: the relatively short tenure of key personnel. Some correspondents noted that brand managers move around an organisation fairly quickly, often staying in place for less than 12 months – and this is the time frame within which they need to see and measure success.
So, an effect that might take years to materialise (and monetise) and needs regular support is unlikely to find favour where tenure is short.
Within marketing effectiveness studies, focus often defaults to the marketing return on investment (ROI). But is this sensible? In its defence, ROI is seemingly a well-understood diagnostic that can be readily benchmarked against direct competitors in this and other markets, as well as across verticals. And yet, as Binet and Field noted (Media in Focus; IPA EffWorks 2017), just 15% of submissions to the IPA Effectiveness Awards showed both a high ROI and strong profitable growth. Moreover, the metric has a number of shortcomings.
First, what actually is the ROI? At its simplest, it is just the incremental revenue caused by some event divided by its cost.
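In code, the simplest form of the calculation is a one-liner; the campaign figures below are assumed numbers for illustration.

```python
def marketing_roi(incremental_revenue, cost):
    """Simplest form of marketing ROI: incremental revenue per unit of spend."""
    return incremental_revenue / cost

# A hypothetical campaign generating £3.0m of incremental revenue
# on £1.2m of total cost:
print(marketing_roi(3_000_000, 1_200_000))  # 2.5
```

Every complication discussed below – creative costs, agency fees, supplier funding, long-term revenue – amounts to a dispute over what goes into the numerator and the denominator of this ratio.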
But let’s look at this in more detail. What does cost actually mean? Does it include just the direct media cost, or does it include creative cost too? What about agency fees? And if the creative is re-used, how do we factor this into the calculation? And what about supplier funding? For some retailers, this accounts for around 30% of the media budget. Is it included or stripped out?
And that’s just costs. The incremental benefit is also open to interpretation. For example, the media activity may well increase brand consideration and purchase intent, which surely leads to increased future sales revenue. Is this included? Many media are both activators and enablers – how is the impact on other media assessed? What about pure upper-funnel media? Is the benefit a short-term result, or does it include the long term too? And if so, how? Does it include changes to base sales, or perhaps consider marketing’s role in decreasing price sensitivity?
Nor is comparability straightforward, even within what one might consider the same vertical. Within FMCG, ROI is likely to vary markedly between alcoholic and soft drinks: on a £1m media campaign, one has to sell many more cans of Coke than bottles of Tanqueray gin before a positive ROI is achieved, simply because of the difference in price. So how should we use ROI? It is best used to track changes campaign by campaign – has it improved, and if so, why? How many of the factors driving the change are controllable, and how can we protect ourselves from those over which we have no control? The metric can also play a useful role in allocating the media budget between channels, taking account of increasing and diminishing returns. But should it be used to set the media budget? Our panel thinks not: that should be done by circling back to hard business targets.
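The break-even arithmetic behind that Coke-versus-gin point is simple to sketch. The per-unit margins below are assumptions invented purely to illustrate the scale difference, not real figures for either brand.

```python
# Why ROI comparisons across categories mislead: the same media budget
# needs vastly different unit volumes to break even.
media_cost = 1_000_000  # £1m campaign

# Assumed per-unit margins for illustration only:
margin_per_can_of_soft_drink = 0.10  # £ per can
margin_per_bottle_of_gin = 7.50      # £ per bottle

breakeven_soft_drink = media_cost / margin_per_can_of_soft_drink
breakeven_gin = media_cost / margin_per_bottle_of_gin

print(int(breakeven_soft_drink))  # 10,000,000 incremental cans
print(int(breakeven_gin))         # 133,333 incremental bottles
```

Two campaigns with identical effectiveness can therefore report very different ROIs, which is why the metric works better for tracking change within a brand than for comparing across categories.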
One of the key aims of marketing lies in its ability to change the tastes and preferences of both current and future/potential customers, thereby making them more likely to buy. There are a number of ways in which this might visibly affect sales volumes over time, but one is through an evolving level of base sales. This idea was popularised by Simon Broadbent [Adstock modelling for the long term; IJMR, 1995, Broadbent and Fry], and techniques that allow this phenomenon to be investigated are now widely available and allow attribution back to specific marketing initiatives.
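The core of Broadbent’s approach is the adstock transformation: media pressure decays geometrically rather than vanishing at the end of a campaign, so advertising effects carry over into later periods. A minimal sketch follows; the decay rate of 0.7 is an assumed illustration, since in practice it is estimated from the data.

```python
def adstock(grps, decay=0.7):
    """Broadbent-style geometric adstock:
        adstock_t = grps_t + decay * adstock_{t-1}
    `decay` (the weekly retention rate) is an assumption here;
    in an MMM it would be estimated econometrically."""
    out, carry = [], 0.0
    for x in grps:
        carry = x + decay * carry
        out.append(carry)
    return out

# A single burst of 100 GRPs in week 1 continues to exert pressure
# in the weeks that follow:
print([round(v, 1) for v in adstock([100, 0, 0, 0])])  # [100.0, 70.0, 49.0, 34.3]
```

Fitting sales against adstocked (rather than raw) media is what allows long-lived effects, including a slowly rising base, to be attributed back to specific campaigns.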
Other metrics also have problems associated with them. For example, ESOV is often referenced, building on the work of John Philip Jones in the 1990s. Binet and Field, for example, report that for every additional 10 points of share of voice over share of market, one might expect to see share growth of 50 basis points.
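That rule of thumb can be expressed directly; the 25%/15% brand below is a made-up example, and the 0.05 coefficient is simply the reported average restated per ESOV point, with wide variation by category in practice.

```python
def expected_share_growth(sov, som, beta=0.05):
    """Binet & Field rule of thumb: roughly 0.5 points of annual share
    growth per 10 points of excess share of voice (ESOV = SOV - SOM).
    `beta` = 0.05 share points per ESOV point restates that average;
    actual responsiveness varies widely by category and brand size."""
    esov = sov - som
    return beta * esov

# A hypothetical brand with 25% share of voice and a 15% market share
# (i.e. 10 points of ESOV):
print(expected_share_growth(25, 15))  # 0.5 share points per annum
```

The formula is only as good as its two inputs, and as the next paragraphs argue, both market share and share of voice are now hard to measure well.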
However, the world is a much more complicated place now, and both market share and share of voice are hard to define in a way that makes them useful. First, what actually is market share – does the data exist? For many UK retailers outside of grocery, the answer is “no”. Estimates can obviously be made, based partly on annual reports, but it is often difficult to separate out international business from domestic and to focus on just those categories that are of interest. Other estimates can be made using official statistics from the Office for National Statistics but, again, it is hard to zero in on the exact sector of interest.
Nor is share of voice immune from criticism. It is well known that publicly available data for online media dramatically underestimate spend, simply because of the long tail of websites not included in surveys. You might know the data for your own brand, but you will not know it for competitors. And the increasing importance of digital in all its forms makes this a serious omission. So ESOV may have been a relatively good predictor of success in the past, but it is less clear that it can be used now.
Fran Cassidy’s research conducted on behalf of CIMA and the IPA for the new publication ‘Culture First’ (IPA Effworks 2017) shows that brand health measures are routinely included as metrics that matter, with the most commonly referenced listed below:
These metrics are likely to be key where marketing campaigns are focusing primarily on demand generation rather than driving short term effects.
We know that 58% of IPA submissions to the Effectiveness Awards between 2014 and 2016 included marketing mix models (MMM) or other forms of econometric analysis. Increasingly, the panel reported that they are seeing these brand indicators playing a more powerful role within these forms of analysis.
Some vendors offer a more integrated approach to marketing response and effectiveness, considering more of a funnel approach. This allows one to trace the impact of upper-funnel (demand generating) activity through a set of intermediate factors onto sales conversion, allowing all stages to be accurately valued. A relatively straightforward example is that OOH media regularly struggles to have a demonstrable impact on sales within a standard MMM. This is arguably because much OOH activity is explicitly upper funnel – generating demand and building awareness – rather than a demand-gathering medium. Funnel analysis is required to identify this impact; MMM alone is unlikely to be sufficient.
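A stylised sketch of the funnel idea: estimate each stage separately and chain the coefficients, so upper-funnel media gets credit for the sales it drives indirectly via awareness. All the numbers and response shapes below are assumptions constructed for illustration, not output from any vendor’s model.

```python
import numpy as np

# Synthetic two-stage funnel (assumed linear responses for simplicity):
#   stage 1: OOH spend (£k) lifts brand awareness (index points)
#   stage 2: awareness lifts sales (units)
# A single-stage MMM regressing sales directly on OOH can miss this route.
ooh_spend = np.array([0, 50, 100, 150, 200, 250], dtype=float)
awareness = 20 + 0.04 * ooh_spend   # assumed stage-1 response
sales = 500 + 30 * awareness        # assumed stage-2 response

# Fit each stage, then chain the slopes:
b1 = np.polyfit(ooh_spend, awareness, 1)[0]  # awareness points per £k of OOH
b2 = np.polyfit(awareness, sales, 1)[0]      # sales units per awareness point

print(round(b1 * b2, 2))  # implied sales units per £k of OOH, via the funnel
```

In a real funnel model the stages would be multivariate and estimated jointly, but the principle is the same: the value of demand-generating media is the product of its effect on the intermediate metric and that metric’s effect on sales.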
Machine Learning is neither a KPI nor a methodology in itself, but the increasing availability of Machine Learning software and cloud-based processing solutions is increasing the ease with which marketing effectiveness can be rolled out into ever more granular situations. Although there are many types of Machine Learning, one that is commonly used is Supervised Learning. As the name suggests, the ‘inference algorithm’ is developed using data where the outcome is known (buy, did not buy) before being used to predict answers for unknown outcomes. In most cases, learning is dynamic, with continual feedback loops.
A common criticism of Machine Learning is that it is a black box solution – it is often not clear to the end user how an answer was derived. However, if predictive power is more important than “data story-telling”, then techniques from the world of Machine Learning can be very powerful.
Consider a very simple example – a piece of static digital display. Some small fraction of all the impressions will lead to an action – a visit to a website, potentially a sales conversion. By varying elements of the creative design, it is relatively straightforward for a supervised learning approach to zero in on the particular aspects of the creative that are most successful at driving an action. For example, does one word resonate slightly better than another? Which font works best? What about colour? Is this picture slightly preferable to another? The accretion of all these marginal gains can lead to big improvements in impact. Note, the technique does not necessarily tell you “why” for any of these answers; it just tells you which works best. And for a piece of display copy with a short life cycle, this is probably enough.
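As a minimal sketch of that supervised approach: describe each impression by binary features of the creative (which headline, which background, which font), label it with whether it was clicked, and fit a simple logistic regression by gradient descent. All data here is simulated under assumed “ground truth” effects, purely to show the mechanics.

```python
import numpy as np

# Simulated impressions (assumptions throughout): 3 binary creative
# features, e.g. headline B used, red background, serif font.
rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Assumed ground truth: headline B helps, red hurts, font is neutral.
logits = -2.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)  # clicked?

# Logistic regression via plain gradient descent (no library model,
# to keep the sketch self-contained):
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted click probability
    w -= 0.1 * X.T @ (p - y) / n         # gradient step on weights
    b -= 0.1 * np.mean(p - y)            # gradient step on intercept

print(np.round(w, 2))  # learned weights: which elements drive clicks
```

As the section notes, the sign and size of each weight tell you *which* element works, not *why* – and for short-lived display copy, that is usually enough to bank the marginal gains.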
Carefully designed experiments can also reveal the impact that different elements can have on the final conversion decision within a closed digital environment. For example, by matching test and control groups, Facebook has demonstrated a clear impact for paid ads on its platforms, and Google has observed a similar phenomenon. Note that a test-and-control design does not necessarily describe the route by which something has worked (directly, or by amplifying other steps on the path to purchase), but a clear impact is observed. Machine Learning techniques are not strictly required for straightforward experiments like these, but given the volume of data being analysed, they can provide a convenient analytics framework.
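The basic readout of such a matched test/control experiment needs nothing more than a lift calculation and a significance check. The group sizes and conversion counts below are invented for illustration; the two-proportion z-test is a standard, not platform-specific, way to judge whether the exposed group genuinely converted more often.

```python
from math import sqrt

# Hypothetical matched test/control readout (assumed numbers):
exposed_n, exposed_conv = 50_000, 1_250   # exposed group: 2.5% conversion
control_n, control_conv = 50_000, 1_000   # held-out group: 2.0% conversion

p1, p2 = exposed_conv / exposed_n, control_conv / control_n
lift = (p1 - p2) / p2  # relative uplift attributable to exposure

# Two-proportion z-test on the difference in conversion rates:
p_pool = (exposed_conv + control_conv) / (exposed_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
z = (p1 - p2) / se

print(f"lift = {lift:.1%}, z = {z:.1f}")  # lift = 25.0%, z = 5.3
```

A z-score this large would be strong evidence of an effect – though, as noted above, it says nothing about the route (direct response versus amplifying other steps on the path to purchase) by which the ads worked.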
The IPA’s new Marketing Effectiveness initiative seeks to create a global industry movement, to promote a marketing effectiveness culture in client and agency organisations, and improve our day-to-day working practices in three key areas:
marketing marketing: developing the case for marketing and brand investment in the short, medium and long term and promoting the benefits to internal and external stakeholders
managing marketing: providing awareness and understanding of how marketing works and how to write the best brief, develop the best process for planning and executing marketing programmes and motivating marketing and agency teams
measuring marketing: delivering the best models and guidance on tools and techniques, to plan, monitor, direct and measure the impact of marketing activity, using holistic approaches to return on investment.
This initiative takes the IPA’s effectiveness programme to a new level, working in collaboration with client advisors and association partners to showcase best-in-class, evidence-based decision-making across the marketing function. By bringing together the best people in the industry, Effectiveness Week (EffWeek) provides a trusted source of new thinking to address the issues that matter and an invaluable learning resource, under the umbrella of Effectiveness Works (EffWorks), our online hub.
As a partner sponsor of the IPA EffWorks initiative, Gain Theory has agreed to lead a 2018 research programme which will consolidate and take forward this green paper, with best-practice case examples. We will conduct our research between November 2017 and June 2018 in the UK, the USA and China.