BY CHRIS SLOANE
Too little frequency and the campaign risks having no impact; too much and advertisers are likely to be wasting money. And while it is true that even a perfect advertising schedule will not make an ineffective ad effective, it is also true that good scheduling can improve the payback of a campaign. There has been widespread debate since the 1970s about whether a low-frequency, high-reach strategy is better than a high-frequency, lower-reach one. This debate has intensified in recent years, with the balance tipping towards the former approach.
Need to know

There are several points that media planners should be aware of when designing a schedule to invest their clients’ money; understanding these will improve, and help to justify, their planning:

1. Can they estimate the relative benefit of subsequent exposures? Various factors can make subsequent exposures in a given time period more desirable, such as advertising NPD versus established brands, or the level of competitor pressure.
2. Is it a seasonal campaign? For a retailer advertising at Christmas or Easter, for example, multiple exposures in a short period of time are likely to be beneficial.
3. What is the degree of ad avoidance by media channel? The higher this is, the more frequency is to be valued, and it should influence how each channel is planned – perhaps tight frequency capping for online isn’t always so desirable after all.
4. What levels of advertising and brand recall can be expected for the message that is to be delivered? The lower these are, the greater the role frequency has to play.
5. It is incumbent upon media planners not to take a one-size-fits-all approach to their clients’ planning requirements; it is imperative to understand the nuance of the campaign, the brand and the category.

Media planning in today’s world should be an exciting challenge, and the evolving role of frequency makes it more so.
Frequency: is three the magic number?

In most respects, the frequency debate began in 1972 with Herb Krugman’s research paper, ‘Why Three Exposures May Be Enough’, published in the Journal of Advertising Research (vol. 12, no. 6) and written when he was head of advertising research at General Electric.
Although there had been studies of frequency and the shape of the advertising response function before Krugman’s 1972 paper [1], it was the first to gain traction with advertising agency planners and to offer an answer to the question of how many exposures are required to trigger a response, and over what time period. Krugman’s hypothesis was that, psychologically speaking, there are three exposures:
1. First exposure: this is unique, and an individual’s reaction is dominated by a response best characterized as ‘What is it?’
2. Second exposure: also unique. It is the only exposure that can be characterized by a response Krugman termed ‘What of it?’, i.e. the individual knows that they have seen this before and there is an element of recognition.
3. Subsequent exposures: categorized by Krugman as ‘psychologically identical’, i.e. reminders of earlier exposures and the start of the withdrawal of the attention that had previously been built up.

Krugman’s position was that an effective response from a single exposure was unlikely, and that advertising did not require a ‘large’ number of exposures for it to work. His concept of three psychological exposures did not correlate with actual exposures – they could happen at any time, so the tenth actual exposure might be the first psychologically (because the consumer wasn’t in the market for the product earlier).
This was crucial, and arguably ill-understood: three, by itself, was never the magic number.

At the end of the 1970s, Mike Naples of the Association of National Advertisers (ANA) in the US conducted a review of what was known about frequency, partly because of “escalating media costs in recent years, especially in television, and the increased concern among advertisers not to spend more than is necessary” [2]. Naples ended his report with 12 conclusions, several of which bear repeating:

“One exposure of an advertisement to a target group consumer…has little or no effect.”

“…the central goal of productive media planning should be to place emphasis on enhancing frequency rather than reach.”

“…optimal exposure frequency appears to be at least three exposures within a purchase cycle.”

“Beyond three exposures…increasing frequency continues to build advertising effectiveness [albeit] at a decreasing rate.” [3]

This review (intentionally or not) created a widely adopted rule of thumb that a frequency of 3+ over the course of a purchase cycle should be targeted. It led to a preponderance of burst planning – putting out a mass of GRPs in a short period of time to hit the magical 3+ number, then going silent for long periods when the media dollars dried up.
What about recency?

The frequency approach remained largely uncontroversial until the mid-1990s, when first John Philip Jones, analytically [4], and then Erwin Ephron, more intuitively, challenged the idea that a certain level of frequency was required for advertising to be effective: “Perhaps effective frequency was never right. Perhaps it is just wrong for today’s consumer markets. Either way, we better start rethinking how to spend the client’s money.”

The central tenet of what Ephron termed ‘recency planning’ is that the real media target isn’t consumers, it’s purchases, and since in almost all categories purchases occur 52 weeks a year, the most sensible way to intercept them is with continuous advertising.
Even for a high-ticket item that lasts for many years (like a car or a consumer durable), or a specific repeat purchase (insurance), there are enough people in the market every week of the year to make a continuity schedule optimal if the first exposure is most effective.
The ideas behind recency planning, or ‘continuous reach planning’, came to the attention of a wider audience following the publication of Byron Sharp’s ‘How Brands Grow’ in 2010. The Ehrenberg-Bass Institute’s view of the advertising response function is firmly that “reach is more important than frequency of exposure; continuous advertising is more effective than bursts followed by long gaps” [5]. The book was followed by (or at least reinforced) the adoption of ‘reach-based planning’ methodologies by many large corporations, with greater or lesser success.
The final stop on this tour of the development of frequency thinking takes us back to Ephron. By 2010, his point of view on the reach/frequency debate had evolved to the point where he described recency planning (creating plans that value reach and avoid frequency) as “…flawed. Audience exposure to an ad is no longer a given; it’s a probability of less than one. With multi-tasking and commercial avoidance, a large part of a program’s reported audience will not see the advertiser’s message. That means it will often take more than a single exposure to reach a consumer with an ad. And that calls for frequency.” [6]
Testing the theory

How accurate was Ephron’s change of heart? And how important might it be for planning? Well, the fact that adverts are avoided, ignored and incorrectly branded is certainly uncontroversial. In ‘How Brands Grow’, Sharp noted that fewer than 20% of television advertisements are noticed and correctly branded, and in a 2014 Admap article that “a mere 14% of online advertisements were seen by their (paid-for) ‘viewers’”.
While Sharp points out that advertising can work without people paying it much attention (Robert Heath’s Low Attention processing model springs to mind), he notes that ‘it is better if advertising can generate more conscious attention and processing.’
If this is true, what of Ephron’s view that it gives frequency a second chance even if the biggest response is the first response? To test this, we developed a simulation of a brand’s target audience, populating the simulation with 1,000 individual agents with different TV consumption habits to replicate the real world. We then ran various mock media schedules to see the level of reach achieved at each frequency and allowed each frequency to have a different effectiveness.
There are four Hypotheses about the value of each successive exposure:

1. H1: every exposure is equally effective.
2. H2: a ‘3+’ response – early exposures have little effect and the response builds from the third exposure onwards.
3. H3: the first exposure is the most effective, with subsequent exposures progressively less so.
4. H4: the first exposure is the most effective and every subsequent exposure is much less effective.
Clearly, each Hypothesis could have different weights, but to test the system let’s go with the above and see the effect of different media schedules; we will then extend this to a world of ad avoidance to study the implications. Each Hypothesis was tested against four media schedules with identical budgets. Schedule A spends 200 GRPs in one week and then has seven weeks of zero GRPs (200,0,0,0,0,0,0,0); Schedule B is (100,0,0,0,100,0,0,0); Schedule C is (50,0,50,0,50,0,50,0); and Schedule D is continuous (25,25,25,25,25,25,25,25). Figure 2 shows the relative effectiveness of each media schedule and should be read by column: the highest number in each column shows which schedule is ‘best’ under the assumptions we have made in each Hypothesis.
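The article does not publish the simulation’s internals, so the sketch below is a minimal Monte Carlo model under assumed parameters: each agent has its own weekly probability of viewing TV, OTS counts within a viewing week are Poisson in that week’s GRPs, each Hypothesis is a per-exposure weight function, and `avoidance` is the chance an OTS goes unprocessed. The function names, weights and viewing model are all illustrative, not the author’s.

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson count via Knuth's method (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def simulate(schedule, weight, n_agents=1000, n_runs=100,
             avoidance=0.0, seed=0):
    """Average total response to a weekly GRP schedule.

    weight(n) is the assumed value of an agent's n-th processed
    exposure; avoidance is the chance an OTS is ignored/mis-branded.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        run_response = 0.0
        for _ in range(n_agents):
            q = rng.random()        # agent's chance of viewing TV in a week
            exposures = 0
            for grps in schedule:
                if grps == 0 or rng.random() >= q:
                    continue        # agent not viewing this week
                for _ in range(poisson(rng, grps / 100.0)):
                    if rng.random() < avoidance:
                        continue    # OTS not seen / not processed
                    exposures += 1
                    run_response += weight(exposures)
        total += run_response
    return total / n_runs

# Illustrative per-exposure weights for two of the four Hypotheses
# (the article's actual weights are not published):
h1 = lambda n: 1.0                       # every exposure equal
h4 = lambda n: 1.0 if n == 1 else 0.1    # first exposure dominates

schedule_a = [200, 0, 0, 0, 0, 0, 0, 0]  # pure burst
schedule_d = [25] * 8                     # continuous
```

In a model of this shape, the four schedules come out roughly equal under h1 (a sanity check, since expected exposures depend only on total GRPs), while under h4 the continuous schedule_d beats the burst because it reaches more distinct viewers; raising `avoidance` towards 0.8 narrows that gap, mirroring the pattern the article reports between Figures 2 and 3.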
The results in Figure 2 make sense: H1 sees each schedule as equally effective; H2 (the 3+ example) gets its greatest effect from the higher-weight Schedules A and B, with C and D progressively worse. The two reach-based Hypotheses (H3 and H4) do progressively better the more the advertising is spread out, with Schedule D (the low-weight continuity strategy) twice as good as Schedule A (the pure burst) for H4 [7].
What happens if we add in an ad-avoidance metric? If we take Sharp’s benchmark that 80% of TV ads are ignored or incorrectly branded, and assume that 80% of each OTS is not seen, not processed or incorrectly branded, how does this change the results? The results can be seen in Figure 3. Now H2 sees more overall impact from Schedule A, and all schedules are similar for H3. Only in H4 is Schedule D still the most effective, and its advantage has come down from +99% to +24%.
Clearly, in the above study Hypothesis 1 is unlikely (we used it to verify the model); but unless every exposure after the first is much less effective (H4), there are potential benefits to a slightly higher frequency, particularly in a world of ad avoidance. We would argue that testing the shape of the response is important, particularly for newer brands or categories: even if the first exposure is best, how much better is it? The answer will affect the optimal media plan, particularly under conditions of greater ad avoidance. Given the results shown here, Ephron was probably over-playing the importance of frequency if the first exposure is always the most effective, but it is possible to envisage reasonable scenarios where a flat, low-weight plan might not always be best.
Sources

1. See Leon A. Jakobovits, ‘Semantic Satiation and Cognitive Dynamics’, American Psychological Association meeting paper, September 1966; Valentine Appel, ‘The Reliability and Decay of Advertising Measurements’, National Industrial Conference Board meeting paper, October 1966; Robert C. Grass, ‘Satiation Effects of Advertising’, 14th Annual Conference, Advertising Research Foundation, 1968.
2. Krugman, H. (1979) Foreword to the ANA report Effective Frequency: The Relationship Between Frequency and Advertising Effectiveness.
3. Naples, M. (1979) Effective Frequency: The Relationship Between Frequency and Advertising Effectiveness, McGraw-Hill.
4. Jones, John Philip (1995) When Ads Work: New Proof that Advertising Triggers Sales, PHI.
5. Sharp, B. (2010) How Brands Grow: What Marketers Don’t Know, Oxford University Press.
6. Ephron, E. (April 10, 2010) ‘Back to the Future: or Frequency’s Second Chance’.
7. The results shown here are built up using a Monte Carlo estimation procedure. The reach and frequency calculations are built from the individual level using media consumption data, so each run of the estimation produces slightly different results; those shown are the average of 100 simulations.
Chris Sloane is a marketing effectiveness professional with 14 years’ experience applying advanced analytical techniques, including econometrics and multiple regression, for a range of clients.
He has worked within WPP in a variety of roles, including Head of Business Science for MediaCom in Australia and as a Partner for Ohal. He holds an MSc in Economics and a BSc in Economics & Politics.