- The Big Lie – the complete book online
How long is a piece of baloney?
Not everything that can be counted counts, and not everything that counts can be counted.
Albert Einstein, German physicist
Most people believe that the advertisements they see are the crafted product of some sort of scientific assessment, and advertising ideas are usually sold with this phrase: “It researched well”. What does that mean? At one extreme it may occasionally signify that the advertisement performed well in a laborious behavioural test such as a “single-source” investigation. In this type of research, purchasing habits may be correlated with television viewing: a compliant household is fitted out with a meter on top of the television set – perhaps even using heat sensors to determine how many people are in the room when it’s on – and the family uses a scanner to record all its grocery purchases. Though the method is empirically based, the emphasis is on technological gimmickry, and the data is often too broad to pin down specific reactions to individual commercials. And real-life techniques like these are not suitable for assessing ideas before they are turned into expensive films. More often, the palliative phrase “It researched well” refers to a methodology which is far simpler, cheaper, more convenient, and more manipulable: it means that the advertisement or the concept was shown to a number of people who made favourable comments of some kind – in a “focus group”.
Like accountants appraising the productivity of the National Health Service, advertising researchers measure what they are able to measure, not necessarily what really matters. For instance, it is relatively easy to evaluate whether people are likely to have seen advertising, whether they remember seeing it, what message it conveyed, and whether they liked it. And so advertising is designed to meet these criteria. For many decades the American company Procter & Gamble used a research company called Burke based in its home town of Cincinnati, Ohio, to evaluate its television commercials. Burke’s primary instrument of measurement was whether viewers could recall the selling points contained in the advertising. Even today, wherever you see them, commercials for P&G products can be identified by their emphasis on implanting copy messages. While advertisers and agencies rejoice when their efforts pass simple tests like these, there is no evidence at all that liking an advertisement or the ability to recall its content has any effect whatsoever on creating “the sale in the mind” – the disposition to buy a product or subscribe to a belief.
So, more diligent advertising researchers have developed an inventory of other tools which attempt to measure the psychological influence of advertising on consumer judgement. These efforts are handicapped by the fact that consumers are usually unwilling to admit, or may not even be aware of, the factors influencing their decisions. Straightforward declarations of “intentions to buy” are not a valid guide to actual purchase decisions. Shifts in attitudes, the tendency to agree or disagree with certain opinions, may be reliably recorded, but the contribution of these views to behaviour can rarely be conclusively demonstrated. (Relatively few advertisements are tested in any way at all; time and cost pressures restrict research to major campaigns, and not all of these.) Advertising research is fraught with methodological problems, too. In an analogy to the observer effect in physics, the simple act of asking questions introduces elements, such as bias for or against the interviewer, which change the very situation researchers are trying to measure.
Even worldly-wise institutions which should know better have a touching faith in naive research based on what people say they may or may not do in a given set of circumstances. A 1995 leader in The Independent drew robust conclusions from a survey which had discovered that six out of ten people said they were ready to pay 2p in the pound income tax to boost NHS revenue, an extraordinary expression of altruism from voters generally deeply resentful of new taxes. Fortunately for the Labour Party, it was not sufficiently encouraged to include this idea in its 1996 election manifesto.
In statistical studies of advertising effectiveness, expenditure is given great weight, because it is an index of exposure to the message and something that is easily measured. Clearly, saturation techniques can make a dominant impression, even through quite meaningless advertising messages, e.g. “Heineken refreshes the parts other beers cannot reach”. As David Abbott, the creative director who founded one of Britain’s largest advertising agencies, Abbott Mead Vickers BBDO, put it: “On TV you can achieve results even with quite bad advertising if you spend enough on it . . . ‘If you throw enough pennies at the wall you make a hole in it’”. The question is, what kind of “results” does he have in mind? The extent of penetration of such associations into the public mind can easily be measured. But persuasion is usually left out of the equation.
The quest has inspired any number of theories about “how advertising works”. These range from the so-called “linear sequential models” like Starch, DAGMAR and AIDA,1 through H. E. Krugman’s theory that the only thing that matters is top-of-mind (“salient”) recall, to attempts to apply sophisticated theories drawn from psychology, like Martin Fishbein’s, dealing with the relationship between attitudes and behaviour.2 There is a conscientious handbook, written by the former chairman of a successful Dutch advertising agency, which summarises empirical findings of the many different things which can be measured about advertising exposure.3 What they all have in common is the attempt to find some intermediate criterion of advertising effect on the unproved assumption that it is related to sales. Not only is there no proof, there is also no reason why recalling the content of an advertisement, or literally believing what it says, should in any way determine the likelihood of buying the product. Furthermore, the attempt to find one final model of the advertising process is forlorn. Advertising, like any other form of communication, works in different ways, in different circumstances, and on different people. The same advertisement can even work on the same person differently depending on the circumstances – what he has just eaten, for example, and whether he enjoyed it. No wonder Raymond Chandler’s private detective hero, Philip Marlowe, described playing chess with oneself as the greatest waste of intellectual energy outside of an advertising agency.
While market research techniques are good at measuring what is already happening, they are poor at predicting behaviour, for several reasons. Pre-testing cannot easily assess anything which is entirely new to consumers’ experience. For competitive brands, an assessment of an advertisement’s performance is only relevant if it is in comparison to other brands, yet few studies take this into account. Then there is the publicity environment of the marketplace. Anyone who could reliably predict next week’s “Top Ten” records could make a small fortune. People have tried, but there is no test situation which can allow for the hype factor, the massive influence of publicity on radio, television, and the press which affects the sale of hit records.
Nevertheless, advertising research thrives, for two reasons. First, there is a great deal it can do, at every stage of creative development, to determine whether the campaign is “on message”: it can tell whether the idea is getting across (though not whether it is worth getting across) and how people react to it. Second, and crucially, it provides everyone concerned with the creation or approval of advertising with pseudo-scientific support for his or her decisions. This rubber crutch is a versatile and powerful weapon against opposing opinion, because few decision-makers in the industry have a real grasp of the basic principles of psychological research or statistics – or, if they do, they have no incentive to invoke them to limit the sweep of their judgement.
To fulfil this need, the research industry has for some decades habitually evaluated advertising on the basis of semi-structured group interviews. An interviewer guides a series of discussions, each probing chosen topics amongst homogeneous groups of six to ten individuals of the right sort – the primary target group. Where specific advertising approaches are being tested, the group will be exposed to these in rough or finished form. Properly selected and skilfully managed, these discussions can provide very useful stimulation to the development of advertising hypotheses, allowing the advertising planner or creative person to amplify intuitive thought by bouncing ideas off consumers. Researchers rigorously emphasise that such explorations are entirely qualitative in nature, and thus possess no quantitative statistical validity. Like a newspaper’s rough-and-ready “vox pop” man-in-the-street interview on an issue of the day, they can provide insight, colour, and inspiration. But not measurement.
Nevertheless, in the way of the world, such qualitative results are invariably used to support judgement. The researchers themselves, although they know better, cannot refrain from using statistical terms in the analytical reports they submit: “most respondents felt . . .”, “a majority of the sample agreed . . .”, “hardly anyone thought . . .”, and so on. Those who take action on the basis of such reports are generally untrained in statistics and keen to support a favoured view. The combination is irresistible. In their hands the tentative, qualitative probe becomes a yardstick. This technique used to be called, unpretentiously, “small-group discussions”, but some bright spark eventually reckoned that this didn’t sound very authoritative. So now they are known as “focus groups”, which implies a greater precision, and since this repackaging they have been generally accepted by advertisers, journalists, politicians, and the world at large as some kind of litmus test. Only the name has changed, not the methodology; but because they are quick and cheap, and can be interpreted to produce almost any answer you want, such probes are now widely advanced by the unprincipled and the unenlightened as valid predictive research.
Gerald de Groot ran Schwerin’s advertising testing operation in the UK, and later became Director of Marketing Services for Lintas, the advertising company partly owned by Unilever, and Chairman of the British Market Research Society. He believes that quantitative measurements such as Schwerin’s went out of favour for practical reasons. “In many product fields ‘competitive preference’ shifts were quite small, and it was impractical to get large enough samples to achieve statistically significant measurements. Commercial pressures prevailed in advertising research. Advertisers preferred speed to accuracy”.
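De Groot’s point about sample sizes can be made concrete with a little arithmetic. The sketch below is purely illustrative – the preference figures (a shift from 30 to 32 per cent) and the conventional significance and power levels are assumptions of mine, not drawn from the book – and uses the standard normal-approximation formula for comparing two proportions.

```python
# How many respondents does it take to detect a small shift in
# "competitive preference" between two independent test cells?
# Standard normal-approximation sample-size formula for two proportions.
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate respondents needed per cell to detect a shift from
    p1 to p2 at the given significance level and statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A two-point shift in preference (30% -> 32%) needs over eight
# thousand respondents in each cell before it is reliably detectable.
print(sample_size_two_proportions(0.30, 0.32))  # → 8391
```

A shift that small demands samples far beyond what commercial budgets and timetables allow, which is precisely why, as de Groot says, speed won out over accuracy.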
Some advertising research practitioners, such as the American Kevin J. Clancy, Chairman and Research Director of Yankelovich, Clancy, Shulman, are outraged by the custom of extrapolating from inadequate qualitative research:
Focused groups (the current mania), importance ratings and gap analysis are all examples of pseudoscientific hallucinogenic drugs which inspire the cavalry generals to go crashing off into oblivion. A lot of this “research” is done among strange samples of homeless people wandering around malls, and sometimes – and this is really scary – stat types are called in to run multinomial logit regressions (or some other form of rocket science) on patently preposterous data.4
And from the other end of the spectrum, this cri de coeur from an equally outraged British copywriter, Tony Brignull, working as a creative consultant to the advertising agency D’Arcy, Masius, Benton & Bowles:
Here we have groups of people paid to watch an approximation of your commercial. Naturally, they will be glued to it, so the first objective of advertising, “grab attention”, is grabbed for you. Equally obvious, if three rubbish scripts are on the table there’ll still be a winner – even if it’s a loser. This explains why the breaks are still littered with dumb commercials.
Again, groups can be dominated by one strident voice. I wrote a commercial for a healthy breakfast cereal featuring Diana Dors in a silken bed saying “I never do anything because it’s good for me”. In research, East End ladies loved her but in the Kensington group one toffee-nosed woman said, “She looks like a prostitute to me”. Guess what the next woman said, and the next. “Well, I like her?” No, they all thought she was on the game with this muesli and, when the client heard, we lost the account.
It’s very hard to research a truly original idea. By definition, it won’t be like other commercials and the group will split down the middle. People tend to like things they know: superior butlers, stroppy kids, adoring mothers, moody blokes, pouty girls – you’ve seen them a million times.5
In this kind of qualitative research enormous reliance tends to be placed simply on whether or not people “like the ad”, as some kind of entertainment. This is fun for creative people, gives the people who are paying for the advertising a chuckle, and wins awards for the agency. Creativity is thus led away from sales effectiveness in its desire to please audiences who may or may not buy the product as a result.
Even where quantitative techniques provide valid measurements of behaviour, most people fail to understand research and are distrustful of statistics. Not surprisingly, since they are so frequently manipulated. In 1994 the head of the Government Statistical Service, Bill McLennan, expressed concern that there was little public confidence in the official unemployment statistics. Continual changes in the definition of who was out of work had prompted an inquiry by the Royal Statistical Society. Meanwhile Virginia Bottomley’s frequent recitations of health service performance figures were rewarded with a Gallup Poll rating her as the most insincere of all of Britain’s leading politicians. This distrust is exacerbated by the selective analysis of statistics by those with a vested interest. And the common man has a fervent disbelief in statistical probability.
So, people in advertising have a love/hate relationship with research, as an actor does with a critic. Creative people are uneasy about research and often disparaging. Some of the most highly placed habitually offer ill-considered soundbites such as: “If research was foolproof, no campaign would ever fail” and “According to research, we’ve got a Labour/Tory government”. Yet all creative people intuitively research their own experience; the difference is that the exploration is usually confined to a sample of one: oneself. Advertising researchers keep busy investigating effects which can be investigated, while leaving plenty of room for interpretation and little scope for enlightenment. Unlike scientific or academic research, there is no common effort to investigate basic questions about the psychology of advertising, and how behavioural changes occur.
No one knows, for example, the answer to practical questions such as:
• As a general rule, how long should a TV commercial be? When should it be longer? Or shorter? (The industry presumes longer commercials have more value, that is, the extra time costs more.)
• Or, how big or small should a press advertisement be? (Bigger ads cost more, too.)
• Which media are most persuasive for which kind of presentations?
• Does humour work? How?
• How many times do you have to see an ad before it has an effect?
• How many times before you’re sick to death of it?
• Does it matter if you’re sick to death of it – or is that why it works?
Because of the competitive nature of the business there is no general funding of research, no sharing of information, and, because of its ephemeral nature, very little passing down of wisdom through the generations. And so while the ship of advertising ploughs through uncharted waters, the band plays the latest tunes.
Advertising clearly can have immense effect on mass behaviour. However, because it is difficult to isolate its influence, convincing demonstrations of effect are rare. As a result there is profound disagreement about how it works, and the people who create advertising have a vested interest in preserving the mystery. That doesn’t mean convincing guides do not exist. Research and experience from many fields – from psychology, physiology and sociology, from learning theory to salesmanship – provide rich instruction. The next section explores what we know about how people respond to advertising.
1 Starch: An advertisement must be 1. Seen, 2. Read, 3. Believed, 4. Remembered, 5. Acted upon. DAGMAR: Advertising must make the prospect 1. Aware of the brand, 2. Comprehend the product, 3. Wish to buy the product, and 4. Stir him to action. AIDA: Attention, Interest, Desire, Action.
2 For an examination of these and other learning theories, see The Persuaders Exposed: Advertising and Marketing, The Derivative Arts, Gerald de Groot, Associated Business Press, 1980.
3 Advertising Effectiveness, findings from empirical research, Giep Franzen, NTC Publications, 1994.
4 Kevin J. Clancy, The Coming Revolution in Advertising: Ten Developments which Will Separate Winners from Losers, Journal of Advertising Research, February-March 1990, pp. 47-52.
5 The Guardian, 21 December 1992.