What do EEG, sailing meteorologists and the 2016 election have to teach marketers about prediction? According to neuroscientist Haydn Northover, just asking has never been sufficient in developing holistic advertising predictions.
Predicting the future is hard. Typically, discussion of neuroscience is couched in its implied superiority to other measures – those that rely on asking people, for instance.
Sometimes we think fast, and other times slow (as Daniel Kahneman’s work popularised). Thus it makes sense to measure both fast and slow to predict the effectiveness of advertising creative.
The sailor’s prediction
Sailing relies on sound weather predictions. For Team New Zealand, in their quest to wrest the America's Cup away from the Americans, predicting weather patterns was paramount.
It would so influence all elements of preparation and race day strategy that their meteorologist Dr Roger ‘Clouds’ Badham turned to PredictWind for the most accurate weather predictions possible.
PredictWind understands something fundamental to prediction – each model of prediction is likely to be different in several ways. Each has its own set of parameters, assumptions and model weightings to turn data into a probability for a given event.
And no one model is completely accurate. To minimise the chances of inaccuracy, PredictWind relies on four models.
Weather stations around the globe collect information (many use PredictWind) and separate models are run on this data. When all predictions align, confidence is high. Multiple data sources (models) with their own individual assumptions and parameters are far more accurate than single sources.
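The ensemble logic can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea – combining several models and treating close agreement as high confidence – not PredictWind's actual method; the threshold and wind-speed figures are invented for the example.

```python
from statistics import mean, stdev

def ensemble_forecast(predictions, agreement_threshold=2.0):
    """Combine several models' wind-speed predictions (in knots).

    Confidence is 'high' when the models agree closely (small spread)
    and 'low' when they diverge. Threshold is an illustrative choice.
    """
    spread = stdev(predictions)
    confidence = "high" if spread <= agreement_threshold else "low"
    return mean(predictions), spread, confidence

# Four hypothetical models predicting race-day wind speed:
forecast, spread, confidence = ensemble_forecast([18.2, 17.9, 18.5, 18.1])
# The models cluster tightly, so confidence is "high".
```

The same structure carries over to advertising research: several independent measures, each with its own assumptions, and more trust in a conclusion when they point the same way.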
The same applies to understanding decision-making in general including prediction of an upcoming event such as the effectiveness of advertising.
For decades, the research industry relied on self-report data from consumers to determine the effectiveness of advertising. It has served us well. We have validated measures linking strong advertising to in-market effectiveness – and weak advertising to its absence.
Are self-reported measures ineffective?
Or, more generously, missing some of the picture? Some measures are fruitless for ascertaining advertising effectiveness, and asking a respondent to consciously respond to advertising is divorced from how most advertising works. This is the opportunity that allowed neuroscience to emerge as a measure of advertising effectiveness.
There are various tools in a neuroscientist’s arsenal – fMRI, EEG, skin response and facial coding are all tools that allow a researcher to gauge a response from an individual – and importantly as the respondent views the material in question.
For many researchers, this was the Holy Grail – we could measure the effect of an ad by looking at an individual’s response to that ad – not by asking for their view.
But like the weather, looking at one source of information is not necessarily definitive. Individuals respond to stimuli in a number of ways.
Yes, neurons fire in their brain, their facial expressions alter during certain scenes and sweat glands open ever so slightly when they are interested in material. Yet, there are also other sources – we can collect their considered opinion, how much they say they like it, the scenes they recalled most often, the meaning they took from the creative etc.
Furthermore, what behavioural action did they take? Would they share it? Or, in the case of digital creative, did they watch the whole ad or skip as soon as they were able?
For example, Volkswagen’s 2012 Super Bowl ad ‘The Dog Strikes Back’ features a slightly overweight dog that realises he needs to lose a few kilos.
After evaluating this creative, we found as people watched, the most engaging scenes were those with the dog getting in shape on the treadmill and dragging weights. Yet, when asked to recount the story of the ad, it was the outcome of this activity that was most recalled – the dog could now chase the new VW Beetle down the road.
The difference between experience and memory in this case is important. Both responses tell us something significant about the creative; neither on its own provides a holistic view of response.
We can further apply the two systems when optimising the advertising creative. For Audi’s ‘Ahab’ creative, we knew when people were engaging with the ad.
But to better understand a branding issue we looked at engagement split by those that said it was ‘poorly’ branded. This helped identify engagement with key branding moments and allowed course correction. Selectively isolating facial coding based on the System 2 responses gives more depth for optimisation.
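This kind of segment split can be sketched simply: average the moment-by-moment engagement traces separately for respondents who said the ad was poorly branded and for everyone else, then compare the two curves at the branding moments. The function names, the `poorly_branded` flag and the trace values below are all hypothetical, purely for illustration.

```python
def mean_trace(traces):
    """Element-wise mean of equal-length per-second engagement traces."""
    return [sum(vals) / len(vals) for vals in zip(*traces)]

def split_engagement(respondents, segment_key="poorly_branded"):
    """Average engagement curves for each System 2 (self-report) segment."""
    in_seg = [r["trace"] for r in respondents if r[segment_key]]
    out_seg = [r["trace"] for r in respondents if not r[segment_key]]
    return mean_trace(in_seg), mean_trace(out_seg)

# Invented data: three respondents, three-second engagement traces.
respondents = [
    {"poorly_branded": True,  "trace": [0.2, 0.5, 0.1]},
    {"poorly_branded": True,  "trace": [0.4, 0.3, 0.1]},
    {"poorly_branded": False, "trace": [0.3, 0.6, 0.8]},
]
poor, rest = split_engagement(respondents)
# Comparing the curves shows where the "poorly branded"
# group disengaged relative to everyone else.
```

The point is the pairing: the System 1 measure (second-by-second engagement) only becomes diagnostic once it is sliced by the System 2 answer.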
This is also evident with intuitive responses to Trump and Clinton prior to the 2016 US election. Both System 1 and 2 responses were important to understand here. Our research revealed Trump and Clinton were equally seen as ‘trustworthy’, ‘caring’ and ‘passionate’ – but people applied those attributes faster for Trump than Clinton.
That is, they intuitively felt he was more trustworthy, caring and passionate. Unfortunately for Clinton, she was also intuitively seen as more ‘deceptive’, ‘arrogant’ and ‘secretive’. Both measures were important, but either on its own missed half of the story.
Neuroscience and its application in advertising research is a wonderful tool opening up a new wave of understanding. However, people have a range of reactions to stimuli (different models) and collecting only one such reaction limits our understanding. System 1 and System 2 imply whole brain thinking and for measurement it is imperative to collect a whole brain response.
P.S.: Team New Zealand went on to win the series 7-1 over Oracle Team USA.
This article first appeared in www.marketingmag.com.au
Seeking to build and grow your brand using the force of consumer insight, strategic foresight, creative disruption and technology prowess? Talk to us at +9714 3867728 or mail: firstname.lastname@example.org or visit www.groupisd.com