Although our soon-to-be-published survey into the relationship between Procurement Teams and Sponsorship Practice suggests that Procurement is not heavily involved with sponsorship measurement, this post is aimed primarily at marketing procurement.
The biggest con in sponsorship evaluation
Despite the pandemic, many sponsorship research agencies are still doing a roaring trade in nonsense.
For many years, we at Redmandarin and a good few other right-minded people were hot on the trail of AVE and media value, keen to call out the absurdity of abstract media value and the baseless logic of the value calculations as a poor foundation for sponsorship strategy.
But the biggest con in sponsorship – in sponsorship measurement, and to some extent in sponsorship strategy – is still rife.
We’re referring to the use of sales funnel metrics as an index of sponsorship success – e.g. 67% of fans show greater consideration towards sponsors; 42% of fans are more likely to buy sponsors’ products. Choose any metric bar spontaneous awareness, and just slide those rose-tinted spectacles back up your nose.
Industry standard approach
Most sponsorship measurement agencies offer surveys, either direct audience, fan-based or panels, as a core service.
These surveys usefully establish demographic data, try to understand basic consumption habits – for both sponsorship and product – and routinely probe the impact of sponsorship on sales funnel metrics, asking simple questions about awareness, consideration and propensity to buy.
There are three primary uses for this data.
Every rightsholder deck nowadays includes a whizzy infographic to present this data in the best light. Naturally.
It’s also a staple of rightsholder reporting, to demonstrate the value of a sponsorship. Our fans are much more likely to buy your product (and they happen to over-index in your category too!).
Sponsors are also presented with this data by agencies appraising or recommending potential partnerships – and on the strength of this fan insight, those agencies claim to inform, and quite often lead, sponsorship strategy. The larger players package it as valuable proprietary insight from their syndicated research, while smaller agencies sell surveys as a piece of due diligence to inform opportunity assessment.
So, all in all, these surveys and their metrics are quite pervasive.
The problem is, they’re about as reliable as the fiction of media value. Here’s why.
As everyone now acknowledges, decision making is not an objective, rational process. We’ve referred to Damasio in another blog. Put simply, in most cases we form a value judgment based on emotion and then find data to substantiate that position rationally – or at least to give our judgment a veneer of objectivity. Confirmation bias, to give it a name.
That’s not to say that emotion-based judgments are purely capricious or superficial: far from it, they’re always linked to our experience and life learnings. The challenge is, our life learnings themselves are subject to all manner of bias and intra-psychic distortion.
So asking questions about consideration or propensity to buy – entirely cognitive questions – will provoke purely cognitive responses. In theory, yes, I’m more likely to drink Buxton Water because they sponsor the London Marathon; in practice, I just don’t like the logo.
Cognitive questions are highly useful to establish facts, but close to irrelevant when it comes to establishing theoretical opinions with a meaningful degree of accuracy. This is clearly subjective, but if I were a panel member faced with highly theoretical questions of this sort, I certainly wouldn’t be knocking myself out to give a considered answer …
We’re writing this because a client recently undertook specific audience research into a longstanding sponsorship, as part of a review process we were leading. We lost the battle to remove these questions, because stakeholders felt they would, on balance, provide useful information. The results predictably showed positive scores, which would have dominated the review if we hadn’t won the more important battle: to question product penetration.
The bottom line was that, despite a sponsorship lasting well over a decade, product penetration was minimal – despite year-on-year research reporting positive consideration. Bad research practice, obviously (by Nielsen), for over a decade – with possibly a hint of the Monte Carlo fallacy from the sponsor.
The same sponsorship measurement agencies that are so rigorous when they talk about panel selection and weightings will all too often quite happily sell misleading research methodologies.
More than once we’ve heard the defence that, although these questions don’t literally establish consideration or propensity to buy, they’re still a relevant barometer of brand sentiment. Delivered with authority, you can almost buy that line. But stop to think about it for just a moment, and it’s absolutely preposterous: if we’d wanted a barometer, we’d be talking to Amazon.
With the convoluted and iterative journeys to purchase that are the reality for most products these days, the very notion of consideration is up for debate. Consideration only really has value at one point in the cycle – immediately pre-purchase. Consideration before then isn’t really worth much.
Many brands use sales funnel metrics, so if you’re buying research, the illusion of metrics that align with your own is tempting. By all means ask about awareness – preferably unprompted. But questions about consideration and likelihood to purchase will only give you false positives, which cause problems down the road. And they come at the price of a better question – because there are better questions to ask.
Perceptions of brand fit have been shown to heavily influence sponsor perception, and can be established by survey. Contribution to the sponsorship or the fan experience (see our thoughts on adding value to fans) is another. There are other metrics, such as perception of brand attributes, which deliver actionable insight – measures you can work on. All better questions, because they will inform messaging, tone of voice, activation and the relationship with the brand.
Sponsorship evaluation for marketing procurement
Sponsorship teams, marketing procurement teams: if your sponsorship measurement agency is proposing – or worse, asking – these questions, push them to do better. The larger agencies have tremendous senior-level IP and brainpower, but the process of productising their services dramatically dilutes your access to it. And their business model is predicated on standard methodologies, so they’re really not invested in doing better.
And going back to that decade of false positives: be mindful of the research agency’s own interests. It’s not in their interest to report consistently negative findings – or consistently positive ones. From a sales perspective, the ideal combination is average findings with positive highlights: keep the client on the hook with the lure of future success.
We wouldn’t go so far as to say that agencies are faking results, but we would always say: make sure you look at the raw data.