Why Black Box Algorithms Are A Blocker To Transparent And Ethical Marketing

The advent of digital marketing was hailed as a golden age in which marketing would be personalised, relevant, and measurable. Indeed, the rise of the internet and social media platforms gave brands powerful ways to engage with audiences, offered users new ways of consuming relevant content, and created like-minded communities. But somewhere along the way, measurability gave way to modelling, personalisation gave way to creepiness, and transparency gave way to the ‘black box’ walled gardens of ad platforms. 

As an agency that prides itself on working closely with the likes of Google and Meta, we’ve often leveraged their advanced platform algorithms to create smart – and successful – advertising campaigns that demonstrate the best of their technological developments.

And it is precisely because these platforms have been so successful at digital marketing that we find ourselves in the situation we do today, where Google, Meta, and soon even TikTok have become monoliths in their own right, with full ownership over the set-up, distribution and measurement of advertising on their platforms. 

Privacy regulation isn’t a fix-all for this situation either. Privacy legislation, while good in many ways, is actually helping to cement walled gardens and reduce transparency. When the transfer of data between companies and ecosystems is restricted, the argument for walled gardens only grows stronger. Combined with the fact that companies can collect less data, this fuels the use of models that are hard to understand (hence the rise of the term ‘black box’). 

What does all this mean for the industry? 

Ultimately, it means less transparency. With less control over the audiences of your advertising, how can marketers ascertain that they are targeting without bias? Without the means to attribute marketing spending transparently, how can marketers understand which platforms are pulling the levers?  

There is no need to take my word for it either. Precis Digital UK commissioned Forrester Consulting to survey 150 senior marketers in the UK and Nordics on the topic of marketing ethics: data privacy, transparency and customer experience. 

While 76% say marketing ethics is a high or critical priority for their organisation, only 49% indicated they would go beyond the requirements of regulations such as GDPR – now at its fourth anniversary – to adopt more ethical marketing practices.

On top of this, the vast majority of these marketers (80%) believe reducing bias in advertising models – largely a product of black-box models – should be prioritised by their brand over the next 12 months. Yet more than three-fifths (63%) admit they are struggling to reduce bias at present. 

The fact that many marketers struggle with this comes as no surprise. Many are not trained in machine learning or data science, and this article is not a call to arms for marketers to start building their own marketing algorithms, or to stop using them altogether. But if we raise enough questions, and start taking more accountability for how we work with these platforms, our hope is that marketing ethics will rise to the top of the agenda for ad platforms, agencies and marketers alike – rather than leaving regulatory bodies to set the agenda alone.   

For too long marketers have been on the back foot, responding to privacy regulations as they happen, rarely thinking ahead to how marketing should be – and, most importantly, what the experience is like for the customer.

How can marketers act against walled gardens and black box models?

As with any technology led by algorithms, marketers must ask themselves: ‘What are the motivations of the company selling the AI, where is the training data coming from, and what impact does our own data have on the outputs of these algorithms?’

Here are several actions marketers can take to redress the balance:

  • Avoid using black box technology which stunts understanding of data processing and outcomes
  • Implement systems that are inclusive, avoiding discrimination and bias in data
  • Establish explainable AI, with controls and governance for lookalike targeting
  • Improve the customer experience by capping the frequency of marketing communications
  • Identify and correct stereotyping and harmful messaging with qualitative reviews of creative
  • Avoid hyper-targeting, which might involve intrusive data practices and give rise to negative consumer sentiment towards digital marketing through ‘creepy’ experiences

Actions and solutions relating to privacy concerns, data bias and poor experience are a job for all of us, including the Government (as its forthcoming digital legislation attests). If we can make these changes successfully, the golden era of marketing can truly begin.

About the author: Rhys Cater is Managing Director at Precis Digital UK. 
