Rebuilding Trust in AI Is a Job for Humans

As consumers, we find ourselves in increasingly asymmetric relationships with the companies that sell goods and services to us. In the most extreme cases, companies know more about us than our own families do. Yet to us, the way companies operate and make decisions feels increasingly like a dark art, even though we supposedly live in an age of transparency. Even analysts, who are adept at reading and interpreting financial reports, often miss vital details.

This growing corporate distrust extends into the world of AI and its algorithms. When immature AI applications whose algorithms are still ‘in training’ get something wrong, people rapidly lose trust in them. In one case, Hamburg resident Oliver Haberstroh’s neighbours called the police after hearing loud music playing in his flat at 1:50am. Oliver was out for the night, so when repeated rings of the doorbell went unanswered, the police knocked down the front door to break up the ‘party’. The baffled officers were greeted by a lone Alexa spontaneously belting out tunes at top volume. After paying for new locks and making peace with his neighbours, Oliver sent the device back to Amazon.

This relatively minor incident highlights a serious point about the use of AI and machine learning. Besides helping run our homes, these technologies are making their way into all aspects of life, from self-driving cars to health insurance premiums and even prison sentencing. No matter how sophisticated the technology is, once people lose trust in it, they will reject it.

In the week I write this, both Facebook and Twitter have lost around 20% of their market caps; for Facebook, that represents a staggering $120 billion in value. These declines partly result from profit warnings tied to commitments to invest considerably more in security and self-regulation to prevent misinformation, hate speech and fraud. Twitter’s stock plunged after it deleted a million ‘bots’ and accounts spreading fake news, death threats and other toxic content. Unless you are an investor, you will probably view these moves as largely positive. However, one could argue that these companies spent far too long in denial, abdicating responsibility, and may struggle to fully recover.

The crisis of trust goes beyond the technology itself. If people are to accept AI in their homes, their jobs and other areas of life, they also need to trust the businesses that develop and sell AI to act responsibly. Presented with a list of popular AI services, 41.5% of respondents in a recent survey could not name a single example of AI they trusted. Self-driving cars are a case in point: just 20% of people said they would feel safe in a self-driving car, even though computers are less likely than human drivers to make errors.

The industry needs to confront these challenges if we are all to continue reaping the many benefits that AI can bring. Let’s start by examining where trust intersects with AI, and then consider ways to address each area.

Trust in businesses. Consumers need confidence that early AI adopters like Twitter and Facebook will apply AI in socially responsible ways. A recent Salesforce study found that 54% of customers don’t believe companies have their best interests at heart. Businesses need to earn that trust by applying AI wisely and judiciously. Key to this is striking the right symbiosis between machines and people: AI depends on human oversight to make tough judgement calls and to help train algorithms over time.

Trust in third parties. Consumers must have confidence that a company’s partners will use AI and data responsibly and lawfully. AI and machine learning require massive amounts of data to function, and greater volumes and varieties of data are becoming available for consumer profiling and other new use cases. That data is usually used to improve the quality of targeted marketing, but, as the Cambridge Analytica scandal demonstrated, it can just as easily be abused. Incidents like these dramatically hinder AI’s long-term potential.

Trust in people. For all its potential to automate tasks and make smarter decisions, AI is essentially a sophisticated calculator programmed and controlled by humans. People build the models and write the algorithms that allow AI to do its work; the technology itself has neither the consciousness nor the assets to be held legally liable. Consumers must feel confident that the professionals responsible for AI technologies act ethically and with users’ interests at heart. AI can also be a powerful tool for criminals, so its developers need to work with police and legal experts to ensure their algorithms make decisions that are legally compliant.

Trust in the technology. AI’s “black box problem” breeds distrust because, very often, people don’t know how decisions are made. In high-stakes areas like criminal sentencing, the accused will have a strong legal defence if it turns out the algorithms were programmed to apply racial, religious or other illegal bias. In the business world, the black box problem can also inhibit adoption, which partly explains the enduring popularity of spreadsheets: if nothing else, they are at least transparent. Employees won’t devote time and resources to machine-made recommendations unless they have confidence in their provenance.
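To make the spreadsheet comparison concrete, here is a minimal, purely illustrative Python sketch. It is not drawn from any real insurer or product, and every name, threshold and coefficient is hypothetical. Both functions price a policy the same way, but only the second returns the audit trail that lets a customer or regulator see how the figure was reached.

```python
# Illustrative only: all names and numbers below are hypothetical.

def opaque_premium(age: int, claims: int, mileage: int) -> float:
    """Black-box style: returns a number with no indication of why."""
    return 500 + 4.2 * claims ** 2 + 0.01 * mileage - 1.5 * min(age, 60)


def explainable_premium(age: int, claims: int, mileage: int) -> tuple[float, list[str]]:
    """Transparent style: the same calculation, but every adjustment
    is recorded so the decision can be inspected and challenged."""
    premium, reasons = 500.0, ["base premium: 500"]
    if claims:
        surcharge = 4.2 * claims ** 2
        premium += surcharge
        reasons.append(f"{claims} prior claim(s): +{surcharge:.0f}")
    mileage_charge = 0.01 * mileage
    premium += mileage_charge
    reasons.append(f"annual mileage {mileage}: +{mileage_charge:.0f}")
    discount = 1.5 * min(age, 60)
    premium -= discount
    reasons.append(f"driver age {age}: -{discount:.0f}")
    return premium, reasons


if __name__ == "__main__":
    print(opaque_premium(45, 1, 12000))        # just a number
    premium, reasons = explainable_premium(45, 1, 12000)
    print(premium)
    for reason in reasons:                     # the audit trail a spreadsheet gives for free
        print(" -", reason)
```

The point is not the arithmetic but the second return value: a decision that carries its own explanation can be inspected, challenged and corrected, which is exactly the quality that keeps people reaching for spreadsheets.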

Oliver Haberstroh may never again trust Alexa, or any other home assistant, to behave responsibly alone in his flat. However, if we humans apply the same stringent ethics and transparency to algorithms as we expect of our colleagues, we will start to build (and rebuild) the trust needed for AI to thrive.
