CEOs of the Future: Your Brain Is a Black Box

Artificial intelligence as a decision-making aid has improved to the point where its efficacy, and the trust placed in it, may be starting to exceed those of a human decision-maker.

So who should make the call, the algorithm or us? writes Pete Hirsch, Chief Technology Officer at finance and accounting software leader BlackLine.

This intersection of trust is a relatively new phenomenon, albeit one that mirrors the pace of adoption of artificial intelligence in the enterprise. Two years ago, there were concerns amongst CEOs that unintended bias could creep into an AI algorithm.

But in a mid-2018 study published by the Harvard Business Review, researchers tested whether a 15-minute conversation between humans or an algorithm was the better method for measuring the trustworthiness of a new colleague.

While both methods were considered reliable, the algorithm was seen as “a more rational and less intuitive approach in evaluating an individual’s trustworthiness”, with 61 percent of participants opting to use AI rather than the judgements of humans.

The efficacy of AI is now forcing us to make a tougher judgement: who is better placed to make a particular call, an algorithm or us?

Getting uncomfortable

Relieving humans of more of their decision-making, and contesting their conclusions and logic, will undoubtedly be uncomfortable.

Your brain has no audit trail. Its assumptions and biases aren’t clear. And, at some point in the not-too-distant future, this kind of opacity in the decision-making process will no longer be acceptable.

Consider the subjective process of hiring. Decisions are often based purely on the judgment of the interviewers, and when one is made, candidates may never learn which factors ultimately determined whether they were successful.

Contrast that with how an AI algorithm can make recommendations. You can be explicit in defining the attributes that will lead to a particular outcome. If you build or train the model yourself and understand how it works, then you have full transparency over the decision criteria, which should give you trust and confidence in the recommendation.

You may or may not like the outcome, but at least you understand it and can have confidence in it, as opposed to an opaque ‘black box’ process driven entirely by a human brain.
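
To make that concrete, here is a minimal, purely hypothetical sketch in Python of what fully transparent decision criteria can look like. The attributes, weights and threshold are invented for illustration, but every factor and its contribution to the outcome is visible and auditable.

```python
# Hypothetical transparent scoring model for a hiring recommendation.
# Every attribute, weight and threshold is explicit, so the decision
# criteria can be inspected and challenged line by line.

WEIGHTS = {                     # illustrative weights, not a real rubric
    "years_experience": 0.40,
    "skills_match": 0.35,
    "interview_score": 0.25,
}
THRESHOLD = 0.70                # recommend the candidate above this score


def score_candidate(candidate):
    """Return the overall score and each attribute's contribution."""
    contributions = {attr: WEIGHTS[attr] * candidate[attr] for attr in WEIGHTS}
    return sum(contributions.values()), contributions


candidate = {"years_experience": 0.8, "skills_match": 0.6, "interview_score": 0.9}
total, breakdown = score_candidate(candidate)
print(f"score={total:.2f}, recommend={total >= THRESHOLD}")
print("audit trail:", breakdown)
```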

Isn’t AI also a black box?

The challenge is that transparency over the algorithm is not a given. It is still the exception rather than the norm and will continue to chill deployments while it remains so.

Transparency of the reasons used to reach a decision is arguably just as important as the conclusion or outcome itself.

People want to be able to understand how and why a conclusion was reached. That directly influences how much trust they are willing to put in the conclusion (whether reached by AI, a human or both).

Transparency is a core principle in BlackLine’s AI investments. We see it as a critical success factor, but this isn’t necessarily a commonly-held view. Levels of transparency into algorithms still vary between industries and solutions, and this will need to change if AI is to achieve its potential.

Greater levels of AI and automation are being introduced to accounting and finance, and those algorithms now recommend paths and decisions to varying degrees of probability and accuracy.

But an accountant isn’t going to feel comfortable certifying books or numbers based on an AI recommendation unless they understand why it was made. They want to know which attributes and variables were used to come up with a particular recommendation, and how those factors correlate with the outcome.
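
As an illustration only, the sketch below shows the kind of breakdown an accountant might expect alongside a recommendation. The model, feature names and weights are all assumptions invented for this example; the point is that each factor's contribution to the recommendation is exposed rather than hidden.

```python
import math

# Hypothetical weights for a transaction-matching recommendation
# (illustrative only; a real model would learn these from data).
WEIGHTS = {"amount_difference": -3.0, "date_gap_days": -0.4, "reference_similarity": 2.5}
BIAS = 1.0


def explain_recommendation(features):
    """Print the match probability and each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    probability = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
    print(f"recommended match probability: {probability:.2f}")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:22s} contributed {value:+.2f}")


explain_recommendation({"amount_difference": 0.02, "date_gap_days": 1, "reference_similarity": 0.9})
```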

Until there is greater transparency over the algorithms, trust in AI may be determined by the perceived reputational cost of backing an AI-based decision.

If AI provides an answer with an 80 percent probability of being right first time, does that really tell you what you need to know or do, or what the cost of acting on that decision could be? In the case of an autonomous car, where the cost of a bad decision can be fatal, clearly not.
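
A rough, back-of-the-envelope sketch makes the point: the same 80 percent confidence implies very different expected costs depending on what an error costs. The scenarios and figures below are invented purely for illustration.

```python
# Illustrative only: the same confidence level carries very different risk
# depending on the cost of being wrong.

def expected_error_cost(p_correct, cost_of_error):
    return (1 - p_correct) * cost_of_error

for scenario, cost in [("mis-coded invoice", 50.0), ("mis-stated ledger balance", 250_000.0)]:
    print(f"{scenario}: expected cost of acting on an 80% answer = {expected_error_cost(0.80, cost):,.0f}")
```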

The cost of an error is likely to determine how much trust, and what level of accuracy, is required. Until that transparency exists, people may be unwilling to put their names to AI-led or AI-recommended decisions when it is not clear how they were reached.
