Banking Hurtles Toward Ethically Problematic Credit Decisioning


The paramount technology objective in banking today is the implementation of artificial intelligence to enhance customer experience, products, and, in turn, financial results for both consumers and financial institutions.

As artificial intelligence and its companion technology, machine-learning algorithms, take a more prominent role in financial services, the ethical challenges posed by these technologies grow.

The banking industry needs to confront these ethical challenges — before they rile consumers.

AI and ML use data to produce algorithmically generated analysis and solutions. As will not surprise readers of this blog, AI/ML is a key area of development for financial services today. At INV Fintech, Bank Innovation’s sister technology accelerator, approximately 17% of the applications for its spring class, which officially starts tomorrow, were from startups focusing on artificial intelligence solutions.

The issue is that AI/ML will increasingly open the banking industry to discriminatory practices, not close it off from them, despite conventional wisdom. In short, AI can make traditional redlining or reverse redlining look like child’s play by comparison.

Consider how AI in underwriting works. The financial institution takes an inordinately large data set (the larger, the better), creates an algorithm that analyzes that data against various criteria, and then produces an underwriting decision based on that analysis. There are obvious factors a lender can avoid using in the underwriting decision, such as gender or race.
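At its simplest, that pipeline reduces to scoring applicants on weighted features and applying a threshold. The sketch below is purely illustrative; every feature name, weight, and threshold here is hypothetical and reflects no real lender's model:

```python
# Toy underwriting model: score an applicant on a weighted feature set,
# then approve or decline against a cutoff. All names, weights, and the
# threshold are hypothetical, for illustration only.

FEATURE_WEIGHTS = {
    "credit_score_norm": 0.6,    # credit score scaled to 0..1
    "dti_inverse": 0.3,          # 1 minus debt-to-income ratio
    "years_employed_norm": 0.1,  # employment tenure scaled to 0..1
}
APPROVAL_THRESHOLD = 0.65

def score(applicant: dict) -> float:
    """Weighted sum of normalized features; missing features count as 0."""
    return sum(w * applicant.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items())

def decide(applicant: dict) -> str:
    return "approve" if score(applicant) >= APPROVAL_THRESHOLD else "decline"

applicant = {"credit_score_norm": 0.8, "dti_inverse": 0.7, "years_employed_norm": 0.5}
print(decide(applicant))  # score = 0.48 + 0.21 + 0.05 = 0.74 -> approve
```

A real ML underwriting model learns those weights (or far more complex relationships) from historical data rather than having them hand-set, which is exactly where unseen bias can creep in.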

More Complicated

But it gets much more complicated. Say the lender imports social media data, a common intent among AI adherents. If two applicants possess nearly identical characteristics on common criteria, such as credit score and debt-to-income ratio, the deciding factor between which of them gets funding could boil down to their social media data. And how is that determined? Is it possible that one applicant, because she lives on a particular street, is rejected for a loan, while the other, who lives in, say, a better part of town, is approved? The answer is: quite possibly.
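The scenario above can be made concrete in a few lines. In this sketch (hypothetical feature names and weights throughout), two applicants are identical on every traditional criterion; only a neighborhood-derived feature separates them, and it flips the decision:

```python
# Proxy-feature sketch: identical applicants except for a hypothetical
# "neighborhood_score" derived from where they live. Weights and the
# threshold are invented for illustration.

WEIGHTS = {"credit_score_norm": 0.5, "dti_inverse": 0.3, "neighborhood_score": 0.2}
THRESHOLD = 0.70

def score(a: dict) -> float:
    return sum(WEIGHTS[f] * a[f] for f in WEIGHTS)

base = {"credit_score_norm": 0.8, "dti_inverse": 0.8}
applicant_a = {**base, "neighborhood_score": 0.9}  # "better part of town"
applicant_b = {**base, "neighborhood_score": 0.2}  # same profile, different street

print(score(applicant_a) >= THRESHOLD)  # True  -> approved (score 0.82)
print(score(applicant_b) >= THRESHOLD)  # False -> declined (score 0.68)
```

Because neighborhood often correlates with race and income, a feature like this can quietly reintroduce the very redlining the lender thought it had excluded.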

“It can get complicated,” said Prema Varadhan, a chief architect at Temenos, the banking technology company. “Which one to pick? There are layers that you don’t explicitly code. They lead to very valid ethical questions.”

Varadhan said that many in financial technology are starting to inject “explainability” into their AI models.

“If you are responsible for [producing a] credit score, there are models,” she said. “But replace that with a machine-learning model — it learns and behaves differently. But if it can’t explain the decision, [the lender] can’t rely on machine learning alone. ‘Explainability’ is hard. But this year we are trying to inject explainability into the models.”

In other words, Temenos is aiming to have models not just produce credit decisions, but to add the “why” of the decision into the credit algorithm. Elements of a credit decision are called “features” at Temenos, and some features are obvious, like gender. Other features are more subtle, and that’s what needs to be included in the machine learning algorithm, Varadhan said.
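For the simplest class of model, a linear score, explainability is straightforward: each feature's contribution to the decision is just its weight times its value, and that breakdown can be reported alongside the verdict. The sketch below uses hypothetical features and weights and is not Temenos's implementation; complex models such as gradient-boosted trees or neural networks require dedicated attribution techniques to get a comparable "why":

```python
# Explainability sketch for a linear scoring model: decompose the total
# score into per-feature contributions (weight * value). All feature
# names and weights are hypothetical.

WEIGHTS = {"credit_score_norm": 0.6, "dti_inverse": 0.3, "social_score": 0.1}

def explain(applicant: dict) -> dict:
    """Return each feature's contribution to the total score."""
    return {f: round(WEIGHTS[f] * applicant.get(f, 0.0), 3) for f in WEIGHTS}

applicant = {"credit_score_norm": 0.9, "dti_inverse": 0.5, "social_score": 0.4}
contributions = explain(applicant)
total = round(sum(contributions.values()), 3)
print(contributions)  # {'credit_score_norm': 0.54, 'dti_inverse': 0.15, 'social_score': 0.04}
print(total)          # 0.73
```

Surfacing contributions this way is what lets a lender notice that, say, a social-media feature is swinging decisions, rather than shrugging that the model is a black box.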

Feeding Bias

“Machine bias doesn’t come unconsciously,” she said. “Someone has to feed them. If you introduce the feature, you are giving that consideration to the machine. It will get biased to that. … Machine [learning algorithms] get biased in there because it is tolerated. You can’t say, ‘We don’t know why the model is skewed.’”

But are bankers, or regulators, confronting the potential pitfalls of machine-learning algorithms? The promise of robotic process automation is such that there is a headlong rush to embrace algorithm-generated products and services. Yet the more data that is included in credit decisioning, the more likely it is that discriminatory subtleties will determine who does and does not get a loan. "Features" like income, geography, employment, and even relationship status can all become primary drivers of credit decisions, without lenders explicitly knowing it. And that is the problem: plausible deniability does not make such discrimination OK.

Other than a few Medium posts, ethics are rarely part of fintech discourse today. Far more attention is paid to which startup is becoming a unicorn, or how much revenue a new technology will generate for the financial services industry, than to whether technology is treating all consumers equally.

But AI has profound ethical implications that demand consideration. Of the 15 sectors tracked by Edelman, the public relations firm, financial services has the lowest level of consumer trust, at 57%, compared with 78% for technology. How much lower will that figure drop when ML violates consumers' ethical expectations? Lest you think this is a far-off problem, ML algorithms are already becoming commonplace tools for developers.

“It used to be that AI required geeky data scientists,” Varadhan said. “Not anymore. AI algorithms have become so commoditized, and a lot of it is coming from Driverless AI. A lot is open source, so it comes at little cost.”

All of this points to ML-driven advances arriving at ever-greater speed. It is time consideration of their ethical implications sped up, too.