AI systems may be getting smarter with the ever-expanding flow of data, but are they also getting more ethical?
When it comes to fintech, artificial intelligence offers a seemingly endless range of possibilities to transform present systems, so long as the AI being applied is built with the correct values in mind.
As explored in the IEEE's Ethically Aligned Design paper, the ethics of AI design is as complex as financial compliance regulation, and perhaps more so: some of these systems are now at the point of being tentatively applied to those very regulations.
From the report:
Society does not have universal standards or guidelines to help embed human norms or moral values into autonomous intelligent systems (AIS) today. But as these systems grow to have increasing autonomy to make decisions and manipulate their environment, it is essential they be designed to adopt, learn, and follow the norms and values of the community they serve, and to communicate and explain their actions in as transparent and trustworthy a manner as possible, given the scenarios in which they function and the humans who use them.
The report cites a need for ethical consideration within the actual design of AI systems, making education of "technologists" critical. The paper defines technologists broadly as "anyone involved in the research, design, manufacture or messaging around AI/AS, including universities, organizations, and corporations."
“If we’re talking about technologists, who are making decisions for the whole fintech market—we do see that they are able to use these [ethical] principles—then we do need to do more education with the people who are working within the design of the AI industry,” says Kay Firth-Butterfield, executive director for AI-Austin and executive committee vice-chair for The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. “We talk about human wellness—we need to encourage fintech industrialists to think about that.”
According to Firth-Butterfield, one of the best ways to begin knitting ethical AI design into the fabric of fintech could be the creation of a “chief values officer” for every company working on artificial intelligence.
Within fintech, companies are exploring the technology for uses such as data privacy and identity, and adherence to financial regulation for greater transparency to the consumer. If that sounds familiar, it's probably because these are the use cases that tend to make other technologies so attractive to financial institutions. Technologies like, oh, say, smart contracts, which, as self-executing code, fit the definition of AI.
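To make the "self-executing code" idea concrete, here is a minimal, hypothetical sketch in Python of the escrow pattern smart contracts are often described with: funds are released automatically the moment a stated condition is met, with no human intermediary deciding. All names and the delivery condition are illustrative assumptions, not drawn from any real contract platform.

```python
class EscrowContract:
    """Holds a payment and releases it only when the agreed condition holds."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False   # the condition the contract watches for
        self.released = False

    def confirm_delivery(self) -> None:
        # On a real platform this signal might come from an oracle or the buyer.
        self.delivered = True
        self._execute()

    def _execute(self) -> None:
        # The "self-executing" step: no party releases funds manually;
        # the code pays out as soon as its condition is satisfied.
        if self.delivered and not self.released:
            self.released = True
            print(f"{self.amount} transferred from {self.buyer} to {self.seller}")


contract = EscrowContract("alice", "bob", 100)
contract.confirm_delivery()
```

The point of the sketch is the design choice, not the plumbing: once deployed, the payout rule lives in code rather than in anyone's discretion, which is exactly why the values wired into that code matter.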
“I see smart contracts as part of the system that AI will manage… as part of a system that will be very integrated in the future,” says Firth-Butterfield.
This integrated system will have to be managed quite carefully to avoid exploits like the DAO hack, which means it will have to be built with the needs of the consumer in mind. Luckily, this is an approach fintech particularly excels at. Now that approach just needs to be wired into the bare-bones design of the technology itself.