The Good, Bad And Ugly Of Using AI In Financial Risk Management

Mikhail Dunaev 1 May 2024

The author of this article looks at how AI both creates and amplifies certain risks in financial services, and how it can also help to manage and mitigate them.

The following article is from Mikhail Dunaev, chief AI officer at Comply Control, who works in the fintech sector in areas such as AI and machine learning. He examines how these fast-growing technologies create, and may also help to mitigate, certain risks. (More on the writer below.)

The editors are pleased to share this content; the usual editorial disclaimers apply. To respond, email tom.burroughes@wealthbriefing.com


The financial system is so old that sometimes even people within the industry forget the obvious – the regulatory environment we see in finance today has its basis in the aftermath of the 2008 financial crisis. It took centuries, and lessons from the school of hard knocks, to get where we are today. Can you imagine how long it will take for regulation to catch up with artificial intelligence in finance?

It will take years, perhaps decades, for AI in banking and financial services to reach the same level of regulatory maturity. However, we can already see the beginning of this discourse. Regulators are expressing their concerns, calling AI an “emerging vulnerability” and a threat to financial stability, and sharing a common need to introduce a clear legal framework for it.

With every privilege comes responsibility, and AI is no exception to the rule. While it brings significant advances and drives efficiency, generative AI is notorious for “inheriting” human bias, for the lack of traceability that comes with its “black box” nature, and for the dangers it poses to data privacy and cybersecurity.

I’d like to argue that although the risks associated with AI applications in finance are substantial and must be addressed, the future of financial services lies in regulated, trustworthy AI.

Dealing with diverse AI-associated risks
Understanding the various risks that AI brings to the table is essential to creating realistic, tech-agnostic regulation. I like the comparison with algorithmic trading made by Andrew Bailey, Governor of the Bank of England: when few people understand how AI works, it is harder for regulators to draw up relevant legal frameworks and to hold people accountable for their actions.

On the technical and human side, errors in AI algorithms, often rooted in bias in the training data, can lead to poor decisions and financial losses, and so carry significant risk. Beyond the machine learning process itself, there are also serious risks around the transparency and explainability of AI decisions: auditing an AI system is an incredibly complex task that no universal tool can yet solve.

Let’s not forget the risks of cybersecurity breaches and data leakage that come with a high level of centralisation of information.

However, AI poses an even greater threat to the banking industry as a tool for criminal activity. AI is very good at analysing bank customers’ personal data, such as names, addresses and account numbers, and that capability is a double-edged sword: scammers can use AI to generate highly believable, seemingly authentic personalised phishing emails.

In addition, AI algorithms can analyse cardholder transaction patterns to generate fake transactions that security systems may not label as suspicious. Criminals use this method to steal smaller amounts of money without being detected, as the sketch below illustrates.
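A toy model of rule-based monitoring shows why this works. The sketch below is minimal and hypothetical: the thresholds and amounts are invented for illustration and do not reflect any real bank’s rules.

# Illustrative only: why sub-threshold transactions slip past simple rules.
FLAG_THRESHOLD = 500.0    # hypothetical per-transaction alert threshold
DAILY_LIMIT = 2_000.0     # hypothetical daily-total alert threshold

def is_flagged(amount: float, daily_total: float) -> bool:
    """Flag a transaction if it alone, or the running daily total, is large."""
    return amount >= FLAG_THRESHOLD or daily_total + amount > DAILY_LIMIT

# Ten debits of 150 each: no single one trips the per-transaction rule,
# and the running total never crosses the daily limit, so 1,500 moves
# through entirely unflagged.
running = 0.0
for i in range(10):
    amount = 150.0
    print(f"txn {i + 1}: {amount:.2f}, flagged={is_flagged(amount, running)}")
    running += amount

An AI-driven attacker, in effect, learns where such cut-offs sit and keeps every transaction just beneath them.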

Finally, deepfake phishing is becoming a serious threat to the world of financial services: such cases surged by 3,000 per cent in 2023. Using AI, scammers can create fake voicemails or videos of bank executives, for instance, to trick bank employees into transferring money to fraudulent accounts or handing over access to sensitive data.

Properly regulated, AI can be used effectively to manage risk and compliance, bringing tangible benefits to the industry.

In safe hands, AI can deliver long-term results
While AI can be a threat in the hands of fraudsters, as discussed above, it can also help financial institutions proactively identify fraud and suspicious transactions in real time, 24/7.
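As a simplified illustration, here is a minimal anomaly-detection sketch in Python, assuming scikit-learn and NumPy are available. The data is synthetic, and the two features stand in for the much richer signals (device, merchant, velocity, geography) a production system would use.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history of ordinary card activity: [amount, hour_of_day].
normal_activity = np.column_stack([
    rng.normal(60, 20, 1_000),   # typical purchase amounts
    rng.normal(14, 3, 1_000),    # mostly daytime hours
])

# Train an unsupervised model of "normal" behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Score transactions as they arrive: 1 = looks normal, -1 = anomalous.
incoming = np.array([
    [55.0, 13.0],    # routine afternoon purchase
    [950.0, 3.0],    # large purchase at 3am
])
print(model.predict(incoming))   # expected output along the lines of [ 1 -1 ]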

AI is also actively used to improve risk forecasting by analysing big data and identifying unobvious patterns. This should lead to faster and cheaper internal processes, such as issuing loans or investing.
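To make the loan example concrete, here is a toy probability-of-default model in Python, again assuming scikit-learn. Every feature, figure and cut-off is invented for illustration; real underwriting models are trained on large historical datasets and are subject to heavy validation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000

# Synthetic applicants: [income in thousands, debt ratio, years employed].
X = np.column_stack([
    rng.normal(55, 15, n),
    rng.uniform(0.05, 0.6, n),
    rng.integers(0, 25, n).astype(float),
])
# Synthetic outcomes: higher debt ratio and lower income raise default risk.
logit = 3.0 * X[:, 1] - 0.02 * X[:, 0] - 0.05 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Score one applicant: income 48k, 45 per cent debt ratio, 2 years employed.
applicant = np.array([[48.0, 0.45, 2.0]])
pd_hat = model.predict_proba(applicant)[0, 1]
print(f"estimated probability of default: {pd_hat:.1%}")
print("approve" if pd_hat < 0.20 else "refer to manual review")

The speed gain comes from the last few lines: once trained, a model like this prices the risk of a new applicant in milliseconds rather than days.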

Over the next three years, we will see a surge in highly personalised banking services and risk management tailored to the profile and needs of each individual client, again with the help of AI.

But the main trend I see in the next few years is explainable, trustworthy AI: systems designed for transparency and auditability, so that users can better understand the complex mechanisms by which AI systems work.
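For a flavour of what explainability can look like, the sketch below attributes a linear model’s decision to its inputs, in Python with scikit-learn. All data and feature names are synthetic, and more complex models would require dedicated tooling (SHAP and LIME are two widely used examples, mentioned only as pointers).

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
features = ["amount", "hour", "new_merchant"]

# Synthetic transactions and labels: in this made-up data, amount and
# merchant novelty drive the "suspicious" label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500)) > 1.0

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to one decision is
# simply coefficient * feature value, giving an auditable trail.
case = np.array([2.1, -0.3, 1.4])
contributions = model.coef_[0] * case
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>12}: {value:+.2f}")

The signed contributions show which inputs pushed this particular decision towards “suspicious”, which is exactly the kind of trail an auditor or regulator can review.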

Following this logic, the trend towards collaboration will prevail. Banks will actively work together on data exchange and the joint training of AI models to identify risks more effectively, while cooperation between banks and regulators will lead to a deeper understanding of the technology and better decision-making. In this way, it will be possible to come up with tech-agnostic laws that increase trust in AI and help the technology become an indispensable tool in the world of financial services.

About the author

Mikhail Dunaev is an experienced technical lead and software developer in the fintech sector. He joined the Comply Control team in 2023 and oversees product management and machine learning engineering.
 
