Artificial Intelligence (AI) and Machine Learning (ML) are playing an increasing role in assessing credit risk. The key questions are whether they can solve every problem, and whether regulators will accept their use in credit management.
AI & ML technology could find a plethora of use cases in the BFSI (Banking, financial services and insurance) sector, says Sanjay Bajaj, a senior VP with Birlasoft, writing online in Economic Times CFO.com. “And risk management is at the top of this list.”
Between 2017 and 2018, the number of organizations using AI more than doubled, and 40 percent of financial services firms are applying it to risk, says Bajaj, because AI and ML can add value across the whole process – from underwriting, through risk measurement and analysis, to deciding on the final risk exposure.
Deloitte risk advisory partner Hervé Phaure and senior consultant Erwan Robin noted in a recent online commentary that AI can help developers to reduce model risk and improve general model predictive power. But, they say, much of the financial industry remains cautious because of the explainability barrier faced by ML techniques.
“Moreover, this lack of explanation constitutes both a practical and an ethical issue for credit professionals,” they say.
“AI as a topic has reached an inflection point…AI is already transforming the financial ecosystem, offering a wide range of opportunities and challenges, across different sectors, therefore the definition of AI model governance is becoming a key concern. As a consequence, understanding and explaining the output of machine learning is becoming a top priority for banks and regulators.”
Capgemini noted in a recent online commentary that AI can certainly improve the KYC (Know Your Client) process. Banks have realized that data quality is key – and is becoming possibly even more important than the risk management models themselves. And data is an area in which AI may well become standard.
“Even if models work perfectly, the results are unreliable if data quality is not up to standard,” says Capgemini.
ML algorithms can be used to detect data anomalies, prompting further investigation of data entries that don’t make sense. AI technologies can also free up staff for other, more useful tasks by automating report generation. Capgemini also noted that it has been seeing data and modelling departments working together more closely than before.
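As a minimal sketch of what anomaly detection on loan-book data might look like, the following uses scikit-learn’s Isolation Forest on an entirely hypothetical extract; the column names, values and contamination rate are illustrative assumptions, not any bank’s actual pipeline.

```python
# Hedged sketch: flagging suspect data entries with an unsupervised Isolation Forest.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical loan-book extract containing a few clearly implausible rows
loans = pd.DataFrame({
    "loan_amount":   [12000, 8500, 950000, 10500, 7800],
    "annual_income": [48000, 39000, 41000, -5000, 52000],
    "ltv_ratio":     [0.72, 0.65, 0.80, 0.70, 9.50],
})

# contamination is the assumed share of bad rows; tune it to the portfolio
detector = IsolationForest(contamination=0.2, random_state=42)
loans["anomaly"] = detector.fit_predict(loans)  # -1 = anomaly, 1 = normal

# Route flagged rows to a data-quality queue for human review
suspect_rows = loans[loans["anomaly"] == -1]
print(suspect_rows)
```

In practice the flagged rows would feed a review workflow rather than being dropped automatically, which is consistent with the point that AI prompts investigation rather than replacing it.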
The ML algorithms of AI can play a role in PD (Probability of Default), LGD (Loss Given Default) and EAD (Exposure At Default) credit risk models, simply because they can spot relationships that more traditional approaches don’t identify. But this is a relatively new field for credit risk and best practices are still being established.
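To illustrate the point about spotting relationships that traditional approaches miss, the sketch below compares a plain logistic regression with a gradient-boosted model on synthetic data whose default risk depends on a non-linear interaction between loan-to-value and debt-to-income. The features, data-generating process and thresholds are all assumptions for illustration, not a production PD model.

```python
# Illustrative PD (Probability of Default) comparison on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(0.7, 0.2, n),    # loan-to-value ratio
    rng.normal(0.35, 0.1, n),   # debt-to-income ratio
    rng.integers(300, 850, n),  # credit score
])
# Default risk jumps only when LTV and DTI are both high: a non-linear interaction
logit = -4 + 6 * (X[:, 0] > 0.8) * (X[:, 1] > 0.4) + 0.002 * (700 - X[:, 2])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    pd_hat = model.predict_proba(X_te)[:, 1]  # estimated PD per exposure
    print(name, "AUC:", round(roc_auc_score(y_te, pd_hat), 3))
```

On data like this the tree-based model typically ranks defaulters better because it can learn the interaction without it being specified in advance, which is the kind of relationship a hand-built scorecard can overlook.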
Birlasoft’s Bajaj notes that the stakes in credit risk assessment are extremely high for lending banks.
“Inaccurate assessments can cost organizations sizeable amounts,” he says.
“This is further intensified by sub-optimal underwriting, inaccurate portfolio monitoring methodologies, and inefficient collection models.”
Bajaj says AI and ML can play a major role in helping banks meet the requirement for market credit risk models that can process vast volumes of data in shorter time frames, and adjust credit risk assessments in response to updated real-time data.
These technologies are able to learn from complex datasets and become incrementally more accurate over time. Further, the need for human data science expertise and analysts’ efforts is also minimized, as AI & ML models can “black box” the underlying technology to show only the final insights.
One option for increasing the use of AI technologies in credit risk would be for banks to develop AI models alongside the usual AIRB (Advanced Internal Rating-Based) approaches and use these models for evaluation and insight, in order to prove their value to management.
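A hedged sketch of what such a side-by-side evaluation might look like follows: the new ML model is scored as a “challenger” against the incumbent AIRB PDs on the same back-testing window, comparing rank-ordering and calibration. The column names and figures are hypothetical, purely to show the shape of the comparison.

```python
# Champion-challenger back-test sketch: incumbent AIRB PDs vs. a challenger ML model.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hypothetical back-testing extract: realised defaults plus both sets of PD estimates
backtest = pd.DataFrame({
    "defaulted":     [0, 0, 1, 0, 1, 0, 0, 1, 0, 0],
    "pd_airb":       [0.02, 0.03, 0.10, 0.01, 0.08, 0.04, 0.02, 0.12, 0.05, 0.03],
    "pd_challenger": [0.01, 0.02, 0.18, 0.01, 0.15, 0.03, 0.02, 0.20, 0.04, 0.02],
})

for label, col in [("AIRB model", "pd_airb"), ("ML challenger", "pd_challenger")]:
    auc = roc_auc_score(backtest["defaulted"], backtest[col])       # rank-ordering power
    brier = brier_score_loss(backtest["defaulted"], backtest[col])  # calibration quality
    print(f"{label}: AUC={auc:.2f}, Brier={brier:.3f}")
```

Running the challenger purely for evaluation and insight, rather than for regulatory capital, lets a bank build the evidence base for management and supervisors without disturbing its approved AIRB framework.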
However, the Capgemini commentators noted that banks often did not have the resources to begin integrating AI because they were too busy redeveloping and tweaking their existing AIRB systems and responding to new regulatory pressures.
Supervisors may not always be keen to embrace complicated “black box” algorithm models, so it may take time and much more experience for banks to be fully on board with them.
But once AI becomes standard in all sectors, banks will need to ensure they don’t fall behind – especially as they will need to keep on innovating.