AI use in Credit Decisioning - Internal Audit Risk Considerations
Ravinder Singh • 10 June 2025

Why is AI needed in credit decisioning?
An individual consumer wanting credit to support a purchase typically furnishes a couple of months' worth of payslips, bank statements and identification, and consents to a credit bureau check. This requires the applicant to already possess the underlying items: a credit history, a bank account, a job.
Several million people in the UK are unbanked for various reasons: someone who has recently relocated may have no bank account, or an applicant with too little credit history may be turned down for credit. Lenders could generate more revenue from this relatively untapped market.
Advances in processing power have opened up access to additional data sources on financial behaviour and personal characteristics that can support credit decisioning, potentially leading to more credit approvals and broader financial access for consumers.
Where is AI used in credit decisioning?
One of the key uses of AI in credit decisioning in financial services is combining access to data sources on consumer behaviour with advanced algorithms that assess an applicant's probability of repayment and default.
This gives lenders access to the 'unbanked' consumer, expanding the customer base. Efficiency can also improve through automatic approvals and fewer manual referrals, thanks to enhanced algorithms. Some systems monitor outcomes and fine-tune the decisioning models, which vendors claim increases accuracy.
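As a concrete illustration of the probability-of-default scoring and automatic approval described above, the sketch below uses a logistic model with invented coefficients and a threshold-based approve/refer/decline rule. This is a minimal, hypothetical example: real models use far richer features, calibrated parameters and governed thresholds.

```python
import math

# Illustrative only: these coefficients and feature names are invented,
# not drawn from any real lender's model.
COEFFS = {"intercept": -2.0, "missed_payments": 0.8,
          "utilisation": 1.5, "account_age_years": -0.1}

def probability_of_default(missed_payments, utilisation, account_age_years):
    """Logistic model mapping applicant features to a default probability."""
    z = (COEFFS["intercept"]
         + COEFFS["missed_payments"] * missed_payments
         + COEFFS["utilisation"] * utilisation
         + COEFFS["account_age_years"] * account_age_years)
    return 1.0 / (1.0 + math.exp(-z))

def decision(pd_estimate, auto_approve_below=0.05, refer_below=0.20):
    """Three-way outcome: auto-approve, refer to a human, or decline.
    The cut-offs here are illustrative risk-appetite parameters."""
    if pd_estimate < auto_approve_below:
        return "approve"
    if pd_estimate < refer_below:
        return "refer"
    return "decline"
```

The thresholds are the point where risk appetite enters the system: lowering `refer_below` routes more applications to manual review, trading efficiency for oversight.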
Risks within AI in credit decisioning
Most banks build custom models that integrate with, or replace, their existing loan origination systems. A growing alternative is an AI-powered credit decisioning system delivered on a Software as a Service (SaaS) basis, which integrates with existing loan origination systems via Application Programming Interfaces (APIs) and avoids the time and cost of custom development. In either case, the associated risks are best understood with detailed knowledge of the systems and models used.
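The SaaS integration pattern can be sketched as follows. The endpoint and field names here are hypothetical, invented purely to illustrate the shape of such a call; real vendor APIs will differ, and the network request itself is shown only as a comment.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical endpoint: this URL does not correspond to any real vendor.
DECISION_ENDPOINT = "https://decisioning.example.com/v1/applications"

@dataclass
class CreditApplication:
    applicant_id: str
    requested_amount: float
    term_months: int
    bureau_consent: bool

def build_decision_request(app: CreditApplication) -> str:
    """Serialise a loan-origination record into the JSON body a SaaS
    decisioning service might accept over its API."""
    if not app.bureau_consent:
        raise ValueError("bureau consent required before an external call")
    return json.dumps(asdict(app))

# The actual call (not executed here) would POST the body to the vendor,
# e.g. via urllib.request.urlopen(DECISION_ENDPOINT, data=body.encode()).
```

From an audit perspective, the serialisation boundary is a useful control point: consent checks, field-level data minimisation and logging all belong where the data leaves the bank's estate.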
The broader risks associated with AI in credit decisioning include:
- Data source integrity – where does the data originate, what biases may exist within it, how old is it, and what completeness and accuracy checks are performed by both the provider and the user?
- Access to the 'unbanked' – where an individual has limited or no financial history and the model substitutes behavioural data from similar segments of the wider population, how well does the model design predict ability to pay, and how much of this risk is acceptable under the bank's strategy and risk appetite?
- How does the model monitor the performance of consumers' credit and 'fine-tune' itself to improve its creditworthiness assessments?
- Does the applicant know that AI is used in the credit decision, and for which aspects? An application declined because of an AI-driven assessment could affect the applicant's credit history.
- What metrics or periodic testing are performed by the first and second lines of defence to ensure bias is not being introduced into the credit decisioning models?
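One simple periodic test the first and second lines could run is an approval-rate disparity check, sketched below. The four-fifths (0.8) threshold is a long-standing convention from US adverse-impact analysis, used here for illustration; the decision lists and group labels are invented sample data.

```python
def approval_rate(decisions):
    """Share of applications in a group that were approved."""
    return sum(1 for d in decisions if d == "approve") / len(decisions)

def adverse_impact_ratio(protected_group, reference_group):
    """Ratio of group approval rates. Values below ~0.8 (the 'four-fifths
    rule' used in adverse-impact testing) flag potential disparity
    warranting investigation; they do not by themselves prove bias."""
    return approval_rate(protected_group) / approval_rate(reference_group)
```

In practice this check would be run per protected characteristic on each model release and on rolling production data, with results reported to the model risk committee.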
How we can help
An important first step is to map out, in a process diagram, how the credit decisioning system works, highlighting in particular where AI sits in the control process. An assessment of where the greater risks lie within the process should then be undertaken. These starting points build a stronger understanding of AI's use in credit decisioning. In addition, best practice adopted across the industry can provide valuable insights into risk management.
To see how we can develop and support your audit needs in this area, please reach out to us. We not only join the audit team to contribute our experience, insights and knowledge, but also collaborate with the team so that it can benefit from, and lead, similar projects in the future.