Mortgage Lenders Beware: AI Transparency and the Rise of AI Discrimination Claims

Like many in my profession, I find myself arriving a little late to the conversation about AI’s growing influence on our work. Then again, judging by how few blog posts there are on the subject, perhaps I’m not as late as I thought.

One colleague who is ahead of the curve is Matthew Lee of Doughty Street Chambers. Matthew is a civil and public law practitioner specialising in housing and property law; in recent months, however, he has shown an increasing interest in AI and its imminent impact on our profession. After reading a couple of his LinkedIn articles, I had the pleasure of discussing the topic with him, and we decided to collaborate on this article: I shall cover the first half, focusing on where the use of AI in mortgage lending practices may have serious legal ramifications, while Matthew will take on the second half with his analysis of AI-related discrimination in mortgage lending.

During our conversation, he pointed me to a recent speech by the Master of the Rolls, Sir Geoffrey Vos, at the LawtechUK Generative AI Event on 5 February 2025. In it, Sir Geoffrey predicted that AI negligence claims would soon become one of the biggest fields of legal activity, arising from both “the negligent or inappropriate use of AI, and also the negligent or inappropriate failure to use AI”. Or, as Matthew aptly paraphrased it, ‘the misuse and missed-use of AI’.

Upon reading this, I was immediately reminded of my recent mortgage renewal application, in which my lender told me that, based on the size of my existing mortgage and the value of my property, I had a loan-to-value (LTV) ratio of 67%. The valuation seemed to be off by at least £25,000, and when I challenged it, I was told that the figure had been generated by the bank’s automated AI calculation system. With my fixed-term contract nearing its end, I was tempted simply to accept the percentage and move on. However, if there was a chance I could save approximately £100 per month by challenging it, then I would certainly do so. So I stuck to my guns, formally challenged the valuation, and instructed a valuation surveyor to carry out a site inspection and provide a report.

Thankfully, the surveyor assessed the property at £25,000 more than the bank’s AI-generated figure, and I was given an LTV of 65%. All’s well that ends well.
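For the numerically minded, the underlying arithmetic is simple. Below is a minimal sketch in Python using purely hypothetical figures (not my actual mortgage), showing how a £25,000 difference in valuation can move a borrower from one LTV band to another.

    def loan_to_value(outstanding_loan, property_value):
        """Return the loan-to-value ratio as a percentage."""
        return outstanding_loan / property_value * 100

    loan = 545_000                      # hypothetical outstanding mortgage balance
    bank_valuation = 813_000            # hypothetical AI-generated valuation
    surveyor_valuation = bank_valuation + 25_000   # after the site inspection

    print(f"Bank figure:     {loan_to_value(loan, bank_valuation):.0f}% LTV")      # 67% LTV
    print(f"Surveyor figure: {loan_to_value(loan, surveyor_valuation):.0f}% LTV")  # 65% LTV

Lenders commonly price their fixed-rate products in LTV bands (60%, 65%, 75% and so on), which is why dropping from 67% to 65% can translate into a materially cheaper monthly payment.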

However, this experience left me wondering a few things. First, how many borrowers would simply accept these automated calculations without question? Second, what issues might this raise for borrowers entering into mortgage contracts without first being informed as to how such calculations are generated? And third, what issues might it raise for mortgage lenders relying on AI programs without disclosing that reliance to their borrowers?

It’s here that the legal questions start to get interesting. Under the FCA’s Mortgage Conduct of Business (MCOB) rules, lenders are required to treat customers fairly and to ensure that all communications are clear, fair and not misleading. While there isn’t yet a specific rule requiring a lender to tell you that an AI system is making the calculations, the general principles of transparency and fairness run through the FCA’s regulatory framework. If a borrower is given something as important as an LTV ratio, which has a direct impact on their mortgage terms, and that figure is generated by an AI system that was not disclosed to the borrower before entering into the mortgage contract, there is at least an argument that the borrower ought to have been told about the process, especially if there is a risk of error or bias.

The legal test here is, in many ways, about whether the lender has provided sufficient information for the borrower to make a fully informed decision about entering into a mortgage contract. If the use of AI is material to the decision-making process, and especially if it leads to an error or a biased outcome, then a failure to disclose could amount to a breach of the FCA’s principles or even give rise to a misrepresentation. After all, the courts and the Financial Ombudsman tend to look at whether the borrower has been treated fairly and whether anything material has been withheld that might otherwise have influenced their decision.

There’s also the question of redress. If a borrower suffers a loss because an AI system generated an inaccurate figure and the lender failed to disclose how that figure was reached, the borrower could have grounds to complain to the Ombudsman and could possibly even claim damages for any losses suffered as a result. Both the Ombudsman and the court look at whether the lender acted with due care, skill, and diligence, and whether the borrower was given a fair opportunity to challenge or query the automatically generated calculation.

Therefore, as lenders increasingly turn to AI for efficiency and speed, the legal landscape is shifting, perhaps faster than lenders can keep up with. Case in point: while writing this article, I looked back at the initial (unsigned) mortgage contract provided to me in reliance on the AI-generated LTV ratio, and nowhere within its terms was there a clause setting out how the LTV ratio had been calculated.

Transparency, explainability and fairness are becoming more than just regulatory buzzwords; they are likely to be the standards by which lenders will be judged in the years to come.

I shall now hand over to Matthew for his analysis of AI-related discrimination in mortgage lending.

AI Discrimination in Mortgages

I’m grateful to Michael for his points on MCOB and his observations above, which merit careful consideration. My focus will be on potential discrimination in mortgage lending practices, which has been highlighted in the US in claims such as Williams v Wells Fargo (2022), filed in the US District Court for the Northern District of California.

In that case, it is alleged that there has been systematic racial discrimination by the Defendant in its mortgage lending practices. The Claimant, an African American citizen of Georgia with a high credit rating, was offered significantly less favourable mortgage terms compared to white applicants with similar or lesser financial credentials. This is said to reflect a broader policy of discrimination affecting thousands of African American customers, resulting in higher costs, increased financial vulnerability, and broader socio-economic harm.

The lawsuit specifically cites breaches of:

  • Equal Credit Opportunity Act (ECOA)
  • Fair Housing Act
  • Civil Rights Acts (Sections 1981 & 1982)

Artificial Intelligence plays a critical role here, though it is mentioned only indirectly. The lawsuit alleges Wells Fargo used a “unique scoring model” beyond standard credit checks (such as FICO scores). Such proprietary algorithms often incorporate machine learning or AI-driven tools, which can unintentionally amplify existing racial biases through the historical data they are trained on or the criteria chosen, even when those biases are not explicitly programmed. The use of algorithmic decision-making thus risks reinforcing systemic discrimination by embedding biases into lending outcomes and perpetuating historical disparities.
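To illustrate the mechanism only (this is a hypothetical sketch, not a reconstruction of Wells Fargo’s model), the short Python example below trains a simple scoring model on synthetic ‘historical’ approval decisions that were biased against one group. The protected characteristic is never given to the model as a feature, yet a correlated proxy variable (here, a postcode indicator) allows it to reproduce the disparity.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)           # protected characteristic (0 or 1)
    postcode = (group + rng.random(n) > 0.8).astype(float)  # proxy correlated with group
    income = rng.normal(50, 10, n)          # legitimate feature, identical across groups

    # The 'historical' decisions were biased: group 1 was approved
    # less often than group 0 at the same income level.
    approved = (income - 15 * group + rng.normal(0, 5, n) > 40).astype(int)

    # Train on income and postcode only -- the protected attribute is excluded.
    X = np.column_stack([income, postcode])
    model = LogisticRegression().fit(X, approved)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: predicted approval rate {pred[group == g].mean():.0%}")

The point is that excluding the protected characteristic from the inputs does not, by itself, make a model non-discriminatory: the bias survives in whatever the historical data correlates with it.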

Previous litigation (e.g., Opal Jones v Wells Fargo) explicitly referenced discriminatory outcomes from algorithmically informed software, illustrating the significant legal and ethical risks of AI-driven lending decisions.

In the UK, there are growing concerns about the use of algorithms and AI tools in mortgage and credit decisions. Under the Equality Act 2010, lenders are prohibited from engaging in the conduct set out in Chapter 2 of Part 2 of that Act, which includes direct and indirect discrimination against those with protected characteristics, including race, sex and disability.

Discrimination arguments rarely appear in UK mortgage proceedings, but these issues have already reached the senior courts: see, for example, Green v Southern Pacific Mortgage Ltd [2018] EWCA Civ 854, in which the Court of Appeal highlighted the complexity involved in discrimination claims, particularly around mortgage costs and the exact nature of the services lenders provide. The difficulty and expense of pursuing these claims may explain why so few ever come before the UK courts.

But could this be about to change?

UK lenders are increasingly using algorithmic and AI-driven systems to determine mortgage affordability, risk, and pricing. Without transparency or proper oversight, these systems might unintentionally discriminate against certain groups. This type of discrimination may be subtle and difficult to identify, meaning affected borrowers might not even realise they have been disadvantaged. Even if identified, it’s unclear whether borrowers will have access to sufficient legal support to effectively challenge such decisions or whether the high costs and risks involved might deter them from even trying. In that regard, the observations of Lord Justice Peter Jackson in Green are informative.
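How might such a disparity be detected in practice? One simple audit, sketched below in Python with hypothetical outcome data, compares approval rates across groups and computes an impact ratio. The four-fifths benchmark referred to in the comments is a US regulatory rule of thumb drawn from EEOC guidance; the Equality Act 2010 sets no numeric threshold, so the figure is illustrative only.

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approvals = {}, {}
        for group, approved in decisions:
            totals[group] = totals.get(group, 0) + 1
            approvals[group] = approvals.get(group, 0) + int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    # Hypothetical outcome data: 1,000 applications per group.
    decisions = ([("A", True)] * 720 + [("A", False)] * 280
                 + [("B", True)] * 450 + [("B", False)] * 550)

    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)                          # {'A': 0.72, 'B': 0.45}
    print(f"impact ratio: {ratio:.2f}")   # 0.62 -- well below the 0.8 rule of thumb

An audit of this kind would not by itself establish indirect discrimination, but it is the sort of statistical evidence on which such a claim, or a regulator’s inquiry, might be built.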

The recent US class action against Wells Fargo offers a stark warning to lenders everywhere, including in the UK. Banks face significant legal risks if their AI-driven decision-making disproportionately harms protected groups without clear and evidence-based justification.

As AI becomes more prevalent in UK lending, banks must proactively ensure fairness, transparency, and accountability in their systems to avoid discrimination claims. Borrowers, and perhaps regulators, will likely demand greater scrutiny of AI-driven lending decisions, potentially reshaping future mortgage litigation and influencing lender practices significantly.

Ultimately, to safeguard against discrimination and costly litigation, UK financial institutions will need to prioritise fairness and transparency in their adoption of AI-driven mortgage lending tools.


2nd May 2025

Michael Grant

Call 2009

