They’re expensive, they may break down – and, increasingly, debate rages over the extent to which they will, or should, be automated.
Reading the FCA’s Mortgage Rule Review consultation paper, what strikes me most is what isn’t mentioned. Nowhere does it refer to artificial intelligence (AI) or automation.
Yet I can’t help but feel that unleashing AI is one of the main goals for a regulator seeking to answer chancellor Rachel Reeves’ call to focus less on risk and more on growth.
We shouldn’t resist change. But we must be clear-eyed about what kind of change we’re inviting
For those who haven’t read the paper, there are three core elements. Two are straightforward and self-contained. One could be a watershed for our industry.
Least contentiously, the FCA proposes retiring outdated guidance on interest-only customers and cost-of-living support, and removing the need for affordability checks for those remortgaging to cheaper deals without increasing their debt. These are sensible tweaks to existing rules that should help widen access to the remortgage market.
Now to the meaty bit. The FCA wants to “simplify and clarify” the mortgage advice perimeter by removing the concept of the “interactive dialogue” trigger.
I firmly believe the proposed changes to the advice perimeter are designed to pave the way for widespread use of AI – but via the back door
As Seb Murphy wrote for Mortgage Strategy in May, removing the current clear steer towards regulated advice at the first sign of customer interaction creates a heap of tricky implications – for the advice process, for advisers, for the Consumer Duty and ultimately for customer outcomes. I won’t repeat those arguments here.
High stakes
So how does all this relate to cars? And why should we be cautious?
We heard last month that Uber is ready to launch AI-guided driverless cars on UK roads – but regulation won’t be ready until the second half of 2027. I see significant parallels with the potential for AI mortgage advice – and a need to proceed very cautiously.
The similarities are clear. Both are currently dominated by human interaction, with no track record of full automation or digital delivery. Both are high stakes: the potential consequences of failure have led to highly regulated environments, with sensible checks and balances to maintain safety.
We’d better be sure who’s responsible when the crash comes
Both rely on the availability and interpretation of quality data to deliver ‘safe’ outcomes. Both face scepticism about reliability, fairness and accountability. Both are subject to scrutiny over how decisions are made and how to ensure transparency. Both raise serious questions about liability in the event of failure – especially if that failure is systemic.
And, ultimately, both will require a seismic shift in cultural and consumer comfort before mass adoption becomes mainstream.
Back door
Returning to mortgages, I firmly believe the proposed changes to the advice perimeter are designed to pave the way for widespread use of AI – but via the back door.
Currently, there is no way to deliver ‘robo’ advice that is compliant with both MCOB and the Consumer Duty. Quietly removing the “interactive dialogue” trigger means advice delivered through non-human channels (such as AI chatbots) could be treated the same as advice delivered by humans, with the regulatory focus shifting instead to whether a personal recommendation is made.
We’ve seen in other sectors what happens when technology outpaces regulation
Likewise, the proposed “clarification” of the advice boundary – i.e. focusing on the presence of a personal recommendation – means firms can more confidently design AI tools that offer guidance or filtered product options without necessarily crossing into regulated advice territory; unless they want to (which, from a cost perspective, they surely would).
The shift away from “interactive dialogue” as the threshold for advice isn’t just a technicality. It’s a potential green light for firms to build digital journeys that look and feel like advice, without necessarily triggering the regulatory obligations that go with it. In a world where AI is getting smarter, slicker and more human-like, we risk a future where customers don’t know if they’re being advised, guided or merely nudged.
This matters because, unlike a self-driving car that takes a wrong turn, poor mortgage advice can take years to surface – and be far harder to reverse. If we automate badly – or regulate too loosely – consumers won’t just suffer inconvenience. They could suffer long-term financial harm.
Currently, there is no way to deliver ‘robo’ advice that is compliant with both MCOB and the Consumer Duty
The FCA talks about simplifying the rules to allow for innovation. But innovation without clarity on accountability is dangerous. If an AI tool gives a personal recommendation, who is liable if it turns out to be wrong? The software developer? The lender? The compliance officer? The algorithm itself?
These aren’t just hypothetical concerns. They are the kinds of questions regulators will be forced to answer in courtrooms if they aren’t addressed in consultation papers now.
We’ve seen in other sectors what happens when technology outpaces regulation. From social media to crypto to generative AI, the pattern is always the same: disruption, adoption, backlash, reform.
Innovation without clarity on accountability is dangerous
The mortgage market has a chance to get ahead of that curve. But only if we ask the right questions now. We shouldn’t resist change. But we must be clear-eyed about what kind of change we’re inviting.
Removing the advice trigger may look like a tidying-up exercise, but it could be the equivalent of handing the keys to an AI driver before the seatbelt laws are even written.
And, if we do that, we’d better be sure who’s responsible when the crash comes.
Alex Beavis is founder of Beavis Advisory
This article featured in the June 2025 edition of Mortgage Strategy.