Tackling Fraud in 2025

Dan McLoughlin, Fraud Specialist at Lynx Tech

5 Mar 2025

AI, Fraud

In the first half of 2024, criminals stole £571 million in the UK through unauthorised and authorised fraud, a stark reminder that existing fraud prevention methods are failing to keep pace. The scale and sophistication of fraud in the UK, and around the world, are rising rapidly, but banks' ability to stop criminals in their tracks is not. Despite the £710 million in unauthorised transactions thwarted during the same period, the amount of fraud that still left victims' bank accounts is shocking.

The introduction of regulations like PSD2 and Strong Customer Authentication forced fraudsters to adapt their tactics. This led to a significant shift towards social engineering, exploiting customers’ trust on their own devices, directly resulting in the surge of Authorized Push Payment (APP) fraud. The ease with which criminals can now target victims on their trusted devices, combined with the rise of sophisticated AI tools, has made these attacks significantly more difficult to detect.
Whether the scammer befriends the individual through a lengthy romance scam, perhaps asking them to send money to support a sick family member, or masquerades under a fake profile on a social media app selling tickets, the goal is the same: to impersonate ‘someone trusted’ and secure a direct bank transfer. Artificial Intelligence (AI) has been a game changer for criminals here, turning easy-to-spot scams into deep, complex, layered social engineering attacks.

AI is in the pockets of criminals

Criminals are using AI to create incredibly realistic and convincing fake profiles online. The technology is leveraged to create photo-realistic identities, communicate in any language and develop personalised messages used for manipulation. This sophisticated use of AI renders traditional methods of detecting fake profiles largely ineffective, making it unrealistic to rely solely on end users to identify and prevent these attacks.
Deepfake technology can make calls appear authentic, presenting criminals as the individuals they are impersonating. Five years ago, if you were speaking to a friend who had met someone online but never spoken to them on the phone or met them in person, concerns would be raised. Today, scammers speak to their victims on the phone and on video calls, nurturing deep, intimate relationships over months, or even years.

The AI technology they use can enable them to bypass identity verification processes. They may employ techniques such as “injection stream attacks,” which involve inserting malicious data or code to deceive systems into accepting fraudulent inputs. Or they might use straightforward methods, like a phone app, to alter their appearance and resemble someone else.

The technology also enables criminals to have more time to scale operations and scam hundreds, even thousands, of individuals at the same time. These AI-generated interactions make it even easier to build emotional connections online, often leading victims to trust and eventually send money to scammers.

Banks are now mandated to reimburse victims up to £85,000, but for fraudsters this is a win-win. Not only is committing these crimes relatively easy, but criminals also know victims are likely to recoup their losses, leaving financial institutions to bear the brunt of the financial burden.

Adopting new models to tackle fraud

To date, banks have typically used static fraud prevention models, which are trained once and then deployed for long periods without being updated. This approach is limited: static models cannot keep up with ever-changing fraud patterns, a weakness criminals know and rely on.

Banks need to evolve and find new ways in which they can fight the fraudsters. With Daily Adaptive AI-driven Models, financial institutions can monitor spending behaviours and identify suspicious activity, keeping one step ahead of the scammers. By analysing hundreds of thousands of data points – from the location of the receiving bank account to the erratic nature of the payment made in an app – Daily Adaptive Models can identify and prevent fraudulent transactions in less than a second.
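To make the idea of a daily adaptive model concrete, here is a minimal, illustrative sketch of an online fraud scorer that is updated incrementally each day on the latest labelled transactions. This is not Lynx's actual system: the features, thresholds, and training scheme are all hypothetical, and a production model would use far richer signals (receiving-account location, payment velocity, device data) and a proper ML stack.

```python
import math

class DailyAdaptiveScorer:
    """Toy online logistic-regression fraud scorer.

    Illustrative only: feature names and update rule are hypothetical,
    chosen to show how daily incremental updates let a model track
    shifting fraud patterns instead of remaining static.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per transaction feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate for gradient updates

    def score(self, x):
        # Probability that this payment is fraudulent (sigmoid of a
        # weighted sum of features) -- cheap enough for sub-second scoring.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def daily_update(self, batch):
        # Incremental retraining on the latest day's labelled transactions
        # (label 1 = confirmed fraud, 0 = legitimate), so the model adapts
        # daily rather than being trained once and left to go stale.
        for x, label in batch:
            err = self.score(x) - label  # gradient of the log-loss
            self.b -= self.lr * err
            self.w = [wi - self.lr * err * xi
                      for wi, xi in zip(self.w, x)]

# Hypothetical features: [amount z-score, new payee flag, overseas account flag]
scorer = DailyAdaptiveScorer(n_features=3)
day_one = [([0.1, 0.0, 0.0], 0), ([2.5, 1.0, 1.0], 1)] * 50
scorer.daily_update(day_one)

# After one day's update, a fraud-like payment scores higher than a routine one.
print(scorer.score([2.4, 1.0, 1.0]) > scorer.score([0.2, 0.0, 0.0]))  # True
```

The key design point is that `daily_update` runs on fresh labelled data every day, so yesterday's scam patterns immediately influence today's scores, which is exactly the property static, train-once models lack.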

The rapid evolution of fraud tactics necessitates a paradigm shift in fraud prevention. Static models are no longer sufficient. To effectively combat AI-driven fraud, financial institutions must adopt AI-powered solutions that continuously learn and adapt, keeping them one step ahead of the ever-changing landscape of financial crime.
