Fraud Prevention: Why a rules-based solution is ineffective

We can confidently say that our machine learning algorithms outperform conventional rules-based approaches by a staggering factor of up to 100. [1]

In practice, this can translate into up to 10 times less fraud and 10 times fewer false positives.


Intrigued? Read on to uncover the rationale behind this claim.


Greg Hancell

25 Mar 2024


As artificial intelligence becomes increasingly ingrained in our daily lives, so does its adoption by both sides of a tech tug-of-war between the good and the not-so-good. This article will address how both sides play out, what this means for you, and Lynx's role in illuminating attackers and protecting customers and the financial institution.


Setting the Scene

Picture this: Attackers are armed to the teeth with the latest tools, techniques, and procedures to overcome the defense of financial institutions and their customers.  

Through AI wizardry: 

  • They’re slipping past conventional fraud defenses, 
  • Expanding their reach, and 
  • Making it a puzzle to pin them down.

Here is the scoop: At onboarding, attackers use generative AI to create synthetic identities and synthetic IDs such as passports or driver’s licenses. They’re even using generative AI to transform who we see during video identification, and to cook up new flavors of phishing campaigns or slick scams. 

The UK is grappling with a surge in Authorized Push Payment (APP) scams, tallying an eye-popping £485.2 million in losses in 2022 alone. In the face of this onslaught, the UK Government is on the warpath to slash fraud rates to below 2019 levels by December 2024. As of October 7, 2024, the cost of reimbursing victims is even split evenly between the sending and recipient institutions. 

Globally, APP fraud is stealing the spotlight, accounting for 75% of all digital banking fraud by value. Get ready: losses in the US, UK, and India are expected to double, reaching $5.25bn by 2026, a 21% compound annual growth rate over the period, according to Fintech Global [2]. 

As scammers get creative with synthetic identity cons, the wave of mule accounts is becoming a tsunami. Automated synthetic identity fraud is a growing challenge, resulting in mass mule accounts following the compromise of identities. 

Money mule bank accounts can be used by criminals to wash money. According to new insights from Experian, 42% of first-party current account fraud is now mule-related, with the fraud rate for current accounts in the UK rising by 13% in the first three months of 2023 [3]. 

This new approach taken by attackers can leave a financial institution vulnerable to a network of mule accounts, sometimes actively used in money laundering, other times staying dormant and waiting in the shadows for the right time to pounce.   


What Changed?

In the wake of the COVID-19 paradigm shift, the digital landscape has witnessed a surge in remote interactions, intensifying the requirements for increased trust in digital interactions. This puts more pressure on eliminating risk during onboarding, provisioning, login, authentication, and account recovery for financial institutions and their customers. 

With remote work arrangements becoming the norm, more individuals potentially never meet their employer, leaving them vulnerable to sophisticated employment scams, adding to the array of threats looming over unsuspecting consumers. 

Take secret shopper scams: the victim believes they are working for a company that specializes in secret shopping reviews, yet is unknowingly acting on behalf of an organized crime group. They may be making purchases on compromised cards or using their own money to buy goods in expectation of salary and reimbursement. In reality, they may be part of the attack chain, exfiltrating funds from compromised accounts and cards or enabling money laundering through the purchase of highly liquid goods. 

The problem with payment scams, like APPF (authorized push payment fraud), is that the person carrying out the transfer or purchase believes it is something they need to do, as they have been socially engineered. 

Such scenarios illustrate how genuine customers of financial institutions or merchants can unwittingly become pawns in elaborate financial scams. This change of attack approach is significant for three reasons: 

  • Circumvention of client-side and interaction-based fraud solutions, 
  • Proliferation of the network and coverage for the attackers, and  
  • Increased complexity in identifying the attacker(s). 

This is because it negates components typically used by a fraud prevention solution to identify indicators of compromise such as device, location, headers, and cookies. Additionally, it enables the attackers to scale up by offering jobs to an unknowing network of victims, further distancing the malicious actor and adding a layer of anonymity to their attacks. 

This ever-evolving threat landscape necessitates financial institutions to adopt dynamic strategies to counter emerging risks. They do this by: 

  • Educating their customer base with partial effectiveness, 
  • Implementing new technologies and technological enhancements, 
  • Setting up intervention teams that call victims to “break the spell,” 
  • Using tools to determine user interaction and atypical interactions, 
  • Identifying those who are being coerced or acting strangely, and 
  • Understanding user spending and payment methods. 

Understanding customer interactions, financial behavior, and early detection of scams is only possible with machine learning due to the need to learn behaviors and predict and identify atypical behaviors.  As the financial landscape grapples with novel threats and rapidly evolving attack vectors, deploying AI technologies remains essential to address vulnerabilities overlooked by traditional rules-based systems. 

In addition, financial institutions are continuously innovating with new products and new ways for people to interact with them. This can mean changed user behavior as the user experience evolves, new risks for new products, and generally more data to analyze and react to. It is important that FIs have a solution that can keep pace with attackers, innovation, and changing customer behavior. 


Evolving Your Defense

Lynx applies the most advanced machine learning and anti-fraud techniques to fortify against emerging threats. Leveraging over two decades of experience partnering with financial institutions (FIs), we facilitate the evolution of fraud prevention initiatives, catch more fraud, reduce losses, and improve operational costs. 

Typically, an FI that uses a rules-based solution is constantly inundated with a high number of false alerts and fraud. They can marginally reduce the number of false alerts and fraud by moving to unsupervised machine learning.  

As good as unsupervised machine learning is at identifying unusual patterns, it is not trained to identify fraud. Unfortunately, something unusual does not necessarily mean fraud, and there can be genuine reasons why customers do something strange. 

However, the fundamental paradigm shift occurs when they evolve to highly performant supervised machine learning models. We have the statistics to back this up; by comparing our daily adaptive model performance to the competition, we can see a significant uplift in fraud identified with a reduction in false positives. 

As you improve the solutions, people, and processes, identifying and mitigating advanced new attacks becomes second nature. Our extensive experience puts us in a unique position to offer firsthand insights into effective solutions that combat advanced attacks while steering clear of ineffective strategies. Let us assist you, the subject matter expert, in navigating the landscape of best practices and essential questions you must consider when evaluating a vendor for your financial institution. 

Contact us today to discover how we can enhance your fraud prevention strategies and safeguard your institution’s integrity. 

Is There a Difference Between Rules and Machine Learning?

Absolutely. Our robust machine learning algorithms consistently outperform rules by up to a factor of 100, translating to potentially 10 times less fraud and 10 times fewer false positives. 

Why Do Rules Not Perform as Well as Machine Learning?

Rules-based solutions rely on predefined logic set by rule builders with prior knowledge of attack patterns. However, this approach has limitations: 

  • The rules must match the specific attacks being defended against, 
  • Rules lack the ability to learn or adapt to new attacks, 
  • Rules are only effective at the time of creation, as they are based on historical attacks, 
  • They become less effective over time and do not account for evolving customer and attacker behaviors, 
  • Rules only consider a limited number of dimensions, and 
  • The attacker receives a response from the rule and changes their attack to bypass it. 

The rule writer is also subject to many biases, such as availability bias, confirmation bias, and recency bias. Ultimately, this means that unless you write tens of thousands of rules and update them every day, you will not get close to an algorithm (which can be seen as hundreds of thousands of rules) in its ability to identify fraud and reduce false positives. In contrast, machine learning offers dynamic capabilities to predict and adapt to various types of fraud, making it a more robust solution for modern security challenges. 
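
To make the contrast concrete, here is a minimal sketch. All names, thresholds, and weights are hypothetical and purely illustrative: a hand-written rule watches one dimension with a fixed threshold, while a model effectively combines many weighted signals at once.

```python
def rule_flags(txn):
    # Typical hand-written rule: one dimension, one fixed threshold.
    return txn["amount"] > 1000

def model_score(txn, weights):
    # A model effectively combines hundreds of weighted signals;
    # here just a few, to show the idea.
    return sum(weights[k] * txn.get(k, 0) for k in weights)

# Illustrative weights a model might learn from labeled history.
weights = {"amount_zscore": 0.6, "new_device": 1.2,
           "night_time": 0.4, "new_payee": 0.9}

# An attacker splits a large transfer to slip under the rule's threshold.
split_txn = {"amount": 900, "amount_zscore": 1.5,
             "new_device": 1, "night_time": 1, "new_payee": 1}

print(rule_flags(split_txn))            # the single rule misses it
print(model_score(split_txn, weights))  # the combined score stays high
```

The rule is bypassed the moment the attacker learns the threshold; the multi-signal score has no single knob to game.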

It is important to discuss different forms of machine learning, how a model is trained, the data used, the features used, and how these limit or enable their success. 

Unsupervised Machine Learning and Static Machine Learning Models

When implementing machine learning, it’s crucial to explore different methodologies, such as unsupervised and supervised learning, each with unique capabilities and limitations. Unsupervised machine learning is typically used when a financial institution can’t access relevant labeled data (fraud), so the model is trained on patterns. When an unusual pattern is detected, it is flagged as atypical and needs to be reviewed.  

Meanwhile, supervised machine learning uses labeled data (fraud) in the training data, meaning the algorithm not only learns what is unusual but also the likelihood of it being associated with fraud. This makes the model more accurate in identifying fraud with fewer false positives. 

This is a really important point, as customers of financial institutions can do irregular things from time to time. However, it’s not fraud. In this case, an unsupervised model will generate a false alert, whereas a supervised model will not. 
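
A toy sketch of this difference, with entirely hypothetical numbers and names: the "unsupervised" detector only measures how unusual a payment is, while the "supervised" detector also consults fraud rates learned from labeled history.

```python
# Illustrative only: "unsupervised" here is just distance from the
# customer's typical spend; "supervised" also uses fraud labels.

typical_spend = 50.0

def unsupervised_alert(amount, threshold=3.0):
    # Flags anything unusual, whether or not it is fraud.
    return amount / typical_spend > threshold

def supervised_alert(amount, new_payee, fraud_rate_by_pattern):
    # With labels, the model knows which unusual patterns are actually risky.
    pattern = ("high" if amount / typical_spend > 3.0 else "normal",
               new_payee)
    return fraud_rate_by_pattern.get(pattern, 0.0) > 0.5

# Hypothetical rates learned from labeled history: unusual spend to an
# existing payee is almost never fraud; to a brand-new payee it often is.
fraud_rates = {("high", False): 0.02, ("high", True): 0.8,
               ("normal", False): 0.001, ("normal", True): 0.05}

# A genuine customer booking a holiday with a known travel agent:
print(unsupervised_alert(400))                    # false alert raised
print(supervised_alert(400, False, fraud_rates))  # correctly suppressed
print(supervised_alert(400, True, fraud_rates))   # new payee: still flagged
```

The unusual-but-genuine payment triggers the unsupervised detector but not the supervised one, which is exactly the false-positive gap described above.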

Additionally, where the machine learning model is only trained at set intervals, i.e., a “static model,” the model drifts over time and becomes less effective. A great example is the pandemic, when lockdowns changed how much people spent online versus in person at shops. Overnight, this broke most of the static supervised models in use. 

User behavior changes daily as do the products offered by financial institutions. It is, therefore, important that the machine learning model is updated to understand new attacks and new behavior to maintain accuracy and reduce false positives. 

Supervised Machine Learning

Supervised machine learning is a powerful technique that trains a model to predict fraud, because the training set includes both non-fraud and fraud. When you compare the performance of supervised and unsupervised machine learning, supervised typically improves on unsupervised by a factor of three: it finds more fraud while generating three times fewer false alerts. 

Supervised machine learning is the recommended path for our clients. By doing so, we will reduce fraud, decrease the operational cost of monitoring alerts, and improve the job satisfaction of fraud analysts by removing alert fatigue. 

Static Models

Typically, the starting point of most machine learning (ML) fraud prevention applications, static models refer to a machine learning model trained on prior data, which is then deployed to identify fraud. The problem with static models, much like advanced rules, is that they are only relevant for the past and not for the future. Since static models do not learn from new data and new attacks, the machine learning model becomes less accurate over time, referred to as model drift. After a few weeks or months, the model will need to be retrained. 
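
Drift can be shown in a few lines. This sketch (hypothetical data and a deliberately crude "model") calibrates a spend threshold once on pre-shift behavior; when behavior shifts, every genuine payment suddenly looks anomalous.

```python
import statistics

# A "static model" reduced to its essence: a threshold calibrated once.
def calibrate(amounts):
    # Flag anything more than 3 standard deviations above the mean.
    mu, sigma = statistics.mean(amounts), statistics.stdev(amounts)
    return mu + 3 * sigma

pre_shift = [40, 50, 45, 55, 60, 50, 48, 52]   # in-store spending era
threshold = calibrate(pre_shift)               # trained once, never updated

post_shift = [150, 160, 140, 155, 170]         # lockdown-era online spend

false_positives = sum(a > threshold for a in post_shift)
print(false_positives)  # every genuine payment now trips the model
```

The model was accurate on the data it saw at training time; once customer behavior moved, its fixed view of "normal" became wrong, which is model drift in miniature.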

Static models can perform better than static rules as the model can identify more user behaviors and accurately determine more fraud. However, they are still some way off the capability that daily adaptive models benefit from. 

Lynx may use static models where the client requests us to do so. Typically, we recommend switching to daily adaptive models to ensure accuracy is maintained and the model is drift-resistant. This will result in less fraud and fewer false positives. 

Self-Learning Models

Self-learning models can learn from existing and new data. They are typically retrained frequently, for example, daily, ensuring that the model’s performance is maintained over time and may even become more effective. 

Lynx recommends that clients use daily adaptive models. These machine-learning models are trained every day on new and existing data. This ensures that the models adapt to changes in customer behavior, attacks, and new types of fraud. 
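
As a minimal sketch of the daily adaptive idea (not Lynx's actual models; the rolling window and threshold rule are illustrative assumptions), re-calibrating every day on recent data lets the threshold track behavior as it shifts:

```python
import statistics
from collections import deque

window = deque(maxlen=30)   # rolling window of recently observed amounts

def daily_retrain(new_amounts):
    # Each "day", fold in the new data and refresh the alert threshold.
    window.extend(new_amounts)
    mu, sigma = statistics.mean(window), statistics.stdev(window)
    return mu + 3 * sigma

t1 = daily_retrain([40, 50, 45, 55, 60])        # pre-shift behaviour
t2 = daily_retrain([150, 160, 140, 155, 170])   # behaviour shifts
t3 = daily_retrain([150, 160, 140, 155, 170])   # model keeps adapting

print(t1 < t2)  # the threshold has risen to track the new normal
```

Unlike the static model, yesterday's behavior change is reflected in today's threshold, so genuine payments stop generating alerts within days rather than after a months-later retrain.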

Ultimately, daily adaptive models significantly reduce fraud and alert fatigue by reducing false positives and accurately identifying fraud. 

Why Don’t You Give Us a Try?

We live and breathe data and are experts in data science; we offer world-class algorithms, insight, and intelligence. Ensuring that: 

  • Models are the best in the business, 
  • Costs are reduced by lowering false positives by up to a factor of 100 compared to rules, 
  • Fraud reduction is significant, 
  • The complexity of rule-building is simplified, 
  • Job satisfaction is enhanced through the reduction of alert fatigue by providing meaningful alerts, and 
  • Models continuously learn to adapt to changing attacks and new products and customer behaviors. 

We’re confident that Lynx can thwart the attacks you face, and we have been doing so very successfully for over two decades. 

So why not reach out and ask for a proof of concept today? We will demonstrate the cost savings against your current solution in just three weeks. 

You won’t be disappointed! 

Email us today. 

Or talk to me, Greg Hancell (Head of Product – Fraud). 

[1] Carlos Santa Cruz, CTO, Lynx 
