A data breach where personal data is exposed is one of the most common ways that fraudsters are able to commit identity theft. But it’s not just names, birth dates, and addresses that identity thieves are targeting. In a technique called ‘credential stuffing’, credentials that are obtained from a data breach on one website can be used to attempt access to any number of a victim’s different online accounts, ranging from bank, department store, airline, and hotel accounts, to dating and gaming services.
About the author
John Briar is Co-founder and COO at BotRx
Credential stuffing and identity theft
Today’s sophisticated cybercriminal isn’t just seeking direct access to bank and credit card accounts; they’re also stealing airline miles and hotel points to redeem for gifts or their monetary value, or employing social engineering on compromised dating accounts to trick users into giving up personal information that can be used to commit online fraud.
People have a tendency to reuse the same usernames and passwords across all of their different online accounts, and credential stuffing takes advantage of this habit. Hackers use automated bots to launch repeated password-guessing attempts to log into secure user accounts, on hundreds of different websites.
These attackers have millions, and sometimes billions, of login credentials to work from. So even though the success rate of these bad bots is low – estimated at between 0.1% and 3% – the sheer volume of compromised credentials means that an attacker with one million username-and-password pairs will still make anywhere from one thousand to tens of thousands of successful matches.
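The arithmetic above can be sketched in a few lines – a back-of-the-envelope estimate only, with the function name invented for this illustration:

```python
# Rough estimate of credential-stuffing yield, using the
# 0.1%-3% success-rate range cited above.
def expected_matches(credential_pairs: int, success_rate: float) -> int:
    """Expected number of successful account matches."""
    return round(credential_pairs * success_rate)

pairs = 1_000_000
print(expected_matches(pairs, 0.001))  # 0.1% of 1M pairs -> 1000
print(expected_matches(pairs, 0.03))   # 3% of 1M pairs  -> 30000
```

Even at the pessimistic end of the range, a breach-sized credential list yields a four-figure haul of working logins.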
Once matched, hackers use these logins to commit a variety of fraud, and also compile the “good credentials” to sell on for further attacks.
Existing defences aren’t enough
Machine learning and AI-backed automation are often touted as the be-all and end-all in fighting cyber-attacks like credential stuffing. But in practice, AI and machine learning haven’t been the definitive answer, as these financially motivated and resourceful threat actors employ AI tools and dynamic tactics of their own, acting human-like to evade detection.
Moreover, AI systems are only as good as the information fed into them, and although effective at identifying data anomalies, AI still requires manual intervention to determine whether an anomaly is a real event or a false positive. To detect bad bots, traditional security defences work to verify the true identity behind each transaction – something that requires client and network signatures such as IP addresses, behavioural-based detection, and defence-in-depth strategies to uncover the attacker’s digital DNA and unique fingerprint.
This approach doesn’t keep pace with the dynamic nature of adversaries who modify bots daily, and even hourly, so that attack behaviours and signatures remain unique. Consequently, reliance on detect-and-block methods can be largely ineffective at catching new or altered tools.
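To see why detect-and-block struggles here, consider a minimal, hypothetical sketch of signature matching – the signature values are invented for illustration, and real systems combine far more signals:

```python
# Hypothetical static detect-and-block rule: a set of known-bad
# (IP address, user-agent) signature pairs.
KNOWN_BAD_SIGNATURES = {
    ("203.0.113.7", "curl/7.68.0"),  # example signature, not real data
}

def is_known_bot(ip: str, user_agent: str) -> bool:
    """Block only traffic matching a previously recorded signature."""
    return (ip, user_agent) in KNOWN_BAD_SIGNATURES

# The recorded signature is caught...
print(is_known_bot("203.0.113.7", "curl/7.68.0"))   # True
# ...but a bot that rotates its IP or user-agent sails through:
print(is_known_bot("198.51.100.4", "Mozilla/5.0"))  # False
```

A bot modified hourly presents a fresh signature on every run, so the rule set is perpetually one step behind – which is the gap the text above describes.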
Although AI and other more traditional defence methods still have merit, they each also have their own weaknesses. When it comes to firewalls and intrusion prevention systems, signatures and rules aren’t able to differentiate changing attack patterns. Big data analytics, where analysts evaluate large amounts of data to detect irregularities within a network, is often outpaced by fast-changing attack patterns. Even with threat intelligence on new threats and sources, this intelligence is “after the fact,” which allows early attackers to go undetected. Bots continue to evade current protections because these defences are not dynamic enough.
Taking a dynamic approach
Moving Target Defense (MTD) has emerged as a game changer in fighting credential stuffing and other malicious bot attacks. Created by the US Department of Homeland Security, MTD begins with the assumption that perfect security is unattainable, and that all systems are compromised. From that starting point, the primary goal of this “moving target” approach is to make systems defensible rather than perfectly secure.
To this end, MTD makes the attributes of the network dynamic rather than static, obfuscating the attack surface, much like attackers do to ensure bots go undetected. Through these dynamic changes of the attack surface, MTD increases system resiliency by hiding the entry points and vulnerabilities. By controlling change across multiple system dimensions, MTD increases uncertainty and apparent complexity for attackers – reducing their window of opportunity and raising the costs of their probing and attack efforts.
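One way to picture this attack-surface obfuscation is per-session randomisation of otherwise static attributes. The following is a hedged sketch only, with hypothetical field names: a login form’s parameter names are rotated each session, so scripted bots that hard-code “username” and “password” break on every attempt:

```python
import secrets

def rotated_field_map() -> dict:
    """Map canonical form fields to one-time random aliases.

    The server renders the form using the aliases, keeps the mapping
    server-side, and rejects submissions that use stale names.
    """
    return {field: secrets.token_hex(8) for field in ("username", "password")}

session_map = rotated_field_map()
# Each session sees different field names, e.g.
# {'username': '3f9c...', 'password': 'a1d2...'} - a bot replaying
# yesterday's request layout no longer matches the live form.
```

The same idea extends to other system dimensions – endpoints, tokens, page structure – so the attacker’s reconnaissance expires faster than it can be exploited.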
Where other reactive security defences have focused on improving detection of human-like bots, MTD changes course sharply and takes a proactive approach. It enables organisations to shift the tactical advantage back to defenders, and deploy mechanisms and strategies that are as diverse as the attackers. MTD can cope with the speed and frequency with which attackers modify their bots – deflecting bot attacks, and at the same time, being able to function alongside other traditional detection methods.
Levelling the playing field
Nearly half of the traffic going to the world’s 1.79 billion websites in 2020 was attributed to bots, and while bots can be beneficial to businesses and users – such as Google’s search engine bots – bad bots, and the malicious activities behind them, are firmly on the rise. Although identity thieves and other malicious actors are becoming increasingly proficient at using automated bots to carry out malicious activities, MTD is a powerful new tool that can help redistribute the balance of power between defenders and attackers.
When combined with existing threat detection methods, the proactive approach of MTD – which deploys mechanisms and strategies as diverse as the bad bots themselves – can ensure organisations are in the best position to effectively fight today’s (and tomorrow’s) bot attacks.