In 1988, what is now deemed ‘the world’s first cyber-attack’ hit the headlines. It was the Morris Worm – a personal malware project of the Harvard graduate Robert Tappan Morris – which came to infect an estimated 10% of the 60,000 computers then online, prompting a seismic shift in attitudes to computer security.
About the author
Max Heinemeyer, Director of Threat Hunting, Darktrace.
Three decades later, cyber security is one of the greatest challenges of our age. Cyber-crime has rapidly evolved from an academic research project into a global marketplace of professional cyber-crime services, and on the geopolitical stage, governments have turned to hyper-advanced cyber-attack tools that can start in cyber space and lead to physical damage and disruption to their adversaries’ critical IT infrastructure.
Since businesses, schools, hospitals, and every other thread in the fabric of society have embraced the internet, cyber-attacks now stand among natural disasters and climate change in the World Economic Forum’s annual list of global society’s gravest threats.
New detection signatures
As the years have gone by, hackers have consistently reinforced the old adage: ‘where there’s a will, there’s a way’. As defenders have added new rules to their firewalls or developed new detection signatures based on attacks they have already seen, hackers have constantly reoriented their methodology to evade traditional defenses, leaving organisations playing catch-up and scrambling for a plan B in the face of an attack.
A paradigm shift came in 2017 when the destructive ransomware ‘worms’ WannaCry and NotPetya caught the security world unaware, bypassing traditional tools like firewalls to cripple thousands of organisations across 150 countries, including a number of NHS agencies.
A crucial response to the onset of increasingly sophisticated and novel attacks has been AI-powered defenses, a development driven by the philosophy that information about yesterday’s attacks cannot predict tomorrow’s threats. AI has been leveraged to understand what is ‘normal’ for a digital environment and detect deviations as they emerge, signalling a movement away from legacy approaches to defense.
In recent years, thousands of organisations have entrusted machine algorithms to react at computer-speed to fast-moving attacks. This active, defensive use of AI has changed the role of security teams fundamentally, freeing up humans to focus on business communication and remediation plans to make the overall environment more resilient in the future.
In the attack landscape’s next evolution, hackers are now taking advantage of machine learning themselves, deploying malicious algorithms that can adapt, learn, and continuously improve in order to evade detection – the next paradigm shift in the cyber security landscape: AI-powered attacks. A recent study by Forrester found that 88% of security professionals expect AI-driven attacks to become mainstream. In what has already proven to be an era of hyper-change in cyber-attacks, it is only a matter of time.
‘Offensive AI’ will harness AI’s ability to learn and adapt, ushering in a new era of attacks in which highly customized, human-mimicking attacks are scalable – and travel at machine speed. Offensive AI could land on a target’s network and use the information it sees to direct an attack, automatically working out where the most valuable data lies.
We’re already seeing the early signs – AI-manipulated ‘deepfake’ content designed to spread misinformation is a pressing concern for social media giants, and last year we saw a UK energy firm scammed out of £200,000 when a hacker used AI to impersonate a CEO’s voice in a phone call.
Open-source AI research projects – tools that could be leveraged to supercharge every phase of the attack lifecycle – already exist today. Soon, they will inevitably join the list of paid-for hacker services available for purchase on the dark web.
At Darktrace’s AI labs, we have offensive AI prototypes that autonomously determine an organisation’s most high-profile targets based on their social media exposure – all in a matter of seconds. The AI then crafts contextualized phishing emails and selects a fitting sender to spoof and fires the emails away, tricking victims into clicking on a malicious link or opening an attachment that will grant further access into the target organisation.
We have tested this prototype against our own defensive AI, mimicking what we expect to see happening soon in the real world: AI combating AI in a battle of algorithms. Armed with this research, the defenders have time on their side. Defensive AI has been around for seven years, empowering real-world organisations to understand their digital environments with machine-speed intuition. Today, nearly 4,000 organisations use AI in their daily battle against malicious attackers.
Armed with more data, defensive AI sees more. Powered by unsupervised machine learning, defensive AI is equipped with a complex understanding of every user and device across the network it’s protecting, and uses this evolving understanding to detect subtle deviations that might be the hallmarks of an emerging attack. With this ‘bird’s-eye’ view of the digital business, cyber AI will spot offensive AI as soon as it starts to manipulate data.
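The core idea here – learn what is ‘normal’ for each device, then flag sharp deviations – can be illustrated with a deliberately minimal sketch. The device names, the outbound-traffic metric, and the 3-sigma threshold below are illustrative assumptions for the sake of the example, not a description of any vendor’s actual method, which would model far richer behavioural features.

```python
# Minimal anomaly-detection sketch: build a per-device baseline of
# "normal" activity from observed history, then flag new readings
# that deviate sharply from that baseline.
from statistics import mean, stdev

def build_baseline(history):
    """Map each device to the mean and standard deviation of its metric."""
    return {dev: (mean(vals), stdev(vals)) for dev, vals in history.items()}

def is_anomalous(baseline, device, value, threshold=3.0):
    """Flag a reading whose z-score against the device baseline exceeds the threshold."""
    mu, sigma = baseline[device]
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical outbound-kilobytes-per-hour history for two devices.
history = {
    "laptop-42": [120, 130, 110, 125, 118, 122],
    "server-07": [900, 950, 880, 910, 940, 925],
}
baseline = build_baseline(history)

print(is_anomalous(baseline, "laptop-42", 125))   # a typical reading
print(is_anomalous(baseline, "laptop-42", 5000))  # a sudden, exfiltration-like spike
```

Real systems replace the single metric and fixed threshold with continuously retrained models over many behavioural dimensions, but the principle is the same: no signature of yesterday’s attack is needed, only a model of today’s normal.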
Machine vs machine
When an AI attacker makes any kind of noise, defensive AI will make intelligent micro-decisions to block the activity. Offensive AI may well be leveraged for its speed, but speed is something defensive AI also brings to the arms race. Humans must step aside: this is a machine fight.
When this major leap in attacker innovation inevitably occurs, investigation, response and remediation must be conducted with the speed and intuition of AI. Only AI can fight AI.
A new age in cyber defense is just beginning, but we have some cause for optimism: this is a new phase in cyber warfare that the defenders have long been arming themselves for, ensuring that when the AI arms race starting pistol sounds, the good guys will have a head start.