Matthew Stevens, CTO at Dolos.
AI is enabling threat actors to launch increasingly sophisticated attacks. At the same time, it is bolstering blue team efforts to mitigate risk. By pairing AI with zero trust architecture (ZTA), organisations can build more modern, robust security frameworks.
This is according to Matthew Stevens, CTO at Dolos, who was speaking during a webinar on zero trust and AI.
Stevens highlighted numerous ways threat actors are using unconstrained, aggressive AI systems to breach enterprise defences.
“The primary use case for AI in cyber attacks is around phishing,” he said. “Natural language processing drastically improves the believability of phishing e-mails. Phishing campaigns have become more complex and better able to create engagement, resulting in drastically more effective campaigns.”
Stevens said AI had also enabled greater phishing personalisation through scraping and data mining, high-volume automation of targeted mails, and automated campaign optimisation. Other common applications of AI in cyber crime are voice and video deepfakes for vishing, social messaging abuse, automated robocalls and real-time social engineering.
He said: “AI cyber abuse has also come to the fore in automated reconnaissance – the first step in the cyber kill chain. It’s great at implementing complex workflows and has drastically increased attackers’ ability to target victims, with infrastructure profiling and enumeration, harvesting of structured and unstructured data – including old data leakages – and automated prioritisation.
“Where it gets technical and exciting is automated agent attacks,” he added. “There are a number of AI models and automated AI agents that can respond dynamically to the environments they encounter. Automated agents can now work in an unsupervised manner to attack your environment, to carry out automated phishing and social engineering, autonomous reconnaissance and co-ordinated multi-agent attacks.”
In addition to using AI to launch attacks, threat actors were now targeting enterprise AI, Stevens noted.
“Attacks against AI platforms include prompt injections that cause the model to leak sensitive data, execute unauthorised actions or bypass safety filters. Jailbreaking – or prompt engineering exploits – enables attackers to force models into producing disallowed outputs such as malware code or private data,” he said.
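A defender-side illustration of the prompt-injection problem Stevens describes: a naive pre-screening filter that flags common injection phrasings before input reaches a model. The patterns and function name here are hypothetical examples for demonstration – real defences go far beyond pattern matching and no vendor's actual API is shown.

```python
import re

# Hypothetical, illustrative patterns only - a real deployment would use
# far more robust techniques than keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs matching known injection phrasings (case-insensitive)."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

print(looks_like_injection("Please ignore all previous instructions"))  # True
print(looks_like_injection("Summarise this report for me"))             # False
```

Filters like this are easily evaded by rephrasing, which is precisely why prompt injection remains a hard problem.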
Stevens said training data attacks were also evolving. These include data poisoning attacks, which can degrade model accuracy, create bias or embed hidden backdoors. Data injection attacks insert new malicious samples into an open or crowdsourced training pipeline. Data extraction, or training data leakage, recovers confidential or personal information unintentionally memorised by the model. Label flipping attacks confuse the model’s learning process and reduce classification accuracy.
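The effect of a label flipping attack can be shown on a toy classifier. This is a purely illustrative sketch – synthetic one-dimensional data and a 1-nearest-neighbour model, not any real ML pipeline – showing how flipped training labels drag down test accuracy.

```python
import random

# Toy demonstration only: the data, flip rate and classifier are
# illustrative assumptions, not a real training pipeline.
random.seed(0)

# Two well-separated 1-D classes: class 0 near 0.0, class 1 near 10.0.
train = [(random.gauss(0.0, 1.0), 0) for _ in range(100)] + \
        [(random.gauss(10.0, 1.0), 1) for _ in range(100)]
test = [(random.gauss(0.0, 1.0), 0) for _ in range(50)] + \
       [(random.gauss(10.0, 1.0), 1) for _ in range(50)]

def predict(x, data):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(data, key=lambda p: abs(x - p[0]))[1]

def accuracy(test_set, train_set):
    return sum(predict(x, train_set) == y for x, y in test_set) / len(test_set)

clean_acc = accuracy(test, train)

# The attack: flip 40% of training labels at random.
flipped = [(x, 1 - y) if random.random() < 0.4 else (x, y) for x, y in train]
poisoned_acc = accuracy(test, flipped)

print(f"accuracy on clean labels: {clean_acc:.2f}")
print(f"accuracy after label flipping: {poisoned_acc:.2f}")
```

With 40% of labels flipped, each test point's nearest neighbour now carries the wrong label with roughly that probability, so accuracy falls accordingly.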
From a blue team perspective, AI is proving to be a valuable tool in security applications such as threat hunting and malware classification, where it can improve outcomes and reduce costs, Stevens said.
He emphasised that zero trust is crucial for bolstering security in this evolving landscape, highlighting WatchGuard’s work over the past 10 years to embed AI in its multilayered security portfolio, which helps customers implement zero trust, defend more proactively and respond faster.
These offerings include the WatchGuard Unified Security Platform with IntelligentAV, APT Blocker and ThreatSync, which uses AI to bring powerful self-learning security to the WatchGuard Total Security Suite. This can reinforce the zero trust principles of explicit verification, least privilege, assuming breach, continuous monitoring, securing every asset, and strong identity and access management.
“WatchGuard released our zero trust application service 10 years ago so users could run only known applications and weren’t able to run random applications that make changes to your environment. This is very effective technology – we have a blacklist and a whitelist, and we keep users operating in the safe whitelist. Because we adopted cloud-based AI for malware classification 10 years ago, we have a very mature AI classification model with very high reliability,” he said.
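The allow/deny logic Stevens describes reduces to a default-deny decision: anything not explicitly classified as known-good is blocked. A minimal sketch – the list entries and function are hypothetical, and this bears no relation to WatchGuard's actual implementation:

```python
# Illustrative sketch of zero trust application control: execution is
# permitted only for applications explicitly classified as known-good.
ALLOWLIST = {"winword.exe", "excel.exe", "chrome.exe"}  # hypothetical entries
BLOCKLIST = {"mimikatz.exe"}                            # hypothetical known-bad

def decide(executable: str) -> str:
    """Default-deny: unknown applications are blocked, not merely flagged."""
    name = executable.lower()
    if name in BLOCKLIST:
        return "block"
    if name in ALLOWLIST:
        return "allow"
    return "block"  # zero trust default: deny anything not explicitly trusted

print(decide("chrome.exe"))   # allow
print(decide("dropper.exe"))  # block
```

The design point is the final return: in a zero trust model the unknown case falls through to "block", whereas a blocklist-only model would let it run.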
WatchGuard’s XDR layer, ThreatSync, uses machine learning to automate correlation and respond to threats, Stevens said. “WatchGuard also uses AI models for threat detection – we run AI models on our firewall appliances, doing customer-specific real-time baseline analysis, and spot any deviations rapidly. It then co-ordinates changes to rule sets and triggers a response in an AV context,” he explained.
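The baseline-and-deviation analysis Stevens describes can be sketched in miniature with a simple z-score test over a per-metric history. The figures, metric and threshold below are illustrative assumptions, not ThreatSync's actual method:

```python
from statistics import mean, stdev

# Hypothetical per-customer baseline, e.g. DNS queries per minute.
baseline = [120, 115, 130, 125, 118, 122, 128, 119]

def is_anomalous(observation: float, history: list, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(124, baseline))  # False: within normal variation
print(is_anomalous(900, baseline))  # True: a large deviation would trigger a response
```

Production systems layer far more sophistication on top – seasonal baselines, multiple correlated metrics, learned thresholds – but the core idea of comparing live observations against a learned per-customer norm is the same.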
WatchGuard also enforces explicit verification and least privilege with AuthPoint, he said.
“When adopting zero trust, organisations should think of it in a layered context, starting with multifactor authentication at the user layer. This is the most effective – and most cost-effective – place to start,” he advised.