- Attackers are already using AI, particularly to create phishing materials and write code.
- The arrival of Agentic AI is likely to affect the quantity of attacks more than the quality.
- Defenders have a longer track record of using AI.
The emergence of artificial intelligence (AI) has shaken up the world of cybersecurity for defenders and cybercriminals alike, presenting new challenges alongside powerful defensive opportunities, as the Symantec and Carbon Black Threat Hunter team explores in a new whitepaper.
The rapid adoption of generative AI (Gen AI) by malicious actors has accelerated the arms race between attackers and defenders, with the evidence pointing to AI-assisted attacks becoming increasingly sophisticated and widespread. However, the same technology is simultaneously empowering defenders with advanced threat detection and response capabilities.
In this new whitepaper, we explore how threat actors have begun exploiting Gen AI to enhance their malicious activities. We examine these developments under three main headings – AI and phishing, AI and malware development, and the emergence of Agentic AI. We also explore how defenders have used, and continue to use, AI to enhance cybersecurity.
AI and phishing
One of the ways we have seen attackers use Large Language Models (LLMs) most effectively is in creating phishing materials – emails, lure documents, and the like. LLMs help attackers overcome one of their key weaknesses: many are non-native English speakers targeting native English speakers. By offering natural-language translation, drafting emails, correcting grammar, and adjusting tone, LLMs make convincing lures far easier to produce.
While most LLMs are now built with safety features intended to prevent their use for malicious purposes, cybercriminals continue to find ways to abuse the software for their own ends. An LLM won’t simply “write a phishing email” if asked, but prompts can be crafted to elicit an email that could be used for phishing.
LLMs have also further lowered the barrier to entry for phishing attacks by making phishing-as-a-service (PhaaS) offerings even more straightforward to use and tailor, widening the pool of potential attackers to include lower-skilled individuals.
In this whitepaper, we demonstrate how we were able to get Gemini and ChatGPT to produce phishing-style emails from simple prompts, and how the translation capabilities of LLMs can be leveraged by malicious actors to make their phishing campaigns more effective.
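The defensive counterpart to these techniques can be made concrete. The whitepaper itself contains no code, so the example below is purely illustrative: a minimal phishing-triage helper that asks an LLM to flag common phishing indicators in an inbound email. The OpenAI Python SDK, the model name, and the indicator list are all assumptions made for this sketch, not details from the whitepaper.

```python
# Illustrative sketch only: LLM-assisted phishing triage.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is an
# assumption, not a detail from the whitepaper.
from openai import OpenAI

client = OpenAI()

def triage_email(subject: str, body: str) -> str:
    """Ask an LLM whether an email shows common phishing traits."""
    prompt = (
        "You are an email security analyst. For the email below, answer "
        "PHISHING or BENIGN on the first line, then list any indicators "
        "you see: false urgency, credential requests, mismatched links, "
        "or unusual sender context.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # stable verdicts for triage consistency
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_email(
        "Urgent: verify your account",
        "Your mailbox will be suspended in 24 hours. Confirm your "
        "password here: http://example.com/verify",
    ))
```

In practice, a helper like this would sit behind a mail gateway and feed its verdicts into existing filtering and analyst review, rather than acting as a standalone verdict.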
Source & full article: Broadcom Inc.

