The rise of artificial intelligence in recent years has brought incredible technological advancements to the cybersecurity landscape, as well as significant challenges. Over the past year, the development and adoption of AI (Artificial Intelligence) has increased the speed, scale, and sophistication of attacks. One of the main concerns following this trend is the rise and the development of GenAI-generated malware.
GenAI, including models such as ChatGPT, Co-Pilot, and Gemini, represents a significant leap in AI capabilities, allowing machines to generate human-like text, images, and even code, revolutionizing how we interact with computers. While these advancements have opened the door for a new age of innovation, they also carry the potential for misuse and exploitation.
GenAI-generated malware leverages the power of advanced AI algorithms to produce increasingly sophisticated, complex, and elusive code autonomously, making it challenging for traditional cybersecurity defenses to detect and mitigate. GenAI is being used by threat actors to create sophisticated social engineering campaigns, identify network vulnerabilities, create fake media for identity theft, intimidation, or impersonation, and automate malware creation and phishing attempts.
Unlike traditional threats that rely on pre-programmed exploits, GenAI-generated malware is constantly evolving, evading detection methods that depend on identifying known patterns. This poses a significant challenge and calls for a change in approach in combating this new wave of attacks.
Traditional LLMs (Large Language Models), such as ChatGPT and Gemini, have several mechanisms in place to prevent malicious use. Threat actors frequently use two approaches with malicious AI: adversarial attacks (jailbreaks) and dark LLMs.
Adversarial Attacks is an umbrella term for different techniques used to cause AI technologies to malfunction, including poisoning, evasion, and extraction attacks. These attacks can create malware or can be combined with a malware attack. Also known as “jailbreaks”, these attacks are often shared or sold on underground markets.
The most common approach to generating malware through GenAI is to build or buy “dark LLMs” designed for threat actors, such as WormGPT, FraudGPT, DarkBERT and DarkBART. These LLMs lack the safeguards and security checks ChatGPT and Gemini have.
WormGPT is an open-source unsupervised learning system, making it much easier to develop realistic attacks on a larger scale. WormGPT's advanced capabilities allow it to bypass security measures, impersonate legitimate users to access confidential information, generate large amounts of spam emails or text messages, and more.
FraudGPT is a malicious GenAI that operates on a subscription base and acts as a threat actor’s starter kit, utilizing pre-existing attack tools, such as custom hacking guides, vulnerability mining, and zero-day exploits. It is designed to create phishing emails, cracking tools, and carding schemes.
Upcoming bots DarkBART and DarkBERT, have alarming capabilities, ranging from launching business email compromise (BEC) phishing campaigns to exploiting zero-day vulnerabilities and probing critical infrastructure weaknesses. Moreover, they facilitate the creation and distribution of malware, such as ransomware, and provide valuable information on zero-day vulnerabilities, giving threat actors a competitive advantage over defenders.
The emergence of these tools highlights the speed and scale at which cybersecurity threats evolve. Essentially, they provide even inexperienced attackers with the potential to carry out sophisticated attacks that were previously exclusive only to highly skilled, experienced attackers.
Feature |
General-Purpose LLMs |
Malicious Intent LLMs |
Purpose |
Designed for informative and helpful tasks |
May be designed for tasks with malicious intent (generating phishing emails, malicious code) |
Training Data |
Diverse, filtered, and curated datasets |
Potentially unfiltered or adversarial datasets |
Safety Measures |
Implements filters and trained with safeguards to mitigate bias and harmful outputs |
May lack safeguards, leading to increased risk of bias, misinformation, and toxicity |
Regulation |
Growing discussion on regulations for LLM development and use |
Currently unregulated, posing ethical concerns |
Accessibility |
Often available through APIs or research collaborations |
Typically, not publicly available |
Performance |
Can be highly effective for tasks like writing different kinds of creative text formats, translation, and answering questions |
May outperform regular LLMs in specific tasks due to unfiltered training data, but with higher risk of negative outputs |
Threat actors are looking at AI and LLMs to increase productivity and further their motives. As different AI technologies emerge, cybercrime groups, nation-state threat actors, and other adversaries investigate and test them to determine their potential value to their operations and the security restrictions they may need to evade.
While motives and complexity may vary between threat actors, they have common tasks to perform during attacks. This includes reconnaissance, language support for social engineering and additional tactics which rely on misleading communications tailored to the targets, and coding assistance, which includes improving software scripts and malware development.
Microsoft and OpenAI track threat actors who use LLMs in their operations.
The growing threat of dark LLMs hasn't gone unnoticed. The growing prevalence and effectiveness of LLMs in malicious activities have caught the attention of Mitre, leading them to incorporate LLM-themed TTPs into the ATT&CK framework. This highlights the seriousness of the situation and the need for proactive defenses.
Traditionally, malware targeting OT/SCADA systems has been relatively unsophisticated. However, GenAI-generated malware poses a significant threat for several reasons:
While GenAI presents a significant, new challenge, OT/SCADA organizations can mitigate the risk and improve their preparedness and resilience against this evolving threat.
Additionally, we recommend following the best practices
The SCADAfence Platform protects the physical safety of your employees and the communities where they live from the negative consequences of cyber security attacks.