The rise of artificial intelligence (AI) in recent years has brought remarkable technological advancements to the cybersecurity landscape, along with significant challenges. Over the past year, the development and adoption of AI have increased the speed, scale, and sophistication of attacks. One of the main concerns accompanying this trend is the rise of GenAI-generated malware.
Understanding GenAI-Generated Malware
GenAI, including models such as ChatGPT, Copilot, and Gemini, represents a significant leap in AI capabilities, allowing machines to generate human-like text, images, and even code, and revolutionizing how we interact with computers. While these advancements have opened the door to a new age of innovation, they also carry the potential for misuse and exploitation.
GenAI-generated malware leverages advanced AI algorithms to autonomously produce increasingly sophisticated, complex, and elusive code, making it challenging for traditional cybersecurity defenses to detect and mitigate. Threat actors are using GenAI to create sophisticated social engineering campaigns, identify network vulnerabilities, produce fake media for identity theft, intimidation, or impersonation, and automate malware creation and phishing attempts.
Unlike traditional threats that rely on pre-programmed exploits, GenAI-generated malware is constantly evolving, evading detection methods that depend on identifying known patterns. This poses a significant challenge and calls for a change in approach to combating this new wave of attacks.
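To make that limitation concrete, here is a minimal, hypothetical Python sketch of hash-based signature matching, the simplest form of known-pattern detection. The blocklisted digest is just the SHA-256 of empty input, standing in for a real signature; every name here is illustrative and not taken from any product.

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 digests. The single entry is the
# SHA-256 of empty input, used purely as a stand-in for a real signature.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_malware(sample: bytes) -> bool:
    """Signature-style check: flag a sample only if its exact hash is known."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b""                # matches the blocklisted digest above
variant = original + b"\x00"  # one appended byte: a completely different hash

print(is_known_malware(original))  # True  -- exact signature match
print(is_known_malware(variant))   # False -- trivially mutated sample is missed
```

Real detection engines use far richer signatures than whole-file hashes, but the underlying weakness is the same: any check keyed to a fixed pattern can be defeated by a generator that rewrites the pattern on every build.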
The Dark Side of LLMs
Traditional LLMs (Large Language Models), such as ChatGPT and Gemini, have several mechanisms in place to prevent malicious use. Threat actors typically work around them in one of two ways: adversarial attacks (jailbreaks) against mainstream models, or purpose-built dark LLMs.
Adversarial Attacks
“Adversarial attacks” is an umbrella term for the different techniques used to cause AI technologies to malfunction, including poisoning, evasion, and extraction attacks. These attacks can be used to generate malware or can be combined with a broader malware campaign. Also known as “jailbreaks”, working attacks of this kind are often shared or sold on underground markets.
Malicious Intent LLMs, “Dark LLMs”
The most common approach to generating malware through GenAI is to build or buy “dark LLMs” designed for threat actors, such as WormGPT, FraudGPT, DarkBERT, and DarkBART. These LLMs lack the safeguards and security checks that ChatGPT and Gemini have.
WormGPT is reportedly built on an open-source LLM (GPT-J) and trained without the content restrictions of mainstream models, making it much easier to develop realistic attacks at larger scale. WormGPT's advanced capabilities allow it to bypass security measures, impersonate legitimate users to access confidential information, generate large volumes of spam emails or text messages, and more.
FraudGPT is a malicious GenAI service that operates on a subscription basis and acts as a threat actor’s starter kit, bundling pre-existing attack resources such as custom hacking guides, vulnerability mining, and zero-day exploits. It is designed to create phishing emails, cracking tools, and carding schemes.
Upcoming bots DarkBART and DarkBERT have alarming capabilities, ranging from launching business email compromise (BEC) phishing campaigns to exploiting zero-day vulnerabilities and probing critical infrastructure for weaknesses. Moreover, they facilitate the creation and distribution of malware, such as ransomware, and provide valuable information on zero-day vulnerabilities, giving threat actors a competitive advantage over defenders.
The emergence of these tools highlights the speed and scale at which cybersecurity threats evolve. Essentially, they give even inexperienced attackers the ability to carry out sophisticated attacks that were previously the exclusive domain of highly skilled, experienced adversaries.
| Feature | General-Purpose LLMs | Malicious Intent LLMs |
|---------|----------------------|------------------------|
| Purpose | Designed for informative and helpful tasks | May be designed for tasks with malicious intent (generating phishing emails, malicious code) |
| Training Data | Diverse, filtered, and curated datasets | Potentially unfiltered or adversarial datasets |
| Safety Measures | Implements filters and is trained with safeguards to mitigate bias and harmful outputs | May lack safeguards, leading to increased risk of bias, misinformation, and toxicity |
| Regulation | Growing discussion on regulations for LLM development and use | Currently unregulated, posing ethical concerns |
| Accessibility | Often available through APIs or research collaborations | Typically not publicly available |
| Performance | Can be highly effective for tasks like writing different kinds of creative text formats, translation, and answering questions | May outperform regular LLMs in specific tasks due to unfiltered training data, but with a higher risk of harmful outputs |
Threat Actors Increase Productivity: GenAI in the Wild
Threat actors are turning to AI and LLMs to increase their productivity and further their objectives. As new AI technologies emerge, cybercrime groups, nation-state threat actors, and other adversaries investigate and test them to determine their potential value to their operations and the security restrictions they may need to evade.
While motives and sophistication vary between threat actors, they perform common tasks during attacks: reconnaissance; language support for social engineering and other tactics that rely on misleading communications tailored to their targets; and coding assistance, including improving software scripts and developing malware.
Microsoft and OpenAI track threat actors who use LLMs in their operations, including:
- APT28: a Russia-based APT that often targets defense, transportation, government, energy, and information technology. The group has been observed using GenAI to generate scripts for tasks such as file manipulation and data selection.
- APT43: a North Korea-based APT. The group has been observed using GenAI to script tasks that accelerate attacks, such as identifying certain user events on a system, and to craft spear-phishing and other social engineering attacks.
- Imperial Kitten: an Iran-based APT that often targets defense, maritime shipping, transportation, healthcare, and technology. The group has been observed using GenAI to generate detection-evading code and to attempt to disable security tools via the Windows Registry or Group Policy.
LLM-themed TTPs
The growing threat of dark LLMs hasn't gone unnoticed. The prevalence and effectiveness of LLMs in malicious activities have caught the attention of MITRE, leading to the incorporation of LLM-themed TTPs into the ATT&CK framework. This highlights the seriousness of the situation and the need for proactive defenses.
- LLM-informed Reconnaissance: gathering intelligence on technologies and potential vulnerabilities using LLMs.
- LLM-enhanced Scripting Techniques: generating or refining scripts, or performing basic scripting tasks, using LLMs.
- LLM-aided Development: LLMs are used in the development lifecycle of tools and programs, including those with malicious intent.
- LLM-supported Social Engineering: LLM assistance with translations and communication, likely to establish connections or manipulate targets.
- LLM-assisted Vulnerability Research: understanding and identifying potential vulnerabilities in software and systems using LLMs.
- LLM-optimized Payload Crafting: LLM assistance in creating and refining payloads for deployment in cyberattacks.
- LLM-enhanced Anomaly Detection Evasion: developing methods that help malicious activities blend in with normal behavior or traffic to evade detection systems using LLMs.
- LLM-directed Security Feature Bypass: finding ways to circumvent security features, such as MFA, CAPTCHA, or other access controls using LLMs.
- LLM-advised Resource Development: using LLMs in tool development, tool modifications, and strategic operational planning.
GenAI-Generated Malware: The Risks for OT/SCADA Organizations
Traditionally, malware targeting OT/SCADA systems has been relatively unsophisticated. However, GenAI-generated malware poses a significant threat for several reasons:
- Novelty: GenAI can produce never-before-seen strains of malware that exhibit advanced evasion techniques, polymorphic behavior, and adaptive capabilities, making them difficult for signature-based detection methods to identify and block.
- Customized and Targeted Attacks: AI-generated malware can be tailored specifically to OT/SCADA systems, taking advantage of vulnerabilities unique to these environments. Moreover, GenAI can assist in developing zero-day exploits that target unknown vulnerabilities before patches or signatures are available, significantly increasing the risk.
- Speed and Scale: GenAI is capable of rapidly generating vast amounts of malware, overwhelming traditional security measures.
SCADAfence's Recommendations to Mitigate Risk
While GenAI presents a significant new challenge, OT/SCADA organizations can mitigate the risk and improve their preparedness and resilience against this evolving threat.
- Network Exposure: Minimize network exposure for control systems and ensure they are not accessible from the Internet to limit the potential impact of a breach.
- Network Traffic Monitoring: Monitor access to the production segments to identify unusual activity that might indicate a novel attack; a minimal baseline-and-alert sketch follows this list.
- Patch Management: Prioritize timely vulnerability patching to minimize attack surfaces.
- Threat Intelligence: Stay up to date regarding the latest vulnerabilities and malware trends, including the development of GenAI threats.
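To illustrate the monitoring recommendation, the sketch below shows a minimal baseline-and-alert loop in Python. The host names, ports, and flow records are invented for the example; a production OT monitoring system would build its baseline from full network telemetry rather than a hand-written list.

```python
from typing import NamedTuple

class Flow(NamedTuple):
    """A simplified network flow record: who talked to whom, on which port."""
    src: str
    dst: str
    port: int

def find_anomalies(baseline: set, current: list) -> list:
    """Flag any flow into the production segment not seen while baselining."""
    return [flow for flow in current if flow not in baseline]

# Baseline learned during a known-good observation window (all values made up).
baseline = {
    Flow("hmi-01", "plc-01", 502),       # Modbus/TCP from the HMI to a PLC
    Flow("historian", "plc-01", 44818),  # EtherNet/IP polling by the historian
}

# Live traffic: an engineering workstation suddenly opens SSH to the PLC.
current = [
    Flow("hmi-01", "plc-01", 502),
    Flow("eng-ws-03", "plc-01", 22),
]

for flow in find_anomalies(baseline, current):
    print(f"ALERT: unexpected flow {flow.src} -> {flow.dst}:{flow.port}")
```

Because OT traffic is typically far more regular than IT traffic, even a simple learned baseline like this can surface novel behavior that no signature would catch.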
Additionally, we recommend following these best practices:
- Apply the latest security patches on the assets in the network.
- Use unique passwords and MFA on authentication paths to OT assets.
- Disable ports and protocols that are not essential.
- Enable strong spam filters to prevent phishing emails from reaching end users (a minimal filtering sketch follows this list).
- Educate personnel on cybersecurity best practices, including phishing awareness and social engineering tactics.
- Make sure secure offline backups of critical systems are available and up to date.
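As a small illustration of the spam-filter item above, here is a hypothetical sketch using Python's standard email module: it quarantines a message when the receiving server's Authentication-Results header records an SPF, DKIM, or DMARC failure. The domains and header values are invented, and a real gateway filter weighs many more signals than this.

```python
from email import message_from_string
from email.message import Message

def fails_sender_auth(msg: Message) -> bool:
    """Rough check: treat a message as suspicious if the Authentication-Results
    header, added by the receiving mail server, records an SPF, DKIM, or DMARC
    failure."""
    results = msg.get("Authentication-Results", "").lower()
    return any(f"{mech}=fail" in results for mech in ("spf", "dkim", "dmarc"))

# A made-up phishing message whose sending server failed SPF and DMARC checks.
raw = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=attacker.example;
 dkim=none; dmarc=fail header.from=vendor.example
From: "IT Helpdesk" <helpdesk@vendor.example>
Subject: Urgent: password reset required

Click the link below to keep your account active.
"""

msg = message_from_string(raw)
if fails_sender_auth(msg):
    print("Quarantine: message failed sender authentication checks")
```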
The SCADAfence Platform protects the physical safety of your employees and the communities where they live from the negative consequences of cybersecurity attacks.