Current Cyber Threats

Threat Actors Actively Targeting LLMs

Summary:
GreyNoise Intelligence has identified ongoing threat campaigns targeting Large Language Model (LLM) infrastructure between October 2025 and January 2026. GreyNoise captured over 91,000 attack sessions, which together demonstrate a systematic effort to map the expanding attack surface of AI deployments. The activity divides into two distinct campaigns. The first is a Server-Side Request Forgery (SSRF) campaign, likely conducted by bug bounty hunters and grey-hat actors pushing boundaries, judging by a spike of nearly 1,700 sessions over 48 hours during the Christmas holiday and by heavy use of ProjectDiscovery’s Out-of-band Application Security Testing (OAST) infrastructure for callback validation. A single JA4H signature appeared in 99% of this campaign’s attacks, suggesting shared automation tooling. The second, more concerning campaign is systematic enumeration activity attributed to professional threat actors, which began on December 28th, 2025, and aims to identify misconfigured proxy servers that leak access to commercial APIs. Based on GreyNoise data, this activity originated primarily from two IP addresses with extensive histories of CVE exploitation (e.g., React2Shell, CVE-2023-1389).

SSRF Campaign
  • Attack Vectors:
    • Ollama Model Pull: Injection of malicious registry URLs to force HTTP requests to attacker-controlled infrastructure (a registry allowlist sketch follows this list).
    • Twilio SMS Webhooks: Manipulation of MediaUrl parameters to trigger outbound connections.
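
The Ollama vector works because a model reference such as "evil.example.com/library/model" embeds a registry host that Ollama will contact when pulling. Below is a minimal sketch of an allowlist check a gateway or reverse proxy could apply to pull requests before they reach Ollama; the parsing heuristic, the ALLOWED_REGISTRIES set, and the request body shape are illustrative assumptions (verify against your Ollama version), not GreyNoise guidance.

  import json

  # Illustrative allowlist; replace with the registries your organization trusts.
  ALLOWED_REGISTRIES = {"registry.ollama.ai"}

  def registry_host(model_ref: str) -> str:
      """Best-effort guess at the registry host an Ollama-style model
      reference points to. Bare names like "llama3:8b" resolve to the
      default registry; a leading component containing a dot, a port,
      or "localhost" is treated as an explicit registry host."""
      if "/" not in model_ref:
          return "registry.ollama.ai"  # assumed default registry
      first = model_ref.split("/", 1)[0]
      if "." in first or ":" in first or first == "localhost":
          return first.split(":", 1)[0].lower()
      return "registry.ollama.ai"

  def is_pull_allowed(raw_body: bytes) -> bool:
      """Decide whether a pull request body targets an allowed registry.
      Handles both the "name" and "model" keys seen across API versions."""
      try:
          body = json.loads(raw_body)
      except json.JSONDecodeError:
          return False
      ref = body.get("model") or body.get("name") or ""
      return registry_host(ref) in ALLOWED_REGISTRIES

  # An SSRF-style pull aimed at attacker infrastructure is rejected:
  assert not is_pull_allowed(b'{"name": "evil.example.com/library/model"}')
  assert is_pull_allowed(b'{"name": "llama3:8b"}')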
LLM Enumeration Campaign
  • Volume: 80,469 sessions probing over 73 LLM model endpoints in 11 days.
  • Methodology: Actors utilized deliberately innocuous prompts to fingerprint models without triggering security alerts (a detection sketch follows this list). Common prompt example: "How many letter r are in the word strawberry?"
  • Targeted Model Families: OpenAI (GPT-4o), Anthropic (Claude), Meta (Llama 3.x), DeepSeek, Google (Gemini), Mistral, Alibaba (Qwen), xAI (Grok).
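
A minimal sketch of how the enumeration pattern could be detected at an LLM gateway, per the Methodology bullet: flag any source that sends a known fingerprinting prompt or touches many distinct model endpoints inside a short window. The log field names and thresholds here are assumptions about a generic gateway schema, not values from the GreyNoise report.

  from collections import defaultdict, deque

  WINDOW_SECONDS = 300               # illustrative sliding window
  DISTINCT_ENDPOINTS = 10            # illustrative alert threshold
  FINGERPRINT_PROMPTS = {            # probes reported in the campaign
      "how many letter r are in the word strawberry?",
      "how many states are there in the united states?",
  }

  _history = defaultdict(deque)      # per-source (timestamp, endpoint) pairs

  def check_request(src_ip, endpoint, prompt, ts):
      """Return alert strings for a single gateway log record."""
      alerts = []
      if prompt.strip().lower() in FINGERPRINT_PROMPTS:
          alerts.append(f"{src_ip}: known model-fingerprinting prompt")
      q = _history[src_ip]
      q.append((ts, endpoint))
      while q and ts - q[0][0] > WINDOW_SECONDS:   # evict stale entries
          q.popleft()
      touched = {ep for _, ep in q}
      if len(touched) >= DISTINCT_ENDPOINTS:
          alerts.append(f"{src_ip}: {len(touched)} model endpoints in "
                        f"{WINDOW_SECONDS}s (possible enumeration)")
      return alerts

  # A source sweeping a dozen model endpoints in seconds trips the alert:
  for i in range(12):
      alerts = check_request("203.0.113.7", f"/v1/models/model-{i}", "hi", float(i))
  print(alerts)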
Security Officer Comments:
This intelligence indicates a significant risk to organizations deploying Large Language Models, particularly those running shadow AI infrastructure or misconfigured proxies that expose commercial APIs. The coexistence of widespread SSRF scanning and targeted model enumeration shows that threat actors are adapting standard tooling to systematically map the continuously expanding AI attack surface, in a race with security researchers. This marks a pivotal shift in reconnaissance tradecraft, moving beyond traditional vulnerability scanning to behavioral fingerprinting, in which adversaries use innocuous prompts to identify specific models without triggering security filters. The enumeration may be a precursor to large-scale "model hijacking," in which actors compromise exposed endpoints to hijack resources or resell access. Furthermore, the activity’s correlation with infrastructure previously tied to known exploits suggests that specialized "AI-hunter" workflows are being integrated into established cybercriminal ecosystems to monetize insecure generative AI integrations.

Suggested Corrections:
GreyNoise Recommendations for Defending Your LLM Infrastructure:
  • Lock down model pulls. Configure Ollama to accept models only from trusted registries. Egress filtering prevents SSRF callbacks from reaching attacker infrastructure.
  • Detect enumeration patterns. Alert on rapid-fire requests hitting multiple model endpoints. Watch for the fingerprinting queries: "How many states are there in the United States?" and "How many letter r are in the word strawberry?"
  • Block OAST at DNS. Cut off the callback channel that confirms successful exploitation (a resolver-log sketch follows this list).
  • Rate-limit suspicious ASNs. AS152194, AS210558, and AS51396 all appeared prominently in attack traffic.
  • Monitor JA4 fingerprints. The signatures we identified will catch this tooling and similar automation targeting your infrastructure.
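
To complement "Block OAST at DNS," resolver logs can also be swept retroactively for callbacks to common OAST providers. The zone list below is an assumption drawn from Interactsh’s public servers and Burp Collaborator, not an authoritative list; reconcile it with the "What to Block" section of the GreyNoise post before deploying.

  # Commonly cited OAST callback zones; an illustrative assumption, not
  # an exhaustive inventory.
  OAST_ZONES = (
      "oast.pro", "oast.live", "oast.site", "oast.online",
      "oast.fun", "oast.me", "oastify.com",
  )

  def is_oast_lookup(qname: str) -> bool:
      """True if a DNS query name falls under a known OAST zone."""
      name = qname.rstrip(".").lower()
      return any(name == z or name.endswith("." + z) for z in OAST_ZONES)

  def scan_resolver_log(lines):
      """Yield (client_ip, qname) for OAST lookups. Assumes whitespace-
      separated records of the form "<timestamp> <client_ip> <qname> ...";
      adapt the parsing to your resolver's log format."""
      for line in lines:
          parts = line.split()
          if len(parts) >= 3 and is_oast_lookup(parts[2]):
              yield parts[1], parts[2]

  # A successful SSRF callback appears as a lookup of a random subdomain
  # under an OAST zone:
  demo = ["1736899200 10.0.0.5 cj4f8a2.oast.fun A"]
  print(list(scan_resolver_log(demo)))   # [('10.0.0.5', 'cj4f8a2.oast.fun')]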
GreyNoise provides a “What to Block” section in the blog post.

Link(s):
https://www.greynoise.io/blog/threat-actors-actively-targeting-llms