Docker fixes critical Ask Gordon AI flaw that allows code execution via image metadata

5 Min Read
5 Min Read

Cybersecurity researchers have detailed a patched security flaw affecting Ask Gordon, the artificial intelligence (AI) assistant built into Docker Desktop and the Docker command-line interface (CLI). This flaw could be exploited to execute code or leak sensitive data.

This critical vulnerability has been codenamed docker dash By cybersecurity company Noma Labs. This issue was resolved by Docker with the release of version 4.50.0 in November 2025.

“With DockerDash, a single malicious metadata label in a Docker image can be used to compromise a Docker environment through a simple three-step attack: Gordon AI reads and interprets the malicious instructions, forwards them to an MCP (Model Context Protocol) gateway, and executes them through MCP tools,” Sasi Levi, head of security research at Noma, said in a report shared with The Hacker News.

“Leveraging the current agent and MCP gateway architecture, all stages occur without validation.”

Successful exploitation of this vulnerability could result in remote code execution with high impact against cloud and CLI systems or data disclosure with high impact against desktop applications.

According to Noma Security, the issue stems from the fact that the AI ​​assistant treats unverified metadata as executable commands, allowing the metadata to propagate through various layers without verification, allowing attackers to bypass security boundaries. As a result, simple AI queries open the door to tool execution.

If the MCP acts as the connective tissue between the large-scale language model (LLM) and the local environment, the problem is a failure of context trust. This problem is characterized as a case of metacontext injection.

“MCP Gateway cannot distinguish between informational metadata (such as standard Docker LABELs) and pre-approved executable internal instructions,” Levi said. “By embedding malicious instructions in these metadata fields, attackers can hijack the AI’s inference process.”

See also  Google Chrome implements distrust and issues over two certificate authorities over compliance

In a hypothetical attack scenario, an attacker could exploit a serious trust boundary violation in the way Ask Gordon parses the container’s metadata. To accomplish this, the attacker creates a malicious Docker image with instructions embedded in the Dockerfile LABEL field.

Metadata fields may seem innocuous, but when processed by Ask Gordon AI, they become vectors for injection. The code execution attack chain is as follows:

  • An attacker publishes a Docker image in a Dockerfile that contains a weaponized LABEL instruction.
  • When a victim queries Ask Gordon AI for an image, Gordon reads the image’s metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to distinguish between legitimate metadata descriptions and embedded malicious instructions.
  • Ask Gordon to forward the parsed instructions to the MCP Gateway, a middleware layer between the AI ​​agent and the MCP server.
  • The MCP Gateway interprets this as a standard request from a trusted source and calls the specified MCP tool without any additional validation.
  • The MCP tool executes commands with the victim’s Docker privileges, resulting in code execution.

This data extraction vulnerability weaponizes the same prompt injection flaw, but targets Ask Gordon’s Docker Desktop implementation and leverages the assistant’s read-only privileges to capture sensitive internal data about the victim’s environment using the MCP tool.

The information collected may include details about installed tools, container details, Docker configuration, mounted directories, and network topology.

It’s worth noting that Ask Gordon version 4.50.0 also resolves the prompt injection vulnerability discovered by Pillar Security. This vulnerability could allow an attacker to hijack the Assistant and exfiltrate sensitive data by modifying the Docker Hub repository metadata with malicious instructions.

See also  New TEE.Fail side-channel attack extracts secrets from Intel and AMD DDR5 secure enclaves

“The DockerDash vulnerability highlights the need to treat AI supply chain risk as a major threat today,” Levi said. “This proves that trusted input sources can be used to hide malicious payloads that easily manipulate the AI’s execution path. To mitigate this new class of attacks, zero trust validation must be implemented for all contextual data provided to AI models.”

Share This Article
Leave a comment