Current Cyber Threats

New Font-Rendering Trick Hides Malicious Commands From AI Tools

Summary:
Recent research has detailed a sophisticated technique called "Poisoned Typeface," which leverages the discrepancy between how AI assistants parse a webpage's Document Object Model (DOM) and how a browser visually renders that same page for a human. By using custom font glyph remapping and strategic CSS, an attacker can present "benign" content to an AI’s text-based parser while simultaneously displaying malicious instructions to the user. The attack functions as a visual substitution cipher. For example, a font file is engineered so that the character "A" in the HTML code is rendered as the visual shape of a "Z" on the screen. Consequently, an AI assistant analyzing the site sees harmless "filler" text (like video game lore), while the user sees a clear call-to-action to execute a reverse shell or download a malicious payload. As of late 2025, this technique was effective against a wide array of AI tools, including ChatGPT, Claude, and Gemini. Notably, Microsoft has already taken steps to address this within its own ecosystem after the researchers' disclosure.
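The substitution-cipher mechanic described above can be sketched in a few lines of Python. The Atbash-style mapping below (A↔Z, B↔Y, …) is a hypothetical stand-in for the attacker's custom font table, not the actual glyph remapping used in the research:

```python
# Hypothetical poisoned-font glyph table: the font file declares that the
# character "A" in the DOM is drawn with the visual shape of "Z", "B" as
# "Y", and so on (an Atbash-style substitution, used here for illustration).
GLYPH_MAP = {c: chr(ord("Z") - (ord(c) - ord("A")))
             for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def rendered_view(dom_text: str) -> str:
    """What a human sees once the poisoned font is applied.

    An AI assistant parsing the raw HTML sees `dom_text` itself; the
    browser substitutes each character's glyph per the font's mapping.
    """
    return "".join(GLYPH_MAP.get(ch, ch) for ch in dom_text)

# The DOM carries apparently meaningless filler text...
dom_text = "IFM VERO.VCV"
# ...but the user's screen shows a call-to-action: "RUN EVIL.EXE"
print(rendered_view(dom_text))
```

The text-parsing AI and the human are reading the same bytes through two different lenses, which is exactly the gap the attack exploits.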


Security Officer Comments:
This research underscores a new layer of complexity in securing the "AI-to-Human" interface. Rather than exploiting a traditional software vulnerability, the attack abuses a semantic gap in how automated tools interpret visual information. Many of our members are currently integrating AI assistants into their internal workflows to help employees summarize technical documentation or vet suspicious links. The primary risk is that a user might defer to the AI's "all clear" signal, not realizing the AI is literally blind to the visual instructions the user is reading.

This technique is particularly relevant for member organizations with large helpdesks or DevOps teams. If a technician uses an AI to verify a command found on a community forum, the AI may report the page is safe because it only sees the background text. This places a premium on "human-in-the-loop" verification. We should view this not as a failure of AI, but as a reminder that AI assistants currently operate as text-parsers rather than visual observers. The proactive response from Microsoft demonstrates that the industry is already moving toward "rendering-aware" security models, which is a positive step for the collective security of our membership.


Suggested Corrections:
To address the risks associated with font-rendering manipulation, the following strategies are recommended:
  • Adopt Rendering-Aware AI Workflows: Where possible, utilize "agentic" or vision-capable AI models that can analyze a screenshot of a page in addition to the raw HTML. This "render-and-compare" approach can identify discrepancies between the code and the visual output.
  • Enhance Browser Security Heuristics: Security teams can configure web gateways or browser isolation tools to flag pages that exhibit suspicious CSS properties, such as font sizes below 5px, text colors that match the background (hidden text), or the use of high-entropy "gibberish" strings that are likely being used as cipher keys for custom fonts.
  • Font Surface Inspection: For high-security environments, consider inspecting custom web fonts (.woff2, .ttf) for abnormal Unicode remapping. Legitimate fonts typically follow standard character mappings; widespread remapping of standard ASCII characters is a strong indicator of a Poisoned Typeface attempt.
  • Verification Protocols: Establish a policy that AI "safety summaries" should be treated as informational rather than authoritative. Employees should be encouraged to use sandboxed environments (like a virtual machine or container) when executing any command derived from a public webpage, regardless of an AI’s sentiment analysis of that page.
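One of the heuristics above, flagging high-entropy "gibberish" runs that may serve as cipher text for a custom font, can be approximated with a small Shannon-entropy check. The length and entropy thresholds below are illustrative assumptions that would need tuning against real traffic, not values taken from the research:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character. Repetitive English prose typically lands well
    below 4.5; strings drawing evenly on a large alphabet land above it."""
    if not text:
        return 0.0
    n = len(text)
    counts = Counter(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Assumed thresholds for illustration only; tune before deploying.
ENTROPY_THRESHOLD = 4.5   # bits per character
MIN_RUN_LENGTH = 40       # ignore short incidental strings

def looks_like_cipher_text(run: str) -> bool:
    """Flag visible text runs whose character distribution resembles
    cipher-key gibberish rather than natural language."""
    return len(run) >= MIN_RUN_LENGTH and shannon_entropy(run) > ENTROPY_THRESHOLD
```

A gateway or isolation tool could apply this check to visible text nodes styled with a custom @font-face, raising a review flag rather than blocking outright, since legitimate pages (e.g., base64 snippets in documentation) can also score high.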

Link(s):
https://www.bleepingcomputer.com/ne...trick-hides-malicious-commands-from-ai-tools/