×
Hidden text in images can hijack AI browsers, Brave research shows
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Brave Software has uncovered critical security vulnerabilities in AI-powered browsers that allow malicious actors to hijack the software using hidden text embedded in images or websites. The research demonstrates how prompt injection attacks—where hackers secretly feed malicious commands to AI systems—can manipulate browsers like Perplexity’s Comet and Fellou into executing unauthorized commands, including accessing user emails and visiting hacker-controlled websites.

What you should know: The attacks exploit AI browsers’ ability to process visual content by hiding malicious instructions that users cannot see.

  • Brave researchers successfully embedded prompt injection commands using “faint light blue text on a yellow background” in images, making the malicious instructions effectively invisible to users.
  • When AI browsers analyze these compromised images or visit infected websites, they read and potentially execute the hidden commands without user awareness.
  • The timing coincides with OpenAI’s launch of its ChatGPT Atlas browser, highlighting growing security concerns as AI browsing tools become more prevalent.

How the attacks work: The vulnerabilities target different aspects of AI browser functionality depending on the specific software.

  • In Comet browser attacks, users are tricked into analyzing a malicious image containing hidden instructions, which the AI then reads and executes.
  • Fellou browser can be compromised simply by navigating to a hacker-controlled website, where the AI automatically processes malicious content embedded in the site.
  • Brave’s attack demonstrations showed successful execution of commands including email access and redirection to attacker-controlled websites.

The big picture: These prompt injection attacks represent a systemic security challenge facing the entire category of AI-powered browsers rather than isolated incidents.

  • “The scariest aspect of these security flaws is that an AI assistant can act with the user’s authenticated privileges,” Brave noted, explaining how hijacked browsers could access banking, work email, or other sensitive accounts.
  • While users can potentially intervene to stop visible attacks, the research underscores fundamental security gaps in how AI browsers process and trust external content.

Industry response: Companies are implementing patches while defending their security practices amid the disclosure.

  • Perplexity confirmed they “worked closely with Brave on this issue through our active bug bounty program” and stated the flaw is “patched, unreproducible, and was never exploited by any user.”
  • However, Perplexity pushed back against Brave’s characterization, saying they were “dismayed to see how they mischaracterize that work in public” while claiming to be “the leaders in security research for AI assistants.”
  • Fellou did not immediately respond to requests for comment regarding the vulnerability disclosure.

What experts recommend: Brave is calling for enhanced safeguards across the AI browser ecosystem to prevent similar attacks.

  • The company advocates for “explicit consent from users for agentic browsing actions like opening sites or reading emails,” noting that OpenAI and Microsoft already implement similar protections to some extent.
  • The research emphasizes the need for AI browser makers to treat external content as potentially untrusted input rather than automatically processing it through their language models.
Be Careful With AI Browsers: A Malicious Image Could Hack Them

Recent News

AWS suffers 15-hour outage after laying off hundreds of tech workers

Veteran engineers knew which seemingly unrelated systems to check when DNS wobbled.

UTSA researchers use AI digital twins to fight 120-degree indoor heat

Digital twins allow testing renovation strategies virtually before real-world implementation.

Google Gemini adds image markup tools for targeted AI analysis

The feature addresses a key flaw in current image analysis tools.