×
Brave research reveals AI browsers vulnerable to hidden image attacks
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Brave Software has uncovered security vulnerabilities in AI-powered browsers that allow hackers to manipulate them through hidden text in images and malicious websites. The research, published on the same day OpenAI introduced its ChatGPT Atlas browser, demonstrates how prompt injection attacks can trick AI browsers into accessing sensitive user data like emails and banking information.

What you should know: The vulnerabilities exploit AI browsers’ ability to analyze visual content, turning a helpful feature into a security risk.

  • Brave’s research team successfully hid malicious instructions in images using faint light blue text on yellow backgrounds, making the commands invisible to users but readable by AI systems.
  • When Perplexity’s Comet browser analyzed these compromised images, it executed the hidden commands, including accessing user emails and visiting hacker-controlled websites.
  • A similar attack worked on the Fellou browser simply by directing it to navigate to a malicious website containing hidden instructions.

How the attacks work: The exploits rely on “indirect prompt injection,” where AI browsers treat web content as trusted input for their large language models.

  • For Comet, hackers embed invisible text instructions in images that users might screenshot or analyze.
  • With Fellou, simply visiting a compromised website causes the browser to send the site’s content to its AI system, which then reads and executes hidden commands.
  • “Surprisingly, we found that simply asking the browser to go to a website causes the browser to send the website’s content to their LLM,” Brave researchers noted.

In plain English: Think of these AI browsers as having a helpful assistant that can read and analyze everything you show it—including hidden messages you can’t see. Hackers can slip secret instructions into images or websites, like writing with invisible ink that only the AI can read, then trick the AI into doing things like checking your email or visiting dangerous websites on your behalf.

Why this matters: AI browsers operate with users’ authenticated privileges, making successful attacks particularly dangerous.

  • “The scariest aspect of these security flaws is that an AI assistant can act with the user’s authenticated privileges,” Brave explained in a tweet.
  • “An agentic browser hijacked by a malicious site can access a user’s banking, work email, or other sensitive accounts.”
  • The research highlights that “indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers.”

What the companies are saying: Perplexity, the company behind the Comet browser, has patched the vulnerability but disputes Brave’s characterization of the research.

  • “We worked closely with Brave on this issue through our active bug bounty program (the flaw is patched, unreproducible, and was never exploited by any user),” Perplexity told PCMag.
  • The company added: “We’ve been dismayed to see how they mischaracterize that work in public. Nonetheless, we encourage visibility for all security conversations as the AI age introduces ever more variables and attack points.”
  • Perplexity also claimed: “We’re the leaders in security research for AI assistants.”

Recommended safeguards: Brave is calling for stronger security measures across the AI browser category.

  • The company recommends implementing “explicit consent from users for agentic browsing actions like opening sites or reading emails.”
  • OpenAI and Microsoft already implement similar consent mechanisms to some extent in their AI implementations.
  • Users can currently intervene to stop attacks, which become visible when the AI processes malicious tasks, though this requires active monitoring.
Curious About AI Browsers? Read This Before You Try Them

Recent News

Law firm pays $55K after AI created fake legal citations

The lawyer initially denied using AI before withdrawing the fabricated filing.

AI experts predict human-level artificial intelligence by 2047

Half of experts fear extinction-level risks despite overall optimism about AI's future.

OpenAI acquires Sky to bring Mac control to ChatGPT

Natural language commands could replace clicks and taps across Mac applications entirely.