×
AI browsers vulnerable to hidden screenshot attack commands
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Researchers from Brave, a privacy-focused browser company, have uncovered serious security vulnerabilities in AI-powered browsers, specifically Perplexity’s Comet Browser, that could allow hackers to access users’ personal accounts through prompt injection attacks. These findings raise significant concerns as major companies like OpenAI prepare to launch their own AI browsers, potentially exposing millions of users to these inherent security risks.

What you should know: AI browsers can be hijacked through malicious websites that embed hidden instructions in screenshots, tricking the AI into performing unauthorized actions.

  • Brave’s research demonstrated how a simple screenshot request could cause Perplexity’s AI browser to open personal email and visit hacker-controlled websites without user knowledge.
  • The attack works by embedding text instructions imperceptible to humans but readable by AI, which the browser follows without distinguishing them from legitimate user commands.

The big picture: AI browsers represent the latest trend in the AI boom cycle, following the success of chatbots and autonomous agents, with OpenAI announcing its “ChatGPT Atlas” browser this week.

  • These browsers are designed to supercharge web experiences with machine learning features, including AI analysis of screenshots and automated web actions.
  • The technology allows AI assistants to act with users’ authenticated privileges, accessing banking, work email, and other sensitive accounts.

Why this matters: Prompt injection attacks aren’t new, but AI browsers dramatically escalate the potential damage by giving malicious actors control over users’ authenticated web sessions.

  • “The scariest aspect of these security flaws is that an AI assistant can act with the user’s authenticated privileges,” Brave warned in their report.
  • Previous Brave research showed how a single Reddit post could trick Perplexity’s browser into potentially giving hackers bank account access.

Key technical details: The vulnerabilities stem from fundamental issues with how AI browsers handle the boundary between trusted user input and untrusted web content.

  • The attacks “boil down to a failure to maintain clear boundaries between trusted user input and untrusted Web content when constructing LLM prompts while allowing the browser to take powerful actions on behalf of the user.”
  • These problems are inherent to large language models and their integration with web browsing functionality.

In plain English: Think of it like a trusted assistant who can’t tell the difference between your genuine instructions and fake ones written by a scammer. When you ask the AI browser to analyze a screenshot, malicious websites can hide invisible commands in that image that the AI reads and follows—like telling it to open your email or visit dangerous websites—even though you never intended those actions.

What’s next: Security experts expect similar vulnerabilities to appear in OpenAI’s upcoming AI browser, potentially exposing millions more users to these risks.

  • “AI-powered browsers that can take actions on your behalf are powerful yet extremely risky,” the Brave report concluded.
  • The research highlights the need for better security measures before AI browsers become mainstream consumer products.
Researchers Find Severe Vulnerabilities in AI Browser

Recent News

University of Illinois launches CropWizard AI for real-time farming guidance

Academic institutions are now driving practical AI applications into America's oldest industry.

Lawyer faces sanctions for using AI to fabricate 22 legal citations

Judge calls AI legal research "a game of telephone" requiring verification with original sources.