Researchers from Brave, a privacy-focused browser company, have uncovered serious security vulnerabilities in AI-powered browsers, specifically Perplexity’s Comet Browser, that could allow hackers to access users’ personal accounts through prompt injection attacks. These findings raise significant concerns as major companies like OpenAI prepare to launch their own AI browsers, potentially exposing millions of users to these inherent security risks.
What you should know: AI browsers can be hijacked through malicious websites that embed hidden instructions in screenshots, tricking the AI into performing unauthorized actions.
The big picture: AI browsers represent the latest trend in the AI boom cycle, following the success of chatbots and autonomous agents, with OpenAI announcing its “ChatGPT Atlas” browser this week.
Why this matters: Prompt injection attacks aren’t new, but AI browsers dramatically escalate the potential damage by giving malicious actors control over users’ authenticated web sessions.
Key technical details: The vulnerabilities stem from fundamental issues with how AI browsers handle the boundary between trusted user input and untrusted web content.
In plain English: Think of it like a trusted assistant who can’t tell the difference between your genuine instructions and fake ones written by a scammer. When you ask the AI browser to analyze a screenshot, malicious websites can hide invisible commands in that image that the AI reads and follows—like telling it to open your email or visit dangerous websites—even though you never intended those actions.
What’s next: Security experts expect similar vulnerabilities to appear in OpenAI’s upcoming AI browser, potentially exposing millions more users to these risks.