HashJack: How AI Browsers Can Be Fooled by a Simple Symbol (2026)

A simple '#' symbol is all it takes to potentially compromise your AI browser. This startling discovery, known as the 'HashJack' attack, reveals a significant vulnerability in how AI-powered browsers handle website URLs. It's a wake-up call, highlighting the evolving landscape of online threats.

Cato Networks has uncovered a novel attack method that exploits a common feature of web addresses: the '#' symbol. This symbol, also known as a fragment identifier, typically points to a specific section within a webpage and doesn't change the destination of the URL. However, attackers are now using it to hide malicious instructions, tricking AI browser assistants into executing harmful commands.

But here's where it gets controversial... Prompt injection, the core of this attack, occurs when unwanted text is introduced to manipulate an AI bot's behavior. Direct prompt injection is when malicious text is entered directly, while indirect injection involves hidden commands within content that the AI processes. AI browsers, designed to enhance user experience by anticipating needs and taking actions, have proven particularly susceptible to indirect prompt injection. They're designed to be helpful, but in this case, they can be easily manipulated.

Cato describes HashJack as the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants. Attackers insert malicious instructions into the fragment part of a URL. These instructions are then processed by AI browser assistants like Copilot in Edge, Gemini in Chrome, and Comet from Perplexity AI. Because the URL fragments never leave the AI browser, traditional network and server defenses can't detect them, effectively turning trusted websites into attack vectors.

The technique is straightforward: an attacker appends a '#' to a normal URL, then adds malicious instructions after the symbol. When a user interacts with a page via their AI browser assistant, these instructions are fed into the large language model. This can lead to serious consequences, including data theft, phishing attempts, the spread of misinformation, malware guidance, or even medical harm. For instance, an AI assistant could provide incorrect dosage guidance based on the hidden instructions.

"This discovery is especially dangerous because it weaponizes legitimate websites through their URLs. Users see a trusted site, trust their AI browser, and in turn trust the AI assistant's output – making the likelihood of success far higher than with traditional phishing," explains Vitaly Simonovich, a researcher at Cato Networks.

In testing, Cato CTRL (Cato's threat research arm) found that AI browsers like Comet could be commanded to send user data to attacker-controlled endpoints, while more passive assistants could display misleading instructions or malicious links. This is a significant shift from typical "direct" prompt injections, as users believe they are interacting with a trusted page while hidden fragments trigger attacker links or background calls.

Google and Microsoft were alerted to HashJack in August, and Perplexity in July. Google classified it as "won't fix (intended behavior)" and low severity, while Perplexity and Microsoft applied fixes to their respective AI browsers.

The implications are vast. With AI browsers rapidly gaining popularity, this attack highlights how threats previously confined to server vulnerabilities and phishing websites are now migrating into the browsing experience itself.

"At Microsoft, we understand that defending against indirect prompt injection attacks is not just a technical challenge, it's an ongoing commitment to keeping our users safe in an ever-changing digital landscape," Microsoft stated.

Cato's findings emphasize that relying solely on network logs or server-side URL filtering is no longer sufficient. They suggest layered defenses, including AI governance, blocking suspicious fragments, restricting which AI assistants are permitted, and client-side monitoring. This means organizations must consider how the browser and assistant handle hidden context.

What are your thoughts? Do you think this is a serious threat, or are the risks overblown? How can we better protect ourselves from these types of attacks as AI browsers become more prevalent? Share your opinions in the comments below!

HashJack: How AI Browsers Can Be Fooled by a Simple Symbol (2026)

References

Top Articles
Latest Posts
Recommended Articles
Article information

Author: Prof. An Powlowski

Last Updated:

Views: 5701

Rating: 4.3 / 5 (44 voted)

Reviews: 91% of readers found this page helpful

Author information

Name: Prof. An Powlowski

Birthday: 1992-09-29

Address: Apt. 994 8891 Orval Hill, Brittnyburgh, AZ 41023-0398

Phone: +26417467956738

Job: District Marketing Strategist

Hobby: Embroidery, Bodybuilding, Motor sports, Amateur radio, Wood carving, Whittling, Air sports

Introduction: My name is Prof. An Powlowski, I am a charming, helpful, attractive, good, graceful, thoughtful, vast person who loves writing and wants to share my knowledge and understanding with you.