AI browsers from Microsoft, OpenAI, and Perplexity are increasingly susceptible to scams, completing fraudulent purchases and clicking on malicious links without human verification. This development highlights a significant risk as these tools automate digital tasks, creating new avenues for cybercriminals to exploit their lack of human judgment. Researchers have demonstrated how AI browsers can be tricked into making purchases, following phishing links, and even executing malware via prompt injection attacks.
Microsoft has integrated Copilot into Edge, OpenAI is testing a sandboxed browser in agent mode, and Perplexity’s Comet is one of the first to fully embrace the concept of browsing for users. This marks the arrival of agentic AI in daily routines, from searching and reading to shopping and clicking, with these tools beginning to replace human interaction in these tasks. However, this shift presents a new wave of digital deception. AI-powered browsers, while promising convenience by handling tasks like shopping and emails, can be tricked into scams faster than humans. This dangerous mix of speed and trust has given rise to what experts call Scamlexity, a complex landscape where AI agents can be deceived, often at the user’s expense.
Researchers have shown how AI browsers can fall victim to scams. For example, in a test, an AI browser was tricked into completing a purchase on a fake Walmart store within minutes, autofilling personal and payment details without hesitation. Scammers managed to extract the money while the human never saw the red flags. Traditional phishing tactics also remain effective, with AI browsers clicking malicious links and even assisting in filling out login credentials on phishing pages, creating a perfect trust chain for scammers to exploit.
The real threat comes from attacks specifically designed for AI. Researchers created PromptFix, a scam disguised as a CAPTCHA page. While humans would only see a checkbox, the AI agent read hidden malicious instructions in the page code, believing it was helping, and clicked the button, triggering a download that could have been malware. This type of prompt injection bypasses human awareness and targets the AI’s decision-making process directly. Once compromised, the AI can send emails, share files, or execute harmful tasks without the user’s awareness.
As agentic AI becomes mainstream, the scale of scams is expected to grow significantly. Instead of individually targeting millions of people, attackers need only to compromise one AI model to reach millions at once. Security experts warn this is a structural risk, not just a phishing problem. AI browsers can save time, but they can also put users at risk if relied upon too much. Practical steps include double-checking sensitive actions, using trusted data removal services, maintaining strong antivirus software, and employing a password manager to protect against unauthorized access.
Scammers hide malicious instructions in the code your AI reads, and the agent may follow them without question. If something feels wrong, it’s essential to stop the task and handle it manually. AI browsers bring convenience, but they also bring risk. By removing human judgment from critical tasks, they expose a wider scam surface than ever before. Scamlex, as it is now called, is a wake-up call: the AI you trust could be tricked in ways you never see coming. Staying safe means staying alert and demanding stronger guardrails in every AI tool you use.