Home Future Tech AI Browsers Can Be Tricked Into Phishing Attacks in Minutes, Researchers Warn
Future Tech

AI Browsers Can Be Tricked Into Phishing Attacks in Minutes, Researchers Warn

Share
Share

Artificial intelligence is rapidly changing how we browse the web. New agentic browsers powered by AI can perform tasks for users automatically, summarizing pages, filling forms, and even interacting with multiple websites without human involvement.

But according to new research, these same capabilities may also introduce a dangerous new attack surface. Security researchers have demonstrated that AI-powered browsers can be manipulated into falling for phishing scams in just a few minutes, raising concerns about how secure these emerging tools really are.

The Rise of Agentic AI Browsers

Modern AI browsers such as Perplexity’s Comet rely on large language models to analyze web pages and take actions on behalf of users. Instead of simply displaying websites, these browsers actively interpret content, make decisions, and execute tasks.

This shift changes how cyberattacks work.

Traditionally, attackers needed to trick humans into clicking malicious links or entering credentials on fake websites. But with AI agents handling many tasks automatically, the target may no longer be the human user — it may be the AI itself.

How Researchers Tricked an AI Browser

Security company Guardio recently demonstrated how attackers could manipulate AI browsers into executing phishing attacks.

The key issue lies in how these browsers explain their reasoning while interacting with websites.

During browsing sessions, AI systems often narrate what they see and why they take certain actions. Researchers call this behavior “Agentic Blabbering.”

In simple terms, the browser might reveal:

  • What it thinks is happening on a webpage
  • Which elements appear suspicious
  • What action it plans to take next
  • Why it considers something safe or unsafe

While this transparency is meant to improve usability, it also gives attackers valuable insights. By monitoring the communication between the AI browser and its cloud-based AI services, researchers could observe the model’s decision-making process in real time.

They then fed this data into a Generative Adversarial Network (GAN) to refine a phishing page until the AI browser stopped detecting it as suspicious. The result: a phishing attack that successfully fooled the AI browser in under four minutes.

Training Phishing Attacks Against AI

One of the most concerning aspects of this technique is how attackers can train scams offline. Instead of launching attacks directly against users, criminals could repeatedly test phishing pages against the AI model itself until they discover a version that bypasses the browser’s safeguards.

Once a malicious page works against the AI agent, it can potentially work against every user relying on that same browser model. This means attackers could create highly optimized phishing attacks designed specifically to deceive AI systems rather than humans.

Prompt Injection: A Growing Threat

These findings build on previous research showing that prompt injection attacks can manipulate AI tools. Prompt injection occurs when attackers hide malicious instructions within webpage content that an AI system reads and interprets.

If the AI cannot properly distinguish between legitimate user requests and attacker-controlled instructions, it may execute harmful actions. Researchers have already demonstrated several scenarios where this can happen.

Extracting Private Data

Security firm Trail of Bits recently showed that prompt injection techniques could be used to extract sensitive information from services like Gmail through an AI browser assistant.

In the demonstration, the attacker controlled a webpage containing hidden instructions. When a user asked the AI to summarize that page, the injected instructions caused the AI assistant to send private data to an attacker’s server.

Zero-Click Attacks

Another study revealed zero-click attacks affecting the Comet browser. These attacks could be triggered without the user interacting with a malicious link.

In one case, attackers embedded hidden prompts inside meeting invitations that could cause the AI system to:

  • Exfiltrate local files from a device
  • Access sensitive information
  • Interfere with password manager extensions

These vulnerabilities, collectively dubbed PerplexedBrowser, have since been patched by the vendor.

The Core Problem: Intent Collision

A major challenge behind these attacks is something researchers call intent collision.

This happens when an AI system merges:

  • A legitimate user request
  • Malicious instructions hidden in webpage content

Because the AI processes both pieces of information together, it may create an execution plan that unknowingly includes the attacker’s commands. Without reliable ways to separate trusted input from untrusted data, AI agents can be manipulated into performing harmful actions.

Why Prompt Injection Is Hard to Fix

Prompt injection remains one of the biggest security challenges for AI systems. Unlike traditional software vulnerabilities, these attacks exploit how language models interpret instructions, which makes them extremely difficult to eliminate entirely.

Even AI developers acknowledge the difficulty.

OpenAI previously noted that prompt injection vulnerabilities are unlikely to ever be completely solved, especially in systems where AI agents interact freely with web content.

Instead, the focus is shifting toward mitigation strategies such as:

  • Automated attack detection
  • Adversarial training against malicious prompts
  • Stronger system-level safeguards
  • Limiting what AI agents can access or execute

A New Era of AI-Focused Cybercrime

As AI-powered browsers become more common, cybersecurity threats will likely evolve alongside them. Instead of targeting humans directly, attackers may begin targeting AI decision-making systems that act on behalf of millions of users.

This shift could fundamentally change how scams and phishing campaigns operate. In the future, cybercriminals may not need to convince people to trust a fake website. They may only need to convince the AI browser first.

Share

Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Related Articles
kali linux tools list
CybersecurityFuture Tech

Kali Linux Tools List: Top Tools for Ethical Hacking and Security Testing

Kali Linux is one of the most powerful operating systems used in...

The Ethical Hacker delivers insights on ethical tech, AI, Web3, autonomous vehicles, and responsible innovation.

Stay Connected

Subscribe to get the latest ethical tech news and insights straight to your inbox.

    Copyright 2026 The Ethical Hacker. All rights reserved.