
Bubble AI Exploited in Microsoft Credential Phishing


Phishers have found a new way to slip past email security filters, and it involves a tool most people associate with building legitimate apps. Attackers are now abusing Bubble AI, a no-code app-building platform, to create and host malicious web apps designed to steal Microsoft account credentials. The approach is convincing, hard to detect, and security researchers warn it is already spreading across the cybercriminal ecosystem.

What Is Bubble AI?

Bubble AI is a no-code platform that lets users build web and mobile applications without writing code. Users describe what they want through a visual interface, and the platform generates a finished, functional app. It is popular with startups and independent developers who need to ship products quickly. Its apps are hosted on the bubble.io domain, which has a strong reputation and is widely trusted by email security systems.

That trust is exactly what attackers are exploiting.

How the Attack Works

The campaign follows a clear sequence. A target receives a phishing email containing a link to a Bubble AI-hosted web app. Because the link points to bubble.io, a legitimate and well-regarded domain, email security filters do not flag it as a threat. The user clicks through and lands on the Bubble-hosted app.

From there, the app redirects the victim to a fake Microsoft login page. In some cases, this redirect is hidden behind a Cloudflare verification check, which adds another layer of apparent legitimacy. The victim sees what looks like a routine Microsoft sign-in prompt and enters their credentials. Those credentials go straight to the attacker.
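The chain described above — trusted host first, credential page last — is itself a detectable signal. The sketch below is a hypothetical defender-side heuristic, not Kaspersky's detection logic; the trusted-host list and login-path keywords are illustrative assumptions.

```python
from urllib.parse import urlparse

TRUSTED_HOSTING = {"bubble.io"}          # legitimate no-code hosting domains (illustrative)
LOGIN_HINTS = ("login", "signin", "auth", "oauth")

def registered_domain(url: str) -> str:
    """Crude eTLD+1 approximation: the last two labels of the hostname."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_suspicious(chain: list[str]) -> bool:
    """True if a redirect chain starts on a trusted hosting domain but
    lands on a different host whose URL suggests a credential prompt."""
    if len(chain) < 2:
        return False
    first, last = registered_domain(chain[0]), registered_domain(chain[-1])
    landing = urlparse(chain[-1])
    asks_for_login = any(h in (landing.netloc + landing.path).lower()
                         for h in LOGIN_HINTS)
    return first in TRUSTED_HOSTING and last != first and asks_for_login

chain = [
    "https://myapp.bubble.io/landing",
    "https://example-check.net/cf-verify",     # fake Cloudflare verification step
    "https://m1crosoft-secure.example/login",  # spoofed Microsoft sign-in page
]
print(looks_suspicious(chain))  # → True
```

The point of the sketch is that the first hop alone looks clean; only the full resolved chain reveals the mismatch, which is why filters that judge the initial link pass it through.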

With a stolen Microsoft account, the attacker can access email, calendar data, files, and anything else connected to that Microsoft 365 account.

Why Security Tools Miss It

The evasion is not accidental. Bubble AI generates apps as large, complex JavaScript bundles that make heavy use of Shadow DOM, a browser feature that isolates parts of a page’s markup from the rest of the document. That isolation makes it harder for external tools to inspect what the app is actually doing.

Kaspersky researchers, who discovered the campaign, described the code as a “massive jumble” that even experienced analysts struggle to interpret. Automated analysis tools fare worse. They scan the code, find no obvious malicious signatures, and classify the app as a legitimate, functional site. The phishing layer stays hidden until a human digs deep enough to find it, and most security pipelines never get that far.
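Why signature scanning comes up empty can be shown with a toy example. This is not Bubble’s actual output or any vendor’s scanner; it is a minimal illustration of the general evasion: phishing strings that only exist after runtime decoding are invisible to a static keyword match over the raw source.

```python
import base64

# Illustrative signature phrases a naive static scanner might look for.
SIGNATURES = ("microsoft password", "sign in to your account", "verify your identity")

# A toy "bundle": the visible phishing text exists only as base64,
# so none of the signature phrases appear in the source as written.
payload = base64.b64encode(b"Enter your Microsoft password to continue").decode()
bundle_source = f"var z = decodePayload('{payload}'); render(z);"

def static_scan(source: str) -> list[str]:
    """Naive signature match over the raw source text."""
    return [s for s in SIGNATURES if s in source.lower()]

def runtime_view(source: str) -> str:
    """What the page actually displays once the string is decoded at runtime."""
    encoded = source.split("'")[1]
    return base64.b64decode(encoded).decode()

print(static_scan(bundle_source))   # → []  (nothing flagged)
print(runtime_view(bundle_source))  # → Enter your Microsoft password to continue
```

Real bundles are vastly larger and layer this kind of indirection many times over, which is why Kaspersky’s analysts describe the code as a jumble that automated tools classify as benign.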

A Technique Built for Scale

What makes this more than a one-off tactic is how well it fits into the broader phishing-as-a-service ecosystem. PhaaS platforms are criminal services that rent out ready-made phishing infrastructure to other attackers. They already include tools for bypassing two-factor authentication, stealing session cookies, running adversary-in-the-middle attacks, and generating personalised phishing emails with AI assistance.

Kaspersky researchers assess it is highly likely that the Bubble AI abuse technique is already being integrated into these platforms. If that is the case, it will not stay limited to a niche group of attackers. It will become a standard option in widely distributed phishing kits, available to anyone willing to pay.

That shift matters. Techniques that require skill and setup stay rare. Techniques packaged into a service spread fast.

What Microsoft Users Should Know

This campaign targets Microsoft accounts specifically, but the underlying method could be adapted for any platform with a recognisable login page. Microsoft 365 is the target here because corporate accounts carry high value. Access to a business email account can open doors to financial fraud, internal data, or deeper network compromise.

Users should treat any unexpected login prompt with caution, even if the email that led them there looked clean. A link that passes through a legitimate platform like Bubble AI does not mean the destination is safe. The threat arrives at the end of the redirect chain, not at the start.

Enabling multi-factor authentication on Microsoft accounts adds a meaningful barrier, though it is not a complete defence. PhaaS platforms already include tools designed to defeat 2FA in real time. The stronger protection is learning to recognise the pattern: an unsolicited email, a link to an unfamiliar app, a login page that appeared without a direct visit to microsoft.com.
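That three-part pattern can be written down as a simple checklist. The function below is a hypothetical sketch — the signal names, domain list, and scoring are illustrative assumptions, not any product’s detection rules.

```python
from urllib.parse import urlparse

# Legitimate Microsoft sign-in domains (illustrative, not exhaustive).
MICROSOFT_DOMAINS = {"microsoft.com", "microsoftonline.com", "live.com"}

def login_page_risk(unsolicited_email: bool, unfamiliar_app: bool,
                    login_url: str) -> int:
    """Count how many of the three warning signs from the pattern are present:
    an unsolicited email, a link through an unfamiliar app, and a sign-in
    page hosted off Microsoft's own domains."""
    host = urlparse(login_url).hostname or ""
    off_domain = not any(host == d or host.endswith("." + d)
                         for d in MICROSOFT_DOMAINS)
    return sum([unsolicited_email, unfamiliar_app, off_domain])

# A prompt reached from an unsolicited email via an unknown app, hosted
# off Microsoft's domains, trips all three signals.
print(login_page_risk(True, True, "https://secure-m365.example/login"))       # → 3
# A sign-in you initiated yourself on the real domain trips none.
print(login_page_risk(False, False, "https://login.microsoftonline.com/"))    # → 0
```

Even one signal is worth a pause; all three together match the campaign described in this post.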

Bubble has not responded to requests for comment on the findings or any plans to introduce stronger anti-abuse protections on its platform.

Final Thoughts

The Bubble AI phishing campaign is a clear example of how attackers adapt to the tools that defenders rely on. Email security filters trust established platforms. Automated code analysis looks for known patterns. Attackers have learned to exploit both assumptions at once. By generating apps on a trusted platform with deliberately complex code, they create a path through defences that most organisations have not prepared for. The no-code revolution has made app-building accessible to everyone, and that now includes people building infrastructure for credential theft.

Janet Andersen

Janet is an experienced content creator with a strong focus on cybersecurity and online privacy. With extensive experience in the field, she’s passionate about crafting in-depth reviews and guides that help readers make informed decisions about digital security tools. When she’s not managing the site, she loves staying on top of the latest trends in the digital world.