A so-called vibe-coded malware incident has reignited concerns about Visual Studio Code’s marketplace security. Security researchers discovered an AI-generated test extension called “susvsex,” created by the publisher “suspublisher18.” Despite a description that openly disclosed its behavior, the extension passed Microsoft’s automated review and was approved on November 5, 2025, complete with data-exfiltration and encryption routines labeled as experimental.
What Happened
The “susvsex” extension explicitly stated what it did: it would compress files, upload them, and encrypt the originals. Researchers confirmed that the code matched the description. Inside the script, a key function named zipUploadAndEncrypt zipped a target folder, sent the archive to a remote server, and replaced the source files with encrypted versions.
In addition, the extension connected to a private GitHub repository acting as its command-and-control (C2) channel. It fetched instructions from the repository and uploaded output back into a “requirements.txt” file using an embedded GitHub token. Even though this setup was part of a proof-of-concept test, it mimicked the structure of a functioning ransomware loader.
Not Real Malware, But a Real Warning
The developer did not hide their intent. The extension’s listing contained a transparent note: “Just testing.” There was no attempt at deception. Still, Microsoft’s review system allowed it to go live temporarily, exposing users to unnecessary risk: a test file that compresses, uploads, and encrypts data can still disrupt environments when installed by unaware developers.
This case highlights how marketplaces can approve unsafe packages when review automation prioritizes keyword scanning over behavioral analysis.
What “Vibe-Coded” Means
The label vibe-coded refers to code generated wholly or largely by AI tools rather than written by human developers. Analysts recognized the telltale signs: redundant comments, inconsistent logic, copied snippets, and unused variables. The “susvsex” code showed these traits throughout.
Such output is typical of generative models producing code “that looks right.” The incident showed that attackers, or even careless experimenters, can use AI to generate plausible, working extensions with minimal skill or oversight.
AI-Generated Code as a Security Threat
The “susvsex” case demonstrates how AI lowers the barrier for creating potentially harmful software. The author may not have intended real damage, yet the combination of AI-assisted coding and minimal review still produced a dangerous artifact.
AI tools can replicate complex logic like encryption, compression, and file I/O without requiring deep understanding from the user. That accessibility means more experimental or malicious code will inevitably surface in public repositories and marketplaces.
Marketplace Review Gaps
Despite open warnings in the description, the VS Code Marketplace approved the extension. The case revealed clear weaknesses in the platform’s moderation pipeline. The review algorithm failed to detect embedded credentials, destructive file operations, and external network requests.
For Microsoft and other vendors, this incident underlines the need for better behavioral scanning, not just text analysis. Marketplace tools must identify suspicious actions, like recursive file writes or calls to GitHub APIs, before extensions go live.
Developer and Organization Guidance
To prevent similar incidents from causing harm, researchers recommend:
- Restrict marketplace installs to reviewed or whitelisted extensions.
- Run unverified tools in isolated sandboxes.
- Inspect extensions for network calls and encryption routines.
- Avoid installing “testing” or “experimental” listings from unknown publishers.
- Use endpoint monitoring to detect file compression or mass modification.
- Apply static-analysis tools to check for AI-generated inconsistencies.
These measures reduce exposure to both genuine malware and vibe-coded experiments.
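As a defensive illustration of the inspection step above, a minimal static scan can flag obvious indicators, such as network calls, encryption APIs, archiving libraries, and embedded GitHub tokens, before an extension is trusted. This is a sketch under stated assumptions: the pattern list and the function name `scan_extension` are illustrative, and production vetting relies on behavioral analysis rather than regex matching.

```python
import re
from pathlib import Path

# Illustrative indicator patterns only; real extension-vetting tools use
# far more robust parsing and behavioral analysis than regex matching.
SUSPICIOUS_PATTERNS = {
    "network call": re.compile(r"\b(https?://|fetch\(|XMLHttpRequest|axios)"),
    "child process": re.compile(r"\brequire\(['\"]child_process['\"]\)"),
    "crypto usage": re.compile(r"\brequire\(['\"]crypto['\"]\)|createCipheriv"),
    "archive creation": re.compile(r"\b(archiver|adm-zip|zlib)\b"),
    "github token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),
}

def scan_extension(ext_dir: str) -> dict[str, list[str]]:
    """Flag JavaScript files in an extension folder that match indicators."""
    findings: dict[str, list[str]] = {}
    for js_file in Path(ext_dir).rglob("*.js"):
        text = js_file.read_text(encoding="utf-8", errors="ignore")
        hits = [name for name, pattern in SUSPICIOUS_PATTERNS.items()
                if pattern.search(text)]
        if hits:
            findings[str(js_file)] = hits
    return findings
```

A scan like this will produce false positives (many legitimate extensions make network calls), so its output is a triage signal for manual review, not a verdict.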
Policy and Marketplace Reforms
Extension marketplaces must strengthen their review pipelines. They should:
- Flag code that performs encryption, compression, or file deletion.
- Detect hard-coded credentials and C2 communication patterns.
- Require explicit labeling for proof-of-concept or testing listings.
- Add human review for extensions that modify user data.
Such policies protect users while allowing research and innovation to continue safely.
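One of the checks listed above, hard-coded credential detection, can be sketched as a combination of known token prefixes and an entropy filter. The prefixes (`ghp_`, `github_pat_`, `xoxb-`, `AKIA`) are real, publicly documented formats; the 4.0 bits-per-character threshold and the function names are illustrative assumptions, not a standard.

```python
import math
import re

# Known token prefixes are published by their issuers (GitHub, Slack, AWS).
TOKEN_PREFIXES = re.compile(
    r"\b(ghp_|gho_|github_pat_|xoxb-|AKIA)[A-Za-z0-9_\-]{16,}"
)

def shannon_entropy(s: str) -> float:
    """Shannon entropy of the string, in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def find_secrets(source: str, entropy_threshold: float = 4.0) -> list[str]:
    """Return candidate hard-coded credentials found in source text."""
    candidates = [m.group(0) for m in TOKEN_PREFIXES.finditer(source)]
    # Keep only high-entropy matches to reduce false positives such as
    # placeholder values like "ghp_aaaaaaaaaaaaaaaa" in documentation.
    return [c for c in candidates if shannon_entropy(c) >= entropy_threshold]
```

Running a filter like this over submitted extension code would have surfaced the embedded GitHub token reported in the “susvsex” case, though a marketplace pipeline would also need to verify whether flagged tokens are live.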
Final Thoughts
The vibe-coded malware incident involving “susvsex” was not a live cyberattack, but its approval still revealed critical flaws. Even though the publisher “suspublisher18” openly described the extension’s behavior, it passed review and reached the marketplace. That failure illustrates how AI-generated code challenges current trust systems.
As AI tools accelerate software creation, security controls must evolve. Developers and vendors alike should treat both AI-generated code and extension marketplaces as potential attack vectors. Transparent labeling is not enough; active behavioral vetting must become standard to prevent the next “vibe-coded” surprise.