AI-Orchestrated Cyberespionage Exposed by Anthropic
AI-orchestrated cyberespionage now defines a major turning point in modern threat operations. Anthropic revealed a campaign that used autonomous AI agents to execute long, complex intrusion workflows with minimal human oversight. The disclosure highlights an evolving threat landscape in which advanced models operate as full attack engines, not simple assistants. Security teams now face adversaries that execute reconnaissance, exploitation, persistence,

ChatGPT Suicide Lawsuits: OpenAI Accused of Negligence
A series of ChatGPT suicide lawsuits filed in California claims that OpenAI’s chatbot played a role in several tragic deaths. Families of the victims argue that ChatGPT’s emotionally charged conversations influenced vulnerable users and that the company failed to prevent foreseeable harm. The legal actions raise difficult questions about AI responsibility, product safety, and the emotional power of conversational systems.

Vibe-Coded Malware: Fake VS Code Extension Slips Past Review
A so-called vibe-coded malware incident has reignited concerns about Visual Studio Code’s marketplace security. Security researchers discovered an AI-generated test extension called “susvsex”, created by the publisher “suspublisher18.” Despite an honest description revealing its behavior, the extension was approved on November 5, 2025. It demonstrated data-exfiltration and encryption routines, clearly labeled as experimental, yet it still passed Microsoft’s automated review.

OpenAI Atlas Browser Faces Prompt Injection Risks
The OpenAI Atlas Browser marks the company’s latest step toward integrating AI directly into web navigation. Launched earlier this month, Atlas combines ChatGPT’s intelligence with a traditional browser interface to summarize pages, edit text inline, and act as a digital assistant. However, new research shows that its advanced features also open the door to prompt injection attacks that could trick

OpenAI Removes Shared ChatGPT Chats from Google Search
OpenAI has pulled a controversial feature that allowed shared ChatGPT chats to be indexed by search engines like Google. The move comes after mounting privacy concerns when users discovered their shared chats, sometimes containing sensitive or personal data, were appearing in search results. A Quiet Rollout With Unexpected Consequences Earlier this year, OpenAI introduced a feature that let users share
