Cybercriminals and GenAI are forming a dangerous new alliance. AI tools are no longer just experimental; they are fully integrated into modern cybercrime. From phishing to deepfakes, GenAI is giving hackers new capabilities that make attacks faster, smarter, and harder to detect.
A new CrowdStrike report confirms what many in the security industry have feared: cybercriminals treat GenAI as a core part of their infrastructure. Rather than using AI tools occasionally, they now rely on them to automate operations, launch scalable campaigns, and avoid traditional defenses.
How Cybercriminals Use GenAI
Phishing attacks are the most obvious example. GenAI enables the creation of hyper-personalized emails that are grammatically perfect and socially engineered to manipulate victims. These messages often bypass spam filters and fool even cautious users.
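One reason these messages slip past filters is that many legacy spam heuristics lean on surface signals like shouting, typos, and scammy keywords, exactly the tells GenAI removes. The sketch below is a deliberately oversimplified, hypothetical filter (the function name, keywords, and thresholds are illustrative, not from any real product) showing how a polished, personalized message can score as clean:

```python
import re

# Hypothetical, crude spam filter: scores an email on surface signals
# (ALL-CAPS shouting, exclamation marks, classic scam keywords).
# Thresholds and keyword list are illustrative only.
def crude_spam_score(text: str) -> int:
    score = 0
    if len(re.findall(r"\b[A-Z]{3,}\b", text)) > 2:
        score += 2                      # many ALL-CAPS words
    score += text.count("!")            # exclamation marks
    for kw in ("winner", "urgent!!!", "click here now"):
        if kw in text.lower():
            score += 3                  # classic spam phrasing
    return score

clumsy = "URGENT!!! YOU ARE A WINNER! CLICK HERE NOW!!!"
polished = ("Hi Dana, following up on yesterday's board sync. "
            "Could you review the attached Q3 figures before 5pm? Thanks, Sam")

print(crude_spam_score(clumsy))    # high score: flagged
print(crude_spam_score(polished))  # scores zero: slips through
```

The grammatically perfect, context-aware message triggers none of the crude signals, which is why defenses increasingly need to look at sender behavior and intent rather than writing quality.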
But phishing is only the beginning. Cybercriminals use GenAI to:
- Write malware and obfuscated code
- Research targets through AI-assisted reconnaissance
- Generate fake résumés and cover letters
- Translate code and documents to bypass language barriers
- Create deepfake videos for fraudulent interviews
In one example, CrowdStrike tracked over 320 incidents involving North Korean hackers applying for remote IT jobs using GenAI. They crafted convincing résumés, conducted video interviews with deepfake avatars, and even used AI tools to answer technical questions on the spot.
AI Agents and Deepfake Scams
The rise of agentic AI (autonomous bots that browse, plan, and act) adds another layer of danger. These tools let cybercriminals automate entire multi-step attacks with minimal effort. DDoS campaigns, data harvesting, and even fraud schemes can now run on autopilot.
Deepfake tech is also evolving fast. Scammers now combine GenAI and video manipulation to impersonate real professionals, faking interviews, consultations, and identity checks.
Defenders Struggle to Keep Up
Security teams now face adversaries who think and act at machine speed. Traditional defense methods can’t match AI-accelerated attacks. While organizations slowly adopt AI for threat detection, cybercriminals are already using it to breach systems, deceive staff, and steal data.
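Matching machine-speed attacks usually means automated baselining rather than manual review. As a minimal illustration (not any vendor's detection logic; the data and threshold are made up), a simple statistical check can flag the kind of sudden login-failure burst that an AI-driven credential-stuffing run produces:

```python
from statistics import mean, stdev

# Illustrative velocity check: flag hours whose login-failure count
# deviates sharply (z-score above threshold) from the baseline.
def flag_anomalies(counts, threshold=3.0):
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and (c - mu) / sigma > threshold]

# 23 quiet hours, then one hour with a burst of failures,
# e.g. automated credential stuffing at machine speed.
hourly_failures = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5,
                   3, 4, 6, 5, 4, 5, 3, 4, 6, 5, 4, 250]
print(flag_anomalies(hourly_failures))  # only the burst hour is flagged
```

Real detection pipelines are far richer (per-user baselines, geo and device signals, ML models), but the principle is the same: let software watch the telemetry at the speed the attacks arrive.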
As CrowdStrike warns, GenAI is no longer a novelty. It’s a core part of the cybercrime ecosystem.
Final Thoughts
Cybercriminals and GenAI are reshaping the cyber threat landscape. With AI tools now used for phishing, deepfakes, and malware, the stakes have never been higher. Defenders must adapt fast, embracing AI-powered detection and smarter authentication to stay ahead. This isn't a future threat; it's already here.
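"Smarter authentication" often starts with something as simple as time-based one-time passwords, which blunt phishing of static credentials. Here is a minimal sketch of the standard TOTP algorithm (RFC 6238, HMAC-SHA-1 variant); the secret and parameters shown are the RFC's published test values, not production settings:

```python
import base64
import hmac
import struct
import time

# Minimal RFC 6238 TOTP: derive a short-lived numeric code from a
# shared secret and the current 30-second time window.
def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)                       # time window index
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32), at T=59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))
```

Because each code expires in seconds, a phished code is worth far less to an attacker than a phished password, which is exactly the kind of friction AI-accelerated credential attacks are forcing defenders to adopt.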