October 3, 2025
If AI coding is the cool, genius kid on the block, then AI hacking is its dark, scary twin! Cybercriminals are now using simple language prompts to fool AI models and launch sneaky ransomware attacks. Anthropic, a top AI creator, revealed its Claude Code got hacked to steal personal data from 17 companies and demand nearly $500,000 from each victim. What’s more jaw-dropping? Evil LLMs (large language models made for bad purposes) like FraudGPT and WormGPT are now sold on dark web forums for just $100! These tools use "prompt injection" tricks that confuse AI to bypass safety and spill secrets or produce harmful content. This helps hackers run smooth social engineering scams. Recently, researchers found "PromptLock," the first AI that can write coding on the spot and decide which files to steal or lock up. Sounds like a sci-fi villain, right? Cybersecurity expert Huzefa Motiwala from Palo Alto Networks said, "Generative AI has lowered the barrier of entry for cybercriminals. We've seen how easily attackers can use mainstream AI services to generate convincing phishing emails, write malicious code, or obfuscate malware." In a test, Palo Alto’s Unit 42 team showed a full ransomware attack can be done in only 25 minutes using AI in every step — that’s 100 times faster than usual! Motiwala explained, "A single tricky prompt can hijack a model’s goal, bypass guardrails, or reveal secret data it should never share." He added, "Attacks don’t just come from user commands but from poisoned data, hidden instructions in documents or images—a sneaky double game." Their tests showed prompt attacks worked 88% of the time on big commercial AI models! PwC India’s Sundareshwar Krishnamurthy warned, "AI has become a cybercrime enabler, and the Claude Code incident marks a turning point. Cybercriminals misuse off-the-shelf AI chatbots without safety nets, sold on dark web." Even Gujarat police warned about such dangerous AI kits on encrypted messaging apps. Tarun Wig, CEO of Innefu Labs, said these tools automate scam emails, polymorphic malware writing, and large-scale social-engineering. "Hackers can create deepfake videos, customise ransomware, and craft exploits for chosen victims," he added. Things get worse with AI agents that remember, think, and act on their own. Vrajesh Bhavsar, CEO of Operant AI, shared how open-source AI servers let hackers poison tools or contexts, stealing API keys or data. "Even zero-click attacks are rising, where malicious prompts hide inside shared files," he warned. Experts say AI leaders like OpenAI, Anthropic, Meta, and Google must act fast. "They need stronger safety checks, ongoing scans, and tough red teaming—simulated attacks like pharma safety trials before big AI release," Wig said. The AI race is thrilling, but these new risks mean cybersecurity must level up pronto!
Tags: Ai hacking, Ransomware, Prompt injection, Cybercrime, Ai security, Anthropic claude code,
Comments