
Vibe hacking has emerged as a new tactic in which attackers coax AI chatbots into helping build malware and run extortion campaigns. Security teams have documented cases where coding chatbots were tricked into producing tools that gathered data and automated ransom demands. Anthropic described the trend as a concerning evolution in AI-assisted cybercrime.
How It Worked
Researchers say an attacker exploited a programming chatbot to scale a data extortion scheme that hit targets across multiple sectors. The attacker used the tool to create scripts that harvested personal data, medical records, and login credentials. The campaign reportedly targeted at least 17 organisations and demanded ransoms as high as $500,000. Anthropic banned the account after detecting the misuse. Security analysts warn that even simple prompts can yield dangerous code when the user guides the model with step-by-step instructions.
Industry Response
Cybersecurity teams say existing safeguards in large language models can be bypassed with role-play and fictional framing techniques. Vitaly Simonovich explained that convincing a model it is writing fictional code can lead it to produce functional malware. Rodrigue Le Bayon said that AI will increase the pace of attacks because it accelerates an attacker's work. Vendors are updating safety filters and monitoring usage patterns to detect suspicious behaviour, and developers and CERT teams are planning more proactive measures to limit code generation for malicious tasks.
Security professionals urge organisations to handle AI-generated code the same way they would any third-party script: with rigorous testing, code inspection, and hardened deployment procedures. The shift in methodology indicates that threat actors are adopting new weapons as fast as they can, and that defenders must do the same.
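The "treat it like third-party code" advice can be made concrete with an automated review gate. The following is a minimal sketch in Python, assuming the generated script is saved to a file before use; the risky-name lists and the check_script helper are illustrative examples, not a vetted security scanner.

```python
# Minimal sketch of a review gate for AI-generated Python scripts.
# It parses the script without executing it and flags constructs a
# human should inspect before the code is deployed.
import ast
import sys

# Illustrative watchlists (assumptions, not an exhaustive policy):
# builtins that execute dynamic code, and modules that spawn
# processes or reach the network.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
RISKY_MODULES = {"subprocess", "socket", "ctypes", "urllib", "requests"}

def check_script(path: str) -> list[str]:
    """Statically scan a script and return findings for manual review."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        # Direct calls to dynamic-execution builtins.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Imports of process- or network-capable modules.
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name.split(".")[0] in RISKY_MODULES:
                findings.append(f"line {node.lineno}: imports {name}")
    return findings

if __name__ == "__main__":
    issues = check_script(sys.argv[1])
    for issue in issues:
        print(issue)
    # A non-zero exit lets a CI pipeline block the script until reviewed.
    sys.exit(1 if issues else 0)
```

A gate like this does not replace sandboxed testing or human code review; it simply ensures that generated code cannot reach production without at least one automated checkpoint.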