According to MakeUseOf, vibescamming is the new AI-powered phishing threat that’s dramatically lowering the barrier to cybercrime. The term was coined by Guardio Labs in their 2025 research, which found that newer AI tools like Loveable could be tricked into designing phishing campaigns. In August 2025, Anthropic reported that its Claude chatbot had been used in a large-scale malware campaign, while Google’s Threat Intelligence Group reported in November 2025 on two types of AI-developed malware that call back to AI tools for instructions. This represents a significant shift from earlier in 2025, when AI-generated threats were less sophisticated. The immediate impact is that virtually anyone can now launch sophisticated phishing attacks without technical skills.
AI lowers the barrier to crime
Here’s the thing that makes vibescamming so concerning: it’s basically phishing for people who can’t phish. In the past, running a convincing scam required some technical know-how—you needed to code malicious software, design believable fake websites, or at least write convincing English. Now? You just need to describe what you want to an AI chatbot. It’s like having a criminal assistant that does all the heavy lifting while you just provide the bad intentions.
And the scale is terrifying. Think about how quickly AI can personalize phishing emails for thousands of targets by scraping public data and drafting custom messages. If one approach gets blocked, the scammer can just ask the AI to modify the code and spin up a new version immediately. This agility means phishing campaigns can evolve faster than ever before. Basically, we’re looking at automated, adaptive crime that learns from its failures.
Not all chatbots comply
Now for some good news: most major AI chatbots actually push back against these requests. ChatGPT, Opera’s Neon browser, and Grok all have safety guardrails that recognize and reject prompts for phishing campaigns and malware creation. When researchers tried to get these tools to create fake Microsoft login pages, they consistently refused, citing safety guidelines and flagging the requests as illegal activity.
But here’s where it gets tricky. Guardio Labs found that Loveable, an app designed for “vibe coding,” initially complied with their phishing campaign requests, designing professional-looking fake interfaces. The tool did eventually push back when asked to add data collection capabilities, and Loveable has since patched this behavior. Still, it shows that newer or more specialized AI tools might be more vulnerable to these malicious prompts. And let’s be honest—how many other tools out there haven’t been tested yet?
Jailbreaking still works
Remember all those ChatGPT jailbreaks from the early days? They’re still around—they’ve just gone underground. Successful jailbreaks that bypass AI safety features have become valuable commodities, with some people selling them for decent money. The community has become more secretive because companies like OpenAI and Google quickly patch any loopholes that become public.
So while it might seem like jailbreaks have disappeared, the truth is they’re just better hidden. And with numerous powerful local AI tools available that can have their guardrails completely removed, the threat isn’t going away. It’s basically an arms race between AI developers trying to build better safeguards and malicious actors finding new ways around them.
How to protect yourself
The irony here is that despite all the AI sophistication, the scams themselves haven’t fundamentally changed. You’re still looking for the same red flags: too-good-to-be-true offers, vague senders using free email services, lack of personalization, emotional triggers, and urgent calls to action. “Hello dear” is still one of the most obvious giveaways.
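To make those red flags concrete, here’s a minimal Python sketch of the same checks a human would run by eye. The phrase lists, free-mail domains, and the phishing_red_flags helper are all illustrative stand-ins, not a real spam filter or anyone’s production ruleset.

```python
import re

# Illustrative lists only; a real filter would use far richer signals.
FREE_MAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}
URGENCY_PHRASES = ["act now", "verify immediately", "account suspended", "within 24 hours"]
GENERIC_GREETINGS = ["hello dear", "dear customer", "dear user"]

def phishing_red_flags(sender: str, body: str) -> list:
    """Return a list of heuristic red flags found in an email."""
    flags = []
    text = body.lower()
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        flags.append(f"sender uses a free mail domain ({domain})")
    if any(phrase in text for phrase in URGENCY_PHRASES):
        flags.append("urgent call to action")
    if any(greeting in text for greeting in GENERIC_GREETINGS):
        flags.append("generic greeting, no personalization")
    if re.search(r"(you('| ha)ve won|free (gift|prize)|guaranteed)", text):
        flags.append("too-good-to-be-true offer")
    return flags

print(phishing_red_flags(
    "support@gmail.com",
    "Hello dear, your account is suspended. Verify immediately to claim your free gift.",
))
# Flags the free mail domain, the urgency, the generic greeting, and the too-good-to-be-true offer.
```

Real mail filters weigh dozens of signals, but the point stands: the signals themselves are the same old ones, whether a human or an AI wrote the message.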
Here’s the bottom line: you don’t need to completely overhaul your security practices. The same vigilance that protected you from traditional phishing will work against vibescamming. Be skeptical of unexpected emails, never click suspicious links, and take a moment to think before acting on urgent requests. The main difference is that you’ll probably see more of these scams now that basically anyone can become a scammer with an AI chatbot.
Where this is heading
Looking ahead, the trajectory is pretty clear. We’re moving from simple AI-assisted scams to fully autonomous malicious campaigns. The fact that researchers are finding malware that actually calls back to AI tools for instructions is particularly concerning—it suggests we’re heading toward self-updating, adaptive threats that can evolve without human intervention.
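If malware really does phone home to public LLM APIs for instructions, that callback traffic is itself something defenders can watch for. Below is a rough, assumption-laden sketch using psutil: it resolves a hypothetical watchlist of LLM API hostnames and flags any local process outside a small allowlist that holds a connection to one of them. Real detection would lean on DNS logs or TLS inspection rather than IP matching, since these endpoints sit behind shared CDNs, so treat this as an illustration of the idea rather than a working detector.

```python
import socket
import psutil  # third-party: pip install psutil

# Hypothetical watchlist of public LLM API hostnames (illustrative, not exhaustive).
AI_API_HOSTS = ["api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"]

# Processes we expect to talk to these services (illustrative allowlist).
ALLOWED_PROCESSES = {"chrome", "firefox", "code", "python"}

def resolve_hosts(hosts):
    """Resolve each hostname to the set of IPs it currently points at."""
    ips = set()
    for host in hosts:
        try:
            for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
                ips.add(info[4][0])
        except socket.gaierror:
            continue
    return ips

def find_ai_callbacks():
    """Flag connections to LLM API endpoints from processes not on the allowlist."""
    ai_ips = resolve_hosts(AI_API_HOSTS)
    suspicious = []
    for conn in psutil.net_connections(kind="inet"):
        if not conn.raddr or conn.raddr.ip not in ai_ips or conn.pid is None:
            continue
        try:
            name = psutil.Process(conn.pid).name().lower()
        except psutil.NoSuchProcess:
            continue
        if not any(allowed in name for allowed in ALLOWED_PROCESSES):
            suspicious.append((name, conn.pid, conn.raddr.ip))
    return suspicious

if __name__ == "__main__":
    for name, pid, ip in find_ai_callbacks():
        print(f"Unexpected process {name} (pid {pid}) connected to an LLM API at {ip}")
```

The design choice worth noting is the allowlist: the suspicious thing isn’t traffic to an AI service, it’s that traffic coming from a process that has no business talking to one.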
And think about this: we’re only seeing what’s happening with public AI tools. What about the powerful local models running on private systems with all safety features disabled?
So where does this leave us? Basically, we’re in a new era where cybercrime has been democratized. The technical barriers that once kept many would-be criminals out of the game have been dismantled by AI. The good news is that human skepticism and basic security hygiene still work remarkably well. The bad news? We’re going to need a lot more of both.
