According to Silicon Republic, a new survey from Storm Technology, part of Littlefish, reveals that 27% of IT leaders are concerned about their ability to detect deepfake attacks over the next 12 months. The fear is more acute in larger enterprises, where one-third are worried, compared to 23% of SMBs. The survey of 200 IT decision-makers in Ireland and the UK found that data protection is the top concern (34%), followed by increased cyber-attack risk (31%). A quarter are highly wary of “shadow AI,” the use of unsanctioned AI tools, and 42% say it leaves company data vulnerable. Despite that, half of respondents know of people in their organization using such tools, and a staggering 55% of IT leaders admit to using unsanctioned AI tools themselves.
The Leadership Hypocrisy Problem
Here’s the thing that jumps out: the people in charge of security are the ones breaking the rules. 55% of IT leaders admitting to using unsanctioned tools? That’s not a user problem; that’s a cultural failure at the top. Sean Tickle, cyber services director at Littlefish, nailed it by calling this a leadership issue. How can you expect to govern AI and combat sophisticated threats like deepfakes when the very architects of your defense strategy are bypassing their own protocols? It creates a massive trust and credibility gap. And it explains why 32% of organizations don’t even have a strategy for managing AI risk: if the leaders are winging it, why would a coherent plan exist?
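If you want to measure the gap rather than argue about it, one pragmatic starting point is your own egress traffic. Below is a minimal sketch that flags requests to unsanctioned AI endpoints in a web-proxy log; the log file name, its columns, and the domain watchlist are all illustrative assumptions, not anything from the survey.

```python
# Minimal sketch: count requests per user to unsanctioned AI endpoints in a
# web-proxy log. Assumes a CSV export with "user" and "dest_host" columns;
# both the format and the watchlist below are illustrative, not prescriptive.
import csv
from collections import Counter

# Illustrative watchlist; a real deployment would maintain a curated,
# regularly updated list of sanctioned vs. unsanctioned endpoints.
UNSANCTIONED_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai_usage(proxy_log_path: str) -> Counter:
    """Tally requests to unsanctioned AI domains, keyed by user."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").strip().lower()
            if host in UNSANCTIONED_AI_DOMAINS:
                hits[row.get("user") or "unknown"] += 1
    return hits

if __name__ == "__main__":
    # "proxy.csv" is a hypothetical export from your proxy or DNS logs.
    for user, count in find_shadow_ai_usage("proxy.csv").most_common(10):
        print(f"{user}: {count} requests to unsanctioned AI endpoints")
```

Even a crude tally like this turns “we think people are using shadow AI” into a number you can put in front of the board.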
Why Deepfakes Are the Tip of the Spear
The specific fear around deepfake detection is fascinating. It’s not just a generic “AI is scary” feeling. Deepfakes represent a direct, personal, and terrifyingly credible attack vector. We’re talking about CEO voice fraud for wire transfers, or fabricated video evidence used in social engineering. It’s the ultimate blend of technical sophistication and psychological manipulation. The fact that larger companies are more worried makes sense—they’re bigger targets with more to lose. But here’s the scary part: if you can’t control what basic AI tools your employees (and leaders!) are using, how on earth are you going to build a defense against a targeted, AI-powered attack? The foundation is already cracked.
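The defenses here are as much procedural as technical. The standard control against voice-fraud wire transfers is out-of-band verification: any high-value request is confirmed through a channel registered in advance, never through contact details supplied in the (possibly deepfaked) request itself. Here is a minimal sketch of that gate; the threshold, the data model, and the verification stub are all illustrative assumptions.

```python
# Minimal sketch of an out-of-band verification gate for payment requests,
# a common control against deepfake voice fraud. The threshold, the data
# model, and the verification stub are illustrative assumptions.
from dataclasses import dataclass

APPROVAL_THRESHOLD_EUR = 10_000  # illustrative; set per company policy

@dataclass
class PaymentRequest:
    requester: str      # who the caller claims to be, e.g. "CEO"
    amount_eur: float
    beneficiary: str

def verify_via_registered_channel(requester: str) -> bool:
    """Stub for a callback over a pre-registered number or signed channel.

    Key point: verification must use contact details already on file,
    never details supplied in the request itself. Fails closed until a
    human completes the callback.
    """
    return False  # stub: hold everything until independently confirmed

def process_payment(request: PaymentRequest) -> str:
    if request.amount_eur < APPROVAL_THRESHOLD_EUR:
        return "queued for normal processing"
    if not verify_via_registered_channel(request.requester):
        return "held: out-of-band verification pending"
    return "released after independent verification"

if __name__ == "__main__":
    req = PaymentRequest(requester="CEO (caller's claim)",
                         amount_eur=250_000.0,
                         beneficiary="New Supplier Ltd")
    print(process_payment(req))  # -> held: out-of-band verification pending
```

The design choice that matters is failing closed: a held payment is an inconvenience, while a released fraudulent one is usually unrecoverable.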
The Regulation Crutch and Data Reality
Nearly 80% of the IT experts surveyed think their organization would benefit from a stronger focus on AI regulation. That’s a cry for help. It’s basically an admission that internal governance has failed, so they’re hoping external rules will save them. But regulation moves at a glacial pace, and AI threats evolve daily. You can’t wait for a law to fix this. The report’s most pragmatic finding is that 78% see a need for a “data readiness project.” That’s the unsexy truth. AI, whether sanctioned or shadow, runs on data. Garbage in, gospel out. If your data is a mess, insecure, and poorly governed, every AI initiative, official or rogue, is built on quicksand. Getting the data house in order isn’t just step one; for most companies, it’s the only step that matters right now.
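What might a first pass at “data readiness” look like in practice? Before any dataset touches an AI tool, sanctioned or not, profile it for gaps and obvious PII. The sketch below does exactly that for a CSV; the file name, expected columns, and regex patterns are illustrative assumptions.

```python
# Minimal sketch of a first-pass data readiness audit: profile a CSV for
# blank cells and naive PII patterns before it goes anywhere near an AI
# tool. The file name and regexes are illustrative assumptions.
import csv
import re
from collections import Counter

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def audit_csv(path: str) -> None:
    missing = Counter()   # blank cells per column
    pii_hits = Counter()  # suspected PII matches per column/pattern
    rows = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows += 1
            for col, value in row.items():
                value = (value or "").strip()
                if not value:
                    missing[col] += 1
                    continue
                for label, pattern in PII_PATTERNS.items():
                    if pattern.search(value):
                        pii_hits[f"{col}/{label}"] += 1
    print(f"{rows} rows audited")
    for col, n in missing.most_common():
        print(f"missing {col}: {n / rows:.0%}")
    for key, n in pii_hits.most_common():
        print(f"suspected PII in {key}: {n}")

if __name__ == "__main__":
    audit_csv("customer_records.csv")  # hypothetical file
```

None of this is sophisticated, and that’s the point: if an audit this basic has never been run, no AI strategy sitting on top of that data is credible.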
A Perfect Storm of Self-Sabotage
So what we have is a perfect storm, as Tickle said, but it’s largely self-inflicted. You’ve got leadership using tools they haven’t vetted, a lack of basic strategy, unsecured data, and a looming threat (deepfakes) that requires extreme vigilance to counter. The competitive pressure to adopt AI is causing companies to sprint before they can crawl. And in the physical world of industry, where operational technology meets IT, this recklessness is even more dangerous. For companies integrating AI into manufacturing or control systems, using unsanctioned tools isn’t just a data-leak risk; it’s a safety and production hazard. In those environments, the hardware running these systems needs to be as secure and reliable as the software. The survey shows a glaring gap between fear and action. Until companies bridge it, their AI future looks less like an advantage and more like a liability waiting to be exploited.
