The AI FOMO is Real, and It’s Causing Security Headaches

According to Inc, executives are rushing AI deployments due to Fear Of Missing Out (FOMO), often ignoring fundamental operational risks. This pressure is intense when competitors go “AI-First,” leading to urgent calls to “AI the Everything.” The article highlights the 2025 security breach of Drift, Salesloft’s AI chatbot, which affected over 700 customers due to basic security failures like absent multi-factor authentication and hardcoded credentials, not the AI itself. It also cites the 2024 Air Canada case, where a non-deterministic chatbot gave wrong policy info, leading to a legal loss. Furthermore, 68% of developers now use AI tools daily or weekly, raising concerns about “Almost Right Output” slipping through.

The FOMO Train is Missing Its Brakes

Look, the pressure to implement AI is absolutely massive right now. Boards and investors see competitors doing it, and the CEO starts sweating. It’s a classic self-fulfilling prophecy. But here’s the thing: rushing to board that “speeding train,” as the article puts it, means you’re probably not checking the tracks or the engine. The Drift breach is a perfect, painful example. The criminals didn’t hack some fancy AI model; they stole old-fashioned credentials and asked the chatbot nicely for the data. It’s like installing a high-tech smart lock on a door made of cardboard. The fancy feature isn’t the weak point—the decades-old security basics you ignored are.
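
To see how low-tech the failure really was, picture the hardcoded-credential anti-pattern in miniature. Here’s a hypothetical Python sketch (the call_crm_api helper and CRM_API_TOKEN variable are invented for illustration, not Drift’s or Salesloft’s actual code); the contrast is simply a secret baked into the source versus one injected at deploy time.

```python
import os

def call_crm_api(token: str, customer_id: str) -> dict:
    """Stand-in for a real CRM/chat-data call; it just echoes what it would send."""
    return {"customer": customer_id, "token_prefix": token[:6] + "..."}

# Anti-pattern: the secret lives in the source tree. Anyone with repo access,
# a leaked build artifact, or an old backup now holds the keys.
HARDCODED_TOKEN = "sk-live-do-not-do-this"

def fetch_records_bad(customer_id: str) -> dict:
    return call_crm_api(HARDCODED_TOKEN, customer_id)

# Baseline hygiene: the secret is injected at deploy time (vault, secrets
# manager, CI environment) and the code refuses to run without it.
def fetch_records(customer_id: str) -> dict:
    token = os.environ.get("CRM_API_TOKEN")
    if token is None:
        raise RuntimeError("CRM_API_TOKEN not configured; refusing to run")
    return call_crm_api(token, customer_id)
```

Rotating a token held in a vault is a config change; rotating one that’s been copy-pasted into a dozen repos is an incident response.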

When Words Lose Their Meaning

This is where it gets really sneaky, and frankly, a bit infuriating. AI vendors are pulling a linguistic bait-and-switch. Your security team hears “red teaming” and thinks of organized penetration testing. The vendor often means they asked the bot not to say the N-word. Both are important, but they protect against completely different universes of risk. One secures your company’s crown jewels; the other protects the vendor’s reputation. When “vulnerability management” shifts from patching code flaws to preventing biased outputs, you have a serious communication breakdown. Executives are signing off on tools thinking they’re getting a secure product, when the testing only covered whether the chatbot is polite. That’s a dangerous gap.

The Problem of “Maybe”

The core technical issue is non-determinism: ask the same question twice and you aren’t guaranteed the same answer. For a sales-booking bot? Maybe fine. For customer policy, legal terms, or (god help us) generating code? That’s a massive problem. The Air Canada case is the blueprint: a chatbot hallucinated a policy, a customer relied on it, and a tribunal said, “Tough luck, you own your bot’s output.” The cost was small, but the precedent is huge. Now mix in that 68% of devs use AI tools regularly. The “Immediately Verify Output” (IVO) practice is crucial, but as engineer Chris Swan notes, these tools produce “Almost Right Output” that looks “good enough” and gets waved through anyway. How many subtle bugs or security flaws are we baking in because the AI-generated code *seemed* right?
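
What “Immediately Verify Output” looks like in practice is easy to sketch. Here’s a minimal, hypothetical Python example (the policy values and function names are invented for illustration, not Air Canada’s actual system): the bot’s answer is treated as untrusted until it’s checked against the policy of record, and anything that diverges goes to a human instead of the customer.

```python
# Source of truth maintained by the business, not by the bot.
CANONICAL_POLICY = {
    "retroactive_bereavement_refund": False,  # illustrative value
}

def bot_answer(question: str) -> dict:
    """Stand-in for a non-deterministic chatbot: the same question could
    yield a different, and possibly wrong, claim on every call."""
    return {"retroactive_bereavement_refund": True}  # the hallucinated version

def checked_reply(question: str) -> str:
    answer = bot_answer(question)
    for key, claimed in answer.items():
        if CANONICAL_POLICY.get(key) != claimed:
            # The bot's claim diverges from the policy of record:
            # never send it to the customer, route it to a person instead.
            return "escalated to a human agent"
    return f"verified answer: {answer}"

print(checked_reply("Can I claim a bereavement fare refund after my trip?"))
```

The same gate applies to AI-generated code: acceptance should hinge on tests and review actually passing, not on the diff looking plausible.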

What Actually Needs to Happen

So what’s the fix? The advice in the article is spot-on, but it’s hard work. Executives need to stop being enchanted by the term “AI,” as advisor Wendy Nather warns. They must demand plain-language definitions from vendors and insist on business-level risk analysis. Ask the former Navy SEAL’s question: “What is the cost of wrong?” Assume the AI tool is already compromised from day one and model the damage. And there must always be a human in the loop to validate outputs before they hit customers or production code. This isn’t about being an AI skeptic. It’s about applying the same rigor we’d use for any critical software. Because as we’re seeing with AI in courts and elsewhere, the real-world consequences are no longer theoretical. The FOMO is real, but the fallout from getting it wrong is even more real.
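
If “human in the loop” sounds vague, it can be as concrete as a gate that refuses to publish anything a person hasn’t signed off on. A minimal sketch, assuming a hypothetical review queue (none of these names correspond to a real product):

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

review_queue: list[Draft] = []

def submit_ai_draft(content: str) -> Draft:
    """AI output never goes straight to customers or production; it lands
    in a queue where a person validates it first."""
    draft = Draft(content)
    review_queue.append(draft)
    return draft

def publish(draft: Draft) -> str:
    if not draft.approved:
        raise PermissionError("unreviewed AI output cannot be published")
    return f"published: {draft.content}"

draft = submit_ai_draft("Refunds are available up to 90 days after travel.")
draft.approved = True  # the reviewer's sign-off: the step FOMO tempts teams to skip
print(publish(draft))
```

It’s unglamorous, but it’s exactly the rigor we already apply to any other change that can reach a customer.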
