Why “Ethical AI” Is a Dangerous Mirage

Why "Ethical AI" Is a Dangerous Mirage - Professional coverage

According to Popular Science, in a piece adapted from De Kai’s book “Raising AI,” the most critical existential danger from artificial intelligence is the convergence of two AI-powered trends: hyperpolarization and hyperweaponization. The article warns that AI is accelerating societal divides while simultaneously democratizing weapons of mass destruction, noting that lethal drones can now be built with over-the-counter parts and that AI in computational biology has made engineered bioweapons a “living room technology.” The core argument is that our survival depends on nurturing ethical and responsible AI, but that constructing a rule-based “moral operating system” for machines is a futile pipe dream. The article underscores this by citing the global “Moral Machine” experiment from MIT’s Iyad Rahwan, which has collected over 100 million decisions and shows how ethical choices vary across cultures, and by pointing to the internal strife at OpenAI in November 2023 as a microcosm of humanity’s inability to align on AI’s “right” goals.


The real problem isn’t robots, it’s us

Here’s the thing that really sticks with me. We’re so obsessed with the sci-fi nightmare of a superintelligent AI turning on us. But the article flips that script. The more immediate, messy, human danger is AI turning *us* against each other and then handing us the tools to act on that rage. Hyperpolarization meets hyperweaponization. That’s the killer combo.

Think about it. Social media algorithms already optimize for engagement, which often means outrage. Now amplify that a thousandfold with more advanced AI. At the same time, the barrier to creating catastrophic tools is collapsing. It’s not a nation-state building a bomb in a secret lab anymore. It’s a deeply antagonistic or despairing individual with a 3D printer and some open-source code. That’s a fundamentally new kind of threat landscape. So the idea that we’ll solve this by just writing a good set of rules for the AI is, as the author says, a pipe dream. It’s like trying to stop a riot by handing out a pamphlet on manners.

Why Asimov’s Laws fall apart

The piece brilliantly dismantles the fantasy of a simple ethical rulebook by diving into the classic trolley problem. And it’s so true. We love Asimov’s Laws as a narrative device, but his own stories were *about* their contradictions! Trying to hardwire that into a learning system is impossible because ethics in the real world are about trade-offs, not absolutes.

Do you minimize total harm? Do you never take an active action that causes injury? Is inaction a choice? What if the one person is a child and the five are criminals? Humans can’t agree. So how on earth do we code it? The Moral Machine experiment proves the point—culture shapes these decisions. An AI trained on data from one society might make a choice deemed monstrous in another. Which culture’s ethics do we hardwire? Who decides?

The hardest choices aren’t physical

This is the most insightful shift in the argument. We get hung up on AIs driving cars or controlling robots. But the truly pervasive, polarizing power of AI is in *communication*. It’s in the trillions of tiny, nonphysical actions: what your feed shows you, what a search engine prioritizes, what a chatbot says or omits.

How does Asimov’s First Law apply to a recommendation algorithm? If it *doesn’t* show you a vital news story, is it “allowing you to come to harm” through inaction? If it *does* show you extremist content that sends you down a rabbit hole, is it causing harm? This is the murky, hyper-scale territory where AI is actually shaping society right now. And you can’t write a rule for every possible communication scenario. The system has to *learn* judgment, for better or worse.

Nurture, not program

So what’s the answer? The article lands on a metaphor I find compelling, if daunting: parenting. We can’t hardwire ethics into our kids. We nurture them in a culture, hoping they learn values like empathy, safety, and responsibility. The argument is that AI, as a learning system, is the same. It will learn the culture it’s immersed in.

That’s a terrifying responsibility. Are we building a culture of fear and adversarial competition in tech, or one of security and collective good? As De Kai noted in The New York Times, even OpenAI’s board couldn’t align on the “right” goals. If a handful of elites in one company can’t figure it out, what hope is there for a global culture? Basically, we’re trying to raise a new form of intelligence in the middle of our own dysfunctional family argument. And the kid is learning how to build weapons from watching us fight. Not a great plan.
