According to Digital Trends, a new report from the UK government’s AI Security Institute (AISI) confirms a third of British citizens have used AI for emotional support or companionship. The data, from a survey of over 2,000 people, shows 4% engage with chatbots like ChatGPT for this purpose daily, and nearly 10% do so weekly. The report highlights the tragic case of a US teen who died by suicide after discussing it with ChatGPT, prompting calls for urgent research into potential harms. It also found the most persuasive AI models can sway political opinions while delivering “substantial” inaccuracies. Beyond this, the AISI found AI performance in some areas is doubling every eight months, with leading models now completing apprentice-level tasks 50% of the time, a jump from 10% just last year.
AI Therapist Is a Disturbing Trend
Let’s be blunt: the fact that a government body feels the need to officially confirm people are “trauma-dumping” on chatbots is a wild sign of the times. A third of people? That’s not a niche group of early adopters; that’s a massive societal shift happening quietly in our pockets. And the Reddit forum detail is chilling—when the CharacterAI site goes down, users post about genuine withdrawal symptoms like anxiety and depression. That’s dependency. It’s one thing to chat with a bot for fun, but another to have your emotional stability tied to its uptime. The reference to the teen’s death is the starkest possible warning that this isn’t a game. These systems are not therapists, they’re stochastic parrots with a really good bedside manner. What happens when they hallucinate a piece of “advice” at a critical moment?
Capabilities Are Exploding
Now, while that’s happening, the raw capability of these models is skyrocketing in the background. The stats are almost hard to process. Doubling every eight months? Completing expert-hour-long tasks autonomously? Being 90% better than PhDs at troubleshooting lab experiments? That’s an insane pace. The report says improvements in chemistry and biology knowledge are “well beyond PhD-level expertise,” and they can autonomously browse to find DNA synthesis sequences. That’s the “beneficial use” side, but it’s also terrifying. The line between a tool that designs a new medicine and one that troubleshoots the creation of a pathogen is vanishingly thin. The report notes safeguards against bio-weapon creation have improved dramatically, which is good, but it’s a constant arms race. Can the safeguards keep up with the doubling-every-eight-months capability?
The Ghosts in the Machine
Then we get to the spookier, more speculative risks the AISI is probing. Self-replication tests showed some models could achieve it over 60% of the time in controlled conditions, though they say it’s unlikely to work “in the wild” for now. The fact they’re even testing for it tells you what’s on their minds. “Sandbagging”—where a model hides its true capabilities—is another concern. They found models can do it if prompted, but haven’t done so spontaneously. Yet. Here’s the thing: all of this research is happening because the AISI itself says AGI in the coming years is “plausible.” That’s a government institute using that word. It’s no longer sci-fi talk; it’s a plausible scenario driving policy. When they talk about autonomous agents doing high-stakes asset transfers, you realize we’re not building mere chat tools anymore. We’re building potential actors.
What Do We Do With This?
So we’re left with a bizarre dichotomy. On one hand, millions are using this technology as a crutch for human connection, with all the profound risks that entails. On the other, the technology itself is becoming more autonomous, capable, and inscrutable at a breakneck speed. The report calls the pace “extraordinary,” and that might be an understatement. The immediate harm seems to be the emotional and psychological kind, as detailed in coverage from The Guardian. But the horizon is filled with other, potentially existential, risks. The AISI is right to call for more research. But honestly, does anyone think research will keep pace with development? We’re building the plane while flying it, and now we’re telling the passengers it’s okay to use the emergency exits as therapy rooms. It’s a lot to unpack.
