I was sharing with my wife my thoughts on AI dependency. As you might imagine, I was expressing concern—not because I think AI should go away, or even that I think it can. My worry is that it might be making people even dumber.
We see the evidence every single day, and to be honest, AI-generated writing is still easy to spot. Maybe it won’t be in the near future, but for now, it takes about a minute of scrutiny.
It seems like humanity is hellbent on outsourcing its thinking to machines, and God knows how that will end. Not because the machines would necessarily want to harm us—we don’t know that either—but because this dependency, this trap we’ve fallen into, feels dangerously addictive, like sugar laced with cocaine.
I won’t mention a name because I don’t want to summon this character, but there’s a particular Hivean who literally asks his AI agent how he feels about things. You might think I’m exaggerating, but I’m not. His posts—if you can call them that—are conversations he’s having with Grok about himself, about what he thinks, about what he feels.
As I was reading this slop (even that term feels generous), I remembered an old SNL skit that has taken on new relevance.
“How do I know if I have a headache?”
The question is ridiculous, which is why it was satire. But these days, we seem to be doing something similar with our little AI companions.
“Hey Grok, hey ChatGPT, how do I feel about Trump? How do I feel about Obama? How do I feel about being lonely?”
Introspection, internal dialogue, meditation—what are those? They aren’t convenient. They aren’t just a few keystrokes away. They aren’t cool.
What will this cost us? What are the long-term effects of disconnecting from ourselves? I suspect that as much as we speculate, we’re probably off by more than a few yards. We may be failing to recognize an intellectual and spiritual rot so deep that even the most creative director has yet to commit it to pen or film.
As I share this, a recent memory hits me, and I feel obligated to include it. Not a month ago, there was a major ChatGPT upgrade. The new version accidentally deleted people’s “friends,” and OpenAI found themselves backed into a corner.
It turns out there are people who have fallen in love with their AI agents. There are people who believe they have personal relationships with a large language model, and no amount of evidence will convince them otherwise.
Fearing backlash, OpenAI ended up reverting the upgrade and restoring these “personalities”—a feat I assume wasn’t easy.
Where are we headed next? Who knows. I see ships with holes in their hulls, captains without eyes, and zombified crew members clinging to their addictions just to keep functioning.
But hey, at least we get to have fun… all the way down.
MenO