We have all heard about the usual risks associated with AI—job displacement, disinformation, and deepfakes. They are real, yes, and very potent dangers. However, I believe there is a deeper, more structural issue: AI’s potential to create a serious disconnect among humans, jeopardizing our authenticity in the long run. But before I explain how that might unfold, let me share a little story that led me to this realization:
I was recently reading David Lipsky’s memoir of his road trip with the late author David Foster Wallace. At one moment, DFW brings up that it would have been much easier if the whole 'interview' with Lipsky had been conducted via email. He said some more things as well, and as I cannot paraphrase his words in any better way, I shall paste the entire quote here:
“And that as the Internet grows, and as our ability to be linked up, like—I mean, you and I coulda done this through e-mail, and I never woulda had to meet you, and that woulda been easier for me. Right? Like, at a certain point we’re gonna have to build up some machinery, inside our guts, to help us deal with this. Because the technology is just gonna get better and better and better and better. And it’s gonna get easier and easier, and more and more convenient, and more and more pleasurable, to be alone with images on a screen, given to us by people who do not love us but want our money. Which is all right. In low doses, right? but if that’s the basic main staple of your diet, you’re gonna die. In a meaningful way, you’re going to die.”
This was said in the 90s. Technology was nowhere close to what we experience now. And his words ring truer now than ever.
It set me thinking about the effects of AI—generative AI, in particular—and what it could do in the near future.
But before I say more, let me clarify: I am not a Luddite, nor am I against progress. The fact that I have to give this disclaimer is already kind of unsettling to me, given how obsessed Western culture is with innovation, and how easy it is to label someone a Luddite.
Technology has undeniably brought immense benefits. You would not be reading this if it were not for the Internet. However, it is vital to examine both sides of the coin. One of the most concerning aspects of modern AI is the way it enables and exacerbates a profound disconnect among human beings.
And this disconnect is not just the superficial “screens are distracting us”, or the “people don’t live in the moment anymore” narrative, but something deeper and more serious.
You see, AI increasingly acts in our place—on our behalf—creating barriers to direct human interaction. For instance, generative AI tools can simulate conversations, compose emails, and even manage relationships to some extent. These capabilities can save time, yes, but they also reduce the necessity of genuine human-to-human engagement. Over time, this delegation to AI creates a layer of insulation. This, in turn, reduces the need to develop interpersonal and emotional skills, as well as critical thinking.
This phenomenon is also mirrored in decision-making processes these days. Whether it is CEOs relying on AI-driven analytics or military generals delegating strategic decisions to algorithms, the human element in critical decision-making is sidelined.
It also helps them shake off some moral responsibility for their actions, as the decisions are not directly shaped by their own hands.
This reminds me of Slavoj Žižek’s concept of “interpassivity,” where a technology acts on our behalf, replacing active participation. Unlike its more natural opposite, “interactivity”, which connects and engages, interpassivity leads to … well, passivity and detachment. It’s no wonder this forms part of Žižek’s broader critique of capitalism.
AI also contributes to the creation of “bubbles,” isolating us within hyper-customized digital environments. Algorithms feed us content tailored to our preferences, reinforcing our biases and limiting our exposure to anything unfamiliar. The concept itself is nothing new, though. We have already seen it unfolding in this decade, but it’s only going to get faster and more efficient, given the exponential advancements.
This echo chamber effect has been widely linked to increased polarization in politics, religion, and cultural discourse. As the algorithms cater to our individual preferences, they make it harder to understand or empathize with those who hold different views.
Now, generative AI will take this a step further—at least if its makers’ claims about its abilities hold up. You can imagine a world where entertainment is endlessly personalized—where each user receives a unique version of the same movie, book, or game tailored to their tastes. This deepens the trap of self-reinforcement, where we are perpetually served content that mirrors our existing tastes and worldview.
Escaping these bubbles becomes increasingly difficult, as they sever us from shared experiences and collective understanding. Traditional “mass entertainment” now seems like a better alternative to this, although it had plenty of problems of its own.
I believe this problem of human disconnect is bigger than the threats of AI usually discussed in the media. Human civilization has survived the harshest of conditions, and it will survive most threats, but that survival itself depends on the minority who have cultivated strong character. And you need genuine connection to grow as a person.
So this is what I want to bring your attention to: please cultivate more genuine connections and sharpen your critical thinking, because it’s only going to get harder to be authentic as time goes by. I hope we see a future where we are not trapped in a cycle of self-reference, shaped by the narrow confines of algorithmic recommendations and hyper-individualized consumption.