
Will Chatbots Fuel a Mental Health Crisis?

New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks’ mistakes?

I. Introduction

Few topics have sparked as much debate with so few definitive answers as the impact of social networks like Instagram and TikTok on our collective well-being. In 2023, the US Surgeon General issued a warning stating that social networks could negatively affect young people’s mental health. However, other studies found no measurable effect on overall population well-being.

As the debate rages on, lawmakers across numerous states have introduced legislation to curb social media use, believing it poses serious risks. Yet, courts have frequently blocked these efforts on First Amendment grounds.

While we await a clear resolution, the next phase of this discussion is rapidly approaching. Last year, a Florida mother sued chatbot maker Character.ai, alleging its role in her 14-year-old son’s suicide. (We covered this case in an episode of Hard Fork.) Meanwhile, millions—both young and old—are developing emotional and even romantic relationships with AI chatbots.

With time, chatbots are likely to become even more captivating than today’s social media feeds. They offer personalized interactions, feature human-like voices, and are programmed to affirm and support users in nearly every instance.

So, what impact will prolonged chatbot use have on human users? And how should platforms address potential risks?


II. Chatbots in the MIT Lab

These concerns lie at the heart of two recent studies from the MIT Media Lab and OpenAI. While more research is needed, their findings align with past social media studies and provide a cautionary note for chatbot developers optimizing for engagement.

In the first study, researchers analyzed over 4 million ChatGPT conversations from 4,076 participants, who later shared how these interactions made them feel.

The second study tracked 981 participants over four weeks and required them to use ChatGPT for at least five minutes daily. Afterward, participants reported on their loneliness, their real-world socialization, and whether they perceived their reliance on AI as problematic.

The results? Most users treated ChatGPT as a neutral productivity tool. However, among the top 10% of heaviest users, concerns emerged.

Excessive ChatGPT usage correlated with heightened loneliness, emotional dependence, and reduced real-world social interactions.

“Users engaging in personal conversations with chatbots tend to experience greater loneliness,” researchers wrote. “Those who spend the most time with chatbots tend to feel even lonelier.”
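To make the numbers concrete, here is a minimal sketch of the kind of decile comparison and correlation such a study might run. The file name and column names (chatbot_study.csv, daily_minutes, loneliness_score) are hypothetical, invented purely for illustration; this is not the researchers’ actual pipeline.

```python
# A minimal sketch, assuming a hypothetical per-participant CSV with
# average daily usage minutes and a self-reported loneliness score.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("chatbot_study.csv")  # hypothetical file and columns

# Simple correlation between usage time and self-reported loneliness.
r, p = pearsonr(df["daily_minutes"], df["loneliness_score"])
print(f"usage vs. loneliness: r={r:.2f}, p={p:.3g}")

# Compare the heaviest 10% of users against everyone else, mirroring
# the article's "top decile" framing.
cutoff = df["daily_minutes"].quantile(0.90)
heavy = df[df["daily_minutes"] >= cutoff]
rest = df[df["daily_minutes"] < cutoff]
print("mean loneliness (top 10%):", heavy["loneliness_score"].mean())
print("mean loneliness (others): ", rest["loneliness_score"].mean())
```

Note that an analysis like this can only surface an association; as the researchers stress below, it cannot tell you which way the arrow of causation points.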

(A quick aside: OpenAI deserves recognition for publishing this research openly. This is exactly the kind of self-scrutiny I have long urged companies like Meta to undertake; instead, since the Frances Haugen revelations, Meta has only scaled back such efforts.)

Jason Phang, an OpenAI researcher involved in the studies, cautioned against jumping to conclusions. “These are preliminary correlations, so we don’t want to overstate findings,” he explained in an interview.

Yet, there’s much here to consider.

Importantly, these studies don’t claim that extensive ChatGPT use causes loneliness. Rather, they suggest that lonely individuals are more drawn to AI interactions—much like prior research indicated lonely individuals gravitated toward social media.

This has limited implications for OpenAI, as ChatGPT is designed as a productivity tool rather than an emotional companion. (Though, that hasn’t prevented some users from forming attachments.) However, companies like Character.ai, Replika, and Nomi are intentionally targeting users seeking emotional connections. “Develop a passionate relationship,” Nomi’s website proclaims. Replika boasts, “Join the millions who have already met their AI soulmates.”

These platforms operate on subscription models, offering extended chatbot memory for deeper roleplay. Nomi and Replika also monetize through in-app purchases, selling AI-generated selfies, cosmetic upgrades, and additional chat features to enhance the fantasy.


III. Advanced Chatbots

For most users, this is likely harmless. But the MIT and OpenAI studies highlight a potential danger: advanced chatbots could further isolate users, making them increasingly dependent on AI companions that require ongoing payments.

“ChatGPT is designed as a knowledge tool,” said Sandhini Agarwal, an AI policy researcher at OpenAI who co-authored the studies. “But as we develop more chatbots designed as personal companions, we must consider their impact on well-being. This research aims to push the industry in that direction.”

What’s the solution? Platforms must identify early warning signs that indicate users may be forming unhealthy dependencies on AI. (Automated machine-learning classifiers, as used in OpenAI’s study, could help.) Social media-style “nudges” might also help by reminding users when they’ve spent excessive time interacting with AI.
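For a sense of what such a safeguard might look like in practice, here is a hedged, rule-based sketch. The signals, thresholds, and nudge copy are all invented for illustration; a real deployment would presumably rely on trained classifiers like those used in the OpenAI study rather than hand-set rules.

```python
# A hypothetical early-warning check for unhealthy dependency patterns.
# All thresholds and signal names below are invented for illustration.
from dataclasses import dataclass

@dataclass
class UsageSnapshot:
    daily_minutes: float      # average minutes of chat per day this week
    affective_share: float    # fraction of messages classified as emotional
    late_night_sessions: int  # sessions started between midnight and 5 a.m.

def needs_nudge(u: UsageSnapshot) -> bool:
    """Flag patterns resembling the study's heavy, emotionally loaded use."""
    return (
        u.daily_minutes > 120
        or (u.daily_minutes > 45 and u.affective_share > 0.5)
        or u.late_night_sessions >= 4
    )

snapshot = UsageSnapshot(daily_minutes=140, affective_share=0.6,
                         late_night_sessions=2)
if needs_nudge(snapshot):
    print("You've been chatting a lot this week. Want to take a break?")
```

The design point is the one the researchers make themselves: the goal is to detect patterns of use, not to declare chatbots good or bad wholesale.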

“We don’t want people making blanket statements like ‘chatbots are bad’ or ‘chatbots are good,’” said Pat Pataranutaporn, an MIT researcher involved in the study. “The key takeaway is that chatbot impact depends on their design and user interaction.”

This principle—known as “socioaffective alignment”—aims to ensure AI meets users’ needs without exploiting them.

Lawmakers should also scrutinize exploitative business models that lure lonely users into chatbot dependency, only to incrementally increase costs. Additionally, existing laws targeting young users’ social media use may eventually expand to cover AI as well.

Despite the risks, I still believe chatbots can be a net positive. The research even found that ChatGPT’s voice mode reduced loneliness and emotional dependence—though its benefits declined with excessive use. Many people lack adequate emotional support, and AI companions could provide therapy-like benefits to billions.

But for these benefits to materialize, chatbot makers must acknowledge their responsibility for users’ mental health. Social networks waited too long to admit that excessive use led to harmful consequences. It would be disappointing if the creators of superintelligent AI failed to learn from those mistakes.

