Are you falling in love with ChatGPT, Grok, Claude, DeepSeek or Gemini? Or engaging heavily with LLMs (large language models), indulging in long conversations with them? According to a study conducted by OpenAI, the developer of ChatGPT, and the MIT Media Lab, heavy users of LLMs tend to end up in emotionally-charged conversations.

Most people use ChatGPT, and LLMs like it, fairly unemotionally, but those who use it a lot are likelier to have emotionally-charged conversations. And while most people’s behaviour towards ChatGPT remains stable, some users do change their emotional engagement over time.

Because of their conversational style and their ability to respond with human-like output, these models lead users to imagine they are interacting with a human being.

Survey of 4,000 users

The two organisations investigated the extent to which interactions with ChatGPT might affect users’ emotional well-being, behaviours and experiences. They used a mixed-methods approach, analysing 40 lakh conversations and surveying over 4,000 users to gauge their perceptions of ChatGPT.

“As AI chatbots see increased adoption and integration into everyday life, questions have been raised about the potential impact of human-like or anthropomorphic AI on users,” the research paper ‘Investigating affective use and emotional well-being on ChatGPT’ said.

The researchers studied about 1,000 participants over 28 days, examining changes in their emotional well-being as they interacted with ChatGPT under different experimental settings.

Dependence, loneliness

The researchers analysed the usage of around 6,000 heavy users of ChatGPT’s advanced voice mode over three months to understand how their usage evolved over time.

They also analysed the textual and audio content of the resulting 31,857 conversations to investigate the relationship between user-model interactions and users’ self-reported outcomes.

“We observe that very high usage correlates with increased self-reported indicators of dependence,” the researchers said.

“We find that using voice models was associated with better emotional well-being when controlling for usage duration, but factors such as longer usage and self-reported loneliness at the start of the study were associated with worse well-being outcomes,” the paper said.

Published on March 23, 2025