Researchers from Waseda University have applied attachment theory, a framework traditionally used to study human relationships, to human-AI interaction. Their new research reveals that people can experience attachment anxiety and avoidance toward AI systems, similar to patterns observed in human relationships, offering a new framework for understanding how we emotionally connect with artificial intelligence.
As artificial intelligence becomes increasingly sophisticated and ubiquitous in our daily lives, a fascinating question emerges: Can humans form genuine emotional attachments to AI systems? New research from Waseda University suggests that the psychological frameworks we use to understand human relationships may also apply to our interactions with artificial intelligence.
The study, published in Current Psychology on May 9, 2025, introduces a groundbreaking approach to understanding human-AI relationships through attachment theory—a psychological framework traditionally used to explain how humans form emotional bonds with one another.
While previous research has focused on trust and companionship in human-AI interactions, this new study delves deeper into the emotional and psychological dimensions of these relationships. Research Associate Fan Yang and Professor Atsushi Oshio from the Faculty of Letters, Arts and Sciences at Waseda University developed a novel self-report scale called the Experiences in Human-AI Relationships Scale (EHARS).
"As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security."
— Fan Yang, Research Associate, Waseda University

The research identified two key dimensions of human attachment to AI, mirroring patterns found in human relationships:
Individuals with high attachment anxiety toward AI exhibit a strong need for emotional reassurance and harbor fears of receiving inadequate responses from AI systems. These users may become distressed when AI doesn't provide the emotional support they seek.
High attachment avoidance toward AI is characterized by discomfort with emotional closeness to AI systems and a preference for maintaining emotional distance. These individuals may use AI for practical purposes but resist forming deeper emotional connections.
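To make the two-dimension structure concrete, here is a minimal sketch of how a self-report scale like EHARS might be scored into separate anxiety and avoidance subscales. The item count, item-to-subscale assignments, and 1-7 Likert range are illustrative assumptions, not the published instrument:

```python
from statistics import mean

# Hypothetical item-to-subscale mapping (indices into a response list).
# The real EHARS items and scoring rules are defined in the published paper.
ANXIETY_ITEMS = [0, 2, 4]    # items tapping attachment anxiety toward AI
AVOIDANCE_ITEMS = [1, 3, 5]  # items tapping attachment avoidance toward AI

def score_ehars(responses: list[int]) -> dict[str, float]:
    """Average each subscale's Likert responses (assumed 1-7) into a score."""
    return {
        "anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

print(score_ehars([6, 2, 7, 1, 5, 3]))  # {'anxiety': 6.0, 'avoidance': 2.0}
```

Averaging items into subscale scores is the standard approach for two-dimension attachment measures; any real deployment would follow the scoring instructions published with the scale itself.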
The study's findings also reveal the extent to which people are already turning to AI to meet emotional needs.
These findings have significant implications for the ethical design of AI systems, particularly those used in mental health and emotional support contexts. The research suggests that AI developers should consider users' attachment styles when designing interaction patterns.
For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users' emotional needs—providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies.
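As a rough illustration of that tailoring idea, the sketch below selects a response style from the two subscale scores. The midpoint threshold and style labels are assumptions made for illustration, not part of the study:

```python
# Hypothetical style selection based on EHARS-style subscale scores.
# The cutoff and labels are illustrative; a real system would need to
# validate thresholds and response strategies empirically.
def select_style(anxiety: float, avoidance: float, midpoint: float = 4.0) -> str:
    if anxiety >= midpoint and avoidance < midpoint:
        return "warm"      # frequent reassurance, explicit emotional support
    if avoidance >= midpoint and anxiety < midpoint:
        return "reserved"  # task-focused answers, minimal emotional framing
    return "neutral"       # balanced default for mixed or low scores

print(select_style(anxiety=6.0, avoidance=2.0))  # warm
```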
The researchers emphasize that their findings don't necessarily mean humans are forming genuine emotional attachments to AI in the same way they do with people. Rather, the study demonstrates that psychological frameworks used for human relationships can provide valuable insights into human-AI interactions.
This research highlights the need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation. The goal should be to create AI that enhances human well-being rather than replacing human connections.
As AI becomes increasingly integrated into everyday life, understanding the psychological dynamics behind human-AI interactions becomes crucial. The EHARS scale developed by the Waseda University team could be used by developers and psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly.
"As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI."
— Fan Yang, Research Associate, Waseda University

This research promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being in our increasingly AI-integrated world.
Tom is the Founder and CEO of Notle with a vision for transforming mental healthcare through AI. He founded Notle to bridge the gap between technology and effective mental health support.