
AI and Attachment Theory: A New Lens for Understanding Human-AI Relationships

June 3, 2025 | Research & Innovation

Tom Ventura

Founder and CEO of Notle


In Brief

Researchers at Waseda University have applied attachment theory to human-AI relationships. Their study shows that people can experience attachment anxiety and avoidance toward AI systems, much as they do toward other people, and offers a new framework for understanding how we emotionally connect with artificial intelligence.

A Revolutionary Framework for Human-AI Bonds

As artificial intelligence becomes increasingly sophisticated and ubiquitous in our daily lives, a fascinating question emerges: Can humans form genuine emotional attachments to AI systems? New research from Waseda University suggests that the psychological frameworks we use to understand human relationships may also apply to our interactions with artificial intelligence.

The study, published in Current Psychology on May 9, 2025, approaches human-AI relationships through attachment theory, a psychological framework traditionally used to explain how humans form emotional bonds with one another.

Beyond Trust and Companionship

While previous research has focused on trust and companionship in human-AI interactions, this new study delves deeper into the emotional and psychological dimensions of these relationships. Research Associate Fan Yang and Professor Atsushi Oshio from the Faculty of Letters, Arts and Sciences at Waseda University developed a novel self-report scale called the Experiences in Human-AI Relationships Scale (EHARS).

"As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security."

— Fan Yang, Research Associate, Waseda University

The Two Dimensions of AI Attachment

The research identified two key dimensions of human attachment to AI, mirroring patterns found in human relationships:

Attachment Anxiety toward AI

Individuals with high attachment anxiety toward AI exhibit a strong need for emotional reassurance and harbor fears of receiving inadequate responses from AI systems. These users may become distressed when AI doesn't provide the emotional support they seek.

Attachment Avoidance toward AI

High attachment avoidance toward AI is characterized by discomfort with emotional closeness to AI systems and a preference for maintaining emotional distance. These individuals may use AI for practical purposes but resist forming deeper emotional connections.
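
To make these two dimensions concrete, here is a minimal sketch of how scores on a two-dimensional self-report measure like EHARS might be computed. The item IDs, the grouping of items into subscales, and the averaging convention are illustrative assumptions (averaging Likert ratings per subscale is a common convention in attachment measures), not the published EHARS scoring procedure.

from statistics import mean

# Hypothetical illustration: the actual EHARS items and scoring rules
# are defined in Yang and Oshio's paper. This sketch assumes Likert-type
# items (rated 1-7) averaged into two subscale scores.

ANXIETY_ITEMS = ["a1", "a2", "a3"]    # invented item IDs
AVOIDANCE_ITEMS = ["v1", "v2", "v3"]  # invented item IDs

def score_subscales(responses: dict[str, int]) -> dict[str, float]:
    """Average Likert ratings into anxiety and avoidance scores."""
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# A user who craves reassurance from AI but is comfortable with closeness
# would score high on anxiety and low on avoidance.
example = {"a1": 6, "a2": 7, "a3": 6, "v1": 2, "v2": 1, "v3": 2}
print(score_subscales(example))  # anxiety ~ 6.3, avoidance ~ 1.7

Because the two scores vary independently, a user can be high on both, low on both, or high on only one, which is what allows the patterns familiar from human attachment research to appear in the AI context.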

Surprising Statistics on AI Emotional Support

The study's findings reveal the extent to which people are already turning to AI for emotional needs:

  • 75% of participants turned to AI for advice and guidance
  • 39% perceived AI as a constant, dependable presence in their lives
  • Many users sought not just information but emotional support and companionship from AI systems
  • Some individuals reported that AI provided them with a sense of security similar to human relationships

Implications for AI Design and Ethics

These findings have significant implications for the ethical design of AI systems, particularly those used in mental health and emotional support contexts. The research suggests that AI developers should consider users' attachment styles when designing interaction patterns.

For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users' emotional needs—providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies.
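
As a thought experiment, that kind of tailoring could look something like the sketch below. The threshold, the style labels, and the profile fields are all hypothetical; the study proposes the principle, not an implementation.

from dataclasses import dataclass

@dataclass
class AttachmentProfile:
    anxiety: float    # e.g., an EHARS-style anxiety score on a 1-7 scale
    avoidance: float  # e.g., an EHARS-style avoidance score on a 1-7 scale

def choose_response_style(p: AttachmentProfile, midpoint: float = 4.0) -> str:
    """Pick a response style from a (hypothetical) attachment profile."""
    if p.anxiety > midpoint and p.avoidance <= midpoint:
        # High-anxiety users may benefit from explicit reassurance.
        return "warm, reassuring, explicitly available"
    if p.avoidance > midpoint and p.anxiety <= midpoint:
        # High-avoidance users may prefer task focus and respectful distance.
        return "concise, practical, low on emotional language"
    if p.anxiety > midpoint and p.avoidance > midpoint:
        return "steady and predictable, supportive without pressing for closeness"
    return "balanced default"

print(choose_response_style(AttachmentProfile(anxiety=6.3, avoidance=1.7)))

In practice, any such adaptation would raise exactly the transparency concerns discussed below, since it deliberately shapes the emotional tone of the system's responses.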

The Need for Transparency and Balance

The researchers emphasize that their findings don't necessarily mean humans are forming genuine emotional attachments to AI in the same way they do with people. Rather, the study demonstrates that psychological frameworks used for human relationships can provide valuable insights into human-AI interactions.

This research highlights the need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation. The goal should be to create AI that enhances human well-being rather than replacing human connections.

Looking Toward the Future

As AI becomes increasingly integrated into everyday life, understanding the psychological dynamics behind human-AI interactions becomes crucial. The EHARS scale developed by the Waseda University team could be used by developers and psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly.

"As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI."

— Fan Yang, Research Associate, Waseda University

At a societal level, this research deepens our understanding of how humans connect with technology, helping to guide policy and design practices that prioritize psychological well-being in an increasingly AI-integrated world.

Tom Ventura

Tom is the Founder and CEO of Notle with a vision for transforming mental healthcare through AI. He founded Notle to bridge the gap between technology and effective mental health support.
