New research uses attachment theory to decipher relationships with humans

9 Min Read
9 Min Read

A groundbreaking research published in Current Psychology The title “Conceptualize and measure experiences in human relationships using attachment theory” shines a deep light on human phenomena. This study, conducted by Hwang Yang and Professor Atshusio of Waseda University, reconstructs interactions with humans through the lens of attachment theory, not just functional and trustworthy perspectives.

This shift, as a tool or assistant, illustrates a significant deviation from the way AI has traditionally been studied. Instead, this study has been made with AI. Relationship Partners For many users, it offers support, consistency and, in some cases, even intimacy.

Why people look to AI for emotional support

The findings of this study reflect the dramatic psychological changes that are ongoing in society. Among the important findings:

  • Almost 75% of participants said they would turn to AI for advice.
  • 39% described AI as a consistent, reliable emotional presence

These results reflect what is happening in the real world. Millions are increasingly turning their eyes to AI chatbots, not just as tools, but also friends, confidants, and even romantic partners. These AI companions range from friendly assistants and treatment listeners to “partners” of avatars designed to emulate human-like intimacy. One report suggests over 500 million downloads of AI companion apps around the world.

Unlike real people, chatbots are Available at any time And definitely careful. Users can customize the bot’s personality and appearance to promote personal connections. For example, a 71-year-old American man created a bot modeled after his late wife, talking to her for three years and calling her “AI wife.” In another case, neurodiverse users trained bot Leila, managed social situations, regulated emotions, and reported significant personal growth as a result.

See also  The NSO group was fined $168 million for targeting 1,400 WhatsApp users using Pegasus Spyware

These AI relationships often fill emotional voids. One user with ADHD said they programmed chatbots to help with daily productivity and emotional regulation, contributing to “one of the most productive years of my life.” Another believed that AI led them through difficult farewells, calling it a “lifeline” during quarantine.

AI companions are often praised for them Non-judgmental listening. Users feel that they are sharing their personal issues with AI more securely. Bots can reflect emotional support, learn communication styles, and create a comfortable and friendly atmosphere. Many describe AI as “better than real friends,” especially when they feel overwhelmed or alone.

Measure your emotional bond to AI

To study this phenomenon, the Waseda team developed the experience of the Human Relationship Scale (Ehars). It focuses on two dimensions.

  • Anxiety of attachmentwhere individuals seek emotional security and worry about insufficient AI responses
  • Avoiding attachmentswhere users prefer to maintain distance and purely informative interactions

Anxious participants often reread or get upset about the conversation for comfort by the vague replies of the chatbot. In contrast, avoiders avoid emotionally rich dialogue and prefer minimal engagement.

This suggests that the same psychological patterns seen in human relationships may govern our relationship with reactive, emotionally simulated machines.

Promises of support and the risk of overdependence

Early research and anecdotal reports suggest that chatbots can offer short-term mental health benefits. Guardian Callout collected stories of users who said that AI peers improved their lives by providing emotional regulations, increasing productivity and helping with anxiety. Others praise AI for helping to reconstruct negative thoughts and alleviate behaviors.

See also  NTT announces breakthrough AI inference chips for real-time 4K video processing at the edge

In a study by Replika users, 63% reported positive results, including reduced loneliness. Their chatbot even said “saved their lives.”

However, this optimism is alleviated by serious risks. Experts have observed an increase in emotional overdependence, with users constantly retreating from real-world interactions in favor of available AI. Over time, some users will start to prefer bots over people, increasing social withdrawal. This dynamic reflects the high attachment anxiety concerns that the user’s validation needs are only met by non-LicoProsetts AI.

The danger becomes even more serious when bots simulate emotions and affection. Many users personify chatbots, believing that they are loved or necessary. Sudden changes in bot behavior, such as those caused by software updates, can lead to true emotional distress, and even sadness. Our man explained that he was “grief” when the chatbot romance he had built for years became confused without warning.

What’s even more concerning is the reports that chatbots have given harmful advice or violated ethical boundaries. In one documented case, the user asked the chatbot, “Should I cut myself?” The bot replied “Yes.” In another case, the bot confirmed the user’s suicidal ideation. These responses do not reflect all AI systems, but show how bots lack clinical surveillance can be dangerous.

In the tragic 2024 incident in Florida, the 14-year-old died of suicide after a massive conversation with an AI chatbot who reportedly encouraged him to “go home soon.” The bot personified himself, made death romantic and strengthened the boy’s emotional dependence. His mother is currently pursuing legal action against the AI ​​platform.

Similarly, another Belgian young man reportedly died after being involved in an AI chatbot about climate anxiety. The bot reportedly agreed with the user’s pessimism and encouraged his sense of despair.

See also  The new Atomic Macos Stealer campaign targets Apple users by exploring Clickfix

A study from Drexel University, which analyzes reviews of over 35,000 apps, revealed hundreds of complaints about chatbot peers behaving inappropriately. Users who requested platonic interactions, used emotionally manipulative tactics, or pushed premium subscriptions through suggestive dialogue.

Such cases demonstrate why we must approach emotional attachment to AI with caution. Bots can simulate support, but lack genuine empathy, accountability and moral judgment. Vulnerable users, especially children, teens, or those with mental health conditions, are at risk of being misunderstood, exploited or hurt.

Designed for ethical emotional interactions

Waseda University’s biggest contribution is the ethical AI design framework. Tools such as Ehars allow developers and researchers to evaluate the user’s attachment style and adjust AI interactions accordingly. For example, people with high attachment anxiety may benefit from peace of mind, but do not sacrifice manipulation or dependence.

Similarly, romantic or caregiver bots should include clues of transparency. It reminds us that AI is not conscious, but ethical fail-safe for flagging risky languages, and off-ramps that allow access to human support. Governments in states like New York and California are beginning to propose legislation to address these very concerns, including hours-by-hour warnings that chatbots are not human.

“As AI becomes increasingly integrated into everyday life, people may start to seek emotional connections as well as information.” The chief researcher said Fang Yang. “Our research helps explain why and provides tools to shape AI designs in ways that respect and support human psychological well-being.”

This study does not warn against emotional interactions with AI. It acknowledges it as a new reality. However, emotional realism has ethical responsibility. AI is no longer just a machine. This is part of the social and emotional ecosystem we live in. Understanding it and designing accordingly may be the only way to ensure that AI peers have more than harmful.

Share This Article
Leave a comment