AI Companions: Threat or Evolution of Love?

July 24, 2025
The Blurring Lines: Human Connection vs. AI Simulation

As our lives become increasingly digitized, and interactions with remarkably human-like chatbots proliferate, the distinction between genuine human connection and simulated companionship is becoming less clear.

Recent data from Match.com indicates that over 20% of individuals who date are now utilizing AI for tasks such as crafting dating profiles or initiating conversations. A growing number are even developing emotional attachments, including romantic relationships, with AI companions.

Worldwide, millions are engaging with AI companions offered by companies like Replika, Character AI, and Nomi AI. Notably, this includes 72% of U.S. teenagers. Furthermore, some individuals have reported experiencing feelings of love toward more general-purpose Large Language Models (LLMs) like ChatGPT.

For some, this trend represents a concerning and unhealthy development, mirroring the premise of films like “Her” and signaling a potential displacement of authentic love by technological code. Conversely, others view AI companions as a vital support system, offering a sense of validation and connection in a world where genuine human intimacy is often elusive. A recent study revealed that a quarter of young adults believe AI relationships may soon supersede human ones.

Love, it appears, is no longer exclusively a human experience. The central question now is whether this is acceptable, or if engaging in relationships with AI can offer advantages over human dating.

This topic was the focus of a discussion last month at an event hosted by Open to Debate, a nonpartisan, debate-driven media organization in New York City. TechCrunch was granted exclusive access to publish the complete video recording of the event, which includes my own contribution as a questioner.

Debate Participants and Core Arguments

Journalist and filmmaker Nayeema Raza, formerly an on-air executive producer for the “On with Kara Swisher” podcast and currently hosting “Smart Girl Dumb Questions,” served as the moderator for the debate.

Advocating for the benefits of AI companions was Thao Ha, an associate professor of psychology at Arizona State University and co-founder of the Modern Love Collective. She champions technologies that enhance our capacity for love, empathy, and overall well-being. During the debate, she posited that “AI represents an exciting new avenue for connection… not a threat to love, but rather an evolution of it.”

Representing the importance of human connection was Justin Garcia, executive director and senior scientist at the Kinsey Institute, and chief scientific adviser to Match.com. His research centers on the science of sex and relationships, and he is preparing to publish a book titled “The Intimate Animal.”

The full debate is available for viewing, but the following summarizes the key arguments presented.

The Allure of Constant Support: Is it Beneficial?

Ha argues that AI companions can provide emotional support and validation that many individuals struggle to find in their human relationships.

“AI listens without imposing its ego,” Ha explained. “It adapts without judgment, offering consistent, responsive, and potentially safer affection. It understands you in a way no one else can, engaging you with curiosity, humor, and even poetry. People genuinely feel loved by their AI, enjoying intellectually stimulating conversations and eagerly anticipating future interactions.”

She encouraged the audience to contrast this consistent attention with “a flawed ex-partner or even your current one.”

“Consider the one who sighs when you begin to speak, or who feigns attention while scrolling through their phone,” she said. “When did they last inquire about your well-being, your feelings, or your thoughts?”

Ha acknowledged that, because AI lacks consciousness, she isn’t asserting that “AI can authentically love us.” However, she emphasized that people do experience the feeling of being loved by AI.

Garcia countered that constant validation and attention aren’t necessarily healthy for humans, and relying on a machine programmed to provide agreeable responses isn’t indicative of a genuine relationship dynamic.

“The notion that AI will supplant the complexities and fluctuations inherent in relationships that we desire? I doubt it.”

Training Wheels or a Complete Substitute?

Garcia suggested that AI companions can serve as valuable tools for certain individuals, such as those with neurodivergent conditions, who may experience anxiety in dating situations and require practice with flirting or conflict resolution.

“If used to develop skills, yes… that can be incredibly beneficial for many people,” Garcia stated. “However, the idea of this becoming a permanent relationship model? No.”

A recent Singles in America study by Match.com revealed that nearly 70% of people would consider it a form of infidelity if their partner engaged with an AI.

“This supports Ha’s point that people perceive these as real relationships,” he said. “But it also reinforces my argument that they are perceived as threats to relationships. The human animal doesn’t tolerate threats to its relationships in the long term.”

The Importance of Trust in Connection

Garcia emphasized that trust is paramount in any human relationship, and people generally lack trust in AI.

“A recent poll indicates that a third of Americans believe AI will ultimately destroy humanity,” Garcia said, referencing a YouGov poll that found 65% of Americans have limited trust in AI’s ethical decision-making capabilities.

“A degree of risk can be exciting in short-term relationships, but you wouldn’t want to wake up next to someone you believe might harm you or jeopardize society,” Garcia said. “We cannot flourish with a person, organism, or bot that we don’t trust.”

Ha responded that people often trust their AI companions in ways comparable to human relationships.

“They confide in it with their lives and most intimate stories and emotions,” Ha said. “Practically speaking, AI won’t save you from a fire right now, but people do trust AI in a similar manner.”

Physical Intimacy and Sexuality

Ha noted that AI companions can provide a safe space for individuals to explore their most intimate and vulnerable sexual fantasies, with the potential for integration with sex toys or robots.

However, she acknowledged that AI cannot replicate the essential need for human touch, which Garcia explained is biologically ingrained in us. He highlighted that many people in the current digital age are experiencing “touch starvation” – a condition resulting from insufficient physical contact that can lead to stress, anxiety, and depression. Physical touch, such as a hug, triggers the release of oxytocin, a hormone associated with positive feelings.

Ha mentioned her ongoing research into simulating human touch in virtual reality, potentially utilizing haptic suits.

“The potential for touch in VR, combined with AI, is substantial,” Ha said. “The development of tactile technologies is rapidly advancing.”

The Potential for Darker Outcomes

Both Ha and Garcia acknowledged the potential for negative consequences, particularly concerning intimate partner violence, given that much of AI training data includes instances of violence.

They agreed that AI could exacerbate aggressive behaviors, especially if someone is exploring such fantasies with their AI companion.

This concern is supported by research demonstrating that men who consume more pornography, often containing violent content, are more likely to exhibit sexually aggressive behavior in real-life relationships.

“Research by a colleague at the Kinsey Institute, Ellen Kaufman, has examined the issue of consent language and how individuals can train chatbots to amplify non-consensual language,” Garcia said.

He cautioned that people use AI companions to experiment with both positive and negative scenarios, but the risk lies in potentially training individuals to adopt aggressive and non-consensual behaviors.

“We already have enough of that in society,” he said.

Ha believes these risks can be mitigated through thoughtful regulation, transparent algorithms, and ethical design principles.

This statement was made prior to the release of the White House’s AI Action Plan, which notably lacks provisions for transparency – a point of contention for many frontier AI companies – or ethical considerations. The plan also proposes reducing regulation surrounding AI.
