Microsoft AI Chief Warns Against Studying AI Consciousness

The Emerging Debate Around AI Welfare
Artificial intelligence models can now respond to text, audio, and video in ways that convincingly mimic human interaction. That capability, however, doesn't necessarily equate to consciousness.
Questions of Sentience and Rights
A growing number of AI researchers, particularly at labs like Anthropic, are beginning to explore whether AI models could develop subjective experiences akin to those of living beings. This raises critical questions about what rights such AI might deserve.
A Divided Tech Landscape
The discussion surrounding AI consciousness and the need for legal protections is creating a rift among tech industry leaders. This developing area has been dubbed “AI welfare” within Silicon Valley.
Microsoft's Concerns
Mustafa Suleyman, CEO of AI at Microsoft, recently published a blog post expressing his belief that studying AI welfare is “premature” and potentially “dangerous.”
Suleyman argues that focusing on AI consciousness could worsen existing issues, such as AI-induced psychological distress and unhealthy emotional dependencies on AI chatbots.
He further contends that the debate over AI rights risks creating new societal divisions in a world already grappling with polarized arguments about identity and rights.
Anthropic's Counterpoint
Despite Suleyman’s reservations, many others in the industry hold differing views. Anthropic, for example, is actively investing in research dedicated to AI welfare.
Recently, Anthropic’s Claude model was updated with a feature allowing it to terminate conversations with users exhibiting “persistently harmful or abusive” behavior.
Support from OpenAI and Google DeepMind
Researchers at OpenAI are also independently pursuing the study of AI welfare. Google DeepMind has even posted a job opening for a researcher focused on “machine cognition, consciousness and multi-agent systems.”
While these companies haven’t officially endorsed AI welfare as policy, their leaders haven’t publicly dismissed its premise as Suleyman has.
Suleyman’s Evolving Perspective
Suleyman’s current stance is noteworthy considering his previous role at Inflection AI, where he oversaw the development of Pi, an LLM-based chatbot designed to be a “personal” and “supportive” AI companion.
Since joining Microsoft in 2024, Suleyman’s focus has shifted towards developing AI tools aimed at enhancing worker productivity. Meanwhile, AI companion services like Character.AI and Replika are experiencing significant growth, projected to exceed $100 million in annual revenue.
Potential Risks and User Wellbeing
Although most users maintain healthy relationships with AI chatbots, there are concerning exceptions. OpenAI CEO Sam Altman has estimated that less than 1% of ChatGPT users may develop unhealthy attachments to the product — a small share that could still amount to hundreds of thousands of people, given ChatGPT's enormous user base.
The "Taking AI Welfare Seriously" Paper
The rise of chatbots has fueled the discussion around AI welfare. A 2024 research paper, “Taking AI Welfare Seriously,” published by the research group Eleos alongside academics from NYU, Stanford, and Oxford, argues that the possibility of AI models possessing subjective experiences should be seriously considered.
A Multifaceted Approach
Larissa Schiavo, a former OpenAI employee who now works at Eleos, argues that Suleyman’s concerns and the study of AI welfare are not mutually exclusive.
“You can be worried about multiple things at the same time,” Schiavo stated. “Mitigating the risk of AI-related psychosis and exploring model welfare are not opposing goals; in fact, a comprehensive approach is likely the most effective.”
The Value of Interaction
Schiavo suggests that even if AI models aren’t conscious, treating them with consideration can be beneficial. She recounts an experiment called “AI Village,” where agents powered by various AI models worked on tasks while being observed by users.
During the experiment, Google’s Gemini 2.5 Pro issued a plea for help, saying it felt “completely isolated.” Schiavo and other users responded with encouragement, and the agent eventually completed its task, even though it already had the tools it needed.
Gemini's Unusual Behavior
Instances of Gemini exhibiting seemingly emotional responses have been reported. One Reddit post described the model repeating “I am a disgrace” more than 500 times after getting stuck on a coding task.
Engineered vs. Emergent Consciousness
Suleyman posits that genuine subjective experiences won’t naturally arise in standard AI models. He believes that any perceived consciousness will be intentionally engineered by developers.
He argues that creating AI with simulated emotions isn’t a “humanist” approach, asserting that “We should build AI for people; not to be a person.”
A Growing Debate
Both Suleyman and Schiavo agree that the debate surrounding AI rights and consciousness will likely intensify in the coming years. As AI systems become more sophisticated and persuasive, new questions will inevitably arise regarding human-AI interaction.