AI Leaders & the AGI Debate: A Realistic View

March 19, 2025
Topics: AGI, AI
A Disquieting Discussion on AI's Future

During a recent gathering of business leaders in San Francisco, a remark I offered prompted noticeable concern. I had simply inquired whether current artificial intelligence systems could eventually attain human-level intelligence (AGI) or even surpass it.

This topic proves to be more contentious than many realize.

The Optimistic Viewpoint on LLMs

As of 2025, numerous tech CEOs are advocating for the potential of large language models (LLMs), the technology behind chatbots such as ChatGPT and Gemini, to achieve human-level or even superior intelligence in the foreseeable future. These leaders posit that highly advanced AI will generate extensive and broadly accessible societal advantages.

For instance, Dario Amodei, CEO of Anthropic, asserted in an essay that exceptionally potent AI could emerge as early as 2026, exhibiting intelligence exceeding that of a Nobel laureate across most relevant disciplines. Meanwhile, Sam Altman, CEO of OpenAI, has recently stated his company possesses the knowledge to construct “superintelligent” AI, predicting it could “massively accelerate scientific discovery.”

Skepticism Among AI Pioneers

However, these optimistic projections are not universally accepted.

A number of other AI leaders are doubtful that today’s LLMs can reach AGI—let alone superintelligence—without significant, innovative breakthroughs. These individuals have traditionally maintained a lower public profile, but an increasing number are now voicing their perspectives.

In a recent publication, Thomas Wolf, co-founder and chief science officer of Hugging Face, characterized certain aspects of Amodei’s vision as “wishful thinking at best.” Drawing upon his doctoral research in statistical and quantum physics, Wolf argues that Nobel Prize-winning discoveries stem from formulating novel questions rather than simply answering existing ones; answering known questions is the task at which today’s AI excels.

Wolf contends that current LLMs are not equipped to handle such a challenge.

The Need for Deeper Evaluation

“I would welcome seeing this ‘Einstein model’ become a reality, but we must delve into the specifics of how to achieve it,” Wolf explained to TechCrunch. “That is where the discussion becomes truly compelling.”

Wolf stated his article was motivated by a perceived excess of hype surrounding AGI, coupled with insufficient rigorous assessment of the path to its realization. He suggests that, as it stands, AI could profoundly transform the world in the near term without necessarily attaining human-level intelligence or superintelligence.

Navigating the Hype and Criticism

A significant portion of the AI community has become captivated by the prospect of AGI. Those who question its feasibility are frequently labeled as “anti-technology” or otherwise biased and uninformed.

While some might categorize Wolf as a pessimist, he identifies as an “informed optimist”—someone dedicated to advancing AI while remaining grounded in reality. He is not alone among AI leaders in holding conservative predictions regarding the technology’s progress.

Cautious Assessments from Industry Leaders

Demis Hassabis, CEO of Google DeepMind, has reportedly informed staff that he believes the industry may be a decade away from developing AGI, citing numerous tasks that AI currently cannot perform. Yann LeCun, Meta’s Chief AI Scientist, has also expressed reservations about the capabilities of LLMs. Speaking at Nvidia GTC, LeCun dismissed the notion that LLMs could achieve AGI as “nonsense,” advocating for entirely new architectures as the foundation for superintelligence.

Exploring the Path to Creative AI

Kenneth Stanley, a former lead researcher at OpenAI, is actively investigating the intricacies of building advanced AI using existing models. He is now an executive at Lila Sciences, a new venture that secured $200 million in venture funding to facilitate scientific innovation through automated laboratories.

Stanley’s work centers on extracting original, creative ideas from AI models, a subfield known as open-endedness. Lila Sciences aims to develop AI models capable of automating the entire scientific process, beginning with the formulation of insightful questions and hypotheses that could lead to breakthroughs.

“I would have liked to have authored [Wolf’s] essay, as it accurately reflects my sentiments,” Stanley shared with TechCrunch. “His observation that extensive knowledge and skill do not necessarily equate to truly original ideas resonated with me.”

Creativity as a Key Component of AGI

Stanley posits that creativity is a crucial step toward AGI, but acknowledges that constructing a “creative” AI model is a complex undertaking.

Proponents like Amodei highlight methods such as AI “reasoning” models, which employ greater computational resources to verify their work and provide more consistent answers, as evidence that AGI is not far off. However, Stanley suggests that generating novel ideas and questions may necessitate a different form of intelligence.

“Reasoning is almost the opposite of [creativity],” he added. “Reasoning models focus on ‘Here’s the problem, let’s move directly toward the solution,’ which limits exploration and the potential for divergent, creative thinking.”

The Role of Subjectivity in AI Design

To create genuinely intelligent AI models, Stanley proposes that we must algorithmically replicate a human’s subjective preference for promising new ideas. Current AI models excel in academic fields with definitive answers, such as mathematics and programming. However, Stanley points out that designing an AI model for more subjective tasks requiring creativity—tasks without a single “correct” answer—is considerably more challenging.

“Scientists often avoid [subjectivity]—the term is almost taboo,” Stanley said. “But there’s no inherent obstacle to addressing subjectivity [algorithmically]. It’s simply another element of the data stream.”

Growing Attention to Open-Endedness

Stanley expresses satisfaction that the field of open-endedness is gaining traction, with dedicated research labs at Lila Sciences, Google DeepMind, and AI startup Sakana now focused on the problem. He is witnessing increased discussion about creativity in AI, but believes substantial further work is required.

AI Realists and the Path Forward

Wolf and LeCun would likely concur. Consider them the AI realists—leaders approaching AGI and superintelligence with serious, grounded inquiries about their viability. Their objective is not to dismiss advancements in AI, but rather to initiate a comprehensive discussion about the obstacles separating current AI models from AGI—and superintelligence—and to actively address those challenges.

#AGI #artificial intelligence #AI leaders #AI debate #artificial general intelligence #AI ethics