
Why Cohere’s ex-AI research lead is betting against the scaling race

October 22, 2025

The Shifting Landscape of AI Development

Artificial intelligence labs are racing to build massive data centers, some approaching the size of Manhattan. These facilities cost billions of dollars and consume as much energy as a small city.

The buildout is driven by a deep conviction in “scaling”: the belief that applying ever more computing power to existing AI training methods will eventually produce highly intelligent systems capable of a wide range of tasks.

Challenges to the Scaling Paradigm

A growing number of AI researchers, however, argue that scaling large language models (LLMs) may be approaching its practical limits, and that further gains in AI performance will require different kinds of breakthroughs rather than simply larger models.

Sara Hooker, formerly VP of AI Research at Cohere and an alumna of Google Brain, is championing this perspective with her new venture, Adaption Labs.

Adaption Labs: A New Approach to AI

Founded with Sudip Roy, a fellow veteran of Cohere and Google, Adaption Labs is built on the idea that scaling LLMs has become an increasingly inefficient way to wring more performance out of AI models.

Hooker, who departed from Cohere in August, quietly launched the startup earlier this month to accelerate recruitment efforts.

Continuous Adaptation and Efficient Learning

In a recent interview, Hooker explained that Adaption Labs is focused on developing AI systems capable of continuous adaptation and learning from real-world interactions, all while maintaining exceptional efficiency.

She declined to share specifics about the company’s methods, or whether it builds on LLMs or a different architecture.

The Limitations of Current AI Models

“A pivotal moment has arrived where it’s evident that simply scaling these models—approaches focused on scaling, while appealing, are ultimately quite limited—has not yielded intelligence capable of effectively navigating or interacting with the world,” Hooker stated.

Adaptation is, according to Hooker, the “core of learning.” Consider the example of stubbing a toe; the experience prompts a more cautious approach in the future.

Reinforcement Learning and Real-World Application

AI labs have attempted to replicate this learning process through reinforcement learning (RL), enabling AI models to learn from errors within controlled environments.

Today’s RL techniques, however, do not allow AI models already in production, meaning systems in customers’ hands, to learn from their mistakes in real time, so they keep repeating the same errors.
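To make that distinction concrete, here is a minimal, hypothetical sketch in Python. It is not a description of any lab’s actual system; the ToyPolicy and reward names are invented for illustration. During training the policy updates from feedback, but once deployed its weights are frozen, so when the environment shifts it repeats the same mistake.

import random

def reward(answer: str, expected: str) -> float:
    # Toy reward signal: 1.0 for a correct answer, 0.0 otherwise.
    return 1.0 if answer == expected else 0.0

class ToyPolicy:
    # A trivially simple "model" that prefers whichever answer has scored best so far.
    def __init__(self, candidates):
        self.scores = {c: 0.0 for c in candidates}

    def act(self) -> str:
        # Explore occasionally; otherwise exploit the best-scoring answer.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def update(self, answer: str, r: float, lr: float = 0.5) -> None:
        # Nudge the score for the chosen answer toward the observed reward.
        self.scores[answer] += lr * (r - self.scores[answer])

# Training: the policy learns from its mistakes in a controlled environment.
policy = ToyPolicy(["A", "B", "C"])
for _ in range(200):
    choice = policy.act()
    policy.update(choice, reward(choice, expected="B"))

# Deployment: the weights are frozen. The "world" now rewards a different answer,
# but because no update runs in production, the model keeps making the same error.
for _ in range(3):
    choice = policy.act()
    r = reward(choice, expected="C")
    # policy.update(choice, r)  # <- the step that continual adaptation would add

The commented-out final line is the gap Hooker is pointing at: deployed systems observe outcomes but do not update on them.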

The Cost of Customization

While some AI labs provide consulting services to tailor AI models to specific enterprise needs, these services come at a substantial cost. Reports indicate that OpenAI requires clients to commit to spending upwards of $10 million to access its fine-tuning consulting services.

Democratizing AI Development

“Currently, a small number of leading labs control the development of AI models, offering them in a standardized manner to all users, and adaptation is prohibitively expensive,” Hooker observed.

“I believe this situation is no longer necessary, and AI systems can learn efficiently from their environment. Demonstrating this capability will fundamentally alter the dynamics of AI control and ultimately determine who benefits from these models.”

Industry Skepticism and Emerging Trends

Adaption Labs represents the latest indication of growing industry skepticism regarding the unwavering faith in LLM scaling. Recent research from MIT suggests that the largest AI models may soon encounter diminishing returns.

A noticeable shift in sentiment is also apparent in San Francisco, with the AI community’s prominent podcaster, Dwarkesh Patel, hosting increasingly critical discussions with leading AI researchers.

Expert Perspectives on Scaling Limits

Richard Sutton, a Turing Award recipient widely regarded as “the father of RL,” told Patel in September that LLMs cannot truly scale because they are unable to learn from genuine real-world experience.

Similarly, Andrej Karpathy, an early OpenAI employee, told Patel this month that he has reservations about RL’s long-term potential to improve AI models.

Historical Concerns and Recent Breakthroughs

These concerns are not entirely new; in late 2024, some AI researchers raised alarms that scaling AI models through pretraining, in which models learn patterns from vast datasets, was hitting diminishing returns.

Pretraining had previously been a key factor in the advancements made by OpenAI and Google.

While those early concerns about pretraining have largely been borne out, the AI industry has since found other ways to improve its models.

Breakthroughs in AI reasoning models, which require additional time and computational resources to analyze problems before providing answers, have further expanded the capabilities of AI models in 2025.

The Future of AI: RL and Reasoning Models

AI labs appear convinced that scaling RL and AI reasoning models represent the next frontier. OpenAI researchers previously stated that they developed their first AI reasoning model, o1, due to its perceived scalability.

Researchers at Meta and Periodic Labs recently published a study exploring how RL could further improve performance, a project that reportedly cost more than $4 million and underscores how expensive current approaches remain.

Adaption Labs’ Vision

Adaption Labs, in contrast, aims to identify the next significant breakthrough and demonstrate that learning from experience can be considerably more cost-effective.

The startup was reportedly in discussions to secure a seed round of $20 million to $40 million earlier this fall, according to three investors who reviewed its pitch decks. The round has since closed, although the final amount remains undisclosed.

Hooker declined to comment on the funding details.

Expanding Access to AI Research

“We are positioned to be highly ambitious,” Hooker stated when discussing her investors.

Previously, Hooker led Cohere Labs, where she focused on training smaller AI models for enterprise applications. These compact AI systems now frequently outperform larger models on coding, mathematics, and reasoning benchmarks—a trend Hooker intends to continue.

She has also established a reputation for promoting broader access to AI research globally, recruiting talent from underrepresented regions such as Africa.

While Adaption Labs will soon establish an office in San Francisco, Hooker plans to hire personnel worldwide.

Implications of Adaptive Learning

If Hooker and Adaption Labs are correct about the limitations of scaling, the implications could be substantial. Billions of dollars have already been invested in scaling LLMs, based on the assumption that larger models will lead to general intelligence.

However, it is possible that true adaptive learning could prove not only more powerful but also significantly more efficient.

Marina Temkin contributed reporting.

Tags: Cohere, AI, large language models, scaling, AI research, artificial intelligence