
AI as Co-Scientist: Experts Weigh In

March 5, 2025

Skepticism Surrounds Google’s “AI Co-Scientist”

Google recently unveiled its “AI co-scientist,” an artificial intelligence platform intended to help researchers formulate hypotheses and design research strategies. While the tool is presented as a means to facilitate novel discoveries, many experts doubt its practical utility and suggest it falls short of the company’s promotional claims.

Concerns from the Research Community

Sara Beery, a computer vision researcher at MIT, voiced skepticism, stating the tool appears unlikely to gain serious traction within the scientific community. She questioned the actual demand for such a hypothesis-generation system, as reported by TechCrunch.

Google’s initiative aligns with a broader trend among tech companies to promote AI as a catalyst for accelerating scientific progress, particularly in fields characterized by extensive literature, like biomedicine.

Bold Predictions vs. Current Capabilities

OpenAI CEO Sam Altman has posited that “superintelligent” AI could significantly expedite scientific discovery and innovation. Similarly, Anthropic CEO Dario Amodei has confidently predicted AI’s potential to contribute to cures for many forms of cancer.

However, a significant number of researchers currently find AI to be of limited practical assistance in guiding the scientific process. They characterize applications like Google’s AI co-scientist as largely promotional, lacking substantial empirical support.

Lack of Concrete Results

Google highlighted the AI co-scientist’s potential in areas such as identifying existing drugs for repurposing in the treatment of acute myeloid leukemia, a cancer affecting bone marrow. However, Favia Dubyk, a pathologist at Northwest Medical Center-Tucson in Arizona, deemed these results insufficiently detailed to be taken seriously.

Dubyk expressed to TechCrunch that while the tool might serve as a preliminary starting point, the absence of specific information raises concerns and diminishes trust in its reliability. The limited data provided makes assessing its true helpfulness challenging.

A History of Unsubstantiated Claims

This is not the first instance of Google facing criticism from the scientific community for publicizing purported AI breakthroughs without providing the necessary means for independent verification.

Previous Controversies

In 2020, Google asserted that its AI system for breast tumor detection outperformed human radiologists. A subsequent rebuttal published in the journal Nature by researchers from Harvard and Stanford criticized the lack of detailed methodology and code, arguing it compromised the research’s scientific validity.

Furthermore, scientists have previously criticized Google for downplaying the limitations of its AI tools designed for scientific fields like materials engineering.

The GNoME Example

In 2023, Google claimed that approximately 40 “new materials” had been synthesized with the assistance of its AI system, GNoME. However, an external analysis revealed that none of these materials were genuinely novel.

The Need for Rigorous Evaluation

Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, emphasized the need for rigorous, independent evaluation of tools like Google’s “co-scientist” across a range of scientific disciplines. AI often excels in controlled settings, he told TechCrunch, but can falter when deployed at scale.

The Intricacies of Scientific Advancement

A major hurdle in building AI tools for scientific discovery is anticipating the countless unforeseen variables that arise in real research. AI shows promise in areas that demand broad exploration, such as narrowing down a vast space of candidate solutions, but it remains uncertain whether it is capable of the kind of creative problem-solving that produces genuine breakthroughs.

KhudaBukhsh highlighted that throughout history, pivotal scientific advancements, including the creation of mRNA vaccines, were propelled by human insight and dedication despite initial doubts. Current AI technology may not be capable of replicating this process.

AI’s Focus and Its Limitations

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, argues that tools like Google’s AI co-scientist focus on the wrong aspects of scientific work.

Sinapayen acknowledges the value of AI in automating complex or repetitive tasks, such as summarizing recent academic publications or adapting work to meet grant application guidelines. However, she observes limited interest within the scientific community for an AI collaborator that formulates hypotheses – a task that many researchers find intellectually rewarding.

“Hypothesis generation is often the most enjoyable aspect of scientific work, even for myself,” Sinapayen explained to TechCrunch. “There’s little incentive to delegate this fulfilling activity to a computer, leaving only the more challenging tasks. Many generative AI researchers appear to misinterpret the motivations behind human work, resulting in product proposals that automate the very elements people enjoy.”

Challenges in Verification and Practical Application

Beery pointed out that a frequently difficult stage in scientific investigation is the design and execution of studies and analyses to confirm or refute a hypothesis – a capability that currently exceeds the reach of most AI systems.

Naturally, AI cannot physically manipulate instruments to conduct experiments, and its performance often declines when dealing with problems involving scarce data.

“A substantial portion of the scientific process is inherently physical, involving data collection and laboratory experimentation, making complete virtual execution often impossible,” Beery stated. “A key limitation of systems like Google’s AI co-scientist, impacting its practical application, is the lack of contextual awareness regarding the lab, the researcher, their objectives, prior work, skills, and available resources.”

Ultimately, while AI offers valuable assistance in specific areas, replicating the full spectrum of human scientific ingenuity remains a considerable challenge.

Potential Dangers of Artificial Intelligence

The inherent limitations and potential hazards associated with AI – including its propensity for generating inaccurate information (hallucinations) – cause considerable concern among researchers regarding its application in critical areas of study.

KhudaBukhsh worries that AI-driven tools could end up adding noise to the scientific literature rather than fostering genuine progress.

This issue is already manifesting. Recent investigations have revealed a surge in fabricated, low-quality research papers generated by AI, saturating platforms like Google Scholar, Google's academic search engine.

“Without diligent oversight, research produced by AI has the potential to inundate the scientific community with studies of diminished quality or containing deceptive information,” KhudaBukhsh explained. “This could place undue strain on the peer-review system, which is already facing difficulties, particularly in rapidly expanding fields like computer science where submission rates have increased dramatically.”

Even meticulously planned research projects could be compromised by the unpredictable behavior of AI, according to Sinapayen. Although she acknowledges the potential benefits of AI assisting with tasks like literature reviews and data synthesis, she currently lacks confidence in its ability to perform these functions dependably.

“Currently, I would not delegate tasks such as these to existing AI systems,” Sinapayen stated, also voicing concerns about the training methodologies employed by many AI programs and their substantial energy consumption. “Even if all ethical considerations were addressed, the present state of AI is simply not dependable enough for me to rely on its outputs in any capacity.”

Concerns Regarding AI-Generated Research

  • Potential for flooding scientific literature with low-quality or misleading studies.
  • Overwhelming the peer-review process.
  • Unreliability of AI in tasks like literature review and synthesis.
  • Ethical issues related to AI training and energy consumption.

The scientific community remains cautious, emphasizing the need for careful monitoring and validation of AI-generated content to maintain the integrity of research.
