AI Startups Accused of Misusing Peer Review - Concerns Rise

Controversy Surrounds AI-Generated Research at ICLR Conference
A debate has emerged concerning studies created with artificial intelligence and submitted to this year’s International Conference on Learning Representations (ICLR), a prominent academic gathering centered on AI research.
AI Labs and Paper Submissions
At least three AI development companies – Sakana, Intology, and Autoscience – have stated they used AI to create studies that were subsequently accepted for presentation at ICLR workshops. At conferences like ICLR, workshop organizers are typically responsible for reviewing the submissions to their workshop programs.
Sakana proactively informed ICLR leadership prior to submitting its AI-authored papers and secured the consent of the peer reviewers involved. However, Intology and Autoscience did not follow this procedure, as confirmed by an ICLR representative to TechCrunch.
Criticism from the Academic Community
Numerous AI researchers expressed their disapproval on social media, characterizing the actions of Intology and Autoscience as an exploitation of the scientific peer review system.
Prithviraj Ammanabrolu, an assistant professor of computer science at UC San Diego, articulated this concern in a post on X, stating, “These AI scientist papers are leveraging peer-reviewed venues as a source of human evaluation, yet no consent was obtained to provide this unpaid labor.” He further added, “This diminishes my respect for all parties involved, regardless of the system’s capabilities. Disclosure to the editors is essential.”
The Burden of Peer Review
Peer review is a demanding, time-intensive process, largely conducted on a volunteer basis. A recent Nature survey found that 40% of academics spend two to four hours reviewing a single study.
The workload associated with this process is increasing. The number of papers submitted to NeurIPS, a leading AI conference, rose to 17,491 last year, representing a 41% increase from the 12,345 submissions received in 2023.
The Growing Issue of AI-Generated Content
The presence of AI-generated content in academic papers is already a recognized problem. One analysis indicated that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetically produced text.
However, the practice of AI companies utilizing peer review to benchmark and promote their technologies is a more recent development.
Positive Reviews and Subsequent Backlash
Intology highlighted the positive reception of its papers, stating on X that they “received unanimously positive reviews” and that reviewers commended the “clever idea[s]” presented in one of its AI-generated studies.
This claim was met with considerable criticism from academics.
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, expressed on X that submitting AI-generated papers without obtaining permission from workshop organizers demonstrates a “lack of respect for human reviewers’ time.”
Panda further explained, “Sakana contacted us to inquire about participating in their experiment for the workshop I’m organizing at ICLR, and we declined. I believe submitting AI papers to a venue without contacting the reviewers is inappropriate.”
Skepticism Regarding AI-Generated Paper Quality
Many researchers question the value of subjecting AI-generated papers to the peer review process.
Sakana acknowledged that its AI produced “embarrassing” errors in citations and that only one of the three AI-generated papers it submitted would have met the standards for full conference acceptance. The company subsequently withdrew its ICLR paper, citing transparency and respect for ICLR’s established practices.
Calls for Regulated Evaluation
Alexander Doria, co-founder of the AI startup Pleias, suggested that the recent surge in undisclosed AI submissions to ICLR underscores the need for a “regulated company/public agency” to conduct “high-quality” evaluations of AI-generated studies for a fee.
Doria stated in a series of posts on X, “Evaluations should be performed by researchers who are fully compensated for their time. Academia should not be utilized to outsource free evaluations of AI technologies.”