
November 6, 2025

Laude Institute Announces Inaugural Slingshots Grants for AI Research

The Laude Institute announced the first recipients of its Slingshots grants on Thursday. The grants are meant to advance both the scientific understanding and the practical application of artificial intelligence.

Accelerating AI Research with Dedicated Resources

The Slingshots program functions as an accelerator, giving researchers access to resources often scarce in traditional academic environments: funding, substantial compute, and dedicated product and engineering support.

In return, grant recipients commit to delivering a tangible outcome, whether a newly founded startup, a publicly available open-source repository, or another demonstrable result.

Initial Cohort Focuses on AI Evaluation

The inaugural cohort comprises fifteen projects, with a significant emphasis on the challenging problem of AI evaluation.

Several of the projects will be familiar to TechCrunch readers, including the Terminal Bench command-line coding benchmark and the newest iteration of the long-running ARC-AGI project.

Novel Approaches to Established Evaluation Problems

Beyond familiar names, the cohort also features projects exploring innovative solutions to well-known evaluation challenges.

Formula Code, developed by researchers from Caltech and UT Austin, seeks to evaluate the capacity of AI agents to refine and improve existing code. Meanwhile, BizBench, originating from Columbia University, introduces a detailed benchmark specifically designed for “white-collar AI agents.”

Additional grants are dedicated to investigating new frameworks for reinforcement learning and techniques for model compression.

CodeClash: A Competitive Framework for Code Assessment

SWE-Bench co-founder John Boda Yang is also part of the cohort, leading the new CodeClash project.

Drawing inspiration from the success of SWE-Bench, CodeClash will evaluate code through a dynamic, competition-based system, an approach Yang expects to yield more comprehensive and insightful results.

The Importance of Broad Benchmarking

“Continued evaluation using established, third-party benchmarks is crucial for driving advancement,” Yang stated in an interview with TechCrunch.

He also warned against a future in which benchmarks become overly specific to individual companies, hindering broader progress.

Tags: AI grants, Laude Institute, Slingshots, artificial intelligence, research grants