Why Cohere’s ex-AI research lead is betting against the scaling race

Source: techcrunch
Author: Maxwell Zeff
Published: 10/22/2025
The article discusses growing skepticism within the AI research community about the prevailing strategy of scaling large language models (LLMs) by adding computational power and building ever-larger data centers. Sara Hooker, former VP of AI Research at Cohere and a Google Brain alumna, exemplifies this shift with her new startup, Adaption Labs. Hooker argues that merely scaling LLMs has become inefficient and is unlikely to produce truly intelligent systems capable of adapting and learning continuously from real-world experience. Instead, her company focuses on building AI that can adapt in real time, a capability that current reinforcement learning (RL) methods fail to deliver effectively in production environments.
Hooker highlights that existing AI models, despite their size and complexity, do not learn from their mistakes once deployed, which limits their practical intelligence. She envisions AI systems that learn efficiently from their environment, which she argues would democratize control and customization of AI beyond a handful of dominant labs. This perspective aligns with recent academic findings and shifts in the AI community, including skepticism from prominent researchers.
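To make the distinction concrete, here is a minimal, hypothetical sketch in Python; it is not Adaption Labs' method, just an illustration of the gap Hooker describes. The `predict` and `online_update` functions and the toy logistic-regression setup are assumptions for the example: one model is frozen after deployment while the other keeps updating its weights from each observed outcome.

```python
# Hypothetical sketch (not Adaption Labs' actual approach): contrasting a
# frozen deployed model with one that keeps learning from feedback.
# Plain-Python online logistic regression updated by one SGD step per outcome.

import math
import random

def predict(weights, features):
    """Sigmoid of the dot product: probability the example is positive."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def online_update(weights, features, label, lr=0.1):
    """One SGD step on the log-loss gradient, applied after each outcome."""
    error = predict(weights, features) - label
    return [w - lr * error * x for w, x in zip(weights, features)]

random.seed(0)
true_w = [2.0, -3.0]   # the environment the model must track (unknown to it)
frozen = [0.0, 0.0]    # trained once, never updated after deployment
adaptive = [0.0, 0.0]  # keeps updating from real-world feedback

for step in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    label = 1.0 if sum(w * xi for w, xi in zip(true_w, x)) > 0 else 0.0
    adaptive = online_update(adaptive, x, label)  # learns from each outcome
    # `frozen` receives no updates, mirroring deployed models that cannot
    # learn from their mistakes.

print("frozen weights:  ", frozen)
print("adaptive weights:", [round(w, 2) for w in adaptive])
```

Run as-is, the frozen model's weights stay at zero while the adaptive model's weights converge toward the direction of the true environment, a toy version of the continual, in-production learning the article says current systems lack.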
Tags
energy, artificial-intelligence, AI-research, data-centers, machine-learning, large-language-models, AI-scalability