Scaling laws predict that AI agents will steadily improve and eventually exceed human performance across a wide range of tasks. Yet at the limit of these scaling laws lies a form of inference that involves no intelligence at all: with increasing compute and memory, a model can brute-force any verifiable task without learning anything from past experience. Universally optimal inference, pioneered by Solomonoff and Levin, requires no insight — only exhaustive search.
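To make the brute-force character of this limit concrete, here is a minimal Python sketch of Levin-style universal search, which solves any verifiable task by dovetailing over candidate programs; the `verifier`, `enumerate_programs`, and `run_with_budget` arguments are hypothetical placeholders I am supplying for illustration, not part of any particular system.

```python
# Minimal sketch of Levin-style universal search: enumerate candidate
# programs and run them under dovetailed time budgets until one produces
# an output the task's verifier accepts. No learning is involved; only
# exhaustive, verifier-checked search.
def universal_search(verifier, enumerate_programs, run_with_budget):
    phase = 1
    while True:
        # In phase k, each of the first 2**k programs gets a budget of
        # roughly 2**(k - len(p)) steps: shorter programs run longer.
        for program in enumerate_programs(limit=2 ** phase):
            budget = 2 ** (phase - len(program))
            if budget < 1:
                continue
            output = run_with_budget(program, budget)
            if output is not None and verifier(output):
                return output  # solved by brute force; nothing was learned
        phase += 1
```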
This raises a basic question: if scaling alone does not foster intelligence, what does? And if performance on downstream tasks is insufficient to measure intelligence, what is?
In this talk, I will point to the critical role of time in both analyzing and fostering the emergent reasoning behavior of AI agents. Building on insights that Solomonoff sketched in 1985 but that remained theoretical curiosities for decades, I will show that the value of learning is measured not by a reduction in uncertainty — the core of inductive learning and generalization — but by a reduction in the time needed to solve new tasks. A key result is that data can make a universal solver exponentially faster, with the speed-up tightly characterized by a single quantity: the algorithmic mutual information between past experience and the solution to unforeseen tasks.
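One standard way to make this quantitative (a sketch under the usual Levin-search accounting, with notation I am supplying here rather than quoting from the talk: $x$ the solution to the new task, $D$ past experience, $K(\cdot)$ prefix Kolmogorov complexity, $t(x)$ the runtime of the shortest solver) is:

```latex
T(x) \;\approx\; 2^{K(x)}\, t(x),
\qquad
T(x \mid D) \;\approx\; 2^{K(x \mid D)}\, t(x),
\qquad
\frac{T(x)}{T(x \mid D)} \;\approx\; 2^{K(x) - K(x \mid D)} \;=\; 2^{I(D:x)}.
```

On this accounting the exponential speed-up from conditioning the search on past experience $D$ is governed, up to logarithmic terms, by the algorithmic mutual information $I(D:x)$ between $D$ and the solution.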
Connecting these ideas to modern AI requires rethinking what computation means for systems powered by large language models. Unlike minimalistic models of computation such as Turing Machines, LLMs are stochastic dynamical systems whose computational elements — context, weights, activations, chain-of-thought — do not resemble a "program" in the ordinary sense. I will show that LLMs are instead maximalistic models of computation: universal, like Turing Machines, but operating through entirely different and in many ways antithetical mechanisms. Programming such systems can be achieved through two-level control strategies — open-loop planning and closed-loop feedback — operating in an abstract space, a framework we have recently released in the Strands Agents open-source library (www.strandsagents.com).
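As a schematic illustration of the two-level control idea, here is a hedged sketch in plain Python; the `llm_plan`, `llm_act`, and `observe` callables are hypothetical stand-ins for a language model and its environment, and this is not the Strands Agents API.

```python
# Schematic two-level control of an LLM agent: an outer open-loop plan
# drafted in an abstract task space, and an inner closed-loop step that
# executes each plan item and revises the plan from observed feedback.
# llm_plan, llm_act, and observe are hypothetical stand-ins; this is
# NOT the Strands Agents API.
def run_agent(goal, llm_plan, llm_act, observe, max_steps=20):
    history = []
    plan = llm_plan(goal, history)        # open-loop: draft a plan up front
    steps = 0
    while plan and steps < max_steps:
        step = plan[0]
        action = llm_act(goal, step, history)  # realize the abstract step as an action
        feedback = observe(action)             # execute and observe the environment
        history.append((step, action, feedback))
        plan = llm_plan(goal, history)          # closed-loop: replan from feedback
        steps += 1
    return history
```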
Once time is properly accounted for, scaling laws reveal an inversion: beyond a critical point, increasing resources improve benchmark accuracy while diminishing conceptual depth — a savant regime in which models improve while learning less. I will discuss what this means for how we build, evaluate, and scale AI agents.
Bio: Stefano Soatto is a Vice President at AWS Agentic AI and a Professor of Computer Science at UCLA. He received his PhD in Control and Dynamical Systems from the California Institute of Technology and his D.Ing. from the University of Padova, Italy, and was a postdoctoral scholar at Harvard University. He is a Fellow of the ACM and of the IEEE.