AI just erased 200 pages from its own bible.
Stuart Russell’s artificial intelligence textbook, Artificial Intelligence: A Modern Approach (co-authored with Peter Norvig), has been teaching computer science students since 1995, and it just got a major update for 2026. But here’s the twist: Russell didn’t just add new chapters. He cut roughly 200 pages of material that used to be considered essential, including detailed explanations of the A* search algorithm that generations of students memorized.
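For readers who never memorized it: A* is a shortest-path search that expands nodes in order of cost-so-far plus a heuristic estimate of remaining cost. This is a minimal sketch of the classic algorithm, not text from the book; the function names and graph interface here are illustrative:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: expand nodes in order of f(n) = g(n) + h(n).
    `neighbors(n)` yields (next_node, step_cost) pairs;
    `heuristic(n)` should never overestimate the true remaining cost."""
    # Each frontier entry: (f-score, cost-so-far, node, path taken).
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + heuristic(nxt), new_g, nxt, path + [nxt]),
                )
    return None, float("inf")  # goal unreachable
```

With an admissible heuristic (one that never overestimates), A* is guaranteed to return an optimal path; that optimality proof is exactly the kind of detail the deleted pages walked through.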
This isn’t spring cleaning. It’s a signal.
The Old Metrics Are Breaking
For decades, we measured scientific progress the same way we measured everything else: by counting. More papers published. Faster processing speeds. Bigger datasets. Better accuracy scores. The numbers went up, and we called it progress.
But AI is doing something strange to this system. When a language model can generate thousands of research papers in an afternoon, what does publication count actually measure? When algorithms optimize themselves faster than humans can evaluate the results, what does “better” even mean?
Russell’s textbook edit hints at a deeper problem. The techniques he removed aren’t wrong—they’re just increasingly irrelevant. AI has moved past them so quickly that teaching them in detail wastes time students could spend learning what actually matters now.
Science at Machine Speed
Here’s what makes this uncomfortable: science has always been a human-paced activity. We publish papers so other humans can read them. We peer review so experts can verify claims. We build on previous work because that’s how knowledge compounds over time.
AI doesn’t need any of that. AlphaFold didn’t read every protein-folding paper ever written; it learned patterns directly from data. GPT-4’s training process doesn’t cite sources. These systems discover relationships that work without necessarily understanding why they work, at least not in ways humans can easily verify.
So how do we measure progress when the thing making progress doesn’t think like us?
What Counts Now
Some researchers are proposing new frameworks. Instead of counting publications, measure real-world impact. Instead of benchmarking against human performance, test whether AI systems can handle genuinely novel situations. Instead of optimizing for accuracy alone, evaluate robustness, fairness, and interpretability.
But even these metrics feel temporary. They’re human-designed measures for non-human intelligence. It’s like judging a fish by how well it climbs trees—the categories themselves might be the problem.
The Textbook Problem
Russell’s deleted pages represent something bigger than curriculum updates. They’re evidence that our knowledge is becoming stratified. There’s a growing gap between “things humans need to understand AI” and “things AI actually does.”
That gap creates a measurement crisis. If the people evaluating AI progress can’t fully understand how modern systems work, how do we know we’re measuring the right things? We end up relying on proxy metrics—benchmarks, leaderboards, demo videos—that might miss what actually matters.
The uncomfortable truth is that we’re entering an era where the rules for measuring scientific progress are being written by the very systems we’re trying to measure. Not explicitly, not consciously, but through the simple fact that AI moves faster than our evaluation frameworks can adapt.
Maybe that’s okay. Maybe science has always been this way—our methods evolving just behind our discoveries, scrambling to make sense of what we’ve already built. But it’s worth noticing when a 200-page deletion tells us more about the future than a thousand-page addition ever could.