It’s the third edition of the new-look Moneyball. I’m Harry Law, a former researcher at Google DeepMind who writes Learning From Examples – a newsletter about the history of AI and its future. Each month, I work with the team at rule30 to help make sense of what’s been happening in AI over the last four weeks.
People like to remark that the speed of progress in AI has been head-spinning since the launch of ChatGPT in November 2022. I generally agree with that sentiment, though unlike some I doubt GPT-7 will be the model to enclose the Sun in a Dyson sphere. Nonetheless, trillions (with a t) of dollars are being committed to an infrastructure build-out deemed to be the latest front in China's competition with the West.
Google now spends more in a single year on physical capital like data centres ($85 billion) than the entire UK defence budget ($79 billion). And in 2025, AI capex added more to U.S. growth than consumer spending did, despite the former making up roughly 6% of the economy and the latter roughly 70%. That is less paradoxical than it sounds: a component's contribution to growth is its share of the economy multiplied by its own growth rate, so a small but fast-growing slice can out-contribute a huge but sluggish one (see the sketch below).
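Here's a minimal back-of-the-envelope sketch of that arithmetic in Python. The shares track the figures above, but the growth rates are numbers I've invented purely to illustrate the mechanism, not actual 2025 data:

```python
# A component's contribution to headline GDP growth (in percentage points)
# is its share of GDP multiplied by its own growth rate.

def growth_contribution(share: float, growth_rate: float) -> float:
    """Percentage points of headline growth contributed by one component."""
    return share * growth_rate * 100

# Hypothetical growth rates, chosen only to make the point:
# AI capex: ~6% of GDP, but growing fast (say 20% year on year).
ai_capex = growth_contribution(share=0.06, growth_rate=0.20)      # 1.20pp
# Consumer spending: ~70% of GDP, but growing slowly (say 1.5%).
consumption = growth_contribution(share=0.70, growth_rate=0.015)  # 1.05pp

print(f"AI capex:          {ai_capex:.2f}pp of growth")
print(f"Consumer spending: {consumption:.2f}pp of growth")
```

A 6% slice growing at 20% beats a 70% slice growing at 1.5%, which is roughly the story being told about 2025.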
This is the context in which President Trump unveiled his hotly trailed AI plan. Over 28 pages, the plan describes measures to speed up permits for data centres and recategorise energy assets as ‘covered components’ to allow for speedy parallel deployments. This is generally considered good news for American AI leadership, and comes amidst a few other announcements from the private sector:
Anthropic liked what it saw but said more energy was needed, arguing that China may have fewer chips but is racing ahead of the U.S. in terms of raw power generation capacity.
As it released GPT-5, OpenAI said it had reached an agreement with Oracle to develop 4.5 gigawatts of additional data centre capacity in the U.S.
Outside the U.S., the company also said it was working with European data centre outfit Nscale to build 230 MW of capacity in Norway.
In a move as underwhelming as it is unsurprising, the UK announced it had struck a deal with OpenAI to, uh, ‘explore’ infrastructure investment in Blighty.
From one competition to another, Google DeepMind and OpenAI announced impressive results at the International Mathematical Olympiad.
OpenAI rushed out of the blocks to tell the world it had won a gold medal at this year’s competition, solving five out of the six problems posed by the organisers.
Google DeepMind followed shortly after, respecting the IMO's official request for a media blackout before announcing that it had also taken a gold medal, solving five of the six problems.
The news seemed to surprise prediction markets, which I found strange given DeepMind won silver last time around. Whatever the case, AI watchers were suitably impressed.
And in research corner:
New work from the AI Security Institute argued that gains in language model persuasiveness are more likely to come from post-training techniques than from scale or personalisation.
Longtime LLM critic Melanie Mitchell drew on the well-known simulator theory to explain why some models can’t help but misbehave.
The folks at Anthropic described a phenomenon wherein longer chain-of-thought reasoning actually leads to lower accuracy.
Gustavs Zilgalvis wrote about the 'fourth offset' in national security, in which AI provides the technological edge needed to maintain American military strength.
Finally, some other links we liked the look of but didn’t fit anywhere else:
a16z interviewed Dwarkesh Patel and Noah Smith about AGI, short timelines, and the economy.
The UK AI Security Institute introduced the Alignment Project, a new £15 million fund for research on urgent challenges in AI alignment and control.
Google DeepMind published a paper in Nature introducing Aeneas, the first model designed to help historians contextualise ancient inscriptions.
Ethereum creator and occasional AI commentator Vitalik Buterin responded to the popular AI 2027 forecast, arguing that the authors over-egg the risks posed by superintelligent AI.
After luring in researchers with mountains of cash, Meta shared what is quite possibly the most boring vision of 'superintelligence' imaginable.
That’s it for our third outing. Let us know what you think, and what we should look at in future editions.
rule30 is a quantitative venture capital fund. We've dedicated three years to developing groundbreaking AI and machine learning algorithms and systematic strategies to re-engineer the economics of startup investing.