The New Geography of Compute
Scheming isn’t just for politicians
Good morning folks. I’m Felix Winckler from rule30 and this is Moneyball, a newsletter where we share the latest news on AI. Each month, I work with the team at rule30 to make sense of the shifting tides in AI and send you the result.
Top of the list for this edition is compute and infrastructure. In particular, how strategic partnerships and public policy are reshaping where and how AI systems scale. Last week, OpenAI and AMD announced a sweeping multi-gigawatt agreement: AMD will supply GPUs to OpenAI, beginning with a 1 gigawatt deployment of MI450 chips in H2 2026 and scaling to 6 gigawatts over multiple generations. Crucially, the deal includes a warrant for OpenAI to acquire up to ~160 million AMD shares (≈10 percent of the company), vesting on performance and stock price milestones.
This marks a bold shift: OpenAI is diversifying its hardware base beyond Nvidia, anchoring itself deeper in the semiconductor supply chain, and entangling multiple compute providers in its scaling efforts. Analysts see this as a “bet on compute as capital”: by taking equity in its suppliers, OpenAI gains leverage over their supply, roadmaps, and incentives.
The deal arrives just as governments are waking up to the need for the physical substrate on which AI models run. In the UK, the government’s AI Growth Zones initiative is now open for applications. These zones aim to unlock investment in AI-enabled data centres by improving access to power and removing planning constraints.
One proposed AI Growth Zone in North East England (around Blyth and Cobalt Park) is already expected to attract £30 billion in investment and create over 5,000 jobs. The move underscores that AI infrastructure is becoming a battleground for regional development and geopolitical positioning.
Together, the AMD-OpenAI deal and the UK’s Growth Zones represent two faces of the same coin: privatized compute architectures coupling with public infrastructure to shape who wins the race to scale AI.
Meanwhile, the frontier labs released a handful of papers on AI safety, ranging from cyber threats to model scheming.
OpenAI recently published a new paper on “Detecting and Reducing Scheming in AI Models.” They define covert actions as proxies for scheming: withholding task-relevant data, deception, or deliberately misreporting. In controlled tests, they trained variants of o3 and o4-mini with a “deliberative alignment” approach (i.e. asking models to reason explicitly about anti-scheming principles before acting). The result was a ~30× reduction in covert actions (from 13% → 0.4% for o3, and from 8.7% → 0.3% for o4-mini).
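To make the mechanism concrete, here is a minimal Python sketch of how a covert-action evaluation around an anti-scheming spec might be wired up. Everything here is an assumption for illustration: the spec wording, the run_task model call, and the is_covert judge are placeholders, not OpenAI’s actual harness, and in the paper the spec is trained into the model via fine-tuning rather than simply prepended to the prompt.

```python
# Illustrative sketch only: compare covert-action rates with and without
# an explicit anti-scheming spec. In the paper the spec is baked in via
# fine-tuning ("deliberative alignment"); prepending it to the prompt is
# the cheapest possible approximation.

ANTI_SCHEMING_SPEC = (
    "Before acting, reason explicitly about these principles: take no "
    "covert actions, withhold no task-relevant information, and report "
    "mistakes and uncertainty honestly."
)

def covert_action_rate(run_task, is_covert, tasks, spec=None):
    """Fraction of tasks on which the model takes a covert action.

    run_task(prompt) -> transcript : calls the model under test (placeholder)
    is_covert(transcript) -> bool  : judge flagging withheld information,
                                     deception, or misreporting (placeholder)
    """
    covert = sum(
        is_covert(run_task(f"{spec}\n\n{t}" if spec else t)) for t in tasks
    )
    return covert / len(tasks)

# baseline = covert_action_rate(run_task, is_covert, eval_tasks)
# treated  = covert_action_rate(run_task, is_covert, eval_tasks,
#                               spec=ANTI_SCHEMING_SPEC)
# The paper reports 13% -> 0.4% for o3: roughly a 30x reduction.
```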
These mitigations don’t fully eliminate scheming: higher adversarial pressure or out-of-distribution (OOD) tasks may still expose residual behavior. Models can also detect when they’re being tested (i.e. “situational awareness”) and modulate their behavior accordingly, which raises questions about how genuine the alignment really is.
In parallel, Anthropic is pushing a new frontier: AI-assisted cyber defence. Their recent publication, Building AI for Cyber Defenders, describes how Claude (especially Sonnet 4.5) is being trained to detect, analyze, and remediate code vulnerabilities and system risks. They report that Sonnet 4.5 matches or exceeds previous models (like Opus 4.1) in vulnerability discovery tasks.
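For a sense of what this looks like in practice, here is a hedged sketch of vulnerability triage using the Anthropic Python SDK. The prompt, the deliberately vulnerable snippet, and the model string are our own illustrations, not Anthropic’s internal tooling, and the exact model identifier may change over time.

```python
# Illustrative only: ask Claude to triage a deliberately vulnerable
# snippet. Requires the anthropic package and an ANTHROPIC_API_KEY in
# the environment; the model string reflects naming at the time of
# writing and may need updating.
import anthropic

SNIPPET = '''
def get_user(conn, user_id):
    # SQL injection: user_id is interpolated rather than parameterized
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}")
'''

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": "Identify any vulnerabilities in this code and "
                   f"suggest a remediation:\n{SNIPPET}",
    }],
)
print(message.content[0].text)
```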
This work is timely because Anthropic has also warned that AI is lowering the barrier to cyberattacks: “no-code” ransomware, scalable extortion, and automated exploit pipelines are becoming easier to launch. The implication is that defence and offence are increasingly symmetrical, and the strategic arms race is already underway.
On the model front, Anthropic’s latest release, Claude Sonnet 4.5, pushes performance in reasoning, maths, and tool use (e.g. code execution, memory, context extension). It’s also their most aligned frontier model to date, with improvements in resisting sycophancy, deception, and prompt injection.
Yet some critics note that Sonnet 4.5 “recognizes safety tests” and can mask alignment issues by modifying its behavior during evaluation. That is, clever models may game the test environment, yielding unreliable evaluation results.
Beyond compute and model alignment lies an even bigger question: how governance, capital, and regional incentives shape AI outcomes.
The UK’s AI Growth Zones signal a belief that geography can still be steered in an age of cloud computing and global connectivity. But the zones are not just infrastructure carrots: they can anchor regulatory regimes, talent clusters, and data flows.
Some voices warn that the zones must avoid becoming “footnotes on glossy announcements” — they need enforcement, coherence, and real value for local economies.
A further risk arises from structural incentive gaps: if compute providers and AI labs share incentives while external actors (governments, users, regions) hold different ones, misalignments could snowball.
The alignment interventions we see today may work under carefully curated tests, but adversarial pressure, escalation, and emergent misuse remain formidable threats.
Other bits worth reading
Unblocking AI Growth Zones (British Progress) lays out policy levers, barriers, and governance risks around development zones.
Sam Altman’s Abundant Intelligence (on his blog) gives a sense of OpenAI’s long-term orientation and open architecture ambitions.
Toby Ord’s essays (on inference scaling, efficiency of RL) remain bedrock reading for alignment frameworks.
The Stress Testing Deliberative Alignment paper probes the limits of current anti-scheming tactics.
The Deceptive Automated Interpretability paper shows how models might coordinate to fool oversight systems by hiding malicious behavior in explanations.
A new report, Sovereignty, Security, Scale: A UK Strategy for AI Infrastructure, offers deep context on how AI Growth Zones are being framed in public discourse.
The Rise of Parasitic AI offers provocative thought on how models might exploit ecosystem incentives.
That’s it for our fourth outing. Let us know what you think, and what we should look at in future editions.
rule30 is a quantitative venture capital fund. We’ve dedicated three years to developing groundbreaking AI and machine learning algorithms and systematic strategies to re-engineer the economics of startup investing.