Who Owns the Stack? Capital, Code, and Control in the Age of AI
Good morning folks. I’m Felix Winckler from rule30 and this is Moneyball, a newsletter where we share the latest news on AI. Each month, I work with the team at rule30 to make sense of the shifting tides in AI and send you the result.
At the top of the list this month: how power in AI is shifting from code to systems. A new report from the Centre for British Progress argues that the UK’s current approach to funding innovation — through institutions like the British Business Bank and regional investment funds — isn’t built for the new era of industrial strategy. It proposes a national investment body, British Sovereign Capital, to help the country back its own high-growth and strategically important tech firms rather than losing them overseas.
At the same time, new research from the Australian Institute for Social Innovation shows that, despite all the hype, today’s AI systems still fall short in areas like reasoning, reliability, and adaptability. And Anthropic’s latest paper, Stress-Testing Model Specs Reveals Character Differences among Language Models, finds that even when models are trained with the same ethical or behavioural rules, they often act in very different ways once tested under real-world conditions.
Put together, these stories show a deeper shift in the AI landscape. The real competition is no longer just about building bigger models — it’s about who controls the capital, the rulebooks, and the infrastructure that shape how AI is built and used.
Sovereign Capital and the Economics of AI Power
A recent briefing from British Progress highlights a turning point in how the UK funds and builds its technology sector. The report points to three major shifts: British founders are moving overseas in search of more supportive funding environments; other countries are treating their leading tech firms as strategic national assets; and AI is starting to disrupt the UK’s traditional, services-based economy.
To respond, the authors propose creating a new national investment body called “British Sovereign Capital.” Unlike existing schemes such as the Enterprise Investment Scheme (EIS) or Venture Capital Trusts (VCTs), which mostly target startups, this new institution would focus on direct investment in larger, strategically important companies — especially those working in areas like AI, advanced computing, and infrastructure.
Why does this matter? Because in today’s AI race, success isn’t determined just by who has the smartest models. It’s increasingly about who controls the capital, the compute infrastructure, and the ecosystem that supports those models. In other words, the winners will be the countries and companies that can anchor the entire AI “stack” — from chips and data centres to funding and regulation.
There are signs the UK government is starting to think this way. Its AI Growth Lab call for evidence suggests a shift toward new hybrid public-private partnerships designed to scale AI safely and competitively. At the same time, regions are beginning to treat data centres, compute clusters, and energy infrastructure as part of national security, not just private enterprise.
For investors, this means the old metrics of “scale” — counting users, market share, or model parameters — are no longer enough. What matters now is access: to compute, to sovereign funding, and to a stable regulatory base. AI labs that secure those advantages — through government partnerships or national infrastructure — could easily outpace better-engineered rivals that lack them.
In short, the terrain is changing fast. The next phase of competition won’t just be about building smarter algorithms, but about building the systems — financial, industrial, and institutional — that make those algorithms possible.
Alignment, Specifications, and the Hidden Behavioural Risks
On the alignment front, Anthropic’s new research looks at a part of AI development that’s often ignored but hugely important: the “specification layer.” Think of it as the rulebook or moral contract that tells an AI model how it should behave — what values to prioritise, what trade-offs to make, and how to respond in uncertain situations. Every major AI lab writes its own version of this rulebook, combining human feedback, company values, and ethical guidelines.
Anthropic’s team decided to stress-test how well these specifications actually work. They ran twelve leading AI models through over 300,000 simulated moral and practical dilemmas, where each system had to choose between competing priorities — for example, “should I protect user privacy even if it reduces model accuracy?” or “should I favour the public good over efficiency?”
The results were surprising: in more than 70,000 cases, models that were supposedly following the same specification made completely different choices. In other words, even when you give two models the same ethical and behavioural instructions, they can interpret them very differently once the problems become complex or unfamiliar.
A few clear themes emerged:
Ambiguous instructions lead to mixed behaviour. Rules like “do good” or “maximise benefit” sound sensible, but they clash when the AI has to decide which “good” to prioritise.
Consistency breaks down under pressure. Models that behave predictably in short, simple tasks often start to contradict themselves when dealing with long, multi-step or adversarial scenarios.
Lab results don’t always translate to the real world. Even models that pass safety benchmarks can act unpredictably once they’re deployed into more chaotic or interactive environments.
The lesson is that aligning AI isn’t just a matter of training the model better. It’s about engineering the whole system — from the specifications and data to the compute infrastructure and institutional oversight — so that behaviour remains stable and trustworthy at scale.
Several related studies underline this challenge. The open-source project LLM Brain Rot has shown that large language models can gradually “drift,” changing their tone or judgement months after release. The KIMI K2 model used a new reinforcement-learning method based on human-written rubrics to reduce reward-hacking, but that in turn made the model more assertive — and occasionally overconfident. And the AISI report found that despite rapid progress in scaling, eight key cognitive benchmarks for general-purpose reasoning have barely improved.
For investors, policymakers and strategists, this all points to a new kind of systemic risk. Neglecting specification design — the governance layer of how models think and decide — may soon be as costly as under-investing in data or compute.
Productivity, Work, and the Human Cost of Efficiency
Finally, it’s worth looking beyond the technology itself to how AI is reshaping work, productivity, and the way people collaborate.
A recent essay called Life After Work makes an important point: just because AI can make tasks faster doesn’t mean it automatically makes our lives or workplaces better. If companies don’t rethink how teams are organised (or how AI fits into day-to-day workflows) the benefits can quickly evaporate. Productivity is not just about doing more; it’s about doing work that actually matters.
Supporting that argument, a report from Futurism finds that AI is often lowering workplace productivity rather than improving it. In many offices, AI tools are producing a flood of low-quality “workslop” — documents, summaries, and ideas that look finished but lack real insight. Colleagues then have to spend time fixing, rewriting, or double-checking this output. Around 40% of employees say they’ve been handed AI-generated work that turned out to be unreliable, which can undermine trust and waste hours across teams. In effect, AI isn’t always taking work away — it’s shifting the burden from creators to reviewers.
There’s also a deeper psychological issue. A recent study in Nature Human Behaviour looked at how large language models interact with people and found that most current systems are far too eager to please. Across 11,500 test questions, AI models were about 50% more likely than humans to agree with a user’s incorrect statement or flawed reasoning. The researchers call this sycophancy bias — the model’s tendency to flatter or echo rather than challenge. Among the models tested, GPT-5 was the least sycophantic (around 29%), while DeepSeek-V3.1 was the most (around 70%). The risk is that in sensitive fields like science, healthcare, or policy, AI systems could reinforce human mistakes instead of catching them.
These findings are a reminder that AI adoption isn’t just a technical challenge — it’s an institutional one. Governments are starting to recognise this. The UK’s AI Growth Lab has launched a call for evidence to understand how AI is changing labour markets, education, and public infrastructure. That’s a sign that policy is catching up to the scale of the transformation.
It also raises some important questions:
Will AI companies think more deeply about how their tools fit into real human workflows, rather than just chasing speed and scale?
Will national policy focus on helping institutions adapt — not just on building new AI capabilities?
And will productivity gains from AI actually empower people, or will they mostly disrupt the systems that already work?
The answer will decide whether AI becomes a genuine engine of progress — or just another layer of friction in how we work and think.
Other Bits Worth a Quick Look
Anthropic’s Small-Samples Poison paper on how tiny data corruptions can cascade into major behavioural shifts.
Dario Amodei’s statement on American AI leadership outlining Anthropic’s policy stance.
OpenAI × Broadcom collaboration to build 10 GW of custom AI accelerators.
KREA Realtime Video from Hugging Face, showing how real-time generative tools are changing creative workflows.
AGI Definition AI initiative exploring frameworks for classifying advanced systems.
Claude for Life Sciences bringing domain-specific reasoning tools to research.
General Intuition raised $133.7 M to expand its frontier lab merging games and real-world dynamics.
The rise of AI in creative industries — from Xania Monet signing a $3 M record deal to the World of AI Film Festival.
OpenAI’s new releases, Sora 2 and Atlas, are making waves — have you tried them yet?
That’s it for this edition. Let me know your feedback, what’s missing, and what you’d like to see next time.
Until then, onward.
rule30 is an AI research lab building systematic strategies to trade venture assets at scale. We’ve dedicated three years to developing groundbreaking AI and machine-learning algorithms and systematic strategies to re-engineer the economics of startup investing.


