Good morning folks. I’m Harry Law of Learning From Examples, a newsletter about the history of AI and its future. Each month, I work with the team at rule30 to help make sense of what’s been happening in AI over the last four weeks.
Top of the list for this edition is geopolitics. Chip exports entered a strange new era when the Trump White House reversed course by striking a deal with Nvidia and AMD that allows the firms to sell chips to China in exchange for 15% of the revenue from those sales. The move overturned years of bipartisan consensus in which American policymakers sought to block China from accessing the most powerful American-made chips to power its AI systems.
It’s in some ways a strange choice given that America’s recent AI Action Plan, which we discussed in the last edition of Moneyball, seeks to help the United States ‘win’ the AI race against China. This deal likely complicates that goal because it gives China access to a core piece of technology that it has so far struggled to replicate.
Computing power is one of the core inputs into the AI development process, alongside data. With respect to the latter, the American AI lab Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by book authors who argued that the firm used pirated copies of their works to train its models. Justin Nelson, a lawyer for the authors, described the settlement as “the largest copyright recovery ever”.
A federal judge issued a mixed ruling in the case in June, finding that training AI chatbots on copyrighted books wasn’t illegal, but that Anthropic wrongfully acquired millions of books through pirate websites. Under the settlement, authors will reportedly receive around $3,000 per work. It’s a result that puts the labs in an interesting position: if they obtained the works legally, then they are likely to be covered; if not, expect similar suits to follow.
Will this slow down the pace of AI progress? Probably not. Anthropic recently raised a $13 billion Series F funding round at a $183 billion post-money valuation, which means the group has plenty of cash to pay for the settlement. Bigger groups like Google and OpenAI are even better funded.
Elsewhere, researchers continued to make sense of the various quirks of AI models:
OpenAI released a new paper, ‘Why language models hallucinate’, which proposed that the systems get things wrong because training and evaluation reward guessing instead of admitting uncertainty (see the toy sketch after this list).
Several researchers put their names to a survey of AI reasoning. The authors categorise existing methods along two dimensions: (1) regimes, which define the stage at which reasoning is achieved; and (2) architectures, which determine the components involved in the reasoning process.
Google DeepMind claims that general agents contain ‘world models’. They note that any agent that generalises to multi-step goals must have learned a predictive model of its environment, that this model can be extracted and examined, and that more complex goals demand increasingly accurate world models.
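The incentive the hallucination paper points to is easy to see with a toy grading example. What follows is a minimal sketch of my own, not code from the paper; the function names, probabilities, and penalty values are illustrative assumptions.

```python
# Toy illustration: under plain accuracy grading, a low-confidence guess always
# has a higher expected score than abstaining, so guessing is rewarded.

def expected_score(p_correct: float, abstain: bool, abstain_credit: float = 0.0) -> float:
    """Expected score on one question under accuracy-style grading.

    p_correct: assumed probability the model's best guess is right (illustrative).
    abstain: whether the model says "I don't know" instead of guessing.
    abstain_credit: points for abstaining (zero under plain accuracy).
    """
    return abstain_credit if abstain else p_correct  # wrong guesses score 0, never negative


def expected_score_penalised(p_correct: float, abstain: bool, wrong_penalty: float = 0.5) -> float:
    """Same question, but wrong answers are penalised, which can flip the incentive."""
    return 0.0 if abstain else p_correct - (1 - p_correct) * wrong_penalty


# Even a 10%-confident guess beats abstaining when abstention earns nothing...
print(expected_score(0.10, abstain=False))            # 0.10
print(expected_score(0.10, abstain=True))             # 0.00
# ...but with a penalty for wrong answers, abstaining becomes the better choice.
print(expected_score_penalised(0.10, abstain=False))  # -0.35
print(expected_score_penalised(0.10, abstain=True))   # 0.00
```

The numbers only illustrate the shape of the argument: a scoring rule that never penalises wrong answers or credits uncertainty makes confident guessing the dominant strategy.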
Meanwhile, more work emerged seeking to understand how exactly people use AI in the real world:
A new Stanford paper, ‘Canaries in the coal mine’, shows entry-level jobs shrinking in AI-exposed occupations. The researchers reckon that employment for 22-to-25-year-olds in roles like software engineering and marketing has dropped, while employment for older workers has remained stable.
A new post argues that the ‘breakthroughs’ people claim to have discovered in tandem with LLMs aren’t anything of the sort. The author suggests that “New ideas in science turn out to be wrong most of the time, so you should be pretty skeptical of your own ideas”.
Research from the University of Pennsylvania argues that today’s most powerful AI models can be manipulated using the same psychological approaches that tend to work well on humans.
And in industry news:
OpenAI is working on a LinkedIn competitor, the OpenAI Jobs Platform, which will “use AI to help find the perfect matches between what companies [and governments] need and what workers can offer”.
Anton Leicht argued that building an 'AI safety movement' is a mistake. He writes that “movement-building efforts are underway, but I think they'll change AI policy debate for worse: discredit current popular support, create liabilities for serious advocacy, and get captured by fringe positions.”
Anthropic and OpenAI released the results of a partnership focused on testing each other’s models for misalignment. They found some examples of concerning behaviour in all the models tested, but reported that – compared to the Claude 4 models – OpenAI’s o3 looks “pretty robustly” aligned.
Finally, some other bits of reading worth a quick look:
Dan Williams of the University of Sussex put together an excellent reading list about the philosophy of AI.
The Institute for Decentralized AI launched to help work on AI safety solutions that resist centralised control.
Brendan McCord of the Cosmos Institute wrote on ‘Western Dynamism’, the idea that America’s strength flows from tension rather than unity.
Asterisk magazine announced a new round of AI fellows.
Famous ‘AI scaling laws’ actually go back at least as far as Bell Labs in the 1990s.
The Oakland Ballers announced they will be the first team to be managed in-game by artificial intelligence.
And last but certainly not least, someone ran Meta’s Llama 2 large language model on a business card.
That’s it for our fourth outing. Let us know what you think, and what we should look at in future editions.
rule30 is a quantitative venture capital fund. We've dedicated three years to developing groundbreaking AI and machine learning algorithms and systematic strategies to re-engineer the economics of startup investing.