We are standing on the precipice of a shift as significant as the Industrial Revolution, yet many industry veterans are still asleep at the wheel. In a fascinating discussion on the Google DeepMind Podcast, host Hannah Fry sat down with Shane Legg, the Co-Founder and Chief AGI Scientist at DeepMind, to discuss the arrival of Artificial General Intelligence (AGI). Legg, who coined the term "AGI" two decades ago, lays out a timeline that suggests this technology isn't a distant sci-fi dream; it is an imminent reality that will fundamentally restructure our economy.
1. The "Minimal AGI" Countdown: 2028
For investors looking at 5-to-10-year horizons, this is the most critical data point. Shane Legg has maintained a prediction since 2009 that we have a 50/50 chance of achieving AGI by 2028. He isn't moving the goalposts; he's doubling down.
He defines "Minimal AGI" as an artificial agent capable of performing the cognitive tasks a typical human can do. While we aren't there yet, current models struggle with "continual learning" and visual reasoning (like understanding perspective in a drawing), Legg believes there are no fundamental blockers remaining. The transition from "sparks" of intelligence to a reliable, human-level agent is largely an engineering and data scaling challenge now.
"Is human intelligence going to be the upper limit of what's possible? I think absolutely not... We need to take seriously the idea that a big change does come."
2. The "Laptop Test" for Labor Disruption
From a value investing perspective, you need to look at your portfolio and ask: "Can this company’s labor force be automated?" Legg offers a terrifyingly simple heuristic: the Laptop Test.
If a job can be done entirely remotely, using a laptop, screen, and microphone, it is purely cognitive work. Legg suggests that while plumbers and physical laborers have a "robotics moat" protecting them for years to come, elite cognitive workers do not. High-powered lawyers, software engineers, and finance professionals are in the direct path of AGI. We are moving from AI as a "tool" (helping you write code) to AI as "labor" (writing the code for you).
"If you can do your work completely that way... then it's probably very much cognitive work. So if you're in that category, I think that advanced AI will be able to operate in that base."
3. The "Expert Blind Spot" (Contrarian Alpha)
One of the most counterintuitive insights Legg offered is that the general public often understands the magnitude of this shift better than the experts. This is a critical psychological insight for market timing.
Experts in specific fields (like law, medicine, or coding) often suffer from a bias: they believe their own domain is too "deep" or "special" for a machine to touch, so they fixate on the current limitations. The public, however, looks at the broad trend lines (an AI that speaks 150 languages and passes the Bar Exam) and correctly identifies it as "intelligent." Legg implies that betting against the "naysaying experts" might be the winning play.
"Often people who are experts in a particular domain, they really like to feel that their thing is very deep and special, and this AI is not really going to touch them... I actually think many people in the general public are ahead of the experts."
4. Defining the "G" in AGI
There is a lot of noise in the market about what qualifies as AGI. Some say it's passing a standardized test; others say it's the ability to turn $100,000 into $1 million. Legg rejects these narrow economic definitions. For him, the value is in the Generality.
Current systems are "uneven": superhuman at storing knowledge (speaking 150 languages) but sub-human at simple logic puzzles. The investable moment arrives when that unevenness smooths out. Once an AI passes a battery of typical human tasks and survives "adversarial testing" (where teams try to break it for months and fail), we have reached the inflection point.
5. System 2 Thinking: The Safety Unlock
A major concern for institutional investors is reliability. You can't replace a compliance officer with a chatbot that hallucinates. Legg discusses the implementation of "System 2" thinking, a concept borrowed from psychologist Daniel Kahneman.
Instead of an AI just reacting with a "gut instinct" (System 1), DeepMind is working on "chain of thought" monitoring. This allows the AI to pause, reason through a problem, evaluate consequences, and then act. This doesn't just make AI smarter; it makes it safer and potentially more ethical than humans, because it can apply ethical rules consistently without emotional bias.
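As a rough illustration of the difference, here is a minimal, hypothetical "deliberate, check, then act" loop. The function names (`draft_reasoning`, `check_against_policy`) are placeholders of mine and do not correspond to any real DeepMind API; the structure simply mirrors the idea of reasoning through a problem and evaluating it against explicit rules before acting.

```python
# Illustrative sketch of a System-2-style loop: reason first, evaluate, then act.
# These functions are placeholders, not a real DeepMind implementation.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    reasoning: str
    approved: bool

def draft_reasoning(task: str) -> str:
    """System 1 stand-in: produce a fast, unchecked chain of thought."""
    return f"Proposed approach for: {task}"

def check_against_policy(reasoning: str, forbidden: list[str]) -> bool:
    """System 2 stand-in: inspect the reasoning before anything is executed."""
    return all(phrase.lower() not in reasoning.lower() for phrase in forbidden)

def deliberate(task: str, forbidden: list[str]) -> Decision:
    reasoning = draft_reasoning(task)                        # think first
    approved = check_against_policy(reasoning, forbidden)    # evaluate consequences
    action = "act" if approved else "escalate to human review"
    return Decision(action=action, reasoning=reasoning, approved=approved)

if __name__ == "__main__":
    print(deliberate("summarize a client contract", forbidden=["share client data"]))
```

The consistency argument falls out of the structure: the same policy check runs on every decision, every time, with no fatigue or emotional bias.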
6. The Physics of Superintelligence
Legg argues that human intelligence is biologically limited by physics: our brains run on 20 watts of power, and our internal signals move at roughly 30 meters per second.
Digital intelligence, by contrast, operates at the speed of light, can scale to hundreds of megawatts, and switches at clock speeds millions of times faster than our neurons fire. Legg's argument is that once we hit AGI (human level), we won't stay there. We will rapidly ascend to "Superintelligence" simply because the physical hardware of silicon is vastly superior to the "wetware" of biology.
"You've got six, seven, maybe eight orders of magnitude [advantage] in all four dimensions simultaneously... Is human intelligence going to be the upper limit of what's possible? I think absolutely not."
Conclusion & Call to Action
The overarching message from Shane Legg is that we are currently standing at the "knee of the exponential curve." Humans are notoriously bad at intuitively understanding exponential growth; we expect change to be linear.
For the investor, the implication is clear: The disruption coming for the "knowledge economy" is not a decades-away risk. It is a medium-term certainty. We are transitioning to a world where AI performs meaningful, productive work, decoupling economic output from human labor. The companies that own the infrastructure, the models, and the "System 2" reasoning capabilities will be the new oil barons, while purely cognitive service industries face an existential pivot.
For more of my insights on this topic, be sure to follow me.
