Demis Hassabis had a plan. Cure cancer. Crack protein folding. Use AI as a scientific instrument, not a consumer product. Then ChatGPT launched in November 2022, went viral overnight, and changed everything—including, he admits, the nature of the race he’s now running. “We’re in this sort of ferocious commercial pressure race that everyone’s locked into currently,” the Google DeepMind CEO told YouTuber Cleo Abram in a recent interview. “And then on top of that there’s geopolitical issues like the US-China race. So there’s multiple levels of pressure to move fast.”

He’s not blaming OpenAI for pulling the trigger. He’s saying nobody saw the bullet coming.
Every major AI lab had something like ChatGPT. OpenAI just shipped it first—and the world wasn’t ready
Hassabis told Abram that the leading labs, DeepMind included, had “fairly equivalent systems at the time.” The gap wasn’t capability—it was nerve. OpenAI scaled it and put it out. “I think even they say it was kind of a research experiment,” he said. “They didn’t realize it would go so viral.”

The researchers building these systems were, paradoxically, the least equipped to see their value. They were too close to the flaws—the hallucinations, the gaps, the things that still embarrassed them. They didn’t expect the general public to find genuine use in something so visibly imperfect. That miscalculation, made by almost every lab simultaneously, is what set off the race Hassabis now finds himself inside.

He’s pragmatic about it. There are upsides—faster progress, democratized access, public familiarity with AI before the really consequential systems arrive. But the cost is real: the careful, methodical approach he’d envisioned is gone.
Hassabis wanted AI kept in the lab longer—building AlphaFolds, not chatbots
In an August 2025 interview with the Guardian, Hassabis was unusually candid about what he’d actually wanted. “If I’d had my way, we would have left it in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that,” he said.

His original vision was closer to a CERN model—a global, collaborative, unhurried effort where each step toward AGI was understood before the next one was taken. Alongside that, specialized narrow systems like AlphaFold—which mapped over 200 million protein structures and won him a Nobel Prize in chemistry—could quietly deliver enormous benefits to humanity. No chatbot arms race required.

That’s not the timeline we’re on.
The risks he’s most worried about aren’t today’s AI—they’re three to four years out
With the race underway, Hassabis is increasingly focused on what comes next. In the Cleo Abram interview, he outlined two concerns he thinks aren’t getting nearly enough attention. First: bad actors—from individuals up to nation states—repurposing tools built for beneficial ends. Second, and more unsettling: AI systems themselves drifting off course as they grow more capable and autonomous.

“How do we make sure the guardrails are put in place so that they do exactly what they’ve been told?” he asked. “That is going to be an incredibly hard technical challenge.”

This isn’t abstract worry. An excerpt in Colossus Magazine from Sebastian Mallaby’s book The Infinity Machine documents how Hassabis spent years trying to build formal safety oversight structures inside Google—independent boards, spin-out proposals, governance charters—and watched each one fail. His conclusion, arrived at slowly and reluctantly: structures don’t hold. Real influence over how AI gets deployed comes from being inside the room, not from building fences around it.

For someone who got into AI to answer the biggest questions in science, the chatbot era is a detour he never asked for. He’s making the best of it—but he hasn’t stopped thinking about what it cost.