Google DeepMind Chief Calls for Urgent AI Safety Research as Global Leaders Clash Over Governance

The chief executive of Google DeepMind has said that research into the dangers of artificial intelligence must be treated as a matter of urgency, as political and industry leaders at one of the largest AI summits ever held struggled to reach consensus on how to manage the technology's rapid advancement.
Sir Demis Hassabis, speaking at the AI Impact Summit in Delhi, said the industry required what he described as intelligent regulation that focuses on genuine risks rather than broad restrictions. He identified two central threats posed by increasingly capable AI systems: exploitation by malicious actors, and the possibility that advanced systems could eventually become difficult for humans to control.
"Robust guardrails" against those risks were essential, he said, while also acknowledging the difficulty facing regulators attempting to keep pace with the speed at which AI is developing.
Sir Demis was candid about the limits of his own company's influence. When asked whether Google DeepMind had the authority to slow the pace of AI development to allow safety experts more time, he indicated that while his organisation had a meaningful role, it remained only one part of a much larger ecosystem.
His comments were echoed by OpenAI chief executive Sam Altman, who also called for urgent regulatory action during his address to the summit. Indian Prime Minister Narendra Modi similarly urged international cooperation, arguing that countries must work collectively to ensure the benefits of AI are widely shared.
The United States, however, took a markedly different position. Michael Kratsios, a technology adviser to the White House and head of the US delegation, stated clearly that the Trump administration rejects the concept of global AI governance, arguing that centralised oversight and bureaucratic structures would impede rather than support the technology's potential.
The summit, which brought together delegates from more than 100 nations including several heads of state, is expected to conclude with a shared statement from participating countries. The UK was represented by Deputy Prime Minister David Lammy, who argued that responsibility for AI safety does not rest with technology companies alone. Politicians, he said, must work alongside the industry, with public benefit and security taking priority.
On the question of geopolitical competition in AI development, Sir Demis offered a measured assessment. He said the United States and its western allies hold a slight lead over China in AI capability, but suggested that gap could narrow to a matter of months rather than years.
Despite uncertainty around governance, Sir Demis expressed confidence in the transformative potential of the technology. Over the next decade, he predicted AI would function as what he called a superpower for human creativity, dramatically expanding what individuals and organisations are capable of building and achieving.
He also addressed the question of education, arguing that a background in science, technology, engineering, and mathematics would continue to give people an advantage in working with AI systems, even as the technology itself handles more of the technical execution. As AI takes on tasks such as writing software code, he suggested that qualities such as creativity, judgement, and taste would become increasingly valuable for those working with these tools.
Sir Demis, who was awarded the 2024 Nobel Prize in Chemistry in recognition of his work in AI-driven protein structure prediction, said his approach has always sought to balance ambition with responsibility. He acknowledged that mistakes are inevitable but expressed confidence that his organisation approaches the challenge more carefully than most.
Industry Impact and Market Implications
The statements made at the AI Impact Summit carry significant implications for the direction of AI development, regulation, and investment across both public and private sectors.
The open disagreement between the United States and the broader international community on governance represents more than a diplomatic difference of opinion. It reflects a structural tension that could shape how AI products are developed, deployed, and sold across different jurisdictions for years to come. If major AI economies adopt divergent regulatory frameworks, technology companies operating globally may face a patchwork of compliance obligations that increase costs and complicate product rollouts. Multinational firms in particular could find themselves navigating conflicting requirements depending on where their systems are used.
The emphasis from figures such as Sir Demis Hassabis and Sam Altman on urgent safety research also signals that the leading AI laboratories are increasingly aware that public confidence in the technology is not guaranteed. Calls for regulation from within the industry, rather than solely from governments, suggest a degree of self-awareness about reputational risk as AI systems take on more consequential roles in areas such as healthcare, finance, and critical infrastructure.
From a market perspective, the geopolitical framing around the US-China AI race adds pressure on investors, governments, and companies to accelerate development rather than pause for reflection, which may complicate efforts to establish meaningful safety standards. Countries that adopt proactive but proportionate regulatory frameworks could gain a competitive advantage by attracting AI investment from companies seeking legal certainty, while those that regulate too aggressively risk stifling innovation, and those that move too slowly risk allowing harm.
The observation that creativity, judgement, and taste may become differentiating skills as AI handles more technical work has notable implications for the labour market and for education policy. Institutions that continue to emphasise rote technical training without developing higher-order thinking and interdisciplinary skills may find their graduates less prepared for an AI-augmented economy than those that take a broader approach to STEM and the humanities alike.