Last month in the House of Lords, we debated an issue that should worry every serious policymaker in this country: the rapid advance of superintelligent artificial intelligence, and the alarming lack of urgency from government in dealing with it.
This was the second debate on the subject in a matter of weeks. That is not a coincidence. It reflects a growing unease that Britain — and much of the world — is drifting towards a technology that could outpace our ability to control it.
This is not science fiction. Real harm is already happening. An advanced AI system developed by Anthropic was recently used in a Chinese state-sponsored cyberattack, with the majority of tasks carried out autonomously by the AI itself. In plain terms, machines are already being trusted to conduct hostile operations at scale.
AI does not recognise borders. It does not respect sovereignty. And once systems reach a point where they can outthink experts, compromise critical infrastructure or destabilise states, the damage will not be easily undone. That is why I am deeply concerned that the Government appears to be taking a “wait and see” approach.
Waiting is not neutral. It is a decision — and a reckless one.
There is international momentum, if we choose to grasp it. The Kuala Lumpur Declaration on responsible AI, agreed through the Commonwealth Parliamentary Association, shows that serious global co-operation is possible. Too often we overlook the Commonwealth as a force for leadership, when in fact it offers an immediate platform for setting shared rules and expectations.
Beyond governments, the warning lights are flashing everywhere. More than 800 leading figures and over 100,000 members of civil society have now called for a halt to the development of superintelligent AI until there is proper scientific and public consensus on safety. Even the technology companies themselves are sounding the alarm. The head of Google DeepMind, based here in the UK, said at Davos last week that he would support a global pause if others did the same.
Perhaps most striking of all was the warning from Geoffrey Hinton, one of the pioneers of AI, who said plainly that international regulation is needed to stop this technology being abused. He compared the situation to chemical weapons — tools so dangerous that the world eventually agreed they must be constrained by international law.
Despite today’s fractured geopolitics, that comparison matters. We did not abandon the Geneva Protocol on chemical weapons because the world was unstable. We created it because the risks were too great to ignore.
The United Kingdom has a real opportunity to lead. The AI Safety Summit at Bletchley Park and the creation of AI security institutes give us credibility and convening power. We can help shape international agreements now — or we can wait until something goes catastrophically wrong.
We have been here before. With social media, governments delayed, dithered and reassured themselves that the market would self-correct. Only after serious harm — to children, to mental health, to democratic debate — did regulation finally arrive. We are still playing catch-up.
With superintelligent AI, there may be no second chance.
That is why I called on the Government to formally recognise extinction-level risk from superintelligent AI as a national security priority, and to lead international efforts towards a moratorium on its development, backed by verifiable commitments from all major AI-developing nations.
This is not about being anti-technology. It is about being pro-human, pro-security and pro-common sense. Britain can lead the world in putting safety first — or it can drift, and hope for the best.
Hope is not a strategy.