LEADING AI nations, convened for the first time by the UK and including the USA, China and the European Union, have reached an historic agreement to establish a shared understanding of the opportunities and risks posed by frontier AI and the need for governments to work together to meet the most significant challenges.
The Bletchley Declaration on AI Safety sees 28 countries agree to the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe and responsible way.
The Declaration fulfils key objectives of the AI Safety Summit, which ended yesterday (Thursday), by establishing shared agreement and responsibility on the risks and opportunities of frontier AI, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration.
Countries agreed that substantial risks may arise from the intentional misuse of frontier AI or from unintended issues of control, with particular concern over risks in cybersecurity, biotechnology and disinformation.
The Declaration sets out agreement that there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
Countries also noted risks beyond frontier AI, including those relating to bias and privacy.
Recognising the need to deepen the understanding of risks and capabilities that are not fully understood, delegates have also agreed to work together to support a network of scientific research on frontier AI safety.
This builds on Prime Minister Rishi Sunak’s announcement last week for the UK to establish the world’s first AI Safety Institute, complementing existing international efforts including at the G7, OECD, Council of Europe, United Nations and the Global Partnership on AI.
The Declaration details that the risks are “best addressed through international cooperation”. As part of agreeing a process for international collaboration on frontier AI safety, the Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months.
France will then host the next in-person Summit next year.
The Declaration, building upon last week’s announcement of the UK’s emerging processes for AI safety, also acknowledges that those developing these unusually powerful and potentially dangerous frontier AI capabilities have a particular responsibility for ensuring the safety of these systems, including by implementing systems to test them and other appropriate measures.
Mr Sunak said: “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI, helping ensure the long-term future of our children and grandchildren.
“The UK is once again leading the world at the forefront of this new technological frontier by kickstarting this conversation, which will see us work together to make AI safe and realise all its benefits for generations to come.”
Technology Secretary Michelle Donelan, pictured above, said the Declaration was an important initial step. “We have always said that no single country can face down the challenges and risks posed by AI alone, and the landmark Declaration marks the start of a new global effort to build public trust by ensuring the technology’s safe development.”
Ms Donelan will face questions on the summit and its outcomes when she goes before the Science, Innovation and Technology Select Committee on Wednesday (November 8).
The committee is meeting to discuss the cyber resilience of the UK’s critical national infrastructure.
Michelle Donelan, the Technology Secretary, at the AI Safety Summit. Photo: UK Government
Select Committee chairman Greg Clark said: “The Prime Minister has set out his intention for the UK to be a global leader in the safe development and deployment of artificial intelligence, and this successful summit underlines the UK’s convening power.”
The Bletchley Declaration correctly identifies potential risks associated with the development of AI technology, he added. But existential risk from frontier AI is just one of the 12 governance challenges that the Select Committee has set out.
“Many of the here-and-now challenges need an urgent response,” Mr Clark said.
These include the potential for current and future AI applications to exacerbate biases, to fake people’s words, to allow personal data to be identified and the urgent question of whether models should be required to be open source or proprietary.
“The government must address these here-and-now issues as a priority, and its response to my committee’s interim report on AI – which we expect promptly now this summit has concluded – would be an ideal place to start,” Mr Clark said.
“We also hope to see an urgent response to the White Paper consultation, which closed in June of this year.”