In early November 2023, the UK government hosted a global summit on Artificial Intelligence (AI) Safety to examine the role of AI in transforming politics, the economy, society, and international affairs. The summit brought together about 150 global representatives and diplomats, including government leaders, ministers, industry and academic experts, and civil society leaders.
The UK government’s concerns over unregulated AI development arose from the rapid growth of AI companies in the country, of which there are almost twice as many as in any other European country. The sector employs more than 50,000 people and contributes £3.7 billion to the economy per year.
A key focus of the summit was “frontier risk”: the risks arising from the training and development of the most advanced AI models. This marks a step forward from the open letter signed in May 2023 by over 350 industry leaders warning of the potential existential threat AI poses to humanity. According to the UK government’s website, the summit aimed for attendees to “work towards a shared understanding of risks and coordinate a global effort to minimise them”.
Following the signing of the Bletchley Declaration on the risks of AI development, the United Nations confirmed an expert AI panel modelled on the Intergovernmental Panel on Climate Change and called on major tech companies to collaborate in testing models before release.
Consensus or division
While many politicians and executives recognised the existential risk AI presents to humanity, pointing to the unchecked potential of disinformation, others, including Nick Clegg of Meta, argued that beyond the immediate threat to democratic polls, the existential fears are “overplayed”. For politicians and bureaucrats, the pressing concern is the upcoming elections in the US, India, and the UK, which could be affected by the unregulated use of AI.
Still, approaches to AI regulation diverged: countries in the EU are closer to passing AI legislation, while UK officials do not see such rules as immediately necessary. According to reports by The Guardian, the officials agreed: “What we need most of all from the international stage is a panel like the International Panel on Climate Change, which at least establishes a scientific consensus about what AI models are able to do”.
Other notable discussions covered algorithmic discrimination, the replacement of jobs, the environmental impact of data centres, and the subversion of democracy through misinformation and disinformation.
The Bletchley Declaration: Outcomes
The summit succeeded in identifying the risk areas in AI development, in particular the concentration of development power in the hands of the private sector. UK Prime Minister Rishi Sunak remarked: “We shouldn’t rely on them (companies developing AI) marking their own homework. Only governments can properly assess the risk to national security”. He also highlighted the UK’s effort to lead government oversight of AI: “The UK’s answer is not to rush to regulate. We’re building world leading capability to understand and evaluate the safety of AI models within government”.
At the same time, AI holds significant potential to drive growth in sectors including education, health care, access to justice, and environmental protection. Regulation at the domestic level alone cannot fruitfully manage the risks: unilateral action risks an “arms race” between countries, defeating the purpose of regulation.
The Bletchley Declaration is therefore an effort to avert threats to life and limb and to preserve human rights and the UN Sustainable Development Goals. The UK noted the importance of the document: “The Declaration fulfils key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration”. Signed by 28 countries and the European Union, the declaration leaves open questions of interpretation and extent of application: for instance, the statement that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed” does not specify how these values are to be put into practice.
In addition to the declaration, the UK announced a new AI Safety Institute to carry out safety evaluations of frontier systems, along with a body chaired by scientist Yoshua Bengio to report on AI risks and capabilities. Companies in possession of frontier systems agreed to make them available for scrutiny. The summit also opened dialogue on AI norms with non-democracies such as China.
Still, analysts have argued that the summit inadequately covered the dangers of AI development, criticising its narrow focus and the superficial language of the declaration. Closing the summit, UK Prime Minister Rishi Sunak discussed the topic with Elon Musk, who remarked: “We live in the most interesting times. And I think this is 80% likely to be good, and 20% bad, and I think if we're cognisant and careful about the bad part, on balance actually it will be the future that we want”.