Highlights:
- The declaration underscores urgent AI risks, including the use of machine learning models to generate deceptive content.
- The declaration also discusses the responsibility of private companies, especially those developing cutting-edge AI models, in ensuring safe technology use.
The United States, the United Kingdom, China, and 25 other nations have all signed a declaration emphasizing the need to address the possible risks associated with artificial intelligence.
The document, known as the Bletchley Declaration, was unveiled during the recent high-profile AI Safety Summit at Bletchley Park in the United Kingdom. It is the first of several planned events centered on AI risks and mitigation strategies: South Korea is scheduled to host a second summit on the subject in six months, and France will host a third approximately a year from now.
The declaration is a roughly 1,300-word document listing some of the risks associated with advanced AI models, along with possible mitigations. The European Union, a signatory to the declaration along with 28 countries, noted that artificial intelligence is not only in widespread use today but is also expected to become even more prevalent. The signatories declare, “This is, therefore, a unique moment to act and affirm the need for the safe development of AI.”
The declaration goes on to list several AI risks it considers especially pressing. One is the possibility of using machine learning models to create misleading content. “Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks,” the signatories wrote.
The nations that backed the declaration have committed to collaborating to mitigate those risks. As part of the initiative, they plan to increase the number of participating countries and broaden the scope of existing AI safety collaborations.
The declaration lists several key goals for the effort. “Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding” will be the participating nations’ top priority. They will also work to implement “respective risk-based policies across our countries” to address AI-related concerns.
Participants in the initiative will also “resolve to support an internationally inclusive network of scientific research on frontier AI safety.” Alongside the summit at which the declaration was unveiled, the White House announced a new research institute that will develop technical tools for identifying and reducing the risks associated with artificial intelligence. The United Kingdom had previously detailed plans for a similar institute.
The declaration also addresses the role of private companies, particularly those developing cutting-edge AI models, in ensuring that the technology is used safely. It states, “We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems.”
Siân John, chief technology officer at the security consultancy NCC Group, said, “This week’s Bletchley Declaration – alongside the G7’s Hiroshima Process and domestic moves like the White House’s Executive Order – represent critical steps forward toward securing AI on a truly global scale. We are particularly heartened to see commitments from the Bletchley signatories to ensure that the AI Safety Summit is not just a one-off event, but that participants will convene again next year in South Korea and France, ensuring continued international leadership. In doing so, it will be important to set clear and measurable targets that leaders can measure progress against.”
However, the declaration doesn’t address every AI-related issue. Joseph Thacker, a researcher at the software-as-a-service security firm AppOmni Inc., noted, “The declaration doesn’t cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn’t capable of anything critically dangerous.”