The world has moved a step closer to unifying its goals and values related to artificial intelligence following a meeting of the Council of Europe's justice ministers.
The United States, the European Union and the United Kingdom are expected to sign the Framework Convention on AI on Sept. 5. The treaty highlights human rights and democratic values as key to regulating public and private-sector AI models.
United in AI
The Framework Convention on AI would be the world's first international treaty on AI that is legally binding on its signatories. It would hold each signatory accountable for any harm or discrimination caused by AI systems.
It also mandates that the outputs of those systems respect citizens' equality and privacy rights, giving victims of AI rights violations legal recourse. However, punitive measures for violations, such as fines, have yet to be established; for now, compliance with the treaty is enforced only through monitoring.
The UK’s minister for science, innovation and technology, Peter Kyle, said that signing the treaty would be an “important” first step globally.
“The fact that we hope such a diverse group of nations is going to sign up to this treaty shows that actually, we are rising as a global community to the challenges posed by AI.”
The treaty was initially drafted two years ago, with contributions from over 50 countries, including Canada, Israel, Japan and Australia.
While it may be the first international treaty on AI, individual jurisdictions have been actively developing their own, more localized AI regulations.
Global AI rules
Over the summer, the EU became the first jurisdiction to implement sweeping rules governing the development and deployment of AI models, particularly high-capability models trained with large amounts of computing power.
The EU AI Act came into force on Aug. 1, introducing significant AI regulations through a phased implementation schedule and key compliance obligations.
Though the legislation was developed with safety in mind, it has also become a point of contention among developers in the AI space, who claim it stifles innovation in the region.
Meta, the parent company of Facebook and developer of one of the biggest large language models, Llama 2, has stopped rolling out its latest products in the EU because of the regulations, leaving Europeans without access to its latest and most advanced AI tools.
Tech firms banded together in August to pen a letter to EU leaders, asking for more time to comply with the regulations.
AI in the US
In the US, Congress has not yet implemented a nationwide framework for AI regulation. However, the Biden administration has established committees and task forces for AI safety.
Meanwhile, in California, lawmakers have been busy drafting and passing AI regulations. In the last two weeks, two bills have passed the State Assembly and are awaiting a final decision from California Governor Gavin Newsom.
The first regulates and penalizes the creation of unauthorized AI-generated digital replicas of deceased personalities.
The second, a highly controversial bill opposed by many of the leading AI developers, mandates safety testing for most of the most advanced AI models and requires developers to outline a "kill switch" for such models.
California's AI regulation carries particular weight because the state is home to the headquarters of some of the world's leading AI developers, such as OpenAI, Meta and Alphabet.