UK signs treaty to implement AI safeguards


Last week the UK government signed the first international treaty on artificial intelligence, in a move aimed at preventing the misuse of the technology.


Worries have been growing over the use of AI to spread misinformation or to make decisions based on biased data.

The treaty, also signed by the EU, US and Israel, is a legally binding agreement that calls on countries to implement safeguards against any threats posed by AI to human rights, democracy and the rule of law.

The treaty, called the Framework Convention on Artificial Intelligence, was drawn up by the Council of Europe, an international human rights organisation.

While AI is seen as having the potential to radically improve public services and economic growth, there are worries that the technology could be used to erode key values.

According to the Council of Europe, the treaty is intended to “fill any legal gaps that may result from rapid technological advances”.

Recent breakthroughs in AI have triggered a rush to implement regulations that can mitigate the technology's potential flaws, but this has produced a patchwork of regulations and agreements. Last week's agreement is seen as an attempt to create a global framework that protects personal data, prevents discrimination, ensures safe development and upholds human dignity. It covers the use of AI by both public authorities and the private sector.

Companies or public bodies that use AI systems must assess their potential impact on human rights, democracy and the rule of law, and make that information available to the public. The public must also be able to challenge decisions made by AI systems and lodge complaints with the relevant authorities.

The UK is currently assessing how far the treaty's provisions are covered by existing legislation, and is drawing up a consultation on a new AI bill.

While the treaty was welcomed by many, others doubt that it will make meaningful headway in tackling real-world AI threats, seeing it as too general and unspecific about the harms to be addressed and the actions to be taken.

Despite this, demand for better AI regulation is growing, and many organisations are preparing to comply with future rules and reduce the risk of AI misuse more broadly by introducing greater auditability and reproducibility in how they handle data.