The agreement is intended to encourage companies to create AI systems that are "secure by design" – designed and deployed in ways that keep customers and the wider public safe from misuse.
However, as with so much that concerns AI, this agreement is non-binding and amounts to a list of recommendations – on everything from protecting data to vetting software suppliers. It nevertheless puts security at the heart of AI development.
The 18 countries that signed on to the new guidelines include the United States, Britain, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.
Many experts are concerned that AI companies are moving too fast in their pursuit of artificial general intelligence, so this attempt to make AI more secure at the design stage should be welcomed.
However, the bigger question, raised by the recent 'crisis' at OpenAI, is surely what AI is being developed for – and answering that requires much greater transparency from board level down if we are to have confidence in the development of safe and secure AI.