The US chipmaker has benefited from having taken a punt on building the semiconductors that power artificial intelligence technology, and has been described by Thomas Hayes, the chair of Great Hill Capital, as "the poster child for AI at the moment". The rush to buy the company's stock came after it reported that it expected to bring in $11bn in revenue in the upcoming quarter, $4bn more than Wall Street expected.
But the boom in AI has been accompanied by a growing number of technologists raising concerns about the technology's impact, warning that AI should be treated as a societal risk and given the same priority as pandemics and nuclear war. Their concerns are real and need to be addressed sooner rather than later.
Leaders of the G7 nations have called for the development of technical standards to keep AI "trustworthy", covering areas such as governance, copyright, transparency and the threat of disinformation. The EU technology chief, Margrethe Vestager, has urged the US and EU to push the AI industry to adopt a voluntary code of conduct within months, to provide safeguards while new laws are developed.
And while leading figures have called for restrictions on the rapid development of AI, research by the Prospect trade union found that almost 60% of people would like to see regulation of generative AI technologies such as ChatGPT in the workplace.
So, what can regulators do? The economic and societal benefits of AI are immense, but it is a rapidly developing technology about which we know less than we should. Should there be a pause in AI development, or a limit on how much computing power can be used to train AI?
Other industries have been regulated, so why not AI?
The problem is that AI is a global issue, concerning the US, China and the rest of the world alike, and it can be used anywhere and everywhere, from helping to create new drugs to better managing railways.
Ideally, it requires a global response, but that is unlikely. Perhaps as we stumble towards greater regulation of this game-changing technology (and it is coming), we could start by requiring greater transparency from the AI companies themselves, to better understand what they are developing and how.