Using an AI “judge”, researchers found that it reached the same verdicts as the human judges at the European Court of Human Rights in almost four out of five cases involving torture, degrading treatment and privacy.
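The approach behind results like these is typically supervised text classification: a model is trained on the text of past cases and their outcomes, then asked to predict the verdict for unseen cases. The following minimal sketch in Python assumes a TF-IDF bag-of-words representation and logistic regression; the toy case summaries, labels and model choices are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Minimal sketch: verdict prediction as binary text classification.
# Toy data and model choices are illustrative only, not the study's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical case summaries labelled 1 (violation found) or 0 (no violation).
cases = [
    "applicant held in degrading conditions without access to counsel",
    "complaint about surveillance dismissed as proportionate and lawful",
    "prolonged solitary confinement amounting to inhuman treatment",
    "data retention found to pursue a legitimate aim with safeguards",
]
labels = [1, 0, 1, 0]

# Pipeline: turn case text into n-gram features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(cases, labels)

new_case = "detainee denied medical care in an overcrowded cell"
print(model.predict([new_case]))        # predicted verdict class
print(model.predict_proba([new_case]))  # model confidence for each class
```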
Setting aside the fact that it disagreed with the human verdict in the other 20 percent of cases, it was interesting to see Professor Mike Hinchey, president of IFIP, the global ICT professional association, raise the question “How do we trust AI systems?” while presenting at the ITU World Telecommunication Standardisation Assembly (WTSA-16) in Tunisia.
Professor Hinchey, who has worked with AI systems for over 15 years, says that in an era of rapid developments in driverless cars, AI-enhanced shopping sites and algorithmic trading on financial markets, more and more important decisions are being made without human involvement.
“The challenge,” according to the Professor, “is how to trust those decisions, particularly in a situation where machine learning means that the computer might make a completely different decision from one context to another. If we are going to empower machines to act on our behalf, then we must be clear about the constraints we want to enforce by specifying a range of behavioural rules we will accept and those we won’t.”
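One concrete way to read that requirement is as a run-time guard: before an autonomous system executes an action, it checks the action against an explicit list of rules we will and will not accept. The sketch below, in Python, is one possible shape for such a guard; the Action fields, rules and thresholds are hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch of enforcing behavioural constraints on an autonomous agent.
# The action fields, rules and thresholds here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "trade", "brake", "lane_change"
    magnitude: float

def within_constraints(action: Action) -> bool:
    """Check a proposed action against explicit accept/reject rules."""
    rules = [
        # Illustrative spend cap on algorithmic trades.
        lambda a: not (a.kind == "trade" and a.magnitude > 10_000),
        # Illustrative rule: change at most one lane at a time.
        lambda a: not (a.kind == "lane_change" and a.magnitude > 1),
    ]
    return all(rule(action) for rule in rules)

def act(action: Action) -> None:
    """Execute the action only if every behavioural rule accepts it."""
    if within_constraints(action):
        print(f"executing {action}")
    else:
        print(f"refused {action}: violates a behavioural rule")

act(Action("trade", 500.0))     # allowed
act(Action("trade", 50_000.0))  # refused by the spend cap
```

Keeping the rules as explicit, inspectable predicates, rather than folding them into the learned model, is what makes the constraints auditable even when the machine’s decisions vary from one context to another.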
While recognising the enormous investment being made in AI systems like driverless cars, he suggested that the jury was very much still out on whether these systems would ever be fully implemented.
“Technology like driverless cars only really works if everyone applies the rules consistently. Robots will, but humans might not, and where humans and robots interact, problems could arise because of the different ways in which they interpret the rules,” he explained.
One thing we can all agree on, though, is that more research is needed to understand the nuances of AI systems as their influence in our lives continues to grow.